Lan, Erich and Rajesh attended the 2016 iRODS User Group Meeting (http://irods.org/ugm2016/) in Chapel Hill, NC, where Rajesh gave a talk and live demo titled "Integrating HUBzero and iRODS: Geospatial Data Management for Collaborative Scientific Research". Lan and Rajesh attended the advanced iRODS user training on the first day of the meeting, where the features of the new (yet to be released) iRODS v4.2 were demonstrated.
Perhaps the most directly applicable feature for us is the new pluggable rule engine architecture, which greatly expands the set of control points, termed "dynamic policy enforcement points (PEPs)", at which logic can be inserted; more importantly, rule engines can now be written in Python. This opens up the possibility of writing all of our processing in the Python file containing the rule engine plugin itself, or importing it from a separate file. That would let us avoid writing microservices in C++ and building and installing them as RPMs, simplifying development considerably, although compiled C++ microservices still hold an efficiency advantage over the Python rule engine. iRODS v4.2 also ships with the latest versions of Clang and CMake, enabling us to make use of the newer features of C++14 in any C++-based plugins we develop.

Another new feature that will probably be relevant to us down the line, when we have more data to manage and archive, is composable resource hierarchies. iRODS resources (representing the actual physical storage) can be composed into tree structures in several master-slave configurations, enabling rules and plugins to finely control replication and archiving across those resources.
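To illustrate the shape of a Python rule engine handler, here is a minimal sketch of a dynamic PEP. It assumes the naming convention (pep_&lt;operation&gt;_pre/post) and the (rule_args, callback, rei) signature used by the iRODS Python rule engine plugin; the log message and the audit_put helper are hypothetical, not from any of our actual code:

```python
# Minimal sketch of a dynamic PEP handler for the iRODS 4.2 Python rule
# engine plugin. iRODS invokes functions by name when the corresponding
# policy enforcement point fires; no registration code is needed.

def audit_put(callback):
    # Hypothetical helper: record an upload in the server log. Keeping
    # the logic in a plain function makes it easy to unit-test outside
    # of a running iRODS server.
    callback.writeLine("serverLog", "data object uploaded")

def pep_api_data_obj_put_post(rule_args, callback, rei):
    # Dynamic PEP fired after each successful data-object upload; the
    # callback object exposes the server's microservices, used here
    # only to write a line to the server log.
    audit_put(callback)
```

On a real server this would typically be added to the rule engine plugin's rule base file (core.py under the iRODS configuration directory), after which every put triggers the handler automatically.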
While quite a few talks focused on improvements to the storage underlying iRODS, two categories of talks were especially interesting: applications and new iRODS clients. We had hoped to start using the NFS-iRODS client soon to replace our FUSE mount solution, but it appears that further testing is required before we can reliably switch to NFS mounts. Quite a few client applications (native to OS X and Linux) were demonstrated that could help users transfer files from their local machines to our iRODS storage. The main challenge there is still automating the process so that users need not be aware of our iRODS connection settings or nurse the connection during transfer. Perhaps the most interesting of the application talks was about CyVerse, which seems to be pursuing goals similar to the GABBS project (i.e., building blocks supporting compute, search, metadata and sharing). CyVerse grew out of the iPlant initiative, and we had some interesting chats with the CyVerse director on potential collaborations with groups who are interested in geospatial data management.
We also had some interesting discussions with Arcot Rajasekar from DICE-UNC, one of the developers of SRB, the precursor to iRODS. In particular, we talked about their integration of OPeNDAP with iRODS and confirmed that our current design is similar to what they had implemented. We also continued discussions on interoperating with the HydroShare project by using iRODS federation to enable data at one site to be used with tools at the other (MyGeoHub, in our case). We have an initial design in mind for the interoperation, but will have to both test its feasibility and ensure that HydroShare is satisfied with our access control solution (given our non-standard access via FUSE and bind mounts).