GeoNetwork from Scratch II : Attack of the IDEs

We have already seen how to compile and run a basic GeoNetwork instance. Although we know that real developers will probably skip this step too, for new developers in GeoNetwork it will be a relief to have an IDE to work with. I know that many GeoNetwork developers use NetBeans or IntelliJ, but as I am used to working with Eclipse, that’s what we are going to explore in this post.

First of all: Eclipse’s support for Maven projects gets better with each version. So, to avoid headaches, just download the latest Eclipse available. There are plenty of tutorials on installing Eclipse, so I won’t stop here to explain how to install and run it. I will just assume you know how to do it.

Running GeoNetwork from Eclipse is very easy. Just right-click on the Package Explorer view and import the folder you already cloned in the last post as a Maven project:

Import As Maven Project

There is still something Eclipse does not handle right about GeoNetwork: there is a classes folder that Eclipse tends to misconfigure, so we have to remove it as a source folder. Go to the “web-app” project, right-click on src/main/webapp/WEB-INF/classes and select Build Path > Remove from Build Path.

Then, completely remove the folder from the source tree: right-click on the folder and select Delete. Don’t worry, it’s git, you can recover it later. Yes, you are sure you want to delete folder “classes”.

Now, update the Maven configuration by right-clicking on the “web-app” project and selecting Maven > Update Project…

update project

Once this finishes, you can restore the folder we previously removed. Go to the “web-app” project, right-click on src/main/webapp/WEB-INF and select Replace With > HEAD Revision. Yes, you are sure.
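If you prefer the command line, the same restore can be done with plain git. A minimal sketch, assuming the “web-app” project corresponds to the web/ module of your clone (adjust the path to your checkout):

    # Restore the deleted "classes" folder from the last commit
    cd /path/to/geonetwork        # wherever you cloned the repository
    git checkout -- web/src/main/webapp/WEB-INF/classes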

Congratulations! You are ready to use Eclipse to modify GeoNetwork.

But wait, how do we run GeoNetwork inside Eclipse to be able to debug?

We have several approaches here. Remember the jetty command to run GeoNetwork from the console? It is also available inside Eclipse (right-click on web-app and select Run As > Maven Build), and you can add some Maven options so that a debugger can attach to your running code. You can also set up a Tomcat server inside Eclipse and run GeoNetwork from it. This second option is easier for beginners, so that’s what we are going to do now.
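Before we move on, here is a minimal sketch of that first approach from a console. The jetty goal is the one from the previous post, the module and exact options may differ between GeoNetwork versions, and port 5005 is just an example; the JDWP settings are the standard JVM remote-debugging options, nothing specific to GeoNetwork:

    # Open a remote-debugging port before launching jetty
    export MAVEN_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
    cd web          # assuming the jetty plugin is configured in the web module
    mvn jetty:run

In Eclipse you would then attach to it with a Remote Java Application debug configuration pointing at localhost:5005. Now, back to the Tomcat option.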

First, you have to create a Tomcat server inside Eclipse. So, search for the “Servers” tab and right-click on it. Select New > Server. You will see a window offering different types of servers. We will select the Tomcat v7.0 Server. You probably won’t have any server runtime environment configured for it, but you can “Add…” a new one. There are many tutorials[1][2] for this, so we won’t stop here.

In the following window, you can select which applications to run. Obviously, choose the one called “web-app” and click Finish.

Now you will have a new server in the Servers tab. Select it (left-click) and click on the green arrow at the top of that tab. You will see all the output of GeoNetwork starting up in the “Console” tab. Once it has started, you can access GeoNetwork the same way as before, using http://localhost:8080/geonetwork
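If you want to double-check from a terminal that the application is really up (assuming you have curl installed and kept the default port), something like this should answer once startup has finished:

    # Ask GeoNetwork for its headers only; a 200 or a redirect means it is up
    curl -I http://localhost:8080/geonetwork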

Have fun customizing GeoNetwork!

JIIDE 2015 – Sevilla

Last week I attended the JIIDE conference, which took place here in Sevilla. This is the official conference for both the Portuguese and Spanish spatial data infrastructures. The presentations were diverse and rich in content, and there were working groups for INSPIRE and conformance running in parallel.

You could see some trends in how SDIs are evolving across the Iberian Peninsula. Geograma explained to us that hiding data behind paywalls or registration walls makes us less compliant. But on the other hand, maybe it doesn’t matter, because as José Fernández (IECA) showed us, data is becoming more and more open and free. Why should someone pay for data generated by a public administration? It has already been paid for by taxes, and a paywall is just another obstacle in the way of generating added value from the data. And on top of all this, every country has a different payment and access system, so it is virtually impossible to query the same data easily across different countries, which was one of the goals of INSPIRE.

Transparency, interoperability, quality, conflation… keywords throughout the conference. As an example of conflation and reusability, IECA was created by merging the geospatial information department and the statistics department of the government of Andalucía. This allows them to geolocate statistical data that, once the privacy details are removed, can be easily shared. Creative Commons is the main license for all their products.

At the national level, we now have the CNIG, which unifies all the data from Spain and allows us to download (or buy) it. Here, the map is not the central issue, but just another product you can use.

And still, there are many things INSPIRE has yet to solve. There are a lot of abstract requirements that the nodes are not sure how to meet. All data has to have quality metadata associated with it, but is there any minimum quality required for the data itself? How close should the scales be? What precision? And above all, how are the different public administrations supposed to handle all this without specific financing from Europe? Or, even worse: why does Europe ignore the conclusions reached by several working groups in different countries? Why reinvent the wheel?

There was another subject running throughout the conference: why do SDIs have fewer users than open data portals? Is it because of the type of data? Is it because we don’t focus on usability? Why do users choose data of worse quality (or data that is not government certified)? It looks like we have to work more on usability and user interaction.

Should SDIs focus on developing applications around the data? Or should they just focus on being a data repository that third-party companies can query to generate added value? Should we merge with the open data portals, even if that means losing part of the focus on spatial data?

Javier López explained the problem of persistent identifiers. We have to assume that the entities generating data will not be persistent. But their data should survive those entities, and we should be able to trace back who created the data and who has been maintaining it. How do we achieve this without becoming too dependent on some specific platform? How do we create a standard that survives through the years?

We were also visited by Rodrigo Barriga Vargas (IPGH), who told us about GeoSUR, an initiative to create, conform and share quality spatial data across the Americas. He told us how lucky we are to have INSPIRE as a governmental initiative forcing us to follow standards.

But the best thing was seeing how GeoNetwork is being used more and more, and how we have happy users advocating that it is the only SDI that makes sense.