#ENVRiD: Integrating ORCID iDs in Environmental Research Infrastructures

THOR – ENVRIplus Bootcamp


On March 28 and 29, representatives from over twenty environmental research infrastructures gathered at Aalto University, Finland, to discuss ORCID integrations and more.

Tweet: Starting #ENVRiD!

After introductions to the organising projects (THOR and ENVRIplus) and a general introduction to ORCID, Markus Stocker (PANGAEA) kicked off the series of presentations on ORCID integrations with a live demo of how to connect your PANGAEA account with ORCID and log in with your ORCID iD. The demo immediately showcased one of the key benefits of integrating ORCID within your infrastructure: through linking with ORCID, PANGAEA automatically receives the information you have given ORCID permission to share, in particular your ORCID iD. This enables automated cross-linking of data DOIs and contributor ORCID iDs, and sharing of that link information with PID infrastructure, specifically ORCID. Xiaoli Chen’s (CERN) presentation on ORCID integration at CERN also showed the benefits of integrating ORCID within the high energy physics community: for example, how to deal with a publication with no fewer than 2,853 authors!
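For developers curious what such a sign-in integration involves: ORCID exposes a standard OAuth 2.0 authorization-code flow, and the `/authenticate` scope returns the user's validated iD. The sketch below shows the two client-side pieces; the `client_id` and `redirect_uri` values are placeholders, and a real integration would exchange the returned one-time code for a token server-side.

```python
from urllib.parse import urlencode

# ORCID's production OAuth endpoints (a sandbox at sandbox.orcid.org also exists).
ORCID_AUTHORIZE_URL = "https://orcid.org/oauth/authorize"
ORCID_TOKEN_URL = "https://orcid.org/oauth/token"


def build_authorize_url(client_id: str, redirect_uri: str) -> str:
    """Step 1: redirect the user to ORCID, where they grant permission.

    The "/authenticate" scope requests only the user's authenticated iD
    (and public name), which is exactly what a login integration needs.
    """
    params = {
        "client_id": client_id,        # issued when registering the integration
        "response_type": "code",
        "scope": "/authenticate",
        "redirect_uri": redirect_uri,  # must match the registered callback
    }
    return f"{ORCID_AUTHORIZE_URL}?{urlencode(params)}"


def parse_token_response(payload: dict) -> tuple[str, str]:
    """Step 2: after the callback, the server exchanges the one-time code
    at ORCID_TOKEN_URL; the JSON reply carries the validated iD and name."""
    return payload["orcid"], payload.get("name", "")
```

Because the iD arrives in the token response, the infrastructure never has to trust a hand-typed identifier, which is what makes the automated cross-linking described above reliable.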

Photo: Markus Stocker welcoming participants in Helsinki

The ORCID integration talks continued with representatives from two environmental research infrastructures, namely ICOS and Argo, as well as the EGI e-Infrastructure. While these infrastructures have not started fully integrating ORCID within their systems, the talks gave an overview of their current plans.

  • At ICOS, ORCID has been integrated in Carbon Portal user profiles, and the team is working to implement the integration following best practice (i.e. obtaining validated iDs from ORCID). As only a few people currently have ORCID iDs, the main challenge is to motivate people to create an ORCID account and link their user profiles.
  • After instructions on how to cite data were included in Argo’s user manual, more people have started to assign DOIs to their datasets. DOIs make citing much more efficient, but at present the DOIs used do not provide credit to the individual contributors, since Argo is listed as the single author of datasets. Argo has identified ORCID iDs as a tool to list and credit individual contributors. Argo’s metadata describing the different roles of contributors to the dataset will be pushed to DataCite. DataCite will then push the information to ORCID records automatically.
  • At EGI, users with ORCID iDs can use their iD to login to the EGI Checkin service, which enables them to get authenticated access to EGI resources and tools. Further plans for integration, which are already in development, include linking to articles and datasets.
Slide: Argo: Auto-update ORCID record through DataCite DOIs
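Concretely, the hand-off Argo describes runs through the DataCite metadata schema: a dataset's DataCite record can carry each contributor's ORCID iD as a `nameIdentifier`, which is what lets DataCite push the link on to ORCID records automatically. A minimal illustrative fragment in DataCite kernel-4 style (the name and role are examples, using ORCID's documented sample iD):

```xml
<creators>
  <creator>
    <creatorName nameType="Organizational">Argo</creatorName>
  </creator>
</creators>
<contributors>
  <contributor contributorType="DataCollector">
    <contributorName nameType="Personal">Carberry, Josiah</contributorName>
    <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="https://orcid.org">
      https://orcid.org/0000-0002-1825-0097
    </nameIdentifier>
  </contributor>
</contributors>
```

This pattern keeps Argo as the dataset's author while still crediting the individuals who contributed to it.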

After lunch, Tom Demeranville (ORCID) explained more about the ORCID API and ORCID’s Collect & Connect program. Laura Rueda (DataCite) stressed the importance of complete and interoperable metadata (it even got its own slide! See below), and Kristian Garza (DataCite) showed the importance of complete metadata in his demo of retroactively claiming published datasets to ORCID. Other topics discussed in the afternoon included the Scholix framework and DataCite event data.

Slide: Metadata!

Day one ended with a discussion on the motivations for research infrastructures to integrate ORCID iDs in their workflows. The main motivations are attribution and disambiguation. Participants also mentioned the benefits of automated workflows and interoperable research systems, whereby information is pushed to and linked within different systems and repositories automatically, and the fact that some publishers now require ORCID iDs. Some of the biggest challenges to integration, however, were identified as social rather than technical: getting people to register, and making sure they actually use their iDs, was cited as one of the biggest barriers. For journal articles, using an ORCID iD is becoming common practice, but people still need to be encouraged to use their iD when uploading datasets as well. Funding and time constraints for building the integration itself also pose a challenge.

Photo: Hands on with the ORCID API

On day two, attendees split into two breakout sessions to attend either the infrastructure developers’ or managers’ track. In the developers’ room, Tom Demeranville took the participants through a hands-on session on the ORCID API. The managers’ track featured more general presentations on new developments within the PID community, such as the Organisation Identifier project, dynamic data citation, and PIDs for instruments. As these are all new initiatives, more work and discussion are needed to take into account the requirements of different stakeholder communities. For example, within the environmental research community, more discussion is needed on how to describe instruments. Should DOIs be used? Or is it better to use serial numbers for physical objects? And what happens when organisations use the same instruments? One suggested solution was the adoption of a form of ISO standard recognised across different countries. For dynamic data citation, there is no standard solution in place yet.

The same applies to environmental research infrastructures that want to take their ORCID integrations forward. A short exercise showed that most RIs think that pursuing ORCID integration is urgent. And as the closing summary of the participating infrastructures’ intentions towards ORCID integration shows, most RIs are either thinking about it or are definitely going for it this year. Much work remains to be done, but we are confident that at #ENVRiD Part Two we will see progress toward such integrations!

Much more than infrastructure: working together to connect research

Crossref/THOR Outreach Meeting, Warsaw, Poland

Monday, 24 April 2017

Digital Humanities Centre at the Institute of Literary Research, Polish Academy of Sciences

This outreach meeting aims to explore how the research community can work together to help connect research and improve discoverability of content – publications, data and more.

Representatives from Crossref and Project THOR partners (ORCID, DataCite and the British Library) will introduce and provide updates on their initiatives and services (and how they work together). We will also have panelists from Polish institutions join us to discuss how the research landscape looks for them, and how they might work with some of the services discussed.

The day aims to provide a deeper understanding of foundational scholarly infrastructure, but also to have the opportunity to discuss how that can be used in publisher and researcher workflows.

We welcome editors, publishers, librarians, researchers, funders and the wider community to come share their thoughts and ideas. There will be lots of time for discussion and questions, so please join us and register here.


08:30-09:00 Registration & coffee
09:00-09:10 Welcome from organisers
09:10-09:30 Opening remarks

Professor Łukasz Szumowski, Under-Secretary of State, Ministry of Science and Higher Education

Professor Paweł Rowiński, Vice-President of the Polish Academy of Sciences

09:30-11:00 Introduction to Persistent Identifiers

Crossref, ORCID, DataCite, Project THOR, PID Interoperability

11:00-11:20 Coffee
11:20-12:50 Persistent Identifier Services

Crossref services, THOR partner services, PID Integrations. Discussion

12:50-13:50 Lunch
13:50-15:20 What’s happening: Plans & Applications

  • Industry initiatives and how to get involved with Project THOR partners
  • Polish case-studies
15:20-15:40 Coffee
15:40-17:00 Panel Discussion: Let’s Link Research! Persistent Identifiers for Polish Scholarship.

Moderator: Dr. Maciej Maryl, Digital Humanities Centre at the Institute of Literary Research, Polish Academy of Sciences
Dr. Marta Hoffman-Sommer, Interdisciplinary Centre for Mathematical and Computational Modelling, University of Warsaw, RepOD Repository for Open Data, OpenAIRE NOAD for Poland

Dr. Eng. Przemysław Korytkowski, West Pomeranian University of Technology in Szczecin, member of the Committee for Evaluation of Scientific Units at the Ministry of Science and Higher Education

Dr. Habil. Emanuel Kulczycki, Adam Mickiewicz University in Poznań, chairman of the Specialist Team for the Evaluation of Scientific Journals at the Ministry of Science and Higher Education

Rachael Lammey, Member & Community Outreach, Crossref

Laura Rueda, Communications Director, DataCite

Josh Brown, Director of Partnerships, ORCID

17:00 Closing remarks




Challenges of Measuring PID Adoption

This has been cross-posted on the ORCID blog.

The THOR team is hard at work helping forge the path to sustainable persistent identifier (PID) services. As with any long-term goal, a bit of self-reflection is helpful for tracking your progress, considering your successes, and psyching yourself up to tackle challenges along the way. In the case of a project like THOR, we can help this self-reflection along by developing a structure to help us properly measure our success as we go. But this is often tougher than you might think.

In the early days of PID services, it was fine to be concerned only with uptake, since the priority was to get the word out. While we still have some work to do there, PID services have now matured to the point that we can no longer be satisfied with simply “getting the numbers up.” We need to tailor our messages in order to drive further innovation towards the interoperable future that THOR and our partners dream of. Having better information about the underlying motivations for adopting PIDs, and about who might be ready to do so, will help us drive the creation of services that make the whole system better. To further this warm and friendly mission, we need cold hard facts. So how do we go about finding those facts? And how do we turn them into something useful and, quite frankly, a bit less prickly?

What can be measured?

The first step in evaluating our progress was to set objectives that are actionable and measurable. Though it’s tempting to set strict performance targets, doing so just sets you up for failure. If you define success as selling 50 widgets and you only sell 48, then, by your own definition, you’ve failed. In THOR’s case, our driving purpose is infrastructure improvement, so we’re more interested in observable trends than in concrete targets. Developing key performance indicators (KPIs) is helpful here. Remember that an indicator is just a way to consider trends (e.g. “number of widgets sold”); it isn’t itself a target (e.g. 50 widgets).
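The distinction between an indicator and a target can be made concrete in a few lines. This tiny sketch (ours, not THOR's) derives a trend from monthly counts; note that it reports growth rates to watch, with no pass/fail threshold built in.

```python
def growth_indicator(monthly_counts: list[int]) -> list[float]:
    """Period-over-period growth rates for a KPI such as
    "number of widgets sold": a trend to watch, not a target to hit."""
    return [
        (curr - prev) / prev
        for prev, curr in zip(monthly_counts, monthly_counts[1:])
        if prev  # a zero baseline has no defined growth rate
    ]
```

For example, `growth_indicator([100, 110, 121])` yields `[0.1, 0.1]`: steady 10% growth, which tells you something about the trajectory whether or not any particular sales figure was hit.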

How should it be measured? (With which indicators?)

The next step was to determine how to measure what we want to measure. The goal here is to select indicators that are valuable as well as meaningful. “Valuable” means that knowing the indicator’s status will help us to make a decision. “Meaningful” means that we understand what the indicator is actually tracking. If the trend line associated with our chosen indicator goes up, will we know what that means for us, and will we know how to react?

Part of the difficulty of selecting indicators in this way is that the most meaningful and valuable information for you might not be immediately available. When THOR first started down the indicator path, we just wanted easily gatherable quantitative measures; we weren’t looking to take on any complex user studies. However, some of the information we wanted wasn’t available, either because it wasn’t being tracked on a regular basis or because gathering it ourselves would have been a manual process we weren’t yet willing to take on.

How should it be measured? (Tool or no tool?)

Once you know what your objectives are and which indicators will help you track your progress towards them, you need a convenient way to monitor it all. Fancy tools may not be necessary; in fact, most of the time they probably aren’t, depending on which indicators matter to your particular flavour of success. But we wanted to demonstrate some of the possibilities of having PID measures ready to aggregate (and, if we’re honest, we do like fancy) so we developed a dashboard to keep everything in one place. (Read more about our process in our report.) Creating the dashboard was a good exercise in establishing what could be measured and how. It also gave us a chance to explore what meaningful metrics might be. For instance, we can see that PID uptake is on the rise, and we can see some information about the metadata associated with those PIDs, but this doesn’t give us any insight into causal relationships, or tell us precisely why the trend is happening or exactly who is involved.

Because we’re all about meaningful data, these adventures in measurement have led the THOR team to identify gaps in the available metrics surrounding PID service adoption and to consider which additional indicators might be useful for future work in the PID research space. We’ve now embarked on a more detailed gap analysis that will lead to a study of some of these missing measures. Since our goal is to drive PID service adoption, we’ve identified disciplinary coverage and geographic distribution as our most promising themes to pursue. We are now collecting the data we need to analyze PID adoption in X disciplines and Y countries – a full report will be available later this year.

Moving forward

So what have we learned throughout this process? First and foremost, not everything is as concrete as you might want it to be. When you’re dealing with humans and human behaviours, things get squishy. Second, since we’re only monitoring existing trends based on factors we don’t necessarily control, some information available to us will remain just “good enough” until others can do more detailed work to either improve the data or flesh it out. Our job for the remainder of the THOR project is to point out what would be most useful to know about interoperability, so that it can be studied.

The PID field is still evolving and has a lot of growth and changes left in store. Some potentially valuable information requires further study to tease out. Our service adoption study, beginning with the gap analysis, will help us make a start on that research, and we hope to gather some useful information that can set the stage for future work. We’ll also need help from the wider PID user and integrator community to improve existing metadata and to help us consider meaningful metrics.

As always, if you have questions or comments about THOR, please get in touch.

Giving Credit for Data with Claiming Services

Researchers demand credit for the work that they do. While there are well established practices and services in place to give credit for traditional publications, these are sorely lacking for the full range of research artefacts, including data and software.

THOR partners have been busy developing data claiming services. The results are published in our latest report, ‘Services that Support Claiming of Datasets in Multiple Workflows’ (10.5281/zenodo.290649), where you can read about the successful implementation of claiming services in the databases and services of disciplinary repositories as well as PID infrastructures of several THOR partners.
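The report's DOI can itself be dereferenced by machine: doi.org supports content negotiation, so asking for DataCite's JSON media type returns the record's metadata instead of a redirect to the landing page. A minimal sketch (the call in the `__main__` block requires internet access):

```python
import json
import urllib.request


def datacite_metadata_request(doi: str) -> urllib.request.Request:
    """Build a doi.org request that asks for DataCite JSON metadata
    rather than the usual redirect to the item's landing page."""
    return urllib.request.Request(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/vnd.datacite.datacite+json"},
    )


if __name__ == "__main__":
    # Fetch the metadata of the THOR claiming-services report cited above.
    req = datacite_metadata_request("10.5281/zenodo.290649")
    with urllib.request.urlopen(req) as resp:
        record = json.load(resp)
    print(record.get("titles"))
```

The same mechanism works for any DataCite-registered DOI, which is one reason complete metadata matters: whatever the record carries is what every downstream service sees.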

The report summarises progress on helping researchers and other contributors associate research artefacts with their ORCID records, a process known as claiming. The dataset claiming process involves creating, maintaining, and sharing information about the relationship between researchers and datasets.

We describe our experience implementing claiming workflows at five organisations, identifying some of the shared challenges as well as the unique issues each organisation faced developing and successfully deploying the claiming process into a live operational production system.

This is an important advance in enabling unambiguous attribution and credit for research.

While technical challenges remain, such as synchronisation of claims, technical capabilities have substantially improved. The human and social challenges are now coming to the fore: we must ensure that claiming services are widely adopted and used across the research communities.

ORCID Integrations in Environmental Research Infrastructures

A THOR-ENVRIplus Bootcamp

Are you working in a technical or leading role within an Environmental Research Infrastructure? Join us at Aalto University in Finland on March 28-29, 2017 to learn more about ORCID integrations and discuss best practices with colleagues from other Environmental Research Infrastructures.

Project THOR supports seamless integration between articles, data and researchers across the research lifecycle. ENVRIplus brings together Environmental and Earth System Research Infrastructures, projects and networks to create an interoperable cluster of Environmental Research Infrastructures across Europe. These two H2020 projects are joining forces by organising a bootcamp focused on ORCID integrations in Environmental Research Infrastructures.

The two day event offers a unique opportunity for knowledge exchange between persistent identification experts from THOR partner organisations (in particular ORCID and DataCite) and the managers, as well as developers, of Environmental Research Infrastructures, in particular ENVRIplus partners.

The bootcamp has a strong emphasis on ORCID integrations and will touch upon the specific challenges Environmental Research Infrastructures are facing with regard to such integrations. The bootcamp also focuses on the technical aspects of implementing ORCID integrations, so in addition to infrastructure managers, we strongly encourage developers to participate as well.

On March 28, we will give an introduction to ORCID and related concepts, and demonstrate various types of integrations in existing systems. The second day (March 29) is structured in two separate tracks: Research Infrastructure Developer and Research Infrastructure Manager. We still encourage participants to provide us with input on bootcamp topics; you can enter suggestions when you register for the bootcamp here. The preliminary agenda topics are included below:

Tuesday March 28

  • Introductions to ORCID and concepts
  • ORCID Integrations in THOR partner systems
  • ORCID Integrations in environmental research infrastructures
  • Challenges and opportunities at the different research infrastructures
  • Q & A and discussion: ORCID integration, Metadata, identification of co-authors, crosslinking PIDs etc.

Wednesday March 29

Parallel track 1: Hands-on exercises for RI Developer

  • Coding ORCID integrations
  • Mining ORCID data dump
  • ORCID iDs in research infrastructure metadata (e.g. SensorML)
  • PID linking and link information exchange
  • ORCID and the ENVRI Reference Model

Parallel track 2: Discussion/presentation sessions for RI Manager

  • Crosslinking data, author, publication
  • PIDs for instruments, platforms, deployments
  • Dynamic data identification
  • Cost of integrations
  • PIDs in workflows involving research infrastructures and e-infrastructures

The THOR-ENVRIplus team is looking forward to seeing you in Finland!


Hasta la Vista, THOR Bootcamp

With local support from THOR ambassador Eva Mendez, the first edition of the THOR Bootcamp was successfully carried out in Madrid, at Universidad Carlos III de Madrid on November 16-18. The Bootcamp is part of THOR’s outreach effort to engage and train local scholarly communication communities to further adoption of PID services. The full set of slides used can be found on the THOR Knowledge Hub.

THOR colleagues from different partner organizations and guest speakers from local research organizations joined forces to present a full curriculum on PID topics, from existing tools and services to technical and policy implementation. The event attracted more than 130 registrants in total and yielded valuable experience for both the attendees and the THOR project.

The Bootcamp consisted of three modules, tailoring content to different audiences. The first half-day was organized as an integral part of research training for Ph.D. students and other young researchers at UC3M, focusing on Open Science recommendations and the incorporation of PIDs into existing research workflows. Students came from different disciplinary backgrounds and brought with them distinct questions; the Bootcamp provided a great opportunity for us to engage the young researchers’ community and address their concerns directly.

“I consider the instruments presented along with the seminar an extremely powerful way to collect, share, exploit and advertise the work of a researcher in a way which is mostly new and free from older constraints. The value of the research itself is so enhanced, and collaborations are made way easier in benefit of the results.”

— Rocco Bombardieri, Ph.D. student at UC3M

Ph.D. Students attending THOR Bootcamp at UC3M

The second day was reserved for local information professionals and research data service stakeholders (librarians, researchers, research administrators and policy makers). Their day followed an intense schedule consisting of talks and a mini panel with service implementation experience by ORCID, DataCite and CERN.

Local information professionals at THOR Bootcamp General Day at UC3M

The final half-day offered a more technical tutorial. The self-contained programming module enabled participants to build a metrics dashboard that visualized data interactively, based on the technology used in the THOR dashboard. As a hands-on session designed for non-technical and technically savvy attendees alike, it was great to see how people from a variety of technical backgrounds approached the tutorial and contributed to the ensuing discussion.

Instructors of the Hands-on Day, Ioannis Tsanaktsidis (left) and Kristian Garza (right).

We aim to establish ties with research organizations and institutions by providing tailored PID content via the Bootcamp series — two more Bootcamps will be held in March and May next year (2017). Stay tuned to find out if we are coming to your neighborhood soon! Or better yet, if you want to organize your own Bootcamp, sign up to be an ambassador and we will provide all the materials that are ready to be reused, plus event planning tips for bringing your local community up to speed with PIDs.

THOR at PIDapalooza

If November taught us anything, it’s that open identifiers clearly do deserve their own festival. On 9th and 10th November 2016, people from all over the world gathered in Reykjavik to share PID stories, demos, use cases, victories, horror stories, and new frontiers at PIDapalooza, the first conference dedicated to PIDs. The THOR team travelled to the country of glaciers and volcanoes to talk about project identifiers, persistent identifiers for instruments, PIDagogy and measuring PID adoption.

PIDs for Projects

Martin Fenner (DataCite) and Tom Demeranville (ORCID) presented their work on project identifiers to a full house. They proposed that project IDs should be used to link participants, outputs and funding. But which identifiers are most suitable to describe projects? That was left open for discussion, a discussion that quickly turned heated. What, even, is the exact definition of a project? What would persist if the project ends? Would researchers be willing to share the information needed for the project ID? How would we describe the metadata, given that a project does not have a publication date? Clearly more research needs to be done to answer these important questions. Keep an eye out for the announcement of a THOR webinar on project identifiers, to be held in early 2017, in which we will resume this discussion.


Tom Demeranville leading the discussion on PIDs for projects

Persistent Identification of Instruments

Markus Stocker (PANGAEA) continued to explore new frontiers with a presentation on PIDs for instruments, instrument platforms and their deployments. Beyond enabling the unambiguous identification of these entities, and reference to them in articles and other research artefacts, Markus suggested that preserving metadata about these entities is critical for researchers to judge the fitness of observation data for reuse. He presented two examples of systems that already assign DOIs to deployments and platforms. A key challenge for the community is to decide on the metadata required for preservation.


Twitter activity during Markus Stocker’s presentation on PIDs for instruments

The Human Perspective

Building the technical infrastructure for open research was a clear theme at the conference, but how do we move from infrastructure to adoption? How do you teach, learn, persuade, discuss and grow the uptake of PIDs in everyday research practice? My presentation showcased the contribution that the THOR ambassador network is making to the human infrastructure around PIDs. By organising training activities within their own communities and sharing training materials, THOR ambassadors are helping to overcome the cultural barriers to PID adoption. These forms of collaboration are not only critical between THOR partners and ambassadors, but need to extend to other organisations and projects in order to integrate PIDagogy within the Research Data Science Curriculum. The importance of communication was also reiterated in other sessions on PIDagogy, in which participants designed infographics to promote and explain PIDs to different stakeholder groups. These materials will be developed further and made available for the community to (re-)use.


PIDapalooza crowd developing videos, infographics and quizzes for PID adoption

Challenges of Measuring PID Adoption

Salvatore Mele (CERN) discussed the challenges of measuring PID adoption. THOR has already developed a comprehensive dashboard, which shows ORCID and DOI uptake over time. But the ways in which we evaluate and interpret the results remain open for discussion. Salvatore explained that it is difficult not to get philosophical when talking about measurement of PID uptake. What information is missing? What do we not (yet) know? And what further steps can we take to know the unknowable?


Salvatore Mele explaining the THOR Dashboard

PIDapalooza definitely generated as many questions for THOR as we brought to the table. Participating and presenting at this event was a great opportunity for the team to exchange ideas and spark further research and future collaboration, complementing the PID frontiers already being explored by other organisations. And yes, THOR definitely believes identifiers deserve their own festival and is looking forward to PIDapalooza 2017!

Want to know more about PIDapalooza?

Identifying Interpretation

Scientific research infrastructures collect large quantities of values. Values are typically numbers that result from observation, experiment, or computing activities. For instance, plant scientists collect values that result from observing fluxes of carbon dioxide on the leaf-atmosphere boundary; high-energy physicists collect values that result from observing collisions of atomic particles; social scientists observe the interactions of human populations and individuals, collecting values both qualitative and quantitative.

The interpretation of values is central to research investigations. With interpretation, values are given meaning in the context of investigations. The result of interpretation activities is information, and research infrastructures integrate information into existing bodies of knowledge. Therefore, research infrastructures are knowledge infrastructures or “robust networks of people, artifacts, and institutions that generate, share, and maintain specific knowledge about the human and natural worlds” (Borgman, 2015).

At the International Workshop on Reproducible Science, we presented the possibility of aggregating machine readable information in Research Objects. We have proposed to extend the Research Object Model (Belhajjame et al., 2012) with a new Resource called Interpretation. Existing Resource types include Dataset, Software, and Paper. In our proposal, machine readable interpretations are additional research artefacts created in scientific investigations. Research Objects thus also capture the interpretations given to observational, experimental, or computational values in research investigations.

Just as with other research artefacts, interpretations could be unambiguously and persistently identified in a global way. DataCite digital object identifiers (DOIs) could be used to enable unambiguous reference to interpretations, and the resolution to human and machine readable interpretation descriptions. The approach would also enable the citation of interpretations, and thus the recognition of contributions toward interpretations. Cross-linking of interpretations with the ORCID iD of contributors would enable unambiguous attribution.

Between the numerical values and the abstract high-level information reported in scientific articles, the primary information obtained by interpretation is generally refined into secondary and tertiary information. For example, primary information about individual events occurring in the environment, such as event date, location and duration, may be refined into secondary information about the seasonal mean event duration, and into tertiary information about the statistical significance in difference of seasonal mean duration. Curating such information and its provenance arguably supports the reproducibility of scientific investigations, from numerical values to the natural language text in scientific articles. Advanced knowledge infrastructure may increasingly capture, curate, and provide access to such information, in standardised and unambiguously identified form.

Belhajjame, K., et al. (2012). Workflow-Centric Research Objects: A First Class Citizen in the Scholarly Discourse. In Proceedings of the ESWC2012 Workshop on the Future of Scholarly Communication in the Semantic Web (SePublica2012), Heraklion, Greece.
Borgman, C.L. (2015). Big Data, Little Data, No Data: Scholarship in the Networked World. Cambridge, MA: MIT Press. ISBN 978-0-262-02856-1

Persistent Identifier Services for the Humanities

Persistent identifiers (PIDs) are increasingly embedded in the services that researchers use every day, enabling unambiguous attribution of the full range of scholarly outputs. This makes it easier for data producers and researchers to get credit for their contributions; for data centres, universities and funders to track the impact of the research they facilitate; for publishers to incorporate data into scholarly writing; and for researchers to discover and cite data through clear provenance of information and ideas. In short, they support an entirely new research infrastructure.

Within THOR we are working to realise this vision by improving interoperability and integration of PID services, and addressing the cultural barriers to adoption. Now over a year into the project, we have found that uptake in the humanities, in particular, lags behind other disciplines. In response to this, we will be running a series of workshops through which we hope to better understand the potential for persistent identifier services in the humanities, identifying requirements for and barriers to uptake, and creating a roadmap to guide future development.

The first workshop will take place at the British Library on Friday 9 December 2016, at which we will have a focused discussion on the role of PIDs in research using historical sources, fields in which digital data has taken on an increasingly important role. The workshop is by invitation only; however, we’re especially keen to hear from humanities researchers who are working with research data products. If you’re making data available or reusing historical data and are interested in attending, please contact us at events@project-thor.eu for more information.

THOR and the EC Catalogue of Services Framework

In November 2015 the “eInfrastructure” Unit at the European Commission Directorate General for Communication Networks, Content and Technology asked several e-Infrastructure providers to develop a framework for a service portfolio to describe services developed with funding from the directorate. The THOR project participated in the definition of the concepts underlying such a portfolio. The resulting framework can be found here.


One of the goals of THOR is to ensure access to the scholarly record. Research support services are an important component in the overall production of research outputs. They should be preserved, cited, credited, reused and validated just like the other pieces of the research landscape. A service portfolio can play an important role in this.

A central or distributed shared service portfolio can also:

  • assist users by
    • making services easier to discover and compare;
    • making it possible to determine the services’ relevance; and
    • identifying overlapping efforts or gaps in the catalogued service landscape – particularly once the portfolio is linked to Key Performance Indicators (KPIs) that enable evaluation of the services.
  • enable funding bodies and commercial providers to
    • understand the need for, and the availability, quality and impact of, tools;
    • improve the visibility of their investments; and
    • improve tool uptake.
  • assist service providers, such as THOR partners, by
    • providing a common interoperable language for our own service descriptions to be shared with others, and, in turn,
    • offering a competitive advantage by being able to showcase our products and services together with other EC-funded service providers.
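The discovery and comparison benefits above can be sketched with a toy portfolio. The field names and KPI below are hypothetical illustrations, not the actual Catalogue of Services schema:

```python
# A toy, hypothetical service portfolio; field names and values are
# illustrative only, not the real Catalogue of Services schema.
PORTFOLIO = [
    {"name": "PID Minting Service", "provider": "Provider A",
     "category": "persistent-identifiers", "kpis": {"uptime_pct": 99.9}},
    {"name": "Metadata Harvester", "provider": "Provider B",
     "category": "metadata", "kpis": {"uptime_pct": 98.5}},
    {"name": "DOI Resolution Proxy", "provider": "Provider C",
     "category": "persistent-identifiers", "kpis": {"uptime_pct": 99.5}},
]

def find_services(category: str, min_uptime: float = 0.0):
    """Discover catalogued services in a category whose uptime KPI
    meets a threshold, ranked best-first by that KPI."""
    hits = [s for s in PORTFOLIO
            if s["category"] == category
            and s["kpis"]["uptime_pct"] >= min_uptime]
    return sorted(hits, key=lambda s: s["kpis"]["uptime_pct"], reverse=True)

for s in find_services("persistent-identifiers", min_uptime=99.0):
    print(s["name"], s["kpis"]["uptime_pct"])
```

Even this small example shows why a shared, machine-readable schema matters: discovery, comparison, and KPI-based evaluation all reduce to simple queries once service descriptions use common fields.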

Together with EGI, EUDAT, GEANT, OpenAIRE, and BlueBRIDGE, we have organised two workshops at which we presented the framework. Our workshop at the EGI annual conference in April 2016 was aimed at sharing our current practices, discussing how to harmonise them, and exploring how they and our framework fit with the FitSM standard for IT service management. At DI4R 2016 in September, we continued the discussion by gathering current user experience and requirements for future portfolio development from different communities. This resulted in a set of recommendations to help shape future activities. The workshop at DI4R also enabled us to explore synergies with the MERIL project, which aims to develop a catalogue of openly accessible European research infrastructures (RIs) across disciplines and countries, together with tools to analyse the described resources.

The Catalogue of Services framework can feed into the newly funded eInfraCentral H2020 project. eInfraCentral will develop an implementation of a common service catalogue, not just aimed at researchers, but also at industry, government, educators, and citizens; develop access and monitoring tools; and draw policy lessons.

Science today is “Open Science” − a global collaboration across institutions, borders and disciplines, underpinned by sharing scientific artefacts and resources at a scale hitherto inconceivable. Shared digital services are crucial to its success. They amount to a huge investment which must be responsibly developed. A service portfolio will be an important tool in improving the effectiveness and efficiency of service development and uptake.

Want to know more?
The Catalogue of Services can be found here: https://doi.org/10.5281/zenodo.165467
Example uses are also available here: https://doi.org/10.5281/zenodo.166513

Presentations from DI4R can be found here: