Release Note for the THOR Dashboard

Following the release of the THOR dashboard last year, conversations with internal users, stakeholders and even the project’s reviewers have highlighted additional data and features that might serve to make the dashboard more comprehensive for the wider PID community. At the same time, available resources have changed, and THOR’s own interests have evolved as the project has progressed. In light of all this, we’ve performed a series of updates to the dashboard during the second year of THOR.

A report released today outlines these updates, our considerations during the process, and how others can potentially benefit from the dashboard work. The most visible update is the inclusion of some basic measures from Crossref, but a number of operational changes were also made behind the scenes to improve the scalability and robustness of the dashboard. The report also highlights challenges and lessons learnt throughout the process of developing and maintaining the dashboard, including design limitations, our experiences of designing tutorials to instruct others in creating similar services, and the common issue of scope creep.

It is our hope that others can follow our work on the THOR dashboard to develop other useful services for the PID community. Our experience with the dashboard indicates that there is still quite a way to go in pursuit of comprehensive PID metrics, but THOR is making big strides along this path.

The dashboard is available to view on the THOR project website.

Interim business plan for sustaining the THOR federated PID infrastructure and services

THOR’s approach to sustainability relies on our partners taking on the THOR outputs as part of their regular business activities. This means that these outputs will need to be folded into their operations and sustained by their regular business models. To facilitate this process, and to provide some food for thought for other PID service providers, THOR’s sustainability team was tasked with considering existing business models and our own plans for sustainability.

While we will be considering these questions throughout the project, as its first half wound down we collected our initial progress into an interim business planning document. This first-stage effort outlines our approach to sustainability and discusses the factors that can influence the sustainability of PID programs and services generally. In this document we also raised several open questions that we consider central to the long-term sustainability of PID services. As the project progresses, we will develop answers to these open questions and present them as part of the final business planning document, to be released at the end of the project.

As always, our reports are archived in Zenodo and available on the THOR website.

Challenges of Measuring PID Adoption

This has been cross-posted on the ORCID blog.

The THOR team is hard at work helping forge the path to sustainable persistent identifier (PID) services. As with any long-term goal, a bit of self-reflection is helpful for tracking your progress, considering your successes, and psyching yourself up to tackle challenges along the way. In the case of a project like THOR, we can help this self-reflection along by developing a structure to help us properly measure our success as we go. But this is often tougher than you might think.

In the early days of PID services, it was fine to be concerned only with uptake, since the priority was to get the word out. While we still have some work to do there, PID services have now matured to the point that we can no longer be satisfied solely with simply “getting the numbers up.” We need to tailor our messages in order to drive further innovation towards the interoperable future that THOR and our partners dream of. Having better information about underlying motivations for adopting PIDs and about who might be ready to do so will help us drive the creation of services that will make the whole system better. To further this warm and friendly mission, we need cold hard facts. So how do we go about finding those facts? And how do we turn them into something useful and, quite frankly, a bit less prickly?

What can be measured?

The first step in evaluating our progress was to set objectives that are actionable and measurable. Though it’s tempting to set strict performance targets, doing so just sets you up for failure. If you define success as selling 50 widgets and you only sell 48, then by your own definition you’ve failed. In THOR’s case, our driving purpose is infrastructure improvement, so we’re more interested in observable trends than in concrete targets. Developing key performance indicators (KPIs) is helpful here. Remember that an indicator is just a way to consider trends (e.g. “number of widgets sold”); it isn’t itself a target (e.g. 50 widgets).

How should it be measured? (With which indicators?)

The next step was to determine how to measure what we want to measure. The goal here is to select indicators that are valuable as well as meaningful. “Valuable” means that knowing the indicator’s status will help us to make a decision. “Meaningful” means that we understand what the indicator is actually tracking. If the trend line associated with our chosen indicator goes up, will we know what that means for us, and will we know how to react?

Part of the difficulty of selecting indicators in this way is that the most meaningful and valuable information for you might not be immediately available. When THOR first started down the indicator path, we just wanted easily gatherable quantitative measures; we weren’t looking to take on any complex user studies. However, some of the information we wanted wasn’t available, either because it wasn’t being tracked on a regular basis or because gathering it ourselves would have been a manual process we weren’t yet willing to take on.

How should it be measured? (Tool or no tool?)

Once you know what your objectives are and which indicators will help you track your progress towards those objectives, you need a convenient way to monitor it all. Fancy tools may not be necessary; in fact, most of the time they probably aren’t, depending on which indicators are important to your particular flavour of success. But we wanted to demonstrate some of the possibilities of having PID measures ready to aggregate, and, if we’re honest, we do like fancy, so we developed a dashboard to keep everything in one place. (Read more about our process in our report.) Creating the dashboard was a good exercise in establishing what could be measured and how. It also gave us a chance to explore what meaningful metrics might be. For instance, we can see that PID uptake is on the rise, and we can see some information about the metadata associated with those PIDs, but this doesn’t actually give us any insight into causal relationships, tell us precisely why the trend is happening, or reveal exactly who is involved.

Because we’re all about meaningful data, these adventures in measurement have led the THOR team to identify gaps in the available metrics surrounding PID service adoption and to consider which additional indicators might be useful for future work in the PID research space. We’ve now embarked on a more detailed gap analysis that will lead to a study of some of these missing measures. Since our goal is to drive PID service adoption, we’ve identified disciplinary coverage and geographic distribution as our most promising themes to pursue. We are now collecting the data we need to analyze PID adoption in X disciplines and Y countries – a full report will be available later this year.

Moving forward

So what have we learned throughout this process? First and foremost, not everything is as concrete as you might want it to be. When you’re dealing with humans and human behaviours, things get squishy. Second, since we’re only monitoring existing trends based on factors we don’t necessarily control, some information available to us will remain just “good enough” until others can do more detailed work to either improve the data or flesh it out. Our job for the remainder of the THOR project is to point out what would be most useful to know about interoperability, so that it can be studied.

The PID field is still evolving and has a lot of growth and changes left in store. Some potentially valuable information requires further study to tease out. Our service adoption study, beginning with the gap analysis, will help us make a start on that research, and we hope to gather some useful information that can set the stage for future work. We’ll also need help from the wider PID user and integrator community to improve existing metadata and to help us consider meaningful metrics.

As always, if you have questions or comments about THOR, please get in touch.

Assessing the PID Landscape: Where is THOR in Context?

Part of knowing how well THOR is doing is knowing how our work fits into the overall context of persistent identifiers (PIDs) at large. This is why we began the project with an eye toward sustainability and also why we developed the metrics dashboard in the early days of the project. (That report is on Zenodo, if you’d like to read it again.)

Now that THOR has celebrated its first birthday, it’s time to pause and see what the PID landscape looks like now compared to when we first started. Assessing these changes now will help THOR tweak our roadmap for the future, making sure we stay on track for the remainder of the project. All of these assessment and evaluation efforts will eventually turn into a formal report at the end of the project, but we know how hard it is to wait. To tide you over, we’ve released a white paper based on our internal midtrack assessment.

Your feedback, questions, and comments are always welcome at

ORCID Integration Series: CERN

CERN is a hub for all things High-Energy Physics (or HEP for short). Nearly all researchers in the HEP field make CERN their home for all or part of their research careers. Most of these researchers maintain separate university affiliations as well, making the CERN research community a distributed, decentralized global network. When we’re designing information services, we have to consider this global family and devise ways for them to keep track of all their research, all in one place, automatically. Fortunately for us, we can take advantage of third-party services developed by our partners in THOR to add needed functionality in a way that’s consistent, reliable, and shares our Open Science values.

Inspire, the primary database for HEP literature, provides a number of ways for researchers at CERN and beyond to stay on top of what’s happening in their field. Inspire is a literature aggregator: it harvests metadata from a suite of HEP-relevant journals, which users can then search for pertinent literature. This metadata also feeds other services, such as HEPData, the repository for supplementary publication data in HEP, and allows us to automatically generate author profiles. Handling much of this information automatically is a great benefit for our users, and it makes Inspire a rich source of information specific to research in HEP. But this usefulness naturally doesn’t extend to other systems or disciplines. Tapping into the ORCID iD system will let our users be identified across a variety of scholarly systems and will help them link their HEP work to any other area of their research life.

In the Inspire author profiles, we already had a homegrown system for pushing and pulling works information to and from ORCID. For those authors who have associated an ORCID iD with their profile (a process that formerly required manual entry and manual verification), we are able to append works information from Inspire to their ORCID record, and we are able to pull works information from their ORCID record to display on the External works tab in their Inspire profile. We have now extended this functionality with the ability to authenticate through ORCID for other Inspire functions. This authentication is in place for Inspire’s literature and author suggestion functions and for correction of authors. Further modification of Inspire data via ORCID authentication will be rolled out with the new release of Inspire slated for later this year.
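The "pull" side of such an exchange can be done against ORCID's public API. As an illustration only (this is not Inspire's actual implementation, and the helper names are our own), the following Python sketch fetches a record's works from the public v3.0 endpoint and extracts the titles:

```python
import json
import urllib.request

ORCID_PUBLIC_API = "https://pub.orcid.org/v3.0"

def fetch_works(orcid_id):
    """Fetch the works summary for a public ORCID record as JSON."""
    req = urllib.request.Request(
        f"{ORCID_PUBLIC_API}/{orcid_id}/works",
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def work_titles(works_payload):
    """Extract work titles from a /works response. The API groups
    duplicate works; we take the first summary in each group."""
    titles = []
    for group in works_payload.get("group", []):
        summary = group["work-summary"][0]
        titles.append(summary["title"]["title"]["value"])
    return titles
```

Pushing works to a record, by contrast, goes through the ORCID member API and requires the record holder's authorisation, which is exactly why the authenticated connection described above matters.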

This additional functionality is an extension of Inspire’s upgrade to an all-new version of its underlying Invenio platform. The completely overhauled Invenio 3 includes a module for ORCID authentication, making Inspire’s integration painless. And since Invenio is underneath all of CERN’s scientific information systems (Inspire, HEPData, and Zenodo), this means we’re one step closer to an interoperable platform for researcher outputs.
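Under the hood, "log in with ORCID" is a standard three-legged OAuth flow, which Invenio's module wraps for us. To make the mechanics concrete, here is a minimal sketch of the first step, building the authorisation URL a user is redirected to; the client ID and redirect URI below are placeholders, not real credentials:

```python
from urllib.parse import urlencode

ORCID_AUTHORIZE_URL = "https://orcid.org/oauth/authorize"

def orcid_authorize_url(client_id, redirect_uri, scope="/authenticate"):
    """Build the URL a user visits to grant access; ORCID then
    redirects back to redirect_uri with a one-time authorization
    code that the server exchanges for the user's ORCID iD."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        "scope": scope,
        "redirect_uri": redirect_uri,
    }
    return f"{ORCID_AUTHORIZE_URL}?{urlencode(params)}"
```

The later steps (exchanging the code for a token, linking the returned iD to a local account) are what the Invenio 3 module takes care of, which is why the integration was so painless.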

We’ve also implemented ORCID authentication in HEPData. HEPData gathers its bibliographic metadata from Inspire, and Inspire pulls information on data related to publications from HEPData and displays it in the relevant author’s profile. There is already a direct connection to Inspire, so logging in with ORCID isn’t necessary to make this author-publication-data triangle possible. However, users now have the option of logging in with ORCID to access HEPData’s review and submission functions, providing a third party authentication choice that’s compatible with other scholarly systems.

At CERN, we were able to implement ORCID authentication straight out of the box, making it a simple and practical choice to offer our users for unifying and managing their scholarly identification needs.

Monitor the Identifier Landscape with the THOR Dashboard

THOR is pleased to announce the launch of a new dashboard tool. Our dashboard will monitor the evolution of the persistent identifier (PID) landscape over the life of the THOR project. Try it out for yourself at

The dashboard is an aggregator of information about the landscape of PIDs. Through the dashboard, you can see how DataCite and ORCID identifiers connect, and over the life of THOR chart the growth of the network of connected identifiers.

The dashboard is meant to provide a baseline for the current state of PID infrastructures upon which THOR will build, giving us a way to track our progress in impacting the PID ecosystem. It currently incorporates statistics from our partners DataCite and ORCID, as exemplars of research object identifiers and researcher identifiers, respectively.
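As a taste of where such headline numbers can come from, the sketch below asks the public DataCite REST API for the total count of registered DOIs. This is a simplified illustration, not the dashboard's actual harvesting code, and it assumes the current `api.datacite.org` response format:

```python
import json
import urllib.request

def total_from(payload):
    """Pull the headline count out of a DataCite list response."""
    return payload["meta"]["total"]

def datacite_doi_total():
    """Fetch the total number of registered DOIs from DataCite,
    requesting a page size of zero so no records are downloaded."""
    url = "https://api.datacite.org/dois?page%5Bsize%5D=0"
    with urllib.request.urlopen(url) as resp:
        return total_from(json.load(resp))
```

Sampling a figure like this at regular intervals is all it takes to start charting growth over time, which is essentially what the dashboard does at scale across both partners' APIs.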

But what you won’t see is all the work that went on behind the scenes to bring you this first iteration of the dashboard. In addition to the development work, a lot of effort went into figuring out which data were, and should be, available for harvest, and how best to harmonise the existing metadata between DataCite and ORCID. This effort goes directly into building the foundations of THOR, so it will be reflected in later project outputs as well. A longer report on the development of these tools and metrics is available at

The dashboard will continue to evolve alongside THOR. Check it out and send your feedback, questions and comments to

Sustainability: Key to THOR’s Success

As you may recall from the Knowledge Exchange Workshop presentation, the work of THOR is composed of several strands: Research, Integration, Outreach, and Sustainability. That last strand, Sustainability, fits with our overall theme of persistence. As we strive to optimize future services for persistent identifiers and their continued development, the THOR team wants to make certain that the work we do will have lasting effects and can be carried on by our partners and the wider research community long after our project comes to a close. Because what good is working to ensure a future of connected research if your own research doesn’t last?

This is why the THOR team is focusing on sustainability right out of the gate. Through our Sustainability work package, we’re developing the means to evaluate, re-shape, and prolong our work from the very beginning, to make sure we’re always on track and working toward a sustainable future. To this end, we’re working on a metrics dashboard, a one-stop shop for tracking the increased interoperability of persistent identifiers over time and for making the efforts and outputs of THOR public. Monitoring persistent identifier development will enable us to chart the future course of Open Science and scholarly communication, and publicizing our self-evaluation will keep us transparent and accountable. Everyone will be able to follow our progress early on, and our project team will be able to iteratively evaluate our efforts to see what works and what doesn’t, and to adjust our course as we move forward.

To make sure that our dashboard is truly measuring the statistics and indicators that are most important to our stakeholder community, and to help inform our outreach efforts and future business planning, we need your feedback. We’ll be holding two half-day focus groups on September 21, the day before the 6th RDA Plenary in Paris. Each focus group session will be made up of experts from our stakeholder communities and will provide valuable input on THOR’s proposed evaluation metrics and indicators of success. If you’re interested in shaping the future for persistent identifiers, and you’ll be in Paris before RDA, contact us to participate in one of the focus groups.