Following on from the information on types of metrics and where you can access them, you may now want to use some of these metrics to demonstrate strengths in your research or monitor trends in a particular area.
The first thing to consider is the specific question you are trying to answer: what are you interested in, and what do you value?
It is essential to identify what it is about an individual's, a group's or an institution's research performance that you are interested in. This may change over time and differ from the interests and values of others. For example:
If you are trying to establish a new research group as a centre of excellence, your focus may be on developing a critical mass in terms of volume of publications and income.
If you are trying to evaluate the quality of research within an established team, focusing on peer review scores may be more valuable.
Equally, if you are trying to identify which journal to submit to, you need to establish which journals are most suitable for your research area and your intended audience, and whether they accept the type of article you wish to submit. Comparing journal metrics such as CiteScore (available in Scopus) or the ABS rankings may help if you have several journals to choose from; a short example of retrieving CiteScore programmatically follows this list.
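If you have programmatic access to Scopus, a short script can pull CiteScore values for a shortlist of candidate journals. The sketch below is illustrative only: it assumes Elsevier's Serial Title API (registration and an API key are available via https://dev.elsevier.com), and the journal names and ISSNs are hypothetical placeholders. Verify the endpoint and response field names against the current API documentation before relying on them.

```python
import requests

API_KEY = "YOUR_SCOPUS_API_KEY"  # placeholder: register at https://dev.elsevier.com
BASE = "https://api.elsevier.com/content/serial/title/issn"

def citescore(issn: str) -> float | None:
    """Return the current CiteScore for a journal ISSN, or None if unavailable."""
    resp = requests.get(
        f"{BASE}/{issn}",
        params={"apiKey": API_KEY},
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    # Field names assumed from Elsevier's Serial Title API documentation.
    entry = resp.json()["serial-metadata-response"]["entry"][0]
    info = entry.get("citeScoreYearInfoList") or {}
    value = info.get("citeScoreCurrentMetric")
    return float(value) if value else None

# Hypothetical shortlist; replace the names and ISSNs with your real candidates.
candidates = {"Journal A": "1234-5678", "Journal B": "8765-4321"}
scores = {name: citescore(issn) for name, issn in candidates.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1] or 0.0, reverse=True):
    print(f"{name}: CiteScore {score}")
```

Remember that a single journal metric is only one input: suitability for your audience and article type should carry at least as much weight as the score itself.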
We cannot ignore external drivers such as university league tables, and we may want to know how we 'perform' against their measures. However, we should be wary of basing evaluation on them: their values may not align with our own, so internal assessment should not rest on them by default, especially for particular groups or subject areas where these measures may not be appropriate.
If you have been using metrics for a while to answer particular questions, consider whether alternative approaches are available and whether these would be more appropriate now. It is easy to fall into the trap of doing something the way it has always been done. There may be sources of metrics that are easier to access, or approaches that are easier to undertake, but this can result in the observational bias known as the 'streetlight effect', where people only search for something where it is easiest to look. Similarly, questions should never be retro-fitted to the data we already have.
Metrics should not be applied as a standard measure for the assessment of individual researchers or articles. Distinctions between metrics are important and it is essential to recognise the limitations of each type of metric as we consider how best to answer the original question.
There are many things that could be measured, but just because we can, doesn't mean we should. We should also consider any unintended side effects of measuring a certain value.
Be SMART
When considering what your question is and whether metrics are appropriate, the SMART acronym may be helpful.
Be Specific
Can the value be Measured quantitatively?
What are you trying to Achieve?
Is it Robust, i.e. will it provide the information you need to prompt further queries and to explain results?
What is the Timeframe under consideration?
Remember: Be clear about the questions you are trying to answer and identify core values. Can these be measured? If so, identify relevant metrics that will support you in answering these questions.
If you’re using metrics to evaluate a group, where possible, you should involve the group in the process. For example, you could look at:
Reviewing values in your mission statement and creating additional ones where appropriate
Identifying top/core values for your group
Thinking about how (or if) these values might be measured: what specific questions would you ask to help evidence success in these aims?
If metrics will be used, deciding how you will approach this with the tools and data you have access to (a minimal sketch follows this list)
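If the group decides metrics are appropriate, even a very simple script can turn an export from your existing tools into evidence against one specific, measurable question over a defined timeframe (the SMART criteria above). The sketch below is a minimal illustration and assumes a hypothetical publications.csv export (for example from Scopus or Pure) with a "Year" column; adapt the file and column names to whatever your source actually provides.

```python
import csv
from collections import Counter

# Count research outputs per year from a hypothetical CSV export,
# answering: how has our publication volume changed over time?
counts = Counter()
with open("publications.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        counts[row["Year"]] += 1

for year in sorted(counts):
    print(f"{year}: {counts[year]} outputs")
```

Keeping the analysis this transparent also makes it easier for the group to challenge the numbers and to spot where a metric is not capturing what they actually value.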