An invitation to share what is really happening across our teams and institutions
If you work in research operations today, this will probably feel familiar.
AI has already entered your working life.
It might be helping you draft text, summarise long documents, organise information, analyse data, or navigate processes that are complex, fragmented, and time-consuming. Sometimes this use is intentional and supported. Sometimes it is quiet, experimental, or driven by individual curiosity. Most of the time, it is not coordinated across teams — and rarely across institutions.
What matters is not how advanced these uses are.
What matters is this: AI is no longer theoretical. It is already shaping how research support work gets done, even if we do not always talk about it openly.
For most institutions, the question is no longer whether AI will play a role in research operations. That question has already been answered. The question we now face is different — and much more important:
What is the easiest, most responsible, and most impactful way to adapt AI so that it genuinely improves research services and supports the people doing the work?
From everything I have seen, the answer is not a grand strategy or a perfect policy.
It starts with use cases.
Why Use Cases Are the Most Natural Place to Start
Use cases are simple. That is their strength.
They are easy to understand, easy to discuss, and relatively easy to test. They do not require everyone to agree on a shared vision of “AI transformation.” Instead, they ask a much more practical question:
Where could this actually help right now?
When AI is discussed at an institutional level, conversations often begin with tools, platforms, roadmaps, or governance frameworks. These are necessary — but for people doing the work every day, they are rarely where understanding begins.
What people understand are use cases.
Use cases are practical stories:
- Where is AI being used?
- Why was it introduced?
- What problem was it meant to solve?
- What changed for the people involved?
They address concrete needs, not vague promises of abstract efficiency. They make AI understandable, discussable, and assessable — which is exactly what responsible adoption requires.
This is why use cases matter. And this is why now feels like the right moment to focus on them together.
What Good Use Cases Are Really About
AI is often introduced with big promises: productivity, efficiency, scale. While these outcomes matter, they are rarely what motivates people working in research operations — especially when workloads are already heavy, complex, and deeply interconnected.
What resonates much more strongly is a pragmatic understanding of what AI can actually help with, such as:
- improving the quality and consistency of research services, not just speed
- making processes more equitable and accessible for staff and researchers
- reducing friction in tasks that drain time and energy
- creating space for judgment, relationships, and complex problem-solving.
This is not about making research operations colder or more automated.
Done well, AI should make the work more human, fairer, and more sustainable — including in terms of workload, wellbeing, and the ability to focus on what really matters.
But this only happens when AI is applied to real operational needs. Use cases are how we make that visible.
Working on Fewer, More Meaningful Use Cases
One of the clearest lessons emerging across institutions is that trust in AI is not built through grand launches or ambitious strategies. It is built gradually, through the careful introduction of a small number of meaningful use cases.
In practice, this often means:
- starting with use cases that clearly matter to teams
- being open about what AI is doing — and what it is not
- involving the people affected early in the process
- learning honestly from what works and what doesn’t.
Over time, this approach builds confidence and shared understanding. It also makes it easier to pause, adapt, or stop when something does not deliver the value we hoped for.
Many institutions understandably begin with lower-risk use cases. But experience shows that some of the most meaningful impact can come from carefully exploring higher-risk areas as well — where the potential gains are larger, and where shared learning is especially important.
Why We Need a Shared Use Case Library
This is where a shared use case library becomes essential.
A shared library is not about showcasing success stories or promoting tools. It is about making reality visible.
When we document and share use cases, several important things begin to happen:
- patterns emerge across teams and institutions
- duplication of effort is reduced
- risks and concerns are identified earlier
- leaders gain a clearer picture of where AI is actually making an impact.
Most importantly, use cases reflect real needs, not hypothetical ones.
They help us see where effort is being spent, where pain points keep recurring, and where AI might make a meaningful difference if applied thoughtfully.
Let’s Build This Together
This is why I am inviting members of the AIRON community to start sharing their AI use cases.
Not because everything is finished or perfect — but because it isn’t.
By sharing:
- the use cases you are exploring or already using
- the problems you are trying to address
- what has worked — and what has not
- how you are thinking about risk, transparency, and inclusion

we can begin to see the bigger picture together.
How to Share — and What Happens Next
Here is the link to share your AI use case with the AIRON community:
https://tally.so/r/b5ZKO7
Once use cases are submitted, we will:
- analyse them collectively to identify emerging patterns and trends
- share a summary report back with the community
- bring together teams and practitioners working on similar challenges
- create workshops where we can learn from one another, across institutions and roles.
AI will move faster in 2026.
If we want AI to genuinely improve research services, enhance quality, and support the values at the heart of research operations, then use cases are where that future begins.
If you are already working on one — please share it with the community, so we can all learn from what you have done.
That is how progress happens: through shared, thoughtful practice.