Where AI Really Stands — A reflection on the state of AI in research operations as we reach the end of 2025

As this year draws to a close, I find myself thinking about the conversations happening across our community — the ones in meetings, workshops, and webinars, but also the quieter exchanges between colleagues trying to make sense of what AI is becoming in their day-to-day work.

Across almost every corner of the sector, the same pattern keeps appearing. AI is no longer an abstract possibility sitting somewhere on the horizon. It has already entered the practical, everyday work of professionals supporting research — even if our institutions are still working to understand what that means.

A recent UNESCO survey put this into sharp focus: 90% of people working in higher education now use AI tools in their professional activities. Not for experimental side projects, but for the real work of writing, analyzing, planning, and managing tasks that only a year or two ago looked very different.

At the same time, a global Oxford University Press study found that 76% of researchers are using AI in their research, yet 46% say their institution has no AI policy. Only 27% feel excited about AI’s impact on academic research; many others are cautious or uncertain. Not because they are anti-technology, but because they understand the pressures, responsibilities, and ethical weight carried by research.

So we arrive at the end of 2025 in a complex moment:

AI is being used widely by individuals, but unevenly by institutions.
The adoption is fast, but the frameworks are slow.
The potential is visible, but the pathways are still forming.

This gap — between what people are already doing and what institutions feel prepared to support — is one that almost everyone in research is navigating right now. It isn’t resistance or disinterest. It’s the sign of a sector trying to move forward responsibly while the landscape shifts beneath it.

How This Looks Inside Research Environments

Across AIRON’s conversations this year, people have described their relationship with AI in remarkably similar ways. Many of us are already using it in small, helpful, sometimes mundane ways: summarizing dense papers, shaping early drafts, checking policy language, mapping workflows, or organizing projects that sprawl across multiple teams. These are not dramatic reinventions — they are everyday practicalities, the kind that emerge because a tool helps with a task right in front of us.

But when we look beyond our own screens and back at the institutional systems that support research — funding pipelines, ethics processes, contracting, compliance, financial oversight, impact planning, library services, research information systems — the readiness gap becomes impossible to miss. AI may be supporting tasks, but it is not yet integrated into the systems shaping the work.

People often describe living between two realities:
One that is personal — curious experimentation, small wins, a sense of possibility.
And one that is institutional — policies still forming, governance still unsettled, workflows still largely manual.

AI becomes something people use, but not something the organization supports.
A tool for the individual, but not yet a capability of the institution.

This is why adoption feels uneven. Progress is happening — just not in the same way, or at the same pace, across our sector.

Why Adoption Is So Slow — The Real Barriers

It’s easy, from the outside, to assume universities or research organizations are slow to change. But when we listen closely, the barriers are rarely about reluctance. They are about responsibility, complexity, and care.

The work is deeply interconnected.
A single research proposal can pass through development, ethics, contracts, finance, governance, and impact planning. Introducing AI into any step requires understanding the entire system around it.

Governance remains unsettled.
Institutions are asking the right questions — about data protection, integrity, accountability, bias, transparency. With few established answers and little policy precedent, that uncertainty turns into a pause rather than a refusal.

Confidence is uneven.
Researchers, research operations staff, librarians, analysts, and managers are interested in AI — but not always confident in expectations, guidance, or institutional direction.

Ethics and integrity weigh heavily.
AI is not just another tool. Its misuse, even unintentionally, can affect credibility, fairness, and trust — all core to research.

Capacity is stretched.
The people who keep research moving already carry significant workloads. Adoption requires time, training, and support — without which even promising efforts can stall.

None of this is resistance.
It is caution rooted in professionalism.
But caution without shared structure can easily slow progress.

What Needs to Change — And How We Move Forward

One of the clearest lessons emerging this year is that meaningful AI adoption begins not with the technology but with the work itself — the real, messy, interconnected work that defines research operations.

AI cannot fix what we have not taken time to understand.

We need governance frameworks that feel practical and lived, not abstract or idealized.
We need capability-building that grows confidence gradually and respects the realities of people’s day-to-day work.
We need clear communication about what AI is for — and what it is not.
We need measures of value that look beyond speed to understand quality, transparency, and fairness.
And we need to move beyond isolated pilots that never leave the boundary of their project teams.

No single institution can do this alone.
But across the AIRON community, the insight already exists.
What we need now is the structure, connection, and trust to surface it — and to act on it.

Where AIRON Fits Into This Story

AIRON’s work this year has been shaped by a simple recognition: people across research operations needed a space to explore AI together. Not a space for hype or certainty, but a space for honesty, questions, early experiments, and shared learning.

That is the foundation we have been building on.

Our regional groups and expert groups, now forming, will become places where colleagues can look closely at what AI actually means in their contexts. They will surface practical lessons that often stay hidden inside teams, and begin co-creating guidance grounded in lived experience rather than abstract frameworks.

The AIRON Leads Program has been another important part of this foundation — supporting professionals who want to develop the confidence and capability to guide AI adoption responsibly within their institutions. Through structured learning and peer collaboration, participants explore how to assess tools, redesign workflows, and work constructively with institutional leaders. The Alumni Community extends that learning long after the program ends.

Together, these emerging communities — our forming groups, learning programs, discussions, and shared reflections — are beginning to bridge the gap between individual experimentation and institutional readiness. They are helping transform isolated insights into collective understanding.

AIRON is not here to offer ready-made answers.
It is here to create the conditions through which better answers can emerge — collaboratively, carefully, and grounded in the realities of research operations.

Looking Toward 2026

As 2025 draws to a close, it has become clear that AI is no longer something sitting on the periphery of research. It is already shaping how work is written, analyzed, reviewed, managed, disseminated, and planned. The question facing us now is not whether AI will influence research operations, but how well we will navigate that influence — and how thoughtfully we will build the systems around it.

The future of AI in research will not be determined by the pace of technological change.
It will be determined by the people who understand research from the inside — the research managers, librarians, analysts, administrators, technologists, policy specialists, and leaders who carry both the opportunities and the responsibilities of this moment. Their judgment, their questions, and their care will shape what comes next far more than any model or tool.

And as always, it will depend on how we choose to move forward — together or alone.

The next chapter of responsible AI in research is not unfolding behind closed doors.
It is emerging in conversations happening across our sector every day.
It is forming through the groups, programs, and shared learning spaces we are building.
It is taking shape in our willingness to ask difficult questions, to learn from imperfect experiments, and to explore openly in a landscape that is still coming into focus.

If you are part of this journey, there is space here for your insight, your uncertainty, and your perspective. The progress we make in 2026 will grow from the foundations we’ve laid together this year — foundations built slowly, carefully, and with a shared commitment to doing this work well.

The future of research will not simply arrive.
It will be built — thoughtfully, collaboratively, and with intention.
AIRON exists to help make that collective effort possible.

If you want to be part of this work, join AIRON — and help shape what comes next.
