INTRODUCTION
These are especially important times for international collaboration, artificial intelligence (particularly AI ethics), and transportation.
International cooperation is being reshaped by changes in international relationships and by the cuts to U.S. Government research funding demanded by the current U.S. administration. American global “soft power” is also being reduced, which will further affect international cooperation. We do not know where these drastic changes will take us, how far they will go, or when they will end. My suggestion is that everyone working in these sectors adopt a very flexible mindset.
AI has grown in recognition and significance over the past decade due to major technological advancements like large language models and software platforms such as ChatGPT (ChatGPT 5 was released in August 2025). AI has entered all sectors of the global ecosystem at breakneck speed, faster than legal, cultural, and social systems can adjust to its possibilities. There are AI variants such as generative AI (GAI) and agentic AI, and distinguishing between them requires a precise understanding of their details.[i] And to make matters worse (or more interesting), a discussion is forming around AI agents and whether to give them real autonomy.[ii]
A growing issue in the AI field is addiction to, or compulsive use of, the technology.[iii] So the question may be asked: what are research institutions doing to prevent their employees from becoming addicted to AI tools and programs? It is a significant issue, and one for further examination at another time. Add to that the impact on human cognitive abilities. These are interesting times!
The author has been in the workforce since before the large-scale adoption of the Internet, and the current environment looks much like that earlier era. For people old enough to have been working in 1995, the Internet was sold as nirvana: communication would be easier, work would be easier, and the work week would be shorter. With the Internet, it was implement first, worry about the implications later. The implementation of AI has followed a similar course, though there is more effort to regulate AI at the national and global levels. The United States has neither a national AI statute nor an overarching privacy/data protection statute. And AI ethics? That discussion has only started in the past several years.
Transportation planning is always evolving, particularly in response to engineering and technology changes, and it touches energy, environmental, social, and cultural dimensions in every country. In the United States, the present administration is re-embracing carbon fuels at the expense of green energy initiatives developed over the past dozen years. The U.S. auto industry has abandoned the passenger car market, relying instead on larger and heavier SUVs and pickup trucks to meet profit targets. These U.S. Government steps run 180 degrees counter to initiatives in most other parts of the planet. How these structural factors will affect transportation systems remains to be seen. AI is embedded in the auto sector and therefore affects transportation planning. Hopefully, the benefits of AI in transportation planning will outweigh the costs.
These three areas present challenges but also emerging opportunities for professionals who are flexible and humane in their approach to international cooperation, AI ethics, and transportation planning. The next section introduces a January 2025 online seminar co-presented by the author and sponsored by the Government-University-Industry-Philanthropy Research Roundtable (GUIPRR), part of the National Academies of Sciences, Engineering, and Medicine (NASEM) in Washington, D.C. It was that online seminar that prompted the invitation to author this paper. The following section provides the international cooperation context for the balance of this paper.
THE GUIRR INTERNATIONAL RESEARCH COLLABORATIONS PROJECT (2009-2018)
The GUIRR International Research Collaborations project (the “I-Group”) started in 2009, although the original discussions between GUIRR director Susan Sauer Sloan and the initial project co-chairs (including the author) began in Fall 2008. Dr. C. D. (Dan) Mote Jr., then-president of the National Academy of Engineering and a GUIRR co-chair, was the main supporter of the project and believed in the significant merits of international collaboration.[iv] The Transportation Research Board is a sister entity to GUIPRR/GUIRR within NASEM.
The GUIRR project released three advisory workshop proceedings:
2011 – Examining Core Elements of International Research Collaboration[v]
2014 – Culture Matters: International Research Collaboration in a Changing World[vi]
2018 – Data Matters: Ethics, Data, and International Research Collaboration in a Changing World[vii]
The first workshop focused on key issues in international cooperation, including cultural differences, ethical standards, research integrity, risk management, contract negotiation, and intellectual property. The second and third workshops then focused on specialized topics (culture and data, respectively). Each workshop brought together a global group of subject matter experts from government, university, industry, and philanthropy.[viii] Read today, the reports remain highly relevant. In the intervening years, however, other issues have come to the fore, such as artificial intelligence and its ethics, research security, and data protection. And since many issues are technology-dependent, the development of innovative technologies will force further administrative and legal responses. The discussion will now turn to AI and transportation planning.
AI AND TRANSPORTATION PLANNING
AI is integral to highway and mass transit planning, implementation, and the management of completed facilities. Transportation professionals know through education and experience that transportation development involves much more than civil engineering: it involves environmental, neighborhood, social, and cultural dimensions that are complex and often messy. AI will assist in dealing with that messiness but can also cause controversy by acting without humanity. That is why transportation professionals must always work to ensure the humanity of transportation decisions, even when AI tools are used to simplify, as best they can, an inherently messy process. AI in transportation planning must exhibit empathy and humanity, and it follows that ethical AI must have those characteristics as well.
AI tools are developing and improving so rapidly that the pace alone is a challenge for transportation professionals and for communication with the public affected by transportation issues. This speed also underscores the importance of empathy and humanity in the use of AI.
The discussion will now turn to AI ethics: key concepts and the management of an ethical AI program.
ETHICAL STANDARDS FOR INTERNATIONAL COOPERATION IN AI DEVELOPMENT
This section brings together the author’s experiences in the working world and material learned in an excellent master class in AI Ethics offered by the London School of Economics and Political Science (LSE).[ix]
Unified ethical standards for global cooperation in AI development must include the following elements, implemented on Day One. These are not optional concepts to be added later, when time and funding allow:
Legitimacy. Legitimacy must be a central feature of AI systems. What does legitimacy mean? Legitimacy may be considered the acceptable use of power over other people. This is an important concept because governments exercise enormous power over people in their political jurisdictions.[x] Governments lacking legitimacy may be prone to using AI in unethical ways.
Transparency. Transparency may be considered the clarity of action, including both process and substance (the reasons for, and results of, a specific decision). Stakeholders must be able to access relevant information (the clarity dimension). In the public sector, assessment transparency (the outcomes of public decisions) and deliberative transparency (the reasons behind a decision) are both important.[xi] In the United States, transportation professionals are well aware of the requirements for public hearings and multiple stakeholder input opportunities before a transportation project is implemented.
Value Alignment.[xii] Are the values of your organization reflected in your AI systems? What do you consider important, and what takes priority? What does your organization NOT stand for? These questions, and how you answer them, are critical; misalignment between organizational values and AI use must be avoided.
Democracy. Democracy may be considered “a form of government that enacts rule by the people.”[xiii] This is the definition used in the LSE course, but keep in mind that there are many definitions, and for this discussion democracy is treated as a concept rather than a particular structure. In a democracy the people of the country bestow authority upon government leadership; the government derives its power from the people. This is fundamentally different from monarchical or autocratic systems, where power and authority flow from the government to the people. And keep in mind one final thought: even monarchical and autocratic governments may have democratic elements.
Justice and Fairness. Ethical AI systems must be just and fair, both in the outcomes they produce and in the processes by which those outcomes are reached.
The Equality/Efficiency Trade-Off. How do the goals of equality and efficiency in AI systems work together? Will ensuring equality in AI lead to a reduction in AI efficiency? Will making efficiency the primary goal of AI systems lead to inequality, or worse yet, to discrimination and bias? Discrimination and bias in AI have been receiving growing attention over the past few years, and that attention will continue to increase. AI is built on large data sets, and if those sets contain discriminatory or biased information, the resulting AI output will suffer from the same deficiencies (a brief sketch follows this list).
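To make the data-set point concrete, the following is a minimal sketch in Python, not drawn from the LSE course or any named framework, of how a planning or analytics team might screen a model's outputs for disparate impact across groups. The records, group labels, and the 0.8 (“four-fifths”) threshold are illustrative assumptions only, not a legal or regulatory standard.

# Minimal illustrative sketch: a disparate-impact screen on hypothetical model
# outputs. Group names, records, and the 0.8 threshold are assumptions made
# for illustration only.
from collections import defaultdict

# Hypothetical records: (protected_group, model_approved)
records = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

approved = defaultdict(int)
total = defaultdict(int)
for group, outcome in records:
    total[group] += 1
    approved[group] += int(outcome)

# Selection rate per group, then the ratio of the lowest rate to the highest.
rates = {group: approved[group] / total[group] for group in total}
ratio = min(rates.values()) / max(rates.values())

print("Selection rates by group:", rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # illustrative "four-fifths" screening threshold
    print("Potential disparate impact: review the training data and the model.")

Run on the toy records above, the ratio falls well below the threshold. In practice a result like this would prompt a review of the underlying data and model, with human judgment applied before any conclusion is drawn.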
In the ethics-adjacent profession of law, the American Bar Association recently discussed five scenarios where lawyers will encounter the ethical and unethical use of generative AI (GAI) tools.[xiv] Even for the non-lawyer, reading discussions on legal ethics may be an enlightening exercise.
Having ethical standards in AI development is a critical first step. The next step is operational: developing and then managing the program. It is to the latter that this paper now turns.
DEVELOPING AND MANAGING AN ETHICAL AI PROGRAM
Ethics must be built into an organization’s AI program from Day One, not added later as an afterthought. In that respect, building an ethical AI program is no different from building a privacy or data protection program, and it is not far removed from building a university’s sponsored research operation, with which the author has considerable experience.
An ethical AI program needs ethics champions. Ethics champions are the “tip of the spear” in ensuring that ethical considerations are always weighed alongside the technical and operational aspects of AI. A leading practice is to ensure that the organization’s main AI ethics champion is also part of organizational leadership and/or its board of directors. The author knows through experience that one of the most important ways to achieve organizational buy-in is through leadership at the top; if leadership does not buy in, bottom-up buy-in becomes much harder. This is why AI ethics champions should be present at every level of an organization.
One best practice mentioned in the literature is the creation of an AI ethics review board.[xv] Similar to the institutional review boards (IRBs) that govern the use of human subjects in research, an AI ethics review board ensures that institutional or corporate ethical principles, as well as AI-specific ethical principles, are consistently followed within the organization. These boards should also lead education across the organization, which may include providing AI toolkits at the department or business-unit level. Both the ethics review board and the toolkits should incorporate the ethics champions and the organization-wide AI ethics principles mentioned earlier.
One excellent tool is the AI Act Explorer, which helps companies and other entities answer their questions about the EU AI Act.[xvi] The Explorer includes an overview of the Act and a compliance checker.
One area of emerging importance in the implementation of AI systems, and in infusing ethics into AI, is the governance of those systems. As with any other corporate or institutional function, the governance of AI systems must be in accordance with corporate values (ethical and otherwise).
One such governance and innovation tool is AI standards. Standards are the norms and rules that guide the development of technology and are embedded in the technology itself. Technical standards often encode specifications for the design or performance of AI systems or products. A simple, non-technical example is A4 paper: because A4 has a standard definition of its size, any device around the world can print onto A4 paper with correct text margins. Shared technical standards can increase compatibility and interoperability. Other standards focus on processes: a standard for risk reporting, for example, ensures that all companies undertake the same activities in producing risk reports. For more information on AI standards, see the Alan Turing Institute’s AI Standards Hub,[xvii] NIST’s AI Risk Management Framework,[xviii] or Japan’s AI Guidelines for Business.[xix]
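As a concrete illustration of what a process standard can require in practice, the sketch below defines a shared AI risk-report structure so that every team in an organization documents risks the same way and is checked for the same omissions. It is a hypothetical example written for this paper; the field names and completeness checks are assumptions, not excerpts from NIST, the AI Standards Hub, or Japan’s guidelines.

# Hypothetical sketch of a standardized AI risk report, so every team produces
# reports with the same fields and the same completeness checks. Field names
# and checks are illustrative assumptions, not drawn from any named standard.
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class AIRiskReport:
    system_name: str
    owner: str
    report_date: str
    intended_use: str
    risks_identified: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)
    ethics_review_completed: bool = False

    def validate(self):
        """Return a list of problems; an empty list means the report is complete."""
        problems = []
        if not self.risks_identified:
            problems.append("List at least one risk for every AI system.")
        if len(self.mitigations) < len(self.risks_identified):
            problems.append("Every identified risk needs a corresponding mitigation.")
        if not self.ethics_review_completed:
            problems.append("Ethics review board sign-off is missing.")
        return problems

# Example report for a hypothetical transportation planning system.
report = AIRiskReport(
    system_name="Transit demand forecaster",
    owner="Planning analytics team",
    report_date=str(date.today()),
    intended_use="Forecast ridership to prioritize service changes.",
    risks_identified=["Historical ridership data under-represents some neighborhoods."],
    mitigations=["Augment the data and review forecasts with community input."],
)

print(json.dumps(asdict(report), indent=2))
print(report.validate())  # flags the missing ethics review sign-off

The value of a shared structure of this kind lies less in the code than in the discipline it encodes: every system, whatever its purpose, is documented in the same way, checked for completeness, and signed off by the ethics review board before deployment.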
CONCLUSION
These are exciting and challenging times for professionals working at the intersection of AI ethics, transportation planning, and international collaboration. Why? Because AI ethics sits at the intersection of technology, engineering, and philosophy; transportation planning is constantly changing due to technological change and to greater environmental, social, and cultural awareness of the consequences of transportation choices; and international collaboration is always evolving with financial, political, social, and cultural factors. The changes being brought about by the present U.S. Government are introducing levels of change and uncertainty that may not be fully understood for years. Working professionals in these fields must therefore maintain flexible skills and dispositions that allow them to make successful transitions as circumstances dictate.
Author
James Casey, JD, MPA, MA, CPP, is an Academic Community Leader and Adjunct Associate Professor in the School of Professional Studies M.S. in Research Administration and Compliance Program at the City University of New York, and a Research and Data Protection Executive in San Antonio, Texas. Professor Casey is a valued AIRON expert. He has been active internationally since 1994 and an author and speaker on Milwaukee, Wisconsin, infrastructure and transportation issues since 1993. Professor Casey now writes on AI to examine how emerging technologies intersect with research operations, public policy, and global data governance — helping institutions prepare for the opportunities and risks ahead. He may be reached at james.casey@sps.cuny.edu and jcasey@caseypc.com.
[i] For a good introduction to the differences, see https://www.ibm.com/think/topics/agentic-ai-vs-generative-ai
[ii] Grace Huckins, Handing AI the Keys, MIT Technology Review 22-27 (July/August 2025).
[iii] For an excellent piece discussing AI addiction and associated issues, see https://www.theatlantic.com/technology/2025/12/people-outsourcing-their-thinking-ai/685093/?gift=y8lEx4l10rK_92Fskl8c5U_J_QtxGIMFVM5ueezfuBg
[iv] https://me.umd.edu/clark/faculty/570/CD-Dan-Mote-Jr (Accessed 7/10/25)
[v] https://nap.nationalacademies.org/catalog/13192/examining-core-elements-of-international-research-collaboration-summary-of-a (Accessed 7/15/25)
[vi] https://nap.nationalacademies.org/catalog/18849/culture-matters-international-research-collaboration-in-a-changing-world-summary (Accessed 7/15/25)
[vii] https://nap.nationalacademies.org/catalog/25214/data-matters-ethics-data-and-international-research-collaboration-in-a (Accessed 7/15/25)
[viii] In 2024 GUIPRR released several workshop proceedings that are adjacent to international research collaborations: GUIRR at 40: Reimagining the Triple Helix of Innovation, Investments, and Partnerships (meeting held June 25-26, 2024, in Washington, DC); Incentivizing Urgency, Speed, and Scale to Support Future U.S. Innovation (meeting held October 15-16, 2024, in Washington, DC).
[ix] https://info.lse-online.getsmarter.com/presentations/lp/lse-ethics-of-ai-online-course/ (Accessed 8/5/25)
[x] LSE course, Module 1, Unit 2.
[xi] LSE course, Module 1, Unit 2.
[xii] LSE course, Module 3, Unit 1.
[xiii] LSE course, Module 1, Unit 1.
[xiv] https://www.americanbar.org/groups/real_property_trust_estate/resources/probate-property/2025-july-august/your-ethical-obligations-when-using-generative-artificial-intelligence/
[xv] LSE Course, Module 3, Unit 2.
[xvi] https://artificialintelligenceact.eu/ai-act-explorer/
[xvii] https://aistandardshub.org/
[xviii] https://www.nist.gov/itl/ai-risk-management-framework
[xix] https://www.meti.go.jp/english/press/2024/0419_002.html