The intersection of artificial intelligence (AI) and defense technology has raised new ethical and political questions, particularly as AI companies with government ties expand into military applications. Faculty AI, a prominent UK-based consultancy, has come under scrutiny for its involvement in developing AI for unmanned aerial vehicles (UAVs), commonly known as drones. Deeply embedded in the UK’s AI landscape, Faculty has also worked with the National Health Service (NHS) and educational institutions, and it plays a significant role in advising the government on AI safety.

Faculty AI: A Trusted Partner with Controversial Projects

Faculty AI has become a significant player in the UK’s AI ecosystem, but its dual role as a government advisor and developer of defense-related AI systems has raised eyebrows. Unlike companies such as OpenAI or DeepMind, Faculty does not develop proprietary AI models. Instead, it focuses on reselling models and consulting on their implementation in industries ranging from healthcare to defense.

Faculty gained prominence through its association with the Vote Leave campaign during the Brexit referendum and its subsequent work for Boris Johnson’s government during the COVID-19 pandemic. Marc Warner, Faculty’s CEO, was even invited to observe meetings of the government’s scientific advisory group at the time. Since then, Faculty has secured significant government contracts, totaling at least £26.6m, including work with the NHS and various government departments.

Despite its broad portfolio, Faculty’s involvement in defense projects has sparked controversy. A partner company in the defense industry revealed that Faculty has experience deploying AI models onto drones. This raises concerns about the company’s role in developing technology that could be used in autonomous weapons systems.

A Decade of AI Safety—But What About Autonomy?

Faculty markets itself as a leader in AI safety, citing experience in countering child sexual abuse and terrorism as evidence of its ethical approach. The company has worked closely with the UK government’s AI Safety Institute (AISI), established in 2023 under former Prime Minister Rishi Sunak. Faculty’s role includes testing AI models for safety and advising on their potential risks, a key part of the government’s efforts to navigate the rapidly advancing AI landscape.

However, Faculty’s partnership with defense-focused companies like Hadean has raised questions. According to a press release, Faculty and Hadean are collaborating on subject identification, the tracking of object movement, and the exploration of autonomous swarm operations for drones. While Faculty insists its work does not involve weapons targeting or lethal force, it has declined to provide details about its defense projects, citing confidentiality agreements.

The prospect of AI-powered drones capable of tracking and killing without human intervention—so-called lethal autonomous weapons systems (LAWS)—has alarmed many experts and politicians. While Faculty’s spokesperson emphasized the company’s adherence to ethical guidelines from the Ministry of Defence, critics worry that the technology could be a step toward fully autonomous weapons.

Ethical Questions and Policy Gaps

The UK government has yet to fully commit to requiring a human "in the loop" for decisions involving autonomous weapons, as recommended by a 2023 House of Lords committee. Many experts see this as a critical safeguard to ensure that lethal force is not delegated to machines. Natalie Bennett, a Green Party peer, expressed concern over Faculty’s dual role as a government advisor and defense contractor.

“This isn’t just a case of poacher turned gamekeeper—it’s poacher and gamekeeper rolled into one,” Bennett said. She also highlighted the broader issue of the “revolving door” between industry and government, citing examples from the energy and defense sectors.

Faculty’s relationship with the AISI further complicates matters. In November, Faculty won a contract to survey how large language models, like those developed by OpenAI, could facilitate criminal activity. The AISI described Faculty as a “significant strategic collaborator” in its safeguards team. While Faculty’s spokesperson emphasized that the company does not develop its own AI models, critics argue that its broad involvement in both government work and the wider AI market poses potential conflicts of interest.

Albert Sanchez-Graells, a professor of economic law at the University of Bristol, warned that the UK’s reliance on companies’ self-regulation in AI development is a risky strategy. “Companies supporting AISI’s work need to avoid organizational conflicts of interest arising from their work for other parts of government and broader AI business,” he said. “Faculty’s extensive portfolio raises questions about how it ensures its advice to AISI remains independent and unbiased.”

Autonomous Weapons: A Global Debate

Faculty’s work with defense partners comes amid a broader global debate over the ethics of autonomous weapons. Some nations have called for international treaties to regulate their use. In 2023, the House of Lords recommended that the UK work toward a binding agreement clarifying how international humanitarian law applies to such systems. The Green Party has gone further, calling for a complete ban on lethal autonomous weapons.

The push for autonomous drones includes applications ranging from "loyal wingman" UAVs designed to assist fighter jets to loitering munitions that hover over targets before striking. While some proponents argue that AI could improve precision and reduce collateral damage, the prospect of drones making life-and-death decisions without human oversight has sparked widespread ethical concerns.

Faculty’s Business Model and Government Contracts

Faculty’s government ties extend far beyond its work with AISI. The company has secured contracts with multiple departments, including the NHS, the Department of Health and Social Care, the Department for Education, and the Department for Culture, Media and Sport. These contracts represent a substantial portion of Faculty’s revenue, which totaled £32m in the year to March 2024. However, the company reported a loss of £4.4m over the same period, underscoring how heavily it relies on government work to sustain its operations.

Faculty’s biggest shareholder is a Guernsey-registered holding company, which has prompted additional scrutiny. Faculty’s connections to both the public and private sectors make it a central player in the UK’s AI landscape, but also a lightning rod for criticism.

Balancing Innovation and Ethics

The rapid pace of AI development has created opportunities for companies like Faculty to play influential roles in shaping the future of the technology. However, this influence comes with significant ethical and political responsibilities. Faculty’s work on AI safety is undoubtedly valuable, but its simultaneous involvement in defense projects raises questions about where the line should be drawn.

While Faculty insists that it follows rigorous ethical guidelines, the lack of transparency around its defense work has fueled concerns. As the UK government navigates the challenges of regulating AI, it will need to ensure that companies like Faculty are held to the highest standards of accountability.

The Road Ahead

The integration of AI into defense systems is likely to continue as governments and militaries seek to harness the technology’s potential. Faculty’s work with defense partners underscores the importance of striking a balance between innovation and ethical responsibility. Whether the company can maintain that balance while navigating its multiple roles in the AI ecosystem remains an open question.

For now, the debate over AI and defense technology highlights the need for clearer policies and international agreements. As AI continues to reshape industries and societies, the stakes could not be higher. The world will be watching closely as Faculty and other key players chart the course for this powerful and transformative technology.