March 14, 2025
In response to the White House's Request for Information on an AI Action Plan, the GliaNet Alliance has submitted recommendations to the Office of Science and Technology Policy.
The Two Dimensions of AI Agency
Our submission focuses on two interrelated dimensions of AI agency:
▪️ Agenticity: The functional capability to perform activities in the world.
▪️ Agentiality: The authorized representation to act on behalf of another person.
As AI capabilities increase, the need for stronger human representation increases proportionally. And as an individual's consent to AI-assisted decision-making deepens, AI systems should be permitted to take more advanced actions on that person's behalf.
Greater agenticity can be achieved through AI-to-AI interoperability, while greater agentiality can be achieved through trustworthy intermediation. Together, they can deliver tangible benefits, including greater competition, innovation, and consumer choice.
Agenticity: AI-to-AI Interoperability
We advocate for interoperability between AI systems, akin to the open protocols underlying the Web and email. This would allow personal AI agents to communicate with other AI systems across platforms, expanding their capabilities to serve users. We recommend open standards development through bodies like IEEE, with government serving as a backstop if needed.
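To make the idea concrete, here is a minimal, purely illustrative sketch of what an open agent-to-agent exchange could look like. The `AgentMessage` interface, its field names, and the `sendToAgent` helper are hypothetical assumptions for illustration; they are not part of any existing standard or of the Alliance's submission.

```typescript
// Purely illustrative: a hypothetical envelope a personal AI agent could use
// to exchange requests with another provider's agent over an open,
// vendor-neutral standard. No such standard is specified here.

interface AgentMessage {
  protocolVersion: string;            // version of the hypothetical open standard
  from: string;                       // identifier of the sending agent, e.g. a URI
  to: string;                         // identifier of the receiving agent
  intent: string;                     // machine-readable description of the request
  payload: Record<string, unknown>;   // task-specific parameters
  userConsentToken?: string;          // proof that the human principal authorized this action
}

// Hypothetical helper: serialize the message and POST it to the receiving
// agent's published endpoint.
async function sendToAgent(endpoint: string, message: AgentMessage): Promise<Response> {
  return fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(message),
  });
}

// Example: a personal agent asking a travel platform's agent for fare options.
const example: AgentMessage = {
  protocolVersion: "0.1-draft",
  from: "agent://personal.example/alice",
  to: "agent://travel.example/booking",
  intent: "search-fares",
  payload: { origin: "SFO", destination: "BOS", date: "2025-06-01" },
  userConsentToken: "opaque-token-issued-by-alice",
};

sendToAgent("https://travel.example/agent-inbox", example);
```

The design choice the sketch highlights is that a shared envelope format, rather than any single vendor's API, is what would let agents from different platforms negotiate on the user's behalf.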
Alliance members are already working toward this vision, with member companies like Personal AI developing small language models and Kwaai advancing open-source AI solutions that empower individuals at the edge of the network.
Agentiality: Trustworthy Intermediation
We propose "Net Fiduciaries" - trusted digital intermediaries serving individuals as clients, and true patrons of their services, instead of only being seen as mere users of their products. These entities we call Net Fiduciaries, would voluntarily take on fiduciary duties of care and loyalty in support of their patrons.
Our "PEP” Model (for Protect, Enhance, and Promote) establishes three roles trusted digital intermediaries could possibly take on. This framework demonstrates what an ecosystem of Net Fiduciaries would entail, using common law principles as a guide.
▪️ Guardian: A duty of care to protect patrons (clients) from harm.
▪️ Mediator: A "thin" duty of loyalty with no conflicts of interest.
▪️ Advocate: A "thick" duty of loyalty that actively promotes the client's best interests.
We suggest a two-tiered approach: Platform providers with access to personal data would be bound by a general duty of care, while Net Fiduciaries would operate with higher voluntary standards of fidelity and loyalty.
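One way to see how the PEP roles and the two tiers fit together is as a simple data model. The following sketch is purely illustrative and assumes hypothetical type names (`PepRole`, `FiduciaryDuties`, `Intermediary`, `dutiesOwed`); it is not part of the submission and does not claim to capture every legal nuance.

```typescript
// Purely illustrative: one way to model the PEP roles and the two-tiered
// duty structure described above. All names and fields are hypothetical.

// The three PEP roles a Net Fiduciary could take on.
type PepRole = "Guardian" | "Mediator" | "Advocate";

// Duties associated with each role, following the common-law framing above.
interface FiduciaryDuties {
  dutyOfCare: boolean;                       // protect the patron (client) from harm
  dutyOfLoyalty: "none" | "thin" | "thick";  // thin = no conflicts; thick = actively promote best interests
}

const pepModel: Record<PepRole, FiduciaryDuties> = {
  Guardian: { dutyOfCare: true, dutyOfLoyalty: "none" },
  Mediator: { dutyOfCare: true, dutyOfLoyalty: "thin" },
  Advocate: { dutyOfCare: true, dutyOfLoyalty: "thick" },
};

// The two tiers: platform providers owe a baseline duty of care, while
// Net Fiduciaries voluntarily accept higher standards of fidelity and loyalty.
interface Intermediary {
  name: string;
  tier: "platform-provider" | "net-fiduciary";
  role?: PepRole; // only Net Fiduciaries take on a PEP role
}

function dutiesOwed(i: Intermediary): FiduciaryDuties {
  if (i.tier === "net-fiduciary" && i.role) {
    return pepModel[i.role];
  }
  // Baseline tier: a general duty of care only.
  return { dutyOfCare: true, dutyOfLoyalty: "none" };
}
```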
Potential Applications
Combining enhanced capabilities with deeper relationships creates powerful use cases:
▪️ Managing personal data flows and privacy
▪️ Broadcasting intentions through intentcasting
▪️ Creating independent decision engines
▪️ Managing universal shopping carts across sites
▪️ Mediating immersive digital experiences
▪️ Building communities of interest
▪️ Evaluating and valuing digital content
In each case, the individual remains the focal point, becoming the subject of their own digital destiny with an authentic and trusted AI agent.
U.S. Leadership Opportunity
As the AI landscape evolves in 2025, the United States has an opportunity to lead by embracing both AI interoperability and trustworthy intermediation. This balanced approach enhances human autonomy and fosters innovation, competition, and consumer choice, without AI systems acting as "double agents" for platforms or third parties.
Our full submission, found here, provides comprehensive detail on these recommendations, offering a practical framework for balancing AI capabilities with human representation while fostering innovation in the AI ecosystem. We look forward to continued engagement with OSTP as the AI Action Plan develops.
Note: View the original submission to the White House HERE.