Governing AI Through SEC Disclosure
An analysis of how existing SEC disclosure frameworks can be leveraged to mandate transparency in AI development and deployment, addressing systemic risks through established regulatory mechanisms.
Systematic research and policy analysis drawing on large datasets, cutting-edge statistical techniques, and interdisciplinary theories from economics, computer science, and other fields.
Examining how technical protocols and standards in AI infrastructure concentrate power among a small number of corporate actors, and the implications for competitive markets and democratic oversight.
A practical examination of Model Context Protocol (MCP) implementations across enterprise AI deployments, analyzing transparency gaps and governance challenges in real-world settings.
Investigating how AI-powered search results fail to attribute their sources properly, eroding the information commons and creating accountability gaps in the knowledge economy.
A systematic review identifying critical gaps between academic AI governance proposals and their practical implementation in corporate and regulatory contexts.
Documenting evidence of copyrighted O'Reilly Media content appearing in AI training datasets without authorization, with implications for intellectual property rights and creator compensation.