Neural Network: November Edition
A Monthly Newsletter on AI Policy and its Multistakeholder Regulation, from the AI Knowledge Consortium
Recent Indian Developments in AI Policy
MeitY to Release Voluntary AI Code of Conduct
Ethics guidelines aim to foster responsible AI practices
The Ministry of Electronics and Information Technology (MeitY) is drafting a voluntary code of conduct for AI companies in India, focusing on ethical practices and responsible development. While not legally binding, the code is expected to set a foundation for responsible and ethical AI innovation. Consultations with stakeholders are underway and the guidelines are anticipated early next year.
This effort aligns with global initiatives, such as the voluntary commitments from leading AI companies in the US and the joint US-EU AI Code of Conduct, reflecting a growing recognition of the need to balance AI governance with innovation.
India Hosts Virtual Convening on Climate Change and AI under GPAI
Global leaders discuss AI’s role in tackling climate challenges
India organised a Virtual Convening on Climate Change and AI as Co-Chair of the Global Partnership on Artificial Intelligence (GPAI) on November 25. This session invited the expanded GPAI-OECD membership of 44 countries to explore AI-driven solutions for climate action. The convening identified several joint opportunities for research and policy frameworks while addressing challenges like the carbon footprint of AI and barriers to data sharing, especially for the Global South.
India's tenure as Co-Chair of GPAI is drawing to a close and it is now preparing to hand over the presidency to Serbia.
MeitY and UNESCO Collaborate on Ethical AI Readiness
Stakeholder consultation explores AI governance frameworks
UNESCO and MeitY conducted a stakeholder consultation in Delhi on November 14 to evaluate India’s readiness for ethical AI deployment, using UNESCO’s Readiness Assessment Methodology (RAM). The RAM framework assesses institutional and regulatory gaps across key themes, guiding nations toward ethical AI practices. The consultation highlighted institutional and regulatory gaps while emphasising the IndiaAI Mission’s focus on safe, trusted AI.
AI in Indian Courts
Delhi High Court Orders Swift Action on Deepfake Regulation
Centre to nominate panel members, examine global frameworks
The Delhi High Court, in an order dated November 21, 2024, directed the Centre to nominate members to a committee formed to tackle the menace of deepfake technology. Hearing writ petitions on the lack of regulation and the threats posed by potential misuse of the technology, the bench comprising Chief Justice Manmohan and Justice Tushar Rao Gedela emphasised the urgency of addressing deepfakes and mandated member nominations within a week. MeitY has formed the committee, which will review international regulations and gather inputs from stakeholders like intermediaries, telecom providers, and victims. Its report is to be submitted within three months, with the next hearing scheduled for March 24, 2025.
India is witnessing a surge in deepfake cases, posing significant threats to individuals, society, and democracy. Recent cases have involved celebrities like Rashmika Mandanna and Sachin Tendulkar falling victim to deepfake technology, and the misuse of deepfakes during elections is also a concern.
ANI Sues OpenAI for Copyright Violation in Delhi High Court
Case raises key questions on AI training data and fair use of copyrighted material
Indian news agency ANI filed a lawsuit against OpenAI, accusing it of using ANI’s copyrighted content to train large language models like ChatGPT without authorisation. ANI alleges that ChatGPT generates verbatim copies of its news and attributes fabricated statements to the agency, damaging its reputation. The Delhi High Court, led by Justice Amit Bansal, issued summons and appointed two experts to address the case's novel legal issues. OpenAI, citing the principle of fair use, maintains it complies with legal standards while raising jurisdictional concerns (as its servers are located in the USA). The next hearing will delve into these critical matters.
There has been a surge in similar cases globally against LLM providers like Meta, OpenAI, and Anthropic. These cases involve balancing author rights with the potential of LLMs to democratise knowledge, and many of them are likely to result in out-of-court settlements.
AI Standards
TEC Proposes New Standard for AI Robustness in Telecom Networks
Draft framework aims to enhance trust and safety in AI systems
The Telecom Engineering Centre (TEC), the technical body of the Department of Telecom, has released a draft voluntary standard for assessing and rating the robustness of AI systems in telecom networks and digital infrastructure. This follows TEC’s 2023 standard on fairness assessment for AI systems.
The draft focuses on evaluating a model’s robustness based on its resilience to shifts in the input or output data, its integrity, and its reliability, while introducing a framework to identify vulnerabilities and mitigation strategies. A rating methodology quantifies robustness, providing a standardised approach for telecom operators and policymakers. The draft standard is open for public comments until 15 December 2024.
We support AI standards that prioritise safety, transparency and ethical governance while fostering innovation and enabling global competitiveness. However, such standards should ensure there is greater clarity for stakeholders. For example, the standards state that they are meant to evaluate “the robustness of AI models in critical applications”, but do not define which applications would fall under this categorisation.
Global AI Governance
G20 Rio Summit: AI Governance and Digital Transformation Take Center Stage
Leaders’ Declaration and Troika Joint Communiqué highlight inclusive AI and technology for global equity
At the G20 Leaders’ Summit in Rio de Janeiro (November 18-19, 2024), the Leaders’ Declaration and a joint communiqué by the G20 Troika (India, Brazil, and South Africa) emphasised the transformative potential of AI and Digital Public Infrastructure (DPI) in reducing inequality and advancing the Sustainable Development Goals (SDGs).
The Leaders’ Declaration emphasised leveraging AI responsibly to drive global economic growth while safeguarding human rights, fairness, and privacy. The Troika’s joint communiqué further outlined principles for deploying DPI and AI, advocating for modular, interoperable, and scalable systems that prioritise privacy, data security, and intellectual property protection. Leaders stressed the need for culturally diverse AI models to enhance inclusivity and public trust while ensuring innovation and equitable growth globally. While these ideals are laudable, it remains to be seen whether they can also be made commercially viable.
Work by AIKC Members:
Aligning India’s AI Future with Climate Goals (Transitions Research): This article explores the dual nature of AI in climate action—its potential to optimise renewable energy grids and enhance climate modelling, alongside its growing environmental footprint. It proposes measures for India to align its technological ambitions with climate goals, ensuring sustainable AI adoption that supports both innovation and climate responsibility.
Integrating Sextortion into Trust and Safety Strategies at Marketplace Risk Global Summit (Social Media Matters): Social Media Matters hosted a session on combating sextortion, addressing challenges like cross-border cooperation, generative AI impacts, and the role of data in prevention. Key takeaways included the need for multifaceted strategies, continuous stakeholder collaboration, victim-centric resources, and community-driven, evidence-based policies informed by global best practices.
#IGPPExpertTalks-AI Special Series (The Institute of Government and Public Policy): IGPP hosted several expert-led sessions exploring AI’s evolving regulatory and policy landscape. Discussions covered the legal challenges and questions around intermediary liability arising from the Wikipedia-ANI case, as well as an analysis of the goals and implications of California’s vetoed AI regulation bill, shedding light on its potential impact on AI governance and the tech ecosystem.
Final Thoughts
A recent case at O.P. Jindal Global University, where a student challenged AI-based plagiarism detection in court, raises critical questions about the reliability of AI detection tools and the scope of copyright law. The student argues that AI-generated content still qualifies as original work under copyright law, sparking debate about authorship and intellectual input in the era of generative AI.
How should institutions adapt policies to balance academic integrity and technological advancements? We found this issue brief from one of our member organisations interesting in this context.
We’d love to hear your thoughts. Email us at: Secretariat@aiknowledgeconsortium.com.
(Cover image generated on Dall-E for the purposes of this newsletter).