Neural Network: February Edition
A Monthly Newsletter on AI Policy and its Multistakeholder Governance, from the AI Knowledge Consortium
AIKC Updates
Public Policy Hackathon ‘Innovate to Regulate: Policy Approaches for Platform Economy’
AIKC supported a Policy Hackathon on February 11, 2025, organised by the Institute for Governance, Policies and Politics and the Department of Humanities and Social Sciences, Indian Institute of Technology, Delhi. Smt. Sharmistha Dasgupta, Deputy Director General at the National Informatics Centre, graced the event as Chief Guest.
Hundreds of students from across Delhi NCR participated to brainstorm policy solutions for challenges in the platform economy. Three winning teams (from the University School of Law and Legal Studies, Guru Gobind Singh Indraprastha University; the Indian Institute of Technology, Delhi; and National Law University, Delhi) have been shortlisted; they will receive an honorarium of ₹40,000 as well as mentorship to develop their ideas further.
Submission of Recommendations on MeitY’s Report on AI Governance Guidelines Development
We have shared our response to the public consultation by the Ministry of Electronics and Information Technology (MeitY) on its "Report on AI governance guidelines development" based on deliberations that took place during an AIKC workshop convened on January 22, 2025.
Recent Indian Developments in AI Policy
₹2,000 Crore Allocated for IndiaAI Mission in 2025-26
This year’s Union Budget increased the funding to support AI infrastructure, foundational models and skilling initiatives
The government sanctioned ₹2,000 crore for the IndiaAI Mission in 2025-26, nearly a fifth of the scheme’s total outlay. Last year, ₹551.75 crore had been allocated, but since the amount was underutilised, it was later revised to ₹173 crore. The fresh funding coincides with the government shortlisting 10 companies to supply 18,693 Graphics Processing Units (GPUs) for AI data centres and foundational model development, exceeding the initial plan to procure 10,000 GPUs. The government also approved the establishment of a new Centre of Excellence for AI in Education with an outlay of ₹500 crore, complementing existing centres focused on agriculture, health and sustainable cities.
This development aligns with the Economic Survey 2024-25, which highlighted AI’s impact on India’s workforce, particularly in low-skill sectors. However, underutilisation of funds in the recent past should give some cause for concern; the government needs to focus on practical implementation of its AI policies.
IndiaAI Mission Aims to Offer World’s Most Affordable AI Compute
The government announced subsidised AI compute and incentives for local model development
The Indian government has announced a major push for AI compute accessibility by offering high-end GPU computing at less than $1 per hour. The Union Minister for MeitY, Ashwini Vaishnaw, stated that the initiative aims to make India a global hub for cost-effective AI model development. Of the planned 14,000 GPUs, 10,000 have already been secured, with Yotta Data Services, E2E Networks, Tata Communications and AWS’s managed service providers contributing the majority of the capacity. The remaining GPUs will be procured through companies like Jio Platforms and CtrlS Datacenters.
The government has launched a new IndiaAI Compute Portal (AIKosha) through which ministries, startups and researchers can request compute capacity at reduced cost. Up to 40 percent of compute costs will be subsidised for those eligible. While the goal of the initiative is laudable, giving researchers and startups the fillip needed to advance, the commercial viability of this proposal will need to be tested.
IndiaAI Mission Advances with Call for Proposals on Foundational AI Models
Government receives 67 proposals for India-specific AI models, including 20 Large Language Models (LLMs)
As a part of the IndiaAI Mission, the Ministry of Electronics and Information Technology (MeitY) invited proposals to develop homegrown foundational AI models tailored to Indian needs. The initiative aims to support the creation of Large Multimodal Models, LLMs and Small Language Models trained on diverse Indian datasets while aligning with global standards.
The government has received 67 proposals, including 20 for LLMs, from companies such as Sarvam AI, CoRover.ai and Ola; these will be evaluated by a high-level technical committee. Funding will be provided through direct grants or equity-based investments.
The initiative comes amid global advancements, such as China’s DeepSeek model, which sparked discussions on India’s AI readiness. The government envisions a timeline of 9-10 months to develop world-class AI models that address India-specific challenges.
India Establishes AI Safety Institute to Advance Responsible AI Development
The virtual hub will facilitate collaboration on AI governance and risk mitigation
The Indian government announced the establishment of the IndiaAI Safety Institute under the Safe and Trusted Pillar of the IndiaAI Mission. The institute has been designed as a virtual hub which will bring together academic institutions, industry leaders and policymakers to drive research on AI risks, security and ethical frameworks.
It will function on a hub-and-spoke model, collaborating with multiple research and academic organisations and supporting projects on AI risk assessment, stress testing and deepfake detection. The initiative is designed to align with global AI governance efforts while maintaining a focus on India’s socio-economic and linguistic diversity.
Global AI Governance
AI Action Summit 2025 Sets Global Agenda on Safe and Inclusive AI Development
India and France co-chair the AI Action Summit in Paris; a declaration on sustainable AI development was adopted
On February 10-11, 2025, France and India co-chaired the AI Action Summit in Paris, the third major global AI summit after 2023’s Bletchley Summit and 2024’s AI Seoul Summit. Unlike its predecessors, which focused primarily on risk mitigation, the AI Action Summit expanded its scope to include equitable AI access, sustainability and global governance.
Key Outcomes of the Summit:
International AI Safety Report: The report synthesises the current literature on AI risks and capabilities. It delivers on commitments made at the AI Safety Summit 2023.
Public Interest AI Platform (Current AI): France launched a $400 million initiative with support from India and eight other nations. It focuses on equitable access to AI datasets; developing AI models for real-world challenges; supporting open standards and tools; and ensuring accountability through transparency and public oversight.
Environmental Sustainability Coalition: A 91-member coalition was announced, committed to promoting energy-efficient AI models, responsible resource use and the development of standards for sustainable AI practices.
AI Action Summit Declaration: The summit concluded with the adoption of the ‘Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet’. The declaration emphasises AI accessibility, ethical development, market fairness, and sustainability. It was signed by 60 countries, the European Union and the African Union.
The US and the UK Withheld Support: The US and the UK notably did not sign the AI Action Summit Declaration. US Vice President JD Vance advocated for an open, pro-growth regulatory framework and criticised the EU’s AI policies for stifling innovation. His stance reflects recent US policy shifts, including the rescission of Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Meanwhile, the UK cited concerns over AI’s impact on national security and global governance, with speculation that its decision was influenced by the US position.
India’s Role and AI Policy Alignment: India’s advocacy for AI as a tool for public good, responsible innovation and equitable access was reflected in the summit’s key outcomes. As a signatory to both the Bletchley Declaration and Seoul Ministerial Statement, India’s domestic AI strategy – through initiatives like the IndiaAI Mission – aligns closely with the summit’s focus on bridging the digital divide and fostering AI for societal benefit. India has expressed interest in hosting the next AI Safety Summit in 2026.
India-France AI Collaboration Expands: The summit further strengthened Indo-French cooperation in AI governance, building on the 2019 Indo-French Roadmap on Cybersecurity and Digital Technology and the 2023 Memorandum of Understanding on Digital Cooperation. The summit also saw the launch of the India-France Roadmap on AI and parallel discussions at the Second India-France AI Policy Roundtable on cross-border cooperation in data sharing and joint research initiatives.
Global Regulatory Scrutiny on DeepSeek's Data Practices
Regulators in Italy, South Korea and Ireland take action over privacy concerns around DeepSeek
There have been growing concerns internationally over the use of DeepSeek, an LLM developed by a Chinese AI company. Italy’s data protection authority, Garante, requested details from DeepSeek regarding its data collection practices, legal basis for processing, sources of personal data and whether user data is stored in China. DeepSeek was given 20 days to respond, and when an adequate response was not received, Garante imposed an immediate ban on the processing of Italian users' data by Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence.
South Korea’s Personal Information Protection Commission (PIPC) announced plans to investigate DeepSeek’s user data management practices. Following DeepSeek’s admission of non-compliance with some privacy rules, the PIPC suspended new downloads of the AI app. South Korea’s Ministry of Industry has also restricted employee access to DeepSeek, citing security risks. In a similar vein, Ireland’s Data Protection Commission sent a request to DeepSeek seeking details on data processing related to Irish users. Further information on the inquiry has not been disclosed.
Meanwhile, the Indian Ministry of Finance has also directed government officials to avoid using AI tools such as ChatGPT and DeepSeek on office computers and devices, citing risks to the confidentiality of government data and documents. Minister for MeitY Ashwini Vaishnaw has indicated that DeepSeek, an open-source model, will soon be hosted on Indian servers to enhance data privacy measures. A public interest litigation seeking a ban on DeepSeek was also filed in the Delhi High Court in India, but the court declined an urgent hearing on the matter and directed the central government’s counsel to seek instructions regarding the petition.
UK Introduces AI Cyber Security Code of Practice
The voluntary framework aims to enhance AI security and mitigate risks
The UK government has introduced the Code of Practice for the Cyber Security of AI (Code), a voluntary framework aimed at strengthening security throughout the AI lifecycle. The Code outlines 13 principles addressing risks such as cyberattacks, system failures and data vulnerabilities. An Implementation Guide has also been released to assist stakeholders in meeting these security requirements.
The Code applies to developers, system operators and data custodians responsible for AI development, deployment and management. It sets out security measures, including AI security training, risk assessments, recovery planning and transparency in data usage. The release follows the UK’s AI Opportunities Action Plan published in January 2025.
Work by AIKC Members:
Reimagining Equity through AI Systems (Aapti Institute): This analysis explores the societal impact of AI and Generative AI, emphasising the need for inclusive system design. It recommends integrating gender-intentional design principles, risk assessment toolkits and multi-stakeholder collaboration to ensure AI systems uphold equity and human rights.
The government’s DeepSeek dilemma (Shardul Amarchand Mangaldas): The article examines the regulatory and ethical concerns surrounding DeepSeek AI. It explores India’s legal framework for banning apps on national security grounds while weighing the broader challenge of balancing AI governance, sovereignty and global collaboration.
We’d love to hear your thoughts about the newsletter. Email us at Secretariat@aiknowledgeconsortium.com to engage further.