Ten years from now, we may look back at 2024 as the year Artificial Intelligence was truly integrated into our lives. Buy any gadget today and AI will come as part of the overall experience, whether you need it or not, and, more importantly, whether you understand it or not. Consider the smartphone world: major companies such as Apple, Samsung, and Google have all integrated AI-powered features into their phones and keep announcing new reasoning models, tools, and standalone features.
AI has been around for more than 75 years, and it continues to improve with advances in machine learning and algorithms. It can now help you create videos you never shot, write essays, answer detailed queries, or summarise lengthy research papers in seconds. And it is not limited to that: a variety of tools now cater to almost any requirement. Thanks to machine learning, devices such as smartphones, laptops, and even refrigerators are AI-powered and can automatically perform tasks that once required manual effort.
While the pitch is that these features will make users' lives easier, we don't really know that yet. In exchange, vast amounts of data are being used to train and improve these models, and there are still many grey areas around how and where that data is stored. There are also concerns about bias, misinformation, copyright violations, and obscene content. Users around the world have reportedly misused AI platforms to create deepfakes, scam people, and even promote racism and nudity. The results can be fantastic for your work, but dangerous when misused.
Many platforms, including Google Gemini, have faced criticism for such results while taking cover behind the label of a 'developing platform'. Every company maintains that user data is safe, but these models clearly demand more robust regulations and guidelines.
Authorities around the world, including the European Union (EU), China, the US, and India, have announced commitments to regulate AI but have yet to produce an organised plan for doing so. What they share is a single vision: 'we must leverage the power of AI while mitigating its risks'.
India's Ministry of Electronics and Information Technology (MeitY) announced in 2022 that the government would develop a comprehensive regulatory plan under the Digital India Act, but the legislation has yet to be drafted. In other words, India is working on AI regulations, while AI in the country remains unregulated for now.
In 2024, however, MeitY proposed a Digital India Act blueprint that identified high-risk AI systems. It also issued an advisory stating that companies would require government approval before deploying AI models, addressing issues such as algorithmic discrimination and deepfakes; the advisory drew criticism and was later replaced.
According to reports, MeitY favours a light-touch strategy, whereas other government bodies prefer more proactive intervention. Whatever shape they take, these regulations are expected to determine the trajectory of AI in India. Factors such as key risks, innovation, privacy, and human rights should therefore be weighed before the country frames its AI regulations.
AI regulations in other countries reflect their legal traditions, governance models, and socio-economic priorities. The EU, for instance, has taken a rights-based approach focused on health, safety, and fundamental rights. China, by contrast, emphasises state control and social order, while Japan pursues human-centric goals aligned with individual welfare. Interestingly, Singapore and the UK have adopted a principles-based approach, tailoring regulations to specific industries.
The European Union's AI Act, considered the world's first comprehensive AI regulation, was proposed in 2021. It categorises AI systems into different risk levels to ensure safe, transparent, and ethical use.
AI systems deemed an unacceptable risk, such as those used for cognitive manipulation or biometric identification for surveillance, will be banned, with some exceptions. High-risk AI systems, such as those used in critical infrastructure, law enforcement, and medical devices, will have to undergo assessment before deployment.
Generative AI tools such as ChatGPT will have to comply with transparency requirements, including disclosure of AI-generated content and adherence to copyright law. The framework is also designed to let startups and other companies continue to innovate.
India can use the EU's framework as a model to shape its own AI policy, enforcing ethical principles, encouraging innovation and growth, fostering accountability, safeguarding individual privacy, and protecting a growing economy, though certainly with some tweaks and additions of its own.