
Laws on Artificial Intelligence: Safeguarding our future

“An AI that could design novel biological pathogens. An AI that could hack into computer systems. I think these are all scary.” ~Sam Altman, CEO of OpenAI

Artificial Intelligence (AI), often hailed as the “Miracle of the Century,” has profoundly transformed the lives of millions and holds the potential to revolutionize our lifestyle and work patterns. For instance, AI has made significant strides in healthcare, where it’s used to predict diseases like cancer at early stages, and in the automotive industry, where self-driving cars are becoming a reality. However, alongside these advancements, there are growing concerns.
One of the major apprehensions is not just that AI may replace human jobs, but that the technology driven by human imagination could spiral out of control. Autonomous weapons powered by AI, for example, could change the nature of warfare and lead to unintended escalation. To address these concerns, we may need to regulate artificial intelligence.
The question that arises is: can artificial intelligence be regulated without stifling innovation? Speaking with Rishi Sunak at the AI Safety Summit, Elon Musk said he felt it was good for governments to play an active role when it comes to public safety, which suggests that AI should be regulated in a way that preserves the balance between innovation and human safety.
Today, the UK and the European Union are working on 'grade-based' (risk-tiered) legislation to regulate AI, aiming to eliminate the threats it may pose while still supporting innovation. We need lawmakers, individuals, and corporations to share a platform, learn from legislative efforts around the world, and adapt those laws to each country's specific circumstances. In Pakistan, the Ministry of IT & Telecom's Digital Pakistan agenda includes a Draft National AI Policy, which intends to make Pakistan a knowledge-based economy and foster responsible AI usage.
Regulations for AI are necessary for numerous reasons. First and foremost, AI systems must be used ethically: regulatory frameworks can ensure that AI systems follow digital laws and protect user privacy, and can address ethical issues such as biased search results and gender bias in AI. Secondly, it is imperative to protect human rights. AI can be used to create deepfakes and spread misinformation that alters public opinion and election results. We have seen this in the Cambridge Analytica case, in which data from millions of Facebook users was harvested without their consent and used for political advertising, showing how technology can violate basic human rights.
Another discussion taking place around the world is how AI can be regulated, and numerous approaches are being adopted. The earliest is the European Union's AI Act, which requires an inventory and assessment of AI models and classifies them into four levels of risk: systems posing an unacceptable risk will be prohibited; high-risk systems will be permitted but must comply with multiple requirements; and the remaining systems, considered limited or minimal risk, will be permitted and need only provide transparency to users. Other global entities, such as the UN and the OECD, are also working on frameworks to regulate AI and on platforms for sharing research and knowledge about it.
A prominent debate across the globe revolves around who is going to regulate AI. In the United States, the regulation of artificial intelligence is a topic of significant interest. For instance, Senate Majority Leader Chuck Schumer has called for preemptive legislation to impose regulatory 'guardrails' on AI products and services. Furthermore, the Biden administration has proposed an AI Bill of Rights, and the Commerce Department and the National Telecommunications and Information Administration (NTIA) are exploring the possibility of AI system audits and certifications. This multi-faceted approach demonstrates the nation's proactive stance towards AI and shows how different government bodies can help create laws and regulations for its better use.
Pakistan, like many other countries, is actively acknowledging and harnessing the potential of artificial intelligence. The Ministry of Information Technology and Telecommunication (MoITT) recently released the draft National AI Policy, a comprehensive 41-page document that outlines a strategic roadmap for transforming Pakistan into a nation driven by AI technology. It addresses areas such as public awareness, developing a skilled workforce, investing in research and development, establishing a national AI fund, and improving infrastructure.
Despite this positive stride, Pakistan faces multiple challenges, such as the absence of specific data laws, a shortage of trained AI professionals, and a lack of infrastructure in rural areas. To overcome these challenges, Pakistan needs a multi-stakeholder approach in which the government, academia, industry, and civil society work together to make Pakistan an AI-enabled country and unleash its AI potential.