The US Senate has called for “comprehensive legislation” to address the risks posed by AI, saying the government needs to step in because “individuals and the private sector will not be able to do the job of protecting the country.”
At the same time, some argue that large IT companies are doing everything they can to ensure that legislative regulation of neural networks is as lenient as possible. For example, OpenAI proposed that general-purpose AI systems, such as ChatGPT and DALL-E, not be classified as high-risk, so that they would not be subject to the strictest security and transparency obligations. Their position prevailed, and in the latest version of the bill such systems are not classified as high-risk.

Following this, the Norwegian Consumer Council called on politicians and regulators to resist attempts by tech companies to weaken any laws aimed at protecting people from the harmful use of AI. Fourteen other consumer advocacy groups from across Europe and the US have joined the initiative.
In Russia, attitudes toward neural networks are mixed. According to a survey by the Public Opinion Foundation (FOM), 33% of Russians fear the negative consequences of the rapid development and spread of neural networks, citing risks such as unemployment caused by the automation of professions, dependence on AI, and the likelihood of dangerous errors. However, 29% of Russians believe that the rapid development of AI technologies will benefit people.
At the end of April, Anton Gorelkin, Deputy Chairman of the State Duma Committee on Information Policy, announced on his Telegram channel that the United Russia faction was drafting a bill to regulate the activities of neural networks. The bill is also expected to assess changes in the labor market caused by the spread of AI solutions.