The U.S., the UK, and 16 other countries have signed a new agreement to make AI "secure by design." Although it amounts to little more than a basic statement of principles, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) has called it a step in the right direction.
AI needs to become far more secure than it is today, and we need laws governing the creation of new systems built around it.
So far, we have seen AI applied to everything from basic tasks, such as calculating the percentage difference between two numbers, to far more complicated problems; its applications are too numerous to pin down. With larger models generally growing more capable, and with AI arriving as a native feature on 2024 phones, securing these systems matters more than ever.
This is what the report reads:
The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are “secure by design.”
In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.
Jen Easterly, CISA's director, also stressed that countries need to understand that AI development should take a safety-first approach, and encouraged other nations to join the agreement as well.
“This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs,” Easterly told Reuters, saying the guidelines represent “an agreement that the most important thing that needs to be done at the design phase is security.”
Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore also signed the agreement. Meanwhile, to ensure that AI does not become a threat, Europe is working on specific laws governing the development and release of new AI systems. This means that any company releasing its systems in the EU will have to ensure there are no vulnerabilities that allow users to misuse the AI.
There is no denying that the use of AI cannot be stopped, especially if we expect continued advancements in our technology and, to some extent, our daily lives, which makes an agreement like this all the more important. At the same time, it is crucial to understand that crafting laws governing the use of artificial intelligence is no easy task and can take years to come to fruition. And because AI systems continue to develop new capabilities over time, regulations written for today's models may no longer be enough in a couple of years, once those same models have learned more.
News Source: Reuters