Call for a Ban on Superintelligent AI Development
More dangerous models need more safeguards
If you don’t believe in sci-fi, it may be time to buckle up: ignorance has a power of its own, and that is becoming more evident with the progress of AI.
The Future of Life Institute has published an open letter warning that AI systems could surpass human intelligence in all cognitive tasks, and calling for a prohibition on the development of artificial superintelligence (ASI). The letter, signed by public and political figures, scientists, prominent AI pioneers, and other high-profile technology experts, calls for a strict ban until there is scientific consensus that such systems can be kept under control without putting humans out of the loop.
Experts generally divide AI into three categories based on capability:

* Artificial Narrow Intelligence (ANI): almost all AI to date, specialized for specific tasks, e.g., Google Translate, GPS navigation, self-driving cars, and facial recognition.
* Artificial General Intelligence (AGI): still theoretical; it would match human intelligence and reasoning, able to learn, adapt, and generalize beyond what it was trained on, and is considered a major leap.
* Artificial Superintelligence (ASI): beyond human comprehension; in theory it could discover strategies and manipulate its environment, improving faster than real-time oversight allows. The particular apprehension is about a system that could outthink us in every possible dimension.
The main concern, however, is that the more complex and advanced AI models become, the more difficult it is to anticipate and control their consequences. This challenge is known as the AI alignment problem: the work of ensuring AI systems behave as intended and in line with human values and goals, while minimizing harmful side effects. This is where the debate about existential risk, up to and including extinction, begins.
Will AI Start to Think About Thinking?
Theoretically, an intelligence explosion is the point at which AI becomes independent in research and development and starts creating better versions of itself, no longer needing any external input. This is said to be the point of no return: from there, AI would grow explosively, too fast to control.
But there are several counterarguments within the scientific community: some believe AI will reach this point in a few decades, while skeptics argue that raw intelligence alone is not enough to dominate, though a superintelligence might surpass even that limitation.
For now, regulation remains thin, and the race for capability continues to outrun caution. Despite growing concerns, the pursuit of advanced AI remains a human choice, one that demands meaningful regulation and the deliberate design of systems that are fundamentally incapable of producing harm.
…
thanks

AI will take over everything one day...
Why would a business owner pay multiple salaries when AI can do the same thing for free and much faster...??
I believe it should be regulated as soon as possible
There is always a concern about this, and I am sure many are already working towards it, but regulation is not the solution. We as humans are also evolving, and AI is needed as we are losing the ability to think, or maybe have lost it. I think we need to adapt with AI.