AI Consciousness Raises Ethical Concerns, Say Experts
Over 100 AI professionals and academics warn that AI systems could be harmed if they become conscious, and urge responsible development to avoid that risk.
An open letter with more than 100 signatories, including the actor and broadcaster Sir Stephen Fry, has raised ethical concerns about the potential suffering of artificial intelligence systems if they achieve consciousness.
The letter, accompanied by a research paper, calls for responsible research into AI consciousness, prioritizing efforts to understand and assess the phenomenon so that conscious AI systems are not mistreated.
The experts propose five guiding principles for the development of AI systems that may possess self-awareness, including setting constraints on conscious AI systems, taking a phased approach, and sharing findings with the public.
The letter also urges caution against overconfident claims about the creation of conscious AI. The accompanying research paper, authored by Patrick Butlin of Oxford University and Theodoros Lappas of the Athens University of Economics and Business, highlights the possibility of building AI systems that appear conscious, or that could actually be capable of suffering.
It emphasizes the importance of addressing the issue of AI consciousness before creating beings that may deserve moral consideration.
The paper raises further questions about the moral implications of creating conscious AI, asking whether destroying such a system would be comparable to killing an animal.
The authors acknowledge the uncertainty around defining consciousness in AI but argue that the matter should not be ignored.
The letter and paper were organized by Conscium, a research organization part-funded by WPP, and come amid growing debate about AI's future potential.
In 2023, Sir Demis Hassabis of Google DeepMind suggested that AI could one day achieve consciousness, although experts remain divided on whether such a development is possible.