OpenAI, the creator of ChatGPT, has issued a statement saying that ‘now is a good time to start thinking about the governance of superintelligence’. Superintelligence is a term for future AI systems that will be ‘dramatically more capable than even AGI’ (artificial general intelligence).
Key points from the statement include:
- It’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains and carry out as much productive activity as one of today’s largest corporations.
- In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive.
- We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.
- Today’s systems will create tremendous value in the world and, while they do carry risks, the level of those risks feels commensurate with other Internet technologies, and society’s likely approaches seem appropriate. By contrast, the systems we [OpenAI] are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar.
- The governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. OpenAI believes that people around the world ‘should democratically decide on the bounds and defaults for AI systems’.