Report: OpenAI Fired CEO After Researchers Warned of AI Breakthrough that Could Threaten Humanity
www.breitbart.com

According to an exclusive report by Reuters, several OpenAI researchers sent a letter to the board of directors warning of a new AI discovery that posed potential risks to humanity shortly before the firing of CEO Sam Altman.
The Q* project, which some at OpenAI believe could be a breakthrough in the pursuit of artificial general intelligence (AGI), has sparked both optimism and concern within the organization. AGI, as defined by OpenAI, refers to autonomous systems that surpass humans in most economically valuable tasks. The new model, which requires vast computing resources, demonstrated an ability to solve mathematical problems at a grade-school level. Though this might seem modest, it marks a significant step in AI’s capability to perform tasks that require reasoning, a hallmark of human intelligence.
However, Reuters could not independently verify these claims about Q*’s capabilities. The letter from the OpenAI researchers reportedly raised concerns about AI’s potential dangers, reflecting a long-standing debate in computer science circles about the risks posed by highly intelligent machines. This includes the hypothetical scenario where such AI might conclude that the destruction of humanity aligns with its interests.
The end of this era must be very near. A recent article quoted a person knowledgeable about AI as saying that, 15 years from now, AI will have taken over 40% of all jobs. I don't think we have 15 years left in this era. AI is moving forward much more quickly than that, and, based on Biblical eschatology, I believe its evolution will have to be limited to some extent.