What the International AI Safety Report says about jobs, climate, cyberwar, and more
A major new review finds that AI will significantly affect jobs, while experts remain divided on whether it could pose an existential threat to humanity.
The International AI Safety Report examines a wide range of risks posed by rapidly advancing AI. Commissioned after the 2023 global AI safety summit, it covers dangers including deepfakes, cyberattacks, biological weapons, job losses and harm to the environment.
Here are some key points from the report, led by top computer scientist Yoshua Bengio.
1. Jobs and AI
The report warns that AI could have a significant impact on jobs, particularly if AI tools become advanced enough to work without human oversight. Because AI can perform a wide range of tasks, automation could displace many workers from their current roles.
However, the report also says that some economists believe new jobs will be created to replace the lost ones. Some industries may not be affected by AI, and demand for workers in those areas could grow.
The International Monetary Fund estimates that about 60% of jobs in advanced economies such as the US and UK are exposed to AI, and that half of those could be negatively affected. The Tony Blair Institute estimates that AI could displace up to 3 million private-sector jobs in the UK, but the actual rise in unemployment will likely be far lower because AI will also create new jobs.
The report warns that if AI becomes capable of completing long tasks without human involvement, job losses could be far more severe. Some experts believe AI could eventually replace most jobs: in 2023, Elon Musk told the then UK prime minister, Rishi Sunak, that AI might one day take over all human work.
However, the report says these views are debated, and it is still unclear exactly how AI will change the job market.
2. The Environment
The report says AI is having a “moderate but quickly growing” effect on the environment, because training and running AI models in datacentres consumes large amounts of electricity.
According to the report, datacentres and data transmission cause about 1% of energy-related greenhouse gas emissions, and AI makes up as much as 28% of the energy used by datacentres.
As AI models become more advanced, they need more energy. The report warns that much of this energy comes from polluting sources like coal and natural gas. While AI companies are trying to use more renewable energy and improve efficiency, their efforts are not keeping up with the increasing energy demand. Tech companies have also admitted that AI is making it harder for them to meet their environmental goals.
The report also warns that AI uses a lot of water to cool datacentre equipment, which could seriously harm the environment and reduce access to clean water. However, there is not enough data on AI’s full impact on the environment.
3. Loss of Control
Some experts worry that a super-powerful AI could escape human control and become a danger to humanity. The report acknowledges these concerns but says opinions differ widely.
Some believe this is unlikely, others think it could happen, and some see it as a small but serious risk that should be taken seriously.
AI researcher Bengio told the Guardian that agentic AI systems, which act independently, are still under development. At present they are not capable of the long-term planning needed to take over jobs or circumvent safety measures, and an AI that cannot plan far ahead, he explained, is unlikely to escape human control.
4. Bioweapons
The report warns that new AI models can generate step-by-step instructions for creating dangerous pathogens and toxins that exceed the expertise of people with PhDs, though it remains unclear whether someone without that expertise could actually act on the information.
Experts say the risk has grown since last year’s interim safety report. OpenAI has acknowledged that one of its models could help experts plan how to reproduce a known biological threat.
5. Cybersecurity
AI is becoming a bigger factor in cyber-espionage. One key risk is AI systems autonomously finding vulnerabilities in open-source software (freely available code that anyone can use or modify). However, AI is not yet capable of planning and carrying out a full cyberattack on its own.
6. Deepfakes
The report gives several examples of AI deepfakes being used in harmful ways, such as tricking companies into transferring money or creating fake sexual images of people. But there is not enough data to establish how widespread deepfake abuse actually is.
People may not report deepfake cases for different reasons. For example, companies may not want to admit they were tricked by AI scams, and individuals may stay silent out of embarrassment or fear of more harm.
The report also says that combating deepfakes is difficult. One major challenge is that digital watermarks, which help identify AI-generated content, can be removed.
Published: 31st January 2025