Google Warns UK Could Fall Behind in AI Race Without More Data Centres

20 September 2024 | Zaker Adham

Summary

Google has issued a warning that the UK risks losing its competitive edge in the global artificial intelligence (AI) landscape unless more data centres are built and copyright laws are adjusted to support AI model training. According to Google UK’s managing director, Debbie Weinstein, the country could miss out on significant AI advancements if swift policy actions are not taken.

The tech giant points to research ranking the UK seventh globally in terms of AI readiness for data and infrastructure. Weinstein emphasized that while the UK government recognizes the potential of AI, more focused policies are needed to bolster AI deployment and ensure the UK remains a leader in this space.

“We have a strong history of leadership in AI, but without proactive measures, there’s a risk we’ll fall behind,” said Weinstein. AI has seen a surge in global investment following advancements like OpenAI’s ChatGPT and Google’s own Gemini AI model. However, cost-cutting measures by Keir Starmer’s government have put critical projects on hold, including an £800 million exascale supercomputer and £500 million for the AI Research Resource, both of which are essential for advancing AI research in the UK.

Weinstein expressed optimism about the government’s upcoming “AI action plan,” led by tech entrepreneur Matt Clifford, but emphasized the need for clear and substantial investment in AI infrastructure.

Google’s policy recommendations, outlined in a document titled "Unlocking the UK’s AI Potential," call for the creation of a national research cloud, which would provide the necessary computing power and data to help startups and academics build advanced AI models. The company also highlighted the UK's struggle to attract data centre investment compared to other countries, and welcomed Labour's plans to address this with a new planning and infrastructure bill.

Additional recommendations include setting up a national skills service to prepare the workforce for an AI-driven future and introducing AI into public services. Google also called for changes to UK copyright laws to allow for the use of text and data mining (TDM), which would enable the use of copyrighted material for academic and non-commercial purposes in AI training.

The UK government had previously dropped plans to allow TDM for commercial purposes due to pushback from creative industries and news publishers. Weinstein stressed that the unresolved copyright issue is hindering AI development and urged the government to revisit its stance on TDM for commercial use.

The report also advocates for "pro-innovation" regulation, favoring the current regulatory framework where public bodies like the Competition and Markets Authority and Information Commissioner’s Office oversee AI. Google supports the government's current approach and encourages a continuation of this model, rather than introducing new regulations.

The UK is in the process of drafting new AI legislation, which could make existing voluntary AI model testing agreements legally binding. It also includes plans to establish the AI Safety Institute as an independent government body. A government spokesperson confirmed that the AI opportunities action plan, driven by Matt Clifford, is intended to ensure the UK has the right infrastructure, skills, and data access to fully leverage AI's potential, while also focusing on safety to build public trust.