Technology
Open source LLMs hit Europe’s digital sovereignty roadmap

Large language models (LLMs) landed on Europe’s digital sovereignty agenda with a bang last week, as news emerged of a new program to develop a series of “truly” open source LLMs covering all European Union languages.
This includes the current 24 official EU languages, as well as the languages of countries currently negotiating EU accession, such as Albania. Future-proofing is the name of the game.
OpenEuroLLM is a collaboration between some 20 organizations, co-led by Jan Hajič, a computational linguist from Charles University in Prague, and Peter Sarlin, CEO and co-founder of Finnish AI lab Silo AI, which AMD acquired last year for $665 million.
The project fits a broader narrative that has seen Europe push digital sovereignty as a priority, enabling it to bring mission-critical infrastructure and tools closer to home. Most of the cloud giants are investing in local infrastructure to ensure EU data stays local, while AI darling OpenAI recently unveiled a new offering that allows customers to process and store data in Europe.
Elsewhere, the EU recently signed an $11 billion deal to create a sovereign satellite constellation to rival Elon Musk’s Starlink.
So OpenEuroLLM is certainly on-brand.
However, the stated budget just for building the models themselves is €37.4 million, with roughly €20 million coming from the EU’s Digital Europe Programme — a drop in the ocean compared to what the giants of the corporate AI world are investing. The actual budget is more when you factor in funding allocated for tangential and related work, and arguably the biggest expense is compute. The OpenEuroLLM project’s partners include EuroHPC supercomputer centers in Spain, Italy, Finland, and the Netherlands — and the broader EuroHPC project has a budget of around €7 billion.
But the sheer number of disparate participating parties, spanning academia, research, and the corporate world, has led many to question whether its goals are achievable. Anastasia Stasenko, co-founder of LLM company Pleias, questioned whether a “sprawling consortia of 20+ organizations” could have the same measured focus as a homegrown private AI firm.
“Europe’s recent successes in AI shine through small focused teams like Mistral AI and LightOn — companies that truly own what they’re building,” Stasenko wrote. “They carry immediate responsibility for their choices, whether in finances, market positioning, or reputation.”
Up to scratch
The OpenEuroLLM project is either starting from scratch or it has a head start — depending on how you look at it.
Since 2022, Hajič has also been coordinating the High Performance Language Technologies (HPLT) project, which has set out to develop free and reusable datasets, models, and workflows using high-performance computing (HPC). That project is scheduled to end in late 2025, but it can be viewed as a sort of “predecessor” to OpenEuroLLM, according to Hajič, given that most of the partners on HPLT (aside from the U.K. partners) are participating here, too.
“This [OpenEuroLLM] is really just a broader participation, but more focused on generative LLMs,” Hajič said. “So it’s not starting from zero in terms of data, expertise, tools, and compute experience. We have assembled people who know what they’re doing — we should be able to get up to speed quickly.”
Hajič said that he expects the first version(s) to be released by mid-2026, with the final iteration(s) arriving by the project’s conclusion in 2028. But those goals might still seem lofty when you consider that there isn’t much to poke at yet beyond a bare-bones GitHub profile.
“In that respect, we are starting from scratch — the project started on Saturday [February 1],” Hajič said. “But we have been preparing the project for a year [the tender process opened in February 2024].”
From academia and research, organizations spanning Czechia, the Netherlands, Germany, Sweden, Finland, and Norway are part of the OpenEuroLLM cohort, in addition to the EuroHPC centers. From the corporate world, Finland’s AMD-owned AI lab Silo AI is on board, as are Aleph Alpha (Germany), Ellamind (Germany), Prompsit Language Engineering (Spain), and LightOn (France).
One notable omission from the list is French AI unicorn Mistral, which has positioned itself as an open source alternative to incumbents such as OpenAI. Mistral did not respond to TechCrunch’s request for comment, but Hajič confirmed that he tried to initiate conversations with the startup, to no avail.
“I tried to approach them, but it hasn’t resulted in a focused discussion about their participation,” Hajič said.
The project could still gather new participants as part of the EU program that’s providing funding, though it will be limited to EU organizations. This means that entities from the U.K. and Switzerland won’t be able to take part. This contrasts with the Horizon R&D program, which the U.K. rejoined in 2023 after a prolonged Brexit stalemate and which provided funding to HPLT.
Build up
The project’s top-line goal, as per its tagline, is to create: “A series of foundation models for transparent AI in Europe.” Additionally, these models should preserve the “linguistic and cultural diversity” of all EU languages — current and future.
What this translates to in terms of deliverables is still being ironed out, but it will likely mean a core multilingual LLM designed for general-purpose tasks where accuracy is paramount. And then also smaller “quantized” versions, perhaps for edge applications where efficiency and speed are more important.
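Quantization is what makes those smaller edge variants possible: weights stored as 32-bit floats are mapped to low-precision integers, shrinking the model at some cost in accuracy. Below is a minimal, hypothetical sketch of symmetric 8-bit quantization; OpenEuroLLM hasn’t specified its actual pipeline, and the numbers here are purely illustrative.

```python
# Hypothetical sketch: symmetric per-tensor int8 quantization, the kind of
# size/precision trade-off behind smaller, edge-friendly model variants.

def quantize(weights: list[float]) -> tuple[list[int], float]:
    """Map float weights to int8 values in [-127, 127] plus a scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

weights = [0.82, -0.41, 0.07, -1.27]
q, scale = quantize(weights)
restored = dequantize(q, scale)
# Each restored weight is close to the original, but the model now needs
# 1 byte per weight instead of 4 (float32), at some cost in precision.
```

In practice, production quantization schemes are more elaborate (per-channel scales, calibration data, 4-bit formats), but the underlying trade is the same: less memory and faster inference in exchange for a small loss of fidelity.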
“This is something we still have to make a detailed plan about,” Hajič said. “We want to have it as small but as high-quality as possible. We don’t want to release something which is half-baked, because from the European point-of-view this is high-stakes, with lots of money coming from the European Commission — public money.”
While the goal is to make the model as proficient as possible in all languages, attaining equality across the board could also be challenging.
“That is the goal, but how successful we can be with languages with scarce digital resources is the question,” Hajič said. “But that’s also why we want to have true benchmarks for these languages, and not to be swayed toward benchmarks which are perhaps not representative of the languages and the culture behind them.”
In terms of data, this is where a lot of the work from the HPLT project will prove fruitful: version 2.0 of its dataset, released four months ago, was built from 4.5 petabytes of web crawls spanning more than 20 billion documents. Hajič said the project will add additional data from Common Crawl (an open repository of web-crawled data) to the mix.
The open source definition
In traditional software, the perennial struggle between open source and proprietary revolves around the “true” meaning of “open source.” This can be resolved by deferring to the formal definition maintained by the Open Source Initiative, the industry steward of what is and isn’t a legitimate open source license.
More recently, the OSI has formulated a definition of “open source AI,” though not everyone is happy with the outcome. Open source AI proponents argue that not only should the models be freely available, but also the datasets, pretrained models, and weights — the full shebang. The OSI’s definition doesn’t make training data mandatory, because it says AI models are often trained on proprietary data or data with redistribution restrictions.
Suffice it to say, the OpenEuroLLM project is facing these same quandaries, and despite its intentions to be “truly open,” it will probably have to make some compromises if it’s to fulfill its “quality” obligations.
“The goal is to have everything open. Now, of course, there are some limitations,” Hajič said. “We want to have models of the highest quality possible, and based on the European copyright directive we can use anything we can get our hands on. Some of it cannot be redistributed, but some of it can be stored for future inspection.”
What this means is that the OpenEuroLLM project might have to keep some of the training data under wraps, while making it available to auditors upon request — as required for high-risk AI systems under the EU AI Act.
“We hope that most of the data [will be open], especially the data coming from the Common Crawl,” Hajič said. “We would like to have it all completely open, but we will see. In any case, we will have to comply with AI regulations.”
Two for one
Another criticism that emerged in the aftermath of OpenEuroLLM’s formal unveiling was that a very similar project had launched in Europe just a few months earlier. EuroLLM, which launched its first model in September and a follow-up in December, is co-funded by the EU alongside a consortium of nine partners. These include academic institutions such as the University of Edinburgh and corporations such as Unbabel, which last year won millions of GPU training hours on EU supercomputers.
EuroLLM shares similar goals to its near-namesake: “To build an open source European Large Language Model that supports 24 Official European Languages, and a few other strategically important languages.”
Andre Martins, head of research at Unbabel, took to social media to highlight these similarities, noting that OpenEuroLLM is appropriating a name that already exists. “I hope the different communities collaborate openly, share their expertise, and don’t decide to reinvent the wheel every time a new project gets funded,” Martins wrote.
Hajič called the situation “unfortunate,” adding that he hoped they might be able to cooperate, though he stressed that due to the source of its funding in the EU, OpenEuroLLM is restricted in terms of its collaborations with non-EU entities, including U.K. universities.
Funding gap
The arrival of China’s DeepSeek, and the cost-to-performance ratio it promises, has given some encouragement that AI initiatives might be able to do far more with much less than initially thought. However, over the past few weeks, many have questioned the true costs involved in building DeepSeek.
“With respect to DeepSeek, we actually know very little about what exactly went into building it,” Peter Sarlin, who is technical co-lead on the OpenEuroLLM project, told TechCrunch.
Regardless, Sarlin reckons OpenEuroLLM will have access to sufficient funding, as the money mostly needs to cover people. A large chunk of the cost of building AI systems is compute, and that should mostly be covered through the project’s partnership with the EuroHPC centers.
“You could say that OpenEuroLLM actually has quite a significant budget,” Sarlin said. “EuroHPC has invested billions in AI and compute infrastructure, and has committed billions more to expanding that in the coming few years.”
It’s also worth noting that the OpenEuroLLM project isn’t building toward a consumer- or enterprise-grade product. It’s purely about the models, and this is why Sarlin reckons the budget it has should be ample.
“The intent here isn’t to build a chatbot or an AI assistant — that would be a product initiative requiring a lot of effort, and that’s what ChatGPT did so well,” Sarlin said. “What we’re contributing is an open source foundation model that functions as the AI infrastructure for companies in Europe to build upon. We know what it takes to build models, it’s not something you need billions for.”
Since 2017, Sarlin has spearheaded AI lab Silo AI, which launched — in partnership with others, including the HPLT project — the family of Poro and Viking open models. These already support a handful of European languages, but the company is now readying its next iteration, the “Europa” models, which will cover all European languages.
And this ties in with the whole “not starting from scratch” notion espoused by Hajič — there is already a bedrock of expertise and technology in place.
Sovereign state
As critics have noted, OpenEuroLLM does have a lot of moving parts — which Hajič acknowledges, albeit with a positive outlook.
“I’ve been involved in many collaborative projects, and I believe it has its advantages versus a single company,” he said. “Of course they’ve done great things at the likes of OpenAI to Mistral, but I hope that the combination of academic expertise and the companies’ focus could bring something new.”
And in many ways, it’s not about trying to outmaneuver Big Tech or billion-dollar AI startups; the ultimate goal is digital sovereignty: (mostly) open foundation LLMs built by, and for, Europe.
“I hope this won’t be the case, but if, in the end, we are not the number one model, and we have a ‘good’ model, then we will still have a model with all the components based in Europe,” Hajič said. “This will be a positive result.”

A blog which focuses on business, Networth, Technology, Entrepreneurship, Self Improvement, Celebrities, Top Lists, Travelling, Health, and lifestyle. A source that provides you with each and every top piece of information about the world. We cover various different topics.
GTC felt more bullish than ever, but Nvidia’s challenges are piling up

Nvidia took San Jose by storm this year, with a record-breaking 25,000 attendees flocking to the San Jose Convention Center and surrounding downtown buildings. Many workshops, talks, and panels were so packed that people had to lean against walls or sit on the floor — and suffer the wrath of organizers shouting commands to get them to line up properly.
Nvidia currently sits at the top of the AI world, with record-breaking financials, sky-high profit margins, and no serious competitors yet. But the coming months also hold unprecedented risk for the company as it faces U.S. tariffs, DeepSeek, and shifting priorities from top AI customers.
At GTC 2025, Nvidia CEO Jensen Huang attempted to project confidence, unveiling powerful new chips, personal “supercomputers,” and, of course, really cute robots. It was an exhaustive sales pitch – one aimed at investors reeling from Nvidia’s nosediving stock.
“The more you buy, the more you save,” Huang said at one point during a keynote on Tuesday. “It’s even better than that. Now, the more you buy, the more you make.”
Inference boom
More than anything, Nvidia at this year’s GTC sought to assure attendees – and the rest of the world watching – that demand for its chips won’t slow down anytime soon.
During his keynote, Huang claimed that nearly the “entire world got it wrong” on traditional AI scaling falling out of vogue. Chinese AI lab DeepSeek, which earlier this year released a highly efficient “reasoning” model called R1, prompted fears among investors that Nvidia’s monster chips may no longer be necessary for training competitive AI.
But Huang has repeatedly insisted that power-hungry reasoning models will, in fact, drive more demand for the company’s chips, not less. That’s why at GTC, Huang showed off Nvidia’s next line of Vera Rubin GPUs, claiming they’ll perform inference (that is, run AI models) at roughly double the rate of Nvidia’s current best Blackwell chip.
The threat to Nvidia’s business that Huang spent less time addressing was upstarts like Cerebras, Groq, and other low-cost inference hardware and cloud providers. Nearly every hyperscaler is developing a custom chip for inference, if not training, as well. AWS has Graviton and Inferentia (which it’s reportedly aggressively discounting), Google has TPUs, and Microsoft has Cobalt 100.

In the same vein, tech giants currently extremely reliant on Nvidia chips, including OpenAI and Meta, are looking to reduce those ties via in-house hardware efforts. If they — and the aforementioned other rivals — are successful, it’ll almost assuredly weaken Nvidia’s stranglehold on the AI chips market.
That’s perhaps why Nvidia’s share price dipped around 4% following Huang’s keynote. Investors might’ve been holding out hope for “one last thing” — or perhaps an accelerated launch window. In the end, they got neither.
Tariff tensions
Nvidia also sought to allay worries about tariffs at GTC 2025.
The U.S. hasn’t imposed any tariffs on Taiwan (where Nvidia gets most of its chips), and Huang claimed tariffs wouldn’t do “significant damage” in the short run. He stopped short of promising that Nvidia would be shielded from the long-term economic impacts, however — whatever form they ultimately take.
Nvidia has clearly received the Trump Administration’s “America First” message, with Huang pledging at GTC to spend hundreds of billions of dollars on manufacturing in the U.S. While that would help the company diversify its supply chains, it’s also a massive cost for Nvidia, whose multitrillion-dollar valuation depends on healthy profit margins.
New business
As it looks to seed and grow businesses other than its core chips line, Nvidia at GTC drew attention to its new investments in quantum, an industry that the company has historically neglected. At GTC’s first Quantum Day, Huang apologized to the CEOs of major quantum companies for causing a minor stock crash in January 2025 after he suggested that the tech wouldn’t be very useful for the next 15 to 30 years.

On Tuesday, Nvidia announced that it would open a new center in Boston, NVAQC, to advance quantum computing in collaboration with “leading” hardware and software makers. The center will, of course, be equipped with Nvidia chips, which the company says will enable researchers to simulate quantum systems and the models necessary for quantum error correction.
In the more immediate future, Nvidia sees what it’s calling “personal AI supercomputers” as a potential new revenue-maker.
At GTC, the company launched DGX Spark (previously called Project Digits) and DGX Station, both of which are designed to allow users to prototype, fine-tune, and run AI models in a range of sizes at the edge. Neither is exactly inexpensive – they retail for thousands of dollars – but Huang boldly proclaimed that they represent the future of the personal PC.
“This is the computer of the age of AI,” Huang said during his keynote. “This is what computers should look like, and this is what computers will run in the future.”
We’ll soon see if customers agree.

Gmail’s new AI search now sorts emails by relevance instead of chronological order

Google is rolling out a new Gmail update that is designed to help you find the email you’re looking for more quickly. The company announced on Thursday that it will now use AI to consider factors like recency, most-clicked emails, and frequent contacts when surfacing emails based on your search query.
Up until now, Gmail has simply displayed emails in chronological order based on keywords.
“With this update, the emails you’re looking for are far more likely to be at the top of your search results — saving you valuable time and helping you find important information more easily,” the company wrote in a blog post.
Google is also introducing a new toggle so people can switch between “Most relevant” and “Most recent” emails on a search results page. The toggle is aimed at users who prefer seeing search results displayed in chronological order, rather than the new “Most relevant” default option.
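Google hasn’t published how these signals are combined, but the behavior described — a weighted blend of recency, clicks, and contact frequency, with a toggle back to chronological order — can be sketched as follows. All names, fields, and weights here are hypothetical, not Gmail’s actual implementation.

```python
# Hypothetical sketch of relevance-based email search: blend the signals
# the announcement mentions (recency, clicks, frequent contacts) into one
# score, with a "most recent" mode as the chronological fallback.
from dataclasses import dataclass

@dataclass
class Email:
    subject: str
    age_days: float      # how long ago the email arrived
    clicks: int          # how often the user has opened it
    contact_freq: int    # how often the user emails this sender

def relevance(e: Email) -> float:
    # Illustrative weights only: newer, more-clicked mail from frequent
    # contacts scores higher.
    recency = 1.0 / (1.0 + e.age_days)
    return 0.5 * recency + 0.3 * e.clicks + 0.2 * e.contact_freq

def search(results: list[Email], mode: str = "most_relevant") -> list[Email]:
    if mode == "most_recent":  # the chronological toggle
        return sorted(results, key=lambda e: e.age_days)
    return sorted(results, key=relevance, reverse=True)
```

Under a scheme like this, an older invoice the user opens constantly can outrank yesterday’s untouched newsletter in “Most relevant” mode, while the toggle restores the familiar newest-first ordering.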

The update is rolling out globally for users with personal Google accounts and is available on the web and in the Gmail app for Android and iOS. Google plans to expand the feature to business users in the future.
The launch of the new search functionality comes as Google has been building out its email offering to better compete with Apple’s Mail app, which got a slew of Gmail-like features with iOS 18 last year. For instance, Gmail recently gained a Gemini-powered feature that lets users add events to a Google Calendar directly from an email.
A few months ago, Gmail rolled out “Summary cards” that allow users to take actions in their inbox, like tracking packages, checking in for flights, setting reminders, marking bills as paid, and more.
Gmail also introduced the ability for users to chat with Gemini about their inbox directly within the app on both iOS and Android.

SoftBank to acquire semiconductor designer Ampere in $6.5B all-cash deal

SoftBank Group announced on Wednesday that it will acquire Ampere Computing, a chip designer founded by former Intel executive Renee James, through a $6.5 billion all-cash deal as a strategic move to broaden its investment in AI infrastructure.
Ampere will operate as a wholly owned subsidiary of SoftBank after the deal, which is expected to close in the second half of 2025.
Carlyle and Oracle, Ampere’s lead investors, will sell their shares in the Santa Clara, California startup. According to SoftBank’s statement, Carlyle holds a 59.65% stake while Oracle holds 32.27%. The startup employs 1,000 semiconductor engineers.
In 2021, SoftBank considered acquiring a minority stake in Ampere, which was then valued at $8 billion, per Bloomberg.
SoftBank is the largest shareholder of Arm Holdings, and Ampere has developed a server chip based on the Arm compute platform, positioning the two companies as strong partners. (SoftBank acquired British chip designer Arm for $32 billion in 2016, and it became publicly traded in 2023.) Ampere’s customers include Google Cloud, Microsoft Azure, Oracle Cloud, Alibaba, and Tencent, as well as companies like HPE and Supermicro.
SoftBank stated the Ampere acquisition will bolster its capabilities in key areas like AI and compute and expedite its growth initiatives. The acquisition announcement follows a string of deals made by the Japanese tech conglomerate over the past few months, including its partnership with OpenAI to develop advanced enterprise AI called “Cristal intelligence.” SoftBank has also invested in the AI infrastructure project Stargate, which is building data centers for OpenAI across the U.S., and purchased an old Sharp factory in Japan.
“The future of Artificial Super Intelligence requires breakthrough computing power,” said Masayoshi Son, Chairman and CEO of SoftBank Group Corp. “Ampere’s expertise in semiconductors and high-performance computing will help accelerate this vision and deepens our commitment to AI innovation in the United States.”
Ampere was founded in 2017 by James, who previously worked at Intel and private equity firm Carlyle and served on the board of Oracle. The company initially specialized in cloud-native computing but has since expanded its scope to include sustainable AI compute.
“With a shared vision for advancing AI, we are excited to join SoftBank Group and partner with its portfolio of leading technology companies,” said James. “This is a fantastic outcome for our team, and we are excited to drive forward our AmpereOne roadmap for high-performance Arm processors and AI.”
