The TechCrunch AI glossary

Artificial intelligence is a deep and convoluted world. The scientists who work in this field often rely on jargon and lingo to explain what they’re working on. As a result, we frequently have to use those technical terms in our coverage of the artificial intelligence industry. That’s why we thought it would be helpful to put together a glossary with definitions of some of the most important words and phrases that we use in our articles.
We will regularly update this glossary to add new entries as researchers continually uncover novel methods to push the frontier of artificial intelligence while identifying emerging safety risks.
AI agent

An AI agent refers to a tool that makes use of AI technologies to perform a series of tasks on your behalf — beyond what a more basic AI chatbot could do — such as filing expenses, booking tickets or a table at a restaurant, or even writing and maintaining code. However, as we’ve explained before, there are lots of moving pieces in this emergent space, so different people can mean different things when they refer to an AI agent. Infrastructure is also still being built out to deliver on envisaged capabilities. But the basic concept implies an autonomous system that may draw on multiple AI systems to carry out multi-step tasks.
Chain of thought

Given a simple question, a human brain can answer without even thinking too much about it — things like “which animal is taller between a giraffe and a cat?” But in many cases, you often need a pen and paper to come up with the right answer because there are intermediary steps. For instance, if a farmer has chickens and cows, and together they have 40 heads and 120 legs, you might need to write down a simple equation to come up with the answer (20 chickens and 20 cows).
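The intermediate steps in the farmer puzzle can be written out explicitly. A minimal sketch in Python, walking through the same reasoning a person would do on paper:

```python
# Farmer puzzle: chickens and cows with 40 heads and 120 legs in total.
# Each animal has one head; chickens have 2 legs, cows have 4.
heads = 40
legs = 120

# Step 1: if all 40 animals were chickens, there would be 2 * 40 = 80 legs.
# Step 2: swapping a chicken for a cow adds 2 legs, so the 40 extra legs
# must come from (120 - 80) / 2 = 20 cows.
cows = (legs - 2 * heads) // 2
chickens = heads - cows

print(chickens, cows)  # 20 20
```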
In an AI context, chain-of-thought reasoning for large language models means breaking down a problem into smaller, intermediate steps to improve the quality of the end result. It usually takes longer to get an answer, but the answer is more likely to be right, especially in a logic or coding context. So-called reasoning models are developed from traditional large language models and optimized for chain-of-thought thinking thanks to reinforcement learning.
(See: Large language model)
Deep learning

A subset of self-improving machine learning in which AI algorithms are designed with a multi-layered, artificial neural network (ANN) structure. This allows them to make more complex correlations compared to simpler machine learning-based systems, such as linear models or decision trees. The structure of deep learning algorithms draws inspiration from the interconnected pathways of neurons in the human brain.
Deep learning AIs are able to identify important characteristics in data themselves, rather than requiring human engineers to define these features. The structure also supports algorithms that can learn from errors and, through a process of repetition and adjustment, improve their own outputs. However, deep learning systems require a lot of data points to yield good results (millions or more). It also typically takes longer to train deep learning vs. simpler machine learning algorithms — so development costs tend to be higher.
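The layered structure can be illustrated with a toy example (not a production framework): each layer multiplies its inputs by weights and applies a nonlinearity, and stacking layers is what lets the model capture correlations a single linear model cannot. The layer sizes and random weights here are arbitrary, purely for illustration:

```python
import math
import random

random.seed(0)

def layer(inputs, weights):
    """One dense layer: weighted sums of the inputs, then a tanh nonlinearity."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs))) for row in weights]

# A tiny two-layer network: 3 inputs -> 4 hidden units -> 2 outputs.
# Real deep learning systems stack many such layers with millions of weights.
hidden_weights = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
output_weights = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]

x = [0.5, -0.2, 0.1]
hidden = layer(x, hidden_weights)    # first layer of learned features
output = layer(hidden, output_weights)
print(len(hidden), len(output))  # 4 2
```

During training, the weights in each layer are nudged to reduce the error between the network's output and the target, which is the "repetition and adjustment" process described above.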
(See: Neural network)
Fine-tuning

This means further training of an AI model that’s intended to optimize performance for a more specific task or area than was previously a focal point of its training — typically by feeding in new, specialized (i.e. task-oriented) data.
Many AI startups are taking large language models as a starting point to build a commercial product but vying to amp up utility for a target sector or task by supplementing earlier training cycles with fine-tuning based on their own domain-specific knowledge and expertise.
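The core idea — continue training an already-trained model on a small, domain-specific data set rather than starting from scratch — can be sketched on a toy one-parameter model. The starting weight and the "domain data" below are made-up values for illustration, not anything from a real LLM:

```python
# "Pretrained" weight for a toy model y = w * x (hypothetical starting value,
# standing in for weights learned on large general-purpose data).
w = 1.0

# New, specialized data set where the true relationship is closer to y = 2x.
domain_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

lr = 0.05
for _ in range(100):  # a short extra training run, not training from scratch
    for x, y in domain_data:
        error = w * x - y
        w -= lr * error * x  # gradient step on the squared error

print(round(w, 2))  # 2.0 — the weight has adapted to the new domain
```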
(See: Large language model (LLM))
Large language model (LLM)

Large language models, or LLMs, are the AI models used by popular AI assistants, such as ChatGPT, Claude, Google’s Gemini, Meta’s Llama, Microsoft Copilot, or Mistral’s Le Chat. When you chat with an AI assistant, you interact with a large language model that processes your request directly or with the help of different available tools, such as web browsing or code interpreters.
AI assistants and LLMs can have different names. For instance, GPT is OpenAI’s large language model and ChatGPT is the AI assistant product.
LLMs are deep neural networks made of billions of numerical parameters (or weights, see below) that learn the relationships between words and phrases and create a representation of language, a sort of multidimensional map of words.
Those are created from encoding the patterns they find in billions of books, articles, and transcripts. When you prompt an LLM, the model generates the most likely pattern that fits the prompt. It then evaluates the most probable next word after the last one based on what was said before. Repeat, repeat, and repeat.
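That generate-one-word-at-a-time loop can be sketched with a toy probability table standing in for the billions of learned weights. The words and probabilities here are invented for illustration; a real model computes these probabilities from its parameters rather than looking them up:

```python
# Toy stand-in for an LLM's next-word prediction: a hand-written table of
# next-word probabilities replaces the real model's billions of weights.
next_word_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def generate(prompt_word, steps):
    words = [prompt_word]
    for _ in range(steps):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        # Greedy decoding: always pick the most probable next word,
        # then repeat with the extended context.
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the", 3))  # the cat sat down
```

Real models condition on the whole preceding context, not just the last word, and usually sample from the probability distribution rather than always taking the top choice.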
(See: Neural network)
Neural network

Neural network refers to the multi-layered algorithmic structure that underpins deep learning — and, more broadly, the whole boom in generative AI tools following the emergence of large language models.
Although the idea of taking inspiration from the densely interconnected pathways of the human brain as a design structure for data-processing algorithms dates all the way back to the 1940s, it was the much more recent rise of graphics processing units (GPUs) — via the video game industry — that really unlocked the power of the theory. These chips proved well suited to training algorithms with many more layers than was possible in earlier epochs — enabling neural network-based AI systems to achieve far better performance across many domains, whether for voice recognition, autonomous navigation, or drug discovery.
(See: Large language model (LLM))
Weights

Weights are core to AI training as they determine how much importance (or weight) is given to different features (or input variables) in the data used for training the system — thereby shaping the AI model’s output.
Put another way, weights are numerical parameters that define what’s most salient in a data set for the given training task. They achieve their function by applying multiplication to inputs. Model training typically begins with weights that are randomly assigned, but as the process unfolds, the weights adjust as the model seeks to arrive at an output that more closely matches the target.
For example, an AI model for predicting house prices that’s trained on historical real estate data for a target location could include weights for features such as the number of bedrooms and bathrooms, whether a property is detached or semi-detached, and whether it has parking, a garage, and so on.
Ultimately, the weights the model attaches to each of these inputs are a reflection of how much they influence the value of a property, based on the given data set.
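Making the house-price example concrete: each feature is multiplied by its weight and the results are summed. The weight values and base price below are made up for illustration, not learned from any real data set:

```python
# Toy house-price model: each feature is multiplied by its weight and summed.
# All numbers here are hypothetical, standing in for learned values.
weights = {
    "bedrooms": 30_000,   # each bedroom adds ~$30k to the prediction
    "bathrooms": 15_000,
    "detached": 40_000,   # 1 if detached, 0 otherwise
    "parking": 10_000,    # 1 if it has parking, 0 otherwise
}

def predict_price(features, base_price=100_000):
    return base_price + sum(weights[name] * value for name, value in features.items())

house = {"bedrooms": 3, "bathrooms": 2, "detached": 1, "parking": 1}
print(predict_price(house))  # 270000
```

Training would start these weights at random values and repeatedly adjust them until the predictions line up with historical sale prices, as described above.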

A blog which focuses on business, Networth, Technology, Entrepreneurship, Self Improvement, Celebrities, Top Lists, Travelling, Health, and lifestyle. A source that provides you with each and every top piece of information about the world. We cover various different topics.
Mystery will may reveal Zappos founder’s final wishes

According to the WSJ, a recently discovered will suggests late Zappos co-founder Tony Hsieh had concrete plans for his fortune despite previous beliefs that he died without leaving instructions for an estate that’s estimated to be worth $1.2 billion.
Among other things, the document, signed in 2015 and included in a recent court filing, contains a striking no-contest clause directed at Hsieh’s family: if any of his four family members challenges his wishes, all will receive nothing. The will also allocates over $50 million and several Las Vegas properties to undisclosed trusts tied to recipients he aimed to surprise.
Notably, Hsieh also earmarked $3 million for his alma mater Harvard University, the storied institution that’s currently battling with the Trump administration, which has frozen billions of dollars in federal funding and is reportedly giving Harvard’s endowment a closer look.
The will’s discovery adds another bizarre element to the already strange legal battle over Hsieh’s estate following his November 2020 death in a house fire at age 46. Hsieh reportedly crafted the will to create a “WOW factor” for beneficiaries, wanting them to “live in the wow.”

Fluent Ventures backs replicated startup models in emerging markets

A new venture firm aims to prove that the most successful startup ideas don’t have to be born or scaled in Silicon Valley.
Fluent Ventures, a global early-stage fund, is backing founders replicating proven business models from Western markets in fintech, digital health, and commerce across emerging markets. The more cynical might describe this as a clone factory, but founder and managing partner Alexandre Lazarow calls the firm’s strategy “geographic alpha.”
Fluent’s premise is that many of the world’s most valuable startups are not entirely new concepts but, more simply, local adaptations of models that have already succeeded elsewhere.
The San Francisco-based firm, founded in 2023, is deploying $40 million across a fund, an incubator, and a structured co-investment vehicle with limited partners. It is writing initial checks of $250,000 to $2 million from pre-seed to Series A and plans to make 22–25 investments, with follow-ons.
“We are contrarians at heart,” said Lazarow, who previously invested at Omidyar Network and Cathay Innovation. “We believe the world’s best innovations are not the exclusive purview of Silicon Valley.”
Fluent is not exactly working in a bubble: the last decade has seen a massive decentralization in the technology industry. In 2013, just four cities had produced a unicorn. Today, that number exceeds 150.
And that has been on the back of rinse and repeat, with many of the top tech players in emerging markets mirroring successful startups that have been built elsewhere, such as Amazon clones in e-commerce, Stripe clones in payments, and neo-banking apps in fintech. The first breakout neo-bank was Tinkoff from Russia. “That movement scaled globally, and [it] was one of the insights that motivated my investments in Chime in the U.S. and Banco Neon in Brazil,” said Lazarow.
Lazarow insists Fluent doesn’t just copy-paste.
“That rarely works, in our opinion. Local adaptation is critical,” he said.
The firm points to ride-hailing as an example. Uber may have pioneered the category, but in Indonesia, Go-Jek localized it by incorporating motorcycle taxis and super app functionality similar to China’s WeChat. Now Uber Eats is essentially chasing that evolution, Lazarow argues.
To that point, Fluent Ventures, in addition to finding adapted models, screens for local product-market fit and founder-market alignment.
While the firm passed on several construction marketplaces globally, it backed BRKZ in Saudi Arabia, a localized take on India’s Infra.Market. The founder, a former Careem executive, was a strong operator in a region with surging infrastructure demand, Lazarow noted.
Despite calling itself a global fund, Lazarow says Fluent doesn’t aim for equal allocation across every geography. Instead, it goes deeper in the regions where it sees the most potential. Right now, that means a focus on Latin America, MENA, Africa, Southeast Asia, and selective U.S. markets.
Its current portfolio includes Minu, a Mexican employee wellness platform; Sabi, a Nigerian B2B commerce startup; Prima, a Brazil-based industrial marketplace; and Baton, a U.S. M&A platform for SMBs.
The firm says these companies have raised multiple follow-on rounds since Fluent’s early checks. Collectively, startups from Lazarow’s prior and current portfolios have generated over $30 billion in enterprise value, with seven reaching unicorn status.
Skeptics still question the exit landscape in emerging markets, especially now that valuations there have risen and unicorns are far more numerous than a decade ago. Yet Fluent sees momentum building. IPOs of startups like Nubank, UiPath, Swiggy, and Talabat prove that global outcomes can emerge outside the U.S. and Europe — and then, as in the case of Nubank and UiPath, those companies can still go public in the U.S. if they choose.
“Exit markets are also maturing in these regions,” Lazarow remarks. “New secondary firms are rising. Stock markets are looking to build local listing capabilities. Yes, the U.S. has much more developed IPO and M&A markets. But under the hood, some of the largest and most profitable exits are already happening outside.”
Fluent has also built out a different kind of network around the kinds of founders it invests in. More than 75 unicorn founders and VCs back the fund, including David Vélez (Nubank), Nick Nash (Sea Group), Akshay Garg (Kredivo), and Sean Harper (Kin), alongside institutional LPs and family offices from around the world. According to Lazarow, many are active contributors, helping portfolio companies with talent, fundraising, and expansion.
The firm also relies on a small group of venture partners from ZenBusiness, Terminal, Kin, and Dell, bringing both sector depth and geographic reach.
In a world where venture capital might be rethinking overexposure to the U.S. and China, Fluent believes its approach offers LPs something few firms can: diversification.
“We believe the best ideas come from anywhere and scale everywhere,” says the partner whose firm claims a spot on Kauffman Fellows’ top‑returner index, thanks to his earlier personal stakes in Chime, ZenBusiness and Sidecar Health.

Tesla profits drop 71% on weak sales and anti-Elon Musk sentiment

Tesla’s flailing sales figures have put the company closer to the red than it has been in years, according to financial results released Tuesday, threatening one of its biggest advantages over other EV players.
The electric automaker reported $409 million in net income on $19.3 billion in revenue after delivering almost 337,000 EVs in the first quarter of the year. The company’s net income reflects a 71% drop from the same quarter last year.
It was the worst quarter for Tesla deliveries in more than two years and came on the heels of the company’s first-ever year-over-year drop in sales. Tesla’s income was buffered by selling $595 million in zero-emissions tax credits, according to its earnings report — without those, it would have posted a loss.
And yet, Tesla stock rose in after-hours trading as investors put more weight on the company’s plans to begin production on an affordable EV in June and CEO Elon Musk’s comments during an earnings call that he would reduce his role with the Department of Government Efficiency to focus more attention on Tesla. Musk did not commit to ending his DOGE work altogether though, noting he may continue in some capacity through the remainder of President Donald Trump’s second term.
TechCrunch published a roundup of other Musk comments covering tariffs, robotaxis, AI, and EVs, during Tesla’s earnings call.
Tesla also cautioned shareholders about how the trade war may affect its business moving forward. The company said President Trump’s tariffs and “changing political sentiment” could have a “meaningful impact on demand for our products.”
The company noted the current tariffs, the bulk of which are directed at China, will have “a relatively larger impact on our Energy business compared to automotive.” Tesla said it is taking actions to stabilize the business in the medium to long term and focus on maintaining its health, but it also cautioned investors that it can’t say whether it will be able to grow sales this year.
Tesla is sticking to its ambitious (but mysterious) plans around making more affordable models, stating it remains on track for start of production of these vehicles in the first half of 2025. During the earnings call, Musk was more specific, stating production would begin in June.
These vehicles will use aspects of the next-generation platform that powers the robotaxi, but will rely on Tesla’s existing platform that powers the Model Y and Model 3, the company said in its shareholder letter. As such, these cheaper vehicles will be produced on the same manufacturing lines as the current vehicle lineup.
This flies in the face of a Reuters report from last week that claimed the first of these new EVs is delayed by months.
Tesla’s sales are up against a number of headwinds.
The company’s EV lineup is aging (though the sedans and SUVs have now all gotten face-lifts) and its newest product, the Cybertruck, is nowhere near the hit that CEO Elon Musk thought it could be. And Musk’s far-right politics, along with his involvement in the Trump administration, have created a sizable backlash to Tesla’s brand.
At the same time, Musk has oriented the company toward its Robotaxi and Optimus robot projects.
He has promised to launch an initial version of the Robotaxi service in Austin this June, with other cities potentially coming by the end of this year, but has been light on details about how it will work.
Musk has yet to demonstrate that Teslas are capable of driving themselves without human intervention despite years of making that promise. What’s more, The Information recently reported that an internal analysis done at Tesla showed the Robotaxi program would lose money for a long period of time even if it were to work.
At this time last year, Tesla was grappling with some gloomy numbers. In case you forgot, the company’s profits fell 55% to $1.13 billion in the first quarter of 2024 from the same period in 2023. Tesla said a protracted EV price-cutting strategy and “several unforeseen challenges” cut into the automaker’s bottom line.
Tesla tried to turn that profit ship around, but faced continued pressure. In Q2 of 2024, Tesla reported $1.5 billion in profit, down 45% from the same period in 2023. Profits were hit by a $622 million restructuring charge, though it’s worth noting that the figure was padded by a record $890 million in regulatory credit sales.
This article was originally published at 1:15 pm PT. It has since been updated with comments from Elon Musk and other executives from the earnings call.
