Technology
AI systems with ‘unacceptable risk’ are now banned in the EU
As of Sunday in the European Union, the bloc’s regulators can ban the use of AI systems they deem to pose “unacceptable risk” or harm.
February 2 is the first compliance deadline for the EU’s AI Act, the comprehensive AI regulatory framework that the European Parliament approved last March after years of development. The Act officially entered into force on August 1, 2024; February 2 marks the first of its compliance deadlines.
The specifics are set out in Article 5, but broadly, the Act is designed to cover a wide range of use cases where AI might appear and interact with individuals, from consumer applications to physical environments.
Under the bloc’s approach, there are four broad risk levels: (1) Minimal risk (e.g., email spam filters) will face no regulatory oversight; (2) limited risk, which includes customer service chatbots, will face light-touch regulatory oversight; (3) high risk — AI for healthcare recommendations is one example — will face heavy regulatory oversight; and (4) unacceptable risk applications — the focus of this month’s compliance requirements — will be prohibited entirely.
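The four tiers described above can be sketched as a simple lookup table. This is an illustrative sketch only, not legal guidance; the tier names and examples follow the article, while the `oversight_for` helper is an assumption for illustration:

```python
# Illustrative sketch of the AI Act's four risk tiers as described above.
# Not legal guidance; the examples mirror those given in the article.
RISK_TIERS = {
    "minimal": {"oversight": "none", "example": "email spam filter"},
    "limited": {"oversight": "light-touch", "example": "customer service chatbot"},
    "high": {"oversight": "heavy", "example": "healthcare recommendation AI"},
    "unacceptable": {"oversight": "prohibited", "example": "social scoring system"},
}

def oversight_for(tier: str) -> str:
    """Return the oversight level for a given risk tier."""
    return RISK_TIERS[tier]["oversight"]

print(oversight_for("unacceptable"))  # prohibited
```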
Some of the unacceptable activities include:
- AI used for social scoring (e.g., building risk profiles based on a person’s behavior).
- AI that manipulates a person’s decisions subliminally or deceptively.
- AI that exploits vulnerabilities like age, disability, or socioeconomic status.
- AI that attempts to predict people committing crimes based on their appearance.
- AI that uses biometrics to infer a person’s characteristics, like their sexual orientation.
- AI that collects “real time” biometric data in public places for the purposes of law enforcement.
- AI that tries to infer people’s emotions at work or school.
- AI that creates — or expands — facial recognition databases by scraping images online or from security cameras.
Companies that are found to be using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered. They could be on the hook for up to €35 million (~$36 million), or 7% of their annual revenue from the prior fiscal year, whichever is greater.
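The "whichever is greater" penalty rule above can be expressed as a one-line calculation. This is a hypothetical sketch using whole-euro integer math, not legal advice:

```python
# Maximum fine under the rule described above: EUR 35 million or 7% of
# prior-year annual revenue, whichever is greater. Illustrative only.
def max_fine_eur(annual_revenue_eur: int) -> int:
    return max(35_000_000, annual_revenue_eur * 7 // 100)

# For EUR 1B in prior-year revenue, 7% (EUR 70M) exceeds the EUR 35M floor.
print(max_fine_eur(1_000_000_000))  # 70000000
# For EUR 100M in revenue, the EUR 35M floor applies instead.
print(max_fine_eur(100_000_000))  # 35000000
```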
The fines won’t kick in for some time, noted Rob Sumroy, head of technology at the British law firm Slaughter and May, in an interview with TechCrunch.
“Organizations are expected to be fully compliant by February 2, but … the next big deadline that companies need to be aware of is in August,” Sumroy said. “By then, we’ll know who the competent authorities are, and the fines and enforcement provisions will take effect.”
Preliminary pledges
The February 2 deadline is in some ways a formality.
Last September, over 100 companies signed the EU AI Pact, a voluntary pledge to start applying the principles of the AI Act ahead of its entry into application. As part of the Pact, signatories — which included Amazon, Google, and OpenAI — committed to identifying AI systems likely to be categorized as high risk under the AI Act.
Some tech giants, notably Meta and Apple, skipped the Pact. French AI startup Mistral, one of the AI Act’s harshest critics, also opted not to sign.
That isn’t to suggest that Apple, Meta, Mistral, or others who didn’t agree to the Pact won’t meet their obligations — including the ban on unacceptably risky systems. Sumroy points out that, given the nature of the prohibited use cases laid out, most companies won’t be engaging in those practices anyway.
“For organizations, a key concern around the EU AI Act is whether clear guidelines, standards, and codes of conduct will arrive in time — and crucially, whether they will provide organizations with clarity on compliance,” Sumroy said. “However, the working groups are, so far, meeting their deadlines on the code of conduct for … developers.”
Possible exemptions
There are exceptions to several of the AI Act’s prohibitions.
For example, the Act permits law enforcement to use certain systems that collect biometrics in public places if those systems help perform a “targeted search” for, say, an abduction victim, or to help prevent a “specific, substantial, and imminent” threat to life. This exemption requires authorization from the appropriate governing body, and the Act stresses that law enforcement can’t make a decision that “produces an adverse legal effect” on a person solely based on these systems’ outputs.
The Act also carves out exceptions for systems that infer emotions in workplaces and schools where there’s a “medical or safety” justification, like systems designed for therapeutic use.
The European Commission, the executive branch of the EU, said that it would release additional guidelines in “early 2025,” following a consultation with stakeholders in November. However, those guidelines have yet to be published.
Sumroy said it’s also unclear how other laws on the books might interact with the AI Act’s prohibitions and related provisions. Clarity may not arrive until later in the year, as the enforcement window approaches.
“It’s important for organizations to remember that AI regulation doesn’t exist in isolation,” Sumroy said. “Other legal frameworks, such as GDPR, NIS2, and DORA, will interact with the AI Act, creating potential challenges — particularly around overlapping incident notification requirements. Understanding how these laws fit together will be just as crucial as understanding the AI Act itself.”
The Case for Custom eLearning Platforms: Why Organizations Are Making the Switch
The corporate eLearning market has exploded in recent years, growing over 800% since 2000. As the demand for eLearning continues to accelerate, more and more organizations are finding that off-the-shelf solutions cannot keep pace with their training needs. This has led many companies to make the switch to custom-built eLearning platforms tailored specifically for their requirements.
There are several key reasons driving the demand for customized eLearning tools:
Greater Flexibility and Scalability
Generic eLearning software packages often impose rigid constraints that limit their ability to adapt to an organization’s evolving needs. Meanwhile, the “one-size-fits-all” approach fails to support the personalized learning critical for employee development. Custom platforms provide flexibility to add and modify features to match ever-changing business goals. As companies scale training across global workforces, custom solutions built on cloud infrastructure can scale seamlessly to handle growing demand.
Deeper Integration Across Systems
Smooth integration with existing HR, LMS, and other business systems is critical for optimizing training workflows. However, off-the-shelf tools rarely integrate well, creating data and process silos. Custom platforms can tightly integrate role-based learning paths with core business applications, sync user profiles, enable single sign-on, and more. This level of integration makes for a far more impactful training function.
Better Data and Analytics
Generic software severely limits access to data insights that drive improvement. Custom platforms unlock a trove of analytics on content consumption, learner progression, platform adoption, and real-time feedback. Integrated analytics dashboards and APIs allow businesses to derive deep visibility across the learner lifecycle. These insights help continuously enhance learner experience, target development gaps, and demonstrate direct training ROI.
Enhanced Learner Engagement
For modern learners accustomed to consumer-grade digital experiences, poor platform usability quickly erodes engagement. Custom designs allow companies to incorporate familiar features from popular apps and websites while optimizing for their audience. Adaptive learning approaches further personalize content to individual styles and needs. With modular component architecture, custom platforms stay on the cutting edge of new modalities like AR/VR to captivate learners.
Brand and Culture Alignment
Off-the-shelf tools impose a generic and often disruptive experience that clashes with existing brand identity and culture. In contrast, custom platforms allow organizations to carry over familiar styling, voice, and workflow patterns. Consistency in experience preserves brand recognition while smoother onboarding leads to wider adoption across all employee groups. Over time, the platform can evolve alongside cultural changes as well.
While custom eLearning tools require greater upfront investment, for enterprise training needs, the long-term benefits far outweigh the costs. The ability to mold platforms to current and future needs results in greater leverage from learning spend.
As businesses demand ever more from their learning technology, custom solutions provide the agility needed for true scale. Rather than forcing training functions into the constraints of generic software, custom eLearning development keeps the focus on nurturing talent and capabilities. For any organization looking to drive workforce transformation through learning, custom eLearning represents the way forward.
Pintarnya raises $16.7M to power jobs and financial services in Indonesia
Pintarnya, an Indonesian employment platform that goes beyond job matching by offering financial services along with full-time and side-gig opportunities, said it has raised a $16.7 million Series A round.
The funding was led by Square Peg with participation from existing investors Vertex Venture Southeast Asia & India and East Ventures.
Ghirish Pokardas, Nelly Nurmalasari, and Henry Hendrawan founded Pintarnya in 2022 to tackle two of the biggest challenges Indonesians face daily: earning enough and borrowing responsibly.
“Traditionally, mass workers in Indonesia find jobs offline through job fairs or word of mouth, with employers buried in paper applications and candidates rarely hearing back. For borrowing, their options are often limited to family/friend or predatory lenders with harsh collection practices,” Henry Hendrawan, co-founder of Pintarnya, told TechCrunch. “We digitize job matching with AI to make hiring faster and we provide workers with safer, healthier lending options — designed around what they can reasonably afford, rather than pushing them deeper into debt.”
Around 59% of Indonesia’s 150 million workforce is employed in the informal sector, highlighting the difficulties these workers encounter in accessing formal financial services because they lack verifiable income and official employment documentation.
Pintarnya tackles this challenge by partnering with asset-backed lenders to offer secured loans, using collateral such as gold, electronics, or vehicles, Hendrawan added.
Since its seed funding in 2022, Pintarnya has grown to serve over 10 million job seekers and 40,000 employers nationwide. Its revenue has increased almost fivefold year-over-year, and the company expects to reach break-even by the end of the year, Hendrawan noted. Pintarnya primarily serves users aged 21 to 40, most of whom have a high school education or a diploma below university level. The startup focuses on this underserved segment, given Indonesia’s large population of blue-collar and informal workers.
“Through the journey of building employment services, we discovered that our users needed more than just jobs — they needed access to financial services that traditional banks couldn’t provide,” said Hendrawan.
While Indonesia already has job platforms like JobStreet, Kalibrr, and Glints, these primarily cater to white-collar roles, which represent only a small portion of the workforce, according to Hendrawan. Pintarnya’s platform is designed specifically for blue-collar workers, offering tailored experiences such as quick-apply options for walk-in interviews, affordable e-learning on relevant skills, in-app opportunities for supplemental income, and seamless connections to financial services like loans.
The same trend is evident in Indonesia’s fintech sector, which similarly caters to white-collar or upper-middle-class consumers. Conventional credit scoring models for loans, which rely on steady monthly income and bank account activity, often leave blue-collar workers overlooked by existing fintech providers, Hendrawan explained.
When asked about which fintech services are most in demand, Hendrawan mentioned, “Given their employment status, lending is the most in-demand financial service for Pintarnya’s users today. We are planning to ‘graduate’ them to micro-savings and investments down the road through innovative products with our partners.”
The new funding will enable Pintarnya to strengthen its platform technology and broaden its financial service offerings through strategic partnerships. With most Indonesian workers employed in blue-collar and informal sectors, the co-founders see substantial growth opportunities in the local market. Leveraging their extensive experience in managing businesses across Southeast Asia, they are also open to exploring regional expansion when the timing is right.
“Our vision is for Pintarnya to be the everyday companion that empowers Indonesians to not only make ends meet today, but also plan, grow, and upgrade their lives tomorrow … In five years, we see Pintarnya as the go-to super app for Indonesia’s workers, not just for earning income, but as a trusted partner throughout their life journey,” Hendrawan said. “We want to be the first stop when someone is looking for work, a place that helps them upgrade their skills, and a reliable guide as they make financial decisions.”
OpenAI warns against SPVs and other ‘unauthorized’ investments
In a new blog post, OpenAI warns against “unauthorized opportunities to gain exposure to OpenAI through a variety of means,” including special purpose vehicles, known as SPVs.
“We urge you to be careful if you are contacted by a firm that purports to have access to OpenAI, including through the sale of an SPV interest with exposure to OpenAI equity,” the company writes. The blog post acknowledges that “not every offer of OpenAI equity […] is problematic” but says firms may be “attempting to circumvent our transfer restrictions.”
“If so, the sale will not be recognized and carry no economic value to you,” OpenAI says.
Investors have increasingly used SPVs (which pool money for one-off investments) as a way to buy into hot AI startups, prompting some VCs to criticize them as a vehicle for “tourist chumps.”
Business Insider reports that OpenAI isn’t the only major AI company looking to crack down on SPVs, with Anthropic reportedly telling Menlo Ventures it must use its own capital, not an SPV, to invest in an upcoming round.