The Human Cost of the AI Race: Google's Record Profits and the Painful Toll on India's Workers

Amid the frenzied race for artificial intelligence, Google announces record profits that exceed expectations and plans to double its spending. But behind the scenes lies a heavy price: a report reveals the suffering of Indian workers forced to watch abusive content for hours on end to train AI models, throwing into relief the contradiction between rapid technological growth and its hidden human cost.

📰 Latest Developments (3 stories)

Indian women workers watch abusive content for hours to train AI

The Guardian - Technology | 5/2/2026 | 70%

On the veranda of her family’s home, with her laptop balanced on a mud slab built into the wall, Monsumi Murmu works from one of the few places where the mobile signal holds. The familiar sounds of domestic life come from inside the house: clinking utensils, footsteps, voices. On her screen a very different scene plays: a woman is pinned down by a group of men, the camera shakes, there is shouting and the sound of breathing. The video is so disturbing that Murmu speeds it up, but her job requires her to watch to the end.

Murmu, 26, is a content moderator for a global technology company, logging on from her village in India’s Jharkhand state. Her job is to classify images, videos and text that have been flagged by automated systems as possible violations of the platform’s rules. On an average day, she views up to 800 videos and images, making judgments that train algorithms to recognise violence, abuse and harm.

[Image: Monsumi Murmu in the forest near her home. Photograph: Anuj Behal]

This work sits at the core of machine learning’s recent breakthroughs, which rest on the fact that AI is only as good as the data it is trained on. In India, this labour is increasingly performed by women, who are part of a workforce often described as “ghost workers”.

“The first few months, I couldn’t sleep,” she says. “I would close my eyes and still see the screen loading.” Images followed her into her dreams: of fatal accidents, of losing family members, of sexual violence she could not stop or escape. On those nights, she says, her mother would wake and sit with her.

Now, she says, the images no longer shock her the way they once did. “In the end, you don’t feel disturbed – you feel blank.” There are still some nights, she says, when the dreams return.
“That’s when you know the job has done something to you.”

Researchers say this emotional numbing – followed by delayed psychological fallout – is a defining feature of content moderation work. “There may be moderators who escape psychological harm, but I’ve yet to see evidence of that,” says Milagros Miceli, a sociologist leading the Data Workers’ Inquiry, a project investigating the roles of workers in AI. “In terms of risk,” she says, “content moderation belongs in the category of dangerous work, comparable to any lethal industry.”

Studies indicate content moderation triggers lasting cognitive and emotional strain, often resulting in behavioural changes such as heightened vigilance. Workers report intrusive thoughts, anxiety and sleep disturbances. A study of content moderators published last December, which included workers in India, identified traumatic stress as the most pronounced psychological risk. The study found that even where workplace interventions and support mechanisms existed, significant levels of secondary trauma persisted.

[Image: A slab extending from the mud wall of her house serves as Murmu’s desk. She uses a secondhand laptop to do content moderation work. Photograph: Anuj Behal]

As early as 2021, an estimated 70,000 people in India were working in data annotation, a market valued at about $250m (£180m) that year, according to the country’s IT industry body Nasscom. About 60% of revenues came from the US, while only 10% came from India. About 80% of data-annotation and content-moderation workers are drawn from rural, semi-rural or marginalised backgrounds. Firms deliberately operate from smaller cities and towns, where rents and labour costs are lower and a growing pool of first-generation graduates is seeking jobs. Improvements in internet connectivity have made it possible to plug these locations directly into global AI supply chains without relocating workers to cities. Women form half or more of this workforce.
For companies, women are seen as reliable, detail-oriented and more likely to accept home-based or contract work that can be framed as “safe” or “respectable”. These jobs offer rare access to income without migration. A sizeable number of workers in these hubs come from Dalit and Adivasi (tribal) communities. For many of them, digital work of any kind represents an upward shift: cleaner, more regular and better paid than agricultural labour or mining.

[Image: A data annotation office in Ranchi, Jharkhand. Tech firms often set up offices in smaller cities. Photograph: Anuj Behal]

But working from or close to home can also reinforce women’s marginal position, according to Priyam Vadaliya, a researcher working on AI and data labour, formerly with the Bengaluru-based Aapti Institute. “The work’s respectability, and the fact that it arrives at the doorstep as a rare source of paid employment, often creates an expectation of gratitude,” she says. “That expectation can discourage workers from questioning the psychological harm it causes.”

Raina Singh was 24 when she took up data-annotation work. A recent graduate, she had planned to teach, but the certainty of a monthly income felt necessary before she could afford to pursue it. She returned to her home town of Bareilly in Uttar Pradesh and each morning logged on from her bedroom, working through a third-party firm contracted by global technology platforms. The pay – about £330 a month – seemed reasonable. The job description was vague, but the work felt manageable.

Her initial assignments involved text-based tasks: screening short messages, flagging spam, identifying scam-like language. “It didn’t feel alarming,” she says. “Just dull. But there was something exciting too. I felt like I was working behind the AI. For my friends, AI was just ChatGPT.
I was seeing what makes it work.”

But about six months in, the assignments changed. Without notice, Singh was moved to a new project tied to an adult entertainment platform. Her task was to flag and remove content involving child sexual abuse. “I had never imagined this would be part of the job,” she says. The material was graphic and relentless. When she raised concerns with her manager, she recalls being told: “This is God’s work – you’re keeping children safe.”

[Image: Raina working on her laptop: ‘It didn’t feel alarming, just dull. But there was something exciting too.’ Photograph: Anuj Behal]

Soon after, the task shifted again. Singh and six others on her team were instructed to categorise pornographic content. “I can’t even count how much porn I was exposed to,” she says. “It was constant, hour after hour.” The work affected her personal life. “The idea of sex started to disgust me,” she says. She withdrew from intimacy and felt increasingly disconnected from her partner. When Singh complained, the response was blunt: “Your contract says data annotation – this is data annotation.”

She left the job, but a year on, she says, the thought of sex can still trigger a sense of nausea or dissociation. “Sometimes, when I’m with my partner, I feel like a stranger in my own body. I want closeness, but my mind keeps pulling away.”

Vadaliya says job listings rarely explain what the work actually involves. “People are hired under ambiguous labels, but only after contracts are signed and training begins do they realise what the actual work is.” Remote and part-time roles are promoted aggressively online as “easy money” or “zero-investment” opportunities, and circulated through YouTube videos, LinkedIn posts, Telegram channels and influencer-led tutorials that frame the work as flexible, low-skilled and safe.

[Image: Hyderabad is home to India’s AI industry – far removed from the scattered rural locations where data is actually labelled. Photograph: Anuj Behal]

The Guardian spoke to eight data-annotation and content-moderation companies in India. Only two said they provided psychological support to workers; the rest argued that the work was not demanding enough to require mental healthcare. Vadaliya says that where support exists, the individual has to seek it out, shifting the burden of care on to workers. “It ignores the reality that many data workers, especially those coming from remote or marginalised backgrounds, may not even have the language to articulate what they are experiencing,” she says. The absence of legal recognition of psychological harm in India’s labour laws, she adds, also leaves workers without meaningful protections.

[Image: Monsumi Murmu walks in the forest to help deal with the stresses of work. ‘I sit under the open sky and try to notice the quiet around me.’ Photograph: Anuj Behal]

The psychological toll is intensified by isolation. Content moderators and data workers are bound by strict non-disclosure agreements (NDAs) that bar them from speaking about their work, even with family and friends. Violating an NDA can lead to termination or legal action. Murmu feared that if her family understood her job, then she, like many other girls in her village, would be forced out of paid employment and into marriage. With just four months left on her contract, which pays about £260 a month, the spectre of unemployment keeps her from flagging concerns about her mental health. “Finding another job worries me more than the work itself,” she says.

In the meantime, she has found ways to live with the distress. “I go for long walks into the forest. I sit under the open sky and try to notice the quiet around me.” Some days she collects mineral stones from the land near her home or paints traditional geometric patterns on the walls of the house. “I don’t know if it really fixes anything,” says Murmu. “But I feel a little better.”

Google plans to double spending amid the AI race

The New York Times - Business | 5/2/2026 | 90%

Profits jumped 30 percent to $34.5 billion last quarter, and the tech giant is increasing its capital spending this year to as much as $185 billion.

Google's profits beat expectations amid plans for massive AI investments

The Guardian - Technology | 5/2/2026 | 75%

Google’s parent company, Alphabet, beat Wall Street expectations on Wednesday and is planning a sharp increase in capital spending in 2026 as it continues to invest heavily in AI infrastructure.

Alphabet on Wednesday reported profit of $34.5bn in the recently ended quarter, as revenue from cloud computing soared 48%. The company also forecast spending of between $175bn and $185bn this year, far higher than analysts’ expectations of roughly $115bn.

On an earnings call, investors pressed Alphabet’s chief executive, Sundar Pichai, on the significant increase. “We’ve been supply constrained, even as we’ve been ramping up our capacity. Obviously, our CapEx spend this year is an eye towards the future,” Pichai said in response. “We are constantly planning for the long term.” Pichai added that he expects Google to “go through the year in a supply-constrained way”.

Alphabet’s chief financial officer, Anat Ashkenazi, elaborated on the ways Google is trying to free up capital. According to Ashkenazi, that includes constructing its own data centers “to ensure that we do it in the most efficient way”, having an AI-powered tool known as a “coding agent” write about half the company’s code (which is then reviewed by engineers), and using AI across departments, even for smaller tasks such as handling invoices.

“We’re seeing our AI investments and infrastructure drive revenue and growth across the board,” Pichai said. Alphabet’s annual revenue exceeded $400bn for the first time, he added. The company reported $113.83bn in revenue for the fourth quarter of 2025 – surpassing Wall Street estimates of $111.43bn. Earnings per share (EPS) also beat expectations: the company reported $2.82 in EPS, compared with estimates of $2.63.

The report comes after several months of good news for the company in the AI race.
The newest version of Gemini, released by Google in November, is considered to be at the forefront of the generative AI industry, a development that has prompted panic at competitor OpenAI. Alphabet’s stock jumped 3% when Google debuted the model. Then in January, Google and Apple announced that Apple will start using Gemini to power AI features such as Siri; the Apple assistant has previously faced criticism for being less advanced and accurate than its competitors. Google’s valuation shot up to $4tn after the deal, making it the second most valuable company in the world. Analysts viewed the multi-year agreement as a huge win for Google: the tech giant beat out competitors such as OpenAI and gained access to Apple’s user base of 2.5bn active devices. “Gemini is becoming the AI engine for the world’s most successful software companies,” Pichai said on Wednesday.

Alphabet’s projected spending on AI infrastructure means its capital expenditure could as much as double this year. Shares were volatile in after-hours trading, with investors weighing the swell in spending against surging revenue and profit. Like its larger rivals Amazon Web Services and Microsoft’s Azure, Google Cloud has been grappling with capacity constraints, and Pichai said the expenditures were necessary “to meet customer demand and capitalize on the growing opportunities we have ahead of us”. But investors have grown increasingly concerned about the payoff from AI investments as the big cloud companies collectively spend massive amounts building out their infrastructure. Meta last week raised its capital investment for AI development this year by 73%.

As the AI arms race heats up, Google’s Gemini AI assistant app exceeded 750 million monthly users, Pichai said, up by 100 million since November. The driverless car division Waymo is working to integrate the AI model, and Google announced that its Chrome browser will adopt more Gemini AI features, too.
Google is still dealing with scrutiny from lawmakers and regulators; the company has faced years of legal battles with US antitrust regulators over allegations that it built an illegal monopoly over online search and advertising. That fight flared up again this week when the Department of Justice and several states appealed against last year’s landmark antitrust ruling, in which a judge imposed only modest limits on the company’s contracts. The federal government wanted tougher restrictions. Google executives did not discuss the legal proceedings on the earnings call.