AI Drives a Sweeping Transformation: From Media to Investment and Education


Amid rapid technological change, the newspaper Okaz stands out as a media model integrating artificial intelligence, coinciding with a Cabinet decision naming 2026 the Year of Artificial Intelligence. Globally, AI startups are seeing an unprecedented boom, with massive funding from Nvidia and multibillion-dollar deals, while academics warn of the new era's threat to critical thinking.

📰 Latest Developments (5 stories)

"Okaz" and Artificial Intelligence: Toward a Media Model That Combines the Profession's Mission with the Tools of the Future - Okaz

Okaz - Google | Mar 10, 2026 | 85%


Former Meta AI Chief's Startup Valued at $3.5 Billion

New York Times - Technology | Mar 10, 2026 | 75%

Advanced Machine Intelligence Labs, founded by Yann LeCun and other ex-Meta researchers, has raised more than $1 billion from investors.

Cabinet Names 2026 the Year of Artificial Intelligence

Akhbar 24 | Mar 10, 2026 | 95%

The Cabinet, meeting today via video link under the chairmanship of Crown Prince and Prime Minister Mohammed bin Salman, affirmed the Kingdom's full right to take whatever measures safeguard its security, sovereignty, and territorial integrity and deter aggression, commending in this context the Saudi air defenses' interception and destruction of hostile missiles and drones that attempted to target sites and facilities inside the country.

At the start of the session, the Crown Prince briefed the Cabinet on the phone calls held in recent days with leaders of a number of sisterly and friendly countries, part of ongoing consultations on regional developments and their implications for regional and international security and stability. The Cabinet strongly condemned the criminal Iranian attacks on the Kingdom, the Gulf Cooperation Council states, and a number of Arab, Islamic, and friendly countries, and the persistence in threatening security and stability and flagrantly violating international conventions and international law by attacking civilian sites, airports, and oil facilities.

Minister of Media Salman Al-Dosari explained in a statement after the session that the Cabinet reviewed the Kingdom's continuing role, rooted in its approach of supporting solidarity, cooperation, and coordination with its Gulf and Arab neighbors in the face of current regional challenges, expressing appreciation for the joint ministerial meeting between the Gulf Cooperation Council and the European Union and the ministerial meeting of the Arab League Council, whose statements condemned the brutal Iranian attacks.

The Cabinet then took up a number of reports on domestic affairs, noting the recommendations of the 33rd annual meeting of regional governors, which focused broadly on supporting development opportunities, strengthening the enablers of various sectors, and continuing to improve development services. It also addressed the state's keen interest in strengthening charitable work and entrenching values of giving, presenting a model to be emulated in the fields of generosity and solidarity, and commended the success of the sixth National Campaign for Charitable Work, which built on its achievements in previous years. The Cabinet regarded the celebration of Flag Day, which falls tomorrow, Wednesday, March 11, as an affirmation of pride in its significance and symbolism in the history of the Saudi state, in its founding, unification, and building, and in what it embodies of the Kingdom's firm principles and national identity.

The Cabinet reviewed the items on its agenda, including matters studied jointly with the Shura Council, as well as the conclusions of the Council of Political and Security Affairs, the Council of Economic and Development Affairs, the Cabinet's General Committee, and the Bureau of Experts, and approved the establishment of the Royal Institute for Anthropology and Cultural Studies, the organization of the King Fahd National Library, and the naming of 2026 as the Year of Artificial Intelligence.

The Cabinet also authorized the Minister of Foreign Affairs, or his deputy, to negotiate and sign a draft memorandum of understanding on political consultations between the Kingdom's Ministry of Foreign Affairs and Malaysia's Ministry of Foreign Affairs, and approved several memoranda of understanding: one on training between the Kingdom's Ministry of Sport and the Arab Administrative Development Organization; one between the Kingdom's Ministry of Justice and the World Intellectual Property Organization on publishing judicial rulings; two on tourism cooperation between the Kingdom's Ministry of Tourism and, respectively, the Hungarian Tourism Agency and San Marino's Ministry of Tourism, Post, Cooperation, Expos, Information, and Attraction of Tourism Investment; one between the Kingdom's Ministry of Economy and Planning and Bahrain's Ministry of Sustainable Development on sustainable development; and one between the Kingdom's Real Estate General Authority and Qatar's Real Estate Regulatory Authority on cooperation in the real estate sector.

AI Startup Thinking Machines Secures Major Funding and Chip Deal from Nvidia

Yahoo Finance | Mar 10, 2026 | 75%

March 10 (Reuters) - AI startup Thinking Machines Lab said on Tuesday it has struck a multi-year partnership with Nvidia that will see it receive a significant investment and procure at least one gigawatt of the chipmaker's next-generation processors. Financial terms of the deal were not disclosed.

Under the agreement, Thinking Machines - founded last year by former OpenAI Chief Technology Officer Mira Murati - will deploy Nvidia's upcoming Vera Rubin systems starting early next year. The computing power will primarily be used to train the startup's artificial intelligence models. Industry executives have said 1 gigawatt of computing power, enough to power roughly 750,000 U.S. homes, can cost around $50 billion. The deal will help Thinking Machines compete with larger rivals in building powerful AI systems, and underscores the industry's eagerness to scale computing capacity.

Thinking Machines quickly became one of Silicon Valley's most closely watched AI startups after raising about $2 billion in a seed funding round led by Andreessen Horowitz that valued the company at $12 billion. Nvidia was also an investor in the round. The startup has recently been seeking to raise more in a new funding round that could value it at tens of billions of dollars, sources told Reuters earlier. The company has recently seen several departures, including co-founder and former Chief Technology Officer Barret Zoph and co-founder Luke Metz, who both returned to their former employer OpenAI amid fierce competition for AI talent.

The partnership also highlights Nvidia's growing role as a financier of the startups that rely on its AI chips. It has made a recent $30 billion investment in OpenAI and invested $10 billion in Anthropic, while also supplying the graphics processing units (GPUs) used to train and run their models, a dynamic that some industry analysts say creates a circular flow of capital and computing resources. That in turn has given rise to comparisons with the late 1990s tech bubble. (Reporting by Krystal Hu in San Francisco; Additional reporting by Deepa Seetharaman; Editing by Edwina Gibbs)

Professors Struggle to Save Critical Thinking in the Age of AI

The Guardian - Technology | Mar 10, 2026 | 75%

Lea Pao, a professor of literature at Stanford University, has been experimenting with ways to get her students to learn offline. She has them memorize poems, perform at recitation events, look at art in the real world. It's an effort to reconnect them to the bodily experience of learning, she said, and to keep them from turning to artificial intelligence to do the work for them.

"There's no AI-proof anything," Pao said. "Rather than policing it, I hope that their overall experiences in this class will show them that there's a way out."

It doesn't always work. Recently, she asked students to visit a local museum, look at a painting for 10 minutes, and then write a few paragraphs describing the experience. It was a purposefully personal assignment, yet one student responded with a sophisticated but drab reflection, "too perfect, without saying anything", Pao said. She later learned the student had tried to visit the museum on a Monday, when it was closed, and then turned to AI.

As artificial intelligence has upended the way in which students read, learn and write, professors like Pao have been left to their own devices to figure out how to teach in a transformed landscape. Many faculty members in the hard sciences and social sciences have pointed to the "productivity boost" AI can offer, and the research potential unlocked by its ability to process and analyze vast amounts of data. AI's most enthusiastic proponents have boasted that the technology may help cure cancer and "accelerate" climate action. But in the fields most explicitly associated with the production of critical thought, collectively referred to as the "humanities", most scholars see AI as a unique threat, one that extends far beyond cheating on homework and casts doubt on the future of higher education itself in a fast-approaching machine-dominated future.

American degrees often cost up to hundreds of thousands of dollars and result in decades of debt, and recent years have seen a freefall in public confidence in US higher education. With the potential for AI to increasingly substitute independent thought, a pressing question becomes even more urgent: what exactly is a university education for?

The Guardian spoke with more than a dozen professors, almost all of them in the humanities or adjacent fields, about how they are adapting at a time of dizzying technological advancement with few standards and little guidance. By and large, they expressed the view that reliance on artificial intelligence is fundamentally antithetical to the development of the human intelligence they are tasked with guiding. They described desperately trying to prevent students from turning to AI as a replacement for thought, at a time when the technology is threatening to upend not only their education, but everything from the stock market to social relations to war.

Most professors described the experience of contending with the technology in despairing terms. "It's driving so many of us up the wall," one said. "Generative AI is the bane of my existence," another wrote in an email. "I wish I could push ChatGPT (and Claude, Microsoft Copilot, etc) off a cliff."

"I now talk about AI with my students not under the framework of cheating or academic honesty but in terms that are frankly existential," said Dora Zhang, a literature professor at the University of California, Berkeley. "What is it doing to us as a species?"

A 'soulless' education

AI criticism, or "doomerism", as the technology's proponents view it, has been mounting across sectors. But when it comes to its impact on students, early studies point to potentially catastrophic effects on cognitive abilities and critical thinking skills.
Michael Clune, a literature professor and novelist, said that, already, many students have been left "incapable of reading and analyzing, synthesizing data, all kinds of skills". In a recent essay, he warned that colleges and universities rushing to embrace the technology were preparing to "self-lobotomize". Ohio State University, where he teaches, has begun requiring every freshman to take a class in generative AI and pitched itself as the first "AI fluent" university, pledging to embed AI "across every major".

"No one knows what that means," Clune said of the plan. "In my case, as a literature professor, these tools actually seem to mitigate against the educational goals I have for my students."

That is the crux of what many professors in the humanities fear: that a technology that may well be a cutting-edge tool in other fields could spell the end of their own. Alex Karp, the Palantir co-founder and CEO, stoked those anxieties when he said in a recent interview that AI will "destroy humanities jobs". On the other hand, Daniela Amodei, Anthropic's president and co-founder, who was a literature major, said the opposite: that "studying the humanities is going to be more important than ever". A number of tech and finance companies have recently said that they are looking to hire humanities majors for their creativity and critical thinking skills. Indeed, enrollment data at some universities suggests that the long-struggling humanities might have begun to see a resurgence in the age of AI, with early signs pointing to a reversal of the decades-long decline in English majors in favor of Stem ones.

Some caution that the humanities will survive, but as a province of the few. When he predicted the end of the humanities, Karp assured that there would be "more than enough jobs" for those with vocational training.
Indeed, several professors spoke about concerns that AI will exacerbate a widening divide in US higher education: that small numbers of elite students will have access to a more traditional, largely tech-free liberal arts education, while everyone else gets a "degraded, soulless form of vocational training administered by AI instructors", said Zhang. "I fully expect that we will start seeing a kind of bifurcation in education," said Matt Seybold, a professor at Elmira College in New York, who has written critically about "technofeudalism".

Many professors described keeping the technology out of the classroom as a battle already lost. As many as 92% of students have reported resorting to the technology in their school work, recent surveys show, and the numbers are rapidly increasing even as growing numbers express concerns about the technology's accuracy and the integrity of using it. Reliance on AI among faculty is also on the rise, with observers pointing to the dystopian possibility that the college experience may soon be reduced to AI systems grading AI-generated homework, "a conversation between two robots".

Some universities have adopted AI detection software to catch artificially generated work; others prohibit faculty from directly accusing students of having used AI, as such accusations can often be wrong. Professors said they resorted to oral interrogations, handwritten notebooks and class participation for grading purposes. Some require students to submit transparency statements describing their work process. Others have reportedly injected random words like "broccoli" and "Dua Lipa" into assignments to confuse learning models, exposing students who did not even read the prompts before pasting them into AI.
Many professors spoke of their frustration at having to sift through students' artificially generated homework. "It creates hours of additional labor," echoed Danica Savonick, an English professor at the State University of New York Cortland. "And makes me feel like a cop."

Some allow students to use AI for research, to a point. Karl Steel, an English professor at Brooklyn College, said that AI has helped make students' presentations richer and more interesting, but that while they may use it to prepare, he has them speak from minimal notes and stand in front of a photo of a text they annotated by hand. He also assigns written responses to texts only after the class has discussed them. "I suppose they could use their phones to record the conversation, feed a transcript into a chatbot and produce a paper that way," he said. "But that is more trouble, I think, than most students would take."

Left to their own devices

Many universities' administrations are embracing AI for instruction, research and evaluation. In some cases, AI has guided decisions about which programs to cut at times of austerity in the education sector. More than a dozen universities have partnered with OpenAI on a $50m initiative that the company has said will "accelerate research progress and catalyze a new generation of institutions equipped to harness the transformative power of AI". California State University has joined several of the world's largest tech companies to "create an AI-powered higher education system", as the university put it. Multiple universities have introduced AI majors and master's programs.

The plans are lofty but offer little guidance on what professors are supposed to do with students who can't read more than a couple of paragraphs at a time or who turn in essays generated in seconds by a machine. Left largely to themselves, some are trying to articulate clearer lines around AI use, and to organize a more coordinated effort against its encroaching dominance.
Last year, the American Association of University Professors, which represents 55,000 faculty members nationwide, published a report warning that universities were adopting the technology "uncritically" and with little transparency. Some university unions have begun incorporating protections against AI in their contracts to establish oversight.