Weekly Digest on AI and Emerging Technologies (10 March 2025)

Governance and Legislation

 

We Need to Avert an AI Safety Winter

(Siméon Campos, Chloe Touzet – RUSI – 7 March 2025) The third AI Summit in Paris (February 10-11) differed markedly from its Seoul and Bletchley Park predecessors. A successful fundraiser, the summit was an occasion for President Macron to present his strategy for a third way in AI governance, beyond American or Chinese leadership. While previous summits maintained a tight focus on safety and limited participation, France positioned the event as AI’s equivalent of environmental policy’s Conference of the Parties, expanding its scope to include 100 countries, four days of preliminary scientific and cultural activities, and a programme of side events accommodating every stakeholder’s taste. Hosted in Paris’s iconic Grand Palais, with banners promoting “AI Science, not Science Fiction” adorning the main hall, the French summit nonetheless sidelined the 100 scientists who had agreed in Seoul to deliver an International AI Safety Report summarising the scientific consensus on the risks posed by AI. The summit programme also excluded follow-up from the companies that had committed in Seoul to publish safety frameworks in time for Paris. In downplaying “exaggerated anxieties” about AI risks, the French summit departed from the consensus-building efforts of its predecessors. – https://www.rusi.org/explore-our-research/publications/commentary/we-need-avert-ai-safety-winter

 

AI in humanitarian missions: Opportunities and challenges

(Samar Jai Singh Jaswal – Observer Research Foundation – 6 March 2025) Humanitarian crises are becoming increasingly complex, driven by factors such as protracted conflicts, climate change, global pandemics, and mass displacement. These challenges have burdened humanitarian mechanisms, necessitating innovative approaches to address urgent needs. Technology offers hope here. Its integration in humanitarian action has led to transformative changes, enabling faster responses, improved resource allocation, and data-driven decision-making. In this context, Artificial Intelligence has emerged as a game-changer with diverse applications in the humanitarian sector. – https://www.orfonline.org/expert-speak/ai-in-humanitarian-missions-opportunities-and-challenges

 

Meta’s Move to Limit Fact-Checking Endangers Women—and Democracy

(Melanne Verveer, Kristine Baekgaard – Lawfare – 6 March 2025) “We’re going to catch less bad stuff.” That’s how Mark Zuckerberg described his recent decision to end Meta’s fact-checking program, conceding that more harmful posts are going to make their way onto his platforms. What he failed to acknowledge or account for, however, are the repercussions this will have for women’s well-being, safety, and ability to participate fully and freely in public life. The tech platform’s move to curtail fact-checking and remove restrictions on specific topics—Zuckerberg singled out “gender and immigrants”—signals a profound step backward that will disproportionately harm women, LGBTQ+ and minority communities, and amplify the spread of hate and misogyny. Research from the Georgetown Institute for Women, Peace and Security (GIWPS) further finds that widespread digital violence against women fundamentally erodes American democracy. – https://www.lawfaremedia.org/article/meta-s-move-to-limit-fact-checking-endangers-women-and-democracy

 

Risk thresholds for frontier AI: Insights from the AI Action Summit

(Eunseo Dana Choi, Dylan Rogers – OECD.AI – 5 March 2025) How many hot days make a heatwave? When do rising water levels become a flood? How many people constitute a crowd? We live in a world defined by thresholds. Thresholds impose order on the messy continuum of reality and help us make decisions. They can be seen as pre-defined points above which additional mitigations are deemed necessary. There is increasing interest in thresholds as a tool for governing advanced AI systems, or frontier AI. AI developers such as Google DeepMind, Meta, and Anthropic have published safety frameworks, including thresholds at which risks from their systems would be unacceptable. The OECD recently conducted an expert survey and public consultation on the topic of thresholds. To deepen this conversation, the UK AI Security Institute (AISI) and the OECD AI Unit convened leading experts at the AI Action Summit to discuss the role of thresholds in AI governance. Representatives from the nuclear and aviation industries joined experts from the Frontier Model Forum, Google DeepMind, Meta, Humane Intelligence, SaferAI, and the EU AI Office. This blog captures some key insights from the discussions. – https://oecd.ai/en/wonk/risk-thresholds-for-frontier-ai-insights-from-the-ai-action-summit

 

From Open-Source to All-Source: Leveraging Local Knowledge for Atrocity Prevention

(Jacqueline Geis – Just Security – 4 March 2025) The tools available to human rights researchers have expanded dramatically over the past 20 years, enabling greater remote investigative powers than ever before. Analysts in distant locations working independently, in loose collectives or for formal NGOs, can now parse social media feeds, analyze satellite imagery, and examine geographical data that were once the preserve of government intelligence agencies. As these technologies have become more readily available to a wider variety of actors, funders and governments have increasingly directed resources toward open-source investigation (OSINT) efforts, which can be launched rapidly as crises unfold and redeployed as situations change. Yet, this focus often comes at the expense of building local community networks that can provide a more varied dataset gained from proximity, lived experience, and local knowledge. Whereas OSINT efforts can be stood up immediately, such networks must be developed well before peak information demand. This process requires longer lead times and sustained financial and personnel resourcing that often stretches beyond the short-term (and frequently reactive) institutional funding timelines for crisis response. – https://www.justsecurity.org/108314/atrocity-prevention-open-source-local-knowledge/

Your Town Needs AI Experts, Not Just More GPUs

(Kevin Frazier – Lawfare – 4 March 2025) On or before July 22, the Trump administration will receive an “AI Action Plan” that could fundamentally reshape America’s technological future. To realize the goals of the January 2025 executive order on American AI leadership, this plan must prioritize a national strategy for artificial intelligence (AI) literacy—one that systematically breaks down geographic barriers to AI knowledge and creates pathways for all Americans to participate in the AI economy. Recent research led by former Department of Labor Chief Economist Jennifer Hunt reveals a troubling reality: Communities more than 125 miles from AI hotspots see 17 percent lower growth in AI-related jobs and innovation, creating widening opportunity gaps between coastal tech hubs and the rest of America. Maintaining U.S. technological leadership demands more than establishing AI literacy in a few centers—it requires a comprehensive approach to diffusing AI knowledge across the country, much as the Rural Electrification Administration once transformed America by spreading both electrical infrastructure and practical knowledge to communities far from urban centers. The stakes of this challenge extend beyond economic metrics to the very foundations of American competitiveness in an AI-driven future. – https://www.lawfaremedia.org/article/your-town-needs-ai-experts–not-just-more-gpus

Harnessing AI to Improve Access to Justice in Civil Courts

(Shana Lynch – Stanford HAI – 4 March 2025) In the United States, 20 million civil cases are filed annually. Of these, 75% involve at least one party without legal representation. David Engstrom, the LSVF Professor of Law at Stanford University and co-director of the Deborah L. Rhode Center on the Legal Profession, identifies several root causes for this lack of representation. People may struggle with time and resource costs, can’t access legal representation, find the legal process confusing, or face difficulty locating tools online. Artificial intelligence presents “massive access-widening potential,” Engstrom said during a recent seminar at the Stanford Institute for Human-Centered AI. – https://hai.stanford.edu/news/harnessing-ai-to-improve-access-to-justice-in-civil-courts

 

From deepfake scams to biased AI: How incident reporting can help us keep ahead of AI’s harms

(Bénédicte Rispal, John Leo Tarver, Luis Aranda – OECD.AI – 4 March 2025) Whether they admit it or not, most people have a celebrity crush. Yet in real life, most fans don’t have the opportunity to be in touch with their favourite celebrities. But what if the tables were turned? What if the celebrity reached out first? Would the fan be able to resist? Last year, a woman found herself caught in a situation that seemed too good to be true—and it was. She was led to believe she had captured the attention of none other than Brad Pitt himself. Through deepfake videos and AI-generated images, she became convinced that she was interacting with the movie star. “They” got to know each other by conversing online. Then came requests for money. At first, they were small. But gradually, the demands grew larger until she had handed over 830,000 euros. By the time she realised the truth, the damage had been done. Authorities are still working to recover her funds, but the financial and emotional tolls remain. – https://oecd.ai/en/wonk/deepfake-scams-biased-ai-incidents-framework-reporting-can-keep-ahead-ai-harms

 

Challenges in Governing AI Agents

(Noam Kolt – Lawfare – 3 March 2025) Leading AI companies have released a new type of AI system: autonomous agents that can plan and execute complex tasks in digital environments with limited human involvement. OpenAI’s Operator, Google’s Project Mariner, and Anthropic’s Computer Use Model all perform a similar function. They type, click, and scroll in a web browser to carry out a variety of online tasks, such as ordering groceries, making restaurant reservations, and booking flights. While the performance of these agents is currently unreliable, improvements are on the horizon. Scores on multiple benchmarks are steadily improving. The aspiration is to create AI agents that can undertake a broad range of personal and professional activities, serving as artificial personal assistants and virtual coworkers. – https://www.lawfaremedia.org/article/challenges-in-governing-ai-agents

Generative AI, Democracy and Human Rights

(David Evan Harris, Aaron Shull – Centre for International Governance Innovation – 28 February 2025) Disinformation is not new, but given how disinformation campaigns are constructed, there is almost no stage that will not be rendered more effective by the use of generative artificial intelligence (AI). Given the unsatisfactory nature of current tools to address this budding reality, disinformation, especially during elections, is set to get much, much worse. As these campaigns become more sophisticated and manipulative, the foreseeable consequence will be a further erosion of trust in institutions and a heightened disintegration of civic integrity, which in turn will jeopardize a host of human rights, including electoral rights and the right to freedom of thought. In this policy brief, David Evan Harris and Aaron Shull argue that policy makers must hold AI companies liable for the harms caused or facilitated by their products that could have been reasonably foreseen, act quickly to ban using AI to impersonate real persons or organizations, and require the use of watermarking or other provenance tools to allow people to distinguish between AI-generated and authentic content. – https://www.cigionline.org/publications/generative-ai-democracy-and-human-rights/

AI Action Summit in Paris Highlights A Shifting Policy Landscape

(Stanford HAI – 27 February 2025) Leaders from government, international organizations, and academia headed to Paris this month for the AI Action Summit, where they engaged in important discussions on how AI can prioritize public interest. Key conversations centered around providing independent and reliable AI access, developing more environmentally friendly technologies, and promoting effective global governance. The summit week included nearly 100 events worldwide from Feb. 6-11, 2025, including an international conference on AI and society and a discussion series on AI and culture. – https://hai.stanford.edu/news/ai-action-summit-in-paris-highlights-a-shifting-policy-landscape

AI+Education Summit: The Future is Already Here

(Stanford HAI – 27 February 2025) Artificial intelligence can summarize text, find bugs in code, and create images. It can even record and summarize panels at a conference. New tools and use cases are being developed as we speak, each model more powerful than the last. What does this reality mean for teachers and learners? In its third edition, the AI+Education Summit hosted by the Stanford Accelerator for Learning and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) brought together researchers, educators, tech developers, and policymakers for pivotal conversations on how to shape a thriving learning ecosystem with human-centered AI technologies. The convening showcased cutting-edge, research-based applications of AI in learning and facilitated dialogue about how to ensure AI serves education ethically, responsibly, and equitably. – https://hai.stanford.edu/news/aieducation-summit-the-future-is-already-here

 

The hidden cost of AI: Unpacking its energy and water footprint

(Arti Garg, Irene Kitsara, Sarah Bérubé – OECD.AI – 26 February 2025) On 12 February 2025, the OECD and IEEE co-organised an event on the margin of the French AI Action Summit with diverse experts to discuss AI’s growing environmental challenges. The event’s three main sessions covered critical sustainability concerns: the environmental cost of inference, the impact of data centres on the electricity grid, and AI’s water footprint. – https://oecd.ai/en/wonk/the-hidden-cost-of-ai-energy-and-water-footprint

Terrorism and Counter-Terrorism

The Digital Battlefield: How Terrorists Use the Internet and Online Networks for Recruitment and Radicalization

(Aaron Y. Zelin – The Washington Institute for Near East Policy – 4 March 2025) A terrorism expert discusses what U.S. policymakers and tech companies can do to stem the proliferation of jihadist activity online, including greater attention to multilingual content moderation, cryptocurrency abuse, and other evolving factors. – https://www.washingtoninstitute.org/policy-analysis/digital-battlefield-how-terrorists-use-internet-and-online-networks-recruitment-and

 

Geostrategies

Tech in big picture: Emerging trends in 2025

(Siddharth Yadav – Observer Research Foundation – 7 March 2025) Frontier technologies have come to occupy centre stage in geopolitical discussions and national strategies in recent years. In 2025, the trepidations associated with establishing dominance in sectors like Artificial Intelligence are apparent through various high-level projects and initiatives announced by governments globally. Major economies have expressed their ambitions to become the next AI superpower or the most favoured destination for AI development and deployment. Moreover, AI scaling laws continue to hold up, as companies like OpenAI, Anthropic and Meta release increasingly powerful multimodal frontier AI models. Technological innovations and the release of more powerful AI systems are occurring amidst rising geopolitical tensions between the United States and China. In light of the rising interpenetration of geopolitics and frontier technologies, this paper will highlight emerging policy and technology trends to look out for in the coming year. – https://www.orfonline.org/research/tech-in-big-picture-emerging-trends-in-2025

DeepSeek, Huawei, Export Controls, and the Future of the U.S.-China AI Race

(Gregory C. Allen – Center for Strategic & International Studies – 7 March 2025) Six months ago, few in the West aside from obsessive AI professionals had heard of DeepSeek, a Chinese AI research lab founded barely more than a year and a half ago. Today, DeepSeek is a global sensation attracting the attention of heads of state, global CEOs, top investors, and the general public. With the release of its R1 model on January 20, 2025—the same day as President Trump’s second inauguration—DeepSeek has cemented its reputation as the top frontier AI research lab in China and caused a reassessment of assumptions about the landscape of global AI competition. By January 27, DeepSeek’s iPhone app had overtaken OpenAI’s ChatGPT as the most-downloaded free app on Apple’s U.S. App Store. The stock prices of some U.S. tech companies briefly tumbled, including the AI chip designer Nvidia, which lost more than $600 billion off its valuation in a single day. – https://www.csis.org/analysis/deepseek-huawei-export-controls-and-future-us-china-ai-race

DeepSeek Points Toward U.S.-China Cooperation, Not a Race

(Simon Goldstein, Peter N. Salib – Lawfare – 5 March 2025) On Jan. 20, the Chinese company DeepSeek released R1, a new artificial intelligence (AI) model that matches the performance of recent American reasoning models such as OpenAI’s o1. R1 has caused something of a panic in the United States, with many calling for the U.S. government to ensure that the United States prevails in the “AI race” with China. President Trump has described DeepSeek as a “wake-up call.” And in a recent post, Anthropic CEO Dario Amodei cited DeepSeek as a reason why the U.S. government should further reduce Chinese access to advanced computer chips. – https://www.lawfaremedia.org/article/deepseek-points-toward-u.s.-china-cooperation–not-a-race

Hybrid AI and Human Red Teams: Critical to Preventing Policies from Exploitation by Adversaries

(David Bray – Stimson Center – 3 March 2025) Conventional wisdom holds that policymakers need only consider geopolitical implications when crafting technology policies and export controls, but this assumption misses three critical points: First, the rapid pace of technological advancement means that traditional geopolitical analysis alone is insufficient. Second, the current approach to technology policy formation lacks the rigorous analysis of adversaries’ capabilities that was once standard practice in national security decision-making during the Cold War era. Third, modern artificial intelligence (AI) capabilities now enable rapid identification of potential exploitation of US tech policies by adversarial nation states and nonstate actors. The new U.S. presidential administration has an opportunity to combine human expertise with AI-powered analysis to identify the potential vulnerabilities of draft tech policies before they are implemented rather than after they have been weaponized by adversaries. – https://www.stimson.org/2025/hybrid-ai-and-human-red-teams-critical-to-preventing-policies-from-exploitation-by-adversaries/

Securing Full Stack U.S. Leadership in AI

(Navin Girishankar, Joseph Majkut, Cy McGeady, Barath Harithas, and Karl Smith – Center for Strategic & International Studies – 3 March 2025) Today, the United States leads the world in generative AI. Its frontier labs set the pace in model development, U.S. firms control more than half of the world’s AI accelerators, and U.S. capital markets are poised to rapidly scale investment in data center infrastructure. Lasting U.S. advantage in AI, however, is not guaranteed. The global race for compute is intensifying as competitors—adversaries and allies alike—are maneuvering to catch up. Beyond the recent breakthrough with DeepSeek, China is building massive data centers, expanding its power sector, and developing domestic AI chips to reduce Western dependence. France aims to leverage surplus nuclear power to attract data centers and support AI research centers across the country. Japan seeks to overcome space and energy constraints by powering highly efficient data centers with idled nuclear plants. The United Arab Emirates is creating AI-focused economic zones and incentives to attract international companies, with nuclear power as part of its strategy. To stay ahead in the AI race, the United States should put meaningful distance between itself and competitors across all components of the AI stack—frontier models, data centers, advanced chips, and energy. These constitute the fundamentals of AI competitiveness. While all components are important, by far the most pressing need today is ensuring rapid access to the electricity needed to power large data centers. Simply put, failing to secure energy means surrendering U.S. leadership on AI. At stake in the United States is long-term growth and productivity, market security, and national security. – https://www.csis.org/analysis/securing-full-stack-us-leadership-ai

The battle for the internet

(Mercedes Page – The Strategist – 3 March 2025) Democracies and authoritarian states are battling over the future of the internet in a little-known UN process. The United Nations is conducting a 20-year review of its World Summit on the Information Society (WSIS), a landmark series of meetings that, among other achievements, formally established today’s multistakeholder model of internet governance. This model ensures the internet remains open, global and not controlled by any single entity. – https://www.aspistrategist.org.au/the-battle-for-the-internet/

Getting macro-ready for the AI race

(Amit Singh, Adam Triggs – East Asia Forum – 2 March 2025) As the AI race drives substantial investment and productivity gains, optimal policy conditions are necessary to ensure maximum economic benefit. Policymakers must take advantage of the AI boom by encouraging greater savings and more efficient service delivery, fostering competition for global capital, streamlining financial regulations to direct savings where needed, and prioritising workforce mobility and skill development for thriving AI industries. – https://eastasiaforum.org/2025/03/02/getting-macro-ready-for-the-ai-race/

Energy and AI Coordination in the ‘Eastern Data Western Computing’ Plan

(Andrew Stokols – The Jamestown Foundation – 28 February 2025) The “Eastern Data Western Computing” plan is a multiagency strategy that coordinates cloud computing data centers and energy infrastructure across the People’s Republic of China. These are increasingly relevant with the rise of artificial intelligence. This cloud infrastructure buildout likely will not rival that of the United States, but its coordination with renewable energy capacity means that the country’s digital infrastructure will be sustainable, based on a resilient energy system, and foster economic development opportunities in underinvested regions. Early plans for building data centers in Western China were backed by Li Zhanshu, later Xi Jinping’s chief of staff, and support from other influential officials was likely key to establishing Guizhou as a hub in the national system. Many of the hubs’ locations are remote and have climates and geographic features that make them suitable for hosting data centers that can perform energy-intensive functions that do not necessarily require “real-time” computation and ultra-low latency. – https://jamestown.org/program/energy-and-ai-coordination-in-the-eastern-data-western-computing-plan/

Meta’s Waterworth cable project is about geopolitics and geoeconomics

(Ravi Nayyar – The Strategist – 28 February 2025) Announced on 14 February, Meta’s Project Waterworth is not just slated to be the world’s longest submarine cable; it also reflects ever-shifting geopolitical and geoeconomic landscapes. It presents a great opportunity for Australia to collaborate more with its regional partners, especially India and the Pacific countries, on the technologies keeping us online. For Meta, this addition to subsea infrastructure opens a chance to monetise accelerating international data flows. In developing and running this cable, Meta also seeks to prioritise its own traffic and minimise latency for its and its partners’ infrastructure and services. No surprises there. – https://www.aspistrategist.org.au/metas-waterworth-cable-project-is-about-geopolitics-and-geoeconomics/

States vulnerable to foreign aggression embrace the cloud: lessons from Taiwan

(Jocelinn Kang – The Strategist – 28 February 2025) Taiwan is among nations pioneering the adoption of hyperscale cloud services to achieve national digital resilience. The island faces two major digital threats: digital isolation, in which international connectivity is intentionally severed or significantly degraded (for instance, if all submarine cables are cut), and digital disruption, in which local infrastructure, such as data centres, is inoperable. – https://www.aspistrategist.org.au/states-vulnerable-to-foreign-aggression-embrace-the-cloud-lessons-from-taiwan/

Trump’s AI strategy puts the Indo-Pacific at a crossroads

(Malki Opatha, Bart Hogeveen – The Strategist – 28 February 2025) The United States’ refusal to sign the recent AI Action Summit declaration should be seen as a strategic shift rather than a diplomatic snub to the rest of the world. AI is as much about innovation as it is about driving economic security and military power. Therefore, Washington’s decision reflects its intent to maintain an edge in AI development, free from global constraints. For Indo-Pacific nations, this shift deepens their strategic dilemma. The region risks being caught between emerging doctrines—balancing between Europe, China and the US, between regulate and don’t regulate, between mitigating social harms and advancing military capabilities. – https://www.aspistrategist.org.au/trumps-ai-strategy-puts-the-indo-pacific-at-a-crossroads/

Security

Canadian intelligence agency warns of threat AI poses to upcoming elections

(Alexander Martin – The Record – 7 March 2025) Canada’s signals and cyber intelligence agency, the Communications Security Establishment (CSE), is warning that hostile actors are likely to use artificial intelligence tools in an attempt to disrupt the country’s forthcoming elections. The good news from the report is that CSE assesses it to be “very unlikely … that disinformation, or any AI-enabled cyber activity, would fundamentally undermine the integrity of Canada’s democratic processes.” – https://therecord.media/canada-cyber-agency-elections-warning-ai-

Ransomware Attacks Build Against Saudi Construction Firms

(Robert Lemos – Dark Reading – 6 March 2025) A recent ransomware attack has compromised a construction firm in Saudi Arabia, underscoring the increasing risk facing everyday organizations in the Middle East, as more cybercriminal and ransomware-as-a-service (RaaS) groups flock to the region. On Feb. 14, the “DragonForce” RaaS group posted an announcement to its data-leak site warning the Saudi construction firm Al Bawani that the company had been compromised. It claimed to have stolen about 6TB worth of data, according to Resecurity, a cybersecurity service provider. – https://www.darkreading.com/cyberattacks-data-breaches/ransomware-attacks-saudi-construction-firms

Defence, not more assertive cyber activity, is the right response to Salt Typhoon

(Mark Raymond, Typhaine Joffe – ASPI The Strategist – 6 March 2025) The ongoing Salt Typhoon cyberattack, affecting some of the United States’ largest telecoms companies, has galvanised a trend toward more assertive US engagement in the cyber domain. This is the wrong lesson to take. Instead, the US should prioritise investments in cyber defence and reconsider its commitment to persistent engagement, a strategic move away from earlier US approaches based on restraint and deterrence. The attack underscores the risks of an increasingly permissive cyber environment: one in which large-scale cyber operations are normalised, restraint is eroded and investments in cyber defence are insufficient. – https://www.aspistrategist.org.au/defence-not-more-assertive-cyber-activity-is-the-right-response-to-salt-typhoon/

Espionage Actor ‘Lotus Blossom’ Targets South East Asia

(Alexander Culafi – Dark Reading – 6 March 2025) An espionage-focused threat actor dubbed “Lotus Blossom” is targeting areas around the South China Sea with a proprietary backdoor malware known as “Sagerunex.” The threat actor, which targets governments, manufacturing, media, and telecommunications organizations across the region, gains access to a target and then unfolds a multistage attack chain, according to recent research from Cisco Talos threat intelligence researcher Joey Chen. Lotus Blossom, which has been in active operation since 2012, first issues a series of commands into Windows Management Instrumentation (WMI) to gain information related to user accounts, network configurations, process activities, and directory structures, he noted. The origin of Lotus Blossom — also known as Spring Dragon, Billbug, and Thrip — is unclear. While some researchers such as those at Symantec have referred to the actor as being China-based, Cisco Talos’ recent post stops short of attribution, only noting that the threat actor targets “areas including the Philippines, Vietnam, Hong Kong and Taiwan.” – https://www.darkreading.com/threat-intelligence/espionage-lotus-blossom-south-east-asia

North Korea’s Latest ‘IT Worker’ Scheme Seeks Nuclear Funds

(Kristina Beek – Dark Reading – 4 March 2025) North Korean-linked hackers are picking up new tactics within the ongoing fake IT worker schemes, impersonating individuals trying to obtain remote employment. In the latest example, the IT workers impersonate Vietnamese, Japanese, and Singaporean nationals seeking engineering and full-stack developer positions within the US and Japan. Human risk security firm Nisos is tracking the campaign, sharing that its researchers have identified six personas in the scheme, two of which have already acquired jobs and four of which are still on the hunt. – https://www.darkreading.com/remote-workforce/north-korea-it-worker-scheme-nuclear-funds

Latin American Orgs Face 40% More Attacks Than Global Average

(Nate Nelson – Dark Reading – 3 March 2025) Cyber threats are accelerating faster in Latin America than anywhere else in the world. The trend has been building for at least a year. Last summer, Check Point tracked a 53% year-over-year rise in weekly cyberattacks against organizations in the region, followed at a distance by Africa (37%) and Europe (35%). Today, the cybersecurity company reports, Latin American companies suffer 2,569 attacks per week on average — nearly 40% more than the global average of 1,848. Critical industries like healthcare, communications, and governments and militaries are frequently hounded — those organizations often face around 3,000 to 4,000 attacks per week — but even ordinary citizens are feeling the heat, primarily through their financial apps and institutions. – https://www.darkreading.com/cybersecurity-analytics/latin-american-orgs-more-cyberattacks-global-average

TikTok’s Teen Data Use Probed by UK Regulators

(Becky Bracken – Dark Reading – 3 March 2025) The United Kingdom’s Information Commissioner’s Office (ICO) wants TikTok, Imgur, and Reddit to open up their algorithms and prove that they are not using teenagers’ personal data to feed them content recommendations. The ICO passed a children’s code of conduct in 2021, which requires platforms to take “age assurance measures,” including tools that help estimate a child’s age, and protect them from potentially harmful content, the ICO said in its statement. “In announcing these investigations, we are making it clear to the public what action we are currently taking to ensure children’s information rights are upheld,” John Edwards, the UK’s Information Commissioner, said in a statement about the launch of the investigation. “This is a priority area, and we will provide updates about any further action we decide to take.” –  https://www.darkreading.com/application-security/tiktok-teen-data-use-probed-regulators

Defense, Intelligence, and Warfare

Trained on classified battlefield data, AI multiplies effectiveness of Ukraine’s drones: Report

(Sydney J. Freedberg Jr. – Breaking Defense – 6 March 2025) Ukraine has taken publicly available AI models, retrained them on its own extensive real-world data from frontline combat, and deployed them on a variety of drones — increasing their odds of hitting Russian targets “three- or four-fold,” according to a new think tank report. “By removing the need for constant manual control and stable communications … drones enabled with autonomous navigation raise the target engagement success rate from 10 to 20 percent to around 70 to 80 percent,” writes Ukrainian-American scholar Kateryna Bondar, a former advisor to Kyiv, in a new report released today by the Center for Strategic and International Studies. “These systems can often achieve objectives using just one or two drones per target rather than eight or nine.” To be clear, Ukraine has not built the Terminator. “We’re very far from killer robots,” Bondar told Breaking Defense in an exclusive interview. But in contrast to the more cautious bureaucracy of the West, she said, “the Ukrainians are more open to testing and trying anything and everything that can kill more Russians.” – https://breakingdefense.com/2025/03/trained-on-classified-battlefield-data-ai-multiplies-effectiveness-of-ukraines-drones-report/

Turkish-Italian venture adds new force to Europe’s drone market

(Tom Kington – Defense News – 6 March 2025) Turkish UAV champion Baykar announced a deal with Italy’s Leonardo on Thursday to grab a slice of Europe’s €100 billion ($108 billion) drone market and possibly offer a Turkish drone as a candidate to be the GCAP fighter’s ‘Loyal Wingman’. Using Baykar platforms and Leonardo electronics and radars, a planned 50-50 joint venture envisages drone assembly in Turkey but also at Leonardo facilities in Italy, which would ease certification for selling in a European market worth €100 billion over the next ten years, the firms said. – https://www.defensenews.com/global/europe/2025/03/06/turkish-italian-venture-adds-new-force-to-europes-drone-market/

What Ukraine can teach Europe and the world about innovation in modern warfare

(Joyce Hakmeh – Chatham House – 5 March 2025) Ukraine’s war effort has become a case study in how necessity fuels innovation. In the face of a far larger and better-equipped adversary, Ukraine has built a defence-tech ecosystem that is reshaping the rules of modern combat. Ukraine’s success in this is not just about resilience or patriotism but about the ability to adapt, decentralize and leverage new technologies faster than its opponent. Nowhere is this more evident than in the country’s approach to drone warfare, where rapid development and deployment have allowed Ukraine to strike deep behind enemy lines and disrupt conventional military calculations. – https://www.chathamhouse.org/2025/03/what-ukraine-can-teach-europe-and-world-about-innovation-modern-warfare

Pentagon to build AI for war planning in Europe and Asia

(Patrick Tucker, Jennifer Hlad – Defense One – 5 March 2025) In a bid to accelerate military decision-making—particularly in the European and Indo-Pacific regions—the Pentagon has hired Scale AI to prototype an artificial-intelligence program to help plan military campaigns, test battle scenarios, anticipate threats, and more. Dubbed Thunderforge, the system is intended to enable commanders “to navigate evolving operational environments” using “advanced large language models (LLMs), AI-driven simulations, and interactive agent-based wargaming,” the Defense Innovation Unit said Wednesday in a statement. – https://www.defenseone.com/technology/2025/03/pentagon-build-ai-war-planning-europe-and-asia/403506/

Russia Capitalizes on Development of Artificial Intelligence in its Military Strategy

(Sergey Sukhankin – The Jamestown Foundation – 3 March 2025) Russia has significantly increased its investment in artificial intelligence (AI), allocating a substantial portion of its state budget toward AI-driven military research. This funding aims to enhance Russia’s technological edge in modern warfare, particularly in AI-enabled military applications. Russia’s full-scale invasion of Ukraine marked the first major conflict with widespread AI use. Ukraine, supported by U.S. AI firms, successfully countered Russian forces, prompting Russia to accelerate AI integration in command systems, drones, and air defense networks. Russia’s focus on and rapid development of AI has given it an advantage against Western weaponry regardless of the outcome of its invasion of Ukraine. Russia’s AI development traces back to early Soviet experiments in the 1960s. It was not until after its illegal annexation of Crimea in 2014, however, that Russia’s military AI development accelerated. – https://jamestown.org/program/russia-capitalizes-on-development-of-artificial-intelligence-in-its-military-strategy/

China’s new spy drone with 310-mile radar range can track US stealth jets

(Aamir Khollam – Interesting Engineering – 3 March 2025) China has deployed its most advanced long-range surveillance drone, the WZ-9 Divine Eagle, to the South China Sea. This move significantly boosts China’s reconnaissance capabilities in the region. Satellite imagery confirmed the drone’s presence at Ledong Air Base on Hainan Island, a key strategic location. The deployment of the WZ-9 represents a major challenge to US air superiority, as the aircraft is specifically designed to track and counter stealth technology. The deployment of the WZ-9 Divine Eagle aligns with China’s anti-access/area denial (A2/AD) strategy, aimed at enhancing control over the South China Sea. – https://interestingengineering.com/military/china-wz-9-drone-deployed

Frontiers

Paralyzed man controls robot arm for record 7 months with new brain chip

(Srishti Gupta – Interesting Engineering – 6 March 2025) A reality where a paralyzed patient thinks about moving his limbs while a robotic arm imitates his intention has finally been achieved at UC San Francisco, thanks to a recently developed brain-computer interface (BCI), a device that interprets brain signals and converts them into commands for motion. Most previously available BCIs worked for two days at most and were prone to disruption; this one, however, operated for a full seven months without major recalibration. The biggest advancement comes from the AI model that this BCI is built around. It adapts to natural shifts in brain activity over time, allowing the participant to refine his imagined movements. – https://interestingengineering.com/innovation/bci-lets-paralyzed-man-move-arm

China sets world record, ‘traps’ light for 4,035 seconds to boost quantum information

(Aman Tripathi – Interesting Engineering – 5 March 2025) Scientists at the Beijing Academy of Quantum Information Sciences (BAQIS) have shattered the world record for light storage. Recently, they successfully held light-based information for an unprecedented 4,035 seconds – over an hour. “Storing light has always been a challenge across the world,” said Liu Yulong, an associate researcher at BAQIS and the study’s first author, as reported by China’s Xinhua news agency. This remarkable feat represents a major step forward in the quest to harness the power of quantum mechanics. – https://interestingengineering.com/science/china-world-record-traps-light

Holistic Evaluation of Large Language Models for Medical Applications

(Stanford HAI – 28 February 2025) Large language models (LLMs) hold immense potential for improving healthcare, supporting everything from diagnostic decision-making to patient triage. They can now ace standardized medical exams such as the United States Medical Licensing Examination (USMLE). However, evaluating clinical readiness based solely on exam performance is akin to assessing someone’s driving ability using only a written test on traffic rules, a recent study finds. While LLMs can generate sophisticated responses to healthcare questions, their real-world clinical performance remains under-examined. In fact, a recent JAMA review found that only 5% of evaluations used real patient data, while the majority of studies evaluated performance on standardized medical exams. This state of affairs underscores the need for better evaluations that measure performance on real-world medical tasks, preferably using real clinical data when possible. – https://hai.stanford.edu/news/holistic-evaluation-of-large-language-models-for-medical-applications

Generative AI Tool Marks a Milestone in Biology

(Stanford HAI – 27 February 2025) Imagine being able to speed up evolution – hypothetically – to learn which genes might have a harmful or beneficial effect on human health. Imagine, further, being able to rapidly generate new genetic sequences that could help cure disease or solve environmental challenges. Now, scientists have developed a generative AI tool that can predict the form and function of proteins coded in the DNA of all domains of life, identify molecules that could be useful for bioengineering and medicine, and allow labs to run dozens of other standard experiments with a virtual query – in minutes or hours instead of years (or millennia). The open-source, all-access tool, known as Evo 2, was developed by a multi-institutional team co-led by Stanford HAI affiliate faculty Brian Hie, an assistant professor of chemical engineering and a faculty fellow in Stanford Data Science, and was partially funded by a Stanford HAI Hoffman-Yee Grant. Evo 2 was trained on a dataset that includes all known living species, including humans, plants, bacteria, amoebas, and even a few extinct species. Stanford Report talked to Hie about Evo 2’s advanced capabilities, why the scientific world is so eager to get its hands on this new tool, and how Evo 2 could reshape the biological sciences. – https://hai.stanford.edu/news/generative-ai-tool-marks-a-milestone-in-biology