HIGHLIGHTS 10 TO 14 FEBRUARY 2025
AI Action Summit (Paris 2025)
Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet – https://www.elysee.fr/en/emmanuel-macron/2025/02/11/statement-on-inclusive-and-sustainable-artificial-intelligence-for-people-and-the-planet
Opening Address by Prime Minister Shri Narendra Modi at the AI Action Summit, Paris – https://www.mea.gov.in/Speeches-Statements.htm?dtl/39020/Opening_Address_by_Prime_Minister_Shri_Narendra_Modi_at_the_AI_Action_Summit_Paris_February_11_2025
UN Secretary-General’s remarks at AI Action Summit – https://www.un.org/sg/en/content/sg/statement/2025-02-11/secretary-generals-remarks-ai-action-summit-scroll-down-for-english
Speech by President von der Leyen at the Artificial Intelligence Action Summit – https://ec.europa.eu/commission/presscorner/detail/en/speech_25_471
(Newsweek) Vice President JD Vance warned Tuesday that “excessive regulation” would kill the rapidly growing artificial intelligence industry. In a speech at the Paris AI Action Summit, Vance challenged Europe’s regulatory stance on artificial intelligence and content moderation on Big Tech platforms, highlighting a growing divide between the United States, its allies and China on AI governance. – https://www.msn.com/en-us/politics/government/jd-vance-exposes-ai-rift-with-allies-on-first-foreign-trip-as-vp/ar-AA1yPqcT?ocid=BingNewsSerp
(Global Times) World leaders seek common ground at Paris AI summit – https://www.globaltimes.cn/page/202502/1328270.shtml
Governance and Legislation
Applying International Human Rights Principles for AI Governance
(Sabhanaz Rashid Diya – Centre for International Governance Innovation – 12 February 2025) Despite gaining prominence, the fairness, accountability, transparency and ethics framework in artificial intelligence (AI) governance poses significant limitations. It is inadequately defined to meet the complexities of a pluralistic world, lacks consensus on normative values underpinning it, is prone to misuse and misrepresentation, and inadvertently promotes ethics washing. The International Bill of Human Rights, while not devoid of criticism and implementation challenges, provides a universal foundation for building consensus around value archetypes within and between societies. Canada can play a critical leadership role in international AI governance through the Global Digital Compact, as well as its membership in the Group of 20 and its presidency in the Group of Seven, by establishing human rights frameworks as a governance norm for AI systems. – https://www.cigionline.org/publications/applying-international-human-rights-principles-for-ai-governance/
A Parting CyberQuest
(Anthony M. Rutkowski – Lawfare – 12 February 2025) On Dec. 5, 2024, the Federal Communications Commission (FCC) Office of the Chairwoman issued a press release and accompanying fact sheet attempting to assert a broad new cybersecurity regulatory authority by creatively conjoining news coverage revelations of network hacking with an abstruse provision in a 1994 act on lawful interception. It was one day after the national security community collectively released extensive guidance on mitigating the related well-known hacking vulnerabilities—which was never mentioned by the FCC. Several weeks later, the FCC summarily declared cybersecurity authority over an array of U.S. telecommunications infrastructure to impose new regulations that include creation and notification of cybersecurity risk management and supply chain security plans. The designated incoming FCC chair published his strong objections. The next day, the White House published a related cybersecurity executive order (which was subsequently deleted). Although the FCC assertion attempt is certain to fail, the events underscored a continuing need in law and operational practices for instituting effective infrastructure cybersecurity. – https://www.lawfaremedia.org/article/a-parting-cyberquest
Japan Goes on Offense With New ‘Active Cyber Defense’ Bill
(Nate Nelson – Dark Reading – 12 February 2025) The Japanese government is on a mission to catch up to US national cybersecurity preparedness standards and has just passed bold legislation aimed at bolstering the country’s cyber-response capabilities. Together, the two pieces of legislation constitute what’s referred to as the Active Cyber Defense Bill, which enables the Japanese government to take more aggressive measures to stop cyberattacks before they can cause widespread damage. – https://www.darkreading.com/cybersecurity-operations/japan-offense-new-cyber-defense-bill
Uneven Adoption of Artificial Intelligence Tools Among U.S. Teachers and Principals in the 2023–2024 School Year
(Julia H. Kaufman, Ashley Woo, Joshua Eagan, Sabrina Lee, Emma B. Kassan – RAND Corporation – 11 February 2025) Using survey data from the RAND American Educator Panels, the authors examine the use of artificial intelligence (AI) tools and products among teachers and principals in kindergarten through grade 12 (K–12) and the provision of school guidance on the use of AI during the 2023–2024 school year. The results indicate that 25 percent of surveyed teachers used AI tools for their instructional planning or teaching. That said, English language arts and science teachers were nearly twice as likely to report using AI tools as mathematics teachers or elementary teachers of all subjects. Nearly 60 percent of U.S. principals reported using AI tools for their work. Teachers and principals in higher-poverty schools were less likely to report using AI tools than those in lower-poverty schools. In addition, principals in high-poverty schools reported providing guidance for use of AI less often than their counterparts in lower-poverty schools. These results have implications for district and school leaders, as well as AI tool developers and researchers. – https://www.rand.org/pubs/research_reports/RRA134-25.html
It’s the Algorithm, Stupid: Influence in the Age of Generative AI
(Matt Freear – RUSI – 11 February 2025) The rise of Chinese-made DeepSeek and the rapid spread of AI chatbots raise pressing questions about their implications for democracy and security. Given their superhuman ability to learn and influence, there is an urgent need to strengthen user literacy. – https://www.rusi.org/explore-our-research/publications/commentary/its-algorithm-stupid-influence-age-generative-ai
Digital Data and Advanced AI for Richer Global Intelligence
(Danielle Goldfarb – Centre for International Governance Innovation – 11 February 2025) From collecting millions of online supermarket prices to measure inflation, to assessing the economic impact of the COVID-19 pandemic on low-income workers, digital data sets can be used to benefit the public interest. Using these and other examples, this special report explores how digital data sets and advances in artificial intelligence (AI) can provide timely, transparent and detailed insights into global challenges. These experiments illustrate how governments and civil society analysts can reuse digital data to spot emerging problems, analyze specific group impacts, complement traditional metrics or verify data that may be manipulated. AI and data governance should extend beyond addressing harms. International institutions and governments need to actively steward digital data and AI tools to support a step change in our understanding of society’s biggest challenges. – https://www.cigionline.org/publications/digital-data-and-advanced-ai-for-richer-global-intelligence/
Systemic Risk Assessments Hold Clues for EU Platform Enforcement
(David Sullivan – Lawfare – 11 February 2025) In late November 2024, 19 of the EU’s largest internet platforms and search engines—each with more than 45 million EU-based users—began satisfying some of the most innovative and far-reaching elements of the European Union’s flagship platform regulation, the Digital Services Act (DSA). The DSA regulates online intermediaries, seeking to reduce illegal and harmful content. As part of these regulations, companies must report on their assessment and mitigation of “systemic” risks, provide an independent audit of the service’s compliance with its DSA obligations, and develop a response to the audit’s findings. – https://www.lawfaremedia.org/article/systemic-risk-assessments-hold-clues-for-eu-platform-enforcement
Network architecture for global AI policy
(Cameron F. Kerry, Joshua P. Meltzer, Andrea Renda, Andrew W. Wyckoff – Brookings – 10 February 2025) Artificial intelligence caught the attention of a few governments a decade ago. It has become a preoccupation for many since the watershed 2022 release of ChatGPT. In turn, that development set off a tsunami of policy initiatives across many national governments, most multilateral organizations, and diverse evolving groups and ad-hoc coalitions. These efforts seek both to realize the opportunities for AI to expand the frontiers of science and human capabilities and contribute to productivity and creativity, and to identify and mitigate risks that AI presents to humans and society. – https://www.brookings.edu/articles/network-architecture-for-global-ai-policy/
Securing Cyberspace Conference 2024: Responsible Cyber Behaviour in Practice
(Sara Seppanen – RUSI – 10 February 2025) In October 2024, RUSI’s Cyber Research Group organised its inaugural Securing Cyberspace Conference. The Securing Cyberspace Conference 2024 gathered over 100 participants to discuss domestic and regional views on the conference theme ‘Responsible Cyber Behaviour in Practice: A Global View’. The programme covered topics including international law and norms, responsible use and development of tools and technologies – including AI – and the responsibility of states in conducting and responding to cyber operations and incidents. The event was part of RUSI’s ongoing Global Partnership for Responsible Cyber Behaviour work. – https://www.rusi.org/explore-our-research/publications/conference-reports/securing-cyberspace-conference-2024-responsible-cyber-behaviour-practice
Lawmakers move to ban DeepSeek’s AI from government devices
(Edward Graham – NextGov – 10 February 2025) House lawmakers are seeking to bar federal employees from downloading Chinese startup DeepSeek’s artificial intelligence chatbot app onto their government-issued devices over national security concerns. The bipartisan proposal — introduced on Feb. 6 by Reps. Josh Gottheimer, D-N.J., and Darin LaHood, R-Ill. — comes in response to the sudden emergence of DeepSeek’s AI app, which rivals the capabilities of U.S. genAI competitors like OpenAI while having been developed for a fraction of the cost. The bill is co-sponsored by 16 other lawmakers from both sides of the aisle. – https://www.nextgov.com/artificial-intelligence/2025/02/lawmakers-move-ban-deepseeks-ai-government-devices/402886/?oref=ng-homepage-river
Building International Partnerships to Combat Foreign Cyberattacks
(Julia Dickson, Emily Harding – Lawfare – 9 February 2025) Cyberattacks by adversary states and criminal organizations cost Americans more than $12.5 billion in 2023 alone. Most malicious cyber activity, however, is conducted by actors operating outside the United States using foreign infrastructure, making it challenging for U.S. law enforcement to address. The incoming Trump administration must expand international collaboration to stop this crime wave. A good place to start is by building regional, collaborative law enforcement hubs to combat malicious cyber activity. These hubs should be locally organized and run, but seed funded by the United States and its allies. The hubs should be virtual for the first year and then evolve into brick-and-mortar collaborative spaces to build community and trust for deeper information sharing. Over time, seamless, up-to-the-minute collaboration will reduce the dark corners of internet infrastructure where criminals like to hide, and these hubs will prove a low-cost, high-impact way to shore up U.S. alliances in areas of the globe poised for dramatic growth. Initial hubs could be established in key partner-states in East Africa, Latin America, and Southeast Asia, with more regional partners brought on board as the program develops. – https://www.lawfaremedia.org/article/building-international-partnerships-to-combat-foreign-cyberattacks
The Changing Landscape of European Privacy Enforcement
(Kenneth Propp – Lawfare – 7 February 2025) The European Union’s agenda, like the old Soviet Union’s economic planning, operates in five-year increments. During European Commission President Ursula von der Leyen’s first five-year term from 2019 to 2024, implementing the European Union’s 2018 General Data Protection Regulation (GDPR) was an initial digital policy priority. Safeguarding transatlantic data transfers became another, after 2020, when the Court of Justice of the European Union (CJEU) struck down a transatlantic agreement for personal data transfer (the Privacy Shield). By 2023, a successor (the EU-U.S. Data Privacy Framework) was in place. As risk to transatlantic data transfers thereafter receded, U.S. digital policymakers shifted their attention to three major new EU digital legislative initiatives—the 2022 Digital Services Act (DSA) and Digital Markets Act (DMA), and the 2024 Artificial Intelligence Act (AIA). Von der Leyen’s just-begun second term, which started at the end of 2024, will emphasize implementation of these new landmark laws, as her mission letter to Henna Virkkunen, the responsible commissioner, emphasized. Several ongoing DMA investigations are scrutinizing advertising-related practices of U.S. technology giants. Privacy litigation involving data transfers to the United States has not gone away, however, and indeed seems destined to expand. One privacy activist’s challenge to the DPF is due to be taken up by an EU court soon, and rumors of a second case are becoming more concrete. In addition, European privacy nongovernmental organizations are poised to take advantage of new procedural possibilities for class-action-style litigation and for enhanced damages recovery, as detailed in the sections below. Europe’s changing privacy enforcement landscape could thus emerge as a significant policy issue during Trump’s and von der Leyen’s second terms. – https://www.lawfaremedia.org/article/the-changing-landscape-of-european-privacy-enforcement
OECD launches global framework to monitor application of G7 Hiroshima AI Code of Conduct
(OECD.AI – 7 February 2025) The Organisation for Economic Co-operation and Development (OECD) launched the first global framework for companies to report on their efforts to promote safe, secure, and trustworthy AI. This initiative monitors the application of the Hiroshima Process International Code of Conduct for Organisations Developing Advanced AI Systems, a central component of the Hiroshima AI Process launched during Japan’s G7 Presidency. – https://www.oecd.org/en/about/news/press-releases/2025/02/oecd-launches-global-framework-to-monitor-application-of-g7-hiroshima-ai-code-of-conduct.html
A Multistakeholder Model of Cyber Peace
(Jean-Marie Guéhenno, Olivia Grinberg, Jason Healey – Lawfare – 7 February 2025) The Russian NotPetya cyberattack of 2017 not only wiped 10 percent of all computers in Ukraine—where it was targeted—but also indiscriminately cascaded around the world, causing approximately $10 billion in damage. Another Russian attack, just one hour before their troops rolled across the Ukrainian border in 2022, disrupted the Viasat satellite communication network, taking offline “more than 5,800 wind turbines belonging to the German energy company Enercon” and internet service in France, the Czech Republic, and the United Kingdom. These cases illustrate that disruptive cyber campaigns are spilling out of conflict zones to affect everyone, even those far from the fighting. Would-be cyber peacekeepers have no effective way to protect civilians in these situations, unlike in traditional conflict. To deal with the nature of cyber conflict, the world needs a new, multistakeholder model for cyber peace. – https://www.lawfaremedia.org/article/a-multistakeholder-model-of-cyber-peace
Geostrategies
DeepSeek and the shifting tides of the US-China AI race
(Sameer Patil, Sauradeep Bag – Observer Research Foundation – 12 February 2025) The emergence of DeepSeek as a formidable Artificial Intelligence (AI) contender last week has raised unsettling questions about the conventional wisdom surrounding AI development—particularly the belief that winning the AI race is purely a function of pouring billions into graphics processing units (GPUs). Despite operating with seemingly fewer and less advanced chips, DeepSeek has managed to produce models that rival America’s best, challenging chipmaker Nvidia’s dominance in AI infrastructure. Market signals suggest investors remain steadfast in their faith in the American AI chip giant. Some dismiss DeepSeek’s efficiency claims as posturing, but others see merit. Even if true, it may have simply optimised around American models trained on superior hardware. The stark reality? A Chinese AI now matches the best US models—at a fraction of the cost. – https://www.orfonline.org/expert-speak/deepseek-and-the-shifting-tides-of-the-us-china-ai-race
Competition for Control of Rare Earths Triggering Great Power Conflict in Central Asia
(Paul Goble – The Jamestown Foundation – 11 February 2025) Control of access to rare earth minerals that are critical for the development of technologies is a driver in the strategic thinking of Western powers, as well as the People’s Republic of China and Russia. Recent developments in Central Asia highlight the growing importance of rare earth minerals in global geopolitics as these resources are now central to technological and strategic power. Central Asia as a region risks falling into conflict as governments prefer to involve multiple actors so that no single foreign power can hold sway over their countries and undermine their central governments. – https://jamestown.org/program/competition-for-control-of-rare-earths-triggering-great-power-conflict-in-central-asia/
Predicting the Next ‘DeepSeek Event’: Early Indicators of Capability Within the PRC’s AI Ecosystem
(Matthew Johnson – The Jamestown Foundation – 11 February 2025) Predicting artificial intelligence (AI) firm DeepSeek’s recent successes within a highly competitive AI ecosystem may have been possible by observing factors such as government recognition, proximity to top-tier national research institutions, and a complex network of corporate affiliates with proven technology expertise. Indicators of ties to the Party-state include DeepSeek’s Beijing arm being named one of thirty “main drafting units” for a national data security standards plan in 2023 and the designation of DeepSeek affiliate High-Flyer Technology as a national “high-tech enterprise” in 2020 and 2023. DeepSeek has built a strategic presence in Beijing, a leading hub for AI research, despite being headquartered in Hangzhou. This has fueled online speculation that it benefits from state support. External validations of High-Flyer/DeepSeek’s growing capability marked DeepSeek as a sophisticated innovator well before the market-shifting release of its R1 open source model. – https://jamestown.org/program/predicting-the-next-deepseek-event-early-indicators-of-capability-within-the-prcs-ai-ecosystem/
Challenger and Incumbent Tools for U.S.-China Tech Competition
(Mark Thomas – Lawfare – 11 February 2025) On Jan. 20, Chinese company DeepSeek released its R1 model, defying American dominance of AI. The model, which was built through software optimization rather than expensive microchip investments, beats leading competitors like OpenAI in several metrics and costs a tiny fraction of their development expenditure. The model’s innovative approach spooked microchip investors, prompting the largest single-day market cap loss in U.S. history for Nvidia. DeepSeek is a harbinger of things to come. Regardless of which country takes the lead in AI, China will deploy its ample resources to lead the world in some, if not many, technological domains. – https://www.lawfaremedia.org/article/challenger-and-incumbent-tools-for-u.s.-china-tech-competition
China’s technological ascent: Surpassing Western institutions
(Manoj Joshi – Observer Research Foundation – 10 February 2025) According to an index brought out by the prestigious scientific journal Nature, a Chinese regional university, Sichuan University (SCU) in Chengdu, has recently overtaken Stanford University, MIT, Oxford and the University of Tokyo to emerge as the 11th top university in terms of science research output. The index assesses institutions by their contributions to articles in a slate of top-tier scientific journals. The Nature Index measures five areas—biological sciences, chemistry, Earth and environmental sciences, health sciences, and physical sciences. While Harvard retains the top spot, the nine universities ranked immediately below it, followed by SCU in 11th place, are all in China. While China has recently garnered global headlines because of the AI company DeepSeek, it has already gathered considerable momentum in the last decade by its efforts to become a science and technology power. – https://www.orfonline.org/expert-speak/china-s-technological-ascent-surpassing-western-institutions
Australia needs Australian AI
(Jocelinn Kang – ASPI The Strategist – 10 February 2025) Australia must do more to shape its artificial intelligence future. The release of DeepSeek is a stark reminder that if Australia does not invest in its own AI solutions, it will remain reliant on foreign technology—technology that may not align with its values and often carries the imprints of its country of origin. – https://www.aspistrategist.org.au/australia-needs-australian-ai/
DeepSeek’s Disruption: Geopolitics and the Battle for AI Supremacy
(Tobias Feakin – RUSI – 7 February 2025) The rise of DeepSeek signals a profound shift in the global AI landscape, challenging the foundations of US technological dominance. Until now, Washington’s AI strategy hinged on controlling access to high-performance computing and advanced semiconductors, enforcing export controls to constrain China’s innovation. DeepSeek’s ability to develop a powerful AI model with significantly lower computational costs upends this paradigm. The game has changed, and a new phase in the race for AI supremacy has begun. – https://www.rusi.org/explore-our-research/publications/commentary/deepseeks-disruption-geopolitics-and-battle-ai-supremacy
Trump, Stargate, DeepSeek: A new, more unpredictable era for AI?
(Isabella Wilkinson – Chatham House – 7 February 2025) 2025 is already proving a whiplash year for leaps and investments in artificial intelligence. On 19 January, China announced an AI investment fund, viewed as a response to tightened US export controls on chips. On 21 January, US President Donald Trump announced the Stargate Project, a company that he said would invest an unprecedented $500 billion in developing US AI infrastructure, backed by technology companies OpenAI and Oracle, Japanese bank SoftBank and the Emirati sovereign wealth fund, MGX. – https://www.chathamhouse.org/2025/02/trump-stargate-deepseek-new-more-unpredictable-era-ai
What does the TikTok saga reveal about China-US relations?
(Sun Chenghao and Chen Siyao – Brookings – 7 February 2025) The sudden influx of American users onto RedNote (Xiaohongshu), now dubbed the “TikTok refugees,” represents far more than a mere platform migration. Triggered by the impending TikTok ban in the United States, this phenomenon encapsulates a broader narrative: the intersection of technological governance, cultural exchange, and digital sovereignty in an increasingly fragmented online landscape. How do Chinese observers interpret the motivations behind America’s TikTok ban saga? What strategic scenarios are emerging in China regarding TikTok’s future trajectory? How do these narratives shape and reflect China’s views of the complex China-U.S. relations? Examining these questions offers critical insights into the shifting dynamics of global technological competition and the evolving contours of bilateral relations in a contested digital age. – https://www.brookings.edu/articles/what-does-the-tiktok-saga-reveal-about-china-us-relations/
The crisis in Western AI is real
(Charles Ferguson – ASPI The Strategist – 7 February 2025) The release of the Chinese DeepSeek-R1 large language model, with its impressive capabilities and low development cost, shocked financial markets and led to claims of a ‘Sputnik moment’ in artificial intelligence. But a powerful, innovative Chinese model achieving parity with US products should come as no surprise. It is the predictable result of a major US and Western policy failure, for which the AI industry itself bears much of the blame. – https://www.aspistrategist.org.au/the-crisis-in-western-ai-is-real/
DeepSeek’s Lesson: America Needs Smarter Export Controls
(Ashley Lin, Lennart Heim – RAND Corporation – 5 February 2025) Last December, the Chinese AI firm DeepSeek reported training a GPT-4-level model for just $5.6 million, challenging assumptions about the resources needed for frontier AI development. This perceived cost reduction, and DeepSeek’s cut-rate pricing for its advanced reasoning model R1, have left tech stocks plunging and sparked a debate on the effectiveness of U.S. export controls on AI chips. – https://www.rand.org/pubs/commentary/2025/02/deepseeks-lesson-america-needs-smarter-export-controls.html
Defense, Intelligence, and Warfare
Survival of the quickest: Military leaders aim to unleash, control AI
(Rudy Ruitenberg – Defense News – 13 February 2025) Artificial intelligence is massively accelerating military decision making, and armed forces that don’t keep up risk being outmatched, the NATO commander in charge of strategic transformation at the alliance said at the AI Action Summit in Paris. Alliance members are now using AI in the decision-making loop of observe, orient, decide and act, NATO Supreme Allied Commander Transformation Adm. Pierre Vandier said at a conference focused on military AI. Analysis that previously took hours or days, such as processing large amounts of sensor data, can now be done in a matter of seconds, he said. – https://www.defensenews.com/global/europe/2025/02/13/survival-of-the-quickest-military-leaders-aim-to-unleash-control-ai/
Lawmakers ask DNI to reassess UK cyber, intel ties over Apple backdoor mandate
(David DiMolfetta – NextGov – 13 February 2025) A bipartisan, bicameral pair of lawmakers urged newly confirmed Director of National Intelligence Tulsi Gabbard to reevaluate U.S. cybersecurity and intelligence-sharing relations with the United Kingdom in response to a report revealing that the UK secretly ordered Apple to build a backdoor into encrypted iCloud backups. The Feb. 7 report from the Washington Post says that the order issued last month demands UK law enforcement and intelligence operatives be granted worldwide, unfettered access to users’ protected cloud data. Apple customers residing in the United States would be cast into that dragnet. – https://www.nextgov.com/cybersecurity/2025/02/lawmakers-ask-dni-reassess-uk-cyber-intel-ties-over-apple-backdoor-mandate/403005/?oref=ng-homepage-river
Intelligence agencies must explain what they do, says UK’s former cyber spy chief
(Alexander Martin – The Record – 13 February 2025) Amid a growing scandal over the British government’s reported attempt to force Apple to provide the country’s authorities with access to encrypted iCloud accounts, a former intelligence chief has called for more transparency from spy agencies. Speaking at the Munich Cyber Security Conference, Sir Jeremy Fleming — who headed the cyber and signals intelligence agency GCHQ from 2017 to 2023 — said he felt “really strongly” the agency’s “license to operate” had to be based on public understanding and trust. – https://therecord.media/intel-agencies-must-explain-what-they-do-fleming-gchq
America’s ‘Iron Dome’ is going to need a lot more sensors: NORTHCOM
(Meghann Myers – Defense One – 13 February 2025) Constructing an American version of Israel’s Iron Dome missile defense system will require much better missile-detection technology, the head of U.S. Northern Command said. The Trump administration’s renewed focus on air defense dovetails with warnings that leaders of NORTHCOM and its sister command, North American Aerospace Defense Command (NORAD), have sounded about U.S. detection capabilities in recent years, Air Force Gen. Gregory Guillot said at a Senate Armed Services Committee hearing. – https://www.defenseone.com/defense-systems/2025/02/americas-iron-dome-going-need-lot-more-sensors-northcom/402998/?oref=d1-featured-river-secondary
With IVAS takeover, Anduril looks to build out human-machine ‘ecosystem’
(Patrick Tucker – Defense One – 13 February 2025) Anduril has seized the lead on the Army’s IVAS headset program, putting the eight-year-old company in charge of one of the military’s most important soldier-enhancement programs, and poising it to deliver not just new drones but also a key means of controlling them and the data they gather. On Tuesday, the company announced that it would take over development and production of the Integrated Visual Augmentation System from Microsoft, whose stewardship of the $22 billion program was beset by delays, development problems, and cost overruns. – https://www.defenseone.com/business/2025/02/ivas-takeover-anduril-looks-build-out-human-machine-ecosystem/403009/?oref=d1-featured-river-top
Heven Drones unveils new hydrogen-powered, long range UAV at IDEX
(Seth J. Frantzman – Breaking Defense – 13 February 2025) Heven Drones is unveiling a new, hydrogen-powered unmanned aerial system ahead of the upcoming IDEX conference, as the company seeks to expand its footprint in the Gulf. Dubbed the Raider, the new platform is “tailored to provide extended endurance, versatile payload options, and field-ready modularity, addressing critical challenges faced by modern operators,” the company said in a statement. Speaking to Breaking Defense, company CEO Benzion Levinson said the decision to pursue the new design was driven by the recent conflicts in Ukraine and Israel, which have shown drones are moving from being “flying cameras” to being “flying robots.” – https://breakingdefense.com/2025/02/heven-drones-unveils-new-hydrogen-powered-long-range-uav-at-idex/
SDA asks industry to propose 60-day studies of ‘novel’ capabilities for Iron Dome
(Theresa Hitchens – Breaking Defense – 12 February 2025) The Space Development Agency (SDA) is soliciting “executive summaries” from interested vendors for fast-track studies of how the agency’s Proliferated Warfighter Space Architecture (PWSA) satellite network in low Earth orbit can be best exploited to support President Donald Trump’s “Iron Dome For America” missile shield. “SDA is interested in industry’s perspective on implementing the Iron Dome for America architecture, and is particularly interested in building on and integrating PWSA’s current contributions to global kill chains and missile defense,” the agency wrote in a Feb. 11 solicitation. The agency is asking for “novel architecture concepts, systems, technologies, and capabilities that enable leap-ahead improvements for future [PWSA] tranches, capability layers, or, enable new capability layers to address other emerging or evolving warfighter needs,” it adds. – https://breakingdefense.com/2025/02/sda-asks-industry-to-propose-60-day-studies-of-novel-capabilities-for-iron-dome/
GAO calls on Coast Guard to improve cyber for Maritime Transportation System
(Carley Welch – Breaking Defense – 12 February 2025) The Government Accountability Office released a report today calling for the US Coast Guard to improve the cybersecurity infrastructure of the Maritime Transportation System (MTS), the complex network of ports, waterways, ships and other vessels that are used to transport goods and passengers. The government watchdog found several gaps in the MTS’s cybersecurity practices in its study, which was conducted as a result of the 2023 National Defense Authorization Act. These include inconsistencies in cyber incident data, competency gaps among cyber professionals, and the lack of a cohesive cybersecurity strategy to protect the MTS. – https://breakingdefense.com/2025/02/gao-calls-on-coast-guard-to-improve-cyber-for-maritime-transportation-system/
US cyber vulnerabilities fuel N. Korea’s nuclear arsenal, but solutions are near: DARPA official
(Carley Welch – Breaking Defense – 11 February 2025) The US’s vulnerable cybersecurity systems are indirectly allowing North Korea to bolster its nuclear arsenal, but thanks to existing technology this can be easily avoided, an official from the Defense Advanced Research Projects Agency said Monday. North Korea is able to use the funds it acquires from ransomware attacks on US systems and those of other countries to pay for the development of nuclear weapons, Kathleen Fisher, director of DARPA’s Information Innovation Office, said. – https://breakingdefense.com/2025/02/us-cyber-vulnerabilities-fuel-n-koreas-nuclear-arsenal-but-solutions-are-near-darpa-official/
Promising biotech startups ‘dying on the vine’: In-Q-Tel
(Sydney J. Freedberg Jr. – Breaking Defense – 10 February 2025) Driven by the spread of CRISPR gene-editing techniques and rapid advances in AI-powered biochemical models, biotechnology is taking off. But not all biotech is equal in an investor’s eyes, and the biotech sectors that face the hardest fight for funding — at least in the United States — happen to be the ones with the most promise for the Pentagon. While tens of billions of dollars pour into biotech every year in the US and Europe, it overwhelmingly goes to pharmaceutical products, everything from lifesaving medicines like mRNA COVID vaccines to mass-market cosmetic creams, because those investments have the shortest and most obvious path to high returns. By contrast, there’s much less commercial demand — and a more confusing regulatory process — for other promising biotech that might matter for the military. That includes new, biologically-derived materials like lighter-weight body armor, anti-corrosion coatings or even explosives. – https://breakingdefense.com/2025/02/promising-biotech-startups-dying-on-the-vine-in-q-tel/
Security
China’s Salt Typhoon hackers targeting Cisco devices used by telcos, universities
(Jonathan Greig – The Record – 13 February 2025) China’s Salt Typhoon campaign to breach telecommunications companies has continued through the new year despite efforts by governments to stop the hackers, researchers said Thursday. Recorded Future’s Insikt Group identified a campaign in December and January that involved attempts to compromise more than 1,000 Cisco network devices globally, many of which are associated with telecommunications providers. – https://therecord.media/china-salt-typhoon-cisco-devices
India’s Cybercrime Problems Grow as Nation Digitizes
(Robert Lemos – Dark Reading – 12 February 2025) India continues to see a surge in cybercrime affecting both citizens and businesses, with cyber fraud against citizens jumping 51% over the past year and cyberattackers targeting businesses in volumes significantly higher than global averages. Overall, Indian citizens filed more than 1.7 million cybercrime complaints in 2024, up from 1.1 million complaints in 2023, according to the latest data from India’s National Cyber Reporting Platform (NCRP) released in early February. While many of those cyber scams came from domestic sources, about 45% of the cyberattacks came from cybercriminal havens in Cambodia, Myanmar, and Laos, according to the report. – https://www.darkreading.com/cyber-risk/indias-cybercrime-problems-nation-digitizes
Cybercrime evolving into national security threat: Google
(Jonathan Greig – The Record – 12 February 2025) Cybercrime continues to expand and evolve and has become a national security-level threat that is enabling more attacks by state-backed groups, Google warned in a new report. Released ahead of the Munich Security Conference, the Google Threat Intelligence Group and Mandiant research covers their investigations throughout 2024 and observations from the last four years. – https://therecord.media/cybercrime-evolving-nation-state-threat
CHERI Security Hardware Program Essential to UK Security, Says Government
(James Coker – Infosecurity Magazine – 12 February 2025) The UK government-backed Digital Security by Design (DSbD) initiative must succeed to systematically address rising cyber risks to the nation, according to the National Cyber Security Centre’s (NCSC) CTO, Ollie Whitehouse. Whitehouse made the remarks during an event showcasing the technological advances from the ambitious program, which aims to secure the underlying computer hardware used in the UK. – https://www.infosecurity-magazine.com/news/cheri-security-hardware-uk-security/
Europol Warns Financial Sector of “Imminent” Quantum Threat
(Phil Muncaster – Infosecurity Magazine – 10 February 2025) Europe’s financial services sector must begin planning now for the transition to quantum-safe cryptography, as the risk of “store now decrypt later” (SNDL) attacks grows, Europol has warned. – https://www.infosecurity-magazine.com/news/europol-warns-financial-sector/
Frontiers
How Disruptive Is DeepSeek? Stanford HAI Faculty Discuss China’s New Model
(Vanessa Parli – Stanford HAI – 13 February 2025) In recent weeks, the emergence of China’s DeepSeek — a powerful and cost-efficient open-source language model — has stirred considerable discourse among scholars and industry researchers. At the Stanford Institute for Human-Centered AI (HAI), faculty are examining not merely the model’s technical advances but also the broader implications for academia, industry, and society globally. – https://hai.stanford.edu/news/how-disruptive-deepseek-stanford-hai-faculty-discuss-chinas-new-model
Phoenix Partners with China’s Origin Quantum on Decentralized AI, DePIN Network
(Quantum Insider – 12 February 2025) Phoenix has partnered with Origin Quantum to integrate its 72-qubit superconducting quantum chip into a decentralized AI and compute network, aiming to make quantum computing more accessible. The partnership builds on Origin Quantum’s “Wukong” superconducting quantum computer, which features 72 working qubits and 126 coupler qubits, supporting applications in biosciences, material engineering, AI, and optimization. Phoenix is developing QuantumVM, a web-based quantum computing platform that will allow users to run quantum applications without coding expertise, with an expected release in early Q2 2025. – https://thequantuminsider.com/2025/02/12/phoenix-partners-with-chinas-origin-quantum-on-decentralized-ai-depin-network/
BTQ and Coxwave Partner to Develop AI Chatbots for Quantum Education and Research
(Quantum Insider – 12 February 2025) BTQ Technologies and Coxwave partnered to develop AI-driven chatbots for quantum education and research, supported by a $117,000 grant from South Korea’s “AI Voucher” program. The collaboration will produce two AI-powered tools: AI Tutor, which simplifies quantum concepts for the public, and AI Assistant, which helps researchers analyze complex quantum topics. By combining Coxwave’s AI analytics with BTQ’s quantum expertise, the chatbots will potentially enhance learning and research efficiency, making quantum science more accessible. – https://thequantuminsider.com/2025/02/12/btq-and-coxwave-partner-to-develop-ai-chatbots-for-quantum-education-and-research/
Will 2025 mark the beginning of practically useful quantum computers?
(Prateek Tripathi – Observer Research Foundation – 10 February 2025) A major impediment facing quantum computing has been scalability, the ability to significantly increase the number of qubits in a quantum computer. This has been particularly difficult in light of error correction, which has been a problem pretty much since the genesis of quantum computers. There have been several attempts to address the issue, but none of them panned out. In 2024, however, some developments showed real promise. In particular, in a paper recently published in Nature, Google has claimed that it has been able to surmount this obstacle using its “Willow” quantum processor, thereby paving the way for large-scale quantum computers to become a practical reality in the near future. – https://www.orfonline.org/expert-speak/will-2025-mark-the-beginning-of-practically-useful-quantum-computers
Advancing Responsible Healthcare AI with Longitudinal EHR Datasets
(Jason Alan Fries, Michael Wornow, Ethan Steinberg, Zepeng Frazier Huo, Hejie Cui, Suhana Bedi, Alyssa Unell, Nigam Shah – Stanford HAI – 10 February 2025) Current evaluations of AI models in healthcare rely on limited datasets like MIMIC, lacking complete patient trajectories. New benchmark datasets offer an alternative. – https://hai.stanford.edu/news/advancing-responsible-healthcare-ai-longitudinal-ehr-datasets