Daily Digest on AI and Emerging Technologies (17 February 2025)

Top of the Day

Responsible AI and Civilian Protection in Armed Conflict

(Daniel R. Mahanty, Kailee Hilt – Centre for International Governance Innovation – 14 February 2025) While the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy holds promise, its supporters should place greater emphasis on how the implementation of its principles will lead to better protection of civilians in armed conflict, especially when combined with other measures not limited to the use of artificial intelligence (AI) and autonomy. This policy brief argues that the responsible use invoked by the declaration should not result in only marginally better protection of civilians (PoC) outcomes than “irresponsible” use, but should instead achieve markedly better ones. Giving meaning to the declaration’s implied PoC content depends on whether the expansion of its membership and stewardship of the process raises the ceiling or lowers the floor for responsible use. National and multilateral efforts to promote the responsible military use of AI should be connected to a renewed commitment among all states to mitigate harm to civilians resulting from all military operations, not only those that involve the use of AI. – https://www.cigionline.org/publications/responsible-ai-and-civilian-protection-in-armed-conflict/

Munich Cyber Security Conference 2025 – Police risk losing society’s trust in fight against cybercrime, warns Europol chief

(Alexander Martin – The Record – 14 February 2025) Law enforcement agencies risk losing the trust of the societies they protect unless those societies understand why new powers are needed to tackle surging levels of cybercrime, Europol’s chief warned on Thursday. Speaking at the Munich Cyber Security Conference, Catherine De Bolle — who took the reins at the agency in 2018 — defended law enforcement’s need to be able to lawfully access encrypted data amid controversy over one such attempt by the United Kingdom. – https://therecord.media/eurpol-chief-cybercrime-law-enforcement-powers-society-trust

Munich Cyber Security Conference 2025 – Putting the human back into AI is key, former NSA Director Nakasone says

(Dina Temple-Raston – The Record – 14 February 2025) A roster of officials from government, academia and industry gathered here Thursday to discuss how future workforces must marry the power of artificial intelligence with expertise only a human can provide. “Looking at the next generation of national security professionals, I want policy people who can code and coders who can do policy,” said General Paul Nakasone, former head of the National Security Agency, at the Munich Cyber Security Conference. “Five years ago Baby Boomers were replaced by Gen Z’ers, and five years from now it’ll be people born in 1997 — and it’s a workforce that understands data, large language models and speaks a lot of languages, including computer languages.” – https://therecord.media/putting-the-human-back-into-ai-is-key-nakasone

Munich Cyber Security Conference 2025 – India could play a key role in AI development, Infosys co-founder says

(Dina Temple-Raston – The Record – 14 February 2025) Nandan Nilekani, Indian billionaire and chairman of tech giant Infosys Limited, said that the country is poised to emerge as one of the biggest users and developers of artificial intelligence as it rapidly adapts to the digital world. “Every Indian has a digital ID that can be authenticated online and a lot of thought went into how to have an inclusive approach,” said Nilekani at the Munich Cyber Security Conference on Friday. “There are obvious [AI] use cases for us because we have developed technologies at scale.” – https://therecord.media/india-could-play-key-role-in-ai-development

Munich Cyber Security Conference 2025 – Ukraine warns of growing AI use in Russian cyber-espionage operations

(Daryna Antoniuk – The Record – 14 February 2025) Russia is increasingly using artificial intelligence to analyze data stolen in cyberattacks, making its operations more precise and effective, according to Ukrainian cyber officials. For years, Russian hackers have exfiltrated vast amounts of data from Ukrainian government agencies, military personnel, and ordinary citizens. However, analyzing and utilizing these large datasets has posed a challenge. Now, AI is helping to bridge that gap, according to Ihor Malchenyuk, director of the cyberdefense department at Ukraine’s State Service of Special Communications and Information Protection (SSCIP). – https://therecord.media/russia-ukraine-cyber-espionage-artificial-intelligence

Munich Cyber Security Conference 2025 – Ukraine struggles to counter Russian disinfo without US support, local cyber official says

(Daryna Antoniuk – The Record – 14 February 2025) The U.S. foreign aid freeze and a “dramatic” shift in the Trump administration’s approach to countering disinformation are leaving European nations increasingly vulnerable to Russian influence operations, a Ukrainian security official says. American funding has been instrumental in supporting Ukraine’s cybersecurity and counter-disinformation initiatives, said Natalia Tkachuk, head of cyber and information security at Ukraine’s National Security and Defense Council. – https://therecord.media/ukraine-russia-disinformation-us-foreign-aid

Munich Cyber Security Conference 2025 – Taiwan using AI to fight disinformation campaigns, former minister says

(Dina Temple-Raston – The Record – 14 February 2025) Taiwan’s first-ever minister of digital affairs, Audrey Tang, told an audience at the Munich Cyber Security Conference on Friday that the island nation is using AI to battle disinformation on social media. She said that the technology is helping officials pre-bunk Chinese influence operations targeting the island before they spread online. Taiwan’s National Security Bureau said the number of pieces of false or biased information distributed by China increased 60% in 2024, to 2.16 million from 1.33 million in 2023. According to a report released last month, the NSB said Facebook and X, formerly known as Twitter, were the main conduits for disinformation, along with platforms that explicitly target young people such as TikTok. – https://therecord.media/taiwan-using-ai-to-fight-disinformation

Build defence ‘Indic’ AI-language models in India

(Jui Marathe, Chaitanya Giri – Observer Research Foundation – 14 February 2025) The Ministry of Defence (MoD) started 2025 by deeming it the ‘Year of Reforms’. This year, it has pledged its focus on emerging technologies, especially robotics, machine learning, and artificial intelligence (AI). The theme, of course, is an organic continuation of its 2024 theme, the ‘Year of Technology Absorption, Empowering the Soldier’. The usual perception is that the soldier needs only to be empowered on the battlefield and during combat, but that is not entirely true. Assisting the soldier in diverse non-battlefield use cases – internal administration, allocation of business rules, logistics, command- and brigade-level procurement, personnel re-education and training, wargaming, disaster search and rescue, military doctrine and technology ethics – goes a long way in making the military more efficient. AI absorption has already begun within the Indian Armed Forces for non-battlefield use cases. But the Armed Forces cannot be merely users of AI; their ability to cultivate and enhance national AI capabilities must be exploited. – https://www.orfonline.org/expert-speak/build-defence-indic-ai-language-models-in-india

The Pacific needs greater cyber resilience as malicious actors break into networks

(Blake Johnson, Fitriani, Jocelinn Kang – ASPI The Strategist – 14 February 2025) Samoa and Papua New Guinea’s recent experiences with cyber intrusions are the latest reminders of the urgent need for enhanced cybersecurity resilience in the Pacific. What’s needed is capacity building and coordinated response initiatives. On 11 February Samoa’s Computer Emergency Response Team (SamCERT) issued an advisory warning about APT40, a Chinese state-backed hacking group operating in the region. Days later, reports emerged that Papua New Guinea had suffered an unattributed cyberattack on its tax office, the Internal Revenue Commission, in late January. – https://www.aspistrategist.org.au/the-pacific-needs-greater-cyber-resilience-as-malicious-actors-break-into-networks/

DeepSeek’s Background Raises Multiple Concerns

(Matthew Gabriel Cazel Brazil – The Jamestown Foundation – 14 February 2025) DeepSeek and its parent company, High-Flyer, are embedded in the vibrant—and heavily state-subsidized—“Hangzhou Chengxi Science and Technology Innovation Corridor,” which aims to create a Chinese answer to Silicon Valley in the companies’ hometown. DeepSeek claims that its models are not trained on GPUs illegally imported to the People’s Republic of China (PRC), but data indicates that PRC firms could be acquiring banned chips rerouted via Singapore, though Singapore denies this. DeepSeek’s operational code is open source, but it has released no training code, making it impossible to verify the hardware used to train its latest model. Evidence of the app sending data packets back to the PRC and to PRC-owned servers, despite claims by DeepSeek to the contrary, adds to growing security concerns about the company and its products, as does the models’ censorship of topics sensitive to the Chinese Communist Party. – https://jamestown.org/program/deepseeks-background-raises-multiple-concerns/

After Paris: Are the US and UK leaving Europe behind on AI?

(Chatham House – 14 February 2025) Birgitte Andersen, Lord Tim Clement-Jones and Alex Krasodomski join the podcast to discuss the Artificial Intelligence Action Summit in Paris. – https://www.chathamhouse.org/2025/02/independent-thinking-after-paris-are-us-and-uk-leaving-europe-behind-ai

Indonesia’s Social Media Usage Law Might Not Protect Children

(Eka Nugraha Putra – FULCRUM – 13 February 2025) The Indonesian government’s move to safeguard children from online harm is laudable. However, by assuming that the solution lies in a general law, it overlooks their wellbeing and risks leaving them digitally illiterate. – https://fulcrum.sg/indonesias-social-media-usage-law-might-not-protect-children/

Extremism in Gaming Spaces: Policy for Prevention and Moderation

(Claudia Wallner, Jessica White, Petra Regeni – RUSI – 13 February 2025) This policy brief seeks to identify recommendations for governments, regulators and other international policymaking entities to design effective policies for preventing and countering violent extremism in gaming spaces, to enforce standards, and to support the development of capacity to moderate online harms. – https://www.rusi.org/explore-our-research/publications/policy-briefs/extremism-gaming-spaces-policy-prevention-and-moderation

Trustworthy AI needs Trustworthy Data

(Eurasia Group – 10 February 2025) Artificial intelligence has shot to the top of the global agenda, as the international order is mired in what Eurasia Group founder Ian Bremmer calls a “geopolitical recession.” Global efforts to agree on principles and guardrails for AI are taking place against a backdrop of intense geopolitical competition, disruption, and a deficit of international leadership. Even in such an environment, global leaders’ efforts to create governance frameworks for AI have been impressive. From the UN Global Digital Compact to the Paris AI Action Summit, governments have recognized the mutual imperative to ensure that AI’s potential is harnessed safely and responsibly. Now, the challenge is to build coherence and consensus around the expanding web of AI initiatives and principles that have already been crafted by a wide range of organizations, including the OECD, UNESCO, the Council of Europe, the G7, and more. Despite the flurry of global, regional, and national policymaking activity around AI in recent years, longstanding global governance challenges involving the use and handling of data—which is critical to ensuring trustworthy AI—remain unresolved. – https://www.eurasiagroup.net/live-post/trustworthy-ai-needs-trustworthy-data

Governance and Legislation

Artificial intelligence and intellectual property: Navigating the challenges of data scraping

(Lee Tiedrich, Karine Perset, Sara Fialho Esposito – OECD.AI – 14 February 2025) The Global Partnership on AI (GPAI) has released a new report examining the intellectual property (IP) implications of how organisations collect and use data to train AI systems, with a particular focus on data scraping. This analysis examines approaches and potential solutions for addressing IP considerations in AI development. GPAI approved and released the report on 30 January 2025; it benefited from input from the OECD’s AI Governance Working Party (AIGO) and from GPAI reviews and workshops. – https://oecd.ai/en/wonk/ip-data-scraping

The TikTok Ban Withers Away

(Alan Z. Rozenshtein – Lawfare – 14 February 2025) The Supreme Court’s Jan. 17 decision upholding the Protecting Americans from Foreign Adversary Controlled Applications Act (PAFACAA) initially appeared to signal the end of TikTok’s presence in America. Yet what followed was a remarkable sequence of events that has effectively nullified the law. TikTok briefly went offline on the law’s Jan. 19 effective date when Apple and Google suspended TikTok from their app stores and Oracle and Akamai, TikTok’s cloud service providers, stopped hosting the company. But later that evening, Oracle and Akamai restored TikTok’s U.S. hosting infrastructure, reportedly at the behest of the Trump campaign, which assured them they would face no legal liability. The next day, newly inaugurated President Trump issued an executive order placing a 75-day hold on PAFACAA enforcement and ordering the Justice Department to send a letter informing the companies that, if they resumed providing services to TikTok, they would not be in violation of PAFACAA and would not incur liability under the statute. The Justice Department has apparently sent that letter (though its contents have not yet been made public) and, as of Feb. 13, Apple and Google have now returned TikTok to their app stores. So what happens now? – https://www.lawfaremedia.org/article/the-tiktok-ban-withers-away

Geostrategies

Beyond DeepSeek: How China’s AI Ecosystem Fuels Breakthroughs

(Ruby Scanlon – Lawfare – 14 February 2025) In mid-January, leading U.S. artificial intelligence (AI) companies were sent reeling. DeepSeek, a Chinese AI company, unveiled its R1 model, a new chatbot of comparable quality to OpenAI’s GPT-4. While many analysts rushed to scrutinize DeepSeek’s technical capabilities, a more fundamental question loomed: How did a Chinese lab achieve such an impressive feat? The answer lies not just in DeepSeek’s top engineers or innovative training techniques, but in the vast political and financial ecosystem China has built to accelerate AI innovation. Over the past decade, the Chinese government has made AI development a national priority, directing considerable sums of money, policy incentives, and public-private partnership opportunities toward ensuring that Beijing can compete—and ultimately lead—in AI. – https://www.lawfaremedia.org/article/beyond-deepseek–how-china-s-ai-ecosystem-fuels-breakthroughs

Security

Calibrating Secure by Design with the Risks Faced by Small Businesses

(Lawfare – 14 February 2025) In this paper for Lawfare’s Security by Design Paper Series, Sezaneh Seymour and Daniel W. Woods argue that Secure by Design (SbD) policies should be calibrated to the actual risks faced by small businesses, rather than focusing primarily on software vulnerabilities. Using a dataset of over 90,000 U.S. firms, the authors find that insecure configurations are a more pressing problem than software vulnerabilities, with the latter comprising only 15% of security issues observed. – https://www.lawfaremedia.org/article/calibrating-secure-by-design-with-the-risks-faced-by-small-businesses

UK’s AI Safety Institute Rebrands Amid Government Strategy Shift

(Kevin Poireault – Infosecurity Magazine – 14 February 2025) The UK’s AI Safety Institute has rebranded to the AI Security Institute as the government shifts its AI strategy to focus on serious AI risks with security implications, including malicious cyber-attacks, cyber fraud and other cybercrimes. UK Technology Secretary Peter Kyle announced the pivot at the Munich Security Conference, three days after the AI Action Summit in Paris. – https://www.infosecurity-magazine.com/news/uk-ai-safety-institute-rebrands/

Texas investigating DeepSeek for violating data privacy law

(Suzanne Smalley – The Record – 14 February 2025) Texas on Friday announced it is investigating the Chinese AI company DeepSeek for allegedly violating the state’s data privacy law. Attorney General Ken Paxton’s office also has requested relevant documents from Google and Apple, seeking their “analysis” of the inexpensive and open source DeepSeek app and asking what documentation they required from DeepSeek before they made the app publicly available for download on their app stores. – https://therecord.media/texas-investigating-deepseek-privacy
