LEGISLATION
The U.S. National Security Memorandum on AI: Leading Experts Weigh In
(Just Security – 25 October 2024) On Oct. 24, the White House publicly released its long-awaited National Security Memorandum (NSM) on AI, mandated by the Biden administration’s October 2023 Executive Order on AI. The “Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence” and corresponding fact sheet provide the first comprehensive strategy for governing AI use in national security systems, notably in defense and intelligence agencies. – The U.S. National Security Memorandum on AI: Leading Experts Weigh In
The Biden Administration’s National Security Memorandum on AI Explained
(Gregory C. Allen, Isaac Goldston – Center for Strategic & International Studies – 25 October 2024) On October 24, 2024, the Biden administration released a National Security Memorandum (NSM) titled “Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence.” The memorandum was mandated by the administration’s October 2023 AI Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. As the lengthy title suggests, the document covers a diverse set of issues. At nearly 40 pages, it is by far the most comprehensive articulation yet of United States national security strategy and policy toward artificial intelligence (AI). A closely related companion document, the Framework to Advance AI Governance and Risk Management in National Security, was published on the same day. – The Biden Administration’s National Security Memorandum on AI Explained
Success of the AI national security memo ‘will be in the implementation,’ industry says
(Alexandra Kelley – NextGov – 25 October 2024) Following yesterday’s release of the Biden administration’s first artificial intelligence-centric national security memorandum, technology policy analysts and advocates are paying close attention to the efficacy of the memo’s implementation. As the memo directs a series of actions for the federal government to execute that will contribute to securing U.S. leadership in AI innovation –– including supply chain security, forming a new specialized coordination group and streamlining visa processes for applicants with STEM backgrounds –– policy experts are pushing for firm oversight of how these actions are carried out. – Success of the AI national security memo ‘will be in the implementation,’ industry says – Nextgov/FCW
US needs more AI investment, not just guardrails, defense experts say
(Courtney Albon, Riley Ceder – Defense News – 25 October 2024) New White House AI guidance offers a solid framework for safely using the technology, but there needs to be more investment in the enabling infrastructure to better harness AI’s national security potential, Defense Department and industry leaders said this week. President Biden issued a first-of-its-kind memorandum Thursday meant to provide guidance for national security and intelligence agencies on how to effectively and responsibly use AI to further American interests. – US needs more AI investment, not just guardrails, defense experts say
New Rules for US National Security Agencies Balance AI’s Promise With Need to Protect Against Risks
(Associated Press/SecurityWeek – 25 October 2024) New rules from the White House on the use of artificial intelligence by US national security and spy agencies aim to balance the technology’s immense promise with the need to protect against its risks. The framework signed by President Joe Biden and announced Thursday is designed to ensure that national security agencies can access the latest and most powerful AI while also mitigating its misuse. Recent advances in artificial intelligence have been hailed as potentially transformative for a long list of industries and sectors, including military, national security and intelligence. But there are risks to the technology’s use by government, including possibilities it could be harnessed for mass surveillance, cyberattacks or even lethal autonomous devices. – New Rules for US National Security Agencies Balance AI’s Promise With Need to Protect Against Risks – SecurityWeek
The National Security Memorandum on Artificial Intelligence — CSET Experts React
(Center for Security and Emerging Technology – 24 October 2024) On October 24, the White House issued the first-ever National Security Memorandum on Artificial Intelligence. CSET’s experts answer pressing questions about what it means for U.S. national security and AI development. – The National Security Memorandum on Artificial Intelligence — CSET Experts React | Center for Security and Emerging Technology
UK Government Introduces New Data Governance Legislation
(James Coker – Infosecurity Magazine – 24 October 2024) The UK government has introduced new legislation to govern personal data use and sharing through digital technologies. The Data (Use and Access) Bill provides a framework for digital verification services, enabling companies that provide tools for verifying identities to gain a government-certified “trust mark.” The trust mark will be a new logo to show digital verification services are approved by the newly created Office for Digital Identities and Attributes (OfDIA) within the Department for Science, Innovation and Technology (DSIT). – UK Government Introduces New Data Governance Legislation – Infosecurity Magazine
SECURITY
US, Australia Release New Security Guide for Software Makers
(Ionut Arghire – SecurityWeek – 25 October 2024) Software manufacturers should implement a safe software deployment program that supports and enhances the security and quality of both products and deployment environments, new joint guidance from US and Australian government agencies underlines. Meant to help software manufacturers ensure their products are reliable and safe for customers by establishing secure software deployment processes, the document, authored by the US cybersecurity agency CISA, the FBI, and the Australian Cyber Security Centre (ACSC), also offers guidance on efficient deployments as part of the software development lifecycle (SDLC). – US, Australia Release New Security Guide for Software Makers – SecurityWeek
The State of Cybersecurity: Challenges, Priorities and Insights
(Adham Etoom – Infosecurity Magazine – 25 October 2024) As cyber threats become more complex and frequent, organizations must be proactive in addressing workforce stress, persistent skills gaps, budget constraints, and rising cyber risks. ISACA’s 2024 State of Cybersecurity report, based on responses from 1,868 global cybersecurity professionals, highlights the rapidly evolving cybersecurity landscape. – The State of Cybersecurity: Challenges, Priorities and Insights – Infosecurity Magazine
UK Government Urges Organizations to Get Cyber Essentials Certified
(James Coker – Infosecurity Magazine – 24 October 2024) The UK government has urged more organizations to become Cyber Essentials certified, highlighting the significant impact the scheme has had on preventing damaging attacks. On the 10th anniversary of Cyber Essentials’ introduction, the government published the results of an evaluation of the scheme’s effectiveness that was carried out in 2023. – UK Government Urges Organizations to Get Cyber Essentials Certified – Infosecurity Magazine
Principles for state approaches to commercial cyber intrusion capabilities
(James Shires – Chatham House – 18 October 2024) The rapid growth of markets in which cyber intrusion capabilities can be bought and sold as products and services by states, companies and criminals raises thorny policy challenges that are not adequately addressed by existing concepts of legitimate and illegitimate use. This paper explores these challenges, and puts forward a set of principles to help governments and wider society navigate commercial markets for cyber intrusion. Important policy interventions have been made over the past decade to counter the misuse of commercial cyber intrusion capabilities. These focus variously on governments, companies and individuals, but have been initiated by a relatively narrow group of like-minded actors. The principles recommended in this paper, underpinned by a fresh distinction between ‘permissioned’ and ‘unpermissioned’ intrusion, are intended to promote greater coherence and consistency of approaches, and to widen the scope for consensus. – Principles for state approaches to commercial cyber intrusion capabilities | Chatham House – International Affairs Think Tank
GOVERNANCE
Open-Access AI: Lessons From Open-Source Software
(Parth Nobel, Alan Z. Rozenshtein, Chinmayi Sharma – Lawfare – 25 October 2024) In light of the explosive growth of generative AI, which the general public has adopted at a faster rate than personal computers or the Internet, it is natural to worry about who controls this technology. Most of the major industry players—including leading AI labs such as OpenAI (makers of ChatGPT), Anthropic (Claude), and Google (Gemini)—rely on closed models whose details are kept private and whose operation is entirely dependent on the whims of these (increasingly profit-hungry) private companies. – Open-Access AI: Lessons From Open-Source Software | Lawfare
Cybersecurity Teams Largely Ignored in AI Policy Development
(Beth Maundrill – Infosecurity Magazine – 24 October 2024) Cybersecurity teams are being left out of the development of policies governing the use of AI in their enterprises, new research published by ISACA during its 2024 Europe Conference has found. Just 35% of 1,800 cybersecurity professionals surveyed said they are involved in the development of such policies. Meanwhile, 45% reported no involvement in the development, onboarding or implementation of AI solutions. – Cybersecurity Teams Largely Ignored in AI Policy Development – Infosecurity Magazine
Beijing’s Latest Data Security Regulations Create Framework for Broad Domestic and Extraterritorial Supervision
(Matthew Johnson – The Jamestown Foundation – 24 October 2024) The State Council-approved “Network Data Security Management Regulations” impose stringent compliance requirements on data processors and platform service providers to safeguard personal information, important data, and cross-border data. The “Regulations” signal continued efforts by the People’s Republic of China (PRC) to assert control over data management and security both within and beyond its borders. The “Regulations” place a heavy emphasis on adherence to the Chinese Communist Party’s (CCP) leadership in data security management, reflecting the PRC’s “comprehensive national security concept.” Overseen by the Cyberspace Administration of China and the Party’s multi-faceted security apparatus, they emphasize national security, mandate strict reporting and risk assessments, and extend their reach to foreign entities processing PRC citizens’ data. The “Regulations” mandate the creation of a National Data Security Coordination Mechanism to supervise protection measures and data catalogues at both national and local levels. Cross-border data transfers of important data and personal information must comply with the PRC’s broadly defined security and individual data rights norms, and companies face potential legal consequences if they process data in a way that harms the PRC’s national security or state interests. – Beijing’s Latest Data Security Regulations Create Framework for Broad Domestic and Extraterritorial Supervision – Jamestown
2023 LinkedIn data on OECD.AI: Definitions for AI occupations are more specific, women in more AI jobs as career transitions to AI grow
(Rosie Hood, Bénédicte Rispal, Lucia Russo, Luis Aranda – OECD.AI Policy Observatory – 24 October 2024) As the use of AI increases everywhere, its influence reshapes the labour market for workers and employers alike. LinkedIn’s 2024 Work Trend Index Annual Report shows a rising demand from both sides to leverage AI in the workplace. A staggering 75% of global knowledge workers now incorporate AI into their daily routines. Employees see AI as a tool that helps them save time, focus on high-priority tasks, boost creativity, and enjoy their work. On the other hand, employers increasingly seek talent with AI expertise, and AI-related hiring has surged by 323% over the past eight years. Furthermore, there has been a notable increase in job applications for AI-related roles: LinkedIn job posts that mention AI have seen 17% greater application growth in the last two years compared to job posts that don’t mention AI. The integration of AI into the workforce does not just transform job roles. It also creates a new landscape of skills and opportunities. With the rise of generative AI, diverse new AI competencies, including non-technical abilities like using tools such as ChatGPT and Copilot, are becoming highly sought after in today’s job market. – 2023 LinkedIn data on OECD.AI: Definitions for AI occupations are more specific, women in more AI jobs as career transitions to AI grow – OECD.AI
Can U.S. Tech Giants Deliver on the Promise of Nuclear Power?
(David M. Hart – Council on Foreign Relations – 22 October 2024) U.S. technology companies are rapidly pushing into nuclear power as they compete to develop the next generation of artificial intelligence (AI) tools and services. Their investments, including several deals announced in late 2024, could help them meet their ambitious climate goals and revive a U.S. energy sector that has long failed to deliver on its promise. While high costs and public concern could derail these initiatives, the upside for the planet could be big if these deals help launch new power generation technologies that can be used worldwide. – Can U.S. Tech Giants Deliver on the Promise of Nuclear Power? | Council on Foreign Relations
International Shocks and Regional Responses in Data Governance
(Liliya Khasanova – Lawfare – 22 October 2024) With approximately 5.4 billion active internet users worldwide as of April 2024, the volume of data produced and processed daily is beyond imagination. Around 42 million WhatsApp messages are shared every minute, 1.4 million video or voice calls are made, and 180 million emails are sent, generating over 1.1 trillion megabytes of data daily. This volume grows exponentially, increasing by 23 per cent annually. Just as oil fueled the industrial age, data now powers increasingly digital economies. The market for big data, valued at $160.3 billion in 2022, is expected to reach $400 billion by the end of 2030, driven by artificial intelligence (AI), machine learning, and data analytics advancements. The recent Microsoft outage serves as a disturbing reminder of society’s dependence on—and the vulnerability of—digital infrastructure. Data is not only the lifeblood of the digital economy but also a key resource for shaping political decisions and tackling global challenges. Over the past decade, global crises, or “shocks,” have surfaced in diverse settings, prompting a range of policy and normative responses. But how do these shocks across different regions and policy fields shape the perception, discussion, and regulation of data privacy and security? An examination of recent significant crises—shocks—in intelligence, health, and military sectors demonstrates that (a) these crises may play a crucial role in advancing data regulation and (b) responses have occurred predominantly at the national and regional levels. This highlights how regional responses can often be more agile and effective in addressing crises and have the potential to drive systemic changes for development on a global scale. – International Shocks and Regional Responses in Data Governance | Lawfare (lawfaremedia.org)
The promise and peril of runaway technological advances
(UN News – 21 October 2024) The UN Security Council (…) explored the dual-edged nature of rapid technological advancements – ranging from artificial intelligence to neurotechnology – highlighting both groundbreaking solutions and emerging risks to global peace and security. – The promise and peril of runaway technological advances | UN News
Toward a Model Code for Digital Safety
(Michel Girard – Centre for International Governance Innovation – 18 October 2024) Although standards are being published to address privacy, cybersecurity and high-risk artificial intelligence, more needs to be done to address digital harms. Stakeholders are playing catch-up with a tsunami of new, unproven digital technologies, and standards are developed after the fact. One approach gaining traction is the development of a model code for digital safety. This code would define a set of core values that should be embedded in new digital technologies in order to prevent harms from occurring in the first place. This would replicate what stakeholders have been doing for close to 100 years to ensure the safety of the built environment. – Toward a Model Code for Digital Safety – Centre for International Governance Innovation (cigionline.org)
Fueling China’s Innovation: The Chinese Academy of Sciences and Its Role in the PRC’s S&T Ecosystem
(Center for Security and Emerging Technology – October 2024) The Chinese Academy of Sciences is one of the most important scientific research organizations not only in China but also globally. Through its network of research institutes, universities, companies, and think tanks, CAS is a core component of China’s science and technology innovation ecosystem. This brief first traces the organization’s historical significance in China’s S&T development, outlining key reforms that continue to shape the institution today. It then details CAS’s core functions in advancing S&T research, fostering commercialization of critical and emerging technologies, and contributing to S&T policymaking. Using scholarly literature, we provide insights into CAS’s research output in the science, technology, engineering, and mathematics (STEM) fields as well as in certain critical and emerging technologies, including artificial intelligence (AI). – Fueling China’s Innovation: The Chinese Academy of Sciences and Its Role in the PRC’s S&T Ecosystem | Center for Security and Emerging Technology (georgetown.edu)
DEFENSE, INTELLIGENCE, AND WAR
Obstructive Warfare: Applications and Risks for AI in Future Military Operations
(Amos C. Fox – Centre for International Governance Innovation – 25 October 2024) Policy makers, military leaders and scholars should anticipate AI increasingly contributing to data pathway warfare, in which combatants use information in innovative ways to overcome remote, deep and battlefield sensing capabilities. AI’s ability to make the battlefield more transparent for policy makers and military leaders might result in both state and non-state military forces adopting positional warfare to offset the advantages that AI-enabled sensing provides to combatants. Policy makers, military leaders and scholars should also anticipate an increase in urban warfare as combatants — both state and non-state — seek to offset the potential speed that AI might bring to sensor-to-shooter kill chains. When viewed collectively, these transformative aspects of AI will potentially result in longer conflicts; attritional wars, with increased civilian casualties and collateral damage; and munitions shortages, if industrial bases are not retooled to keep pace with the potential speed of future kill chains. – Obstructive Warfare: Applications and Risks for AI in Future Military Operations – Centre for International Governance Innovation
NATO’s strategy for digital transformation
(NATO – 22 October 2024) The rapid evolution of digital technologies has profoundly transformed our societies and economies, and is having a significant impact on modern warfare. NATO’s Digital Transformation Implementation Strategy will help address the need for technological and cultural transformation, leveraging data and artificial intelligence to drive this digital transformation. – NATO – News: NATO’s strategy for digital transformation , 22-Oct.-2024
A Plague on the Horizon: Concerns on the Proliferation of Drone Swarms
(Zachary Kallenborn – Observer Research Foundation) In recent years, a number of states have begun integrating their armed drones into collaborative drone swarms. Although global proliferation can be anticipated, drone swarm proliferation should not be expected to be even or immediate. Some states may race to develop massive, armed drone swarms, while others may never develop sophisticated drone swarm capabilities. This brief explores why some states pursue drone swarms, why others may not, and the different pathways to acquisition. – A Plague on the Horizon: Concerns on the Proliferation of Drone Swarms (orfonline.org)
Cyber meets warfare in real time
(Andrew Borene – NextGov – 21 October 2024) Last month, a wave of simultaneous explosions, reportedly triggered by modified pager devices, tore through Hezbollah-controlled regions in Lebanon and Syria. While these events have been attributed to a covert operation likely linked to Israel, their ramifications extend well beyond the immediate conflict. The pager explosions mark a significant convergence of geopolitical, cyber and physical security threats. They raise urgent questions about how outdated technologies can be weaponized in new ways, and they highlight vulnerabilities in supply chains that have implications for both governments and private sector enterprises. – Cyber meets warfare in real time – Nextgov/FCW
NATO steps up Alliance-wide secure data sharing
On Thursday (17 October 2024), NATO launched a new initiative to foster secure data sharing at speed and scale to further enhance situational awareness and data-driven decision-making. – NATO – News: NATO steps up Alliance-wide secure data sharing, 17-Oct.-2024