Weekly Digest on AI and Emerging Technologies (4 November 2024)

GOVERNANCE AND LEGISLATION

State to develop new AI marketplace for staff

(Alexandra Kelley – NextGov – 1 November 2024) The State Department is taking new steps in its modernization plan, pairing advanced technologies with ongoing diplomatic efforts and continuing to expand its use of artificial intelligence in daily operations. Matthew Graviss, State’s chief data and artificial intelligence officer, told Nextgov/FCW that his agency is treating its AI endeavors as programs, not products. – State to develop new AI marketplace for staff – Nextgov/FCW

Sustainability at the forefront of digital transformation

(Natalya Makarochkina – Arab News – 31 October 2024) As organizations worldwide invest in increasingly diverse applications of AI, from portfolio optimization to supply-chain management and executive decision-making support, there is little doubt that AI is here to stay and adoption will only grow. Estimates from International Data Corporation, a market intelligence business, suggest that in 2023, enterprises worldwide spent $166 billion on AI solutions — software, hardware and services. This figure is expected to grow by 27 percent per year to reach $423 billion by 2027. – Sustainability at the forefront of digital transformation | Arab News

Leading the digital and Fourth Industrial Revolutions in Africa

(Landry Signé – Brookings – 31 October 2024) Despite the narrative that Africa’s challenges will inevitably lead it to fall behind in the Fourth Industrial Revolution (4IR), current and emerging leaders and innovators are paving the way for Africa to become a global powerhouse. – Leading the digital and Fourth Industrial Revolutions in Africa

Beyond oil: Google’s big bet on Saudi Arabia’s AI future

(Mohammed Soliman – Middle East Institute – 31 October 2024) In a landmark move signaling the growing importance of the Middle East in the global tech landscape, Google has entered into a strategic partnership with Saudi Arabia’s Public Investment Fund (PIF), the kingdom’s sovereign wealth fund. Google Cloud and PIF announced the agreement, which would see the establishment of a new artificial intelligence (AI) hub in Saudi Arabia, on the sidelines of the Future Investment Initiative 8th Edition (FII8) conference in Riyadh. The hub, to be located near Dammam in the Eastern Province, will feature the latest Google Cloud infrastructure, including tensor processing units (TPUs) and graphics processing units (GPUs). The partnership aims to position Saudi Arabia as a global AI leader, drive advancements in Arabic language AI models, and create thousands of jobs in the technology sector. The partnership underscores the “growing interlink” between AI and energy, as Saudi Arabia — along with the United Arab Emirates and other Gulf states — uses its energy surplus to power data centers, a critical pillar of AI infrastructure. This energy advantage makes the region increasingly attractive to tech giants like Google, Microsoft, Nvidia, and Amazon. – Beyond oil: Google’s big bet on Saudi Arabia’s AI future | Middle East Institute

HarmonyOS NEXT: Beijing’s Bid for Operating System Independence

(W.Y. Kwok – The Jamestown Foundation – 30 October 2024) Huawei has launched HarmonyOS NEXT, the People’s Republic of China’s (PRC) first fully self-developed mobile operating system, independent of Android and Linux/Unix kernels. HarmonyOS has seen substantial domestic adoption, surpassing iOS to become the PRC’s second-largest operating system with a 17 percent domestic market share. Bolstered by strong support from local governments and state enterprises, it is becoming a favored platform for managing government applications and running government systems. The operating system will face challenges in global markets, including the dominance of iOS and Android, security concerns, and ongoing geopolitical tensions. Southeast Asia will likely be the first overseas region in which HarmonyOS NEXT is launched. – HarmonyOS NEXT: Beijing’s Bid for Operating System Independence – Jamestown

ITU Global Innovation Forum identifies ways to enhance digital innovation

(International Telecommunication Union – 30 October 2024) The Global Innovation Forum held by the International Telecommunication Union (ITU) between 28 and 30 October has identified key ways to help close the “digital innovation gap.” The suggested approaches, focusing on critical factors from expanding collaboration to attracting investment, are meant to spur sustainable economic growth. – Press Release

Digital Inclusion and the Global Tech Divide: How the Digital Revolution is Leaving Some of Us in the Digital Dark Ages

(Aminah Mustapha – Georgetown Security Studies Review – 30 October 2024) The digital revolution has changed nearly every facet of daily life—from communication and work to tackling urgent global issues such as climate change and public health. Despite technology’s ability to connect and empower, millions worldwide are still excluded from its advantages. This “digital divide”—the disparity between those with access to modern information and communication technologies (ICT) and those without—extends far beyond merely lacking the latest smartphone or not having social media accounts. The digital divide restricts access to vital services like education and healthcare, heightening socioeconomic inequalities and hindering global progress on crucial matters. It exacerbates existing layers of exclusion, where cost, infrastructure issues, and poor digital literacy further separate the fortunate from the less fortunate. At a geopolitical level, inadequate technology development leads to national economic stagnation, weakens government effectiveness, and increases susceptibility to cyber threats and disinformation. Tech-related inequality risks intensifying geopolitical tensions and inciting social unrest if left unaddressed. Ultimately, this multilayered problem requires a collaborative response from governments and industry. Without purposeful efforts to close this gap, the digital divide will persist in increasing inequalities, leaving the disadvantaged even further behind. Policy interventions that foster digital inclusion are needed. Digital inclusion means ensuring all individuals, especially underserved communities, have equitable access to technology and the skills needed to use it effectively. It includes affordable internet, digital literacy programs, accessible content, and the ability to participate in the digital economy. Ultimately, digital inclusion enables fuller social participation and helps bridge social and economic divides–essential to a fairer and more resilient global future. – Digital Inclusion and the Global Tech Divide: How the Digital Revolution is Leaving Some of Us in the Digital Dark Ages. – Georgetown Security Studies Review

Tech companies are showing a new, strong interest in nuclear power. Here’s why

(Jennifer T. Gordon, Lauren Hughes – Atlantic Council – 29 October 2024) Earlier this month, Amazon Web Services (AWS) and Google announced partnerships and investments in advanced nuclear reactor developers. Tech companies are new players in the nuclear innovation ecosystem and find investment in nuclear generation—both existing and future reactors—compelling because of its unparalleled ability to reliably generate large amounts of carbon-free electricity. However, the upfront capital investment required to build a first-of-a-kind reactor is substantial. Tech companies are now partnering directly with advanced reactor developers and with traditional industry players—such as utility companies—to advance new nuclear projects. – Tech companies are showing a new, strong interest in nuclear power. Here’s why. – Atlantic Council

AI in Space Technologies: A Singapore Case Study

(Karryl Kim Sagun Trajano, Iuna Tsyrulneva, Chee Yong Sean Chua – RSIS – 29 October 2024) Singapore, Asia’s smartest city in 2024, is treading towards integrating artificial intelligence (AI) with space technologies. This report examines the impact, potential, and challenges of this convergence, based on insights from five space experts across Singapore’s public, private, and academic sectors. Key themes include: (i) AI’s role in enhancing space and Earth sustainability; (ii) Singapore’s focus on equatorial data collection with AI to bridge global data gaps; and (iii) fostering public-private-academic synergy in AI and space. Highlighted are AI applications for remote sensing, challenges like data quality and talent gaps, and the multi-stakeholder ecosystem in Singapore as a case for sustainable innovation. The report recommends that Singapore develop a National Space Strategy to ensure responsible and sustainable space and Earth activities, strive for sustained innovation momentum in AI applications for space technologies, establish stronger collaborative platforms across sectors that will aid in data sharing and address the talent gap, prioritise collection of climate and environmental data to improve forecasts, contribute to the global climate dataset, and expedite reactions to regional adverse events. – AI in Space Technologies: A Singapore Case Study – RSIS

What AI Labs Can Learn From Independent Agencies About Self-Regulation

(Nick Caputo – Lawfare – 28 October 2024) Nine years ago, OpenAI was founded as a nonprofit for the “benefit of all humanity.” The organization installed a board of directors to ensure this mission, even after the lab shifted to a “capped profit” structure in 2019 to attract capital. On Nov. 17, 2023, OpenAI’s board of directors voted to remove then-CEO Sam Altman from his position on the basis that he was “not consistently candid” in his communications with the board, which—in their view—threatened the lab’s fundamental mission. But just days later, Altman was back in as CEO and that board was out, replaced in time by a new set of directors who seem less worried about catastrophic harms from artificial intelligence (AI). Now, as it seeks to raise $6.5 billion on a valuation of $150 billion, OpenAI is aiming to do away with the unique institutional structure that allowed its first board to remove Altman, in favor of a more conventional corporate form (likely a public benefit corporation) that will allow it to raise more money and give Altman equity. OpenAI and others in Silicon Valley have argued that self-regulation (or waiting for regulation by Congress, likely the same thing) is the best way forward for the industry. – What AI Labs Can Learn From Independent Agencies About Self-Regulation | Lawfare

Considering a Legally Binding Instrument on Autonomous Weapons

(Charlie Trumbull – Lawfare – 28 October 2024) Autonomous weapons systems have taken center stage in the field of disarmament. Discussions on these weapons began over a decade ago with the Group of Governmental Experts (GGE) in Geneva, Switzerland, in the context of the Convention on Certain Conventional Weapons (CCW), and have recently expanded to the Human Rights Council and the UN General Assembly. In December, the General Assembly stressed “the urgent need for the international community to address the challenges and concerns raised by autonomous weapons systems.” It requested that the secretary-general “seek the views of Member States” on ways to address these challenges and “submit a substantive report” to the General Assembly at its 79th session. – Considering a Legally Binding Instrument on Autonomous Weapons | Lawfare

Alignment of AI in education with human rights frameworks is key

(UN News – 25 October 2024) Access to high-quality education is a human right that not only greatly benefits individuals but also uplifts entire communities. Millions of children, however, remain out of school due to a variety of factors including gender, location, social background or conflict. With the rise of Artificial Intelligence (AI), the UN Special Rapporteur on the Right to Education, Farida Shaheed, has prepared a report to the General Assembly exploring the challenges and benefits of incorporating AI in school settings. With a third of the world still offline or without access to devices, she told UN News’s Ana Carmo that AI is “bound” to increase inequality, intensifying the so-called digital divide. – Alignment of AI in education with human rights frameworks is key | UN News

Augmentation, Not Substitution: HCSS Manual for the Responsible Use of Generative AI

(Tim Sweijs, Jesse Kommandeur, Abe de Ruijter – The Hague Centre for Strategic Studies – 24 October 2024) This manual outlines principles for the responsible and effective use of Generative Artificial Intelligence (AI) at HCSS, in recognition of both the opportunities and the challenges and limitations related to the use of Generative AI applications in applied research and policy analysis. The manual outlines ten maxims based on a set of principles centering on confidentiality, transparency, authenticity, reliability, integrity and ingenuity. These principles serve to ensure the responsible use of Generative AI in line with professional practices of research, ethical standards, and legal requirements related to privacy and copyright. Overall, the manual calls for a balanced approach towards integrating generative AI in research and policy analysis. It underscores the need to address inherent biases within generative AI applications and reiterates the importance of multidisciplinary and multimethod approaches in applied research. Ultimately, the principles serve as a foundation for harnessing the capabilities of Generative AI while enhancing the effectiveness of human analysts and safeguarding the integrity of research outcomes. The manual will be updated in due course as the situation evolves. – Augmentation, Not Substitution: HCSS Manual for the Responsible Use of Generative AI – HCSS

Top 10 Emerging Technologies 2024

(World Economic Forum – 24 October 2024) What are ‘elastocalorics’ or ‘reconfigurable intelligent surfaces’? In a few years’ time these emerging technologies may have transformed the way we heat and cool our homes, and how we transmit ever greater amounts of data. They are among the technological innovations identified in the World Economic Forum’s annual Top 10 Emerging Technologies report, which picks the tech that could transform the world in the coming years. In this video-podcast, the two lead authors of the report take us through each of the 10 on this year’s list. – Top 10 Emerging Technologies 2024 | World Economic Forum

SECURITY

Ransomware attack hits German pharmaceutical wholesaler, disrupts medicine supplies

(Alexander Martin – The Record – 1 November 2024) AEP, a German pharmaceutical wholesaler based in Bavaria, said it was hit by a ransomware attack that could disrupt the supply of medicine to thousands of pharmacies. In a statement on the AEP website, the company described the cyberattack as “targeted and criminal,” saying it resulted in the partial encryption of AEP’s IT systems. – Ransomware attack hits German pharmaceutical wholesaler, disrupts medicine supplies

California court suffering from tech outages after cyberattack

(Jonathan Greig – The Record – 1 November 2024) The San Joaquin County Superior Court said nearly all of its digital services have been knocked offline due to a cyberattack that began earlier this week. The court first warned the county’s nearly 800,000 residents of technology issues on Wednesday before admitting that it was a cybersecurity incident on Thursday. The attack knocked out all of the court’s phone and fax services, websites containing juror reporting instructions, the e-filing platform, credit card payment processing and more. – California court suffering from tech outages after cyberattack

Federal agency investigating how Meta uses consumer financial data for advertising

(Suzanne Smalley – The Record – 1 November 2024) The Consumer Financial Protection Bureau (CFPB) has notified Meta it may take “legal action” against the tech giant, alleging that the company improperly obtained consumers’ financial data from third parties and pumped it into its massively profitable targeted advertising business. News of the federal probe emerged in a Thursday filing Meta submitted to the Securities and Exchange Commission (SEC). – Federal agency investigating how Meta uses consumer financial data for advertising

Police seek compromise with CFPB as regulator mulls reining in investigator access to sensitive data

(Suzanne Smalley – The Record – 1 November 2024) One of the country’s largest police associations met with the director of the Consumer Financial Protection Bureau (CFPB) in October in an effort to ensure the agency’s forthcoming proposed rule governing data brokers allows police to continue to access sensitive personal data for investigative purposes. The CFPB is currently honing the hotly anticipated proposed rule. Depending on its final form, it could both significantly rein in many data brokers buying and selling consumer data and make it harder for law enforcement to obtain data. That tension played out at a recent and previously unreported meeting between the National Association of Police Organizations (NAPO) and CFPB Director Rohit Chopra on October 7. – Police seek compromise with CFPB as regulator mulls reining in investigator access to sensitive data

Tajikistan bans Counter-Strike, GTA and plans raids on gaming centers

(Daryna Antoniuk – The Record – 1 November 2024) Tajikistan has imposed a ban on distributing the popular video games Counter-Strike and Grand Theft Auto (GTA), labeling them as violent and immoral. According to a statement by the country’s interior ministry, police in Tajikistan’s capital, Dushanbe, “will conduct raids and inspections in computer gaming centers” suspected of selling these games. “Young people and teenagers who regularly play these games come under their negative influence and commit various crimes,” the ministry said, adding that parents should keep an eye on their children and prevent them from playing games “that promote killing, theft, and violence.” Counter-Strike is a multiplayer first-person shooter where players join either a terrorist or counter-terrorist team to complete objectives like defusing bombs, rescuing hostages or eliminating the opposing team. In GTA, players explore cities and complete missions, often engaging in criminal activities as part of the storyline. – Tajikistan bans Counter-Strike, GTA and plans raids on gaming centers

Los Angeles housing agency confirms another cyberattack after 2023 ransomware incident

(Jonathan Greig – The Record – 1 November 2024) The Housing Authority of the City of Los Angeles (HACLA) said it is dealing with a cyberattack following claims of data theft made by a ransomware gang. In a statement to Recorded Future News, a spokesperson for HACLA confirmed that it has “been affected by an attack” on its IT network. “As soon as we became aware of this, we hired external forensic IT specialists to help us investigate and respond appropriately,” the spokesperson said. “Our systems remain operational, we’re taking expert advice, and we remain committed to delivering important services for low income and vulnerable people in Los Angeles.” – Los Angeles housing agency confirms another cyberattack after 2023 ransomware incident

US, Israel Describe Iranian Hackers’ Targeting of Olympics, Surveillance Cameras

(Eduard Kovacs – SecurityWeek – 1 November 2024) The United States and Israel this week published a cybersecurity advisory describing the latest activities of an Iranian threat group, including attacks targeting the recent Olympics and surveillance cameras. The FBI has been tracking this group’s activities since 2020. The threat actor is known in the private sector as Cotton Sandstorm, Marnanbridge, and Haywire Kitten, but it’s probably best known as Emennet Pasargad, the name of the company that was until recently used as a front for the group’s activities. – US, Israel Describe Iranian Hackers’ Targeting of Olympics, Surveillance Cameras – SecurityWeek

Cybersecurity Risks of AI-Generated Code

(Jessica Ji, Jenny Jun, Maggie Wu, Rebecca Gelles – Center for Security and Emerging Technology – November 2024) Artificial intelligence models have become increasingly adept at generating computer code. They are powerful and promising tools for software development across many industries, but they can also pose direct and indirect cybersecurity risks. This report identifies three broad categories of risk associated with AI code generation models and discusses their policy and cybersecurity implications. – Cybersecurity Risks of AI-Generated Code | Center for Security and Emerging Technology

CISA Launches First International Cybersecurity Plan

(James Coker – Infosecurity Magazine – 30 October 2024) The US Cybersecurity and Infrastructure Security Agency (CISA) has published its first ever international strategic plan, designed to boost international cooperation in combatting cyber threats to critical infrastructure. The plan acknowledges the complex and geographically dispersed nature of cyber risks, and the need for threat information and risk reduction advice to be shared rapidly with international partners. – CISA Launches First International Cybersecurity Plan – Infosecurity Magazine

The changing submarine cables landscape

(Yuka Koshino – European Union Institute for Security Studies – 30 October 2024) The rapid development of digital technologies has dramatically increased global internet bandwidth demand, with submarine cables today accounting for nearly 99% of intercontinental data traffic. These undersea cables are the backbone of global communications and the internet economy. Despite their critical importance, these cables remain inadequately protected, particularly in the Indo-Pacific region. Globally, approximately 100-200 cases of damage to undersea cables are reported annually according to the International Cable Protection Committee (ICPC). Most of these incidents are accidental, typically caused by fishing or anchoring. Physical protection measures, such as burial in shallow waters and electronic monitoring of anomalies, along with legal regulations, all contribute to cable security. However, much of the responsibility for protecting these cables lies with operators, limiting private-public cooperation and creating vulnerabilities, whether during times of peace, ‘grey zone’ situations or at moments of crisis. Ensuring seamless protection that spans from peacetime to periods of potential volatility remains a significant challenge. The Indo-Pacific, a region marked by complex geopolitics and rising tensions, faces unique challenges in ensuring the security of its submarine cables. For the European Union, whose interests in the region are framed by its 2021 Indo-Pacific Strategy, the vulnerabilities related to this critical infrastructure pose a strategic risk, calling for more proactive measures to manage them. – The changing submarine cables landscape | European Union Institute for Security Studies

Over Half of US County Websites “Could Be Spoofed”

(Phil Muncaster – Infosecurity Magazine – 30 October 2024) Security experts have sounded another US election warning, claiming that the majority of US county websites could be copied to spread disinformation and steal information. Comparitech analyzed the websites and official contact email addresses of 3,144 US counties to compile its report. These administrative districts play an important role in elections, as many voters turn to their local county website for information on polling booths and other queries. – Over Half of US County Websites “Could Be Spoofed” – Infosecurity Magazine

How to Improve the Security of AI-Assisted Software Development

(Matias Madou – SecurityWeek – 29 October 2024) By now, it’s clear that the artificial intelligence (AI) “genie” is out of the bottle – for good. This extends to software development, as a GitHub survey shows that 92 percent of U.S.-based developers are already using AI coding tools both in and outside of work. They say AI technologies help them improve their skills (as cited by 57 percent), boost productivity (53 percent), focus on building/creating instead of repetitive tasks (51 percent) and avoid burnout (41 percent). – How to Improve the Security of AI-Assisted Software Development – SecurityWeek

Staying Ahead in the Global Technology Race

(Center for Strategic & International Studies – 29 October 2024) The United States is in the midst of a generational shift in economic policy and its role in national security planning. Even in these polarized times, there is surprising consensus across the American political spectrum that the economic policies and global institutions fostered since World War II are no longer adequate. They have left the United States vulnerable to competition with non-market actors, principally China; domestic economic dislocations; and global crises such as climate change and pandemics. These vulnerabilities persist and will await the next administration. The need for an allied approach to economic security is now axiomatic. It will require the United States to lead and partner in equal measure as it navigates the emerging economic security policy trilemma between promote, protect, and partner policies. The challenge for the next administration is to focus on the time- and stress-tested drivers of innovation—competition in secure, trusted international technology markets and cooperation with allies. – Staying Ahead in the Global Technology Race: A Roadmap for Economic Security

2024 Quad Cyber Challenge Joint Statement

(U.S. Department of State – 21 October 2024) Quad partners of Australia, India, Japan, and the United States—under the auspices of the Quad Senior Cyber Group—announced the continuation of our joint campaign, the Quad Cyber Challenge, to strengthen responsible cyber ecosystems, promote public resources, and raise cybersecurity awareness. The theme of this year’s Challenge is promoting cybersecurity education and building a strong workforce. – 2024 Quad Cyber Challenge Joint Statement – United States Department of State

Suspicious Social Media Accounts Deployed Ahead of COP29

(Kevin Poireault – Infosecurity Magazine – 29 October 2024) A network of 71 suspicious accounts on X has been deployed ahead of the UN’s COP29 climate change conference. The accounts aim to give the impression of grassroots support for the Azerbaijan government, according to NGO Global Witness. Azerbaijan, host of the 29th Climate Conference of Parties (COP29) from November 11 to 22, 2024, has a track record of using coordinated inauthentic accounts on Facebook to target the country’s journalists and democracy activists as well as using bots and troll farms on X to criticize Armenia. – Suspicious Social Media Accounts Deployed Ahead of COP29 – Infosecurity Magazine

A Digital Megaphone: The Far Right’s Use of Podcasts for Radicalisation

(William Allchorn – Global Network On Extremism & Technology – 22 October 2024) In recent years, podcasts have exploded as a medium for communication, discussion, and entertainment. This digital platform is inexpensive and easily accessible, providing anyone with a microphone and internet connection a way to share their ideas with a global audience. While this democratisation of media has allowed diverse voices to emerge, it has also created fertile ground for extremists, including the far right, to spread their ideology. The rise of far-right podcasts is a concerning phenomenon, as these platforms have become powerful tools for radicalisation, disinformation, and community-building around dangerous ideas. This Insight provides a history and general overview of the far-right’s use of podcast audio, its appeals, themes, tactics, and how it presents a pathway to further radicalisation. – A Digital Megaphone: The Far Right’s Use of Podcasts for Radicalisation – GNET

DEFENSE, INTELLIGENCE, AND WAR

Pentagon developing ‘Responsible AI’ guides for defense, intelligence, interagency — even allies

(Sydney J. Freedberg Jr. – Breaking Defense – 1 November 2024) As part of the Biden administration’s global push for “Responsible Artificial Intelligence,” the Defense Department is building a growing library of interactive online guides to help program managers and other officials develop safe and ethical AI. (AI-aided counter-drone defense? Sure. Algorithms to help draft contracting language? Maybe, with guardrails. Self-aware nuclear launch systems that don’t wait for human input? No). Versions have already been published or are in the works for defense officials, the intelligence community, civilian agencies, and even foreign allies, Pentagon RAI director Matthew K. Johnson said this week, with updates addressing generative artificial intelligence and President Joe Biden’s recently signed National Security Memorandum on AI. – Pentagon developing ‘Responsible AI’ guides for defense, intelligence, interagency — even allies – Breaking Defense

PRC Adapts Llama for Military and Security AI Applications

(Sunny Cheung – The Jamestown Foundation – 31 October 2024) Researchers in the People’s Republic of China (PRC) have optimized Meta’s Llama model for specialized military and security purposes. ChatBIT, an adapted Llama model, appears to be successful in demonstrations in which it was used in military contexts such as intelligence, situational analysis, and mission support, outperforming other comparable models. Open-source models like Llama are valuable for innovation, but their deployment to enhance the capabilities of foreign militaries raises concerns about dual-use applications. The customization of Llama by defense researchers in the PRC highlights gaps in enforcement for open-source usage restrictions, underscoring the need for stronger oversight to prevent strategic misuse. – PRC Adapts Llama for Military and Security AI Applications – Jamestown

Leaders wrestle with a potent mix: AI and weapons of mass destruction

(Linus Höller – Defense News – 30 October 2024) Emerging technologies have radically reshaped the arms control landscape, posing major challenges but also some opportunities for curbing the spread of weapons of mass destruction, said representatives of five major UN-adjacent disarmament agencies. Speaking Oct. 25 on the sidelines of the United Nations General Assembly’s First Committee meeting – the UN’s top disarmament body – the representatives discussed how the emergence of artificial intelligence, accessible drones, new reactor technologies and other innovations has affected their task of controlling the proliferation of dangerous weapons and materials. – Leaders wrestle with a potent mix: AI and weapons of mass destruction

Broader federal investment in quantum sensing needed to outpace China, industry report says

(Patrick Tucker – Defense One – 29 October 2024) The tech industry needs more funding from the U.S. government—not just the Pentagon—if it is to outpace China in the quest for quantum sensors, whose national-security implications include ground-based replacements for aging and vulnerable GPS satellites, an industry group argues in a new report. Currently, the U.S. government spends about $900 million on quantum sensing each year, most going to the Defense Department. An increase is needed, according to the report, which was produced by the industry-driven Quantum Economic Development Consortium with funding from the National Institute of Standards and Technology. – Broader federal investment in quantum sensing needed to outpace China, industry report says – Defense One

Bytes and Battles: Inclusion of Data Governance in Responsible Military AI

(Yasmin Afina, Sarah Grand-Clément – Centre for International Governance Innovation – 29 October 2024) Data plays a critical role in the training, testing and use of artificial intelligence (AI), including in the military domain. Research and development for AI-enabled military solutions is proceeding at a rapid pace; however, pathways and governance solutions to address concerns (such as issues with the availability and quality of training data sets) are lacking. This paper provides a comprehensive overview of data issues surrounding the development, deployment and use of AI; examines data governance lessons and practices from civilian applications; and identifies pathways through which data governance could be enacted. The paper concludes with an overview of possible policy and governance approaches to data practices surrounding military AI to foster the responsible development, testing, deployment and use of AI in the military domain. – Bytes and Battles: Inclusion of Data Governance in Responsible Military AI – Centre for International Governance Innovation

Five Eyes Agencies Launch Startup Security Initiative

(Phil Muncaster – Infosecurity Magazine – 29 October 2024) The UK, US, Canada, New Zealand and Australian governments have launched a new program designed to help their tech startups improve baseline cybersecurity measures, in the face of escalating state-backed threats. Secure Innovation was originally a UK initiative run by GCHQ’s National Cyber Security Centre (NCSC) and MI5’s National Protective Security Authority (NPSA). However, it has now been adopted and promoted by all Five Eyes intelligence agencies in regionalized versions. – Five Eyes Agencies Launch Startup Security Initiative – Infosecurity Magazine

Space and Counterspace Technologies: Assessing the Current Threat Environment

(Victoria Samson – Observer Research Foundation – 25 October 2024) The role of space and counterspace technologies in future warfare will only grow with time; yet what do these phrases currently mean? ‘Counterspace’ is preferred over ‘space’ because the issue is not just that the technology is space-related but that there is an attempt to interfere with it, which is more disruptive to global stability. Similarly, the term ‘space weapons’ has become outdated as it no longer reflects the current space threat environment. What we have are capabilities, some of which are deployed in space, and some of which are not, which can be used in a dual-purpose way: benign or aggressive, or for defensive or offensive goals. The concern is less the technology and more the intent behind it and how it is used. – Space and Counterspace Technologies: Assessing the Current Threat Environment
