Daily Digest on AI and Emerging Technologies (29 October 2024)

TOP OF THE DAY


Considering a Legally Binding Instrument on Autonomous Weapons


(Charlie Trumbull – Lawfare – 28 October 2024) Autonomous weapons systems have taken center stage in the field of disarmament. Discussions on these weapons began over a decade ago with the Group of Governmental Experts (GGE) in Geneva, Switzerland, in the context of the Convention on Certain Conventional Weapons (CCW), and have recently expanded to the Human Rights Council and the UN General Assembly. In December 2023, the General Assembly stressed “the urgent need for the international community to address the challenges and concerns raised by autonomous weapons systems.” It requested that the secretary-general “seek the views of Member States” on ways to address these challenges and “submit a substantive report” to the General Assembly at its 79th session. – Considering a Legally Binding Instrument on Autonomous Weapons | Lawfare


What AI Labs Can Learn From Independent Agencies About Self-Regulation

(Nick Caputo – Lawfare – 28 October 2024) Nine years ago, OpenAI was founded as a nonprofit for the “benefit of all humanity.” The organization installed a board of directors to ensure this mission, even after the lab shifted to a “capped profit” structure in 2019 to attract capital. On Nov. 17, 2023, OpenAI’s board of directors voted to remove then-CEO (and current CEO) Sam Altman from his position on the basis that he was “not consistently candid” in his communications with the board, which, in their view, threatened the lab’s fundamental mission. But just days later, Altman was back as CEO and that board was out, replaced in time by a new set of directors that seems less worried about catastrophic harms from artificial intelligence (AI). Now, as it seeks to raise $6.5 billion at a valuation of $150 billion, OpenAI is aiming to do away with the unique institutional structure that allowed its first board to remove Altman, in favor of a more conventional corporate form (likely a public benefit corporation) that will allow it to raise more money and give Altman equity. OpenAI and others in Silicon Valley have argued that self-regulation (or waiting for regulation by Congress, likely the same thing) is the best way forward for the industry. – What AI Labs Can Learn From Independent Agencies About Self-Regulation | Lawfare

Google Invests in Alternative Neutral Atom Quantum Technology

(Kevin Townsend – SecurityWeek – 28 October 2024) Google, a major figure in quantum computer development using superconducting technology to produce quantum bits (qubits), has privately invested a multimillion-dollar sum in a firm developing an entirely different and potentially rival quantum technology: neutral atoms. In mid-October 2024 – five years after Google announced it had achieved ‘quantum supremacy’ in 2019 – it invested in the quantum hardware firm QuEra Computing, a private company founded in 2018. The investment was made outside of venture funding, and no details have been disclosed. – Google Invests in Alternative Neutral Atom Quantum Technology – SecurityWeek

Alignment of AI in education with human rights frameworks is key

(UN News – 25 October 2024) Access to high-quality education is a human right that not only greatly benefits individuals but also uplifts entire communities. Millions of children, however, remain out of school due to a variety of factors including gender, location, social background or conflict. With the rise of Artificial Intelligence (AI), the UN Special Rapporteur on the Right to Education, Farida Shaheed, has prepared a report to the General Assembly exploring the challenges and benefits of incorporating AI in school settings. With a third of the world still offline or without access to devices, she told UN News’s Ana Carmo that AI is “bound” to increase inequality, intensifying the so-called digital divide. – Alignment of AI in education with human rights frameworks is key | UN News

Space and Counterspace Technologies: Assessing the Current Threat Environment

(Victoria Samson – Observer Research Foundation – 25 October 2024) The role of space and counterspace technologies in future warfare will only grow with time; yet what do these terms currently mean? ‘Counterspace’ is preferred over ‘space’ because the issue is not just that the technology is space-related but that there is an attempt to interfere with it, which is more disruptive to global stability. Similarly, the term ‘space weapons’ has become outdated as it no longer reflects the current space threat environment. What we have are capabilities, some deployed in space and some not, which can be used in a dual-purpose way: benign or aggressive, for defensive or offensive goals. The concern is less the technology and more the intent behind it and how it is used. – Space and Counterspace Technologies: Assessing the Current Threat Environment

Top 10 Emerging Technologies 2024

(World Economic Forum – 24 October 2024) What are ‘elastocalorics’ or ‘reconfigurable intelligent surfaces’? In a few years’ time these emerging technologies may have transformed the way we heat and cool our homes, and how we transmit ever greater amounts of data. They are among the technological innovations identified in the World Economic Forum’s annual Top 10 Emerging Technologies report, which picks the tech that could transform the world in the coming years. In this video-podcast, the two lead authors of the report take us through each of the 10 on this year’s list. – Top 10 Emerging Technologies 2024 | World Economic Forum


A Digital Megaphone: The Far Right’s Use of Podcasts for Radicalisation

(William Allchorn – Global Network On Extremism & Technology – 22 October 2024) In recent years, podcasts have exploded as a medium for communication, discussion, and entertainment. This digital platform is inexpensive and easily accessible, providing anyone with a microphone and internet connection a way to share their ideas with a global audience. While this democratisation of media has allowed diverse voices to emerge, it has also created fertile ground for extremists, including the far right, to spread their ideology. The rise of far-right podcasts is a concerning phenomenon, as these platforms have become powerful tools for radicalisation, disinformation, and community-building around dangerous ideas. This Insight provides a history and general overview of the far-right’s use of podcast audio, its appeals, themes, tactics, and how it presents a pathway to further radicalisation. – A Digital Megaphone: The Far Right’s Use of Podcasts for Radicalisation – GNET


SECURITY

‘All servers’ for Redline and Meta infostealers hacked by Dutch police and FBI

(Alexander Martin – The Record – 28 October 2024) The Dutch National Police announced on Monday that they had gained “full access” to all of the servers used by the Redline and Meta infostealers, two of the most widely used cybercrime tools on the internet. Infostealer malware, often sold as a malware-as-a-service tool, is a major cybersecurity threat that infects victims’ devices to harvest information such as credit card details and autofill password data. – ‘All servers’ for Redline and Meta infostealers hacked by Dutch police and FBI

Texas county says 47,000 had SSNs, medical treatment info leaked during May cyberattack

(Jonathan Greig – The Record – 28 October 2024) A cyberattack in May gave hackers access to the personal, financial and medical information of more than 47,000 residents living in Wichita County, Texas. County officials filed breach notification documents with regulators in Texas as well as Maine and posted a notice on their website warning residents that the incident involved everything from names, Social Security numbers and government IDs to financial account information, health insurance information and some types of medical treatment information. – Texas county says 47,000 had SSNs, medical treatment info leaked during May cyberattack

Free, France’s second-largest telecoms company, confirms being hit by cyberattack

(Alexander Martin – The Record – 28 October 2024) Free, the second-largest internet service provider in France, confirmed that it was hacked over the weekend, following the attempted sale of purportedly stolen customer information on a cybercrime forum. The Paris-based company has issued a warning that personal data was compromised in the incident, has filed a criminal complaint with the country’s public prosecutor and has notified France’s cybersecurity agency, as reported by newspaper Le Monde on Saturday. – Free, France’s second-largest telecoms company, confirms being hit by cyberattack

New Type of Job Scam Targets Financially Vulnerable Populations

(Alessandro Mascellino – Infosecurity Magazine – 28 October 2024) A surge in online job scams targeting financially vulnerable individuals has been identified by cybersecurity experts at Proofpoint. Known as “job scamming,” this new tactic mirrors the existing “pig butchering” fraud model but aims at a broader audience by preying on job seekers looking for remote, flexible work. While pig butchering scams typically focus on individuals with significant investment funds, these job scams seek smaller, faster payouts from financially struggling targets. – New Type of Job Scam Targets Financially Vulnerable Populations – Infosecurity Magazine

How Belgium’s Leonidas Project Boosts National Cyber Resilience

(Kevin Poireault – Infosecurity Magazine – 28 October 2024) The outbreak of the war in Ukraine in 2022 brought heightened cyber threats to Ukraine’s allies. As a result, many European countries began rethinking their cyber defenses. Belgium was one of them, with the country’s Prime Minister, Alexander De Croo, launching a new project aimed at strengthening national cyber defenses and leveling up the government’s cyber support to Belgian organizations in 2022. This initiative, called the Leonidas project, was entrusted to the country’s national cyber agency, the Centre for Cybersecurity Belgium (CCB), and its Cyber Threat Research and Intelligence Sharing (CyTRIS). – How Belgium’s Leonidas Project Boosts National Cyber Resilience – Infosecurity Magazine

DEFENSE, INTELLIGENCE, AND WAR

AI has role to play in protecting American nuclear C2 systems: STRATCOM head

(Carley Welch – Breaking Defense – 28 October 2024) US Strategic Command must update its nuclear command, control and communications (NC3) systems to become more resilient to adversaries, which includes implementing artificial intelligence and machine learning, STRATCOM’s chief said today. NC3 systems “must be secure, and they must be redundant to function in both conventional conflicts or nuclear ones,” Gen. Anthony Cotton, commander of STRATCOM, said during his keynote address at the Department of Defense Intelligence Information System conference here in Omaha. “Robust cybersecurity measures are critical to protect NC3 systems from adversary attacks or manipulation. Maintaining technological superiority in IT systems creates uncertainty in the calculus of our adversaries. That enhances deterrence.” – AI has role to play in protecting American nuclear C2 systems: STRATCOM head – Breaking Defense

America’s defense contractors are failing basic cybersecurity and China is exploiting it

(Eric Noonan – NextGov – 28 October 2024) Most Americans likely assume that defense contractors, funded by taxpayer dollars, already meet stringent cybersecurity requirements. After all, these companies are the backbone of national defense, handling everything from classified military projects to critical infrastructure. Unfortunately, that assumption is dangerously wrong. A new report from Merrill Research delivers a sobering reality check: Only 4% of defense contractors are fully prepared to meet the Department of Defense’s minimum cybersecurity requirements, known as the Cybersecurity Maturity Model Certification. These requirements represent basic cyber hygiene, not cutting-edge tools, yet the vast majority of contractors fail to meet even these minimum standards. – America’s defense contractors are failing basic cybersecurity and China is exploiting it – Nextgov/FCW

White House targets US investments in Chinese AI and quantum tech

(Patrick Tucker – Defense One – 28 October 2024) A new White House “final rule” announced Monday aims to shut down the flow of U.S. venture capital and other investment funds into Chinese technology that poses a national security risk, such as quantum computers, artificial intelligence and advanced microelectronics, senior White House officials said. The rule is intended as a guide to implement an executive order the White House put out in August 2023, ordering the Treasury Department and other relevant agencies to identify categories of investment in Chinese tech that could pose a threat to the United States. Today’s rule “prohibits U.S. persons from engaging in certain transactions involving semiconductors, quantum and artificial intelligence. And second, it requires U.S. persons to notify Treasury of certain other transactions involving semiconductors and artificial intelligence,” a senior White House official told reporters ahead of the release. – White House targets US investments in Chinese AI and quantum tech – Defense One

Biden admin issues restrictions on US investments into sensitive tech tied to China

(David DiMolfetta – NextGov – 28 October 2024) The White House and Treasury Department released a final rule Monday evening that blocks various U.S. investments into sensitive technologies that could undermine national security, particularly in relation to China’s military and intelligence capabilities. The move targeting outbound investments, which has been in development for some time, is designed to prevent China from augmenting key technologies that directly support its military modernization and related activities. – Biden admin issues restrictions on US investments into sensitive tech tied to China – Nextgov/FCW

Aussies grappling with AI’s role in cyber threat decision making

(Colin Clark – Breaking Defense – 28 October 2024) As Australia’s national security establishment seeks to implement the US-led Responsible Military Use of AI and Autonomy Agreement, which requires that a human being make the final decision to fire a weapon, a top government official on cyber policy is up front that his government is still trying to figure out how to make it all work. At the core of the challenge, said Peter Anstee, first assistant secretary of the Department of Home Affairs’ cyber and technology security policy division, is whether man-in-the-loop decision making can ever be fast enough to keep up with a cyber attack guided by artificial intelligence, or whether that requirement will effectively hamstring Canberra’s ability to counter and respond to such a threat. – Aussies grappling with AI’s role in cyber threat decision making – Breaking Defense

GOVERNANCE

Canada Needs a National Strategy on the Future of Innovation

(Matthew da Mota – Centre for International Governance Innovation – 28 October 2024) There have been many recent critiques of Canada’s ability to drive innovation and retain intellectual property (IP), which has sparked discussion and government consultation to explore solutions to the country’s lagging innovation economy. Despite huge pools of talent, vast funding programs, several institutes for artificial intelligence (AI) and other emerging technologies, tax credits, investment and excellent universities, Canada is struggling to turn these resources into tangible innovation results and continues to lose IP rights and skilled workers. Although we can pinpoint many causes and effects related to this issue, one overarching reality defines the Canadian innovation sphere: Canada lacks a coherent vision and strategy on the future of innovation and tech and the role we wish to play in the global context. This lack of clear strategy and vision is made even more pressing by the potential for significant shifts in governance and priorities both north and south of the border, compounded by the fact that no long-term bipartisan program exists to ensure enduring prosperity and innovation in Canada. – Canada Needs a National Strategy on the Future of Innovation – Centre for International Governance Innovation

Moving toward truly responsible AI development in the global AI market

(Chinasa T. Okolo, Marie Tano – Brookings – 24 October 2024) Many AI applications, including large language models, rely on patterns learned from labeled datasets to generate accurate responses to new inputs. Large AI companies, such as OpenAI, often outsource the labeling of these vast datasets to regions like Africa, where workers face low pay, limited benefits, and long hours, often engaging with sensitive or graphic materials. To address ethical concerns about labor exploitation, the U.S. should reform laws around privacy and labor outsourcing, and companies must invest in local initiatives that prioritize the dignity and fair treatment of data workers to avoid echoes of colonialist exploitation. – Moving toward truly responsible AI development in the global AI market

NIST approves 14 new quantum encryption algorithms for standardization

(Alexandra Kelley – NextGov – 28 October 2024) The National Institute of Standards and Technology announced a new series of digital signature algorithms ready for the agency’s post-quantum cryptographic standardization process, following the finalization of the inaugural three earlier this year. – NIST approves 14 new quantum encryption algorithms for standardization – Nextgov/FCW
