TOP OF THE DAY
Do We Want an “IAEA for AI”?
(Akash Wasil – Lawfare – 20 November 2024) In November 2023, nations at the first global AI Safety Summit recognized the possibility of “serious, even catastrophic harm” from advanced artificial intelligence (AI). Some of the risks identified stem from deliberate misuse. For example, a nation could decide to instruct an advanced AI system to develop novel biological weapons or cyberweapons; Anthropic CEO Dario Amodei testified in 2023 that AI systems would be able to greatly expand threats from “large-scale biological attacks” within two to three years. Other risks mentioned arise from unintentional factors—experts have warned, for instance, that AI systems could become powerful enough to subvert human control. A race toward superintelligent AI could lead to the creation of highly powerful and dangerous systems before scientists have developed the safeguards and technical understanding required to control them. – https://www.lawfaremedia.org/article/do-we-want-an–iaea-for-ai
AI Safety and Automation Bias: The Downside of Human-in-the-Loop
(Lauren Kahn, Emelia Probasco, Ronnie Kinoshita – CSET – November 2024) Automation bias is the tendency for an individual to over-rely on an automated system. It can lead to increased risk of accidents, errors, and other adverse outcomes when individuals and organizations favor the output or suggestion of the system, even in the face of contradictory information. Automation bias can endanger the successful use of artificial intelligence by eroding the user’s ability to meaningfully control an AI system. As AI systems have proliferated, so too have incidents where these systems have failed or erred in various ways, and human users have failed to correct or recognize these behaviors. This study provides a three-tiered framework to understand automation bias by examining the role of users, technical design, and organizations in influencing automation bias. It presents case studies on each of these factors, then offers lessons learned and corresponding recommendations. – https://cset.georgetown.edu/publication/ai-safety-and-automation-bias/
Researchers Find That Moving Vehicles Could Use Quantum Information to Coordinate Actions
(Matt Swayne – Quantum Insider – 20 November 2024) Researchers at the University of Kent demonstrated that quantum information could be used to coordinate the actions of moving devices, such as drones or autonomous vehicles, potentially improving logistics efficiency and reducing delivery costs. By simulating the phenomenon on IBM’s superconducting quantum computer, the team showed that two devices sharing entangled qubits can influence each other without direct communication, even when separated. The study, published in New Journal of Physics, highlights a novel application of quantum computing to enhance coordination between devices and explores the practical challenges of implementing these strategies on current hardware. – https://thequantuminsider.com/2024/11/20/researchers-find-that-moving-vehicles-could-use-quantum-information-to-coordinate-actions/
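The underlying quantum resource is easy to reproduce in simulation. Below is a minimal sketch in Python with Qiskit (an assumption on my part: the study ran on IBM hardware, but the summary names neither the tooling nor the circuits) showing two simulated “devices” that each measure one half of a shared Bell pair. No message passes between them, yet their outcomes always agree — the kind of communication-free correlation such coordination strategies build on.

```python
# Minimal sketch: two "devices" share a Bell pair and measure locally.
# Hypothetical illustration only -- the paper's actual circuits and
# coordination protocol are not described in the summary above.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

circuit = QuantumCircuit(2, 2)
circuit.h(0)      # put device A's qubit into superposition
circuit.cx(0, 1)  # entangle it with device B's qubit
circuit.measure([0, 1], [0, 1])  # each device measures its own qubit

# Over many runs each individual outcome is random, but the pair is
# perfectly correlated ('00' or '11') with no signal exchanged.
counts = AerSimulator().run(circuit, shots=1024).result().get_counts()
print(counts)  # e.g., {'00': 520, '11': 504}
```

On real hardware, noise degrades these correlations, which is where the practical implementation challenges noted in the study come in.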
The future of the US digital economy depends on equitable access to its jobs
(Robert Maxim, Mark Muro, Yang You, Carl Romer – Brookings – 19 November 2024) Over the past several years, emerging technologies such as generative artificial intelligence (AI) have dominated headlines, while industrial strategies centered on technologies such as semiconductors have become central to U.S. economic policymaking. Meanwhile, a growing stream of scholarship has shown that certain groups—including women and many workers of color—remain underrepresented in technology-oriented fields, despite the importance of diverse workforces for firm, industry, and national competitiveness. – https://www.brookings.edu/articles/the-future-of-the-us-digital-economy-depends-on-equitable-access-to-its-jobs/
Safer together: How governments can enhance the AI Safety Institute Network’s role in global AI governance
(Frank Ryan, George Gor, Niki Iliadis – OECD.AI – 18 November 2024) As we integrate AI into every facet of society—from healthcare to national security—ensuring that the technology is secure, safe, and trustworthy is more than a technical challenge. It is a global imperative. While each country must address its own AI risks, the technology’s ever-growing reach across borders demands coordinated efforts. The recently launched International Network of AI Safety Institutes is one of the most promising initiatives to address this need. Established in May 2024 at the Seoul AI Summit, the AISI Network’s mission is “to promote the safe, secure, and trustworthy development of AI.” While the effort is commendable, it is important to ask whether such an ambitious, collaborative body can effectively govern a technology as dynamic and integral to national security and competitiveness as AI. – https://oecd.ai/en/wonk/ai-safety-institute-networks-role-global-ai-governance
GOVERNANCE AND LEGISLATION
UK says a new law banning social media for under-16s is ‘on the table’
(Alexander Martin – The Record – 20 November 2024) The British government is considering banning children from using social media as part of the country’s efforts to address the impact of the online world on young people’s wellbeing. Setting out his priorities on Wednesday for the online safety regulator Ofcom, Peter Kyle, the government’s technology secretary, announced a new study on the effects social media has on under-16s. – https://therecord.media/britain-social-media-ban-children-proposal
Bipartisan quantum funding bill advances from committee
(Alexandra Kelley – NextGov – 19 November 2024) A bill that would accelerate the Department of Energy’s quantum information sciences research efforts advanced through the Senate Committee on Energy and Natural Resources Tuesday, signaling ongoing congressional interest in pushing emerging technology-centric legislation through both chambers. The Department of Energy Quantum Leadership Act of 2024 — a bipartisan bill authored by Sens. Dick Durbin, D-Ill., and Steve Daines, R-Mont. — contains multiple provisions related to quantum technology and sciences research, namely funding federal efforts in quantum networking research and development, establishing domestic foundry programs and conducting industry outreach efforts. – https://www.nextgov.com/emerging-tech/2024/11/bipartisan-quantum-funding-bill-advances-committee/401158/?oref=ng-homepage-river
New TSA cyber rules leave lawmakers, industry hopeful for happy medium regulations
(David DiMolfetta – NextGov – 19 November 2024) The Transportation Security Administration is out with another cybersecurity rule proposal, and the release has resurfaced recurring discussions about overlapping cybersecurity reporting laws and “check-the-box” mentalities that many cyber thought leaders argue don’t end up protecting critical systems from hackers. The notice of proposed rulemaking issued earlier this month would require a slew of pipeline, freight railroad and passenger railroad owners and operators to establish cybersecurity risk management programs that aim to help the surface transportation landscape respond to digital incidents. It followed earlier rounds of TSA cybersecurity rules, born out of the 2021 Colonial Pipeline incident that motivated the Biden administration to invigorate U.S. cyber posture. – https://www.nextgov.com/cybersecurity/2024/11/new-tsa-cyber-rules-leave-lawmakers-industry-hopeful-happy-medium-regulations/401148/?oref=ng-homepage-river
Trust and security are top concerns in the public sector’s use of generative AI, survey says
(Edward Graham – NextGov – 19 November 2024) Public sector organizations overwhelmingly believe it is important for them to adopt generative artificial intelligence technologies but remain concerned about trust in the new capabilities, according to a survey from Amazon Web Services published on Tuesday. The report — the results of which were shared exclusively with Nextgov/FCW ahead of its publication — found that 89% of participants said it was somewhat or critically important for their institutions to embrace GenAI, even as they also acknowledged limitations with the broader deployment of the tools across their organizations. – https://www.nextgov.com/artificial-intelligence/2024/11/trust-and-security-are-top-concerns-public-sectors-use-generative-ai-survey-says/401134/?oref=ng-homepage-river
SECURITY
Five alleged members of Scattered Spider cybercrime group charged for breaches, theft of $11 million
(Jonathan Greig – The Record – 20 November 2024) The Justice Department unsealed charges against five men accused of running prolific phishing campaigns that allowed them to steal employee credentials, gain access to sensitive data and pilfer millions of dollars. A Justice Department spokesperson confirmed that the five are part of the notorious Scattered Spider group — responsible for several devastating cyber incidents including the ransomware attack on MGM Casino last year. – https://therecord.media/five-scattered-spider-members-charged-breaches-11-million-theft
Ghost Tap: Hackers Exploiting NFCGate to Steal Funds via Mobile Payments
(Ravie Lakshmanan – The Hacker News – 20 November 2024) Threat actors are increasingly banking on a new technique that leverages near-field communication (NFC) to cash out victims’ funds at scale. The technique, codenamed Ghost Tap by ThreatFabric, enables cybercriminals to cash out money from stolen credit cards linked to mobile payment services such as Google Pay or Apple Pay by relaying NFC traffic. “Criminals can now misuse Google Pay and Apple Pay to transmit your tap-to-pay information globally within seconds,” the Dutch security company told The Hacker News in a statement. “This means that even without your physical card or phone, they can make payments from your account anywhere in the world.” – https://thehackernews.com/2024/11/ghost-tap-hackers-exploiting-nfcgate-to.html
NHIs Are the Future of Cybersecurity: Meet NHIDR
(The Hacker News – 20 November 2024) The frequency and sophistication of modern cyberattacks are surging, making it increasingly challenging for organizations to protect sensitive data and critical infrastructure. When attackers compromise a non-human identity (NHI), they can swiftly exploit it to move laterally across systems, identifying vulnerabilities and compromising additional NHIs in minutes. While organizations often take months to detect and contain such breaches, rapid detection and response can stop an attack in its tracks. – https://thehackernews.com/2024/11/nhis-are-future-of-cybersecurity-meet.html
OWASP Warns of Growing Data Exposure Risk from AI in New Top 10 List for LLMs
(James Coker – Infosecurity Magazine – 20 November 2024) Sensitive information disclosure via large language models (LLMs) and generative AI has become a more critical risk as AI adoption surges, according to the Open Worldwide Application Security Project (OWASP). As a result, ‘sensitive information disclosure’ has been designated as the second biggest risk to LLMs and GenAI in OWASP’s updated Top 10 List for LLMs, up from sixth in the original 2023 version of the list. – https://www.infosecurity-magazine.com/news/owasp-data-exposure-risk-ai/
Hackers Hijack Jupyter Servers for Sport Stream Ripping
(Phil Muncaster – Infosecurity Magazine – 20 November 2024) Security researchers have uncovered a surprising new attack methodology for illegal sports streaming, which uses hijacked Jupyter servers. Aqua Security threat hunters used information gathered from the vendor’s honeypots to discover the campaign. They found “several dozen events” where legitimate open source tool “ffmpeg” was being dropped and executed on its Jupyter Lab and Jupyter Notebook honeypots. – https://www.infosecurity-magazine.com/news/hijack-jupyter-servers-sport/
One Deepfake Digital Identity Attack Strikes Every Five Minutes
(Phil Muncaster – Infosecurity Magazine – 20 November 2024) Fraudsters are using deepfake technology with growing frequency to help them bypass digital identity verification checks, Entrust has warned. The identity security specialist revealed the findings in its Entrust Onfido 2025 Identity Fraud Report yesterday. It is based on data collected from the millions of identity verifications the vendor makes each year across 195 countries. – https://www.infosecurity-magazine.com/news/deepfake-identity-attack-every/
DEFENSE, INTELLIGENCE, AND WAR
DIU picks 7 companies to support Replicator autonomy, C2 efforts
(Ashley Roque – Breaking Defense – 20 November 2024) The Defense Innovation Unit (DIU) has inked deals with seven software developers and tasked them with advancing autonomy and command-and-control work aimed at propelling the Replicator initiative forward. “Many leading AI [artificial intelligence] and autonomy firms are outside of our traditional defense industrial base, and DIU is working actively with partners across the department to bring the very best capabilities from the US tech sector to bear in support of our most critical warfighter needs,” DIU Director Doug Beck said in today’s announcement. “This latest step in the Replicator initiative is a critical example of that teamwork in action.” – https://breakingdefense.com/2024/11/diu-picks-7-companies-to-support-replicator-autonomy-c2-efforts/
Turkey joins drone-carrier operations club with first takeoff and landing from Turkish ship
(Agnes Helou – Breaking Defense – 20 November 2024) Turkish firm Baykar’s Bayraktar TB3 unmanned combat aerial vehicle (UCAV) completed its first successful takeoff from and landing on the Turkish ship TCG Anadolu, a short-runway vessel, the firm said Tuesday in a statement. “The flight, conducted at the convergence of the Aegean and Mediterranean Seas, lasted for 46 minutes before the aircraft successfully landed back on the same short runway, without the need for any landing support equipment,” Baykar said in the statement. The Bayraktar TB3 performed the test on Nov. 19 after completing open-sea shipboard trials. – https://breakingdefense.com/2024/11/turkey-joins-drone-carrier-operations-club-with-first-takeoff-and-landing-from-turkish-ship/