TOP OF THE DAY – Securing Critical Infrastructure in the Age of AI
(Center for Security and Emerging Technology – October 2024) As critical infrastructure operators and providers seek to harness the benefits of new artificial intelligence capabilities, they must also manage associated risks from both AI-enabled cyber threats and potential vulnerabilities in deployed AI systems. In June 2024, CSET led a workshop to assess these issues. This report synthesizes our findings, drawing on lessons from cybersecurity and insights from critical infrastructure sectors to identify challenges and potential risk mitigations associated with AI adoption. – Securing Critical Infrastructure in the Age of AI | Center for Security and Emerging Technology (georgetown.edu)
Governance
(Xhoi Zajmi – Euractiv – 30 September 2024) The rise and integration of artificial intelligence into our daily lives is no longer a futuristic fantasy, but new data shows AI in the media sector might do more harm than good. Whether AI leads to better journalism is an ongoing debate, but many media companies across the world are not waiting to find out how the new technology lands. The media sector is jumping on the trend and embracing the latest developments in the hope of engaging audiences, keeping revenue above water, and sustaining their business models. Data published by Bentley University and Gallup in their latest Business in Society Report shows that a majority of Americans (56%) believe AI, in general, does equal amounts of harm and good. However, the share of those who believe the harm caused by AI outweighs its benefits remains greater than the share who believe the opposite. – Trust issues… AI’s double-edged sword cuts a fine line for modern journalism – Euractiv
(Xhoi Zajmi – Euractiv – 30 September 2024) Innovation is essential for maximising the potential of forests and addressing global challenges while paving the way towards a sustainable future for the forestry sector, according to the Food and Agriculture Organisation of the United Nations. In its report, “The State of the World’s Forests 2024: Forest-sector innovations towards a more sustainable future”, FAO explores ways to scale up forest conservation, restoration and sustainable use through innovation. – Innovate and maximise forest potential to fight food and climate challenges, says FAO – Euractiv
(Alexandra Kelley – NextGov – 30 September 2024) In the State Department’s ongoing development and deployment of its internal artificial intelligence chatbot, collaboration is just as important as the technology underpinning these systems, according to agency officials. Matthew Graviss, State’s chief data and artificial intelligence officer, joined Gharun Lacy, the deputy assistant secretary and assistant director of the Diplomatic Security Service for Cyber and Technology Security at State, to update Nextgov/FCW on the department’s internal AI-powered chatbot, which is meant to streamline department operations. – State’s AI chatbot journey started with collaboration – Nextgov/FCW
(Jacob Wulff Wold – Euractiv – 30 September 2024) A range of academics, from Turing award-winner Yoshua Bengio to PhD candidates, have been named chairs and vice-chairs of working groups that will draft a Code of Practice on general-purpose artificial intelligence (GPAI), according to a Monday (30 September) Commission press release. For providers of general-purpose AI systems like ChatGPT, the AI Act relies heavily on the Code of Practice, which will detail what the Act’s risk management and transparency requirements would entail in practice until standards are finalised, sometime in 2026. – Academics to chair drafting the Code of Practice for general-purpose AI – Euractiv
(Amoha Basrur – Observer Research Foundation – 27 September 2024) Artificial Intelligence (AI) is the culmination of the information age. It is the result of decades of advancements in data processing and machine learning, and these systems have in turn come to increasingly govern the flow of information today. AI has been hailed as a great equaliser, promising to revolutionise how people access, interpret, and share knowledge. However, its applications range from translation tools and chatbots to content filtering and censorship tools. Moreover, questions about bias, transparency, and accountability often remain unanswered. The unprecedented avenues that AI has created for information dissemination and control have a flip side: these advancements carry significant ethical considerations if AI-driven information systems are to serve society equitably and responsibly. – Ethical considerations in AI-driven access to information (orfonline.org)
(Anulekha Nandi – Observer Research Foundation – 27 September 2024) Access to information has been recognised as a key element of the sustainable development goals (SDGs) since the adoption of the Rio Declaration in 1992. Since then, it has formed the centrepiece of international development initiatives, finding a place within the 2030 SDG agenda in 2015 to promote participatory governance and strong institutions under SDG 16.10. The Human Rights Council, in its 2020 resolution on freedom of opinion and expression, also made it incumbent upon public institutions to make information publicly available. Access to information has been at the heart of earlier development efforts that aimed to use information and communication technologies such as radio to provide underserved communities with relevant information about economic opportunities, development projects, and best practices to improve their living conditions. – The age of AI and access to information paradox (orfonline.org)
(Siddharth Yadav – Observer Research Foundation – 27 September 2024) Over the past decade, algorithms have become the invisible vehicles through which most online activities are carried out, from the operation of search engines to the functioning of social media platforms. Algorithms are also permeating the public sector, with uses in urban planning, public resource allocation, and the processing of immigration applications, among others. This development has been accompanied by digitally distributed information contributing to socio-political friction and, in extreme cases, to the eruption of violence. The Indian government, along with governments around the world, has realised the need to regulate online platforms due to their impact on socio-political life. Central to the regulatory scramble is the issue of algorithms and automated decision-making systems that deliver information to millions of screens globally. Individuals are traditionally seen as the primary decision-makers regarding the information they consume. However, algorithms and automated recommender systems now play a crucial, and almost invisible, role in distributing information to users according to opaque criteria. Individuals using online platforms are largely unaware of such systems or unable to decipher how they function. Consequently, users exercise only limited control over the content presented to them. This “control asymmetry” is fostering mistrust of tech companies and, by extension, of public institutions. To address this opacity and enable proper regulatory measures, it is crucial to first define what algorithmic transparency entails. – Algorithmic transparency: Public access to information on automated decision-making (orfonline.org)
(Eugene Volokh – Lawfare – 27 September 2024) Generative artificial intelligence (AI) output is likely protected by the First Amendment, much like human-written speech is generally protected. But the existing First Amendment exceptions, such as that for defamation (written libel or oral slander), would apply to such output. AI companies therefore enjoy substantial protection for AI-generated speech, but not absolute protection. – First Amendment Limits on AI Liability | Lawfare (lawfaremedia.org)
Geostrategies
(Raluca Besliu – The Parliament – 30 September 2024) With semiconductors at the core of AI and consumer tech, Europe faces intense competition for crucial supply chains. But amid US-China tensions, the bloc has a unique opportunity to carve its own path. – Microchip supply chains: The key to the EU’s AI competitiveness? (theparliamentmagazine.eu)
Security
(Ionut Arghire – SecurityWeek – 30 September 2024) Patelco Credit Union has informed authorities that the information of more than 1 million individuals was stolen in a ransomware attack this summer. The incident was identified on June 29 and resulted in Patelco taking some of its day-to-day banking systems offline, the company said, explaining that it led to an outage affecting the credit union’s online banking services, mobile application, and call center. – Patelco Credit Union Data Breach Impacts Over 1 Million People – SecurityWeek
(Eduard Kovacs – SecurityWeek – 30 September 2024) The Community Clinic of Maui in Hawaii, a nonprofit healthcare organization doing business as Malama I Ke Ola Health Center, informed authorities in the US last week that a cyberattack suffered earlier this year has resulted in a data breach impacting over 120,000 individuals. Local media reported in May that it took the Maui healthcare organization more than two weeks to reopen after experiencing “major computer problems”. – Hawaii Health Center Discloses Data Breach After Ransomware Attack – SecurityWeek
(Ionut Arghire – SecurityWeek – 30 September 2024) The cybercriminal gang tracked as Storm-0501 is targeting hybrid cloud environments of US organizations in multiple sectors, Microsoft warns. A financially motivated group relying on commodity and open source tools for ransomware deployments, Storm-0501 has been active since 2021, when it was using the Sabbath ransomware in attacks against US schools. – Microsoft: Cloud Environments of US Organizations Targeted in Ransomware Attacks – SecurityWeek
(James Coker – Infosecurity Magazine – 30 September 2024) Over a third (34%) of English schools and colleges were hit by a cyber incident in the 2023/24 academic year, according to a new government report. A teacher survey by the exam watchdog, the Office of Qualifications and Examinations Regulation (Ofqual), found that 20% of schools and colleges were unable to recover immediately following an incident, with 4% taking more than half a term to return to normal operations. – Cyber-Attacks Hit Over a Third of English Schools – Infosecurity Magazine (infosecurity-magazine.com)
(Phil Muncaster – Infosecurity Magazine – 30 September 2024) The UK’s National Cyber Security Centre (NCSC) teamed up with government agencies across the Atlantic to issue a new alert about Iranian cyber-threats on Friday. Released in concert with the FBI, US Cyber Command – Cyber National Mission Force (CNMF) and the Department of the Treasury, the security advisory claimed that Iran’s Islamic Revolutionary Guard Corps (IRGC) is behind a growing spear-phishing campaign. – UK and US Warn of Growing Iranian Spear Phishing Threat – Infosecurity Magazine (infosecurity-magazine.com)
Defense, Intelligence, and War
(Theresa Hitchens – Breaking Defense – 30 September 2024) The National Geospatial Intelligence Agency (NGA) today issued a call to industry — worth up to $708 million over a maximum of seven years — for help training AI-driven computer vision systems to, among other tasks, process satellite imagery and identify targets of interest. Under the Sequoia program indefinite delivery/indefinite quantity (ID/IQ) contract, chosen vendors will provide data labeling, which allows artificial intelligence and machine learning systems to discriminate among objects. It is a foundational capability in particular for NGA’s sprawling Maven program, according to NGA officials. NGA took over Maven from the Defense Department in 2022. NGA gathers imagery from satellites and aircraft, analyzes it, and then disseminates the resultant geospatial intelligence (GEOINT) products (such as 3D maps) to users across the US government, including to DoD leaders and military commanders. – NGA seeks help training AI to translate imagery for targeting intel – Breaking Defense
(Ryan Naraine – SecurityWeek – 30 September 2024) A professional hacking team linked to the North Korean government has broken into Diehl Defence, a German company that manufactures Iris-T air defense systems, using a clever phishing campaign with fake job offers and advanced social engineering tactics, according to a report by Der Spiegel. The attack, pinned on the Kimsuky APT, combined the use of booby-trapped PDF files with spear-phishing lures offering Diehl Defence employees jobs with American defense contractors. – North Korea Hackers Linked to Breach of German Missile Manufacturer – SecurityWeek
(Valerie Insinna, Ashley Roque – Breaking Defense – 30 September 2024) The Pentagon has greenlit a second Replicator initiative, this time taking aim at the problem of countering small drones at US military installations across the globe, the department announced today. In a Sept. 27 memo detailing the new effort, Defense Secretary Lloyd Austin charged Deputy Defense Secretary Kathleen Hicks with developing a plan for Replicator 2, with the intent of seeking funding for the new project in the upcoming fiscal 2026 budget request and fielding “meaningfully improved” counter-drone capabilities within 24 months of receiving money from Congress. – Pentagon homes in on counter-drone tech in Replicator 2 initiative – Breaking Defense
(Euractiv/Reuters – 30 September 2024) A report that Russia is developing a China-backed attack drone programme for the war in Ukraine is “deeply concerning”, a European Union spokesperson said on Friday (27 September). Reuters reported on Wednesday that IEMZ Kupol, a subsidiary of Russian state-owned arms company Almaz-Antey, has developed and flight-tested a new drone model called Garpiya-3 (G3) in China with the help of local specialists. – EU concerned by report of Russia producing attack drones in China – Euractiv
(Sydney J. Freedberg Jr. – Breaking Defense – 20 September 2024) The Pentagon’s AI chief, Radha Plumb, wants more small, innovative companies to develop cutting-edge software for the Department of Defense, from the back office to the battlefield. But to make that happen, Plumb acknowledged Friday, she’ll have to assure them their trade secrets will be safe — not just from the government but from prime contractors. It’s a solvable problem, but it’s not yet solved, said Plumb, the Pentagon’s Chief Digital & AI Officer, and her organization can’t solve it alone. – Pentagon CDAO seeks industry input on protecting IP: ‘We’re really open to feedback’ – Breaking Defense
(Rudy Ruitenberg – Defense News – 30 September 2024) The European Union officially opened its defense-innovation office in Kyiv earlier this month, as the bloc seeks to boost cooperation between the Ukrainian and European defense industries. The office is part of Europe’s defense industrial strategy adopted in March, and one role will be connecting the bloc’s startups and innovators with Ukraine’s defense industry and armed forces, the European Commission said in a statement on Friday. The office also aims to strengthen Ukraine’s integration into the European defense-equipment market. – EU opens defense innovation hub in Kyiv to boost industry outreach (defensenews.com)
Legislation
(Associated Press/SecurityWeek – 29 September 2024) California Gov. Gavin Newsom on Sunday vetoed a landmark bill aimed at establishing first-in-the-nation safety measures for large artificial intelligence models. The decision is a major blow to efforts to rein in the homegrown industry, which is rapidly evolving with little oversight. The bill would have established some of the first regulations on large-scale AI models in the nation and paved the way for AI safety regulations across the country, supporters said. – California Governor Vetoes Bill to Create First-in-Nation AI Safety Measures – SecurityWeek