Digest on AI & Emerging Technologies (17 September 2024)

TOP OF THE DAY - Without Freedom of Thought, Future Planning Becomes Impossible. If the United Nations wants to shore up our global future, it should start with protecting our freedom of thought and opinion.

The UN Secretary-General is convening the Summit of the Future this month in New York, designed to reset the international order to address the myriad challenges the world faces in the twenty-first century. Sustainable development, peace and security, the climate crisis and technological innovation are all on the agenda. It is an ambitious program. Our ability to plan for our future depends on our capacity to think clearly. For that, we need freedom of thought. But in today’s tech-driven world, our right to freedom of thought is under attack.

(Susie Alegre – Centre for International Governance Innovation – 16 September 2024) 

Governance & Geostrategies

Facebook owner Meta said on Monday it was banning RT, Rossiya Segodnya and other Russian state media networks from its platforms, claiming the outlets had used deceptive tactics to carry out covert influence operations online. The ban marks a sharp escalation in actions by the world’s biggest social media company against Russian state media, after it spent years taking more limited steps such as blocking the outlets from running ads and reducing the reach of their posts.

(Katie Paul – Reuters – 17 September 2024) 

A total of 134 countries representing 98% of the global economy are now exploring digital versions of their currencies, with almost half at an advanced stage and pioneers like China, the Bahamas and Nigeria starting to see a pick-up in usage. The research by the U.S.-based Atlantic Council think-tank published on Tuesday showed that all G20 nations are now looking into central bank digital currencies (CBDCs), as they are known, and that 44 countries in total are piloting them.

(Marc Jones – Reuters – 17 September 2024) 

Leading artificial intelligence (AI) researchers are sounding the alarm about the catastrophic risks of rapidly advancing AI technology. Geoffrey Hinton and Yoshua Bengio, two of the greatest living AI scientists, believe that rapid AI advances present “societal-scale risks” on par with “pandemics and nuclear war.” Surveys of thousands of top AI researchers estimate a 19 percent probability that humanity loses control of “future advanced AI systems[,] causing human extinction or similarly” negative outcomes. Even the CEOs of OpenAI, Anthropic, and Google DeepMind agree that their technology poses a global-scale threat.

(Peter N. Salib, Simon Goldstein – Lawfare – 16 September 2024)

Using satellites to observe Earth’s systems generates huge amounts of complex data that must be organized and analysed to boost climate intelligence. But recent advances in data processing and forecasting are transforming raw Earth observation data into actionable insights at unprecedented speeds. When used in conjunction with satellite data, 10 emerging technology trends are making climate insights more accessible and helping to address climate change.

(Minoo Rathnasabapathy, Nikolai Khlystov – World Economic Forum – 16 September 2024)

Microsoft-backed OpenAI said on Monday its safety committee will oversee security and safety processes for the company’s artificial intelligence model development and deployment, as an independent body. The change follows the committee’s own recommendations to OpenAI’s board, which were made public for the first time. OpenAI, the company behind the viral chatbot ChatGPT, formed its Safety and Security Committee this May to evaluate and further develop the company’s existing safety practices.

(Reuters – 16 September 2024)

Because of the wide variety of tasks they can be used to perform, foundation models — a class of artificial intelligence (AI) models trained on large and diverse datasets and capable of performing many tasks — have the potential to significantly shape the economic and social effects of AI. The authors of this report examined the economic and production attributes of pre-trained foundation models to answer the following questions: Does the market for foundation models have the characteristics of a natural monopoly, and, if so, is regulation of that market needed?

(Jon Schmid, Tobias Sytsma, Anton Shenk – RAND Corporation – 12 September 2024)

Regenerative agriculture offers a way to ensure food security and help combat climate change. Developments in AI can help accelerate the transition to regenerative agriculture. We outline how emerging technology can be harnessed and scaled in low- and middle-income countries.

(Jaskiran Warrik, Shreejit Borthakur – World Economic Forum – 10 September 2024) 

Security

The intent of the Indonesian National Armed Forces (TNI) to establish a Cyber Force needs to be supported, but not rushed. In response to the recent hacking of TNI Strategic Intelligence Agency data in June 2024, the TNI commander, General Agus Subiyanto, declared that he would expand the TNI structure to include a new cyber force. Other officials, such as the then-chief of the TNI Information Center, Maj. Gen. R. Nugraha Gumilar, sought to downplay the event, claiming that the information obtained by the hacker was outdated. Subiyanto, by contrast, treated the incident as a serious embarrassment for the TNI and reacted almost immediately.

(Yokie Rahmad Isjchwansyah – ASPI The Strategist – 17 September 2024)

Two Democratic senators are asking leadership in the Biden administration to do more to mitigate risks of artificial intelligence algorithms making biased decisions. Sens. Edward Markey, D-Mass., and Majority Leader Chuck Schumer, D-N.Y., told Office of Management and Budget Director Shalanda Young in a Monday letter that federal agencies need to establish more safeguards to prevent algorithmic discrimination.

(Alexandra Kelley – NextGov – 16 September 2024) 

The US Treasury Department has slapped sanctions on five individuals and one entity associated with the Intellexa Consortium, a global business caught creating and distributing commercial spyware for targeted and mass surveillance campaigns. The latest round of sanctions is part of a broader US government effort to combat the proliferation and misuse of commercial spyware and surveillance tools and comes just days after Apple abruptly abandoned its lawsuit against Israel’s NSO Group.

(Ryan Naraine – SecurityWeek – 16 September 2024)

Hackers are making available the information of US voters in an attempt to undermine confidence in the security of election infrastructure, but the claims made by these hackers are false, according to the FBI and CISA. In a joint public service announcement published last week, the agencies pointed out that most US voter information can be purchased or legitimately acquired, but threat actors continue to make statements suggesting that the information getting leaked is evidence of election infrastructure compromise.

(Eduard Kovacs – SecurityWeek – 16 September 2024) 

The US Government has announced a commitment from the AI industry to reduce image-based sexual abuse. The “voluntary commitments,” which cover both AI model developers and data providers, commit technology firms to act against non-consensual intimate images and child sexual abuse material.

(Stephen Pritchard – Infosecurity Magazine – 16 September 2024)

Defense, Intelligence, and War

Generative artificial intelligence startup Ask Sage recently announced it had deployed its genAI software to the Army’s secure cloud, cArmy. The press release touted a host of processes the new tech could automate, such as software development, cybersecurity testing, and even parts of the federal acquisition system — “to include drafting and generating RFIs, RFPs, scope of work, defining requirements, down-selecting bidders and much more.”

(Sydney J. Freedberg Jr. – Breaking Defense – 16 September 2024) 

Northrop Grumman will release a new “tool box” of connected technologies, physical processors and radar apertures as well as software defined signals intelligence, cyber effects and communications all in one, the company announced at the Air Force Association’s Air Space Cyber conference at National Harbor on Monday.

(Patrick Tucker – Defense One – 16 September 2024)

The Pentagon announced today it will help lead a $3 billion U.S. Commerce Department initiative designed to make sure the U.S. military has access to a reliable domestic microelectronics supply chain. The first task order under what’s known as the Secure Enclave program was awarded to leading microchip developer Intel Corp. The funding will focus on improving commercial fabrication facilities and builds on work Intel has done through other DOD programs.

(Courtney Albon – Defense News – 16 September 2024)

Since the Air Force and Space Force launched their first generative AI tool in June, more than 80,000 airmen and guardians have experimented with the system, according to the Air Force Research Laboratory. The lab told Defense News this week that the early adopters come from a range of career fields and have used the tool for a variety of tasks — from content creation to coding.

(Courtney Albon – Defense News – 16 September 2024)

Defense startup Anduril Industries, perhaps most known for its artificial intelligence applications for drone warfare, is setting its sights farther up — using its own funds to develop new satellites for monitoring the heavens. (…) Anduril’s Senior Vice President of Space and Engineering Gokul Subramanian said the company intends for the satellites to be launched by the end of 2025 — with the Space Force clearly the target market.

(Theresa Hitchens – Breaking Defense – 13 September 2024)

Legislation

When French prosecutors took aim at Telegram boss Pavel Durov, they had a trump card to wield – a tough new law with no international equivalent that criminalises tech titans whose platforms allow illegal products or activities. The so-called LOPMI law, enacted in January 2023, has placed France at the forefront of a group of nations taking a sterner stance on crime-ridden websites. But the law is so recent that prosecutors have yet to secure a conviction.

(Gabriel Stargardter – Reuters – 17 September 2024) 

Imagine this scenario: America’s top scientists are researching a new technology on behalf of a U.S. regulatory agency. They report to the agency head that there is a substantial risk that the new technology, if deployed, could cause enormous harm to the U.S. civilian population. However, they cannot quantify the risks involved with any precision. There is a great deal of uncertainty around the potential effects of the novel technology. But the majority of researchers in the field believe that it is likely to be remarkably dangerous. Should the relevant agency, or Congress, regulate the new technology? Or, given the inherently uncertain nature of the risk, should they do nothing? 

(Matthew Tokson – Lawfare – 16 September 2024)
