20230515

Quantum Entanglement 

Backpropagation through Time

Identification of Potential Terrorists and Adversary Planning: Emerging Technologies and New Counter-terror Strategies — New algorithms and hardware technology offer possibilities for the pre-detection of terrorism far beyond even the imagination and salesmanship of people hoping to apply forms of deep learning studied in the IEEE Computational Intelligence Society (CIS) decades ago. For example, new developments in Analog Quantum Computing (AQC) give us a concrete pathway to options like a forwards time camera or backwards time telegraph (BTT), a pathway which offers about a 50% probability of success for a well-focused effort over just a few years. However, many of the new technologies come with severe risks and/or important opportunities in other sectors. This paper discusses the possibilities, risks, and tradeoffs relevant to several different forms of terrorism.


Breakthrough Technology for Prediction and Control — Computational intelligence (CI) includes deep learning, neural networks, and brain-like intelligent systems in general; allied technologies include the Internet of Things (IoT), Brain-Computer Interface (BCI), and Quantum Information Science and Technology (QuIST).
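
Since this note is tagged "Backpropagation through Time" and the CI list above leads with deep learning, a minimal sketch of BPTT may help readers who have not seen it written out. This is an illustrative toy, not anything from the paper summarized here: a vanilla tanh recurrent network with a linear readout and squared-error loss on random data, in plain numpy; every name and dimension below is an assumption.

    # Minimal backpropagation-through-time (BPTT) sketch for a vanilla RNN.
    # Illustrative assumptions: tanh hidden units, linear readout,
    # squared-error loss summed over time, no truncation of the unroll.
    import numpy as np

    rng = np.random.default_rng(0)
    T, n_in, n_h = 8, 3, 5                        # sequence length, input dim, hidden dim
    Wx = rng.normal(scale=0.1, size=(n_h, n_in))  # input-to-hidden weights
    Wh = rng.normal(scale=0.1, size=(n_h, n_h))   # hidden-to-hidden (recurrent) weights
    wo = rng.normal(scale=0.1, size=n_h)          # hidden-to-output weights
    xs = rng.normal(size=(T, n_in))               # toy input sequence
    ys = rng.normal(size=T)                       # toy targets

    # Forward pass: unroll the recurrence, caching every hidden state.
    hs = np.zeros((T + 1, n_h))                   # hs[0] is the initial state
    for t in range(T):
        hs[t + 1] = np.tanh(Wx @ xs[t] + Wh @ hs[t])
    preds = hs[1:] @ wo

    # Backward pass: walk the unrolled graph in reverse, carrying the
    # gradient with respect to the hidden state back through time.
    dWx, dWh, dwo = np.zeros_like(Wx), np.zeros_like(Wh), np.zeros_like(wo)
    dh_next = np.zeros(n_h)                       # gradient flowing in from step t+1
    for t in reversed(range(T)):
        dy = 2.0 * (preds[t] - ys[t])             # d(loss)/d(pred_t) for squared error
        dwo += dy * hs[t + 1]
        dh = dy * wo + dh_next                    # direct path plus paths through later steps
        dpre = dh * (1.0 - hs[t + 1] ** 2)        # tanh'(z) = 1 - tanh(z)^2
        dWx += np.outer(dpre, xs[t])
        dWh += np.outer(dpre, hs[t])
        dh_next = Wh.T @ dpre                     # pass gradient to step t-1

A production version would add truncation of the unroll window and gradient clipping; the essential idea is only the reversed loop that carries dh_next back through the recurrence.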

  1. Using the same type of desktop machinery which created three entangled photons for the Greenberger, Horne and Zeilinger (GHZ) experiment, replicate the stunning preliminary results achieved in 2015 in an extended experiment supporting the time-symmetric reformulation of quantum physics. Because of the preliminary results so far and the strong underlying logic, the probability of success is estimated at 80%. Note that success would also open the door to many other new technologies, and even failure would provide important clarification about advanced QuIST modeling requirements. (The GHZ state itself is written out after this list.)

  2. Enhance the existing approach to quantum ghost imaging by using that same GHZ source: use two photons on the left to create the recorded image and detect when an entangled triplet is recorded, and the third photon on the right to reach into space to the object to be imaged. This is a mathematical task aimed at proving that coincidence detection can be done entirely on the left-hand side, without a space-based detector. Even if this stage fails, lessons learned would inform subsequent BTT development. (A toy correlation-based ghost-imaging sketch follows this list.)

  3. Attach the new triphoton ghost imaging system to a powerful telescope imaging the sun, so the third photon returns through the eyepiece. If step 2 succeeds, this would yield an image of the sun eight minutes forward in time, unlike conventional images, which are eight minutes old. Given the sun’s dynamics, this would clearly demonstrate a new era in QuIST and offer advance solar flare warnings. (The eight-minute figure, and the fiber delay in step 4, are checked in the arithmetic after this list.)

  4. Integrate the triphoton system with long, slow optical fibers that curve light paths, enabling forward-time camera or BTT capabilities on Earth—realizing science fiction visions. Strict scientific protocols should limit detailed discussion of steps 2–4 until step 1 establishes firm confidence.
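
For reference on step 1, and as a standard textbook fact rather than anything new from this note, the three-photon state produced by a GHZ source can be written as

    |GHZ⟩ = (|HHH⟩ + |VVV⟩) / √2,

where H and V denote horizontal and vertical polarization. Its defining property is the perfect three-way correlation: measuring any one photon in the H/V basis fixes the polarization of the other two, and this is the correlation the coincidence logic of step 2 relies on.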
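
As a point of reference for step 2, here is a minimal classical toy of correlation-based ghost imaging with coincidence gating, in plain numpy. It is emphatically not the proposed triphoton scheme: the random speckle patterns, the one-dimensional object, the single-pixel "bucket" detector, and the Boolean gate standing in for triplet detection are all illustrative assumptions. It shows only the bookkeeping by which an image is recovered from correlations between a reference arm and a bucket signal.

    # Toy correlation-based ghost imaging with coincidence gating (classical).
    import numpy as np

    rng = np.random.default_rng(1)
    n_pix, n_frames = 64, 20000
    obj = np.zeros(n_pix)                      # hidden 1-D object T(x)
    obj[20:30] = 1.0
    obj[40:44] = 0.5

    speckle = rng.random((n_frames, n_pix))    # recorded reference patterns I_k(x)
    bucket = speckle @ obj                     # bucket detector: total light past the object
    coinc = rng.random(n_frames) < 0.3         # frames where the coincidence gate fired

    # Keep only gated frames, then correlate the bucket signal with the
    # reference patterns: G(x) = <S*I(x)> - <S><I(x)> recovers the object.
    I, S = speckle[coinc], bucket[coinc]
    ghost = (S[:, None] * I).mean(axis=0) - S.mean() * I.mean(axis=0)
    print(np.round(ghost[16:34], 3))           # the bump should align with obj[20:30]

The reconstruction works because, for independent speckle pixels, only pixel x correlates with its own contribution to the bucket sum, so G(x) comes out proportional to the object's transmission profile.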
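
Two small arithmetic checks behind steps 3 and 4, using standard physical constants; the refractive index is a typical value for silica fiber, assumed here rather than taken from the note.

    # Light-travel arithmetic for steps 3 and 4.
    c = 299_792_458.0                     # speed of light in vacuum, m/s
    au = 1.495978707e11                   # mean Sun-Earth distance, m
    print(au / c / 60)                    # ~8.3 minutes: the "eight minutes" in step 3

    n_fiber = 1.468                       # typical refractive index of silica fiber (assumed)
    length_m = 1_000.0                    # one kilometer of fiber
    print(n_fiber * length_m / c * 1e6)   # ~4.9 microseconds of delay per km

On this arithmetic, a kilometer of fiber buys only about five microseconds of delay, so a terrestrial demonstration along the lines of step 4 would involve time offsets many orders of magnitude shorter than the eight-minute solar case.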

Keywords. Pre-detection, terrorism, nuclear proliferation, cyberblitzkrieg, time-symmetric physics, GHZ, deep learning, Internet of Things, backwards time, retrocausality

20230504

 
Time Magazine, June 2023: How Artificial Intelligence Could Save the Day. The threat of extinction, and how AI can help protect biodiversity in nature.

The Conversation: If we’re going to label AI an ‘extinction risk’, we need to clarify how it could happen. As a professor of AI, I am also in favor of reducing any risk, and prepared to work on it personally. But any statement worded in such a way is bound to create alarm, so its authors should probably be more specific and clarify their concerns.

CNN: AI industry and researchers sign statement warning of ‘extinction’ risk. Dozens of AI industry leaders, academics and even some celebrities called for reducing the risk of global annihilation due to artificial intelligence, arguing that the threat of an AI extinction event should be a top global priority.

NYT: AI Poses ‘Risk of Extinction,’ Industry Leaders Warn. Leaders from OpenAI, Google DeepMind, Anthropic and other A.I. labs warn that future systems could be as deadly as pandemics and nuclear weapons.

BBC: Experts warn of artificial intelligence risk of extinction. Artificial intelligence could lead to the extinction of humanity, experts — including the heads of OpenAI and Google DeepMind — have warned.

PBS: Artificial intelligence raises risk of extinction, experts warn. Scientists and tech industry leaders, including high-level executives at Microsoft and Google, issued a new warning Tuesday about the perils that artificial intelligence poses to humankind.

NPR: Leading experts warn of a risk of extinction from AI. Experts issued a dire warning on Tuesday: Artificial intelligence models could soon be smarter and more powerful than us, and it is time to impose limits to ensure they don't take control over humans or destroy the world.

CBC: Artificial intelligence poses 'risk of extinction,' tech execs and experts warn. More than 350 industry leaders sign a letter equating potential AI risks with pandemics and nuclear war.

CBS: AI could pose "risk of extinction" akin to nuclear war and pandemics, experts say. Artificial intelligence could pose a "risk of extinction" to humanity on the scale of nuclear war or pandemics, and mitigating that risk should be a "global priority," according to an open letter signed by AI leaders such as Sam Altman of OpenAI as well as Geoffrey Hinton, known as the "godfather" of AI.

USA Today: AI poses risk of extinction, 350 tech leaders warn in open letter. CAIS said it released the statement as a way of encouraging AI experts, journalists, policymakers and the public to talk more about urgent risks relating to artificial intelligence.

CNBC: AI poses human extinction risk on par with nuclear war, Sam Altman and other tech leaders warn. Sam Altman, CEO of ChatGPT-maker OpenAI, as well as executives from Google’s AI arm DeepMind and Microsoft, were among those who supported and signed the short statement.

Wired: Runaway AI Is an Extinction Risk, Experts Warn. A new statement from industry leaders cautions that artificial intelligence poses a threat to humanity on par with nuclear war or a pandemic.

Forbes: Geoff Hinton, AI’s Most Famous Researcher, Warns Of ‘Existential Threat’ From AI. “The alarm bell I’m ringing has to do with the existential threat of them taking control,” Hinton said Wednesday, referring to powerful AI systems. “I used to think it was a long way off, but I now think it's serious and fairly close.”

The Guardian: Risk of extinction by AI should be global priority, say experts. Hundreds of tech leaders call for world to treat AI as danger on par with pandemics and nuclear war.

The Associated Press: Artificial intelligence raises risk of extinction, experts say in new warning. Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Al Jazeera: Does artificial intelligence pose the risk of human extinction? Tech industry leaders issue a warning as governments consider how to regulate AI without stifling innovation.

The Atlantic: We're Underestimating the Risk of Human Extinction. An Oxford philosopher argues that we are not adequately accounting for technology's risks—but his solution to the problem is not for Luddites.

Sky News: AI poses a similar extinction risk to nuclear war and pandemics, say industry experts. The warning comes after the likes of Elon Musk and Prime Minister Rishi Sunak also sounded significant notes of caution about AI in recent months.

80,000 Hours: The Case for Reducing Existential Risk. Concerns about human extinction have started a new movement working to safeguard civilisation, which has been joined by Stephen Hawking, Max Tegmark, and new institutes founded by researchers at Cambridge, MIT, Oxford, and elsewhere.

The Washington Post: AI poses ‘risk of extinction’ on par with nukes, tech leaders say. Dozens of tech executives and researchers signed a new statement on AI risks, but their companies are still pushing the technology.

TechCrunch: OpenAI’s Altman and other AI giants back warning of advanced AI as ‘extinction’ risk. In a Twitter thread accompanying the launch of the statement, CAIS director Dan Hendrycks expands on the aforementioned statement, naming “systemic bias, misinformation, malicious use, cyberattacks, and weaponization” as examples of “important and urgent risks from AI — not simply risk of extinction.”