
[Image: Discovery Channel Special]
At the confluence of cutting-edge science and space exploration, where magic is born and miraculous discoveries await, an extraordinary figure emerges. Autodidact polymath and protean Renaissance explorer Christopher Altman is an American quantum technologist and NASA-trained commercial astronaut bringing tomorrow's technologies to bear on today's greatest challenges.
In vibrant Japan, immersive studies on a Japanese Fulbright Fellowship juxtaposed the futuristic, neon-lit cityscapes of Tokyo's living cybernetic metropolis against the ancient temples, bonsai gardens, and spartan dojos where Altman practiced bushidō, the traditional Japanese martial arts disciplines of kendo, shōdan kyūdo, and judo.
In 2001, he was recruited to Starlab, the multidisciplinary Deep Future research institute, where his research group's record-breaking artificial intelligence project was featured in a Discovery Channel special and recognized with an official entry in the Guinness Book of World Records, and where he was called to provide expert testimony to the French Senate, Le Sénat, on the long-term future of artificial intelligence.
In the aftermath of the tragic September 11 attacks, Altman volunteered, then was elected, to serve as Chairman of the UNISCA First Committee on Disarmament and International Security. His Chair Report to the General Assembly on the exponential acceleration of converging technologies found resonance at the highest echelons of power, including at the White House through direct meetings with US National Security Advisor Condoleezza Rice, among others, providing early momentum for the creation of the United States Cyber Command. For his contributions to the field, he was selected the following year as recipient of the annual RSA Information Security Award for Outstanding Achievement in Government Policy.
Altman was then tasked to spearhead a priority national security program in Japan, reporting personally to the directors of DARPA QuIST and ARDA/DTO, the direct predecessor to IARPA, under a mandate to create coherent national research estimates and compile long-term science and technology roadmaps for advanced research and development activity across East Asia. He attended conferences including the World Technology Summit and the Gordon Research Conference, collaborated with leading scientists and Nobel laureates, and briefed US national laboratory researchers, policymakers, and research funding agency leaders with a comprehensive assessment of forward-looking trends in the field. His comprehensive national quantum roadmaps went on to serve as the quintessential prototype for the official US Government Quantum Roadmap, an accolade conveyed directly by the program chair leading the initiative at Los Alamos National Laboratory.
At a joint press conference Monday with Virgin Galactic, XCOR, SwRI, and others at the Next-Generation Suborbital Researchers Conference, Astronauts for Hire Inc. announced the selection of its third class of commercial scientist-astronaut candidates to conduct experiments on suborbital flights.
Among those selected was Christopher Altman, inaugural program faculty advisor, teaching fellow, and track chair at Singularity University, and a graduate fellow at the Kavli Institute of Nanoscience, Delft University of Technology.
“The selection process was painstaking,” said Astronauts for Hire Vice President and Membership Chair Jason Reimuller. “We had to choose a handful of applicants who showed just the right balance of professional establishment, broad technical and operational experience, and a background that indicates adaptability to the spaceflight environment.”
“With the addition of these new members to the organization, Astronauts for Hire has solidified its standing as the premier provider of scientist-astronaut candidates,” said its President Brian Shiro. “Our diverse pool of astronauts in training represents more than two dozen disciplines of science and technology, speaks sixteen languages, and hails from eleven countries. We can now handle a much greater range of missions across different geographic regions.”
Altman completed Zero-G and High-Altitude Physiological Training under the Reduced Gravity Research Program at NASA Ames Research Center in Silicon Valley and NASA Johnson Space Center in Houston, and was tasked to represent NASA Ames at the joint US-Japan space conference (JUSTSAP) and the launch conference (PISCES) for an astronaut training facility on the slopes of Mauna Kea Volcano on the Big Island of Hawaii.
Altman’s research has been highlighted in international press and publications including Discover Magazine and the International Journal of Theoretical Physics. He was recently awarded a fellowship to explore the foundations and future of quantum mechanics at the Austrian International Akademie Traunkirchen with Anton Zeilinger.
“The nascent field of commercial spaceflight and the unique conditions afforded by space and microgravity environments offer exciting new opportunities to conduct novel experiments in quantum entanglement, fundamental tests of spacetime, and large-scale quantum coherence,” said Altman.
Two hundred years ago, if you suggested people would comfortably travel in flying machines, reaching any destination in the world in a few hours' time, instantly access the world's cumulative knowledge by speaking to something the size of a deck of cards, or travel to the Moon, or Mars, you'd be labeled a madman. The future is bound only by our imagination.
Someday very soon we may look back on the world today in much the same way we now regard those who lived in the time of Galileo, when everyone lived with such great certainty and self-assuredness that the Earth was flat and the center of the universe. The time is now. A profound shift in consciousness is long overdue. The universe is teeming with life. We're all part of the same human family.
This may be the most momentous moment in our known history: not just for us as a nation, or for humanity, but for the planet. The technological leaps that could come from developing open contact with nonhuman intelligence are almost beyond our comprehension. That is why this is such a monumental moment for us as a collective whole. It could literally change every single one of the eight billion human lives on this planet.
We stand on the shores of a vast cosmic ocean, with untold continents of possibility to explore. As we continue our collective journey, scaling the cosmic ladder of evolution and expanding our reach outward in the transition to a multiplanetary species, Earth will soon be a destination, not just a point of origin.

– Don Williams
Keynote on the Future of Space Exploration, broadcast live to 108 cities around the world
Backpropagation through Time
Identification of Potential Terrorists and Adversary Planning: Emerging Technologies and New Counter-terror Strategies — New algorithms and hardware technology offer possibilities for the pre-detection of terrorism far beyond even the imagination and salesmanship of people hoping to apply forms of deep learning studied in the IEEE Computational Intelligence Society (CIS) decades ago. For example, new developments in Analog Quantum Computing (AQC) give us a concrete pathway to options like a forwards time camera or backwards time telegraph, a pathway which offers about a 50% probability of success for a well-focused effort over just a few years. However, many of the new technologies come with severe risks, and/or important opportunities in other sectors. This paper discusses the possibilities, risks and tradeoffs relevant to several different forms of terrorism.
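The section heading above names backpropagation through time (BPTT), the standard algorithm for training the recurrent deep-learning models the abstract alludes to. As an illustrative aside, not drawn from the paper itself, the core idea can be sketched with a toy scalar recurrence: unroll the network over its time steps in the forward pass, then propagate the loss gradient backwards through every step, accumulating each step's contribution to the shared weight. The function name and toy task below are illustrative assumptions.

```python
# Minimal sketch of backpropagation through time (BPTT) for a toy
# linear recurrence h_t = w * h_{t-1} + x_t with loss 0.5 * (h_T - y)^2.
# Scalar weights keep the unrolled chain rule easy to verify by hand.
# Illustrative example only; not code from the paper discussed above.

def bptt_grad(w, xs, y):
    # Forward pass: unroll the recurrence, storing every hidden state.
    hs = [0.0]
    for x in xs:
        hs.append(w * hs[-1] + x)
    loss = 0.5 * (hs[-1] - y) ** 2

    # Backward pass: carry dL/dh_t back through each time step,
    # accumulating the weight gradient contributed at that step.
    dh = hs[-1] - y           # dL/dh_T
    dw = 0.0
    for t in range(len(xs), 0, -1):
        dw += dh * hs[t - 1]  # via h_t = w * h_{t-1} + x_t
        dh = dh * w           # propagate gradient to h_{t-1}
    return loss, dw

# Sanity check: the analytic gradient matches a finite difference.
xs, y, w = [1.0, 0.5, -0.3], 2.0, 0.9
loss, dw = bptt_grad(w, xs, y)
eps = 1e-6
num = (bptt_grad(w + eps, xs, y)[0] - bptt_grad(w - eps, xs, y)[0]) / (2 * eps)
print(abs(dw - num) < 1e-4)
```

Real recurrent networks apply the same two-pass scheme with vector states and nonlinearities; deep-learning frameworks automate the backward pass over the unrolled graph.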
Breakthrough Technology for Prediction and Control — Computational intelligence (CI) encompasses deep learning, neural networks, and brain-like intelligent systems in general, together with allied technologies: the Internet of Things (IoT), Brain-Computer Interfaces (BCI), and Quantum Information Science and Technology (QuIST).
Keywords. Predetection, terrorism, nuclear proliferation, cyberblitzkrieg, time-symmetric physics, GHz, deep learning, Internet of Things, backwards time, retrocausality
The Conversation: If we’re going to label AI an ‘extinction risk’, we need to clarify how it could happen. As a professor of AI, I am also in favor of reducing any risk, and prepared to work on it personally. But any statement worded in such a way is bound to create alarm, so its authors should probably be more specific and clarify their concerns.
CNN: AI industry and researchers sign statement warning of ‘extinction’ risk. Dozens of AI industry leaders, academics and even some celebrities called for reducing the risk of global annihilation due to artificial intelligence, arguing that the threat of an AI extinction event should be a top global priority.
NYT: AI Poses ‘Risk of Extinction,’ Industry Leaders Warn. Leaders from OpenAI, Google DeepMind, Anthropic and other A.I. labs warn that future systems could be as deadly as pandemics and nuclear weapons.
BBC: Experts warn of artificial intelligence risk of extinction. Artificial intelligence could lead to the extinction of humanity, experts — including the heads of OpenAI and Google DeepMind — have warned.
PBS: Artificial intelligence raises risk of extinction, experts warn. Scientists and tech industry leaders, including high-level executives at Microsoft and Google, issued a new warning Tuesday about the perils that artificial intelligence poses to humankind.
NPR: Leading experts warn of a risk of extinction from AI. Experts issued a dire warning on Tuesday: artificial intelligence models could soon be smarter and more powerful than us, and it is time to impose limits to ensure they don't take control over humans or destroy the world.
CBC: Artificial intelligence poses 'risk of extinction,' tech execs and experts warn. More than 350 industry leaders sign a letter equating potential AI risks with pandemics and nuclear war.
CBS: AI could pose "risk of extinction" akin to nuclear war and pandemics, experts say. Artificial intelligence could pose a "risk of extinction" to humanity on the scale of nuclear war or pandemics, and mitigating that risk should be a "global priority," according to an open letter signed by AI leaders such as Sam Altman of OpenAI as well as Geoffrey Hinton, known as the "godfather" of AI.
USA Today: AI poses risk of extinction, 350 tech leaders warn in open letter. CAIS said it released the statement as a way of encouraging AI experts, journalists, policymakers and the public to talk more about urgent risks relating to artificial intelligence.
CNBC: AI poses human extinction risk on par with nuclear war, Sam Altman and other tech leaders warn. Sam Altman, CEO of ChatGPT-maker OpenAI, as well as executives from Google’s AI arm DeepMind and Microsoft, were among those who supported and signed the short statement.
Wired: Runaway AI Is an Extinction Risk, Experts Warn. A new statement from industry leaders cautions that artificial intelligence poses a threat to humanity on par with nuclear war or a pandemic.
Forbes: Geoff Hinton, AI’s Most Famous Researcher, Warns Of ‘Existential Threat’ From AI. “The alarm bell I’m ringing has to do with the existential threat of them taking control,” Hinton said Wednesday, referring to powerful AI systems. “I used to think it was a long way off, but I now think it's serious and fairly close.”
The Guardian: Risk of extinction by AI should be global priority, say experts. Hundreds of tech leaders call for the world to treat AI as a danger on par with pandemics and nuclear war.
The Associated Press: Artificial intelligence raises risk of extinction, experts say in new warning. Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Al Jazeera: Does artificial intelligence pose the risk of human extinction? Tech industry leaders issue a warning as governments consider how to regulate AI without stifling innovation.
The Atlantic: We're Underestimating the Risk of Human Extinction. An Oxford philosopher argues that we are not adequately accounting for technology's risks — but his solution to the problem is not for Luddites.
Sky News: AI is a similar extinction risk to nuclear war and pandemics, say industry experts. The warning comes after the likes of Elon Musk and Prime Minister Rishi Sunak also sounded significant notes of caution about AI in recent months.
80,000 Hours: The Case for Reducing Existential Risk. Concerns about human extinction have started a new movement working to safeguard civilisation, joined by Stephen Hawking, Max Tegmark, and new institutes founded by researchers at Cambridge, MIT, Oxford, and elsewhere.
The Washington Post: AI poses ‘risk of extinction’ on par with nukes, tech leaders say. Dozens of tech executives and researchers signed a new statement on AI risks, but their companies are still pushing the technology.
TechCrunch: OpenAI’s Altman and other AI giants back warning of advanced AI as ‘extinction’ risk. In a Twitter thread accompanying the launch of the statement, CAIS director Dan Hendrycks expands on the statement, naming “systemic bias, misinformation, malicious use, cyberattacks, and weaponization” as examples of “important and urgent risks from AI — not simply risk of extinction.”