20231111
20230922
GLOBAL INSPIRATIONAL LEADERS AWARD
At the confluence of cutting-edge science and space exploration, where magic is born and miraculous discoveries await, an extraordinary figure emerges: autodidact polymath and protean Renaissance explorer Christopher Altman, an American quantum technologist and NASA-trained commercial astronaut bringing tomorrow's technologies to bear on today's greatest challenges.
In vibrant Japan, immersive studies on a Japanese Fulbright Fellowship brought into sharp contrast the futuristic, neon-lit cityscapes of Tokyo's living cybernetic metropolis and the ancient temples, bonsai gardens, and spartan dojos where Altman practiced bushidō, the traditional Japanese martial arts disciplines of kendo, shōdan kyūdo, and judo.
In 2001, he was recruited to the multidisciplinary, Deep Future research institute Starlab, where his research group's record-breaking artificial intelligence project was featured in a Discovery Channel special and recognized with an official entry in the Guinness Book of World Records, and where he was called to provide expert testimony to the French Senate, Le Sénat, on the long-term future of artificial intelligence.
In the aftermath of the tragic September 11 attacks, Altman volunteered, then was elected to serve as Chairman of the UNISCA First Committee on Disarmament and International Security. His Chair Report to the General Assembly on the exponential acceleration of converging technologies found resonance at the highest echelons of power — at the White House, through direct meetings with US National Security Advisor Condoleezza Rice, among others — providing early momentum for the creation of the United States Cyber Command. For his contributions to the field, he was selected the following year as recipient of the annual RSA Information Security Award for Outstanding Achievement in Government Policy.
Altman was then tasked to spearhead a priority national security program in Japan, personally reporting to the directors of DARPA QuIST and ARDA/DTO, the direct predecessor to IARPA, under a mandate to create coherent national research estimates and compile long-term science and technology roadmaps for advanced research and development activity across East Asia. He attended conferences including the World Technology Summit and the Gordon Research Conference, collaborated with leading scientists and Nobel laureates, and briefed US national laboratory researchers and policy and research funding agency leaders on a comprehensive assessment of forward-looking trends in the field. His national quantum roadmaps went on to serve as the prototype for the creation of the official US Government Quantum Roadmap — an accolade conveyed directly by the program chair leading the initiative at Los Alamos National Laboratory.
Returning to the United States from a graduate research fellowship at the Kavli Institute of Nanoscience, Altman was recruited to lead a futures studies program at NASA's Ames Research Center, where he was mentored by a panel of veteran astronauts and shuttle mission commanders, along with a USAF general, PhD astrophysicist, and former head of US Space Command. Altman conducted manned spaceflight training, then was selected by a committee of current and former NASA astronauts and astronaut trainers as a flight member of the world's first commercial astronaut corps. His keynote on The Future of Spaceflight — broadcast live to 108 cities around the world — served as a catalyst for NASA to fund the corps for its first series of manned spaceflight missions. Altman successfully completed spaceflight training the following spring.
As senior research scientist at PISCES, a technology testbed and astronaut training facility on the slopes of Mauna Kea on the Big Island of Hawaii — where Neil Armstrong and Buzz Aldrin trained for the Apollo 11 Moon landing — Altman served as principal investigator for a team that included NASA and Caltech scientists working together with the inventors and world-record-holding pioneers of free-space quantum teleportation. As Chief Scientist for Artificial Intelligence and Quantum Technology, he works with colleagues to establish the foundation for a global network of satellites linked by macroscopic quantum entanglement for secure quantum communications.
As affiliate researcher at Harvard University, Altman's reach extends far beyond Earth's orbit and out among the stars to seek definitive evidence of extraterrestrial artifacts through the detection of anomalous aerial technosignatures and interstellar objects — a mission complemented by his role as lead astronaut in a program aiming to pinpoint celestial transient events in search of potential exoprobes orbiting the Earth, with preliminary results twice published in the scientific journal Nature.
Sustainable living in space requires sustainable living on Earth, through in situ resource utilization (ISRU) and beneficial, dual-use spin-off technologies. As Chief Astronaut Technical Officer for MIT partner Mars City Design, Altman applies his experience and perspective to directing agency plans for long-term lunar and Mars settlement. As Cofounder and Chief Scientist of SolarCoin, he aims to accelerate our societal transition from petroleum-dependent, scarcity economics to a renewable energy-based, post-scarcity economy. With each step forward, his tireless efforts lift humanity just a little bit closer to the stars — and to a future where we can truly call the whole cosmos home.
20230711

Discovery Channel Special
When the laboratory came up short on research grants in June, I personally went to the President himself when fate brought us together at the same time and place on his first trip overseas after the election. The Commander in Chief, who had just arrived in Brussels for a meeting with NATO, impressed us with both his immediate familiarity with our work and his enthusiasm in response to my earnest request for $1M in budget that had been allocated for national security priority scientific research topics through a grant newly created by Clinton with his last act in office, the 2001 National Nanotechnology Initiative.

Our living arrangements at the lab consisted of an expansive three-bedroom master suite with fully-stocked library, typically reserved for visiting prime ministers, senators, and senior diplomats. My quarters were shared with none other than the project's principal scientific investigator, Hugo de Garis. One midsummer's afternoon as the two of us strolled on a random walk through the sprawling estate and lush wooded grounds surrounding the manor, immersed in a passionate debate on the long-term promise and perils of superintelligence, the ever-eccentric


A recent article in Sifted, the Financial Times (FT) spinoff magazine, highlights our research going back to Starlab, including the AI and time travel research projects, and my subsequent travels across East Asia to create national quantum roadmaps for US national research funding and intelligence community (IC) agency directors. In the years that followed, I continued on through research fellowships in nanoscience and the foundations of quantum mechanics with Nobel physics laureate Anton Zeilinger’s research group in Austria and across Europe, then was recruited to lead a futures initiative at NASA in collaboration with Ray Kurzweil and Google, together with leading companies, luminary scientists, venture capitalists and entrepreneurs from Silicon Valley and around the world.
From manned spaceflight training at NASA to the summit of a volcano where the Apollo 11 astronauts trained before the first landing to put a man on the Moon, to field expeditions employing state-of-the-art sensors in rough desert terrain; from collaborations with leading diplomats to advise the United Nations on critical security issues of the future, to multidisciplinary teams of scientists, researchers, special forces domain experts and engineers field testing next-generation technologies in austere environments — each of these initiatives was undertaken with the singular aim of making a profound and positive impact on the future of humanity, for our children, our children’s children, and the generations yet to come.
“Our deepest fear is not that we are inadequate. Our deepest fear is that we are powerful beyond measure. It is our light — not our darkness — that most frightens us. We oft ask ourselves: ‘Who am I to be brilliant, gorgeous, talented, fabulous?’ Actually, who are we not to be? You are a child of God. Your playing small here doesn’t serve the world. There’s nothing enlightened about shrinking so other people won’t feel insecure around you. We were born to make manifest the glory of God that lies within us. It’s not just in some of us. It’s in everyone—and as we let our light shine, we unconsciously give other people permission to do the same. As we are liberated from our own fear, our presence automatically liberates others.”
Astronaut Scientists for Hire
Open New Research Frontier in Space
At a joint press conference Monday at the Next-Generation Suborbital Researchers Conference with Virgin Galactic, XCOR, SwRI, and others, Astronauts for Hire Inc. announced the selection of its third class of commercial scientist-astronaut candidates to conduct experiments on suborbital flights.
Among those selected was Singularity University inaugural program faculty advisor, teaching fellow, and track chair Christopher Altman, a graduate fellow at the Kavli Institute of Nanoscience, Delft University of Technology.
“The selection process was painstaking,” said Astronauts for Hire Vice President and Membership Chair Jason Reimuller. “We had to choose a handful of applicants who showed just the right balance of professional establishment, broad technical and operational experience, and a background that indicates adaptability to the spaceflight environment.”
“With the addition of these new members to the organization, Astronauts for Hire has solidified its standing as the premier provider of scientist-astronaut candidates,” said its President Brian Shiro. “Our diverse pool of astronauts in training represent more than two dozen disciplines of science and technology, speak sixteen languages, and hail from eleven countries. We can now handle a much greater range of missions across different geographic regions.”
Altman completed Zero-G and High-Altitude Physiological Training under the Reduced Gravity Research Program at NASA Ames Research Center in Silicon Valley and NASA Johnson Space Center in Houston, and was tasked to represent NASA Ames at the joint US-Japan space conference (JUSTSAP) and the launch conference (PISCES) for an astronaut training facility on the slopes of Mauna Kea Volcano on the Big Island of Hawaii.
Altman’s research has been highlighted in international press and publications including Discover Magazine and the International Journal of Theoretical Physics. He was recently awarded a fellowship to explore the foundations and future of quantum mechanics at the Austrian International Akademie Traunkirchen with Anton Zeilinger.
“The nascent field of commercial spaceflight and the unique conditions afforded by space and microgravity environments offer exciting new opportunities to conduct novel experiments in quantum entanglement, fundamental tests of spacetime, and large-scale quantum coherence,” said Altman.
20230710
Two hundred years ago, if you suggested people would comfortably travel in flying machines—reaching any destination in the world in a few hours' time—instantly access the world's cumulative knowledge by speaking to something the size of a deck of cards, or travel to the Moon, or Mars, you'd be labeled a madman. The future is bound only by our imagination.
Someday very soon we may look back on the world today in much the same way as we now look back on those who lived in the time of Galileo, when everyone lived with such great certainty and self-assuredness that the Earth was flat and the center of the universe. The time is now. A profound shift in consciousness is long overdue. The universe is teeming with life. We're all part of the same human family.
This is potentially the single most momentous moment in our known history—not just for us as a nation, or us as humanity, but as a planet. The technological leaps that could come from developing open contact with nonhuman intelligence are almost beyond our comprehension. That is why this is such a monumental moment for us as a collective whole. It could literally change every single one of the eight billion human lives on this planet.
We stand on the shores of a vast cosmic ocean, with untold continents of possibility to explore. As we continue forwards in our collective journey, scaling the cosmic ladder of evolution, progressing onwards, expanding our reach outwards in the transition to a multiplanetary species—Earth will soon be a destination, not just a point of origin.
20230708
Overview

– Don Williams
We dream. It's what makes us who we are. Down to our bones, to the core of our cellular memories, passed down through eons of survival, expansion, exploration and growth. The instinct to build, the drive to seek beyond what we know. It's in our DNA.
We cross the oceans, we conquer the skies, unyielding, relentless in our pursuit of the farthest frontiers, venturing forth to launch ourselves outwards and find a new home for our descendants among the stars.
Yesterday's impossible becomes today's greatest achievement—and tomorrow's routine. The heavens beckon, parting open. A new generation of innovators and explorers heeds the call, the invitation to take our species further: not just to visit, but to stay.
Keynote on the Future of Space Exploration, broadcast live to 108 cities around the world
Carpe futurum.
– Christopher Altman
20230705
20230704
How Artificial Intelligence Could Save the Day: The threat of extinction and how AI can help protect biodiversity in Nature
The Conversation: If we’re going to label AI an ‘extinction risk’, we need to clarify how it could happen. As a professor of AI, I am also in favor of reducing any risk, and prepared to work on it personally. But any statement worded in such a way is bound to create alarm, so its authors should probably be more specific and clarify their concerns.
CNN: AI industry and researchers sign statement warning of ‘extinction’ risk. Dozens of AI industry leaders, academics and even some celebrities called for reducing the risk of global annihilation due to artificial intelligence, arguing that the threat of an AI extinction event should be a top global priority.
NYT: AI Poses ‘Risk of Extinction,’ Industry Leaders Warn. Leaders from OpenAI, Google DeepMind, Anthropic and other A.I. labs warn that future systems could be as deadly as pandemics and nuclear weapons.
BBC: Experts warn of artificial intelligence risk of extinction. Artificial intelligence could lead to the extinction of humanity, experts — including the heads of OpenAI and Google Deepmind — have warned.
PBS: Artificial intelligence raises risk of extinction, experts warn. Scientists and tech industry leaders, including high-level executives at Microsoft and Google, issued a new warning Tuesday about the perils that artificial intelligence poses to humankind.
NPR: Leading experts warn of a risk of extinction from AI. Experts issued a dire warning on Tuesday: Artificial intelligence models could soon be smarter and more powerful than us and it is time to impose limits to ensure they don't take control over humans or destroy the world.
CBC: Artificial intelligence poses 'risk of extinction,' tech execs and experts warn. More than 350 industry leaders sign a letter equating potential AI risks with pandemics and nuclear war.
CBS: AI could pose "risk of extinction" akin to nuclear war and pandemics, experts say. Artificial intelligence could pose a "risk of extinction" to humanity on the scale of nuclear war or pandemics, and mitigating that risk should be a "global priority," according to an open letter signed by AI leaders such as Sam Altman of OpenAI as well as Geoffrey Hinton, known as the "godfather" of AI.
USA Today: AI poses risk of extinction, 350 tech leaders warn in open letter. CAIS said it released the statement as a way of encouraging AI experts, journalists, policymakers and the public to talk more about urgent risks relating to artificial intelligence.
CNBC: AI poses human extinction risk on par with nuclear war, Sam Altman and other tech leaders warn. Sam Altman, CEO of ChatGPT-maker OpenAI, as well as executives from Google’s AI arm DeepMind and Microsoft were among those who supported and signed the short statement.
Wired: Runaway AI Is an Extinction Risk, Experts Warn. A new statement from industry leaders cautions that artificial intelligence poses a threat to humanity on par with nuclear war or a pandemic.
Forbes: Geoff Hinton, AI’s Most Famous Researcher, Warns Of ‘Existential Threat’ From AI. “The alarm bell I’m ringing has to do with the existential threat of them taking control,” Hinton said Wednesday, referring to powerful AI systems. “I used to think it was a long way off, but I now think it's serious and fairly close.”
The Guardian: Risk of extinction by AI should be global priority, say experts. Hundreds of tech leaders call for world to treat AI as danger on par with pandemics and nuclear war.
The Associated Press: Artificial intelligence raises risk of extinction, experts say in new warning. Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Al Jazeera: Does artificial intelligence pose the risk of human extinction? Tech industry leaders issue a warning as governments consider how to regulate AI without stifling innovation.
The Atlantic: We're Underestimating the Risk of Human Extinction. An Oxford philosopher argues that we are not adequately accounting for technology's risks—but his solution to the problem is not for Luddites.
Sky News: AI is similar extinction risk as nuclear war and pandemics, say industry experts. The warning comes after the likes of Elon Musk and Prime Minister Rishi Sunak also sounded significant notes of caution about AI in recent months.
80,000 Hours: The Case for Reducing Existential Risk. Concerns of human extinction have started a new movement working to safeguard civilisation, which has been joined by Stephen Hawking, Max Tegmark, and new institutes founded by researchers at Cambridge, MIT, Oxford, and elsewhere.
The Washington Post: AI poses ‘risk of extinction’ on par with nukes, tech leaders say. Dozens of tech executives and researchers signed a new statement on AI risks, but their companies are still pushing the technology.
TechCrunch: OpenAI’s Altman and other AI giants back warning of advanced AI as ‘extinction’ risk. In a Twitter thread accompanying the launch of the statement, CAIS director Dan Hendrycks expands on the aforementioned statement, naming “systemic bias, misinformation, malicious use, cyberattacks, and weaponization” as examples of “important and urgent risks from AI — not simply risk of extinction.”