It seems the intellectuals who dominate academia, publishing, and media companies are aquiver of late with fascination over ChatGPT. Some are shocked to fully discern, perhaps for the first time, the ruthless waves of disruption artificial intelligence will unleash on society.
ChatGPT is a conversational AI engine (AKA a “chatbot”) released for consumer experimentation by OpenAI. ChatGPT shows a groundbreaking capacity to organize publicly available information (think Wikipedia) into well-structured essays and thoughtful analyses. It can write about virtually any topic and imitate virtually any writing style. In other words, it is the most convincing journalism / creative writing / research robot produced to date.
During the Clinton and Bush epochs, the intelligentsia could be accused of a certain indifference to the plight of semi-skilled workers as automation (and offshoring) disrupted manufacturing industries. Now they fear that even their rarefied stature as “knowledge workers” may be rendered obsolete as AI disrupts the creative industries.
Educators are panicked that technology-enabled cheating has crossed a tipping point. The post-truth era – where memes supplant objective data – could soon evolve into a post-provenance future where authorship and copyright no longer anchor the written word.
The tech industry, as usual, sees nothing but limitless opportunity.
OpenAI is a venture-backed company in which Microsoft recently invested $10 billion. That investment, and viral interest in ChatGPT, shifted an existing AI arms race into overdrive. Microsoft’s stock spiked 12% when the investment became public, adding roughly $250 billion to the company’s market cap.
Microsoft says it will build GPT-4, the generative language model behind ChatGPT, into its search, browsing, and business software. CEO Satya Nadella claims the technology will “reshape pretty much every software category that we know.”
Pundits are gushing with stories about ChatGPT and Microsoft’s integration of GPT-4 with its Bing search engine. These range from lavish praise for the scope of ChatGPT’s research capabilities to the creepy ways the Bing integration fails the Turing test for sentience.
These accounts seem to be premised on the idea that generative AI has something akin to a human brain. That’s understandable, since Microsoft and Google (which has a chatbot named Bard) are selling a future where search engines could synthesize information about products on the internet with your finances, your calendar, and other personal data. But ChatGPT and Bard weren’t built to mimic sentience.
Every time the tech industry aligns around a “new new thing,” analysis is required to separate real implications from reflexive hype. Gartner Group, a premier tech consulting firm, codified the adoption process for new technologies into a famous graphical representation called the “hype cycle.”
Gartner assigns each technology to one of five phases. First is the “innovation trigger,” where scientific breakthroughs spawn a race to commercialization. Second is the “peak of inflated expectations,” where publicity exceeds the utility experienced by early adopters. Then comes the “trough of disillusionment,” where experimental adoption wanes. Next is the “slope of enlightenment,” where adoption accelerates again as second- and third-generation products demonstrate reliable benefits. Last is the “plateau of productivity,” where mass adoption kicks in because the efficacy and economics are clear.
In 2022, Gartner said generative AI was just entering peak hype, putting it 2-5 years from mass adoption. Natural language processing (NLP), a related branch of AI that interprets language in context, had already passed peak hype and stalled in the trough of disillusionment. Gartner puts NLP, which is closer to the information-synthesis vaporware big tech is describing, 10 years from mass adoption. In comparison, computer vision AI (like the facial recognition deployed in China) is already climbing the slope of enlightenment and should hit mass adoption in less than 2 years.
Artificial general intelligence (AGI) is the more accurate label for a sentient machine: software that can continually shift its analytical focus across disparate tasks of its own accord, without reprogramming. Gartner says AGI is still in the innovation trigger phase, waiting for a scientific breakthrough.
As described by technologist Ray Kurzweil in his magnum opus The Singularity Is Near, a successful AGI could propel machine intelligence toward a vastly increased scale that quickly outstrips humanity’s collective intelligence. Contrary to the notions of rogue AGI that Hollywood fixates upon, Kurzweil sees this as an evolutionary progression; he believes the Singularity is the point where human and machine intelligence merge (the 2014 film Transcendence draws on Kurzweil’s ideas).
AGI’s potentially explosive scalability is enabled by the steady march of Moore’s Law. Gordon Moore’s famous observation that transistor counts double with each new generation of processors explains how IT disruption is yoked to the exponential expansion of computing power.
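To make the doubling concrete, here is a minimal back-of-the-envelope sketch; the two-year doubling interval is a common reading of Moore’s Law, assumed here for illustration rather than taken from this column:

# Back-of-the-envelope: how transistor doubling compounds over time.
# The two-year doubling interval is an illustrative assumption.
DOUBLING_INTERVAL_YEARS = 2

def transistor_growth_factor(years: float) -> float:
    """Cumulative growth factor after `years` of steady doubling."""
    return 2 ** (years / DOUBLING_INTERVAL_YEARS)

for years in (10, 30, 50):
    print(f"After {years} years: roughly {transistor_growth_factor(years):,.0f}x the transistors")
# Prints roughly 32x, 32,768x, and 33,554,432x respectively.

Fifty years of steady doubling works out to a roughly 33-million-fold increase; that is the kind of scale the techno-optimists are extrapolating.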
Prophets like Kurzweil and Yuval Noah Harari argue that exponential growth will allow humanity to disrupt its way into a future of material abundance, largely devoid of disease, want, and misery. However, there are several serious obstacles to these visions of exponential idyll.
First, as climate gadfly Vaclav Smil points out, none of the other fundamental technologies upon which civilization depends – agriculture, energy, and transportation – has experienced the deflationary economics that ever-increasing microprocessor capacity granted to the info-tech industries.
Microchip capacity has increased about 35% annually since 1970. In contrast, the first two decades of the 21st century saw Asian rice yields increase by 1% annually, while yields of sorghum in Africa grew by 0.8% annually. Fertilizers and pesticides, starting in the 1960s, pushed US corn yields upward roughly 0.5% annually.
The efficiency of large steam turbines, which produce most of the world’s electricity, has only increased 1.5% annually over the last century. Electricity used to produce steel has declined 2% annually in the past 70 years. Battery energy density has improved 2% annually since 1900.
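Compounded over decades, those percentages diverge enormously. A quick sketch makes the gulf plain; the growth rates are the ones cited above, while the uniform 50-year horizon is an illustrative assumption of mine:

# Rough comparison of how the growth rates cited above compound over time.
# Rates come from the column; the uniform 50-year horizon is illustrative.
growth_rates = {
    "Microchip capacity": 0.35,
    "Asian rice yields": 0.01,
    "US corn yields": 0.005,
    "Steam turbine efficiency": 0.015,
    "Battery energy density": 0.02,
}

YEARS = 50
for name, rate in growth_rates.items():
    factor = (1 + rate) ** YEARS
    print(f"{name}: {rate:.1%} per year -> about {factor:,.1f}x after {YEARS} years")
# Microchip capacity compounds to a roughly 3-million-fold increase;
# everything else lands between about 1.3x and 2.7x.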
Even breakthroughs in solar cell manufacturing, where semiconductor economics play a more disruptive role, don’t mimic the exponential trajectory of Moore’s Law, because photovoltaic cells account for only about 15% of the total cost of a residential PV system; even free cells would cut the installed price by no more than 15%.
The second impediment to the innovation nirvana that technology companies and visionaries extol is a more insidious foe: human nature. Innovators always claim their creations are destined to “make the world a better place.” Some do. But many disruptive technologies end up becoming tools of tyranny instead. More on that subject in a future missive.