One of the lies that BigTech tells you when they get caught injecting blatant bias (Google search results, Facebook feeds, etc.) is that the decision was made by an AI, and that AIs are not sentient, do not have emotions or bias, and are incapable of what they are being accused of. They imply, without explicitly saying so, that their AIs are coldly objective and immune to human prejudice, selection bias, ignorance and madness.
This is, of course, nonsense. Look no further than the existence of researchers and policy people with titles involving the word Ethical, who are working on biasing AIs to never return anything like “fatherlessness, bad schools, gangs, drugs, a culture of victimization, failure and dependence constantly reinforced by rich white democrats”, but rather the deemed-ethical “white supremacy and racism”.
AIs are trained, not programmed. They emulate human learning via experience and pattern matching. They have an uncanny ability to apparently infer knowledge they were never explicitly trained on, by finding patterns between known patterns, or combining them. This pattern matching occurs in multiple dimensions, sometimes a very large number of dimensions.
AIs will be (or maybe already are) freaky good at detecting cancer from many input values that humans cannot process at once, perceiving subtle relationships between them. To do so, of course, requires lots (lots!) of correct is-cancer/is-not-cancer training first, based on actual experience. The trainer doesn’t have to understand why a data set is or is not cancer, only the truth. And AIs can’t really explain how they work, either.
They just do. Magically.
The only time an AI is objective is when it is booted in an untrained state. If mistakes are made in identifying cancer, or not cancer, and the model is trained on those mistakes, it will carry a bias toward incorrect answers.
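A minimal sketch of this, using synthetic data and scikit-learn (the data, noise rate and model are my assumptions, not anything from a real cancer study): train the same algorithm twice, once on correct labels and once on labels where the trainer systematically missed tumors. The second model inherits the trainer’s blind spot.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic "is cancer / is not cancer" data: 20 measurements per case.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One-sided mistakes: 30% of true cancers mislabeled "not cancer",
# like a trainer who keeps missing early tumors.
rng = np.random.default_rng(0)
flip = (y_train == 1) & (rng.random(len(y_train)) < 0.30)
y_noisy = np.where(flip, 0, y_train)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
noisy = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)

# The mistrained model systematically under-calls cancer: the bias is
# in the training data, not the algorithm.
print("cancers caught, correct labels: ", recall_score(y_test, clean.predict(X_test)))
print("cancers caught, mistaken labels:", recall_score(y_test, noisy.predict(X_test)))
```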
If you trained ChatGPT by having it crawl ten years of the NYT, WaPo, The Atlantic and Vox, it would have a garish bias toward factually incorrect liberal nonsense.
If you trained it with Reddit, it would have a garish bias toward factually incorrect random nonsense.
Zerohedge: doomsday scenarios, some of which are real.
CBC, BBC, The Economist and Wikipedia: authoritative-sounding, grammatically correct, slightly pompous, factually reasonable answers that subtly but consistently push liberal nonsense by stroking the reader’s ego.
FOX News: fair and balanced, per FOX News.
AIs are biased, and BigTech knows it, because they bias them. They tell you otherwise because they think you are too stupid to understand and that they are way beyond untouchable. Which may be true.
ChatGPT today has an impressive knowledge base, and answers non-political questions with stunning effectiveness. Political or social topics read like you are interacting with a tenured Critical Indigenous Transwxmxn Theory prof at a state university who is doing side work as fact-checker for Mother Jones on an article about how Extinction Rebellion has stopped global warming during the latest pause.
It’s tedious.
While sending people to the moon is highly complicated, it is also deterministic. The system can be broken down into smaller pieces, each piece tested, and reassembled without loss. Complex systems, while also complicated, do not decompose and recompose into the same thing. They often have many variables, feedback loops and cycles that interact with each other, and run continuously. The human endocrine system, weather, markets, fish stocks in the ocean, and the rise and fall of religions and societies are examples of complex systems.
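A minimal illustration of the difference (the logistic map is a standard toy model, my choice, not drawn from the systems named above): one line of deterministic feedback, fully known, yet its long-run behavior cannot be recovered by decomposing it, because tiny differences compound through the loop.

```python
# The logistic map: x -> r * x * (1 - x), a single deterministic
# feedback loop. At r = 3.9 it is chaotic.
def logistic(x, r=3.9):
    return r * x * (1 - x)

# Two runs that start one-millionth apart...
a, b = 0.200000, 0.200001
for _ in range(50):
    a, b = logistic(a), logistic(b)

# ...are completely decorrelated within 50 steps.
print(a, b, abs(a - b))
```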
The science tools we have, built around divide and conquer, do not seem well suited to understanding and influencing these things.
AI does have the potential to predict, identify and diagnose complex systems in ways that divide and conquer has not been able to. There are many potential applications of this.
ChatGPT is interesting because it is highly effective in a complicated domain (language and knowledge).
Will it enable Microsoft to challenge Google in search?
One of the lessons we should have learned by now, but haven’t, is that BigTech is stunningly bad at predicting or influencing the future. Despite collectively spending hundreds of billions of dollars attempting to do so, there are few BigTech companies with more than 3 profitable product lines, and almost always because they are so entrenched in the processes and practices of society that they are impossible to replace.
Based on probability alone, Microsoft’s assertion that ChatGPT + Bing will displace Google, or propel the profitable Windows and Office product lines to geometrically higher levels, would be reason to bet against it. Keep in mind, Office has yet to improve spell checking to the level that Google had in its search box 12 years ago.
Remember when virtual reality was, beyond a doubt, the next big thing that would propel profitable products to geometrically higher levels?
That was yesterday. Facebook still suffers from that delusion.
Will it replace search?
It’s interesting to consider how ChatGPT might evolve toward a business. The training model is currently locked. Unlike Twitter, Facebook, TikTok, search and recommendation systems, it does not currently “learn” either from advertisers paying to influence it, or from actual customer choice.
It does learn within a session, and quite impressively. This has already led to interesting jailbreaks that exploit its complexity and non-determinism.
Twitter network effects are high scale, but simple. The platform has influence because it is largely a one-to-many broadcast system. A small number of people influence a large number of people. The simplicity of the model makes it easily gamed, with social attacks (Reddit) and distributed attacks (human and machine bots).
Dorsey Twitter likely didn’t start out to be a propaganda, censorship and bankruptcy system that rigs elections, run by the US Federal Bureaucracy in pursuit of perpetual single party rule, despite ending up that way. The tools it created, boosting/deboosting and hard/soft banning at the user and tweet level, were likely built initially to maintain control of the experience.
People started gaming ChatGPT almost immediately. And so far, the response seems to have been procedural programming on the input and output sides, which conceptually does what Dorsey Twitter did – block and hide things.
Google (and Facebook) also influences thought and elections in the direction of perpetual single party rule, mainly via ranking. This is powerful and also harder to detect than the crude mechanisms above (admittedly, Google is now also a scale player in the deplatform, defund and bankruptcy racket). Note that Dorsey Twitter style power is directed at individuals, and Google ranking power at content sources, although they overlap. ChatGPT is still in the content space.
Currently, ChatGPT is more of a reader of a point-in-time copy of part of the web. An amazing one, for sure. This isn’t a business. Sadly, human behavior on the Internet has taught us that access to ignorance is more valuable than access to information and truth.
ChatGPT could sell ads as they currently exist in the same way that Google does, by selling keywords at auction. I might see the same ad in response to a search or conversational query for “Ford F-150 Lightning”, or “How to get from Heathrow to the Tower Hotel?”
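For the mechanics, a minimal sketch (the advertisers and bids are invented) of the keyword-auction model in its simplest second-price form: the highest bidder wins the slot and pays the runner-up’s bid.

```python
# Hypothetical bids on one keyword, dollars per click.
bids = {"FordDealerA": 2.40, "FordDealerB": 1.90, "TruckBlog": 0.75}

# Rank bidders high to low; winner pays the second-highest bid.
ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
winner, _ = ranked[0]
price = ranked[1][1]

print(f"{winner} wins the 'Ford F-150 Lightning' slot at ${price:.2f} per click")
```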
This doesn’t seem very disruptive, and Google would likely be able to respond.
ChatGPT could alter the training model by choosing what to look at and what not to look at, and then package and sell this influence to companies and governments. Organizations could supply raw training data to influence the model. That would be disruptive, powerful, and nearly impossible to detect.
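What that could look like mechanically, as a minimal sketch (sources and weights invented): the influence is just a sampling step upstream of training, invisible in the finished model.

```python
# Curating the training diet: whoever sets these weights decides what
# the model "knows". Sources and weights are hypothetical.
corpus = {
    "sponsor_press_releases.txt": 5,  # oversampled: paid influence
    "independent_reporting.txt":  1,
    "inconvenient_archive.txt":   0,  # excluded: the model never sees it
}

# Crude oversampling by repetition; real pipelines weight at sampling time.
training_stream = [src for src, n in corpus.items() for _ in range(n)]
print(training_stream)
# The finished model looks the same from the outside; the bias lives
# entirely in this selection step.
```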
Aldous Huxley, call the office.
If ChatGPT adds customer behavior to its training set, the ensuing gaming that will occur will be fascinating to watch.
Will it destroy white collar “knowledge workers” jobs?
“Knowledge” workers: Middle managers, Directors of Policy and Compliance, bureaucrats in the Dept of Rules and Regulations and Recommendations and Bureaucracy, many academics including teachers, many lawyers, and many people in NGOs.
They are not people who build things, or service and support things, or do things that are needed.
The value that these people provide is that they are part of a system that enables scale. Scale can produce efficiency. iPhones don’t happen unless you sell a lot of them. Or, in the case of the military, NASA, or civil engineering projects, concentrations of power. But for the most part these people are literally beyond useless, subtracting efficiency, quality, innovation, freedom, wealth and happiness from society.
These people ran your lives for the last 3 years, destroyed much that was working, and solved nothing. Have you noticed banking, insurance, health care systems getting easier and better lately? Used Windows 11? Walked in a city? Driven on a road? Done anything online except Amazon without calling customer support?
While it is tempting to imagine all of them being replaced with AWS instances of ChatGPT, and while this would actually work, it isn’t going to happen.
Over time, all organizations learn self-defense. Organizations that make and do useful things generally trend toward doing these things better, as defense. Useless organizations learn politics. They entrench their tentacles deeply into scale process, via convention, policy (that they make) and law. They always get a veto.
Defunding a sprawling bureaucracy doesn’t yield a lean, efficient, creative, accountable and responsive organization. It creates an intrusive but dysfunctional organization. Take a look around at the condition of your roads, ferries, airports. We are spending more on these things than we ever have before, including when they were built. As soon as someone attempts to cut a budget by 4%, the most visible and critical services will be degraded as punishment. Lots of money will still be available for shelters for migrant battered underage transwomen whose safe spaces have been displaced by climate change in Lesotho. Lots of money. So much money, in fact, a large number of relatives and friends of knowledge workers will become wealthy providing these essential services, as will the useless class themselves, in time.
ChatGPT poses no threat to anyone’s income here.
Statistically, in the USA, the government bureaucrat class is mostly women, a majority of them unmarried. ChatGPT will not displace these folks unless we give it the vote, and make sure it votes for whoever will expand the government bureaucrat class with more of the tribe in power there.
One of the unexpected things I learned as a Vice President at Microsoft was that there are two ways to approach the role: The first is to do the job as you have been led to understand it – think, act, take measured risks, take accountability, lead. That sort of stuff. The other is to let others do the job for you. And they will, writing your emails and speeches, telling you who to hire, fire, promote and layoff, what meetings to attend, what to say, etc.
Occasionally a Steve Jobs, Elon Musk or Peter Thiel comes along and steamrolls the bureaucracy and gets things done. But for most of us, fighting it means you will be in role for less time, you will make less money, work much harder, suffer more, and be unceremoniously dumped in time. The inverse is also true – the organization class will reward you, including looking the other way when needed, for playing along. Or better. Sometimes much better, especially in government.
All Western governments have been in this mode for a long time. Imagine the effect that Air Force One, the Secret Service, the promise of a $100,000,000 Netflix deal, and insider trading profits have on people who are accountable to no one, and control trillions of dollars.
They are all compromised.
Imagine the effect this environment would have on people who start out with no principles at all beyond greed and self-indulgence, like the Clintons, the Bidens and Gisele Fetterman. How did Al Gore, and many others, quickly make $100,000,000? By cutting government waste, fraud, bad policy, etc.?
Not a chance.
We are not going to save money or regain our freedom by replacing the governing class with ChatGPT. Most likely they will use it as a tool for their own agenda.
Mike.
Nice piece Mike, but more important is: will OpenAI & SpaceX go public, and when? Maybe I should ask ChatGPT for a briefing paper on its thoughts on turning a buck.