Much of my writing in the last few weeks has looked at AI and knowledge graphs, about which I will have far more to say in time. However, I wanted to shift gears here and explore a different question: How big an impact will generative AI really have on society? I’m deliberately using a very large brush here, treating knowledge graphs as AI alongside Large Language Models and Generative AI. The short answer is that it will certainly have impacts, but these should be seen as part of a broader evolution that’s been ongoing for a while.
This is definitely a tl;dr article. In some respects it’s more an outline for a book than an article, and I apologize for its length, but I see it as laying the foundation for a much deeper exploration of this topic. It is not meant to be comprehensive – I am fighting the urge to add a discussion of the Services sector, and have only just begun talking about HR and Recruiting in the era of Chat.
Let’s address each sector individually.
Generative Art Will Become a Creative Tool, Not a Threat
I recently went to an art gallery here in Seattle, the Frye Museum, which has hosted a number of exhibitions I’ve truly enjoyed. This visit, featuring a contemporary artist recommended by one of my kids’ art teachers, was not among them. I found the work crude, amateurish, pretentious, and a waste of canvas (I liked the work of the teacher, who was coincidentally also featured there, considerably more).
A few weeks later, I was at the art show of a science fiction convention, where the work sold for considerably less but showed far greater craft, composition, design, and thoughtfulness. Generative art was displayed there (if not heavily), and the work, in general, was surprisingly original, with AI used not so much as a crutch as a medium in its own right. If anything, the artists went out of their way to avoid one of the biggest objections to AI art – the fact that it builds on copyright-protected prior work and photographs.
Having spent a few months experimenting with various forms of generative artwork, I’d argue that this last contention will likely not hold for long. Already (in a surprisingly short time), you’re seeing the emergence of “ethically sourced” content, where the content negotiations (and presumably compensation) have already taken place. Recent privacy rulings out of the European Union have also hinted at the need to label content from non-original sources as AI-based. Finally, tools for creating diffusion models and even LLMs are getting into the hands of artists, many of whom are using them to create sanctioned collections that invite others to create art in their style.
I’ve also found that creating good diffusion art is far more than just having a good prompt. I do a fair amount of 3D modeling work as well and have found that it is often easier to model the content in 3D first and then use diffusion art to enhance an already mostly complete picture, much the same way a painter may use an airbrush and then go back in with a manual brush to paint in the details. For what it’s worth, I’ve also painted with an airbrush and been told by the artistes of the time (mostly critics who themselves didn’t paint) that it wasn’t art. This has been going on for centuries.
The same argument can be made for writers. The least desired job in writing is the copy editor, the poor person who has to take a writer’s work, fix the misspellings and obvious grammatical mistakes, clean up verbose prose, and otherwise do the job the writer should have done. When that writer’s work is otherwise brilliant, such efforts are worth it, but honestly, as an editor, if I come across a piece with too many blatant errors, I won’t accept it, because this should be the writer’s job.
Ah, but what about the creation of wholesale content through AI? Turn on ChatGPT and generate a blog post or piece of marketing literature – won’t that destroy those jobs? No. Bloggers are essayists – they want to communicate – and while they may use ChatGPT as a prompting mechanism, they already have a clear message that they want to get across, and no generative text system is going to nail that message by itself. In effect, bloggers use ChatGPT in much the same way they would use an editor – to suggest alternative phrasing or even different plot approaches. Bloggers are using AI to augment their skills and sharpen their unique voices in less time. This can free up the editor to spend more time developing other talented writers, which is, in many respects, what they should be doing.
As to the copywriters, let’s be perfectly honest – if copy editors are underappreciated, copywriters are practically pariahs. Corporate copy is formulaic, rendered as inoffensive as possible, devoid of controversy, and delivered primarily to sell advertising. You can go broke quickly trying to make a living as a copywriter. Copywriters are the white-collar equivalents of coal miners – trotted out whenever someone wants to showcase how awful technology is at taking away work, yet paid pennies in a field with little respect or protection.
Marketing material is a perfect place for AI to be used. When I was the Community Editor for Data Science Central, I received at least ten such pieces a week, most of which did little to elucidate the topic at hand. They were used for padding – if I hadn’t reached my quota of original content that week, I might use one or two, but usually, they got pushed to the bottom of the queue or not used at all.
All Creatives are Becoming Storytellers, Some Better Than Others
This point was brought up to me recently by a friend, but I argued that it was actually a variation on the previous point. The claim: everyone will soon be producing their own cartoon animations or live-action movies with professional-looking effects, and the result will flood the market.
The counter to this should be obvious – storytelling is hard. If storytelling were easy, then every movie would be a guaranteed blockbuster and every novel a bestseller. Every so often, a writer will manage to hit it big with their first book or script, but the reality is that writing is a learned skill, and it involves understanding a great number of things that are difficult to quantify, let alone model. It’s, of course, possible (and indeed likely) that we will get to the point where a computer will be able to write an emotionally impactful work, but humor, irony, and even surprise are challenging to achieve because they involve self-awareness, a trait that to date no computer has achieved, no matter how advanced.
This does not mean that you cannot use AI to accelerate the development of stories, even those involving complex animation, towards a particular production goal. Most tools hitting shelves today have existed at the SFX level in studios for years; most people simply have not had the high-end graphics cards or the memory necessary to use them. Home GPUs at the upper end are pushing 32GB of high-speed RAM. Studios, on the other hand, are working with GPUs such as the nVidia Volta, which has 640 Tensor Cores and is capable of 125 teraflops (125,000 billion floating point operations a second), well beyond even the most ardent gamer’s machine – and in many cases they are running whole server farms of these higher-end systems.
What I see happening is that the shape of the metaverse is finally beginning to come into view. I see AR and VR (increasingly being combined into XR – extended reality) reaching a level of feasibility within the next decade, with the ability to create local avatars and environments becoming commonplace not just in high-end systems but in mobile devices and similar endpoints. Avatars have some serious rendering and networking challenges to overcome first. I see generative AI as one critical system among many to make such avatars feasible.
Synthetic Programming, Not Reductive, Will Predominate
Again, while this one has some truth, it’s also reasonably hyperbolic. ChatGPT, even in its 4.0 incarnation, is not going to replace programming skills. It will augment them, but in this sense, it’s not much different from most copy/paste-type sites on the web. I do think there is a place for ChatGPT in the production of documentation, which is difficult and tedious to write and goes out of date very quickly, especially in continuous integration environments.
Where the game changes is with AutoGPT and similar environments, which will likely have a very broad effect on traditional programmers and data scientists. AutoGPT goes one step further than ChatGPT in that it automates DevOps. For instance, let’s say you want to set up a server application running Node.js that queries multiple APIs and processes the resulting feeds into a tangible output. ChatGPT describes the process. AutoGPT installs Node.js if it isn’t already installed, installs the requisite packages, runs the server, makes the requisite server connections, melds the resulting feeds, and either persists or returns the relevant output. In essence, it builds the application that you describe.
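To make that concrete, here is a minimal sketch of the kind of service being described – an Express server that fans out to a couple of upstream APIs and merges the feeds into one response. The endpoint URLs and package choices are illustrative assumptions on my part, not something AutoGPT specifically produces.

```typescript
// A minimal sketch (assumed stack: Node 18+, Express 4, TypeScript).
// An Express server fans out to two placeholder upstream APIs, merges the
// resulting feeds, and returns one combined JSON output.
import express from "express";

const app = express();

// Illustrative upstream feeds only – not real endpoints.
const FEED_URLS = [
  "https://api.example.com/news",
  "https://api.example.org/markets",
];

app.get("/digest", async (_req, res) => {
  try {
    // Query all upstream APIs in parallel (fetch is built into Node 18+).
    const responses = await Promise.all(FEED_URLS.map((url) => fetch(url)));
    const feeds = await Promise.all(responses.map((r) => r.json()));

    // Meld the feeds into a single tangible output.
    res.json({ generatedAt: new Date().toISOString(), feeds });
  } catch (err) {
    res.status(502).json({ error: String(err) });
  }
});

app.listen(3000, () => console.log("digest service listening on port 3000"));
```

Writing this by hand is trivial for a developer; the point is that a description at roughly the level of the comment at the top is all AutoGPT-style tooling needs in order to generate, install, and run something like it.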
AutoGPT is the mutant stepchild of ChatGPT and DevOps. It may not get you 100% of the way there, but it will get you 95+%, which is a lot of no-longer-billable programmer hours. Machine learning is similar, save that it is MLOps rather than DevOps that is being automated. Note that even before ChatGPT, the industry had been moving in this direction. Once you get to a primarily service-oriented, cloud-based system, the tools necessary to build the bulk of the plumbing for such operations are already well-known. Tools such as Maven, Gradle, or Jenkins have reached the point where configuration and maintenance are largely automated.
AutoGPT does not necessarily innovate on new algorithms. However, there are other efforts aimed at that, and most programming being done today does not involve novel algorithmic development anyway – it is primarily DevOps and configuration management.
I think this is the bet that a lot of larger tech companies are making – that they will need a far smaller staff of programmers and data scientists, while keeping their DevOps/MLOps teams in place until they can be confident the bet was right.
If you look at the history of programming, this has been the expected end-point. Programming goes in waves: when the complexity of a given set of technologies reaches a stage where it is ripe for disruption, a new technology emerges to solve the problem. This time around, we need prompt engineering as a specialized skill, but far fewer Java engineers and HTML designers. There may be another round or two of this, but self-building systems are already here, and self-healing systems are getting much closer.
Put another way, specialized skills will be needed periodically, but traditional skills will be needed less and less. This translates to spot shortages of certain skills until enough people can gain proficiency, but less and less of a guarantee that IT will remain a lucrative arena.
One potential exception to this – I’ve felt for a while that the bulk of programming and data science positions will shift away from dedicated IT professionals and towards subject matter experts, as the toolsets become easy enough for them to master. Programming and data science are both moving towards their augmentative phases, where you still need subject matter expertise and some basic technical skills, but you don’t need to be an IT professional to use the tools.
It should be noted that this impacts support roles – management, testing, training, etc. IT has been shrinking as a percentage of the workforce for a few decades, and that process will continue over time. Having said that, the cyclical nature of project development likely means that periodic retraining is a requirement for the profession, as it has been for a while. We’re in the Drain and Dry part of the cycle right now, but the Wash cycle will restart soon enough with a new load of problems to be laundered. Since I’m not yet convinced that innovation is part of the AI detergent this time around, I suspect that as a whole, the industry will survive.
What we call programming is changing. LLMs and similar tools are grown rather than built, and it is likely that the programmers of 2030 will be used to a far more organic form of “programming”. The need for specialized expertise will remain, however.
To Know Is No Longer Enough, It’s What You Do With That Knowledge That Makes a Difference
This one is a little harder to call. ChatGPT 4 is scoring better on tests (not surprising), including tests of empathy (which is more surprising), in areas such as medicine. Does that mean that doctors are going away? No, I don’t believe that they are, though the medical research function of doctors is shifting.
This brings up an important point. One thing that programmers, engineers, and doctors have in common is that part of their job is diagnostic – figuring out what problem needs to be solved in the first place. Diagnosis is an important role, but not the only one or even the most important: that may be the executive role, where a doctor recommends and authorizes a specific course of treatment.
This is where liability comes into play, and AI, in general, should not be involved here beyond perhaps helping to formulate the recommendation. This is the point at which a doctor is putting their reputation on the line and where they can be sued for malpractice. That decision should never be made by a machine, at least at this stage of AI’s development – someone has to be seen as responsible when things go wrong, from a legal standpoint if nothing else.
Liability will be one of the strongest determinants of how much AI is likely to affect a profession. You can simulate liability situations with AIs readily enough, but I do not see society giving anyone the ability to abnegate their liability obligations simply because they did what the AI recommended – not for a long time, anyway (and I am not sure I want to live in a society where that is the case).
A question related to this is the degree to which we anthropomorphize our AI. I can readily see the rise of companion apps – applications with real-time visual effects that simulate anthropomorphic entities. I can see AI companions acting in the guise of accountants, lawyers, fitness gurus, fashionistas, and other advisors. I can also see people using the cover of such companions and advisors to commit fraud on a massive scale, ultimately creating a crisis of trust.
We may have a problem here, Houston.
I found it ironic that so much of the immediate knee-jerk reaction to the Metaverse was to figure out how to create anonymous transactions through blockchains so that (the right) people could make money. In my thinking, currency and digital payments are quite far up the Metaverse stack. There are far more significant issues to resolve first. One of them is the inverse of decentralized finance – how do you ensure that the person you are interacting with is the person you think you’re interacting with? In other words, how do you ensure that identity is accurate and easily determined?
As with providing ways to compensate those who provide content, identity management will play a huge part in the overall evolution of avatars, agents, and companions. You’re going to hear a lot about these three terms. An avatar is the face (or faces) you present to the outside world. Avatars are masks, but those masks, in general, need to tie into identity management so that people can trust you are who you say you are, and that you are not an AI bot in those situations where it matters. Agents are entities (again, people or bots) that are authorized to act on your behalf. A companion is a specialized form of AI agent that acts as an advisor, works on your behalf, and can usually be considered a Metaversal spirit animal.
End of Traditional Journalism, Rise of Knowledge Syndicates
I have, over the years, worked as a managing editor for a number of different journalistic outlets. We’re about to see the end of a model of journalism that has held sway for about twenty-five years. In 1985, print was king, though even then consolidation was happening in the industry as the cost of growing beyond a certain size made producing newspapers and magazines too expensive for their utility. By 2015, the model had changed to one in which websites aggregated content, paying for the costs involved by posting banner ads and videos. Social media such as Twitter and Facebook then served as word of mouth for these sites, in effect becoming the index to that content.
Large Language Models (LLMs) came into focus in 2022–23, in great part by (arguably illegally) hoovering up sites to gain content, then providing a generative model for summarizing and repackaging this content through their interfaces. I expect that regulation and high-profile lawsuits will end the indiscriminate plundering of content, but they will also force an answer to the question, “Who owns the content, and what does that ownership imply as far as rights are concerned?”
I suspect that one of the new jobs to emerge over the next few years is LLM Content Editor. The role is simple in theory but fairly complex in practice – shape the content used to feed a large language model. I can see this role managing the curated content that represents current news in different fields, depending upon the domain of the LLM.
It’s hard to tell at this stage the extent to which LLMs will replace websites as primary endpoints of news, but what I do see happening is news sites entering into syndication arrangements with LLMs for secondary print rights, and the LLMs in turn creating an ecosystem of bloggers, journos, pod- and vidcasters, explainers, and educators. Some of those will be synthetic – AIs applying common journalistic conventions to describe predictable behaviors such as market moves or sports stories; some will be live reports, possibly partially synthesized with supporting video and audio. Some may be drone journalism, with live operators controlling camera and audio drones.
Syndication may also strengthen the hand of some journalists, who can set up direct agreements to be syndicated across multiple LLMs. The danger is that this may also give more power to analysts or commentators with specific agendas, make journalism harder to police than it was before, and increase the potential for consolidation within the sector.
So, overall, I see individual journalists strengthening even as existing news organizations weaken, but it is also likely that news will change shape dramatically over the next decade as we shift to a more LLM-oriented model.
Managers, Sales, and Recruiting
The traditional corporate manager is dead, and has been for a while. The final nail in the coffin was the pandemic, in which managing by headcount in the back office died with the need to work from home. Managers who could not make the switch either retired or were culled when the post-pandemic recession finally hit. Despite corporate leaders proclaiming that everyone who refuses to come back to the office will be fired, the reality is that the illusion that the 9-5 commute was the only (or even the best) model for working has been irrevocably shattered. The managers who are left are much more likely to prefer a remote workforce, or at least to work remotely themselves.
Generative AI will have at best an indirect impact on business managers. It will give a small boost to productivity, but probably more to quality than quantity. It may reduce the time to deploy solutions by a few weeks with smaller dev teams, though again, it won’t necessarily have radical impacts on this as business processes rather than technical ones still account for the largest delays. If the principal role of the manager is managing people, AI will cause minimal change.
Sales may be a different issue. The sales manager role has been fading into the marketing role in organizations for a while, and I can see LLMs and generative AI bringing the role to an end by the mid-2030s, especially once you see generative AI and AutoGPT combine with 3D printing. The dealership model for vehicles is under assault now, and mass customization and autonomous manufacturing will likely upend the traditional new car market. While the limitations of the Amazon model have been discovered in the wake of the pandemic, it will continue to obliterate the brick-and-mortar approach to sales.
Recruiting is facing a similar race to the bottom, as companies dramatically trim their payrolls, ironically dumping many of the same people they were clamoring to get at any cost a year prior. My suspicion is that the belief that AI will make these people irrelevant (along with the clamor of shareholders as strong dividends drop while the economy normalizes after the pandemic) may be stronger in the C-suite than can be justified. At the same time, the discrepancy between posted jobs and filled ones has seldom been higher, and many (especially younger) knowledge workers are taking the leap into entrepreneurship rather than play the games that the high salaries at large corporations usually bring with them.
We are much closer to the beginning of the next wave of the AI revolution than the end of the current one, and while the current wave of ChatGPT-centric companies may be mayflies, it’s not at all difficult to see new game changers coming from the sector as it gains maturity.
AI and Education
In the United States especially, the educational system is under attack at all levels. A resurgence of fundamentalism, correlated with school shootings, has raised questions about safety for students, teachers, and parents alike. Three years of pandemic have accelerated the retirement of older teachers while failing to bring in new teachers to replace them, and students have largely unlearned the practice of being in classrooms. Additionally, a long-term global decline in family size (to well below the replacement rate) means that educational priorities do not have the political clout they probably need.
In this context, it is not surprising that educators see AI as a threat – that certainly seemed to be the knee-jerk reaction of the establishment at first. Schools immediately put up restrictions: ChatGPT could not be used to generate term papers and would be unacceptable in any formal setting. As educators have had a chance to evaluate the technology for themselves, there has been an interesting shift of opinion. Initially (especially with earlier iterations of LLMs), the worry was the high number of “hallucinations” – false information that had made its way into the model or that was derived through false inferencing.
Two things have changed that. First is the open acknowledgment that such models MAY be wrong and as such should always be fact-checked. This explicit honesty about the nature of LLMs differs significantly from much of social media, in which content was allowed primarily because it brought in advertising revenue, regardless of how accurate or biased the information was.
The second thing that changed minds was the suggestion that students show their interview process. Good analysis often comes down to learning how to ask the right questions and how to shape the conversation. ChatGPT was essentially a non-threatening environment for doing this, especially when the people asking the questions were shell-shocked junior high and high school students who had forgotten the art of conversation after three years of being housebound.
One very subtle feature within the reasoning model is the ability to shape responses to the inferred writing level of the querent. This awareness grows through the length of the session, meaning that ChatGPT is able to adjust its output to a level appropriate for discourse while still providing meaningful data. You can see this directly by qualifying your query with expressions such as “Describe climate change at a 3rd grade reading level” or “Assuming you are writing for a post-graduate audience, describe climate change”. This ability to shift cognitive levels is a key tool teachers need in order to communicate effectively.
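For anyone who wants to experiment with this programmatically rather than in the chat window, here is a minimal sketch against the OpenAI chat-completions REST endpoint; the model name and the exact wording of the level-setting instruction are assumptions for illustration.

```typescript
// A minimal sketch: the same question asked at two reading levels via the
// OpenAI chat-completions REST API (model name is an assumption).
const API_URL = "https://api.openai.com/v1/chat/completions";

async function explainAtLevel(topic: string, audience: string): Promise<string> {
  const response = await fetch(API_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4",
      messages: [
        // The qualifier that sets the cognitive level of the answer.
        { role: "system", content: `Answer at a ${audience} reading level.` },
        { role: "user", content: `Describe ${topic}.` },
      ],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}

// Same topic, two very different registers.
explainAtLevel("climate change", "3rd grade").then(console.log);
explainAtLevel("climate change", "post-graduate").then(console.log);
```

The point is less the API than the qualifier: one sentence in the prompt is enough to move the same content between a grade-school explanation and a graduate-level one.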
Additionally, these kinds of systems are not judgmental to anywhere near the same degree that teachers can be. Students on the autism spectrum in particular were often considered not to be as empathic as normal (aka neurotypical) students; that view has changed towards recognizing that most autistic students are actually highly empathic, but struggle with the discrepancy between what their brain is telling them (that this person is or is not trustworthy) and what society tells them (this is a teacher, therefore trustworthy).
ChatGPT, on the other hand, states out of the gate that it doesn’t understand feelings or emotions. It has no agenda. While this last point can (and should) be extensively debated, from the standpoint of an entire generation that has become shy about engaging in human interaction, especially outside of their immediate social group, this is a very desirable position to take.
I talked about companions above but wanted to mention them here too. I believe that education needs to explore LLMs to a degree that it hasn’t yet, just as I believe it needs to be exploring shared environments and synthesist tools. I think digital companions have the potential to completely change the face of education, if we can keep such companions from simply becoming conduits for selling merchandise. I will have much more to say about this in future articles.
AI Will Prove a Mixed Bag for the Investor Class
What do you call the ex-CEO of a blockchain company? Chief AI Officer.
The number of companies springing up around ChatGPT, Stable Diffusion, and related tech would be awe-inspiring, if you didn’t take into account the fact that most of these companies will be gone by winter. Agile gurus who became ICO entrepreneurs are now burnishing their AI credentials, gained primarily by sitting through a few dozen hours of YouTube videos before starting to film their own.
I’ll admit it – I’m doing the same, but I first studied neural networks back when they were considered novel mathematical ideas in the 1980s (a few years before calendars featuring Mandelbrot and Julia sets were the hot gift of the year), and I have been in the knowledge representation space most of my adult life. I EARNED my spurs, dammit!
Does it matter? Not really. Sic transit gloria mundi, and Tuesday as well.
AI will likely not mint a bunch of new billionaires, in large part because we’re at the wrong part of the cycle. The people who will do well from AI were investing in machine learning in 2015 (probably by buying nVidia stock). At this stage, most of the heavy lifting has been done. While there will no doubt be a great deal of optimization in algorithms, transformer models, and novel things like coyotes and other AI animals, the reality is that generative technologies do not typically have all that much investment potential, because the primary audience is producers, not consumers.
Most businesses will benefit from the increased quality of products, but they won’t see that much reduction in time to market, because the primary efficiencies to be gained are not in processing time but in the business decision cycle, design time, testing, marketing, and other soft metrics. Feeding and tuning a model takes about six weeks, but the decision to implement a model takes about six months. That’s almost a universal invariant in business, and it does not depend upon what is being produced.
AI also represents a significant potential risk for organizations. Privacy and copyright concerns hover like storm clouds over the domain in both the EU and the US. The possibility that companies could get swept into politics over their new, shiny AI tools has also made investors a little nervous about investing too heavily in this space.
Another factor that comes into play is that AI, by its very nature, reduces barriers to entry – the same barriers that often allow investors to get decent returns on their investments. Why would a company seek investment if AutoGPT can short-circuit six months of development work down to a few weeks? For that matter, why invest in a company if four others will show up simultaneously with a variant of the same service? With investors nervously watching their banks, and with the cost of borrowing possibly increasing over the next six months, investment spending in this sector is likely to remain anemic for a while.
The Potential for Marginalization Is Higher with AI
This could go either way, to be honest. You are already beginning to see the rise of tiers of balkanization with regard to access to large language models. At least three large-scale financial LLMs are underway that will likely carry four- or five-figure access fees. News organizations, faced with the breakdown of an antiquated advertising model, are balancing the need to generate revenue against not making their product too restrictive. Streaming is proving more difficult to monetize than many had hoped, and this means that the potential for rogue language models that are cheap (or free but of dubious provenance) is quite high.
Taking this discussion full circle, journalism is now facing an existential crisis. There are signs that the website/webpage model that evolved in the 1990s is breaking down. LLMs may shift the dominant model of publication on the web towards syndication, which will cut down on the number of venues available for marginalized communities of interest to communicate their perspectives. This consolidation of voices on the LLM web can only be detrimental in a society where the presses are otherwise effectively held captive by monopolistic interests.
This is the area where I worry most about the impact of Generative AI. It is less about “fake” news, which can generally be detected if the will to look for it exists, and more about the spin and perspectives placed upon legitimate news in ways that preferentially favor one group.
There are other impacts that I see as secondary. The educational system in the US, in particular, is close to collapse at nearly all levels. Demographics, technology, political polarization, the move towards remote interactions, and a fundamental debate about what education looks like when information is nearly free but insight is rare are all creating a perfect storm. AI may be needed to weather that storm, but we need to get beyond the silly perceived dangers of ChatGPT being used to write term papers (of course students will do that!).
Conclusion
Over the course of the next decade, AI will continue to mature, and it will become more heavily integrated into society. At the same time, society will need to deal with the negative aspects of AI – the dark role it could play in propaganda, the fact that it will cause job losses as vulnerable jobs get replaced with automation, the black box nature of the current iterations of that technology, the need to better manage provenance (with knowledge graphs or similar solutions, which I will cover in the next article), and the concerns about social fairness. I don’t doubt these will all be resolved, but it’ll be an interesting time while that’s happening.