Here in the Pacific Northwest, winter days are short and chilly, if not as cold as it is further inland. A day ago, the fir trees were covered with snow, but a brief warm respite before the next incoming storm system has melted much of it. By three in the afternoon, the skies are becoming dark, and getting out of bed in the morning is just that much harder because the sun has not yet risen. The urge to bundle under the covers, snuggle against the cats on our bed, and catch a few extra winks of sleep is strong.
I’ve been focusing this week on Generative AI, including GPT-3.5, Stable Diffusion, OtterAI, ChatAI, Grammarly, and similar tools. I have come to one simple, inexorable conclusion – if you make your living as a writer, artist, programmer, editor, musician or designer, it’s game over. Winter is no longer coming. It’s here.
CNN recently laid off most of its San Francisco staff, including, ironically, its Artificial Intelligence reporter. Ostensibly, it's because of the looming recession. I suspect the real reason may be that AI has become good enough that short, timely pieces can be generated by AI more easily than human staff can write them. As an editor, I've used a stock service for images for a while. Still, recently I've found that, with a little patience, I can use Stable Diffusion to generate photographs for clip art that are "unique" but sufficiently convincing to seem real. Apart from a few illustrative examples, I haven't used this for DSC much (save for the photograph above), but as budgets contract, I expect it to become more normalized.
As a tech writer, I try to write content that simplifies fairly complex concepts. After getting onto a beta for Chat.openai.com, a generative text AI, I used the simple prompt “What is a data mesh?” to see what came out. The response:
A data mesh is a data management approach that focuses on creating a shared and decentralized data infrastructure within an organization. It involves establishing a network of interconnected data domains, each of which contains data related to a specific business function or area.
The goal of a data mesh is to enable organizations to easily access, share, and combine data from different sources and departments in order to support decision-making and drive innovation. This approach is in contrast to traditional data management approaches, which often involve centralizing data in a single location and imposing strict control over access and use.
Key elements of a data mesh include a clear governance structure, a set of shared data principles, and the use of open standards and technologies to facilitate interoperability and collaboration.
This wasn't a direct copy of any existing work, though it was likely drawn from several different sources. It took less than a second to generate. This is what differentiates search from generation: generative AI creates something new from something old. The new paradigm involves using prompts to generate multiple iterations of a concept, then refining that content until it becomes good enough. 3D modeling is the next major revolution, already being accomplished in near real-time, and with 3D modeling comes video production. Right now, things are still a bit clunky, but these tools will eventually (within the next year) be part of office suites and image and video editor applications.
Programmers may feel protected from this coming tsunami, but they’re not. For instance, the prompt:
Write a program showing the use of contenteditable in Javascript to make an HTML element editable
returns the following:
<html>
<head>
  <script>
    function makeEditable(element) {
      element.setAttribute("contenteditable", true);
    }
  </script>
</head>
<body>
  <h1 id="heading">My Editable Heading</h1>
  <p>Click on the heading to make it editable.</p>
  <button onclick="makeEditable(document.getElementById('heading'))">Make Editable</button>
  <script>
    // Set the heading element as editable by default
    makeEditable(document.getElementById('heading'));
  </script>
</body>
</html>
This was one of several different versions of the same code. This is an admittedly simple example, but the upshot is that such generated code works. It also allows other people to recommend different solutions, which then get pushed back into the AI stack. Prompts and intent replace writing code directly. While it still takes skill both to determine what the prompts should be and to adapt the output, the same was true of the copy-and-paste practices from GitHub that many a programmer relied on while becoming familiar with an API.
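As a simple illustration of that adaptation step, the generated pattern above can be extended so that editability toggles off as well as on. Here is a minimal sketch (the helper names are illustrative, not part of the generated output; note that the contenteditable attribute value is the string "true" or "false", not a boolean):

```javascript
// Pure helper: compute the next contenteditable state from the current one.
function nextEditableState(current) {
  return current === "true" ? "false" : "true";
}

// Flip editability on a DOM element (or anything element-like that
// exposes getAttribute/setAttribute). Elements with no contenteditable
// attribute are treated as non-editable.
function toggleEditable(element) {
  const current = element.getAttribute("contenteditable") || "false";
  element.setAttribute("contenteditable", nextEditableState(current));
}
```

Wired to the button in place of makeEditable, for example as `onclick="toggleEditable(document.getElementById('heading'))"`, each click would alternate the heading between editable and read-only.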
Back at the beginning of my career in the late 1980s, I worked for a typesetter a few months after seeing this new program called Aldus Pagemaker. At the time, I thought it was intriguing but still very clunky. Within a year, the typesetting company I worked for went out of business because they could not compete with their customers buying new Macintoshes and doing their typesetting with a computer that cost perhaps two weeks of layout work from the typesetter.
I feel we are on the eve of something like this, only much, much bigger. I hear many people say that this isn't real art, that it isn't actual writing. The new Turing test, however, is not whether something qualifies as art, or whether it's suitable for artists; the real test is whether one can survive the AI revolution. The more resourceful creators will adapt, incorporate these technologies into their workflow to become more adept at choosing from a universe of possibilities, and learn how to find the balance between protecting their intellectual property and letting other people play in the world they perceive. The rest will starve. This is where we're headed.
In Media Res,
Kurt Cagle
Originally published here.