Twenty years ago, I wrote my first blog post. Blogging (we called it Web-Logging back then) was just emerging from its primordial ooze, and those of us who were writing back then were convinced that it was either a radical new idea that would transform the world or a complete and total bust. Mainline journalists ignored us, at least until newspapers began to realize that there were a lot of writers out there who were perfectly comfortable writing for free.
At that point, many journalists still ignored us, but more than a few bit the bullet and embraced the Read/Write Web technology. Some, such as Robert Scoble, went on to fame. Others, myself included, never went mega-big but were eventually able to support ourselves relatively well through blogging. I’ve restarted The Cagle Report because I’ve realized that perhaps it is time to give it a go again: to write on topics without the constraints of being an editor, and hopefully to help others in the process. The site’s still rough and ready, but that never stopped me.
This is also my yearly forecast article. I’ll be looking at technical and social issues that will come to a head this year. I used to do this once a year, but the last few years have been turbulent enough that I somehow missed the opportunity. This year, in which I turn 60, seems as good an opportunity as any to get out the crystal ball (chipped and slightly cracked down one side) and see what I can see.
First, a preface. Several science fiction writers over the past fifty years have picked the 2020s as the era to be known historically as The Troubles. The reason is that we’re now seeing the confluence of many disparate trends that all seem to come to a head about, well, now. Whether due to demographics or political cycles, trends in artificial intelligence or automation, climate change factors or evil juju from the Lost Continent of Atlantis, this decade that started with the Covid-19 Pandemic will only get weirder and weirder.
This analysis (yes, it’s a long read) was intended to look at technology trends, but what is so striking to me this year is that so much of what I’m seeing now is about the Future of Work itself. This is significant, partially because I think we’re reaching a stage where the effects of technology on society, good and bad, are accelerating faster than the technology itself. This analysis is likely going to be the basis for a book next year.
If you’ve read my work from years back, some of these should be old hat to you. If you haven’t, you’re about to get the insight of many sleepless nights. I will say up front that I don’t necessarily think any of these trends are a foregone conclusion, only that the conditions are ripe for turbulence. I should also caution that this piece is long, so it may be best read in chunks, and I do not doubt that there will be parts of this essay that you may disagree with vehemently. Please, leave your comments, one way or another.
So, I present Through a Glass Darkly, the 2023 edition:
1. Generative Adversarial Networks – AI Strikes Back
Generative Adversarial Networks (GANs) have been in the news lately. A GAN pairs two neural networks: a generator that produces candidate outputs (such as images, video, text, or music) and a discriminator that learns to distinguish those candidates from real examples in a relatively large training dataset. The two are trained against each other until the generator’s output becomes difficult to tell apart from the real thing, at which point the model can generate the output that best fits new input criteria, such as a prompt. (Strictly speaking, several of the best-known systems discussed below, such as Dall-E, Stable Diffusion, and ChatGPT, are diffusion models and large language models rather than GANs proper, but they are all part of the same generative wave, and I’ll use the term loosely here.)
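To make the adversarial mechanism concrete, here is a minimal sketch of a GAN training loop in PyTorch, fitting toy two-dimensional data rather than images. The network sizes, learning rates, and data are illustrative assumptions, not anything from a production system.

```python
import torch
import torch.nn as nn

latent_dim = 8

# Generator: maps random noise to candidate samples.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))

# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def real_batch(n=64):
    # Stand-in for real training data: points on a noisy circle.
    theta = torch.rand(n, 1) * 6.2832
    return torch.cat([theta.cos(), theta.sin()], dim=1) + 0.05 * torch.randn(n, 2)

for step in range(2000):
    real = real_batch()
    fake = generator(torch.randn(real.size(0), latent_dim))
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(real.size(0), 1)

    # Train the discriminator to separate real samples from generated ones.
    d_loss = loss_fn(discriminator(real), ones) + \
             loss_fn(discriminator(fake.detach()), zeros)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator to fool the (just-updated) discriminator.
    g_loss = loss_fn(discriminator(fake), ones)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

After enough rounds of this tug-of-war, samples drawn from the generator become hard to distinguish from the training distribution, which is the property that makes the technique so powerful (and so contentious) at scale.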
GANs are controversial for several reasons. They are heavily dependent upon their training data, and in quite a few cases, that data came from online sources without permission being secured for that use. Dall-E and Stable Diffusion, two of the best-known image generators, can also create new artwork in the style of other artists, again at the possible expense of the artists themselves. Finally, especially in the case of ChatGPT, there’s no guarantee that the text produced is accurate or even makes sense. As such, the potential for these systems to create misinformation is remarkably high, at a very low cost.
This is the harbinger of the cognitive automation age, which I feel we are entirely unprepared for. If you are an artist, your career is threatened twice over: first, as generated art becomes more commonplace, people will commission less and less unique art; second, if you have a recognizable art style, style theft will likely become commonplace. Note that this applies to any creative endeavor: 2D and 3D art, photography, writing of most kinds, musical production, programming, and likely by the end of the year, video production, though video may take a couple more years to disseminate.
This does not mean that generative systems will write best-selling novels tomorrow, or perhaps ever. What it does mean is that with such a system, the time needed to produce a “good enough” work gets compressed to a small percentage of the total time. For instance, with the proper prompts (a point to come back to), you could lay out an article on a technical subject that you know comparatively little about and produce content that will likely pass many editors’ filters. You can significantly reduce the number of writers necessary to produce catalog content, summarizations, press releases, and all the humdrum writing that keeps many people afloat. It’s unlikely that these tools will ever completely eliminate the need for writers (or any artistic professionals), but they do eliminate many of the jobs that kept those creators afloat.
At the same time, I will make the argument that we change the definition of art about once a decade. When Photoshop began being used for producing artwork, it wasn’t considered art until people began making a living with it. Specialized filters and brushes weren’t considered legitimate artwork until people began making a living with them. Morphing was going to be the downfall of the art world; now it’s a setting in most graphics programs. The use of 3D rendering software in the hands of anyone outside of a studio was for sure not art, well, until people began making a living at it. There’s a theme here. I expect generative artwork will continue to be hotly debated, but artists are already deliberately building models of their own works to establish themselves as someone whose style others would want to emulate. As the rough edges get knocked off the technology (and as people begin using this as yet another way to make art they can sell), the controversy should fade.
It’s worth keeping in mind that what differentiates a writer or an artist is as much the originality of their perceptions and ideas as it is the quality of their execution. Emulating artists is hardly new; it has long been a means to figure out new techniques, to explore a different way of seeing, and to learn one’s craft, but most artists, regardless of their discipline, understand that they won’t become good artists if they are living in someone else’s shadow.
2. Consolidation in the AI Space
Ironically, some of the people who may be losing their jobs are the ones who developed this technology in the first place.
The first “AI winter” set in during the mid-1970s, after it became evident that the technology for supporting artificial intelligence was insufficient, and funding dried up for even basic research. It would take five decades and a genuinely exponential increase in computing power before that changed, but there is no doubt that AI has now gone mainstream.
One side-effect of this has been an explosion in AI startups, which the tech giants have then been acquiring nearly as fast as they appear. Unfortunately, the market has also reached an oversaturated point, and as investors have gone from being aggressive to being cautious, many of these smaller companies are now flailing. This is only being exacerbated as the technology stacks of companies such as OpenAI (Dall-E and the GPT models, heavily backed by Microsoft), Google, and Nvidia (anything having to do with GPUs) are beginning to dominate the space.
I’ve been predicting for some time that we are likely entering a tech bust similar to the dot-com collapse of 2000. That one was notable because the recession for most sectors was pretty mild, but the tech sector itself was hard hit. I don’t believe it will be as bad this time around: the tech sector, while larger in absolute terms than it was in 2000, represents a comparatively smaller footprint in the overall economy today, and is also more heavily integrated with other, more stable industries, such as finance, insurance, health care, and entertainment. This means that, rather than being out of work for long, people are likely to go from pure development work with startups to either working outside of their native sector or striking out a little farther afield for work.
3. The End of Geofencing …
One reason this recession may end up being mild is that the fall of geofencing is changing the way we think about employment. Before the pandemic, remote work made up about 3% of all jobs and was primarily the province of writers, artists, some programmers, and a few others. Virtual corporations existed, certainly, but even in the tech sector, most people worked onsite. The pandemic brought with it the necessity of remote work for a great number of people: at one point (around the middle of 2021), roughly 70% of workers were working from home. That number has since dropped to about 40%, but getting below that has proven remarkably difficult, much to the chagrin of senior executives staring at largely empty office space.
What the pandemic did was to highlight the existence of an implicit “geofence.” A geofence can be thought of as the constraints that keep people tied to a specific geographic location for a job. If you must commute to work, you generally need to live within a sixty-minute travel radius of the work in question. This means that if you change jobs, either the company you’re working for has to be within that radius, or you have to move. Moving is expensive, time-consuming, and stressful, and when a company only offers the prospect of a six-month assignment, it is usually out of the question. Thus, an invisible fence incentivizes a worker to stay with a company rather than take a massive financial hit to find other similar work.
If you can work remotely, with the occasional trip onsite by air, then the primary constraint is no longer commuting distance but time zone and language barriers. This is a profound change because it dramatically expands the options of who you can work with, as long as you’re willing to be flexible about your working hours. Being in Seattle and working for a company in London or Amsterdam is feasible for natural night owls.
This also means that when people are laid off, they no longer face being out of work for months on end but can be gainfully employed again within weeks, which means that companies now doing massive layoffs may be in for a rude shock. Under geofencing, people who were laid off were still likely to be in the area when companies switched back to a more proactive hiring stance, and were often willing to take a lower salary because other jobs weren’t available. In today’s environment, however, hiring managers are likely to discover that the labor (which is increasingly responsible for the value of the company) they are firing today will be less available tomorrow, and likely at a higher wage than when they were let go.
Instead, companies should be thinking about negotiating contingency plans with their employees: weeks off when times are slow and bonuses when times are flush, eliminating negotiated “bonuses” for senior management during slow periods, and reducing or even eliminating dividend payments to investors when companies fall below a specific threshold of revenue. This last point is especially important because companies have become so dependent upon investor revenue that they cease providing value to their customers.
4. … and Full-Time Employment
The other consequence of remote work and the end of geofencing is that the gig economy is moving its way up the stack, from the most vulnerable work to the least. The gig economy is nothing new. Hourly workers have found themselves in the increasingly untenable position of being paid less in inflation-adjusted dollars each week while simultaneously being constrained to a single job, because the hours assigned were irregular and capricious enough to eliminate the possibility of taking a second one. One thing remote work does is put the power of planning schedules at least partially back in the hands of the worker. This means that, with some care, remote workers can work for two or more employers during the same work week.
It’s important to understand that salaried employment was, in the first place, a guarantee: if you were available for work at all times, it was easier for an employer to spread out payments rather than having to pay for work all at once. Contractors, paid by the hour in time-and-materials contracts, know they have to increase their fees to cover the times when they aren’t doing billable work. Hourly workers get neither the guarantee of income nor the ability to price themselves competitively in the market.
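As a rough illustration of that pricing arithmetic (all figures here are hypothetical, chosen only to show the mechanics):

```python
# Why contractors bill more per hour than the equivalent salaried rate:
# they must also cover the hours that aren't billable.
target_income = 120_000   # desired annual income, in dollars
nominal_hours = 2_000     # full-time hours per year (50 weeks x 40 hours)
utilization = 0.6         # fraction of hours that are actually billable

salaried_rate = target_income / nominal_hours
contract_rate = target_income / (nominal_hours * utilization)

print(f"Salaried equivalent: ${salaried_rate:.0f}/hr")        # $60/hr
print(f"Break-even contract rate: ${contract_rate:.0f}/hr")   # $100/hr
```

The hourly worker is stuck with the worst of both columns: no guaranteed income, and no ability to mark up their rate to cover the idle hours.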
That relationship has broken down, resulting in staffing shortages, extremely high turnover, and burnout. The breakdowns are worst precisely where you would expect them: medical fields, hospitality and food service, retail, teaching, and transportation. These are all areas where you need to be physically at a location (or in a vehicle), the hours are unpredictable, and the pay is well below the amount of work being put in.
One response to this is a migration away from these problematic jobs into their online analogs. If you’re a nurse, you are probably well qualified to be a Wellness Consultant. Teachers are increasingly becoming curriculum developers or online instructors. Retail clothing workers are becoming fashion advisors or buyers or are setting up their own stores. Journalists have become video experts, marketing themselves as everything from analysts to thought leaders to influencers. The pay in most cases is not great, but when you’ve been cut from CNN, the New York Times, or elsewhere, what you have left is still a strong reputation. Moreover, there’s a growing realization that the cut jobs are not returning.
In reality, corporations are unraveling as a safe haven for steady work. The Cloud has already subsumed specific business functions, from HR to Sales to IT services to customer support. Manufacturing is becoming increasingly robotic, which means that most work there tends to be either design or maintenance; China is likely ahead of the US in that regard. The US, on the other hand, concentrates more on 3D printing, where design and maintenance likewise make up the bulk of the work.
This portends that work itself is becoming increasingly bursty, with the need for intense work for a relatively short period interspersed with longer periods of waiting for other projects from any given client. I call this the Production House Model, where relatively small companies service periodic projects (such as an SFX or post-editing shop in Hollywood) from different clients. Teams tend to be fluid from project to project.
What’s noteworthy here is that there is almost no expectation that people will be hired by the client directly. Indeed, the client is often shielded by a couple of layers of recruiters, contracting agencies, and similar companies that only exacerbate this process, as they don’t want to lose potential revenue generators to the client. This means that increasingly everyone is part of the gig economy except for a small shell of senior managers, the only difference being the length of the gig.
5. The Failure to Return to the Office
Coming up on the third anniversary of the appearance of Covid-19, what is striking is that the Return To the Office has stalled. More people are in offices today than at the height of the pandemic, but even in the face of an apparent recession, people are reluctant to return to the commuting lifestyle and are even willing to risk losing their jobs rather than returning to the grind.
The companies making the strongest push to get employees back into the office generally differ from those that don’t in a few key areas. They are large companies, which means their management structure is older and more traditional. Moreover, they are heavily invested in office real estate and equipment, and may be susceptible to a commercial real estate (CRE) consolidation that I believe could turn what is mostly a sector-specific recession into a broader one.
Part of the fear of returning to the office is that it necessitates a giving up of power on the part of workers for no real gain. This is especially true as many companies’ managements are becoming more punitive towards their workers, cutting staff, reducing benefits, demanding longer hours and lower wages, even in the face of persistent inflation.
For older workers, the writing is on the wall – it is better to retire now, while they have some control over their finances, than to be cut unexpectedly at the worst possible times. For younger workers, the equation is different but just as compelling – try to set up a business independently or in a startup with like-minded colleagues rather than stay in a position that is becoming increasingly untenable.
There are two additional factors exacerbating this tension. Senior management and investors in general have developed a disturbing sense of entitlement that is often at odds both with their own contributions to their success and with the civility that any business needs at all levels of its culture to keep from becoming toxic. Beyond this, the political, religious, and ethnic polarization that has become endemic in US culture in particular is manifesting in the workplace as well, moving from a stance where these topics were not discussed at all to one where there is explicit coercion towards a particular ideology.
What this points to is a growing conflict between those who are invested in the company and those who are invested in doing their job. This is not going to end well, especially as the latter group tends to make up the innovative core of companies.
6. The Incredible Shrinking Work Force
I’ve made this point several times before, but it’s worth bringing up demographics here, as population changes are also going to affect the nature of work. There are two related statistics that together tell an interesting story. The first is birth rate, which gives an indication of how many births occurred per 1000 population.
[Chart: US birth rate, births per 1,000 population, by year]
The second, family size, indicates the average size of families in the US, including parents.
[Chart: average US family size, by year]
A useful metric in the latter case is the family deficit (FD), which subtracts the family size from 4.1. The value 4.1 is the break-even point: you need two parents plus 2.1 children per family for the population to neither grow nor shrink. The population grew dramatically in the 1940s and 1950s (family size in 1953 reached a peak of 6.4, or 4.4 children per family). Still, as the second chart shows, family size has been dropping ever since (the same phenomenon is repeating globally).
Drawing attention back to the first chart: starting in 2008, after the birth rate had stabilized at around 14 births per thousand (a total of 4.20 million births), it declined every year. In 2022, that number was 10.7 births per thousand, for a total of 3.53 million births, even though the overall population had increased by about 30 million. If there had been no immigration, the US population would have declined by about 10% in that period. Even with immigration, the US population will peak by about 2042, marking the end of a period of continuous growth going back 400 years.
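As a quick back-of-the-envelope check of those figures (the population numbers are my own approximations, not taken from the charts):

```python
# Births = birth rate per 1,000 x population. Populations are approximate.
pop_2008, rate_2008 = 300e6, 14.0    # ~300M people, 14 births per thousand
pop_2022, rate_2022 = 330e6, 10.7    # ~330M people, 10.7 births per thousand

births_2008 = rate_2008 / 1000 * pop_2008   # ~4.20 million
births_2022 = rate_2022 / 1000 * pop_2022   # ~3.53 million
print(f"{births_2008/1e6:.2f}M births in 2008, {births_2022/1e6:.2f}M in 2022")

# Family deficit: FD = 4.1 - family size. Negative FD means growth.
fd_1953 = 4.1 - 6.4     # -2.3: well above replacement
fd_even = 4.1 - 4.1     #  0.0: population neither grows nor shrinks
print(f"FD in 1953: {fd_1953:+.1f}, break-even FD: {fd_even:+.1f}")
```

Note that births fell by roughly 16% even as the population grew by about 10%, which is why the birth rate per thousand dropped so sharply.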
This means in practice that an already persistent problem (shrinking populations of young people) will get much worse for the next fifteen years or more. Part of the reason so many fast-food restaurants and retail outlets are facing staff shortages is that workers in that demographic are already scarce and will be far fewer by 2030. If your business depends on inexpensive youth labor, it’s probably a good idea to re-evaluate your business plan. This will translate into fewer intern-level workers shortly and general labor shortages by 2040, all other things being equal, and will likely prove a bitter pill for investors, who will be forced to raise wages and invest heavily in automation.
7. Automation, Unions and the Three Day Face Week
Regarding automation (including cognitive automation, such as GANs): despite my lead-off section, I’m now convinced that automation will become necessary to maintain a breakeven state for most businesses. We will be facing persistent labor shortages for decades. Immigration will help some, but the same demographic trends are playing out globally, meaning that economically driven emigration to most OECD countries will continue to decline as well, as need drives opportunity locally.
I want to emphasize that the conjunction in this section’s title is and, not or. Automation has often been used as a way to reduce the workforce, but the reality, proven over and over again, is that no automation is perfect, in part because even our most advanced AI systems are still decades away (if ever) from being autonomous. My expectation, borne out by both interviews and personal experience, is that GANs will become one more tool in the artist’s toolbox, allowing them to be more productive.
Unions and automation have historically had an ambivalent relationship, but that may change in the near future. Indeed, I suspect that specific kinds of automation may be the basis for unions to form, not to oppose it, but to find common ground to press management. Again, it’s worth noting that automation often requires significant training to be useful and that specialization is something that workers carry with them from job to job. Especially with the growing expectation on the part of management that people come into a job fully trained, the idea of specialist Technical Societies emerging is well within the realm of possibility.
In the last two years, there have been more workplaces attempting unionization than in the ten years before that. Some of these efforts have been fought vehemently by the associated management. One reason for such union efforts is likely that there is a sympathetic ear in the White House (even though it has blocked union efforts in particular scenarios, especially where public health and safety may be impacted).
However, unions tend to rise when a strong imbalance exists between investors and workers, in the investors’ favor. Unionization is also a characteristic of the Studio Model, where production is maintained by specialized production houses that service multiple clients. These businesses usually have union-like structures (and may have strong union affiliations among their members) without necessarily carrying the formal designation of a union.
One additional trend in this mode is what I’d call the Three Day Face Week, a behavior I’m already seeing and one that will likely become more pronounced over time. This is the expectation that meetings take place during the middle of the week, Tuesday through Thursday; Monday and Friday are then left unstructured to concentrate on projects, and the weekend remains personal time. This isn’t so much a move away from the 40-hour week as it is the realization that meetings are disruptive to concentration, so it is best to group them to mitigate the effects of that disruption. In practice, I’ve also found that Sunday afternoon becomes a de facto work period (Planning Eve), while Friday afternoon is usually taken as personal time (Decompress).
8. Commons, Twitter and the Mastodon Effect
One characteristic that will likely only strengthen this process is the rise of distributed communities. There’s a fascinating process taking place in the realm of journalism right now: social media is replacing journalistic institutions, and I believe something profound is happening in the transition. Twitter and Facebook both started in the mid-2000s, and collectively they shaped our expectations of social media over the last two decades. Before them, America Online (AOL) attempted to create a closed garden that sat on top of, but didn’t participate much with, the Internet. Facebook created graphs of social connections, but its strength has always been in keeping its audience within the walled garden of the Facebook (later Meta) infrastructure. Not surprisingly, it eventually became a target of partisan politics, most famously by allowing third parties to harvest user data for targeted political advertising. One of those third parties, Cambridge Analytica, created a massive propaganda campaign with ties to one of the political parties, leading ultimately to Congressional hearings and massive fines.
Twitter, on the other hand, took a different approach, building out a short-form messaging service that was very quickly adopted by journalists worldwide. Twitter was an example of a commons: it generally operated at a loss, but for a long time its board took the attitude that it was operating what amounted to a public good. Former President Trump used the platform heavily and increasingly in opposition to the board’s rules of conduct, to the extent that he was banned from Twitter after the January 6 insurrection.
Two years later, in 2022, Elon Musk, then CEO of Tesla, ended up buying Twitter outright and taking it private. I will not speculate here on the eventual fate of the platform beyond my belief that, absent significant change, the pre-eminence Twitter has held is likely at an end.
Commons are interesting. They tend to emerge spontaneously and fall somewhere in the space between a private and a public concern. They serve a need, providing a neutral ground for the exchange of information. They seldom make a profit, but they are highly influential. Twitter, pre-Musk, was a commons, though its dependence on advertising tended to skew it somewhat towards a corporatist viewpoint. But Twitter was also a corporation, run in a centralized fashion. It could be bought, and eventually it was.
Mastodon, first released in 2016, was developed as an alternative to Twitter. Mastodon’s platform differs from Twitter in that it uses a decentralized, federated architecture. One way of thinking about it is that Mastodon is a network of servers run by different organizations but sharing a standard communication protocol. You sign up on one of the servers and can immediately read the news on that server, but you can also get “toots” from the various other servers in the network.
In practice, Twitter was also federated (when you connected to Twitter, you were connected to one of potentially thousands of servers that shared protocols). Still, the difference is that the federation did not extend to ownership. Anyone can set up a Mastodon server, and many organizations already have one. Thus, it is not farfetched to imagine the New York Times, the Washington Post, and Fox News setting up Mastodon servers. The servers are moderated by their hosts, and if you violate the terms of service, that host can remove you from their server, but you can still access the network elsewhere. Hosts can also determine who they are connected to, so a rogue server can be isolated if necessary.
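To give a sense of how open this architecture is, any client can read an instance’s public timeline through Mastodon’s REST API without special access. Here is a small sketch; mastodon.social is just one well-known instance, and you could substitute any other host:

```python
# Fetch a few recent public posts from one Mastodon instance.
# Any instance exposing the standard API should respond the same way.
import requests

resp = requests.get(
    "https://mastodon.social/api/v1/timelines/public",
    params={"limit": 3},
    timeout=10,
)
resp.raise_for_status()

for post in resp.json():
    author = post["account"]["acct"]  # "user" locally, "user@host" if remote
    print(author, post["url"])
```

That same openness is what lets independently run servers interoperate: the protocol, not the owner, defines the network.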
I’ve been involved with the Solid Project through the W3C, which uses individual data servers (pods) built on a similar architecture. Should the Metaverse ever get off the ground, it will likely do so through the use of federation. There are several key advantages to this approach. The first is that there is no one central server or company that a private corporation can acquire. Like networks in general, it is resilient: should one or even several Mastodon servers drop out, the network can route around them with comparatively little disruption.
In other words, Mastodon has the potential to become a network structure as significant as the World Wide Web. I also would not be surprised to see Mastodon and Discord adopt several shared protocols, as Discord servers share a great deal of structural similarity. Finally, Mastodon and Solid seem like an intriguing fit, as the infrastructure necessary to support a Solid server is similar to that used by Mastodon.
What’s important is that these shared communities increasingly look like the future of social media: distributed, federated, moderated, with common communication and application protocols. It will be harder for any country or organization to ban such servers outright, and it will also be harder to subvert the network with spam or disinformation.
9. Graph Roundup
This is a good segue to the last topic in my 2023 report: the state of graphs. Sometimes I feel I’ve been talking about graphs for years, and they were always on the horizon. I don’t think that’s the case now. We’re in the midst of the graph revolution, though it’s playing out a little differently than I thought it would.
First, the hoped-for consolidation on the RDF stack has never really occurred, though it is certainly farther along than I’d feared. I still believe that we need a SPARQL 2.0 working group, one that can bind together several features that are needed and already exist in various implementations: RDF-Star, RDF list objects, the formal integration of SPARQL and SHACL, workflows, a way of declaring and implementing extensions, a redo of the federation specification, expanded predicate paths, and other nice-to-haves that are incorporated on one platform or another.
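For those who haven’t worked with SPARQL, here is a small sketch using Python’s rdflib that shows one of the features mentioned above, a one-or-more predicate path; the data and names are invented purely for illustration:

```python
# Query a tiny in-memory RDF graph with a SPARQL property path.
from rdflib import Graph

g = Graph()
g.parse(data="""
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    @prefix ex:   <http://example.org/> .
    ex:alice foaf:knows ex:bob .
    ex:bob   foaf:knows ex:carol .
""", format="turtle")

# foaf:knows+ walks one or more foaf:knows links, so this finds
# everyone reachable from ex:alice, directly or transitively.
results = g.query("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    PREFIX ex:   <http://example.org/>
    SELECT ?person WHERE { ex:alice foaf:knows+ ?person . }
""")
for row in results:
    print(row.person)   # ex:bob, then ex:carol
```

Expanded predicate paths of this kind are exactly the sort of feature that varies from one implementation to another today, which is why a SPARQL 2.0 consolidation effort matters.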
I’d also like a formal specification (probably built around SHACL) to make RDF graphs interchangeable with GraphQL. Innovations specified in Gremlin and OpenCypher could also be integrated into the SPARQL 2.0 specification. GQL represents another offshoot of graph technology, providing a SQL-like language for querying property graphs. However, integration between GQL and SPARQL could be more complicated, as the two have different philosophies about how to do overlapping tasks.
More to the point, I think the next major area to tackle in the graph space relates to the previous section: figuring out a way to communicate between different kinds of federated systems. Graphs are the abstract representation of networks, and we are moving to the point where being able to create and manipulate such virtual networks will likely be key to the large-scale evolution of the web in profound ways. Machine learning and GPT are simply another form of graph, albeit one where the nodes themselves are tensors.
Summary
All in all, I believe we’re in the process of redefining the nature of work and who controls the fruits of that work. This is not socialism good/capitalism bad (or vice versa). Rather, I see several key trends:
- AI/Automation and distributed networks are muddying the question of what intellectual property is (and who benefits from its production).
- As workers can better arbitrage their labor, many assumptions about economics (which has implicitly assumed geofencing) will turn out to be wrong.
- There is a long-term trend on the part of labor to seek out asynchronous (low-interaction, mainly online) work over synchronous (high-interaction, in-person) work, putting particular pressure on onsite labor pools.
- Demographics favor labor over investors for the next couple of decades. Automation will augment rather than supplant labor.
- A new economy is emerging based upon federated, distributed, networked social media platforms, and your immediate network will play a much larger role in your employment than any individual company (I plan to discuss this point in a separate article soon).
Here’s hoping your 2023 proves interesting and productive.
In Media Res,
Kurt Cagle
Editor, The Cagle Report