I have been and likely will always be interested in artificial intelligence; I genuinely find the topic fascinating. Artificial humans played an extremely important part in my childhood, and they still figure prominently in the way I see and interact with the world. So, it should come as no surprise that, as I leveled up through life, I dumped a ton of personal skill points into the philosophy and technology of AI. However, today, as I find myself trying to decide where I want my life and career to go, I look at the current AI renaissance we are experiencing and can’t help but feel deeply disappointed with it. Even though there couldn’t be a better time to be an AI expert, and investors are pumping billions of dollars into the space hoping to repeat the success of ChatGPT, I cannot help but find most of the projects that are hiring to be unexciting, unimaginative, and ultimately unlikely to go anywhere.

It’s easy to criticize the excitement surrounding the generative AI systems that currently dominate discussions. ChatGPT is definitely an impressive scientific accomplishment that surprised everyone, but it’s not hard to find reasons to complain about it. We can make privacy complaints, or copyright complaints, or even complain about the quality of the writing it produces; but ultimately those complaints are quickly dismissed by tech enthusiasts who see the trendline of technology and can simply extend it further in their heads. We can fix those issues, they say: we can work on improving privacy, we can work on creating more fair and ethical datasets, and ultimately we can create better systems. We are only at the beginning of this process; who knows where we will be in five, ten, or even a hundred years. Likewise, ChatGPT isn’t even the first major success these advanced AI systems have had. It wasn’t even ten years ago1 that Google’s DeepMind shattered records by creating the first go-playing bot capable of beating a top human professional; go had previously been thought too complicated a game for computers to master. So it feels kind of silly for me to sit here and criticize an industry that took less than ten years to move from mastering the game of go to mastering language. Yet, here I am.

AI is not AI.

I have come to hate the term ‘Artificial Intelligence,’ as it is a marketing term and not a technical one; there is no clear line that separates AI from conventional algorithms. There are definitely attempts to define AI in some meaningful way, but these definitions are either useless or don’t line up with how the term is used by general audiences2. Any computer program can be AI so long as the marketing team includes the term in their branding. What is a spell checker if not a computer that intelligently finds mistakes in text? What is a mapping program if not a computer that intelligently finds and displays static images? What is an operating system if not a computer that intelligently translates between user input and machine code? And what is a garage door opener if not a computer that intelligently raises and lowers a door? An AI company is just a software company that wants to stand out among other software companies. Sooner or later, everything computers do will be labeled as AI, until such a time as the marketers decide that the label is no longer advantageous. Computer science is the science of artificial intelligence, so long as the computer is the thing being intelligent.

What we think of as AI is just a bunch of loosely associated, yet independent, algorithms developed by computer scientists. AlphaZero, the chess- and go-playing system, is four such technologies bundled together to produce an extremely intelligent program: a Monte Carlo tree search algorithm, a deep neural network heuristic, a conventional expert system, and an automated self-play system that generates training data. It’s a bit disingenuous to call the whole system a “deep neural network” when only the heuristic part of it is one. The system as a whole is indeed self-trained in the sense that it requires zero human data3, but the algorithm that creates and manages the training data is human-made. Likewise, the whole thing is governed by an expert system4 that defines what the rules of chess are and what it means to win. Even the system that reasons about game states, the tree search algorithm, is a conventional algorithm designed by computer scientists and governed by conventional mathematics. The only part of AlphaZero that is “AI” is the intuition it has about which board positions might be better than others. That part’s only job is to keep the tree search algorithm, the part that ultimately gets the final say over which move gets played, from wasting precious processing time contemplating bad moves.
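To make the division of labour concrete, here is a minimal sketch of how those four pieces fit together. The class and function names are mine, not DeepMind’s, and the value function is a random stand-in for the real neural network; the point is only that the rules and the search are conventional, human-written code, while the learned part merely scores positions.

```python
import random

# A minimal sketch of an AlphaZero-style pipeline. Everything here is
# illustrative: the names are hypothetical and the "network" is random.

class ChessRules:
    """The expert system: hand-written rules, no learning involved."""
    def legal_moves(self, position):
        return ["e4", "d4", "Nf3"]          # placeholder move list

    def apply(self, position, move):
        return position + (move,)           # placeholder successor state

    def winner(self, position):
        return None                         # placeholder: game never ends here

def value_network(position):
    """The only learned part: a guess at how winnable a position is.
    It returns a probability-like score and knows nothing about legality."""
    return random.random()

def choose_move(rules, position):
    """The decision framework. The value network only ranks positions;
    the rules guarantee that whatever gets picked is actually legal."""
    best_move, best_score = None, -1.0
    for move in rules.legal_moves(position):     # legality comes from the rules
        score = value_network(rules.apply(position, move))  # intuition
        if score > best_score:
            best_move, best_score = move, score
    return best_move

def self_play(rules, max_moves=10):
    """The training-data generator: play against yourself and record the
    positions so the value network can later learn from the outcome."""
    position, history = (), []
    while rules.winner(position) is None and len(history) < max_moves:
        position = rules.apply(position, choose_move(rules, position))
        history.append(position)
    return history

print(self_play(ChessRules()))
```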

Sooner or later, everything computers do will be labeled as AI, until such a time as the marketers decide that the label is no longer advantageous.

This is important because the overall goal of AI research is to create a computer program that needs no input from humans whatsoever. It would just be a blob of circuitry that, when thrown out into the world, can create its own expert system, its own tree search algorithm, and ultimately its own framework for growth. Yet the theoretical groundwork needed to confirm that such a program can even exist has not been laid. Computer science is still in its infancy, and nobody can tell you what the limits of computation are5. So arguing against AI is effectively arguing against the potential of computer science as a whole. I can’t tell you what AI can or cannot do because nobody can tell you what computers can or cannot do. However, the current craze is not about computers generally, because we’ve had those for a long time now; it’s about specific, impressive technical demonstrations that ignite the imagination in new ways, and behind those demonstrations are algorithms that we do know a lot about: specifically, machine learning.

And, oh boy, the reality of machine learning does not line up at all with the way investors and the public at large talk about it.

It is not better at reasoning.

Machine learning (ML) should really be thought of as machine “intuition” rather than machine “intelligence.” A machine learning algorithm is a sophisticated mathematical object that can approximate a function given only its inputs and outputs; however, it is fundamentally a statistical beast that works in a statistical world, not the deterministic one we are used to. This is both its greatest strength and its greatest weakness.

To return to the chess example, if the ML algorithm used in AlphaZero plays trillions of games against itself, it eventually builds an intuition about which positions are good: in the past, positions like this tended to end with me winning, while positions like that tended to end with me losing. It can then analyze any board state and generate a probability that the state is winning. This probability is essentially a similarity metric: a measure of how closely a given position resembles the positions that tended to win. At no point does it actually “choose” a move to play. Worse, it can’t even generate a list of legal moves, because it cannot, on its own, reason about the rules of chess. It can only tell you how good a position is, with no concept of how that position came to be. Therefore, it needs a framework within which it can make decisions. This is why the chess AI needs an expert system that knows which moves are legal and a tree search algorithm that can reason about board states; without them, the best it can do is point at board states that would be advantageous to you, with no guarantee that the moves needed to reach those states are legal.

Language systems operate with the same disability, just in a different configuration. Instead of a board state, a language model operates on something called a context window. The context window is the information that a language model can use to infer an appropriate response. Instead of the question, “is this position winning?” language models are trained to answer the question, “given this context, what is the next word in the response?” They are incredibly powerful at inferring solutions within their context window, but once that window is full, any new information requires the removal of old information, which limits the complexity the system as a whole can process. And these systems still only produce probabilities that a certain word6 will be the next word in the sequence; they still need a decision framework to pick which one gets chosen. The system could simply select the word that is most likely to appear, but that is not a safe way to make decisions.
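Here is a toy version of that setup, with made-up numbers standing in for a real model. The only thing the “model” does is assign a probability to each candidate next word given a fixed-size context window; a separate, entirely conventional decision rule has to turn those probabilities into an actual choice.

```python
import random

# Illustrative only: a fake "language model" that scores candidate next
# words, plus two possible decision rules. The vocabulary, window size,
# and probabilities are all made up.

VOCAB = ["the", "cat", "sat", "on", "mat", "."]
CONTEXT_WINDOW = 8   # how many tokens the model is allowed to look at

def next_word_probabilities(context):
    """Stand-in for a trained model: returns P(word | context)."""
    context = context[-CONTEXT_WINDOW:]          # older tokens are dropped
    scores = [random.random() for _ in VOCAB]    # a real model computes these
    total = sum(scores)
    return {word: s / total for word, s in zip(VOCAB, scores)}

def pick_greedy(probs):
    """One decision rule: always take the most likely word."""
    return max(probs, key=probs.get)

def pick_sampled(probs):
    """Another decision rule: sample in proportion to the probabilities."""
    return random.choices(list(probs), weights=list(probs.values()))[0]

probs = next_word_probabilities(["the", "cat"])
print("greedy:", pick_greedy(probs), "| sampled:", pick_sampled(probs))
```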

Consider the following game of chess (I’ve linked to the specific move I want to talk about, but the entire video is interesting):

Up until this point in the game, the ChatGPT chess bot has played near-perfect chess and shown fairly conclusively that a language model can reason about the rules of chess. However, as the game goes on and the context window fills up, it suddenly loses its ability to reason about everything at once and makes an illegal move. It has created a board that its probability engine likes, but the rules do not, and it can no longer hold within itself the context necessary to reason correctly about both at the same time. Once this happens, there is no going back to intelligent moves. We humans can’t even interject and inform the computer about its mistake, because that information would need to be stored in its context and would therefore push something else out. This creates a hard limit on what these types of systems are capable of reasoning about.

Yes, we could increase the size of the context window; we could invoke Moore’s law and argue that memory sizes will continue to double every two years. However, that is only helpful if the context required to solve a problem scales gently with the problem’s complexity. The relationship between an algorithm and its required resources is governed by a field called complexity theory7, which I do not have time to go into right now. The short version is that twice the resources do not buy you twice the problem. The second move of chess is twenty times more complicated than the first because the first player has twenty options, and the second player needs to know how to respond to all of them. Doubling the resources dedicated to remembering a board state might improve the program’s reasoning ability by no more than a single move.
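A back-of-the-envelope version of that claim, assuming a rough branching factor of about twenty legal moves per position (the real number varies throughout a game):

```latex
% With roughly b legal moves per position, looking d moves ahead means
% tracking on the order of N(d) = b^d continuations, so doubling the
% available resources buys only a fraction of one extra move of lookahead.
\[
  N(d) = b^{d}, \qquad
  \frac{N(d + \Delta d)}{N(d)} = b^{\Delta d} = 2
  \;\Longrightarrow\;
  \Delta d = \log_{b} 2 \approx 0.23 \quad \text{for } b = 20.
\]
```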

Yet why should we be impressed by a language model being able to play chess when AlphaZero was doing it at a superhuman level ten years ago? Sure, it’s a scientific achievement that a pure ML system can learn the rules of chess, but it’s utterly impractical. AlphaZero runs up against the same hardware limitations that ChatGPT does: it has only so long to decide on a move. But it never makes illegal moves, because it doesn’t waste resources inferring the rules of chess; its expert system is so efficient at reasoning about game rules that its processing time is negligible and is almost always forgotten when people talk about the algorithm. So why would we want a pure ML-based solution when a conventional expert system can do it faster, more efficiently, more cheaply, and without error? There are two arguments.

The first is that human programmers are expensive, while processing power is relatively cheap, so any system that can train itself replaces a human and therefore saves money in the long term. However, this argument falls apart when we consider that a programmer’s salary is a fixed cost, while CPU usage can continue to grow forever. Sooner or later, the cost of dumping money into a cloud computing platform will exceed the cost of hiring someone to just do the job directly. We will come back to this.

The second is that these systems can be trained to reason about things that either aren’t governed by hard rules or whose rules aren’t known. The idea is that this technology could allow us to build models that can answer questions that previously only a human could answer.

It does not replace people.

Training data seems to dominate the discussion around this new generation of AI more than it did the previous generation. AlphaZero gathers data by playing games against itself and building up an intuition about which moves generally result in won games. Language doesn’t work like this. We can’t just get two computers to talk to each other and learn English, because we can’t build an expert system that defines what valid English is. How do we know which reply to a statement is the correct reply? The philosophy of language is deep and impossible to summarize, and, unlike chess, there is no universally accepted set of rules for what is and is not valid language. So, the only method we have come up with to create such an intuition around language is to ask a human (one who is presumably good at language) what constitutes a good response.

It is a fundamental misconception that, with time, these algorithms can perform better than humans, because so long as they depend on humans to generate training data, the best they can ever perform is on par with the performance of whoever created that data.

Language isn’t a mathematically defined object and doesn’t exist in the same way that winning a game of chess does. It is a social construct that exists because we humans collectively agree that it exists, and we can also change it at any time. When an ML model needs to learn about these types of systems, it has to create a strict mathematical approximation of something fundamentally chaotic8 with no way of validating the accuracy of this approximation except by asking humans if it got it correct. The only reason these systems can write books is because books already exist. The ML model can see a book, learn from that book, and ultimately build an intuition around what books are. If there were no humans to create the books it learned from, it would never know what a book is. So these systems can only replace a human in the same way a mirror9 replaces a human. So long as there is a human to reflect it will do so with amazing accuracy, but once that human is gone, the reflection disappears. Once all the authors lose their jobs, the ML system’s intuition around what constitutes a good book can never be challenged. When asked, it will endlessly generate the same book because its mathematical brain believes that this one book is the correct answer and nobody is left to tell it otherwise.

It is a fundamental misconception that, with time, these algorithms can perform better than humans, because so long as they depend on humans to generate training data, the best they can ever perform is on par with the performance of whoever created that data. So in order to create a good AI system, we always need at least two people: one person who knows the subject and can accurately assess how good the machine’s response is, and another who can manage these datasets and train the model accordingly. If we argue that AI is coming to replace people’s jobs, we should remember that at least those jobs still exist; we have just outsourced them to the AI company.

It cannot innovate.

Remember how I said that ML machines don’t make decisions and only produce likelihoods? Well, that theoretical framework has other side effects. If we accept those likelihoods as decisions, without any other processing, then what we produce will always be the statistical average of whatever the algorithm was trained on. If I give a computer a million pictures of a car and ask it to produce another one, it will produce a picture that looks reasonably like every other picture it was trained on, because the training system is optimized around identifying the likelihood of a picture being a picture of a car. This car would be average-sized, be shaped like a car, be painted in a car-like hue, and have four wheels, two windshield wipers, headlights, taillights, and maybe a person driving it10. In effect, it would be a car close to the statistical average of what a car should be.

The same is true for writing: books, articles, poetry, and anything else you ask it to write will end up being the statistical average of whatever writing it was trained on, no matter what you ask for. This is great if all you want out of your writing is massive quantities of average articles, but do we really want that? Sure, I can write a book using ML, but what value will that book have if it is designed specifically to disappear on a bookshelf, to have a title that sounds like all the books around it, or to appear in roughly the middle of search results surrounded by millions of other bland and uninteresting titles?

Can we play with the decision engine so that it sometimes picks less likely options? Absolutely! This is exactly how AlphaZero works, in fact. It may use its ML intuition to sort options, but it still uses rigorous mathematical principles, and conventional algorithms, to decide which one is best. However, the reason it can do this is that “the best option in chess” is a clearly defined mathematical objective that does not change and can be translated perfectly into working code. The same is not true for writing. There is no algorithm for “good writing,” because good writing becomes bad writing when there is too much of it. Good writing needs to change, and ML systems will never be able to keep up because their definition of “good writing” is frozen inside their training data.
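For language models, the usual knob for “sometimes pick less likely options” is temperature sampling. A minimal sketch, with made-up probabilities standing in for a real model’s output: low temperature collapses onto the statistical average, high temperature lets riskier options through.

```python
import math
import random

# Temperature sampling: reshape the model's probabilities before choosing.
# The candidate words and their probabilities below are invented.

candidates = {"ordinary": 0.70, "unusual": 0.20, "bizarre": 0.10}

def sample_with_temperature(probs, temperature=1.0):
    # Convert to logits, rescale by temperature, then re-normalize.
    weights = {w: math.exp(math.log(p) / temperature) for w, p in probs.items()}
    total = sum(weights.values())
    reshaped = {w: v / total for w, v in weights.items()}
    return random.choices(list(reshaped), weights=list(reshaped.values()))[0]

for t in (0.2, 1.0, 2.0):
    picks = [sample_with_temperature(candidates, t) for _ in range(1000)]
    print(f"temperature {t}:", {w: picks.count(w) for w in candidates})
```

Even then, the knob only changes how often the average gets produced; it does not supply a definition of what the output should be aiming for.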

We can work around this with effort. Humans can modify the training data to move the statistical average wherever they want it to be. Or, we can wrap the machine intuition inside a decision algorithm of some sort. However, both of these options destroy the self-trained illusion these systems depend on. What good is an AI system that depends on human intervention to stay intelligent?

Data collection is adversarial.

Imagine getting funding for a project; let’s say, to scrape data from Reddit. We hire a programmer who builds an application that begins downloading the website. The mission is declared a success, and we inform our funding source. After much discussion, the executives decide to expand our scraping efforts to another website. However, when we share this new direction with the programmer, they give us terrible news: Reddit has changed its API, so the software no longer works and needs to be rebuilt. Later, this happens again, and then again. Eventually, after months or possibly years of rebuilding the application, the funding source begins to demand explanations as to why we haven’t moved on to the new website. There is no good answer to give them. Telling them that we are still working on the Reddit scraper would imply that we lied to them in the past about the project being done, but giving them any timeline would be lying to them today. The reality is that the Reddit scraper will never be done. So long as the other side keeps making changes, we will always need to invest our own resources into keeping up with those changes. Our ability to rely on such a technology is also limited, as we have no control over the work the other side is doing; anything we build can break at any time. Worse, if Reddit is incentivized to prevent our access to its data, it will keep innovating on new ways to prevent that access, which means we also need to innovate just to keep our existing system operational. This is adversarial technology: two companies working with the same commons but with opposite goals11.

One way to make money off of transforming one job into two is to find ways to use work that has already been done. Early versions of ChatGPT were built on datasets that were relatively easy to gather. The internet is designed to push content to as many people as possible, under the assumption that all interactions with that content benefit the content creator. However, now that the first demo of ChatGPT has proven the value of these datasets, things are already changing. Twitter12 has shut down its public API, Reddit has done the same, and any website that feels its data has value is now incentivized to follow suit. What was once free now costs money, and as more companies bid on access to these datasets, the price will only increase. Every successive generation of these language models will be trained on more expensive data, either because the companies will have had to pay ever higher prices to the data providers, or because they will have had to invent technologies to bypass the defenses that are put up. The only other option is for these models to generate their own data. However, that is not a good long-term option, as it invalidates the business model. Instead of replacing human jobs, these systems are now creating them.

Its only use case is spam.

What about economies of scale? Even if it takes a massive amount of human labour to create a model, can we still make money if it services an even larger customer base? This is normal in the business world, and it works well in situations where one part of a business is important but not a differentiator. Multiple competing convenience stores can use the same point of sale (POS) system without issue because the POS is not the reason customers enter the store. Inventory management is important, but it is never the reason a store exists; therefore, it is easy to outsource to another company. The solution I am using can just as well be the same as the solution everyone else is using, and I can focus on the things that make my company unique.

The rise of AI is coming in an era where products increasingly do not matter.

The same is true for generative AI. It works best when the things you are generating are not a differentiator for the business and are therefore easy to outsource. A good use case would be a video game world designer. There might be an ocean in their game that needs a sea floor that players can technically swim down to, but that plays no significant part in the narrative. They would want there to be something down there so that players don’t glitch out of existence if they make the journey, but it isn’t a differentiator; their sea floor can just as well be the same as their competitors’. Another, more cynical example would be a link’s thumbnail image. If I didn’t care about how the image represented my content or my brand, I could outsource its creation to an algorithm that specializes in creating enticing clickable images. It wouldn’t matter that my enticing clickable image looked just like the image everybody else who used the service got, because I would still be getting more traffic than if I had made the image myself.

There is one business model that needs a lot of text, or images, where the content itself is not important to the business: spam. Spammers don’t care about standing out or creating something unique. In fact, they want to blend in. If their email looks indistinguishable from a reputable company’s, or their website looks indistinguishable from a reputable website, then users are more likely to trust it. Even better, if their marketing looks legitimate but is easily forgotten, they are much more likely to get away with the scam.

Unfortunately, this is where things get depressing very quickly. The rise of AI is coming in an era where products increasingly do not matter. Once I have paid for a product, taken it home, and voided the warranty by opening the box, it no longer matters if the product stands out on its own merit; profit has already been made. We cannot train an AI to create a quality article, but we can train it to create an article that is more likely to be clicked on or shared. I can tune a marketing bot to create a logo that is more likely to be purchased, and another that is less likely to be returned. The fact that my branding won’t stick out in my customers’ minds is a feature. Once they find out how shit the product is, it’s best if they can’t remember where they got it from or where to leave the review.

This is the realm of spam, and this is the business that benefits most from this technology. They are the only business that cares so little about their products that they would be willing to pay an automated system for the privilege of getting the same trash that everyone else is getting13,14.

It is not profitable.

What is the main drive behind generative AI? Why is it inevitably the future? Why is it so important that generative AI be inserted into everything from search engines to household appliances? What value does generative AI bring today that it didn’t bring before ChatGPT entered the picture? Unfortunately, the answer I always get is efficiency. Generative AI can, supposedly, create works cheaper and faster than a human being, which will, apparently, increase productivity across the entire economy. However, I find that claim dubious. Business models based on nothing except return on investment are questionable at best, and when they do function, it’s because they have created a technology that simplifies, or outright eliminates, steps in a workflow. If it takes me six steps to bottle soda, someone figuring out how to do it in five will likely save me a ton of money. Even then, the amount of money I’m willing to spend on such an innovation is limited by how much I am already spending. The only way I would consider spending money on a system that cuts ten percent of my workflow is if it costs less than ten percent of my current costs. Generative AI is not this:

Generative AI systems are less efficient at reasoning than conventional ones.
Generative AI systems still need human workers.
Generative AI systems are adversarial.
The cost of such systems will only grow over time.

All in all, I just don’t see how investors can possibly get any return on their investment with a technology that is fundamentally more expensive than the technology it is supposed to replace. Yet this isn’t new in the tech world. ChatGPT seems cheap today because the cost of researching it – and the computing power needed to run it – are heavily subsidized by investors. Of course it seems inexpensive; anything would if it were being funded by people with so much money that they warp entire economies around themselves. However, nobody has infinite money. Sooner or later, these systems will have to turn a profit and, just like every other business, the price they charge will have to cover all of the many costs I have gone over up to this point while somehow staying below the price of the workers they are replacing15.

I don’t see the value here. In fact, I would go as far as saying that any generative machine learning system that relies on human data will never be economically viable, regardless of future technological progression. And yes, maybe someday language or art systems can learn to create their own training data, but that technology does not yet exist, and I doubt the folks selling algorithmically generated website copy are working on that problem.

Conclusion

The greatest lie about automation is that it is inevitable; it’s a scary bogeyman coming for all of our jobs. However, it’s hard to take these claims seriously when we still live in a world where our clothing is assembled by hand, our electronics are assembled by hand, and everyone who scrambled for a degree in hopes of escaping the automation of the trades now has a shittier and less stable job than those who built blue-collar careers.

Automation works because we can contain some level of expert knowledge inside a static system16. It’s expensive to do, but can pay out over the long term. We can build automated systems to fill two-liter pop bottles because bottles are functionally identical to what they were when I was a child. A machine that filled pop bottles back then will still work today, and likely will still work long into the future. If I had invested in such a system, then the massive upfront cost I paid back then would still be returning dividends today. Yet tech has a hard time staying relevant for even a few months, let alone thirty years. There are new iPhone models out every year, marketing departments need to react in real time to cultural movements, and journalists must react to events as they happen. This all requires a fluid and adaptable workforce, which humans excel at and machines do not.

This new generation of AI is supposed to get around this by automating our adaptability, and it works, to a degree. ML systems can be more malleable than conventional expert systems, but that isn’t always a good thing. If you ask a chess engine what the capital of Ontario is, it will break; it doesn’t even have the capacity to accept such a question as input. However, if you ask the same of a language model, it will give you an answer whether or not it knows the correct one. Language models, by design, have no guard rails; they will always return the most likely answer, even when that answer is not likely at all.
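A small illustrative contrast (the functions and the lookup table are hypothetical stand-ins, not any real engine or model): the rule-bound system refuses bad input with a hard error, while the probabilistic one always returns its best guess, right or wrong.

```python
# Hypothetical stand-ins for the two styles of system described above.

LEGAL_OPENING_MOVES = {"e4", "d4", "Nf3", "c4"}    # tiny stand-in rule set

def chess_engine(move: str) -> str:
    """Expert-system style: invalid input is a hard error, on purpose."""
    if move not in LEGAL_OPENING_MOVES:
        raise ValueError(f"{move!r} is not a legal move")
    return f"responding to {move}"

def language_model(prompt: str) -> str:
    """Probabilistic style: there is no notion of an inadmissible question,
    so it always returns its most likely guess, whether or not it knows."""
    guesses = {"capital of Ontario": "Toronto"}    # made-up lookup
    return guesses.get(prompt, "Toronto")          # confident, possibly wrong

print(language_model("capital of Ontario"))        # an answer
print(language_model("best move after e4"))        # still an answer, right or not
try:
    chess_engine("capital of Ontario")             # the engine simply breaks
except ValueError as err:
    print("hard error:", err)
```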

AI’s ability to fail gracefully may seem like an improvement until you realize that hard errors are an intended feature of modern programming languages. Circuit boards are complicated things, but they still operate on the basic laws of electricity: if you put a signal in on one side, you will always get something out the other. If we remove hard errors from our code, it will continue to run no matter what inputs it receives; however, if those inputs produce a situation that the program cannot handle, its output becomes random and possibly destructive. It’s like how early forms of cryptocurrency sold themselves as an improvement over conventional monetary systems because they removed the need for a central authority and transactions couldn’t be reversed, forgetting that appealing to a central authority to reverse a transaction is something we occasionally need to do in cases of theft and fraud. These things are features, not bugs.

So it may seem like these systems are adaptable in the same way a human is, but they can only adapt within some context of rigidity, and that context is poorly understood. It’s a new technology; there are more things we don’t know about it than things we do. Sadly, it’s common in the tech world to mistake unknown boundaries for no boundaries at all17. We see a cool new technology and immediately start imagining all the things it can do, which is not a problem in and of itself. But if we put those ideas into production too quickly, we risk finding those limitations in the worst way possible: by forcing our entire society, the largest possible audience, to experience the consequences as they are discovered.

Perhaps I’m wrong about all of this, though; maybe the current AI revolution will change the world, just like the internet revolution changed the world. Maybe in fifty years we will somehow have solved all the issues I have brought up, and our new AI overlords will increase productivity18 and put the entire economy into overdrive. But even if all of that were true, I would still say that investing in AI expecting to make a profit is stupid. The internet was genuinely a foundational technology that changed everything, yet that didn’t stop most of the companies that jumped on the dot-com bandwagon from going bankrupt, and then taking the economy with them. It takes time for new paradigms to settle and for businesses to learn how to use a technology sustainably. Early adopters are gambling that their company will be the one to do this, but even they aren’t gambling on the profitability of that company. Google, arguably the most successful dot-com company, only issued its first dividend in 2024 – over twenty years after it was founded, an eternity in investor time19. No investor is willing to wait that long. Instead, they make their money by selling their shares to someone else. So long as they can convince that person that the “potential” to make money is there, they make money20. The reality of the technology is unimportant.

It may seem like these systems are adaptable in the same way a human is, but they can only adapt within some context of rigidity, and that context is poorly understood.

This is the disappointing part of all of this to me. Nearly all the tech money earned over the years comes from selling shares in an idea to someone else. Companies live and die not on the success or failure of their business, but on this nebulous quantity called “investor confidence.” This creates a culture where someone’s job and livelihood rest not on their ability to make an idea work, but instead on how well their pitch fits into whatever hot new tech fad is sweeping the industry21. I am interested in technology, but technology is less important than the hype around technology. You can’t work in AI, or even adjacent to it, if you aren’t willing to sell it as some world-altering force for good. Because if you aren’t contributing to the hype, then you are not working in the best interest of the industry, or at least in the best interest of the person paying your salary.

So yeah, when I said AI is a meaningless term, I lied. It just has nothing to do with technology. AI is the willful misunderstanding of an object in order to profit from it22. And yes, the technology of AI might never be profitable, but that doesn’t mean the finances of AI aren’t. I just have no interest in playing that game.

And yeah, maybe one day someone will come up with a cool usage of AI that expands human ability instead of replacing it, while also being cheap enough to justify its own existence. I’m all for that; that stuff is genuinely exciting, but I will discuss it when it happens. Which isn’t today.

  1. About when I entered the field professionally. 

  2. Google CEO Sundar Pichai defined AI agents as, “intelligent systems that show reasoning, planning, and memory. They are able to “think” multiple steps ahead, and work across software and systems, all to get something done on your behalf, and most importantly, under your supervision.” Which is just a fancy way of saying a “computer program.” 

  3. That is what the ‘zero’ in AlphaZero means. 

  4. Expert systems are sometimes referred to as “rule-based systems.” An expert designs a set of rules that can be programmed into a computer. The computer then operates intelligently according to those rules. In this case, the rules are the rules of chess. 

  5. If you have ever been confused as to why programmers have such a hard time explaining what is easy and what is hard to do with a computer, it’s because we don’t know. There is currently a one-million-dollar bounty available to whoever proves (or disproves) that solving a sudoku is fundamentally harder than confirming that a given solution is correct. That’s how little we know about this stuff. 

  6. LLMs operate on fixed vocabularies. If a word does not appear in a model’s training data, it cannot be selected as part of a reply. 

  7. And the relationship between data and computation power does not look promising. 

  8. At some point, I want to write a post about chaos theory, as my usage of the word “chaotic” here is very specific. However, absent that, your intuition on the term is good enough for now. 

  9. “Computers are very good at reflection, and that is perhaps the scariest thing about them. When we give it a part of ourselves that is exactly what it will spit back. Spend enough time staring at one, and eventually the only thing looking back at you will be yourself.” Ryan Chartier link 

  10. No passengers. 

  11. Whenever someone claims that “innovation” or “productivity” are good for our economy, remember that this is only true if the problems they are working on are not adversarial. Otherwise, they are just really good ways to burn through resources faster without long-term benefit. 

  12. Or more specifically, the social media company formerly known as Twitter. 

  13. The fact that there is some talk about automating marketing and promotional material is actually hilarious. If you believe that your company’s public perception isn’t a differentiator, then you are a spam company. 

  14. Propaganda is another very powerful use case, but that is a whole article all to itself. 

  15. Seeing as how sweatshops still exist, this is an incredibly small number. 

  16. AKA expert systems. 

  17. I also want to write a blog post about this concept specifically. For now, I leave this comment as a placeholder for what will eventually be a link to that post. 

  18. This also assumes that infinite productivity is a good thing. See my previous post for a detailed writeup on why optimizing around single measurable values is a terrible idea. 

  19. Facebook issued its first dividend around the same time. Amazon has never issued dividends. 

  20. Once the general population has run out of money, the only way to get more of it is by scamming other investors. 

  21. The games industry is an obvious example of this. Tango Gameworks, the developer of the extremely successful “Hi-Fi Rush,” was shut down due to Microsoft’s “reprioritization of titles and resources” link. Meaning that creating a successful game isn’t enough to make a studio a priority for a company that makes games. 

  22. Which might be a good definition for ‘intelligence’ as a whole, but that is another essay.