Introduction
Disclaimer: This is not an anti-AI post. It's also not an indictment of Shopify; the policy itself seems like sensible due diligence. However, the pattern of forcing a technology upon people is icky. IT needs to learn what informed consent is.
https://www.forbes.com/sites/douglaslaney/2025/04/09/selling-ai-strategy-to-employees-shopify-ceos-manifesto/
The story was popular some time ago, but it's becoming increasingly apparent that middle and upper management are enthusiastic about AI products and have a de-facto policy of mandating them. As someone on the receiving end of AI stupidity, the prospects look increasingly grim. There are genuine cases where AI can flag interesting data that would take a human hours to find unassisted. Considering the current generation of LLM products descends from the same transformer architecture as BERT, an innovation from a search company, it's not surprising that LLMs are good at... search, and copy-and-paste. I can't read a thousand pages a second, but a bunch of servers sure can perform a lookup like that. But step outside this safe area and things get complicated.
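To make that "safe area" concrete, here is a minimal sketch of the kind of needle-in-a-haystack lookup a machine does in milliseconds and a human does in hours. It's my own toy example, not from any particular product; the directory name and the pattern are invented for illustration.

import re
from pathlib import Path

def flag_interesting(root: str, pattern: str):
    """Yield (file, line number, line) for every line matching the pattern."""
    needle = re.compile(pattern)
    for path in Path(root).rglob("*.txt"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if needle.search(line):
                yield path, lineno, line.strip()

if __name__ == "__main__":
    # Hypothetical corpus and pattern -- the point is the speed of the lookup, not the query.
    for hit in flag_interesting("./reports", r"unauthori[sz]ed|timeout"):
        print(*hit, sep=": ")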
Mandating that AI has to be used pretty much guarantees that 1) confidential data will end up on someone else's server, and 2) the people stuck dealing with AI's shortcomings will be keen to point out its flaws, not its triumphs. But then again, management's poor understanding of foot-soldier conditions is the premise of about half of sci-fi (Solaris, Alien, System Shock).
Forcing people to use AI can only lead to backlash against AI. Which might be the intention of some of these policies.
The false equivalence fallacy strikes again
Physicists are usually the most enthusiastic about their models of the world, since those models kinda work. But spherical chickens traveling through a vacuum aren't all that great for figuring out a firing solution for a battleship. The enemy will always try to feed you nonsense data so that GIGO (garbage in, garbage out) kicks in. Not accounting for things like the curvature of the Earth when making ballistic calculations is also a problem.
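To put a rough number on the curvature point, a back-of-the-envelope sketch (my arithmetic, not a real fire-control model): over a distance d the surface drops away by roughly d squared over 2R, which is already tens of metres at naval gun ranges.

EARTH_RADIUS_M = 6_371_000  # mean radius of the Earth in metres

def curvature_drop(distance_m: float) -> float:
    """Approximate height error (metres) from pretending the Earth is flat over a given range."""
    return distance_m ** 2 / (2 * EARTH_RADIUS_M)

for km in (5, 15, 30):
    print(f"{km:>3} km range -> ~{curvature_drop(km * 1000):.0f} m of drop ignored")
# ~2 m at 5 km, ~18 m at 15 km, ~71 m at 30 km -- enough to miss a battleship.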
There are already notable cases online of "vibe coding" where the resulting application got hacked and the owner's AWS account was hijacked to mine crypto. So no, the stuff on Stack Overflow is not good on its own. Copy-pasting is a big source of errors in code, and configuration errors have earned a spot on the OWASP Top 10. Having a coding assistant import the wrong dependency for a blockchain project could be pretty disastrous, and fixed-point math is not exactly common knowledge among programmers. It increasingly seems to me that most AI systems are only useful if you already know what you're doing and have an established software development process that catches human stupidity. Only now the same process will have to catch human+AI stupidity. In other words, code reviewers and other gatekeepers will see their workload increase.
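As a concrete example of the fixed-point pitfall (a generic sketch, not tied to any particular chain, library, or assistant): binary floating point cannot represent most decimal fractions, which is exactly the kind of subtly wrong code that gets pasted in without a second look. Amounts should live as integers in the smallest unit (cents, wei) or in a decimal type.

from decimal import Decimal

price, qty = 0.1, 3
print(price * qty == 0.3)            # False: floats give 0.30000000000000004

price_cents = 10                     # fixed point: integers in the smallest unit
print(price_cents * qty)             # 30 -- exact

print(Decimal("0.1") * 3 == Decimal("0.3"))   # True: decimal arithmetic when fractions are unavoidable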
Forcing people to use AI only makes overloaded teams more cranky
It's one thing for a giant company to mandate AI use to try and make a 1000+ employee workforce more productive. It's quite another for smaller companies with a handful of people to expect silver-bullet results. If your workforce is already firing on all cylinders, AI won't magically make things better.
Policies mandating the use of AI have questionable consequences - siphoning data from companies
In other words, such policies allow large companies to use (that is, steal) the data of smaller companies to improve their LLM products. And some of those companies have already been caught copying products sold on their own marketplaces.
The only reason Western AI is functional is forced arbitration clauses
Here in Europe, that kind of shit doesn't really fly with any competent court. The abandonment of deterministic and controllable behavior opens up a lot of legal liability.
In other words, not only is most AI based on stolen data (questionable fair use at best), its application is a legal bonanza for irresponsible behavior, because AI regulation is pretty lax.
Fraud will only hurt long-term adoption of AI products and undermine trust even in reliable technologies.
Systems that use AI should be designed from the ground-up to use AI
If the system isn't designed from day 1 to deal with the non-determinism of LLMs, there are going to be problems. For example, let's say you want to replace a sysadmin with a bot that does maintenance. The ideal case would be for the bot to ask for permission before it restarts a server, or any service for that matter. This isn't some groundbreaking idea, but it does show that even an LLM should follow best practices for change management. So yes, the LLM should be able to generate a report on what it wants to do on the server and which steps are likely to cause problems, and a bunch of due diligence still needs to happen before anything is executed.
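A minimal sketch of that permission gate, assuming the bot first emits a list of planned actions (the function names and the risk labels here are invented for illustration; propose_actions stands in for whatever report the LLM produces):

from dataclasses import dataclass

@dataclass
class PlannedAction:
    description: str   # e.g. "restart nginx on web-01"
    risky: bool        # anything service-affecting needs human sign-off

def propose_actions() -> list[PlannedAction]:
    """Stand-in for the LLM's change proposal (the 'report' described above)."""
    return [
        PlannedAction("rotate /var/log/app.log", risky=False),
        PlannedAction("restart nginx on web-01", risky=True),
    ]

def execute(action: PlannedAction) -> None:
    print(f"executing: {action.description}")

def run_with_approval(actions: list[PlannedAction]) -> None:
    """Execute safe steps, but stop and ask a human before anything risky."""
    for action in actions:
        if action.risky:
            answer = input(f"APPROVE? {action.description} [y/N] ")
            if answer.strip().lower() != "y":
                print(f"skipped: {action.description}")
                continue
        execute(action)

if __name__ == "__main__":
    run_with_approval(propose_actions())

Nothing here is specific to LLMs; it's plain change management with a human in the approval loop, which is rather the point.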
Again, if you know what you are doing, it's obvious that AI isn't saving you much time. Even the UNIX graybeards noted that most time spent programming isn't spent writing code. It's all the other stuff that takes up time:
Well over half of the time you spend working on a project (on the order of 70 percent) is spent thinking, and no tool, no matter how advanced, can think for you. Consequently, even if a tool did everything except the thinking for you – if it wrote 100 percent of the code, wrote 100 percent of the documentation, did 100 percent of the testing, burned the CD-ROMs, put them in boxes, and mailed them to your customers – the best you could hope for would be a 30 percent improvement in productivity. In order to do better than that, you have to change the way you think.
Source: http://quotes.cat-v.org/programming/
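That 70/30 split is essentially Amdahl's law applied to developer time. A quick sanity check of the bound using the quote's own numbers (the framing as a speedup ceiling is mine): whether you call the result a 30 percent time saving or a roughly 1.4x throughput ceiling, the point is the same, the 70 percent spent thinking caps the win.

def max_speedup(automatable_fraction: float) -> float:
    """Amdahl-style bound: if only this fraction of the work can be automated, speedup tops out here."""
    return 1.0 / (1.0 - automatable_fraction)

p = 0.30  # the quote's ~30% of project time that isn't thinking
print(f"time saved at best: {p:.0%}")
print(f"throughput ceiling: {max_speedup(p):.2f}x")  # ~1.43x, and only if the tool is perfect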
If your AWS bills could fund a small startup, this post is not for you. The fact that systems using Raft cost about 33% more than active-passive setups is probably meaningless to you anyway. It's quite paradoxical, considering Moore's Law and Dennard scaling, that infrastructure seems to get more expensive every year.
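For readers who do care about that 33%, here is one way the arithmetic could work out (my reading; the post doesn't show its maths): Raft needs 2f+1 voting members to survive f failures, so the smallest fault-tolerant cluster is three nodes against active-passive's two, and one node in three exists purely for quorum.

def raft_cluster_size(failures_tolerated: int) -> int:
    """Raft (like most quorum protocols) needs 2f + 1 voters to tolerate f failures."""
    return 2 * failures_tolerated + 1

raft = raft_cluster_size(1)       # 3 nodes to survive one failure
active_passive = 2                # primary + standby
extra = raft - active_passive
print(f"extra nodes: {extra} of {raft} ({extra / raft:.0%} of the Raft cluster)")  # 33%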
Need to get back to writing change management proposals...