If you need to use dark UX patterns to try to prevent users from turning off or avoiding your AI, that should tell you something... #ai #enshittification #tech #ux #pdm
#enshittification at play...
I saw an image today that I was 80% sure was AI-generated, so I did what I usually do: Google Image Search to find the original.
It identified what was clearly a *different photo* as a 100% match, and then gave me *that* photo's source. That other photo was real... and had clearly been used to train the AI version I was trying to trace. They were identical compositions of similar subject matter.
But identifying the wrong photo as an exact match was a new one for me.
Google hooked UCSD with a great deal for Workspace and gave faculty plenty of space for backups and the like. Now they want over $1440 per 10TB to let us keep our files in the "free" space. The U is just passing the confiscatory pricing on to individual faculty after helping Google hook us. What new enshittification hell is this! It was never fast, but I could run backups into this cloud at around 2-5 Mbps. Guess how fast I can exfiltrate my ones and zeros to avoid the charges? 350 Kbps. K! Not M! #enshittification
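To put those speeds in perspective, here's a rough back-of-envelope sketch of how long moving 10TB would take at the quoted rates. It assumes decimal terabytes (10^12 bytes) and a perfectly sustained connection, so real-world times would be worse:

```python
# Back-of-envelope transfer times for the speeds quoted above.
# Assumes decimal TB (10**12 bytes) and a sustained connection.

def transfer_days(terabytes: float, mbps: float) -> float:
    """Days to move `terabytes` TB at `mbps` megabits per second."""
    bits = terabytes * 1e12 * 8       # total bits to move
    seconds = bits / (mbps * 1e6)     # sustained transfer time in seconds
    return seconds / 86400            # convert to days

upload_days = transfer_days(10, 2)        # backing up at the low end, 2 Mbps
download_days = transfer_days(10, 0.35)   # exfiltrating at 350 Kbps

print(f"Upload at 2 Mbps:     {upload_days:.0f} days")
print(f"Download at 350 Kbps: {download_days:.0f} days (~{download_days / 365:.1f} years)")
```

At 2 Mbps the backup already took over a year of sustained transfer; at 350 Kbps, getting the same data back out would take roughly seven years. The asymmetry is the point.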
I have been thinking a bit about the old #enshittification thing and #FLOSS, obviously because of @pluralistic, so thanks to him.
I am a tinkerer by inclination but have no real tech capacities, so I am a real outsider to all this. Still, it seems to me that libre software and the movement around it, in conjunction with publicly spirited university departments, plus or minus the military, was sufficient to build the software infrastructure on which all our shiny IT depends. The same forces could, and indeed do, provide perfectly usable applications that let people take advantage of readily available hardware to communicate with each other and find new and better ways to do everyday tasks. All very good.
What can publicly oriented development not do? Make hardware cheap enough and ubiquitous enough that the majority of the world's population (sticking my neck out here) has access to some sort of IT. It also can't, I think, build products that encourage or force mass adoption.
It seems to me that those two things, shiny, cheap tech and shiny, user friendly apps, go together. One encourages the other and both require the profit motive.
I suspect that one reason tech firms got so into rent seeking is that it is very hard to get people to stop talking and start buying. Online stores reduce staff and rent costs. Streaming services proved that people would pay for the convenience of watching on demand over the hassle of filesharing, if things were cheap enough. Both were obviously good capitalist endeavours, but both still required mass adoption of tech, not something I think they could have incentivised by themselves (and both are now fully enshittified).
So where does that leave us? Do we need to accept that tech will destroy us? No, of course not. I think that we need to realise that we don't really need it.
#AI, or specifically #LLM, is a case in point. What is the value in being able to generate text so cheaply? It allows companies to write copy cheaply to sell stuff that we neither need nor want. They might become better artists than any human, but just as no one cares about AlphaZero beating Stockfish, we won't care about this hypothetical robot. It does nothing of use to humans.
If we stamp out rent seeking and venture capitalists can't make money out of tech, we won't be any worse off. In fact, I reckon we'd be happier. We would be using the tech for things that we need, like sharing info, sharing art and sharing ideas over distance. That seems like liberation to me.