Young people have grown increasingly skeptical of artificial intelligence, even those who use it daily, according to a new Gallup poll of more than 1,500 people aged 14 to 29.
AI use among Gen Zers has not declined, but neither has it increased since the same poll was conducted in 2025. The latest poll found AI use plateauing among young users, accompanied by rising concern about the technology’s consequences.
The findings are significant because Gen Z is “the generation most likely to enter or grow within the workforce over the next decade,” the report notes, meaning that their adoption could determine the trajectory of broader societal AI adoption. Gen Z has already overtaken Boomers in the workforce. Right now, the AI industry is preparing for a massive jump in expected demand, and the top tech and financial companies are investing billions upon billions of dollars to build out the supply. Experts have warned that if demand does not pan out as expected in the short term, it could have disastrous consequences for the economy.
I am once again asking you to petition your local school boards to block generative AI usage by students and teachers alike. AI is being pushed on these kids at a young age, and I worry that, given the downward-trending attention spans of Gen Alpha and Gen Z, it will harden into a lifelong dependency. Plus, losing school-system licenses would put a significant dent in the AI companies’ metrics.
Here are some demands: block ChatGPT/Gemini/Copilot/DeepSeek on school networks (like porn and gaming sites are blocked); prohibit use of AI-generated images and text on assignments and teaching materials; ensure no assignments will require or recommend the use of AI output at any point.
These suggestions are based on reports I’ve heard from students. Please feel free to comment your own recommendations or information.
Just fucking pop this stupid bubble already. I want to be able to buy cheap ram and hardware again.
Once you use AI enough, you start to peer behind the curtain and see that it’s all just a magic trick, not actually magic like it seems at first. So yeah, I think it’s unsurprising people would come to this conclusion.
It’s quite useful and fun to play with. Certainly not reaching AGI from scale alone…
I think we’re reaching the top of the S-curve with regards to LLMs specifically. They’re a neat gimmick and likely will have an important role to play, but I don’t think they’re going to meet the (completely artificial and grifty) hype Altman et al have been slinging.
Which was pretty easily predicted, but no one knew for sure. Such financial-class leeches always feed on unfulfilled hopes, and sell dreams of the future.
They’re good enough to act as natural-language translators, which is an absolute revolution for computers, so they’re useful for automating some tasks that are too fuzzy or vague for basic programs.
This is what I’ve found among a lot of professionals when asked about AI. Every task it does could be easily done by anyone with enough domain knowledge and moderate scripting ability. It just cuts out the need to learn a CLI and a scripting language, in exchange for worse scalability and efficiency, while covering more domains than any one person (though none very deeply).
E.g., it is often used as a poor man’s awk or perl for analyzing emails. But for a lot of people, being able to scan 10,000 documents, find all references to a soft regex, and tabulate them is something they genuinely couldn’t do on their own before “AI”. Never mind that if you hand that problem to any sysadmin worth their salt, they probably already have an alias for it. Not surprising when you realize the average person thinks Penelope Garcia is an accurate depiction of how such tasks are done, and believes such abilities are far beyond their own capabilities.
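For what it’s worth, the “scan a pile of documents, match a regex, tabulate the hits” task the commenter describes really is a few lines of scripting. A minimal sketch (the function name, directory layout, and pattern here are made up for illustration):

```python
import re
from collections import Counter
from pathlib import Path

def tabulate_matches(root: str, pattern: str, glob: str = "*.txt") -> Counter:
    """Count occurrences of a regex across all matching files under root."""
    rx = re.compile(pattern, re.IGNORECASE)
    counts = Counter()
    for path in Path(root).rglob(glob):
        text = path.read_text(errors="ignore")
        # Lowercase matches so "Invoice" and "invoice" tally together
        counts.update(m.lower() for m in rx.findall(text))
    return counts

# Example: tabulate_matches("mail_archive", r"invoice\s*#?\d+")
```

The sysadmin’s alias version is roughly `grep -rioE 'pattern' dir | sort | uniq -c`.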
It doesn’t matter, because companies are mandating that their workforce use it whether they like it or not. For personal use, yes, interest may be waning, but wanting to use AI is not a factor when you are being forced to use it at work.
Enterprise is really the only option companies like Anthropic and OpenAI have left. They’re drastically underpricing the service relative to what it costs to provide, and consumers have shown that price hikes don’t go over well.
But enterprises? Well, you just made AI a core part of your software development workflow. What are you supposed to do, start manually reviewing Bitbucket merge requests? Rewrite your Jenkins pipelines? No: when the price hikes come, businesses will pay, and then downsize to reduce that opex.
I’m thinking that eventually the corporations will all be AI all the way through and control even more than they do now.
Small businesses and normal people will still do a lot of the work by hand, because they will be priced out.
But I also suspect there might be a good market for hand-crafted code, similar to other hand-crafted goods. Some people will prefer it.
If the entire software industry mandated AI, I would either create some kind of startup, or just reeducate myself into some other field, like landscaping or something. Because fuck AI.
In my personal experience substitute teaching, plenty of students seem fine with it.
It’s useful. A little too useful. Enough that it starts to break the systems we have set up for our society, especially capitalism. Now we either need to adapt those systems or shun the technology if people are to stay hopeful; otherwise the technology ends up at odds with its users.
It’s useful for CERTAIN tasks. AI is significantly over-applied as a “miracle product”.