In 2025, AI brought us new models that were far more capable of research, coding, video and image generation and more. AI models could now use heavy amounts of compute power to "think," which helped deliver more complex answers with greater accuracy. AI also got some agentic legs, meaning it could go out onto the web and do tasks for you, like plan a trip or order a pizza.
Despite these advancements, we're still far off from artificial general intelligence, or AGI. This is a theoretical future when AI becomes so good that it's indistinguishable from (or better than) human intelligence. Right now, an AI system works in a vacuum and doesn't truly understand the world around us. It can mimic intelligence and string words together to make it sound like it understands. But it doesn't. Using AI every day has shown me that we still have a ways to go before we reach AGI.
Read more: CNET Is Choosing the Best of CES 2026 Awards
As the AI industry reaches monstrous valuations, companies are moving quickly to meet Wall Street demands. Google, OpenAI, Anthropic and others are throwing trillions at training and infrastructure costs to usher in the next technological revolution. While the spending may seem absurd, if AI does truly upend how humanity works, then the rewards could be enormous. At the same time, as revolutionary as AI is, it constantly messes up and gets things wrong. It's also flooding the internet with slop content, such as amusing short-form videos that may be profitable but are seldom valuable.
Humanity, which will be the beneficiary or victim of AI, deserves better. If our survival is truly at stake, then at the very least, AI could be substantively more helpful, rather than just a rote writer of college essays and a generator of nude images. Here are all the things that I, as an AI reporter, want to see from the industry in 2026.
It's the environment
My biggest, most immediate concern around AI is the impact large data centers will have on the environment. Before the AI revolution, the planet was already facing an existential threat because of our reliance on fossil fuels. Major tech companies stepped up with initiatives saying they'd aim to reach net-zero emissions by a certain date. Then ChatGPT hit the scene.
With the massive energy demand of AI, along with Wall Street's insatiable desire for profitability, data centers are turning back to fossil fuels like methane gas to keep the GPUs humming, the hardware that performs the complex calculations to string words and pixels together.
There's something incredibly dystopian about the end of the planet coming at the hands of ludicrous AI-generated videos of kittens bulking up at the gym.
Whenever I get a chance, I ask companies like Google, OpenAI and Nvidia what they're doing to ensure AI data centers don't pollute the water or air. They say they're still committed to reaching emissions targets but seldom give specific details. I suspect they're not quite sure what the plan is yet, either. Maybe AI will give them the answer?
At the very least, I'm glad that the US is reconsidering nuclear power. It's an efficient and largely pollution-free energy source. It's just a bit sad that it's market demands that'll bring back nuclear, and not politicians fighting to protect the planet. At least the US can take inspiration from Europe, where nuclear power is more common. It's just frustrating that it takes five or more years to build a new plant.
I want my phone to be smarter
For the past three years, smartphone makers such as Apple, Samsung and Google have been touting new AI features in their handsets. Typically, these presentations show how AI can help edit photos or clean up texts. Even so, consumers have been underwhelmed by AI in smartphones. I don't blame them. People turn to smartphones for quality snaps, communication or social media. These AI features feel more like extras than must-haves.
Here's the thing: AI has the capability to fix many pain points in smartphone usage. The technology is way better at things like voice transcription, translation and answering questions than previous "smart" features. The problem is that for AI to do these things well, it requires a lot of computing. And when someone is trying to use speech-to-text, they don't have time to wait for their audio to be uploaded to Google's cloud so it can be transcribed and beamed back to their phone. Even if the process takes 10 seconds, that's still too long in the middle of a back-and-forth text chain.
Local AI models are available to run on-device for these kinds of quick tasks. The problem is that the models still aren't capable of getting it right all the time. As a result, things can feel haphazard, with quality transcriptions working only some of the time. I'm hoping that in 2026, local AI on phones can get to a point where it just works.
I'd also like to see local AI models on phones become more agentic. Google has a feature on Pixel phones called Magic Cue. It can automatically pull from your email and text data and intuitively add Maps directions to a coffee date. Or if you're texting about a flight, it can automatically pull up the flight info. This sort of seamless integration is what I want from AI on mobile, not reimagining photos in cartoon form.
Magic Cue is still in its early stages, and it doesn't work all the time or as you'd expect. If Google, OpenAI or other companies can figure this out, that's when I feel consumers will really start to appreciate AI on phones.
Is this AI?
Scrolling through Instagram Reels or TikTok, whenever I see something truly captivating, funny or out of the ordinary, I immediately rush to the comments to see if it's AI.
AI video models have become increasingly convincing. Gone are the wonky movements, 12 fingers and perfectly centered shots with uncanny perfection. AI videos on social media now mimic security camera footage and handheld videos, and added filters can obscure the AI-ness of a video.
I'm tired of the guessing game. I want both Meta and TikTok to straight-up declare whether an uploaded video was made with AI. Meta actually does have systems in place to try to determine whether something uploaded was made with generative AI, but it's inconsistent. TikTok is also working on AI detection. I'm not entirely sure how the platforms can do this accurately, but it'd certainly make life on social media far less of a puzzle.
Sora and Google do have watermarks for AI-generated videos. But these are getting easier to evade, and many people are using Chinese AI models, such as Wan, to generate videos. While Wan does add a watermark, people can find ways to download these videos without it. It shouldn't be incumbent upon a few people in the comments section to determine whether or not a video is AI. (There are even subreddits that survey users trying to discern if a video is AI.)
We need clarity.
I'm tired of the constant guesswork. C'mon, Meta and TikTok: what's the point of all the billions in AI investment? Just tell me if a video on your platform is AI.
