It looks like hardware improvement is shifting to AI capabilities. If so, in 7 years the Pixel 8 will hardly be interesting to anyone in the target audience.
Who actually gives a crap about AI in real life?
Seriously, what possible actual, real-life use case does the average user (even a Pixel user) have? Image processing, maybe, but that’s nothing groundbreaking.
Every single demo of anything AI related I’ve seen is nothing more than a nice demo. Impressive, but still just a demo.
Think about it: what are you actually doing with your phone that’s so much different from what you did 5 or 10 years ago? Maybe I’m a weirdo, but I use literally 80% of the very same apps. Do these need AI on my phone? Not really.
I am no fan of AI-evangelists promising us the moon in the same vein as crypto-bros, but this is a truly bizarre take. I can tell you, and yes I am just one person, that over the last 3-4 years AI tools have fundamentally altered my work. I work in post-production for video/audio. Commercial, social media, documentary, narrative, you name it. There isn’t a stage of my post pipeline now that doesn’t integrate AI tools at some point. Adobe Audio Enhance fixes problematic (usually Zoom) audio with a click of a button in a way that used to take me days and countless hours. AI tools are making keying off of even the most mediocre green screen capture trivial (DaVinci Resolve has some wild integration on that front). AI tools are generating 90%+ accurate transcripts in a matter of minutes for me now. The list goes on.
Yeah, and how much do you want that on your phone? And how many people do you think want/need that on their phones?
I can tell you, I’m a software developer and there’s currently maybe one product in my workflow that involves AI, and I’m not even sure about that one. Sure, that might change in the future, but not on my phone. Why would it?
AI is not magic. It has its uses, but the current iterations offer nothing even remotely relevant for the average user to have on their phone.
This is a completely different question than the one I responded to.
Not sure why you threw that out there, I never said it was. I am very familiar with what it is and am not mystified by it lol
“But the current iteration” is the operative statement here, and I have a feeling you have an overinflated sense of how dominant your mentality is.
This is just a lack of imagination, to be blunt. Just because you haven’t teased out uses doesn’t mean there are none.
As for your having little need of AI, the developers I work along with regularly use ChatGPT-like systems. Maybe you should consider how it could lessen your workload. Just my 2 cents.
What do you think a “Pixel 8” is exactly? A phone maybe?
It’s not “mentality”, it’s actually roughly knowing the field and not just throwing words around. LLMs, for example, are the current iteration, not ChatGPT version X.Y. And LLMs have already kind of hit a wall; there’s not much progress expected in the near term. Stable Diffusion and others are also rather stable and won’t turn out massive improvements any time soon. We’re at the rapidly diminishing returns phase here.
But now the actual kicker: give me a single “killer feature” for AI that normal people would actually run on their phone and be willing to spend money on. Siri is nice and all, but it’s already running locally; there’s no need for a new phone. Photo editing, yeah, nice, but do you buy a phone because of it?
Again, I’m not saying that AI is “bad”, but I see no reason for the hype in the mobile space, especially in the “you need new hardware for that” sense.
People often pick their phones largely on the camera quality, so I have no reason to doubt many would consider this in their purchase decision, yes.
They might consider it, but they’re not buying a new phone because of it.
Do you think someone would pay an extra €50 just for slightly better editing capabilities, all else being equal?
People pay hundreds more for better/more cameras, so yes. I also imagine the AI integration would not be so singular as just photo editing.
Photo/video capture is one of the single largest uses of smart phones. People spend a LOT of time editing their images too.
And it won’t need to exist locally on the phone anyway. Higher bandwidth cell and wifi signals mean more and more exotic AI processing can be offloaded onto cloud resources.
It’s great when you have an app that works well when not connected to a network, of course. But most phone buyers don’t really care.
One of the biggest problems with AI that I find right now is that it will outright lie to you. I’ve been getting more and more in depth with it, and the more I do, the more bullshit I’m finding. Early days though.
I don’t think you’re looking at it the right way.
AI is already in a lot of people’s lives, and they don’t even notice that’s what it is.
From your keyboard’s word correction and prediction, to power management, to smart image editing and categorizing, and more.
And some of these things are done locally on your phone.
The increase in AI capability on phones can allow more things to be done locally, and maybe even get something like a local LLM to predict what you want to type. (LLM = large language model, like ChatGPT, Bard, Llama and others; they can be used for more than just answering your questions.)
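To make the "predict what you want to type" idea concrete, here is a deliberately toy sketch: a bigram counter rather than an actual LLM, and nothing like what any real keyboard ships, but it shows the same core task of predicting the next word from context — the kind of workload small enough to run entirely on-device.

```python
from collections import Counter, defaultdict

# Toy next-word predictor. A real on-device model (or an LLM) is far
# more sophisticated, but the task is the same: given recent context,
# suggest the most likely next token.
class BigramPredictor:
    def __init__(self):
        # For each word, count which words have followed it.
        self.counts = defaultdict(Counter)

    def train(self, text):
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, word):
        """Return the most frequently observed next word, or None."""
        followers = self.counts.get(word.lower())
        if not followers:
            return None
        return followers.most_common(1)[0][0]

predictor = BigramPredictor()
predictor.train("on my way home. on my way to work. on my own")
print(predictor.predict("my"))  # "way" (seen twice, vs "own" once)
```

A real keyboard model adds much longer context, per-user adaptation, and heavy compression to fit in memory, but the inference step stays cheap — which is exactly why it can live on the phone.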
The truth is there is little in terms of a use case that directly benefits the user.
Look at the MKBHD review of the new iPhone: he summarizes it pretty accurately when talking about all the super fancy backend bullshit going into the photo software, which results in “slightly better pictures”, and he wasn’t even 100% sure about that part.
Apple’s Neural Engine delivered marginal value to the phone’s actual user, and meanwhile the company harnessed that power mostly to do client-side scanning. They claimed to have ceased that effort, but once again, it’s black-box proprietary software, so it isn’t transparent to the user.
My contention is that at this point the big tech companies are developing features to benefit their business model, not deliver features to the users. The marketing and surveillance state grows, because that is the real business that these companies are in. Most of the AI gains we hear about benefit them directly but not us.
Actually, I think the real reasons are far less “evil” than you might think: It’s marketing. People fall for that.
Phones don’t really improve anymore, and haven’t for quite some time, but you still have to sell new ones. So you add bullshit features or advertise pseudo-improvements. I mean, Apple is currently marketing that their side button is now programmable! Wow!
Phones are basically game consoles nowadays, and AI is being used for render upscaling there.
People said the same things about every new technology: computers, the internet, smartphones, etc. I use my smartphone very differently compared to 10 years ago. And apps which existed 10 years ago are very different now and require different hardware. CPU progress is slowing, but AI-focused hardware could be the next wave.
Most of the normal apps on the phone are using AI on the edges.
Image processing has come a long way using algorithms trained through those AI techniques. Not just the postprocessing of pictures already taken, like unblurring faces, removing unwanted background people, choosing a better frame of a moving picture, white balance/color profile or noise reduction, but also in the initial capture of the image: setting the physical focus/exposure on recognizable subjects, using software-based image stabilization in longer exposed shots or in video, etc. Most of these functions are on-device AI using the AI-optimized hardware on the phones themselves.
On-device speech recognition, speech generation, image recognition, and music recognition has come a long way in the last 5 years, too. A lot of that came from training on models using big, robust servers, but once trained, executing the model on device only requires the AI/ML chip on the phone itself.
In other words, a lot of these apps were already doing these things before on-device AI chips started showing up in 2013 or so. But the on-device chips have made all these things much, much better, especially in the last 5 years when almost all phones started coming with dedicated hardware for these tasks.
I can barely get a phone to last three years, let alone seven. The way we use these devices now, there’s no way in hell one is going seven years without some sort of maintenance and upkeep. The battery won’t last that long, and by year six the thing will be chugging like a Commodore trying to run Android 19. I respect the promise, but I don’t trust Google given their track record. Very few people will limp these devices into year seven, and they know it.
What was the very first thing Android 14 marketed to me on install this afternoon? Google Podcasts…
Complete opposite here. Typing this on an iPhone 8, and I’ve never retired a phone sooner than 4 years. Usually I give up around 6 due to lack of updates becoming a problem.
A longer support cycle would definitely sway my purchase decision.
Edit: though I am the type to replace batteries, buttons and screens myself as necessary
While this is a reasonable take, the Tensor chips are supposedly focused on AI (which would make sense given Google’s push into the AI space for phone tools like spam filtering, photo/video editing, the assistant, etc.), and this refresh builds upon AI features they rolled out to previous-gen phones. I doubt any of it is so compute-intensive that whatever AI they’ve created in a few years won’t also run on the older phone; it just might not be as snappy.
I have a different impression about the plans for backporting new AI features, but we will see. My point is that AI-targeted hardware can potentially drive the next wave of smartphone evolution, which has currently slowed down.
Training AI models takes a lot of development on the software side, and is computationally intense on the hardware side. Loading a shitload of data into the process, and letting the training algorithms dig down on how to value each of billions or even trillions of parameters is going to take a lot of storage space, memory, and actual computation through ASICs dedicated to that task.
Using pre-trained models, though, is a much less computationally intensive task. Once the parameters are defined on that huge training set, the model can be applied by software that just takes the parameters already learned in training and applies them to new inputs.
So I would expect the AI/ML chips in actual phones would continue to benefit from AI development, including models developed many chip generations later.
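That training/inference asymmetry can be sketched with a deliberately tiny example (a one-parameter model, nothing like a real network): learning even a single weight by gradient descent takes many passes over the data, while using the trained weight afterwards is a single multiplication — which is the part a phone's AI chip actually has to do.

```python
# Tiny illustration of why training is expensive but inference is cheap:
# learn y = w * x from samples of y = 2x by gradient descent.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs

w = 0.0    # the single trainable parameter
lr = 0.05  # learning rate

# Training: many iterations, each touching the data and updating w.
for _ in range(200):
    for x, y in data:
        grad = 2 * (w * x - y) * x  # gradient of squared error w.r.t. w
        w -= lr * grad

# Inference: applying the frozen parameter is one multiplication.
def predict(x):
    return w * x

print(round(w, 3))    # converges to ~2.0
print(predict(10.0))  # ~20.0
```

Scale w up to billions of parameters and the same shape holds: the expensive loop happens once on big servers, while the per-prediction arithmetic is what has to fit on the device.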
The thing is more complicated than that, though. Moreover, there is a wish/need to train or fine-tune models locally. This is not comparable to the initial training of ChatGPT-like models, but it still requires some power. Just today I read that some Pixel 8 video improvement features will not be ported to the Pixel 7 because they need Tensor G3 power.