The problem I have always had with voice control is that it just doesn’t really seem to fit into my home automation. I don’t want to give Home Assistant a verbal command to turn on the lights. I want it to detect that I’ve entered the room and set the lights to the appropriate scene automatically; I haven’t touched a light switch in weeks. For selecting an album or movie to play, it’s easier to use a menu on a screen than to try to explain it verbally.
Don’t get me wrong. I’m hugely in favor of anything that runs locally instead of using the “cloud.” I think that the majority of people running a home automation server want to tinker with it and streamline it to do things on its own. I want it to “read my mind.” The people who just want a basic solution probably aren’t going to set up HA.
Funny…I’m the exact opposite. I don’t want it to detect that I’ve entered the room and set the lights to the appropriate scene automatically. Unless it can detect when I don’t want to go into a dark room and be blinded by lights I didn’t want on, I want to control when it turns on. Unless it can determine that I’m only home from work for a few minutes to go to the bathroom, I don’t want it to adjust the heat settings. In other words, until it can actually read my mind, I want to be able to control it and tell it what I want when I actually want it.
I’m looking into an HA setup specifically to get away from Alexa and host everything locally. I may only want simple controls, but I want to truly control everything myself.
I loved being able to control the dimmer level or color of the lights using voice controls.
I set up a few IFTTT recipes to create lighting and music scenes for things like reading, conversation, movie watching, date night, party time, and a few others and triggered them with a voice command.
It was always a hit with whoever I brought over, but mostly it just did 4 or 5 things with one voice command.
You can have it set more intelligently than on/off.
For example, what I have (I’m excessive, btw, so this is just one option) is a light sensor that tells me how light it is outside; I then combine that information with sunrise/sunset times.
I use that to set the color of the lighting (circadian lighting style), the light level, and a ramp time to the max brightness I’d want. For rooms with good daylight coming in, if enough daylight is coming in, the lights lower their brightness (a daylight harvesting approach).
This isn’t in every room at the moment, as some of my lights are not RGBW LEDs. Those with regular white LEDs just dim.
Is it perfectly set for your eyes? No, but you can tweak it. My wife likes it brighter than me, so I set values that I could tolerate as a nice compromise.
No RGB? Then drop the circadian lighting, keep the rest.
No light sensors? There are some APIs out there for solar radiation values you can use (OpenWeatherMap, for example). Less accurate, but probably close enough for what you want.
TL;DR version: add more conditions, and get what you want.
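The circadian-lighting and daylight-harvesting logic described above could be sketched roughly like this. All function names, thresholds, and sensor values here are made up for illustration; in a real setup this would live in Home Assistant automations fed by the sun integration and a lux sensor.

```python
# Sketch of the circadian lighting + daylight harvesting idea above.
# The 60-degree and 10,000-lux reference points are assumptions, not
# values from the original comment.

def circadian_color_temp(sun_elevation_deg: float) -> int:
    """Map the sun's elevation to a lamp color temperature in kelvin:
    warm (2700 K) at or below the horizon, cool (5500 K) near midday."""
    frac = max(0.0, min(sun_elevation_deg / 60.0, 1.0))
    return int(2700 + frac * (5500 - 2700))

def target_brightness(outdoor_lux: float, max_brightness: int = 255) -> int:
    """Daylight harvesting: the brighter it is outside, the more the
    artificial lights dim, down to zero at full assumed daylight."""
    daylight_frac = max(0.0, min(outdoor_lux / 10_000.0, 1.0))
    return int(max_brightness * (1.0 - daylight_frac))

# Example: mid-morning sun at 30 degrees, moderately bright outside.
print(circadian_color_temp(30.0))  # halfway between warm and cool
print(target_brightness(5000.0))   # roughly half brightness
```

For bulbs without RGBW, as noted above, you would skip the color-temperature part and keep only the brightness calculation.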
You wake up one day with a bad headache, and bright light hurts your eyes. You can close the curtains, but every room is set to turn the lights on to the brightness that you usually prefer.
How do you manage something like this? Do you have to adjust everything with your phone and reset it when you feel better?
How does it know what scene you want? If you walk into a room and want to watch TV, you might want the lights to be dimmer than if you’re going to read a book, for example.
For selecting an album or movie to play, it’s easier to use a menu on a screen than to try to explain it verbally.
How? I can put on my best Captain Picard voice and say ‘Computer, play the album Insomniac by Green Day’ much faster and more easily than I could pick up the remote, turn on the media player, scroll to music, scroll to G, find Green Day, scroll to Insomniac, and press play.
I’ve got Amazon devices (bought before I knew how bad both they and Amazon are), and they’re not great. Even with them, I can walk into my living room at night with my hands full and tell them to turn my chosen lights on, set the brightness and colour, start playing my chosen music, or turn the TV on and start playing certain media, all while I’m walking to my seat.
The only media that I can’t play is what I haven’t set up to use with Alexa yet, but that would be the same for any automation.
When I get around to it, I’m going to add either Plex or Jellyfin to my voice control setup, and hopefully be able to play anything from my library in the same way :)
Even ignoring privacy arguments, I think that voice control is a great use case for running services locally - the lower latency from not having to upload your sample, and the option of having it learn your accent, are very attractive.
That said, voice control is irritatingly error-prone and seems to be slower than just reaching for the remote control. I agree that automatic stuff would be best, but you can’t have rules for everything.
Something that would be interesting is a more eye- and gesture-based system: I’m thinking something like you look at the camera and slice across your throat for stop or squeeze fingers together to reduce volume. This is definitely one to run locally, for privacy and performance reasons.
But where do you put the camera? If you’re sitting in front of the TV, then near the TV makes sense. What if you’re sitting facing a different direction with a book though? What if your hands are full?
A camera-based system would be much more limited, and probably wouldn’t work in the dark.
Maybe I’m missing a use case for voice control?
My main use cases are timers in the kitchen, finding my wife’s phone, and turning off music.
My #1 use case is setting timers. My hands are messy in the kitchen, and I need to set 35 different timers to get the kids outta the house in the morning.
You’re assuming that we can’t have both. Why not have it as a complementary input?
I think looking at a device and talking is better than saying ‘hey $brandname’ before everything, but having both would be better still.
Friends have voice stuff; it’s pretty annoying.