My phone knows I'm not wearing my glasses. Why doesn't it care?
Worth commenting that constructing design templates could be worth its weight in gold, but AI pixie dust adapting those templates is another route, and optimization and reduction of code overhead is yet another. Visual building seems intuitively the easiest, since design is most naturally like picking up a pen or paintbrush. Writing semantic structures, however, is another art entirely (outside of coding): painting a design landscape with words. Could we integrate an LLM like GPT-4 with DALL-E so that, given a picture, artwork, or design of choice that represents the style theme of an app, the AI provides font groups and complementary colors and builds the desired app, or at least a basic UI/UX example?
It seems like so much of the UI still isn't responsive to very basic things it already knows. For example: the many times you click a drop-down menu and it contains only a single option; or having to choose a country from an alphabetical list instead of seeing the country you're in at the top; or being told an email address is invalid while you're still in the middle of typing it. On and on...
What I'm driving at is that there is so much in present UIs that doesn't leverage what the app already knows about, that being responsive to whether you're wearing glasses seems a long way down the pike.
That being said, I like the idea.
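The email-address example above has a straightforward fix: validate when the field loses focus, not on every keystroke. A minimal sketch of that idea, where the `isPlausibleEmail` helper, its deliberately loose regex, and the `"email"` element id are all illustrative assumptions rather than any framework's real API:

```typescript
// Illustrative sketch: defer email validation until the user leaves the
// field, instead of flagging an error on every keystroke.
// `isPlausibleEmail` is a deliberately loose check, not full RFC 5322:
// one "@" with a non-empty local part and a dot somewhere in the domain.
function isPlausibleEmail(value: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
}

// Browser wiring (assumes an <input id="email">): the error message only
// appears on blur, once the user has plausibly finished typing.
// document.getElementById("email")!.addEventListener("blur", (e) => {
//   const input = e.target as HTMLInputElement;
//   input.setCustomValidity(
//     isPlausibleEmail(input.value) ? "" : "Please enter a valid email address"
//   );
// });
```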
How would the phone detect that you took off your glasses? Is the camera on periodically to take your photo? There may be a privacy concern there...
You should check what iOS provides for accessibility and what APIs it offers to developers. Sorry, but some of the features you described already exist.
Found a similar request on Reddit: https://www.reddit.com/r/apple/comments/13chah8/iphone_should_autodetect_glasses/?rdt=49471
It would be even better if the phone deliberately distorted its display for my particular eye defect so I could read it with my glasses off. The laws of optics are well known, so I can imagine how it might be possible to defocus an image so that, viewed with my bad eyes, I saw a perfect, or near-perfect, image instead. I realise this is a harder step than just making things bigger.
I love the idea of getting “user current context” in the same way as we currently get context for CSS media queries. Of course it becomes yet another fingerprinting tool!
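Some of that user context is already exposed through the real user-preference media features in CSS (`prefers-color-scheme`, `prefers-reduced-motion`, `prefers-contrast`), and the fingerprinting worry above is easy to see: each feature is one more stable bit about the user. A sketch, where the query strings are real media features but the `contextKey` helper and its use as a fingerprint token are illustrative:

```typescript
// Real user-preference media features (CSS Media Queries Level 5).
const PREFERENCE_QUERIES: Record<string, string> = {
  colorScheme: "(prefers-color-scheme: dark)",
  reducedMotion: "(prefers-reduced-motion: reduce)",
  moreContrast: "(prefers-contrast: more)",
};

// Illustrative helper: fold the query results into one stable string --
// exactly the kind of token a fingerprinting script would collect.
function contextKey(results: Record<string, boolean>): string {
  return Object.keys(results)
    .sort()
    .map((k) => `${k}=${results[k] ? 1 : 0}`)
    .join(";");
}

// In a browser, the results would come from matchMedia:
// const results = Object.fromEntries(
//   Object.entries(PREFERENCE_QUERIES).map(([k, q]) => [k, matchMedia(q).matches])
// );
```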
Interesting take. Imagine you could tell the AI your preferences for how you want to lay out stuff.
E.g. often websites have sticky navbars that hide like 30% of the screen on mobiles. I’d be happy to be able to minimize them without developers having to program that.
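A user-side workaround for the sticky-navbar case could look like the sketch below: a bookmarklet-style script that collapses fixed or sticky bars taller than some fraction of the viewport. Everything here (the `shouldCollapse` helper, the 15% threshold, the `header, nav` selector) is a hypothetical illustration, not an existing browser feature.

```typescript
// Illustrative heuristic: collapse an element if it is pinned to the
// viewport (position fixed/sticky) AND taller than ~15% of the screen.
// The threshold is an assumption, not a standard.
function shouldCollapse(
  position: string,
  heightPx: number,
  viewportPx: number
): boolean {
  const pinned = position === "fixed" || position === "sticky";
  return pinned && heightPx > viewportPx * 0.15;
}

// Bookmarklet-style wiring in a browser:
// for (const el of document.querySelectorAll("header, nav")) {
//   const cs = getComputedStyle(el);
//   if (shouldCollapse(cs.position, el.getBoundingClientRect().height, innerHeight)) {
//     (el as HTMLElement).style.display = "none";
//   }
// }
```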
As someone who wears a strong prescription, I can say this is a GREAT idea. A lot of website layouts don't even hold up well to zooming on desktop.