11 Comments

Worth commenting that constructing design templates could be worth its weight in gold, but AI pixie dust adapting those templates is another route. Add to this that optimization and reduction of code overhead is yet another. Visual building seems intuitively the easiest, as design is most naturally like picking up a pen or paintbrush. However, writing semantic structures is yet another art (outside of coding): painting a design landscape with words. Could we integrate an LLM, say ChatGPT-4 and DALL-E, to take a picture/art/design of choice that represents the style theme of an app, and have AI provide font groups and complementary colors and build the desired app, or at least a basic UI/UX example?

Nov 4, 2023 · Liked by Kent Beck

It seems like so much of the UI still is not responsive to very basic things it already knows. For example, the many times you click a drop-down menu and it contains only a single option. Or you have to choose a country from an alphabetical list rather than placing the country you're in at the top of the list, or the many times when you're entering an email address and you're told the email address is invalid as you're in the process of typing it. On and on...

What I'm driving at is that there is so much in present UIs that doesn't leverage what the app already knows about, that being responsive to whether you're wearing glasses seems a long way down the pike.
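The fixes this comment asks for are small, deterministic bits of logic. A minimal sketch of two of them; the function names and the plain-array inputs are illustrative, not from any real framework:

```javascript
// Move the user's likely country (e.g. detected from locale) to the top
// of an otherwise alphabetical list, leaving the rest in order.
function prioritizeCountry(countries, userCountry) {
  if (!countries.includes(userCountry)) return countries;
  return [userCountry, ...countries.filter((c) => c !== userCountry)];
}

// If a dropdown has exactly one real choice, return it as the
// preselected value instead of forcing the user to click.
function autoSelectIfSingle(options) {
  return options.length === 1 ? options[0] : null;
}
```

For example, `prioritizeCountry(["Austria", "Brazil", "Canada"], "Canada")` yields `["Canada", "Austria", "Brazil"]` while keeping the alphabetical list intact for everyone else.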

That being said, I like the idea.


How would the phone detect that you took off your glasses? Is the camera on periodically to take your photo? There may be a privacy concern there...


Perhaps a gadget with a proximity sensor and Bluetooth that could be attached to the frames.

Nov 4, 2023 · Liked by Kent Beck

You should check what iOS provides for accessibility and what APIs it offers developers. Sorry, but you described some existing features.

author

I’m aware. The problem with using existing APIs is the extra design and programming investment required. Companies would rather have the next feature. I want a framework that requires no “extra” investment.


It would be even better if the phone deliberately distorted the display to compensate for my particular eye defect, so I could read it with my glasses off. The laws of optics are well known, so I can imagine how it might be possible to defocus an image such that, viewed through my bad eyes, I see a perfect, or near-perfect, image instead. I realise this is a harder step than making things bigger.


I love the idea of getting “user current context” in the same way as we currently get context for CSS media queries. Of course it becomes yet another fingerprinting tool!
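CSS already exposes a little user context this way; a sketch of the existing `prefers-*` queries alongside the kind of hypothetical extension this comment imagines (the `user-vision` feature below is invented for illustration and is not a real media feature):

```css
/* Real, shipping user-context media queries */
@media (prefers-reduced-motion: reduce) {
  * { animation: none; transition: none; }
}
@media (prefers-contrast: more) {
  body { color: #000; background: #fff; }
}

/* Hypothetical extension in the same spirit -- NOT a real feature */
@media (user-vision: uncorrected) {
  html { font-size: 150%; }
}
```

And, as the comment notes, every such query a page can read is another bit of entropy for fingerprinting.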


Interesting take. Imagine you could tell the AI your preferences for how you want content laid out.

E.g. websites often have sticky navbars that hide something like 30% of the screen on mobile. I'd be happy to be able to minimize them without developers having to program that.
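A crude version of this is already possible today with user stylesheets (supported natively by some browsers and by extensions such as Stylus), without waiting on developers. A sketch, assuming the navbar is positioned with `sticky` or `fixed`; the selectors are guesses, since real sites use arbitrary class names:

```css
/* User-stylesheet sketch: unstick headers so they scroll away
   instead of permanently covering part of the viewport. */
header,
nav,
[class*="navbar"],
[class*="sticky"] {
  position: static !important;
}
```

The AI-mediated version the comment imagines would essentially generate rules like these per site, from a stated preference.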


As someone who wears a strong prescription, I can say this is a GREAT idea. A lot of website layouts don't even hold up well to zooming on desktop.
