Discussion about this post

Christopher M Overton:

Worth commenting that constructing design templates could be worth its weight in gold, but having AI pixie dust adapt those templates is another route. Add to this that optimization and reduction of code overhead is yet another. Visual building seems intuitively the easiest, since design is most naturally like picking up a pen or paintbrush. However, writing semantic structures is its own art (outside of coding): painting a design landscape with words. Could we integrate an LLM, say ChatGPT-4 and DALL-E, to take a picture/art/design of choice that represents an app's style theme and have the AI provide font groups and complementary colors, then build the desired app, or at least a basic UI/UX example?

Andrew Binstock:

It seems like so much of the UI still isn't responsive to very basic things the app already knows. For example, the many times you click a drop-down menu and it contains only a single option. Or when you have to choose a country from an alphabetical list instead of the app putting the country you're in at the top. Or the many times you're entering an email address and you're told it's invalid while you're still in the middle of typing it. On and on...

What I'm driving at is that there is so much in present UIs that doesn't leverage what the app already knows, that being responsive to whether you're wearing glasses seems a long way down the pike.

That being said, I like the idea.
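
[Editor's note: as an aside on the email-validation point above, here is a minimal sketch of deferring validation until the field loses focus rather than flagging errors mid-typing. It assumes a plain HTML form with an input whose id is "email"; the id and the loose regex are illustrative only, not any particular site's implementation.]

```ts
// Validate an email field on blur instead of on every keystroke,
// so the user isn't told "invalid" while still typing.
// Assumes an <input id="email"> exists; the id and pattern are illustrative.
const emailInput = document.getElementById("email") as HTMLInputElement | null;

function looksLikeEmail(value: string): boolean {
  // Deliberately loose client-side check; real validation belongs server-side.
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value);
}

emailInput?.addEventListener("blur", () => {
  const value = emailInput.value.trim();
  // Only flag an error once the user has left the field with a non-empty value.
  emailInput.setCustomValidity(
    value && !looksLikeEmail(value) ? "Please enter a valid email address." : ""
  );
  emailInput.reportValidity();
});

// Clear any stale error as soon as the user starts editing again.
emailInput?.addEventListener("input", () => emailInput.setCustomValidity(""));
```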

