
This one goes in the “ideas are a dime a dozen” bucket. I want my phone to adapt to whether I am wearing glasses or not. Fonts should get bigger. Layouts should adjust to show fewer, larger items.
Designing & coding all this by hand would be incredibly tedious, error-prone, & expensive. Look, I can’t even get native iPad apps for most of the apps I use.
Idea: instead of current layout frameworks, specify layouts in terms of what you want accomplished:
This text is a caption for that picture.
These items go together as a single item.
This item has comments.
This item has reactions.
This is a form to be filled out.
The pixel-precise layout is done with a constraint solver or magic AI pixie dust or whatever.
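To make the idea concrete, here is a minimal sketch of what a semantic layout declaration might look like, with a toy stand-in for the solver. All names and numbers are invented for illustration; a real system would use an actual constraint solver (Cassowary-style), not this arithmetic.

```typescript
// Elements declare semantic roles and relationships, not pixel positions.
type Role = "caption" | "image" | "group" | "comments" | "reactions" | "form";

interface SemanticNode {
  id: string;
  role: Role;
  captionFor?: string;      // "this text is a caption for that picture"
  children?: SemanticNode[]; // "these items go together as a single item"
}

// One feed item, described semantically.
const feedItem: SemanticNode = {
  id: "post-1",
  role: "group",
  children: [
    { id: "pic-1", role: "image" },
    { id: "cap-1", role: "caption", captionFor: "pic-1" },
    { id: "cmt-1", role: "comments" },
    { id: "rct-1", role: "reactions" },
  ],
};

// Toy "solver": given a viewport width and a minimum readable font size,
// decide how many items fit per row. Bigger font => fewer, larger items.
function itemsPerRow(viewportWidth: number, minFontPx: number): number {
  const minItemWidth = minFontPx * 12; // rough heuristic: ~12 chars wide
  return Math.max(1, Math.floor(viewportWidth / minItemWidth));
}
```

The point of the sketch: nothing in `feedItem` says where anything goes on screen, so the same declaration can be re-solved whenever the readability constraints change.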
Imagine I’m doom-scrolling my Feed of News. I take off my glasses. The screen changes layout so I can still read. I put my glasses back on. Back to the original.
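The glasses-on/glasses-off scenario is just a signal driving a re-solve. A minimal sketch, with assumed numbers chosen only for illustration:

```typescript
// Hypothetical re-layout signal: the same semantic content renders at
// two densities depending on whether glasses are detected.
interface LayoutParams {
  fontPx: number;
  columns: number;
}

function layoutFor(wearingGlasses: boolean): LayoutParams {
  // Illustrative values, not real accessibility guidance.
  return wearingGlasses
    ? { fontPx: 16, columns: 2 }  // normal density
    : { fontPx: 28, columns: 1 }; // fewer, larger items
}
```

Taking the glasses off flips the signal, the solver reruns, and the feed reflows; putting them back on restores the original parameters.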
Once you have layouts specified semantically instead of geometrically, you can really turn up the magic AI pixie dust. You can gradually evolve better & better layouts. You can personalize layouts. Maybe I react better to captions at the top & you react better to captions at the bottom. Why shouldn’t we each get what’s best for us?
The way to get started on this new framework would be to build one specific app using it. Only add elements & relationships as needed. Begin adapting to re-layout signals from the very beginning. That’s the magical demo.
Anyway, I’m not going to work on this but I hope someone does.
Idea: LayoutFeed
Worth commenting that constructing design templates could be worth its weight in gold, but AI pixie dust adapting those templates is another route. Add to this that optimization and reduction of code overhead is yet another. Visual building seems intuitively the easiest, since design is most naturally like picking up a pen or paintbrush. However, writing semantic structures is yet another art (outside of coding): painting a design landscape with words. Could we integrate an LLM like ChatGPT-4 with DALL-E to, say, take a picture/art/design of choice that represents the style theme of an app, and have AI effectively provide font groups and complementary colors and build the desired app, or at least a basic UI/UX example?
It seems like so much of the UI still is not responsive to very basic things it already knows. For example, the many times you click a drop-down menu and it contains only a single option. Or when you have to choose a country from an alphabetical list rather than seeing the country you're in at the top. Or the many times you're entering an email address and are told it's invalid while you're still in the process of typing it. On and on...
What I'm driving at is that there is so much in present UIs that doesn't leverage what the app already knows about, that being responsive to whether you're wearing glasses seems a long way down the pike.
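The commenter's email example can be sketched in a few lines: only report an address as invalid once the user has finished typing (e.g. on blur), not on every keystroke. The regex is a deliberately loose placeholder, not a full RFC 5322 check, and the function names are invented.

```typescript
// Loose shape check: something@something.something
function emailLooksValid(value: string): boolean {
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value);
}

// Use what the UI already knows: whether the user is still typing.
function shouldShowEmailError(value: string, fieldBlurred: boolean): boolean {
  // While the field still has focus, stay quiet even if the
  // partial input is not yet a valid address.
  return fieldBlurred && value.length > 0 && !emailLooksValid(value);
}
```

The design choice is the same one the comment argues for: the app already knows the field still has focus, so it should use that knowledge before complaining.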
That being said, I like the idea.