Discover more from Software Design: Tidy First?
My phone knows I'm not wearing my glasses. Why doesn't it care?
This one goes in the “ideas are a dime a dozen” bucket. I want my phone to adapt to whether I am wearing glasses or not. Fonts should get bigger. Layouts should adjust to show fewer, larger items.
Designing & coding all this by hand would be incredibly tedious, error-prone, & expensive. Look, I can’t even get native iPad apps for most of the apps I use.
Idea: instead of current layout frameworks, specify layouts in terms of what you want accomplished:
This text is a caption for that picture.
These items go together as a single item.
This item has comments.
This item has reactions.
This is a form to be filled out.
The pixel-precise layout is done with a constraint solver or magic AI pixie dust or whatever.
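To make the idea concrete, here's a minimal sketch of what a semantic layout spec might look like, assuming a toy rule-based resolver in place of a real constraint solver (the `Item` type, the `caption_for` relationship, and the `glasses_on` signal are all hypothetical names, not any existing framework's API):

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """A semantic element: what it is, not where it goes."""
    kind: str                                 # e.g. "picture", "text", "form"
    caption_for: "Item | None" = None         # this text is a caption for that picture
    children: list["Item"] = field(default_factory=list)  # these go together as one item

def layout(items: list[Item], glasses_on: bool) -> list[dict]:
    """Resolve semantic intent into concrete geometry for the current signal."""
    font = 14 if glasses_on else 24           # bigger type without glasses
    per_row = 3 if glasses_on else 1          # fewer, larger items per row
    return [
        {"item": item.kind, "font": font, "row": i // per_row, "col": i % per_row}
        for i, item in enumerate(items)
    ]
```

The point of the sketch: the app only ever says *what* the items are; re-running `layout` with a new signal is what produces the "take off my glasses, screen adapts" behavior. A real resolver would be a constraint solver (or the pixie dust), but the shape of the interface is the same.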
Imagine I’m doom-scrolling my Feed of News. I take off my glasses. The screen changes layout so I can still read. I put my glasses back on. Back to the original.
Once you have layouts specified semantically instead of geometrically, you can really turn up the magic AI pixie dust. You can gradually evolve better & better layouts. You can personalize layouts. Maybe I react better to captions at the top & you react better to captions at the bottom. Why shouldn’t we each get what’s best for us?
The way to get started on this new framework would be to build one specific app with it. Only add elements & relationships as needed. Adapt to re-layout signals from the very beginning. That’s the magical demo.
Anyway, I’m not going to work on this but I hope someone does.