Now hold on… when I say “bad design” here, I’m talking about bad design in the context of user experience and human-computer interaction design, not the beautiful new immaterial background wallpaper graphics. The new background images and semi-opaque Vista-like windows look great! Human-computer interaction design generally has two important approaches: “easy to use” and “easy to learn.” There’s also the “easy to look at” approach to design, and that seems to be what Microsoft is going for here, as many of the “easy to use” and “easy to learn” aspects have been broken in Windows 11. That’s likely to prove a problem, since Windows is something that people need to use, not just look at. The best design would strike a balance with a high grade of all three: “easy to use,” “easy to learn,” and “easy to look at.”
Why the centered start menu is bad
I know you can move the Start menu button back to the lower-left corner, where it has been by default since the mid-1990s, by changing the settings, but there’s a lot to be said about changing its position by default. Most users will leave things at their defaults, frustratingly deal with the changes, and grow to resent the operating system. The centered Start menu is new and different and eye-catching, but is it good? Let’s start with some interaction design basics.
Interaction design basics
One of the big basics of interaction design comes from Bruce Tognazzini’s (aka “Tog”) “AskTog: A Quiz Designed to Give You Fitts“. Also see “Designing for People Who Have Better Things To Do With Their Lives, Part Two“. Question 3 is “List the five-pixel locations on the screen that the user can access fastest.” The answer is:
- The pixel immediately at the current cursor location: Click the mouse and you’re done.
- The bottom-right corner.
- The top-left corner.
- The top-right corner.
- The bottom-left corner.
This, of course, presumes that the user is on a computer with a mouse or trackpad, which for Windows is very likely. Touch and pen interaction efficiency follow different rules. Anyway, the reason the corners of the screen are the quickest and easiest to access is that they don’t require any precision to hit. You can flick the mouse pointer toward any of those corners, and it will end up on that corner pixel, ready to click and activate whatever is there. Logic would dictate that you place some of your most-used interactive elements in those corners to make them the easiest to access.
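The corner advantage can be made concrete with Fitts’ Law, which predicts that movement time grows with the index of difficulty ID = log2(D/W + 1), where D is the distance to the target and W is the target’s effective width. Here’s a minimal sketch; the pixel sizes are illustrative assumptions (not measurements of Windows), and `fitts_id` is just a hypothetical helper name:

```python
import math

def fitts_id(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

# Assumed numbers for illustration only:
# a centered taskbar button ~600 px away with a ~32 px effective target,
# versus a screen-corner target the same distance away. The corner's
# effective width is enormous because the cursor is pinned at the screen
# edge -- you can overshoot freely, so almost no precision is required.
centered_button = fitts_id(600, 32)   # small target, precision needed
corner_target   = fitts_id(600, 600)  # edge pinning absorbs overshoot

print(f"centered button ID: {centered_button:.2f} bits")
print(f"corner target ID:   {corner_target:.2f} bits")
```

The corner comes out at roughly a quarter of the difficulty of the centered button in this toy model, which is the quantitative version of “flick and click.”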
What functions are available in the 4 corners of Windows 11 by default?
- Bottom-left corner: nothing
- Bottom-right corner: show desktop (this option can be disabled, but then the corner will do nothing. Ideally, the corner click should show the notifications bar.)
- Top-right corner: nothing, or close program if a program is active and maximized.
- Top-left corner: nothing, or show the window control menu if a program is active and maximized and that program still has a top-left window control menu.
The only command there that I would consider frequent and useful is “close program.” The window control menu is also useful, but Windows programs no longer include it as a standard component, so you can’t rely on it being there consistently anymore.
Other desktop operating systems are much better at this. In fact, Windows 8 had a brilliant use of the 4 corner click pixels, but nobody knew about them since there were no visual cues or identifiers for those functions. In other words, the “easy to learn” aspect was missing from Windows 8, while the “easy to use” aspect was certainly there. Without the “easy to learn” aspect, the “easy to use” aspect is often lost.
It used to be that the bottom left corner would activate the start menu by default all the time. This was great because you could easily flick the mouse pointer in that direction, click, and get access to everything there right away. It’s been like that since 1995. You don’t even have to look at the screen… you can glance out the window while navigating if you want to.
It’s not in the right place anymore
Windows 11 removes that extremely efficient and easy-to-use interaction method (by default) in favor of putting the Start menu button closer to the center of the bottom edge of your screen. To be clear, this is not one of the quickest and easiest locations for an interactive element. But it gets worse…
It’s not even always in the same new place
While the bottom-left corner was already super easy to access, you could also build motor memory for its location. All you need to do is remember “flick to the lower-left corner, click”. So efficient! Well, on Windows 11, not only do you have to be much more precise in trying to click the start menu button, but it also moves around sometimes.
That’s right, it’s not consistently in the same location at the bottom of your screen! If I launch a bunch of programs, the app icons will fill up a larger width within the taskbar. This will displace the start menu button, task switcher, widgets, and search buttons to the left. That means you can’t build motor memory for their locations and thus have to spend some brainpower to search for the proper icon using your eyes every time you need to use them. If that sounds like it’s going to require more cognitive energy and waste your time, you’re right.
You may remember that with Windows 7, Microsoft’s research found that people would often launch programs in a specific sequence so that they would continuously be listed in the taskbar in a specific order. Windows 7 allowed users to pin programs to the taskbar in their desired positions so that users could build motor memory for their most-used applications and quickly switch to them. With Windows 11’s centered taskbar, you can no longer build motor memory for application locations, since their locations are displaced depending on how many other programs are running at the same time (unless you pin every program you’ll ever use). In other words, Windows 11’s new taskbar design degrades the usefulness of pinning applications.
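The displacement is easy to see with a little geometry. This sketch is a simplified model with made-up pixel sizes, not Windows 11’s actual layout code: it centers a block of equally sized taskbar icons and shows how the leftmost icon, the Start button, drifts left every time another program launches:

```python
def start_button_x(screen_width, icon_width, num_icons, gap=4):
    """With a centered taskbar, icons form one centered block, so the
    Start button (the leftmost icon) shifts left whenever another icon
    is added. Illustrative geometry only, with assumed sizes."""
    block = num_icons * icon_width + (num_icons - 1) * gap
    return (screen_width - block) // 2

# Centered Start button on an assumed 1920 px screen with 48 px icons:
for n in (5, 8, 12):
    print(n, "icons -> Start button at x =", start_button_x(1920, 48, n))
```

A corner-anchored Start button, by contrast, stays at x = 0 no matter how many programs are running, which is exactly why motor memory works for it.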
Application launching is much further away
The layout of the new Start menu has problems too. The “All apps” button is about as far away from the mouse pointer’s initial location as it could possibly be, and even the listing of “pinned” applications is quite far away.
On Windows 10, I can arrange my start menu’s application tiles to be very close to my mouse pointer so that I can access them with minimal mouse movement. That’s an efficiency boost and a time saver. Windows 11’s Start menu does the opposite AND doesn’t allow the user to customize it for better efficiency. That’s another “bad design” aspect of the new Start menu.
Which function in the Windows 11 Start menu is the closest to the Start button for easiest access? My username? A button that I can use to sign out? Something I’ve never done? Does that really deserve the fastest access location? Nope!
Live tiles are gone and the widget window isn’t as good
Besides being much easier to customize and arrange into an efficient layout, the Windows 10 Start menu also had instant-access, at-a-glance live tiles that made viewing information as simple as clicking the Start menu button. With one click I could see the time in a variety of time zones, the weather, my next appointment, upcoming tasks, flagged email lists, news, etc.
The new Widgets window kind of does a lot of the same things that live tiles did, but it’s all in a completely different section from the start menu now and has a lot more irrelevant info and a lot less personal info. I mean, the calendar widget doesn’t even work yet.
That being said, Windows 10 live tiles never reached the full potential they had on Windows Phone 7. Windows Phone 7’s live tiles were fantastically customizable and personal. I could pin photo albums, OneNote notebooks or pages, plane ticket PDFs, contact groups with their latest Facebook/Twitter/MSN posts, specific email folders, web pages, movies, music albums, playlists… it was far, far more robust back then.
Ambiguous icons are objectively more difficult to learn & use
All of the research on the usability of interactive elements has shown that labeling buttons in the users’ native language has many advantages that make a system both “easy to learn” and “easy to use.” If you’re someone who doesn’t have anything better to do than learn new software interaction methods, then this might not be a big deal to you. For people who do have better things to do, an instantly obvious and consistent user interface is a huge advantage for a positive user experience. Here are some references for you if you don’t believe me:
One of those is an article about Microsoft’s own research from 2005! Going back to Bruce Tognazzini’s interaction designer quiz, buttons with labels also improve interaction according to Fitts’ Law, since the labels increase the target area of the buttons.
You might say that, aesthetically, the ribbon-style interface of 2005-2021 looks more cluttered, and that’s a valid criticism, but I can much more easily determine the functions of all of those buttons because I learned about “words” when I was 2 years old. Plus, they’re all one click away, thus improving efficiency. Windows 11 makes the buttons harder to understand, harder to explain to other users, and harder to find, thus requiring more cognitive energy and time. Windows 11’s file manager also removes the extremely useful customizable toolbar where I could add frequently-used commands in the order I chose for extra efficiency.
So we’ve already proven many times over the past few decades that obvious interactive elements are better. Yet today, with Windows 11, we’re going backward. The new File Explorer is a good example of bad design. While in 2005 Microsoft proved that its new “ribbon” style toolbar interface was going to be much easier to use and easier to learn (it is), now we’re throwing that out for crowded, cryptic icon toolbars of the kind that Microsoft had so many problems with in the 1990s. Even Windows Vista’s File Explorer was easier to use, since at least it had labeled buttons that people could understand. Microsoft never even finished consistently implementing the 2005 ribbon interface across its systems, and now we’re going back to a ’90s-era interface design that was proven problematic decades ago.
Windows that don’t behave like Windows
Speaking of confusing unlabeled icons, Windows 11’s taskbar has a few icons by default that don’t behave like the other icons. They LOOK a lot like the other icons, though: they’re blue just like many Windows program icons, and they’re the same size as the program icons. That’s going to cause some confusion. The Windows icon, Search icon, Task view icon, and Widgets icon are all stuck to the left side of the centered taskbar, while program icons appear to the right. Program icons can be arranged however you want, and they can be “pinned” so that they stay on the taskbar even when not running. The Windows, Search, Task view, and Widgets icons look the same as program icons, but they cannot be arranged in the same manner. What’s worse is that they also launch windows that look like program windows (just like the program icons on the right), but these other types of windows don’t behave like program windows.
In Windows 10, we also had a series of system icons on the left side of the taskbar for similar things like the task view, start menu, Cortana, and search, but these were designed to look different from the program icons to the right. That was a good thing. We could visually identify that these icons would behave differently… AND they did behave differently. Instead of opening up a program window like the program icons did, these icons would generally open a menu that popped up from the taskbar (with the exception of “task view” which would give a full-screen overlay).
In Windows 11, not only is the difference unclear, but now those system icons no longer open menus that are clearly attached to their icons in the taskbar… instead, they open floating graphic windows. These windows look like they could be application windows, but they don’t have standard minimize/maximize/close buttons in the upper right corners like all windows are supposed to have… and they are not resizable or repositionable like all windows are supposed to be. Why would you do that to us, Microsoft? Is there any real reason for a new type of window that’s not nearly as usable or flexible as the other types of windows? Is there any reason to make their launch mechanism look exactly like the launch mechanism for windows that do behave like we expect windows to behave? The answer is no. This creates more confusion for the users and requires them to spend cognitive energy memorizing the differences when really there should be no differences.
Either the Start window, Search window, Task view, Chat, and Widgets window should behave like program windows, or the Start, Search, Task view, Chat, and Widgets buttons should behave like taskbar menus that are clearly different from program windows. My vote is that they should all behave like program windows, as that would be very useful and much simpler for understanding the system as a whole. I’d love to be able to move and resize the search window, widgets window, and program launcher window. Microsoft already did that with Cortana, making it a normal program window instead of a taskbar menu (as per one of my feature requests). The whole system would be much easier to use if window behavior were consistent, as users would be able to expect the same functionality and behavior from all windows in the system.
How awesome would it be to have the program launcher window stay open and resizable/repositionable so that I can launch as many programs as I want with one click each? Instead, I have to re-open the start/program launcher window every time I want to launch another program because it disappears as soon as I click something.
The widgets window looks like an application window, but it isn’t. How awesome would it be if it was resizable, snap-able, and max/minimizable though? If the widgets window is a menu, why isn’t it visually anywhere near the widgets button on the taskbar? What a mess!
The search window isn’t a normal window either. It disappears as soon as you click something else, which is an awful user experience. How awesome would it be if the search window were a normal window that I could snap to the side, opening results in other windows without losing the results listing? That would have been really useful! Ironically, the one scenario where users actually like content opening in a new window (see “Opening Links in New Browser Windows and Tabs (nngroup.com)“) is when the original view would otherwise be destroyed, which is exactly what Microsoft does here by making the list of results disappear, requiring a do-over of the search every time.
With an update and a reboot, I now have 5 icons on the left side of the taskbar that LOOK like program icons, but they don’t open normal program windows and they aren’t drag/drop rearrangeable. To the right of those, I have 3 actual program icons that do open normal program windows, are rearrangeable, and have keyboard shortcuts as expected. It’s impossible to visually tell which icons behave normally and which do not. Why can’t they all behave normally so that users can build a simple, consistent mental model of how the system works?
Speaking of a confusing lack of consistency that causes frustration among users, let’s not forget about the ridiculous number of different scrolling interfaces we’ve got to deal with on Windows 11. Sometimes we’ve got dots for scrolling, sometimes we’ve got tiny scrollbars that are difficult to click, sometimes we’ve got larger scrollbars that are easier to use, sometimes we have no scrollbars. It’s a total mess that makes interaction complicated and inefficient for users.
In the above image, you’ll see 5 different methods of vertically scrolling content in windows. The first one, in the widgets window, is invisible until you move your mouse over it. The second one is just two tiny dots in the Start window. The third one is a very thin vertical line at the right edge of the File Explorer window that becomes slightly larger and shows up/down arrows when you mouse over it. The fourth one is a rounded scrollbar, and the fifth one is a rectangular scrollbar. The last one is probably the best, since its targets are larger and it’s easy to see.
When you learned to drive a car how many different styles of steering wheels were there in that one car that you learned to drive on? Certainly not 5, right? Windows used to have a central “Appearance” control panel where you could actually customize the scrollbars in order to change the colors or make them bigger or smaller. That was pretty awesome… and, get this… It applied to all of the scrollbars in all of the programs! So simple, consistent, and easy!
While the removal of the easy-to-use ribbon interface from File Explorer and the centered Start menu are also accessibility problems, Microsoft is removing yet another accessibility advantage that was present in all previous versions of Windows… access keys.
If you want to be very efficient in your computer interaction methods, you’ll probably want to learn how to use access keys and keyboard shortcuts to navigate the interface instead of taking your hands off the keyboard and reaching for the mouse, trackpad, or touch screen. In the old days, all Windows programs had menus at the top that were accessible via the keyboard. You could simply press the Alt key once followed by the underlined letter within the menu in order to quickly access that command. This was awesome for power user efficiency as well as for users with motor skill disabilities. It still works this way in many programs today.
With Microsoft’s 2005 ribbon interface, the access keys were hidden by default but would show with a single press of the Alt key, making it easy to learn the access keys even though the ribbon UI didn’t follow the menu structure of most other programs.
Today, with Windows 10 UWP apps and now Windows 11… access keys are mostly gone. The Windows key + X menu has removed them completely, and that was one of the most useful sets of access keys. On Windows 10, I could type Windows key + X, u, u to shut down the computer, or Windows key + X, u, s to go into sleep mode. Those are all gone. File Explorer’s context menu has been degraded as well, hiding the usual menu items that we’ve come to expect behind a “more options” command that makes those hidden functions more complicated and less efficient.
Keyboard navigation has been degraded with Windows 11, too. You have to use the tab key and arrow keys to sequentially select every button in a window before you can get to the one you want. No more instant access keyboard shortcuts like we had in Windows 95-Windows 7.
What about tablet interaction?
The “Tablet mode” in Windows 10 has been completely removed from Windows 11. Tablets definitely require different interaction methods from mouse/trackpad pointers, because touch screens have different capabilities.
First of all, a touch screen doesn’t let you depend on mouse-hover interactions or tooltips. I can’t show a tooltip for an unlabeled icon by hovering my finger over it because 1. touch screens don’t work that way, and 2. even if they did, I wouldn’t be able to see the text label because my finger and hand would be covering it. This means that interactive buttons should be even more obvious, so that users don’t have to spend cognitive energy guessing what they do before pressing them.
If the easiest-to-access point on a computer’s interface is the pixel right below your mouse pointer’s current location, then what’s the easiest-to-access area of a touch screen tablet interface? How about the area within easy reach of your thumbs?
In the above photo, I’ve circled my thumbs in red so that you can see where they’re located and what area of the touch screen would be easiest for me to access with those fingers. Of course, this all depends on how you hold a tablet, but I’m willing to bet that most people hold it by the left or right edges… certainly not the center and certainly not the bottom edge.
Windows 8 actually did the tablet interface really well, but it had hidden left and right edge gestures that would reveal some extremely useful controls on the edges right below your thumbs. I could easily get the task switcher with the left edge or the start screen and a series of “charm” control buttons on the right edge. As mentioned previously, Windows 8’s downfall was not making these things clear to new users.
Windows 11 actually does have a left and right edge gesture function that reveals some controls on a tablet, but… they’re not nearly as useful as they could be.
A left edge swipe reveals the widgets window. This is much less useful than it would be if it revealed the task view like it did on Windows 10, or actually activated task switching like it did on Windows 8. How many times has Microsoft tried this widget thing, by the way? I guess I did think it was pretty cool in 1996 with the Windows 95 “Active Desktop”.
A right edge swipe reveals the notifications panel that lists all of your recent notifications. That’s ok, but it could have been something much better.
Worst of all, the centered Start menu is about as far away from my fingers as you can get, thus making application launching much more difficult than it needs to be.
On the other hand, the touch keyboard is very much improved with new theme capabilities and a scaling option. The scaling option is in the settings though. Really, the better way to do it would have been to allow the keyboard to behave just like a normal application window that I can resize by dragging the corners. That’s actually how the touch keyboard was in previous versions of Windows and it was so much more flexible that way.
How about that Windows Ink support and pen interaction, though? Well, it looks like there’s no improvement there. Windows Ink still wants to switch tools automatically and against my will, making things like OneNote impossible to use. Pen interaction is obviously another user interaction scenario that deserves a different design than touch and mouse. For pen interaction, the corners and edges aren’t as easy to access as they are with a mouse or a finger edge swipe. The center or top of the screen does make sense for pen interaction, but Windows 11 doesn’t seem to be taking that into account either, as so many other pen interaction functions are broken.
The “Bring Backs” List
It looks like I’m not the only one with complaints about Windows 11’s design. It was fine when all of these things were in Windows 10X because we knew that no one would have to use that (and probably wouldn’t), but now that they’re coming into Windows 11, and will ship on all new computers in the future… that’s a problem.
The “Bring backs” list is getting pretty popular on the Windows Insiders Feedback hub.
Many other people are complaining about the user interface design changes as well. It’s not just me.
Microsoft Design’s explanation
Microsoft’s Design team wrote an article explaining a lot of the design decisions in Windows 11, and many parts sound good but don’t really make sense. See: “Windows 11: Designing the Next Generation” by Microsoft Design (Medium, July 2021).
For example, “After listening to people express a need for more efficiency and less noise when using Start, we designed a cleaner and simpler experience that puts people at the center by prioritizing the apps they love and the documents they need. It also adapts to modern device form factors and enables easier access for all screen sizes, from a Surface Go to an ultrawide monitor.” Yeah, that sounds good, but it’s not what they did. It’s not simpler, it’s not more efficient, and I’m not seeing it adapt to modern device form factors very well. Where’s the smartphone version? The Start window can’t even be resized by the user.
Another example, “The Microsoft Windows Design Team is driven by creative pragmatism. Designing for over one billion people requires empathy. It relies on internalizing human needs to build solutions that are inclusive of all, while still delivering a personal touch. As Windows leaps into its next era, the story of its evolution is told again through human-centered product design and a deep commitment to build the most inclusive and personal operating system.” Again, that sounds good, but Windows 11 looks less inclusive, less pragmatic, less personal, and less human-centered than ever. Those words are all kind of ambiguous though.
The above video talks about how Microsoft Research decided on the new design of the Start window, and it illustrates a common mistake in how user research is done for interaction design: asking users what they want. Users are notoriously bad at self-awareness, so asking them to fill out a survey or tell you what they want is often going to give you bad results. The smarter way is to observe actual user behavior while they interact with prototypes, and to analyze the problems within those interactions in order to work out what the best solution is going to be. See: Usability Testing 101 (nngroup.com). Observe how users actually use Windows and identify areas that could be improved based on the time it takes users to complete frequent tasks. Do we often launch programs and then open documents, or do we open File Explorer and then open documents? Are recent documents actually reused, or are they created, finished, emailed, and never touched again? That kind of data would be more useful for designing around simplicity and efficiency, and it would probably quickly reveal the problems with Windows 11’s new designs.
Apparently, Microsoft let users arrange interactive elements as pieces of paper on a table, found some similarities in the arrangements, and went with whatever was most popular. The problem is that most of the research participants were probably approaching this with a visual priority, and that’s going to be very different from an interaction-efficiency priority. Yes, I’m going to look at the center of the screen, and if I’m from a culture that reads top to bottom and left to right, I’ll probably look at the top left first. That’s great, but it doesn’t necessarily translate into efficient or easy human-computer interaction designs.
Where the market is headed
“We believe that this is where the market is headed.” That’s an excuse that we often hear when companies decide to implement bad design and probably came up during discussions of this new design. I call this the “Lemmings excuse”. It’s similar to saying, “All the other kids are doing it, so why can’t I!?” My mom never fell for that excuse when I was a kid, and neither should you.
A good example is when Microsoft implemented hamburger buttons throughout Windows 10 even though all of the usability research data showed that this design reduces user engagement by 20-50%. Yeah, that’s where the market was, but it was still a bad decision. Luckily, today with Windows 11, that design convention has started to be abandoned, and Google is abandoning it too.
Copying bad design is still bad design. Yes, I get that Apple’s Mac OS does the centered dock with icons that move around in order to break motor memory, as does Chrome OS, but maybe that’s one of the reasons Windows users don’t want to use Mac OS and Chrome OS.
What Microsoft should have done
One of the interesting things in Microsoft Design’s videos was that one of the employees said that the new design is more “human”. How do you figure that?! It’s not easier to use, it’s not easier to learn, it’s not more accessible… it’s not even more accommodating to diversity. It’s worse at all of those things.
If you disagree with any of those things, then you’ll probably agree with the next paragraph.
Design for Diversity
Diversity is probably the most important “human” thing that a computing interface should be designed for. Microsoft’s excuse for removing accessibility & efficiency features that people rely on will probably be that the user metrics it captures show that the people who use those features are in the minority. That may be true, and if you’re in the majority, you may not care, but removing things that minorities depend on is a form of discrimination. It’s also kind of sad that people who care about user interface efficiency & usability would be in the minority these days. However, it should be obvious by now that different people prefer different interaction methods & design styles, so a theme/customization structure that accommodates that makes a lot of sense.
There is one other computing system out there that is absolutely designed for diversity (albeit in kind of a messy way). It’s Linux! There are no restrictions to what you can do with Linux, all minorities are welcome to add and use any features they want. It’s possible to make a Linux desktop environment that focuses on “easy to learn” concepts just as it’s possible to make one that focuses on “easy to use” highly efficient interaction concepts. The users have the power to choose.
User experience design in Linux is incredibly flexible. My favorite desktop environment on Linux is the Xfce Desktop Environment. By default it may look pretty old and dated, but it happens to be extremely flexible. It’s possible to create desktop environment user interface panels in XFCE that make it look like Windows or Mac OS or some combination in between. I can put whatever menus or functions I want in the corners of the screen for the most efficient easy-to-use access to those functions. If you don’t like the XFCE environment, don’t worry, there are plenty of other completely different user interface options to install and choose from.
One of the things that helps desktop environments look different is The GTK Project’s theme structure. That’s an open-source framework that many Linux systems and applications use to theme their programs and desktop environments. Anyone can create a theme that completely changes the design of the entire system. Some are beautiful, some are utilitarian. That’s the beauty of a system designed for technological diversity… users can choose their priorities. Also see: How to design an OS for the future and why companies should read this (pocketnow.com)
Take a look at just a handful of examples of the system-wide tech diversity you can get on Linux:
Granted, there is a lot more bad design in the Linux community, but at least it’s far more open to diversity and it’s far easier to refine the design for more efficient workflows… and then keep that design available to you for a very long time.
It’s still possible to get very old yet consistent desktop environments running on Linux. MATE is a good example, as it’s a continuation of GNOME 2. Choosing the GUI that’s most efficient for you and then being able to use it forever is a huge advantage for cognitive load. Users can become accustomed to the interface and spend their cognitive energy on better things like innovation and getting things done more efficiently so that they can take vacations.
Prioritize “Easy to Learn” (like Windows 95 did)
Of course, on Linux, choosing a desktop environment and a compatible theme that you like can be extremely daunting. The best choice is to design for the lowest common denominator and make the default environment as “easy to learn” as possible while still providing the tools for more advanced users to customize the environment to their preference. You can please some of the people some of the time, but you can’t please all of the people all of the time… unless you give them the tools to please themselves.
Neglecting the “easy to learn” aspect was a huge problem with Windows 8 and a big reason it failed. On the other hand, Windows 95 had a huge focus on the “easy to learn” aspect and was probably the most successful computing operating system of all time. Looking at the Windows 95 desktop, a beginner might think, “OMG, where do I start?” Then they’d see the button that literally says “Start” and instantly know it was going to help them figure out where to begin and what to do next. Today’s operating systems have nothing so obvious and easy to figure out, and thus require extra cognitive energy to understand what the designer intended. Interaction designers often don’t realize that, as the Nielsen Norman Group puts it, The Distribution of Users’ Computer Skills: Worse Than You Think (nngroup.com)!
Of course, not everyone needs or wants an operating system interface that is ridiculously easy. Some may want something that just looks good. That’s where extremely flexible themes and desktop environment options come in.
Windows 11 could have, and perhaps should have, been just a new theme that gives users a new desktop environment option to try. Power users should be able to select and create the environment theme that’s most useful to them, while other users should be able to select the environment that’s most attractive to them. Making the design “human” means accommodating both majorities and minorities.
Why now though?
Windows 10 was supposed to be the last version of Windows, one that would continue to evolve as a service while maintaining all of the things users had become accustomed to. Maybe Microsoft was thinking of Mac OS X, which stayed at version 10 from 2001 through 2020. Now that macOS 11 is out, I guess Microsoft wants to catch up. Although, the Windows as a Service concept had a lot of problems, too. See: “Windows as a Service” isn’t really working | Pocketnow
Another possibility echoes the thinking behind Windows Vista. One goal with that release was to kick-start the market into buying new computers capable of running the newest software. Windows 11 seems to be heading in the same direction, as it requires a TPM 2.0 module that many current devices may not have. This means many computers out now will not be upgradable to Windows 11, and you’ll need to get a new one in order to use the newest operating system. This forced obsolescence didn’t work out so well for Windows Vista, which has long been labeled one of the worst computing operating systems of all time. I was a beta tester for it, and I couldn’t even get it to boot until maybe two service packs after the official release. Many users stuck with Windows XP for years in order to avoid Vista… even through to the Windows 7 release! That’s how hard Vista backfired.
The “Centered on you” tagline seems to be a marketing thing as well. The centered design is clearly not something that’s going to be useful in terms of productivity, and it doesn’t even bring my personal information to the forefront the way the old Live Tiles of Windows 8 and Windows Phone did. It’s more of a regression to the old stale grid of icons that we had in Windows and Mac OS of the late 1980s, but not as good, since there’s no folder structure for organizing programs into categories.
I doubt that the backlash against Windows 11 will be as severe as it was for Windows Vista or Windows 8, but I definitely predict there will still be some. You’re already seeing it in the Windows Insiders Feedback Hub and in YouTube video comments. On the other hand, you’re also seeing comments about how Windows 11 looks awesome; those commenters are likely putting more weight on the “easy to look at” approach. Either way, it should be clear that theming and customization are probably the best solution for accommodating human-computer interaction design diversity. It makes a lot of sense, after all… we humans like to choose and customize our own clothes, furniture, cars, houses, yards, etc. Why shouldn’t we be allowed to choose and customize our own computer interface?
Maybe we’ll get some better design for diversity and efficiency in Windows 12.