The Value of Focus in Business

Businesses are efficient only when they have a singular focus. That focus can have several facets, such as providing the highest-quality service while hiring local people, or providing employment for a specific population. A more common variation is providing good service at a low cost, or the best service as quickly as possible.

What are some mistakes businesses make when they lose their focus?

  • Trying to serve a broader market while losing focus on their core market, often with lower ROI
  • Trying to dominate broad key search terms that come at a higher price instead of narrower search terms that cost less and are easier to dominate
  • Attempting to implement local SEO based on a large geographic area, such as referencing all cities around you, instead of keeping the local search as focused as possible
  • Adding more tools, features and reports to your product instead of focusing on the most valuable feature improvements to your user base or more thorough product testing
  • Seeking industry certifications, such as sustainability labels or ISO standards, that have little value to the customer base simply because they are the “in” certifications to have, then investing time and effort to maintain the certification instead of in value-added activities
  • Adding and subtracting features based on what rival products have regardless of what your customers want or the quality of the product after these features are added
  • Trying to expand your customer base while neglecting those who buy most of your products, often ignoring the niche uses you could market to without hurting the main market or changing the product. A classic failure is dumping a profitable customer segment because they aren’t the young adults many companies think they have to cater to
  • Trying to increase sales through the addition of new products regardless of their suitability to your customer base, instead of entering complementary marketing agreements
  • Collecting as much data as possible in the hope of it being useful instead of determining what information is needed to make good decisions and collecting only what is necessary

Why Slow and Steady Wins the Innovation Race Almost Every Time

Everyone seems to want radical innovation. It is seen as so desirable that minor improvements and re-branding of existing products get hyped as the next big thing. Let’s look at the reasons why slow, steady, incremental innovation is almost always the better business plan.


  • Most radical innovations fail. Companies compound the risk by putting all their resources into the big new idea, so if the product doesn’t succeed, the company may fail with it.
  • Even when a radical innovation succeeds financially, it may take decades to be accepted and become highly profitable.
  • Incremental innovations are common, and you may already have a wealth of such ideas in your organization that were ignored because managers were looking for a “big” idea. While you wait for one, you fail to improve in any area.
  • A series of incremental innovations can lead to dramatic savings in cycle time, quality, manufacturability and every other area of a product’s manufacture. Lowering ongoing costs for existing bread-and-butter products increases profits now.
  • By pursuing the ready opportunities for incremental improvement, you see savings or gains quickly, often at low cost and with a high return on investment. In contrast, the big new thing may be expensive to develop and may never pay off.
  • Incrementalism allows for A/B testing of ideas, whether complementary products or new features. Offering a new combined product alongside the current one lets you see whether the new one is actually better for the market without a major expenditure.
  • There is a bad tendency to treat radical innovation as “one and done”. The designers rest on their wilting laurels while others come out with a similar product or service with incremental improvements and take over the market.
  • When a company succeeds with a radical idea, it tends to focus on finding the next big innovation instead of improving its current “big” idea or making incremental improvements to other products. And lightning may not strike a second time.
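The A/B testing point above can be sketched in code. This is a minimal illustration, not any particular vendor’s API: the function name `ab_bucket` and the variant labels are invented for the example. Hashing a stable user id together with the experiment name gives each user a consistent bucket without storing any assignment state.

```python
import hashlib

def ab_bucket(user_id: str, experiment: str,
              variants=("current", "new")) -> str:
    """Deterministically assign a user to an experiment variant.

    Hashing the user id with the experiment name yields a stable,
    roughly uniform split: the same user always sees the same
    variant, and different experiments split users independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user always lands in the same bucket for a given experiment,
# so their experience is consistent across sessions.
assert ab_bucket("user-42", "combined-product") == ab_bucket("user-42", "combined-product")
```

Because assignment is a pure function of (user, experiment), you can roll the test out or back without migrating any stored state.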

Signs of Good Design in UX

What are the signs you have a good design in terms of user experience or UX?


  • The suggestions you receive are requests to add new features that aren’t part of your primary purpose. In short, the product does what it needs to do, and all the suggestions are nice-to-haves rather than essentials.
  • The complaints you receive concern factors outside the interface itself, like speed, price or interoperability. You may still want to streamline code, reduce the number of images or add caching, but it isn’t essential to delivering for most clients.
  • The suggestions you receive affect a small percentage of the user base, such as complaints about how the app works on BlackBerry or Windows phones. The ideal UX is nearly universal across devices.
  • The complaints are literally cosmetic, like quibbles with the color palette – unless the problem is that the roughly 8% of men who are color blind cannot figure out key functions.
  • Your user interface relies on patterns of behavior your users already know, even as you change the functions and code behind the interface.
  • Your user interface doesn’t require constant attention from the user, or perfect knowledge of the app’s current state, in order to work right. In short, your app doesn’t expect people to abandon their human failings for it to work correctly.
  • People who aren’t fluent in your language can use your app, and they never have to learn special lingo or niche technical terms to use your software user interface.
  • You don’t put up hindrances or barriers to someone’s use of a function unless they’d want to stop and think about it, like deleting their saved files or cached passwords.
  • The user interface will be as easy to use and interpret in ten years as it is today. Whether new or five years old, it can be seen as classic and eternal.
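The color-blindness point above has a concrete, checkable counterpart: luminance contrast. Color-blind users can still distinguish elements that differ in brightness, and WCAG 2.x formalizes this as a contrast ratio (level AA requires at least 4.5:1 for normal text). Below is a sketch of that calculation using the sRGB linearization from the WCAG definition; the function names are my own.

```python
def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel to its linear-light value (WCAG formula)."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb) -> float:
    """Relative luminance of an (r, g, b) color, 0.0 (black) to 1.0 (white)."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b) -> float:
    """WCAG contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(color_a), relative_luminance(color_b)),
        reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible contrast, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

A check like this can run in CI against your palette, catching color pairs that fail the 4.5:1 threshold before a color-blind user ever hits them.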


Implementing Resilience Engineering in IT

A Definition of Resilience Engineering


Resilience engineering means designing systems and equipment to fit human nature. One classic sign that something is wrong with a manufacturing process is when the people who work there look like they work out, because the job uses their bodies that hard and that long. A process like this will fail whenever someone is not strong enough, or too tired, to do what it requires. Other problems arise when we assume people fit the process by acting just like the machines. That designer-created fault is what resilience engineering seeks to correct.

Resilience engineering goes beyond poka-yoke, or mistake-proofing – designs that allow only one way a product can be assembled, or safety buttons that must be held down while someone operates a press. It designs the equipment or operation to fit human nature so that it is almost impossible to make a mistake.


How to Implement Resilience Engineering


Your equipment and processes must also be designed to suit the human mind. A process cannot rely on humans having perfect memory or being fully attentive and alert for an entire shift. People forget confusing procedures, get distracted (often by parallel processes) and get bored doing the same thing. Sometimes they cannot keep up with and prioritize the constant stream of notices and alerts competing for their attention, so they don’t know what to do or take the wrong action.

I’ve previously written about human attention as the most limited resource in the modern era. Processes are regularly created that assume pop-up informational notices and warnings add value, neglecting the time it takes for someone’s attention to shift back to the task at hand and the serious distraction that the constant stream of pop-ups and notification beeps creates. For example, a user interface that throws up so many informational notices that someone may not see an urgent warning for some time has created its own failure mode.

When the system generates many competing notices of varying priorities, it creates distraction and confusion that increase the odds of failure. Or users get in the habit of closing pop-ups indiscriminately – useless informational notices, marginally useful messages and critical warnings alike. All of these cases are the opposite of mistake-proof design or resilience engineering.
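One way to avoid manufacturing this failure mode is to triage notifications so that only critical alerts interrupt the user, while everything else lands in a digest reviewed on the user’s own schedule. A hypothetical sketch follows; the `Notifier` class and priority levels are invented for illustration, not taken from any real framework.

```python
from dataclasses import dataclass, field
from enum import IntEnum

class Priority(IntEnum):
    INFO = 0
    WARNING = 1
    CRITICAL = 2

@dataclass
class Notifier:
    """Interrupt the user only for critical alerts; queue the rest.

    Batching low-priority notices keeps the single interruption
    channel reserved for messages that genuinely need immediate
    attention, so users never learn to dismiss pop-ups reflexively.
    """
    interrupts: list = field(default_factory=list)
    queued: list = field(default_factory=list)

    def notify(self, message: str, priority: Priority) -> None:
        if priority is Priority.CRITICAL:
            self.interrupts.append(message)          # surface immediately
        else:
            self.queued.append((priority, message))  # hold for the digest

    def digest(self) -> list:
        """Return queued notices, most important first, and clear the queue."""
        batch = sorted(self.queued, key=lambda item: item[0], reverse=True)
        self.queued.clear()
        return [message for _, message in batch]
```

The design choice worth noting is that priority is decided by the sender once, at the point of emission, rather than leaving the user to re-triage every pop-up under time pressure.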

Another problem arises when work systems put too much intellectual burden on the employee. For example, systems that assume people can switch gears instantly while multi-tasking, and then give them multiple tasks to do simultaneously, increase the odds someone will make a mistake. Perhaps they forget what they were doing and fail to return to it, or they return but miss steps, or they continue the actions they were performing but apply them to the wrong item. Overwork leads to fatigue and errors, yet systems are typically designed as if people don’t get tired at the end of a shift or when working overtime. Demanding people work from home or on the go doesn’t solve this problem: shifting attention away from personal affairs or driving can lead to a hasty, incorrect decision so they can get back to what they were doing.

These are the times people just select the default option or the first auto-fill suggestion and move on. You can reduce the errors by requiring attention checks, disallowing auto-fill on critical tasks that require care and reducing distractions. “Are you sure?” pop-ups are of little value here, because they are clicked and closed as thoughtlessly as the other selections the person made.
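A stronger attention check than an “Are you sure?” button is to make the user retype the name of the thing they are about to destroy, a pattern some services use before deleting a repository or database. A hypothetical sketch (the function name and resource names are invented for the example):

```python
def confirm_destructive(resource: str, typed: str) -> bool:
    """Gate a destructive action behind retyping the resource name.

    A one-click confirmation dialog gets dismissed on autopilot;
    retyping the exact name cannot be done without reading it,
    so it works as a genuine attention check.
    """
    return typed.strip() == resource

# A reflexive or careless response is rejected:
assert not confirm_destructive("prod-db", "")
assert not confirm_destructive("prod-db", "prod")

# Only deliberately retyping the exact name lets the action proceed:
assert confirm_destructive("prod-db", "prod-db")
```

Note that the check is forgiving of surrounding whitespace but nothing else; the friction is the point, so the comparison should not be made fuzzy.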

When the default solution in a company is to blame the people who made the mistakes and retrain them, it prevents the root cause analysis that would show the bad process is to blame. In fact, the end result may be altering the process the person followed to make it more complex and training that person in the new process, but all too often failing to train the other employees in it. Now the solution for one user nearly ensures mistakes by the others.

Knowledge-based errors include applying the wrong procedure when an error occurs and not knowing what to do at all. The former occurs when someone can’t figure out what an error message means. The latter may be solved by training, but it can also occur when someone seeks help and can’t find it. I’ve even seen this error on help desks where company policies punished seeking subject-matter-expert advice or escalating tickets to a higher level. The results ranged from first-level tech support applying the wrong process because they couldn’t ask whether it was the right one, to spending extensive time troubleshooting a matter the expert could have solved in a fraction of the time.

The unavailability of knowledge workers can also force users into knowledge-based errors at their own level, because they couldn’t get an expert opinion on the right course of action. Managers and knowledge workers make knowledge-based errors themselves when users and lower-level employees don’t give them all the information for fear of the consequences. When reporting bad news is considered bad, the problem gets worse before it gets solved. A climate of fear or scarcity thus creates the environment for more mistakes.

Rules and procedures are often based on legal compliance, even when they don’t fit the work environment. People then get in the habit of violating the rules to get the work done, which increases the odds they start breaking critical rules to do things outside the standard process. Think of users jailbreaking their phones to install the software they want, or adding people to project roles first and asking for permission afterward because a higher-level manager told them to do X.

Rules and procedures can become a legalistic straitjacket with no way out through logic and intelligent action – for example, when a user can’t confirm they are who they say they are because a single situation defeats both of the competing verification processes. You must have a formal process for handling exceptions and rule conflicts, or people will go outside the formal process to do their jobs.

Access control limits often hamstring workers, leading to complex rules to determine who should have access and work-arounds by employees trying to do their jobs. The real solution is streamlining rules and regulations and simplifying the system, but the default solution is adding one more loop on a process chart that already looks like a bowl of spaghetti spilled on a table.

Your processes should be as simple as possible, but no simpler. For example, a website that refers people to a phone number if they have problems and a phone number that takes you only to a recorded message that they should go to the website is simple – and a failure. This isn’t a hypothetical scenario – I actually had to deal with it once.

Sometimes the solution seems to be to go outside the rules, such as when someone tries to implement a fix or work-around. This can create new problems, some of them major, such as when someone restarts a service without telling others or installs a software patch without testing it thoroughly. The better solution is a formal process for testing improvements and new solutions in a deliberate, controlled manner and updating all processes when a change is found to be an improvement.

Sometimes the solution is supposedly to “go look at the process” and “update the process document”. Then users run into problems because they weren’t notified the process changed: they are running off an old process and may call tech support asking why the process they are accustomed to isn’t working right.


Design your IT processes from software interfaces to user support to take human failings into account. Design processes that don’t require humans to be machines, such as demanding 100% attention, incredible reaction time, data processing skills akin to a computer or perfect memory. Have formal processes in place to handle the exceptions and odd events without making the standard processes insanely complex.

Do take the time to train users, but also take a look at your processes to see if you can make them simpler … and then train users on the new processes to avoid new problems. Ensure that people have access to the knowledgeable experts and documents they need to make the right decision, and don’t throw too much information or distractions at them or they are sure to make more mistakes.