Rochester Chapter Lean Interest Group

ATTN: Business and company leaders who aspire to excel,

I was recently elected by the Institute of Industrial and Systems Engineers (IISE) membership to serve as President of the Board of Trustees. During this year as President, I would like to celebrate the successes of Industrial and Systems Engineers and share the value the discipline brings to many organizations.

Meet John Kaemmerlen (BS and ME, Industrial Engineering and Engineering Management), Lecturer, Rochester Institute of Technology

Rochester Chapter Lean Interest Group
An idea for other chapters to try? Or improve?

“Rochester (NY) Chapter 44 of IISE has a long history of involvement and accomplishment. Thirty years ago, it was a very large chapter with hundreds of members who worked at Kodak, Xerox, Gleason, Bausch & Lomb, and a number of other western NY companies with a strong manufacturing history and presence.

Today’s Rochester chapter is much smaller (40 to 50 members). There are still members from local companies, many of which do their own manufacturing, and a good-sized contingent of professors from the Rochester Institute of Technology. The chapter’s current president, John Kaemmerlen, spent 31 years at Kodak before transitioning into a teaching position at RIT in 2007.

He and the Rochester chapter board, in the course of their strategic planning efforts and board meetings, have been searching for a way to energize the chapter. The board members have contacts around the community and were aware that a number of local companies are working to implement lean practices in their businesses. So an idea materialized: what if these companies had a means to help each other? And what if it didn’t matter whether the IEs who worked together in this to-be-defined approach were IISE members or not? Perhaps, once they saw the value of the help, they would be inclined to join IISE over time.

So an entity called the “Lean Interest Group” was formed. The group had a couple of initial meetings in 2014 to bounce ideas around, then held a more focused meeting where it defined a Mission and Vision as the basis for working together. The agreed-upon model is based on the principle of “we will take turns as students and teachers.” Say a member company is trying to implement lean, Six Sigma, TQM, TPM, business process reengineering, or the like, with the intent of improving some key business metric. This could be an attempt to solve a long-standing problem, or it could be an effort to seize an opportunity. Ideally, the group process works like this:

  • The company identifies a problem or opportunity
  • They complete a first cut at the first 3 sections of an A3 (current state, future state, gap)
  • They invite interested members of the lean interest group to their facility so the participants (hosts and helpers) can get familiar with the workplace and the specific business problem via “going to gemba”
  • The group meets in a conference room at the plant, discusses observations, asks questions, and offers ideas, focused on completing sections 4 and 5 of the A3 (root cause of the gap, and planned countermeasures)

Follow-on activity may or may not occur, at the discretion of the host company. The group feels that it is important for the host company to feel in control of the type of help they get, how much, and how quickly.

Companies that are represented on the team or have participated at some level to date are: Gleason, Harris RF, Optimax, PGM, Liberty Pumps, Orafol, Brunner International, Delphi, Wegmans (bakery), Thermo Fisher, Qualitrol, Ingersoll-Rand, Corning, and Gorbel. These are companies with a local presence, most of which do manufacturing, and none of which directly compete with each other (although IP issues are handled appropriately in group meetings).

The desired outcomes are:

  • For the host company – the perspectives of others, which may help them in attacking the problem or opportunity
  • For the volunteers – the opportunity to see another plant, see the tools they are using to run their business, and maybe leave with some ideas they can take back to their companies

The methodology was beta-tested in July of 2015. The group met at Gleason Works for 3.5 hours to work on a lead-time reduction effort in one of Gleason’s businesses. Ajay Khaladkar (Director at Large for the Rochester Chapter) did the prep work, and Paul Spencer and Bob Balme (managers at Gleason) served as the hosts. The chapter recognized and appreciated Gleason’s willingness to go first in this effort.

In January of 2016, the second round of activity occurred at another of the member companies, with a focus on TPM. Matt Jackson, another Rochester Director at Large, arranged this event. This company reportedly took some actions following the lean interest group meeting and saw some OEE improvements as a result. The third cycle is being planned at another company, with the target of holding a meeting in early August. The first draft of the A3 has been completed.

The Rochester Chapter of IISE typically provides food and refreshments as part of incentivizing participation.

Going forward, the group has agreed to a goal of holding an event about once per quarter.”

John Kaemmerlen
Chapter President
Rochester Chapter 44
jxkpdm@rit.edu

Industrial and Systems Engineers provide incredible value to any organization in any industry, and I am really excited to share these stories and inspire you and your company to hire ISEs.

Blessings to you all!

Best Regards,
Michael Foss
President, Institute of Industrial and Systems Engineers
www.iise.org

The Importance of Information in Information Technology

To paraphrase Peter Drucker, information technology tends to focus on the technology, not the information.

Instead of focusing on more data collection, greater storage capacity, or faster data transmission, the objective of IT needs to be giving people the information they need, when they need it, to make good decisions. Information presentation tools exist, but “dashboards” and reports may not contain the information people need or deliver it in a format they can readily use. Too often, delivering data in an easily digested format is mistaken for the key deliverable, ignoring the fact that the report may not be what key decision makers actually need.

Sometimes this mistake is compounded by collecting more types of information in the hope that it can be made useful. Big Data experts are well paid precisely because we gather too much information and then have to figure out what we actually need and how to deliver it so that knowledge workers can use it to make decisions.

According to Peter Drucker, many executives are so busy counting this and analyzing that that they forget any measurement is meaningless at best and counterproductive at worst if it is not done with the goal of helping the organization meet its mission. Collecting information you don’t need, analyzing it when there is no need to do so, and poring over reports looking for the answer to a question instead of having it provided on demand are all forms of wasted effort.

What do users need? Think of a statistical process control chart that gives a machine operator a clear warning that something is wrong and that they need to call for service, versus a screen full of metrics that leaves the operator trying to figure out whether to act at all, much less how.
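
To make that contrast concrete, here is a minimal sketch of the control-chart idea in Python. The measurements and limits are invented for illustration; it simply flags any reading outside the usual mean plus or minus three standard deviations computed from in-control history, yielding one unambiguous signal instead of a wall of numbers.

```python
import statistics

# Made-up in-control history for some machined dimension (inches).
history = [10.02, 9.98, 10.01, 9.97, 10.03, 10.00, 9.99, 10.02]

mean = statistics.mean(history)
sigma = statistics.stdev(history)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma  # upper/lower control limits

def check(measurement):
    """Return one clear, actionable signal for the operator."""
    if measurement > ucl or measurement < lcl:
        return "OUT OF CONTROL - call for service"
    return "in control"

print(check(10.01))  # in control
print(check(10.25))  # out of control: the operator knows exactly what to do
```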

When useful information isn’t readily available from existing reports, alter the reports or standardize their transformation so that answers are delivered simply and consistently. The only thing worse than someone having to analyze a report to get the answer is someone wasting time running that analysis and acting on the wrong information because of a mistake they shouldn’t have had the opportunity to make in the first place.
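
A standardized transformation can be as simple as one vetted function that turns the raw export into the answer itself. A sketch follows; the file name and column names are assumptions for illustration, not any particular system’s format.

```python
import csv

def items_needing_reorder(path="inventory_export.csv"):
    """Return the SKUs below their reorder point - the answer, not the report."""
    with open(path, newline="") as f:
        return [row["sku"] for row in csv.DictReader(f)
                if int(row["on_hand"]) < int(row["reorder_point"])]

# Every decision maker gets the same answer, computed the same way:
# print(items_needing_reorder())
```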

Industrial engineers need to be involved in the process of delivering information to users. You can’t rely on the technical wizards to determine what data to collect and report; I know from personal experience that their goals focus on their own areas of expertise. To them, statistics like website views, system uptime, network speeds, and data loads matter more than whether a key knowledge worker received the right report on time, or how many resources were spent converting exported reports into the final form other knowledge workers need in order to know what to do.

There are also times when the metrics IT wants to deliver aren’t the granular pieces of information needed to improve operations. For example, a 97% customer satisfaction rate won’t give you the specifics on the 3% who were unhappy, much less their demographics or the details of their cases necessary to make them happy. Counts of level 1 outage tickets versus level 4 enhancement requests don’t give managers the suggested report or user-interface improvements that would have a strong ROI if implemented as the next IT process improvement project.
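
The gap between the aggregate and the actionable detail is easy to see in code. A small sketch with an invented survey table: the headline percentage takes one line, but acting on the unhappy customers requires the underlying rows.

```python
import pandas as pd

# Invented survey data; the column names are assumptions for illustration.
surveys = pd.DataFrame({
    "customer":  ["A", "B", "C", "D"],
    "satisfied": [True, True, True, False],
    "region":    ["East", "West", "East", "West"],
    "case_id":   [101, 102, 103, 104],
})

print(f"{surveys['satisfied'].mean():.0%} satisfied")  # the headline number
unhappy = surveys[~surveys["satisfied"]]               # the actionable rows
print(unhappy[["customer", "region", "case_id"]])      # who, where, which case
```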

The solution isn’t yet more data to wade through but smarter, more efficient data collection and analysis.

The Retro-Future Presented by Cloud Computing

Larry Page once said the perfect search engine understands exactly what you mean and gives back exactly what you want. The first part of that statement is increasingly defined by voice SEO. The second half is determined by still-learning artificial intelligences, driven by both improving AI technology and data mining.

The original computing model, per Golden Age science fiction, was one city-sized computer running the world; every computer in the world was really a terminal connected to the world’s true computing hub. The internet that arose in the 1980s instead relied on connections among millions of smaller computers and many servers. It was distributed and democratic. Cloud computing brings economies of scale, and it is delivering centralization.

While there are still homebrew servers hosting Minecraft worlds and personal websites, even many individuals now pay for a small partition on a cloud server, or blog and store data for free on one. We are not yet back to the retro-futuristic vision of one massive city-sized computer, but we are approaching it as a few massive data centers handle most of the load. Personal computers and hosting won’t go away, but there is a point where Pareto’s law applies, rendering the small players nearly irrelevant. And we are at the point where most web searches, data storage, and computing are controlled by only a few major players.

Artificial intelligence requires significant investment in top talent and in hardware, software, and networking resources few can match. For that reason, we’ll treat the big names in AI as the few major firms that count: Amazon (Alexa), Microsoft (Cortana), Apple (Siri), and Google (Assistant). These companies are likely to remain in the lead because they already have ways to monetize voice search queries and, in most cases, the information appliances they sell. Whether the money comes from apps, advertising, content sales like Google Books, or product sales, their ability to profit from delivering products and services ensures they’ll remain dominant. The sheer amount of data they collect on consumers also lets them market and sell more effectively.

Whether we need to review the content-delivery biases and censorship of these few large players as they become similar to public utilities is a whole other debate. The fact remains that they are so large now that they nearly crowd out anyone else. Second-tier competitors survive by catering to niches these main companies choose to ignore or don’t fully serve.

Some social media sites are trying to reach that same level. Facebook’s and WeChat’s growth of apps that work entirely within their platforms, together with their procurement of unique content, has given rise to information ecosystems that keep people there for almost all their needs; it is an attempt to rival Google or Apple. However, without information appliances of their own and the solid income stream from a portion of almost everything sold to consumers, they may not have the resources to grow and reach the top tier.

The massive data collection by the biggest players, their Big Data analysis tools, and their top talent provide the information their AIs learn from. Their AIs learn to understand voice search faster, recognize shifts in conversational context more readily, collate data more effectively to make the ideal product suggestions, and predict individuals’ preferences more accurately. The greater convenience and improved performance these smarter AIs give consumers is why it is logical to predict that the big players will remain in their dominant positions, and that consolidation will happen among those few big players or through their acquisitions of second-tier companies.

And with every merger and purchase, we move closer to the one supercomputer that watches all, knows all, and tells us what it thinks we’d like to do or should do. That’s Page’s vision. The only difference between the future and the retro-future is that the AI lives in a cloud distributed across dozens of data centers, each the size of a small city, instead of in one computer the size of a large city.

As Search Engines Grow Smarter and People Less So, the Impact on Users and the Web

Amit Singhal, the head of Google Search, stated that the more accurate search results become, the lazier the questions users ask become. As the artificial intelligences behind search engines become smarter, people’s queries become more generic, less precise, and often simpler overall. Let’s look at these changes in greater detail and at their impact on both websites and users.

Location Data

When the search engine already knows roughly where you are from the geographic area in your profile, or exactly where you are from GPS coordinates, the end result is queries with far less location information. Instead of the query “cheap pizza near the intersection of X and Y,” the user asks for “cheap pizza” and assumes the results will only show pizza places near their current location. When they search for an emergency room, they may enter only “emergency rooms open now” without any location information, on the assumption that the nearest ones will be the only ones presented. And if location tracking is enabled on the device, they will be.
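
In effect, the engine supplies the location term the user omitted. A minimal sketch of that idea, with invented listings and coordinates: the device’s GPS fix, not the query text, restricts the results to nearby places.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3959 * asin(sqrt(a))  # Earth's radius is roughly 3,959 miles

# Invented listings; in practice these come from the engine's places index.
places = [
    {"name": "Main St Pizza", "lat": 43.157, "lon": -77.608},
    {"name": "Suburb Slice",  "lat": 43.210, "lon": -77.450},
]
device = (43.156, -77.609)  # GPS fix supplied by the phone, not typed by the user

# The bare query "cheap pizza" gets answered with only nearby results.
nearby = [p["name"] for p in places
          if haversine_miles(*device, p["lat"], p["lon"]) <= 2.0]
print(nearby)
```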

Another side effect of having devices navigate for us is reliance on those same devices to find our way to places, whether familiar or unfamiliar. Your website cannot simply say “we’re on the northeast corner of Main and Jones.” It needs a map capsule next to the address showing where the business is located. Directions for walking from the nearest major venue, or for driving from the highway to your parking lot, are essential for your website today, and increasingly for the users themselves. They are less able to navigate without the guidance of the device and often follow its directions as gospel, as the repeated news stories of people driving into the ocean or into narrow alleys where they get stuck demonstrate.
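
One common way to feed that map capsule is structured location markup. The sketch below emits schema.org LocalBusiness data as JSON-LD; the business name, address, and coordinates are invented for illustration.

```python
import json

# Invented business details; schema.org defines the LocalBusiness vocabulary.
markup = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Pizza Co.",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "100 Main St",
        "addressLocality": "Rochester",
        "addressRegion": "NY",
        "postalCode": "14604",
    },
    "geo": {"@type": "GeoCoordinates", "latitude": 43.1566, "longitude": -77.6088},
}

# Paste the output into a <script type="application/ld+json"> tag on the page.
print(json.dumps(markup, indent=2))
```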

Context

Initially detailed queries can become less detailed in follow-up questions because search engines remember the context. For example, an initial query of “what famous person did X?” followed by “when did he do Y?” answers the second question based on the identity of the person named in the first. A query about Apple iPhone error messages followed later by a query simply for “repair services” is likely to surface authorized Apple repair services or the nearest Apple store, because the engine remembers that context. This context-based help will only grow as AIs learn individual users’ habits and become better at recognizing context. To what degree people will start using privacy modes to keep embarrassing voice search results from coming up in front of company remains to be determined.
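
The underlying mechanism is simply state carried between turns. A toy sketch, with a hard-coded two-entry knowledge base standing in for a real engine: the session remembers whom the last answer was about and uses it to resolve a bare pronoun in the follow-up.

```python
import re

# Hard-coded stand-in for a real knowledge base: query -> (answer, entity named).
KB = {
    "who invented the phonograph": ("Thomas Edison", "Thomas Edison"),
    "when did Thomas Edison invent the phonograph": ("1877", None),
}

class Session:
    def __init__(self):
        self.entity = None  # whom the conversation is currently "about"

    def ask(self, q):
        if self.entity:
            q = re.sub(r"\bhe\b", self.entity, q)  # crude pronoun resolution
        answer, named = KB.get(q, ("no answer", None))
        if named:
            self.entity = named  # remember the entity for the next turn
        return answer

s = Session()
print(s.ask("who invented the phonograph"))        # Thomas Edison
print(s.ask("when did he invent the phonograph"))  # 1877, resolved via context
```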

When you’re creating content for these increasingly vague search queries, your context must be clear. You can’t rely on search-term density to rank well, because search engines are forced to determine the context of the query and the context of your content and try to match them. This makes latent semantic indexing more important: related terms should pepper the content so that its context is properly understood by the AIs behind the search engine.
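
Latent semantic indexing is essentially TF-IDF followed by a low-rank factorization, so related wording clusters together even without literal term overlap. A small sketch with an invented three-document corpus, using scikit-learn’s TruncatedSVD as the factorization:

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Two pages about phone repair in different words, plus an unrelated page.
docs = [
    "fix iphone screen repair service apple store",
    "cracked display replacement for apple phones",
    "best pizza toppings and crust recipes",
]

tfidf = TfidfVectorizer().fit_transform(docs)
concepts = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# The two repair pages should land much closer to each other in concept
# space than either does to the pizza page, despite little shared wording.
print(cosine_similarity(concepts[:1], concepts[1:]))
```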

Voice search is raising the priority of highly detailed queries like “What does X error message on a Y mean?” in addition to the shorter search terms “X error message” and “Y errors.” Even then, content that clearly identifies its context ranks better. Mixing detailed queries with vaguer ones also allows the content to rank well both for long-tail voice SEO and for more general follow-up questions on the topic. Websites have to alter their content to include the full questions users are asking in order to rank well with voice search.

Querying What We Used to Remember

It has been said that the smarter devices become, the stupider the users become. The more likely explanation is that unused skills atrophy: as we get used to having devices do things for us, the weaker those skills become.

More people rely on quick internet searches for information instead of trying to recall what they learned in school. This is part of the drive for instant answers and search engines’ preference for sites that provide them. Fewer people remember facts, and more of them will query for those facts instead of trying to remember them. And if you can always get the answer quickly through a query, why commit the information to memory in the first place?

If you’re creating content, you aren’t going to be able to compete with instant answers from higher-authority domains. You can still create content that answers the long-tail search terms and detailed queries few others will, such as trivia that doesn’t make it into Wikipedia and local knowledge few others document.

Another side effect of searching the internet for answers is the practice of using search engines as calculators. Whether it is asking what day is 19 days from today, converting units, or doing actual arithmetic, search engines are more often being used to do math. For very simple calculations like 583 divided by 4 or the 15% tip on a $15.99 dinner, there are long-tail queries that you can still monopolize. And you see more online calculators for the more complex calculations, no matter how obscure the niche, because of the growing demand and their value for long-tail search queries.
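
These instant-answer queries map to one-liners; the point is that users now type them into a search box instead. A quick sketch of the three examples above:

```python
from datetime import date, timedelta

print(date.today() + timedelta(days=19))  # "what day is 19 days from today"
print(583 / 4)                            # 145.75
print(round(15.99 * 0.15, 2))             # 15% tip on a $15.99 dinner: 2.4
```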