AI (Artificial Intelligence)

Artificial Intelligence and Continuous Improvement

Artificial Intelligence (AI) has the potential to significantly impact the field of continuous improvement by providing organizations with new tools and techniques to identify and eliminate waste and improve performance.  Some ways that AI can be used in continuous improvement include:

  • Predictive Maintenance: AI can be used to analyze sensor data from equipment and predict when maintenance is needed, reducing downtime and increasing equipment efficiency (a brief illustrative sketch follows this list).
  • Process Optimization: AI can be used to analyze data from production processes to find bottlenecks and inefficiencies and suggest ways to optimize performance.
  • Quality Control: AI can be used to analyze data from quality control systems to find patterns and trends in defects and suggest ways to improve quality.
  • Inventory Management: AI can be used to predict demand for products and optimize inventory levels, reducing waste and increasing efficiency.
  • Root-Cause Analysis: AI can be used to analyze data from various sources to find the underlying causes of problems and suggest ways to eliminate them.
  • Predictive Modeling: AI can be used to predict future trends and patterns, allowing organizations to make more informed decisions and plan for future challenges.
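
To make the predictive-maintenance bullet above a bit more concrete, here is a minimal, illustrative sketch of how sensor readings might be screened for early warning signs.  It assumes a Python environment with NumPy and scikit-learn available; the simulated data, the choice of sensor columns, and the use of an IsolationForest anomaly detector are my own assumptions for illustration, not a recommendation of a particular tool or model.

import numpy as np
from sklearn.ensemble import IsolationForest

# Simulated "healthy" sensor history for one machine:
# vibration (mm/s) and temperature (deg C).  Real data would come
# from the organization's historian or IoT platform.
rng = np.random.default_rng(42)
healthy = np.column_stack([
    rng.normal(2.0, 0.2, 500),   # vibration
    rng.normal(60.0, 3.0, 500),  # temperature
])

# Fit an anomaly detector on the healthy baseline.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(healthy)

# New readings; the last two drift upward in a way that could
# precede a failure and therefore warrant a maintenance check.
new_readings = np.array([
    [2.1, 61.0],
    [2.2, 62.5],
    [3.4, 71.0],
    [3.9, 75.5],
])

flags = model.predict(new_readings)  # -1 = anomalous, 1 = normal
for reading, flag in zip(new_readings, flags):
    if flag == -1:
        print(f"Possible maintenance trigger: vibration={reading[0]}, temp={reading[1]}")

The point of the sketch is not the model; it is that such flags only create value when they feed an actual maintenance workflow, which circles back to the data-quality and governance caveats below.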

It is worth mentioning that AI is not a silver bullet; its success in continuous improvement depends on the quality of the data it is fed, the quality of the model, and the expertise of the people using it.  Organizations must have a clear understanding of the problem they are trying to solve and the capabilities of the AI technology they are using.  Additionally, organizations should ensure that they have a robust data governance and management strategy in place so that the data used to train and operate AI models is accurate, reliable, and ethically sourced.

Continuous Improvement and Lean Failures

Lean manufacturing, also known as the Toyota Production System, is a method of production that aims to eliminate waste and increase efficiency.  Despite its popularity and success in many industries, lean manufacturing can also fail in certain situations.

One reason for failure is a lack of buy-in from employees.  Lean manufacturing requires a culture of continuous improvement, and without the support and participation of all employees, the system will not work effectively.  Additionally, if employees are not properly trained on the principles of lean manufacturing and how to apply them, they will not be able to identify and eliminate waste in their work processes.

Another reason for failure is a lack of focus on the customer.  Lean manufacturing is designed to create value for the customer, but if the focus is solely on reducing costs and increasing efficiency, the end product may not meet the needs and wants of the customer.  Additionally, if the customer’s needs and wants are not continuously monitored and incorporated into the production process, the product may become obsolete before it even reaches the market.

A third reason for failure is a lack of flexibility.  Lean manufacturing is based on the concept of flow, where materials and information move smoothly through the production process.  However, if the production process is too rigid and inflexible, it will not be able to adapt to changes in customer demand or market conditions.

To ensure success, it is important to remember that lean manufacturing is not a one-size-fits-all solution.  It must be customized to fit the specific needs of the organization and the industry in which it operates.  Additionally, it requires a long-term commitment and a continuous improvement mindset to achieve and maintain success.

Continuous Improvement and Six Sigma Failures

Six Sigma is a quality management methodology that aims to eliminate defects and improve overall performance by finding and removing the causes of problems. While Six Sigma has been implemented by many organizations with great success, it can also fail in certain situations.

One reason for failure is a lack of leadership commitment.  Six Sigma requires a strong leadership team that is committed to the methodology, invested in its success, and willing to drive the change throughout the organization.  Without this support, the Six Sigma initiative may not gain the momentum needed to achieve its objectives.

Another reason for failure is a lack of proper training and resources.  Six Sigma requires a significant investment in terms of time and money to properly train and equip employees with the necessary tools and skills to implement the methodology.  If employees are not properly trained and the company does not provide the necessary resources, Six Sigma may not be implemented effectively.

A third reason for failure is a lack of focus on the customer.  Six Sigma is designed to improve quality and efficiency, but if the focus is solely on internal processes and not on customer needs, it may fail to meet the customer’s requirements and may not result in customer satisfaction.

A fourth reason for failure is an over-reliance on data and statistical analysis.  Six Sigma relies heavily on data and statistical analysis to identify and solve problems.  However, if an organization relies too heavily on data and statistics, it may neglect the importance of human judgement, intuition and creativity in problem-solving.

To ensure success, it is important to remember that Six Sigma is not a one-size-fits-all solution, and it must be customized to fit the specific needs of the organization and the industry in which it operates.  Additionally, it requires a long-term commitment and a continuous improvement mindset to achieve and maintain success.

In conclusion, while lean manufacturing can be a powerful tool for increasing efficiency and reducing waste, and Six Sigma can be a powerful tool for improving quality and performance, the likelihood of failure is great if they are not implemented and executed properly.  Success depends upon several factors, including employee buy-in, customer needs, and flexibility, as well as leadership commitment, proper training, and a balance of data and human judgement.

The differences between Lean Six Sigma and Operational Excellence

Lean Six Sigma and Operational Excellence are both methodologies that aim to improve the performance and efficiency of an organization, but they have some key differences.

Lean Six Sigma is a methodology that combines the principles of Lean manufacturing and Six Sigma to eliminate waste and defects in an organization’s processes. It is focused on finding and removing the causes of problems using data and statistical analysis.  The goal of Lean Six Sigma is to improve the quality and efficiency of processes, resulting in cost savings and increased customer satisfaction.

Operational Excellence, on the other hand, is a broader methodology that aims to improve the overall performance of an organization by aligning all aspects of the business with the company’s vision and strategy.  It emphasizes the importance of leadership, culture, and employee engagement to drive continuous improvement and create a culture of excellence.  Operational Excellence is not limited to specific tools or techniques but aims to bring the entire organization together to strive for excellence.

In summary, Lean Six Sigma is a methodology that specifically focuses on improving the efficiency and quality of processes using data and statistical analysis, while Operational Excellence is a broader methodology that focuses on aligning all aspects of the business with the company’s vision and strategy and creating a culture of excellence throughout the organization.

Concerns I have about AI-generated content

As an experiment, the above sections were created using an AI text-generating app, which has left me rather unsettled.

There are several potential dangers associated with AI-generated content, including:

  • Misinformation: AI-generated content can be used to spread false or misleading information, especially in the form of deepfake videos and fake news.
  • Bias: AI models are trained on large amounts of data, and if that data contains biases, the model will likely produce biased content. This can perpetuate harmful stereotypes and discrimination.
  • Privacy concerns: AI-generated content can be used to create deepfake videos that depict individuals in compromising or embarrassing situations.  This can be a serious invasion of privacy.
  • Impact on jobs: As AI-generated content becomes more sophisticated, it may replace human jobs, particularly in the fields of journalism, content creation, and entertainment.
  • Impact on creativity: AI-generated content may be able to produce high-quality content, but it may also lead to a decrease in human creativity as the use of AI becomes more prevalent.
  • Impact on society: AI-generated content can have a significant impact on society, both positive and negative; it is important to be aware of the potential consequences and take steps to mitigate any negative effects.

It is important to note that AI-generated content has the potential to be a powerful tool for good, but it is crucial to be aware of the potential dangers and take steps to mitigate them.  This could be done by creating regulations, guidelines, and ethical rules that are followed by all the parties involved in creating and distributing AI-generated content.

Just kidding.  The previous section was also created using AI.

I wrote an article entitled “Deepfakes and the Uncanny Valley” in January of 2022 (exactly one year prior to this article).  In it, I expressed concern about the ability to manipulate, even fabricate, what we see and hear.  Unchecked, there is great peril.

In this exercise, I used ChatGPT to generate the content.  ChatGPT was created by an organization called OpenAI, which touts itself as “a research and deployment company.  Our mission is to ensure that artificial general intelligence benefits all of humanity.”

After the text was generated and the article created, the Microsoft Word Editor function gave an initial Editor Score of 86% for Professional Writing with the errors and suggestions being:

  • 5 Grammar Flags
  • 19 Clarity Flags
  • 3 Conciseness Flags
  • 3 Formality Flags
  • 16 Punctuation Conventions
  • 5 Vocabulary

Following the Editor guidance, I increased the Editor Score to 94%.

Also, 2% of the text was similar to text found online, which indicated a potential plagiarism violation.  But although similar, it was not close enough for me to consider it plagiarism (the construction of the sentence was not very close).

When I went to the source cited (see below) on Reddit, I found it was removed by the moderators (why, I do not know).  That none of the rest of the content in the AI-generated article was flagged for plagiarism is quite remarkable. 

Predictive Maintenance: AI can be used to analyze sensor data from equipment and predict when maintenance is needed, reducing downtime and increasing equipment efficiency.

Can AI replace engineering before architecture?   https://www.reddit.com/r/EngineeringStudents/comments/10d2df3/can_ai_replace_engineering_before_architecture/

There are apps that can determine if a text has been plagiarized, but none that I know of that can determine whether a text was AI-generated.  For me, this is problematic.  Whose thoughts and analysis are we reading: a person’s or a computer’s?

Just consider the impact on academic papers and peer-reviewed journals in particular.  What can we believe, and to what extent?  Are we intentionally being steered towards one point of view or another?  Just look at the tug-of-war that played out in reporting on COVID.  How many points of view were first deemed “fake news” and banned (along with the people sharing the information), only to be (eventually) determined to be correct?

Other than edits suggested by the Word Editor, headings, and this section which I wrote (or did I?), the only other changes I made to the generated text were to merge the conclusions for Lean and Six Sigma into one (which I did because the system generated each conclusion separately) and to add a single introductory line for the section “Concerns I have about AI-generated content” to create the illusion that I wrote the content in the section.

“To grunt and sweat under a weary life,
But that the dread of something after death,
The undiscovered country from whose bourn
No traveler returns, puzzles the will,
And makes us rather bear those ills we have,
Than fly to others that we know not of?”

– William Shakespeare, Hamlet, Act 3, Scene 1

In this passage, Hamlet refers to the afterlife as the “undiscovered country,” reflecting our lack of knowledge of it and our fear of it.  Be it text, or pictures, or video, or sound, this is how I feel with regard to AI-generated content.  I can see there is some benefit, but also great peril.

Who is the keeper of the truth?  The content we read might be generated by AI, and this AI-generated content might find its way into respected sources, where we might take it as fact when it is not.  Already we see the echo chamber, with one news source citing another without independent checking just to keep command of the news cycle.

Robots are robots, whether physical or in the realm of cyberspace.  Isaac Asimov, a professor at Boston University and a prolific writer of science fiction, proposed the Three Laws of Robotics to protect humans in their interactions with robots.  They are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Noble as these laws might be, my concern is that, throughout history and without exception, people have always used newly discovered technologies and capabilities to kill or otherwise harm one another.  Why should AI be any different?

It took me 15 minutes to create the AI-generated content in this article, another 15 minutes to clean it up using the editor, and a few hours of contemplation to write this summary of my experience with an AI content generator and share my thoughts – my thoughts…

And I can only imagine the proliferation of content whose quality is suspect, which will only add to the information noise, making it more difficult to differentiate what is real from what is fake, or even to make that determination at all.  How many “thought leaders” and “experts” will be conjured out of thin air, or cyberspace?  How many of them will be hired, or elected, into positions for which they are not qualified?  Not possible?  Just look at George Santos, who lied his way into being elected a Congressman.

Personally, I do not trust AI, mostly because I do not trust people and human nature with AI.  And if I had my druthers, I would require that all AI-generated content, whether used in its entirety or in part (like this article), be marked as such.

In the United States, the Federal Trade Commission (FTC) already has rules and regulations regarding “sponsored content,” which require the creator or publisher of the content to disclose any benefits received for sharing it.  It is time to apply the same to AI-generated content.

About the Author

Joseph Paris

Paris is an international expert in the fields of Operational Excellence, organizational design, and strategy design and deployment, and he helps companies become high-performance organizations.  His vehicles for change include being the Founder of the XONITEK Group of Companies, the Operational Excellence Society, and the Readiness Institute.

He is a sought-after speaker and lecturer, and his book, “State of Readiness,” has been endorsed by senior leaders at some of the most respected companies in the world.

Click here to learn more about Joseph Paris or connect with him on LinkedIn.
