The Memory Jogger 2: Tools for Continuous Improvement and Effective Planning. A Six Sigma mentoring guide, it presents a clear outline of the DMAIC model and guidelines for implementing specific teaching and leadership methodologies in an organization within a specific timeline. Its easy-to-follow, step-by-step approach makes the Six Sigma process transparent to users and can help increase your company's profitability.
Team techniques and collaborative decision-making tools, including the seven management and planning tools, are also included.
Read it cover to cover and refer to it often. This book uses a problem-solving model based upon a variety of data and knowledge-based tools.
The emphasis of this model is on root cause analysis and innovative solutions. Use this book as part of a self-study program or as a reference before, during, and after training to learn the concepts, methods, and basic tools for effective problem solving.
As a teaching tool for team members, it has no equal; there are numerous examples, illustrations, and tips throughout the book. Comprehensive yet concise, it is written from a training perspective so that every topic and every page goes quickly to the critical point of interest.
It is the perfect place for mentor and student to come together and begin to build new levels of Six Sigma expertise. The exam is open book and can be taken in your own time.

Each operator will measure two parts per batch and the results will be analyzed.
Nested refers to parts being unique to the operator. Depending on the test, the objective of the study would be to ensure operators can either discern between good and bad or rank a characteristic on a scale and get the correct answer. In this test, the operator is frequently the gauge.
1. Identify the thirty items to be evaluated.
2. Have a person who has knowledge of the customer requirements for this characteristic rate these items on the scale that is used in daily operations.
3. Identify the people who need to measure the items.
4. Have the first person rate all thirty items in random order and record these values.
5. Repeat step four so that each operator has a chance at a repeat measure, and record this data.
Several important measures from this test are identified in the following example.
Repeatability: the number of times an operator can repeat the measured value. If an operator measured thirty items twice and successfully repeated the measures twenty-six times, then he or she had an 87% repeatability score; each operator will have a success rate for their repeat measures.
Accuracy to the standard: the number of times an operator not only repeats the measures, but these repeats also match the known standard. Although an operator may have successfully repeated their measures, those repeats may not match the known standard; this implies that the operator may not understand the criteria for the known standard.
Reproducibility: the number of times all of the operators match their repeat measures. If three operators evaluate thirty parts and all of the operators match their repeats twenty-two times, then this is a 73% success rate.
Overall accuracy: the number of times all of the operators match their repeat measures and all of these measures match the known standards. If three operators match all of these measures twenty times out of thirty, then this is a 67% success rate. Typically, the solution is either training of the operators, better definition of the known standard, or an improvement to the environment in the area where the item is being measured.
Example: When a customer places an order, a system operator is responsible for finding information in a database and then transferring that information to the order form. Recently, customers have been receiving the wrong product, and the data suggests that the problem may be in transferring this information.
To do this, the Black Belt creates thirty different fake orders with known correct answers. Next, the operators are asked to find the required information in the database, to determine whether they can get the correct answer.
The answers to this test are included in the following chart. The individual attribute score measures each operator's error against the known population; it shows when the operator agrees with the known standard on both trials. The overall attribute score measures the total error against the known population; it shows when all operators agreed within and between themselves and also agreed with the known standard.
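These attribute agreement measures can be sketched in code. Everything below (operator names, pass/fail ratings, and the known standard) is illustrative, not taken from the example above:

```python
# Hypothetical attribute agreement data: each operator rates the same
# items twice on a pass/fail scale; 'standard' is the known correct answer.
standard = ["P", "F", "P", "P", "F", "P", "F", "P", "P", "F"]

trials = {
    "op1": (["P", "F", "P", "P", "F", "P", "F", "P", "P", "F"],
            ["P", "F", "P", "P", "F", "P", "P", "P", "P", "F"]),
    "op2": (["P", "F", "P", "F", "F", "P", "F", "P", "P", "F"],
            ["P", "F", "P", "F", "F", "P", "F", "P", "P", "F"]),
}

def repeatability(t1, t2):
    """Fraction of items on which an operator agrees with him/herself."""
    return sum(a == b for a, b in zip(t1, t2)) / len(t1)

def accuracy(t1, t2, std):
    """Fraction of items on which both trials match the known standard."""
    return sum(a == b == s for a, b, s in zip(t1, t2, std)) / len(std)

for op, (t1, t2) in trials.items():
    print(op, repeatability(t1, t2), accuracy(t1, t2, standard))

# Overall score: items where every operator repeated AND matched the standard.
n = len(standard)
overall = sum(
    all(t1[i] == t2[i] == standard[i] for t1, t2 in trials.values())
    for i in range(n)
) / n
print(overall)  # 8 of 10 items -> 0.8
```

The same counting logic extends directly to the thirty-item, three-operator studies described above.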
Process capability refers to the capability of a process to consistently make a product that meets a customer-specified specification range (tolerance). Capability indices are used to predict the performance of a process by comparing the width of process variation to the width of the specified tolerance. Process capability is used extensively in many industries and only has meaning if the process being studied is stable (in statistical control).
Short-Term Capability Indices
The short-term capability indices, Cp and Cpk, are calculated using the short-term process standard deviation. Because the short-term process variation is used, these measures are free of subgroup drift in the data and take into account only the within-subgroup variation. Cp is the ratio of the customer-specified tolerance to six standard deviations of the short-term process variation. Cp is calculated without regard to the location of the data mean within the tolerance, so it gives an indication of how the process could perform if the mean of the data were centered between the specification limits. Because of this assumption, Cp is sometimes referred to as the process potential. Cpk is the ratio of the distance between the process average and the closest specification limit to three standard deviations of the short-term process variation. Because Cpk takes into account the location of the data mean within the tolerance, it is a more realistic measure of the process capability. Cpk is sometimes referred to as the process performance.

Long-Term Capability Indices
The long-term capability indices, Pp and Ppk, are calculated using the long-term process standard deviation. Because the long-term process variation is used, these measures take into account subgroup drift in the data as well as the within-subgroup variation.
Pp is a ratio of the customer-specified tolerance to six standard deviations of the long-term process variation. Like Cp , Pp is calculated without regard to location of the data mean within the tolerance.
Ppk is a ratio of the distance between the process average and the closest specification limit, to three standard deviations of the long-term process variation. Like Cpk , Ppk takes into account the location of the data mean within the tolerance. Because Ppk uses the long-term variation in the process and takes into account the process centering within the specified tolerance, it is a good indicator of the process performance the customer is seeing.
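Under the definitions above, the four indices can be sketched as follows; the specification limits, mean, and sigma values in the demo are illustrative inputs, not the book's example data:

```python
# Capability index formulas as described above. sigma_st is the short-term
# standard deviation; sigma_lt is the long-term standard deviation.
def cp(usl, lsl, sigma_st):
    return (usl - lsl) / (6 * sigma_st)

def cpk(usl, lsl, mean, sigma_st):
    return min(usl - mean, mean - lsl) / (3 * sigma_st)

def pp(usl, lsl, sigma_lt):
    return (usl - lsl) / (6 * sigma_lt)

def ppk(usl, lsl, mean, sigma_lt):
    return min(usl - mean, mean - lsl) / (3 * sigma_lt)

# For a centered process, Cp equals Cpk:
print(cp(10, 4, 1))          # (10 - 4) / 6 = 1.0
print(cpk(10, 4, 7, 1))      # min(3, 3) / 3 = 1.0
# An off-center mean lowers Cpk but leaves Cp unchanged:
print(cpk(10, 4, 9, 1))      # min(1, 5) / 3 = about 0.33
```

Note that only the sigma changes between the short-term and long-term pairs; the drift between subgroups is captured entirely by the larger long-term standard deviation.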
Because both Cp and Cpk are ratios of the tolerance width to the process variation, larger values of Cp and Cpk are better: the larger the Cp and Cpk, the wider the tolerance width relative to the process variation. The same is also true for Pp and Ppk. A Ppk of 1.0 indicates that the process mean sits just three long-term standard deviations from the closest specification limit. However, a Six Sigma process typically has a short-term Z of 6 or a long-term Z of 4.5. The following mathematical formulas are used to calculate these indices:

Cp = (USL - LSL) / (6 x sigma short-term)
Cpk = min(USL - mean, mean - LSL) / (3 x sigma short-term)
Pp = (USL - LSL) / (6 x sigma long-term)
Ppk = min(USL - mean, mean - LSL) / (3 x sigma long-term)

Example: Five randomly selected spark plugs are measured in every work shift.
Each of the five samples on each work shift is called a subgroup. Subgroups have been collected for three months on a stable process. The average of all the data was 0. The short-term standard deviation has been calculated and was determined to be 0. The long-term standard deviation was determined to be 0.
Because Cp is the ratio of the specified tolerance to the process variation, a Cp value of 1. Any improvements to the process to increase our value of 1. Cp, however, is calculated without regard to the process centering within the specified tolerance.
A centered process is rarely the case so a Cpk value must be calculated. Cpk considers the location of the process data average. In this calculation, we are comparing the average of our process to the closest specification limit and dividing by three short-term standard deviations.
In our example, Cpk is 0. In contrast to the Cp measurement, the Cpk measurement clearly shows that the process is incapable of producing product that meets the specified tolerance. Any improvements to our process to increase our value of 0. Note: For centered processes, Cp and Cpk will be the same. Our Pp is 0. Because Pp is the ratio of the specified tolerance to the process variation, a Pp value of 0. Any improvements to the process to increase our value of 0.
Pp, however, is calculated without regard to the process centering within the specified tolerance. A centered process is rarely the case so a Ppk value, which accounts for lack of process centering, will surely indicate poor capability for our process as well.
Note: For both Pp and Cp, we assume no drifting of the subgroup averages. Ppk represents the actual long-term performance of the process and is the index that most likely represents what customers receive. In the example, Ppk is 0.
Business Process Example: Suppose a call center reports to its customers that it will resolve their issue within fifteen minutes. This fifteen-minute time limit is the upper specification limit. It is desirable to resolve the issue as soon as possible; therefore, there is no lower specification limit. The call center operates twenty-four hours a day in eight-hour shifts. Six calls are randomly measured every shift and recorded for two months.
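With only an upper specification limit, only the upper side of the Ppk ratio applies. A minimal sketch, with illustrative numbers rather than the call center's actual data:

```python
# One-sided capability: with no LSL, Ppk reduces to the upper-side ratio.
def ppk_upper_only(usl, mean, sigma_lt):
    return (usl - mean) / (3 * sigma_lt)

# Illustrative values: 15-minute limit, 12-minute average resolution time,
# 2-minute long-term standard deviation.
print(ppk_upper_only(15.0, 12.0, 2.0))  # (15 - 12) / 6 = 0.5
```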
An SPC chart shows the process is stable. The average and standard deviations of the data are calculated as before. These numbers indicate that if we can eliminate the between-subgroup variation, we could achieve a process capability (Ppk) of 0.

Graphical analysis is an effective way to present data. Graphs allow an organization to represent data (either variable or attribute) to evaluate central tendency and spread, detect patterns in the data, and identify sources of variation in the process.
The type of data collected will determine the type of graph used to represent the data. Described below are some common graphs for different data types.
Histograms
Histograms are an efficient graphical method for describing the distribution of data. However, a large enough sample (greater than fifty data points) is required to effectively show the distribution. Each bar is one class or interval; the frequency is the number of data points found in each class. The horizontal axis shows the scale of measure for the Critical To characteristic, and the vertical axis shows the frequency or percentage of data points in each class. The modal class is the class with the highest frequency. Software packages are available that will automatically calculate the class intervals and allow the user to revise them as required. The number of intervals shown can influence the pattern of the sample; plotting the data is always recommended. Three unique distributions of data are shown on the following page. All three data plots share an identical mean, but the spread of the data about the mean differs significantly.
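The class-interval arithmetic can be sketched as follows; Sturges' rule is one common convention for choosing the number of classes (software packages may use others), and the data set is illustrative:

```python
import math

# Minimal histogram binning sketch: pick the number of classes, divide the
# data range into equal-width intervals, and count the frequency in each.
def histogram(data, k=None):
    if k is None:
        k = 1 + math.ceil(math.log2(len(data)))   # Sturges' rule
    lo, hi = min(data), max(data)
    width = (hi - lo) / k
    counts = [0] * k
    for x in data:
        i = min(int((x - lo) / width), k - 1)     # top edge goes in last class
        counts[i] += 1
    return lo, width, counts

data = [2, 3, 3, 4, 5, 5, 5, 6, 7, 9]
lo, width, counts = histogram(data, k=4)
print(lo, width, counts)  # 2 1.75 [3, 4, 2, 1]
```

Here the second class is the modal class, since it holds the highest frequency.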
Tip: Always look for twin or multiple peaks indicating that the data comes from two or more sources (e.g., different shifts, machines, or suppliers). If multiple peaks are evident, the data must then be stratified.
Symmetric Data
This is a normal distribution, sometimes referred to as a bell-shaped curve. Notice that there is a single point of central tendency (the mean, median, and mode coincide) and the data are symmetrically distributed about the center. Some processes, however, are naturally skewed.

Negative-Skewed Data
This distribution does not appear normally distributed and may require transformation prior to statistical analysis. The median is between the mode and the mean, with the mean on the left. Data that sometimes exhibit negative skewness are cash flow, yield, and strength.

Positive-Skewed Data
The median is between the mode and the mean, with the mean on the right. This distribution is not normally distributed and is another candidate for transformation. Data that sometimes exhibit positive skewness are home prices, salaries, cycle time of delivery, and surface roughness.
Box Plots
The upper whisker limit is calculated as Q3 + 1.5 x (Q3 - Q1), where Q1 and Q3 are the first and third quartiles. The whisker line is drawn to the largest value in the data set below this calculated value. If there are data points above this value, they show up as asterisks to indicate that they may be outliers. The same is true for the lower whisker, with a limit of Q1 - 1.5 x (Q3 - Q1); the whisker line is then drawn to the smallest value in the data set above this calculated value.

Dot Plots
Multiple dot plots can be constructed for discrete levels of another variable. Notice how the dot plots for Divisions A and B lay above one another, making the dot plot an effective tool for comparing central location and variability within and between divisions.
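The 1.5 x IQR whisker convention described above for box plots can be sketched as (the data set is illustrative):

```python
import statistics

# Box plot whisker sketch using the 1.5 x IQR convention.
def whiskers(data):
    q1, _, q3 = statistics.quantiles(data, n=4)   # quartile cut points
    iqr = q3 - q1
    hi_limit, lo_limit = q3 + 1.5 * iqr, q1 - 1.5 * iqr
    upper = max(x for x in data if x <= hi_limit)  # upper whisker endpoint
    lower = min(x for x in data if x >= lo_limit)  # lower whisker endpoint
    outliers = [x for x in data if x > hi_limit or x < lo_limit]
    return lower, upper, outliers

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 100]
print(whiskers(data))  # 100 falls past the upper limit and is flagged
```

Note that `statistics.quantiles` defaults to the "exclusive" interpolation method; other packages may compute quartiles slightly differently, which moves the whisker limits slightly.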
Scatter Diagram The scatter diagram is used to determine whether a qualitative relationship, linear or curvilinear, exists between two continuous or discrete variables. The scatter diagram on the following page shows a strong positive relationship between the number of customers and the number of suppliers; as the number of customers increases, so does the number of suppliers.
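One way to quantify the strength of the relationship a scatter diagram suggests is the Pearson correlation coefficient; a sketch with illustrative customer/supplier counts:

```python
import math

# Pearson correlation coefficient: +1 is a perfect positive linear
# relationship, -1 a perfect negative one, 0 no linear relationship.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative paired data: as customers increase, so do suppliers.
customers = [10, 20, 30, 40, 50]
suppliers = [4, 8, 11, 17, 20]
print(pearson_r(customers, suppliers))  # close to +1: strong positive
```

As the Tip below notes, a strong coefficient still says nothing about cause and effect.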
Tip: The scatter diagram does not predict cause and effect relationships; it only shows the strength of the relationship between two variables. The stronger the relationship, the greater the likelihood that change in one variable will effect change in the other.

Run Charts
Run Charts allow a team to study observed data (a performance measure of a process) for trends or patterns over a specified period of time.
1. Decide on the process performance measure.
2. Gather data.
3. Create a graph with a vertical line (y axis) and a horizontal line (x axis).
4. Plot the data.
5. If there are no obvious trends, calculate the average (or arithmetic mean): the sum of the measured values divided by the number of data points. Draw a horizontal line at the average value.
Tip: Do not redraw this average line every time new data is added. Only when there has been a significant change in the process or prevailing conditions should the average be recalculated and redrawn, and then only using the data points after the verified change.
Interpret the chart. Is the average where it should be relative to a customer need or specification? Is it where the organization wants it to be, relative to the business objective? Tip: A danger in using a Run Chart is the tendency to see every variation in the data as being important. The Run Chart should be used to focus on truly vital changes in the process.
Simple tests can be used to look for meaningful trends and patterns. Remember that for more sophisticated uses, a Control Chart is invaluable; it is simply a Run Chart with statistically based limits. In the example, the trend is statistically significant because there are six or more consecutive declining points.

Pareto Charts
A Pareto Chart focuses efforts on the problems that offer the greatest potential for improvement by showing their relative frequency or size in a descending bar graph.
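The six-consecutive-declining-points test mentioned above can be sketched as:

```python
# Flag a run of six or more consecutive points that all decline.
# (The mirror-image check for climbing points works the same way.)
def has_trend(points, run_length=6):
    run = 1
    for prev, cur in zip(points, points[1:]):
        run = run + 1 if cur < prev else 1
        if run >= run_length:
            return True
    return False

print(has_trend([9, 8, 7, 6, 5, 4]))   # six declining points -> True
print(has_trend([5, 4, 6, 3, 2, 1]))   # longest decline is only 4 -> False
```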
Decide which problem to focus on. Using brainstorming or existing data, choose the causes or problems that will be monitored, compared, and rank-ordered. Choose the most meaningful unit of measurement such as frequency or cost.
Be prepared to do both frequency and cost. Choose the time period for the study. Look first at volume and variety within the data. Tip: Always include, with the source data and the final chart, identifiers that indicate the source, location, and time period covered. Compare the relative frequency or cost of each problem category. List the problem categories on the horizontal line and frequencies on the vertical line.
List the unit of measure on the vertical line. (Optional) Draw the cumulative percentage line showing the portion of the total that each problem category represents, filling in the remaining percentages drawn to scale. Interpret the results: the tallest bars represent the largest contributors to the overall problem, so dealing with these problem categories first makes common sense.
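The descending sort and cumulative percentage line can be sketched as follows; the category names and counts are illustrative:

```python
# Pareto chart arithmetic: sort categories in descending order of
# frequency, then accumulate the percentage of the total.
counts = {"wrong part": 42, "late entry": 25, "typo": 18, "other": 15}

ordered = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
total = sum(counts.values())

cum_pct, running = [], 0
for name, n in ordered:
    running += n
    cum_pct.append(round(100 * running / total))
    print(f"{name}: {n} ({cum_pct[-1]}% cumulative)")
```

The first bar alone accounts for 42% of the total here, and the first two for 67%, which is exactly the prioritization the chart is meant to make visible.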
But the most frequent or expensive category is not always the most important. Always ask: what has the most impact on the goals of our business and customers? Example: Consider the case of HOTrep, an internal computer network help line.
The parent organization wanted to know what problems the people calling in to the HOTrep help line were experiencing. A team was created to brainstorm possible problems to monitor for comparison. They chose frequency as the most important measure because the project team could use this information to simplify software, improve documentation or training, or solve bigger system problems.
HOTrep help line calls were reviewed for ten weeks, and data from these calls was gathered based on a review of incident reports (historical data).

The Pareto Chart variations used most frequently are: Before and After, which can be drawn as one chart or as two separate charts; Change the Source of Data, in which data is collected on the same problem but from different departments, locations, equipment, and so on, and shown in side-by-side Pareto Charts; and Change Measurement Scale, in which the same categories are used but measured differently.

Graphical Summary
Quantitative inferences about the data set can be made by analyzing the many statistics that a graphical summary provides.
Described below is the information contained in the summary. The bars in the histogram are the actual data, and to the right of the figure is the summary information. N is the number of observations. Because the p-value of the normality test is greater than 0.05, we can assume the data comes from a normal distribution. Skewness measures the asymmetry of the distribution; departures from zero can indicate non-normality.
Positive kurtosis indicates a sharper peak than normal; negative kurtosis indicates a flatter peak. This example is very near zero.

Normal Probability Plots
A normal probability plot (NPP) is used to graphically and analytically perform a hypothesis test to determine if the population distribution is normal. The NPP is a graph of calculated normal probabilities vs. the data values. A best-fit line simulates a cumulative distribution function for the population from which the data is taken. Data that is normally distributed will appear on the plot as a straight line. In this example, the plotted points fall close to the line and the p-value is greater than 0.05; therefore, the data is from a normal distribution. Additional distributions have been plotted below. Notice the departure of the plotted data from the line for the positive-skewed and negative-skewed distributions.
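A sketch of the NPP calculation, pairing each sorted value with the normal quantile of its plotting position; the (i - 0.375) / (n + 0.25) plotting-position formula is one common choice among several, and the data is illustrative:

```python
import statistics

# Normal probability plot points: each sorted data value is paired with
# the standard normal quantile of its plotting position. A roughly
# straight line of (quantile, value) pairs suggests normality.
def npp_points(data):
    xs = sorted(data)
    n = len(xs)
    nd = statistics.NormalDist()
    return [(nd.inv_cdf((i - 0.375) / (n + 0.25)), x)
            for i, x in enumerate(xs, start=1)]

points = npp_points([4.1, 5.0, 4.7, 5.3, 4.9, 5.1, 4.5])
for q, x in points:
    print(round(q, 2), x)
```

Plotting these pairs (quantile on one axis, data value on the other) and fitting a line reproduces the NPP described above; curvature at the ends signals skewness.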
Multi-Vari Charts
Multi-Vari studies help the organization determine where its efforts should be focused. Given either historic data or data collected from a constructed sampling plan, a Multi-Vari study is a visual comparison of the effects of each of the factors by displaying, for all factors, the means at each factor level.
It is an efficient graphical tool that is useful in reducing the number of candidate factors that may be impacting a response (y) down to a practical number. Each cluster (shaded box) represents three consecutive parts, each measured in three locations. Each of the three charts represents a different process, with each process having its greatest source of variation coming from a different component. In the Positional Chart, each vertical line represents a part, with the three dots recording the three measurements taken on that part.
The greatest variation is within the parts. In the Cyclical Chart, each cluster represents three consecutive parts; here, the greatest variation is shown to be between consecutive parts. The third chart, the Temporal Chart, shows three clusters representing three different shifts or days, with the largest variation between the clusters. The positions within each part were taken at random and were unique to that part; position 1 on part 1 was not the same as position 1 on part 2.
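A Multi-Vari comparison of factor-level means can be sketched as follows; the records (shift, part, position, measurement) are illustrative, not from the charts above:

```python
# Multi-Vari sketch: for each factor, compute the mean response at each
# of its levels, then compare the spread of those level means. A factor
# whose level means are spread widely is a leading candidate source of
# variation. (Parts are nested within shifts here, so part-to-part spread
# inherits the shift-to-shift differences.)
records = [
    ("shift1", "p1", "top", 10.2), ("shift1", "p1", "mid", 10.0),
    ("shift1", "p2", "top", 10.1), ("shift1", "p2", "mid", 10.1),
    ("shift2", "p3", "top", 11.1), ("shift2", "p3", "mid", 11.1),
    ("shift2", "p4", "top", 11.0), ("shift2", "p4", "mid", 11.2),
]

def level_means(records, index):
    groups = {}
    for rec in records:
        groups.setdefault(rec[index], []).append(rec[-1])
    return {level: sum(v) / len(v) for level, v in groups.items()}

for name, idx in [("shift", 0), ("part", 1), ("position", 2)]:
    means = level_means(records, idx)
    spread = max(means.values()) - min(means.values())
    print(name, {k: round(v, 2) for k, v in means.items()}, round(spread, 2))
```

In this made-up data, the shift means differ by a full unit while the position means barely move, pointing the investigation at the temporal component.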