Most project managers would say that cost and schedule are the two most important measures of a project’s performance. These two indicators, however, fail to provide the whole picture of whether a project will be successful or not. Delivering the project’s output on time and within the budget is not enough: you must also ensure that it meets the quality standards defined by both external and internal stakeholders.

Quality control is the process of minimizing product defects, reducing the waste of resources, and increasing the reliability of your project output. To help you with these three goals, this article presents a detailed and comprehensive guide with multiple techniques to address different concerns related to quality management.

Similarly to cost and schedule control, quality control is cyclical and builds on previous knowledge to improve the current processes. There is no “end” to quality control, and there is always something new that can be learned from the current and from past projects.

1. Starting off on the right foot: The requirements of a successful quality control and monitoring process

Before we start discussing the many techniques and tools that can support the quality control and monitoring processes, we must briefly present which documents, metrics, and data must be available for an efficient control process to take place.

The first document you need to have readily available is the most up-to-date version of your Quality Management Plan. Part of the project management plan, the quality management plan defines in detail how the quality control and monitoring process is to be implemented. While this document can be fairly broad and light on detail, it’s highly recommended that it includes at least clear and specific directions on how to properly conduct the quality control activities. More specifically, it should define which intermediate and final outputs should be inspected, when such inspections or monitoring processes should take place, which technique is most appropriate to control the quality of a specific output, and who is responsible for each quality control, among other crucial bits of information.

In addition to the quality management plan, it’s also important to gather the planned quality metrics. In simple terms, a quality metric is a detailed and specific description of a single project attribute, as well as the expected quality standards and how they are going to be measured and monitored. The measurement specifications in the quality metrics are normally linked to quality checklists, a powerful tool to ensure that all the necessary quality dimensions of the targeted project attribute are measured.

Last but not least, you want to make sure that the work performance measurements are available and ready for analysis. While some of the tasks related to quality control and monitoring include the data collection itself in their scope, others are targeted at analyzing datasets that are collected over time (for example, employee productivity and punctuality). Having the data already cleaned, organized, and ready for analysis makes the quality control tasks considerably more efficient and precise.

2. The tools and techniques for better quality control and monitoring

Now that we have several documents and inputs containing the information we need to compare actual and planned project outputs, it’s time to dive deeper into the different tools that help us actually do that. While some of the tools offer numerical analyses (such as Statistical Sampling, Pareto Diagrams, and Control Charts), others are more descriptive and involve a more qualitative approach (such as the Fishbone Diagram and Manual Inspection).

2.1 Cause and effect diagrams (fishbone diagram)

The first tool we will cover is the Cause and Effect Diagram (also known as Ishikawa or Fishbone Diagram). It focuses on identifying how different factors relate to a certain problem in the execution of your project. The basic idea of this tool is to go beyond the apparent cause of a defect and identify its root cause. Figure 01 presents the general form of the Fishbone diagram.

Figure 01: The General Shape of a Fishbone Diagram (also known as Ishikawa or Cause and Effect Diagram)

There are six major categories that accommodate the vast majority of causes for production-related issues. Some articles and books suggest two extra dimensions, Time and Energy, but they are quite abstract in nature and hard to investigate. In addition, time and energy are normally related to waste rather than to actual defects in the project’s output. For these two reasons, we prefer to leave these two components out of the model and stick to the more concrete and generally accepted categories under which the causes can be classified. Let’s briefly go through each of them to understand what types of problems should be listed under each category.

  • Measurement: The measurement category involves causes related to the data that is collected during the execution of the process and to its processing.
  • Machine: The machine category involves causes related to machine issues, such as maintenance periods, whether the machines are up to date and working properly, whether there is enough equipment to produce the required output, among other concerns.
  • Material: The material category refers to the raw material that is received and used during the production process. Common causes for defects include wrongly supplied or below-quality material, lack of proper storage conditions, and excessive waste of raw material.
  • Methods: The methods category refers to the production process itself. Poorly designed product flow and undocumented and inaccurate production processes are two of the most common causes for defects under this category.
  • Manpower: The manpower category refers to all issues related to the personnel involved in the production process. High absence rates, lack of proper training, and low employee motivation are just a few examples of issues that fall under this category.
  • Environment: The environment category involves all the environmental factors of the production site. Noise levels, lighting, smell, and layout of the office or of the production plant normally appear here.

Below we provide a more detailed Ishikawa diagram, including some of the factors we mentioned above. Notice that, under the “Methods” section, we went one step further and identified some of the possible causes of a poorly designed production process. That is exactly the idea of the Fishbone diagram: to dig deep and discover the real cause of the defect under analysis. Due to limited space, we went only up to the third level, but you should be more precise and create more elaborate diagrams.

Figure 02: Practical Example of a Fishbone Diagram

Even though our examples are based on an actual production plant in a manufacturing industry, the Fishbone diagram is extremely flexible and just as valid for the services sector. If you think about a software development company, the many causes of issues could include inaccurate work tracking (measurement); old, slow, and outdated computers (machine); outdated IDEs, servers, and other development-related supporting software (material); a poor issue tracking system (methods); lack of proper skills for the job (manpower); and poorly designed working offices (environment).

The single most important takeaway for this method is the simple process behind its execution:

  1. Choose a specific enough defect that you would like to eliminate.
  2. Ask “why” and “how” questions, and repeat this step several times (the 5-Why Technique suggests asking the questions at least five times, each time digging deeper into the root cause of the problem).
  3. Classify your answers under the six main categories provided by the Fishbone Diagram, and represent them graphically to facilitate visualization.
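
To make these three steps concrete, here is a minimal sketch in Python that stores the answers to the “why” questions as a nested structure and prints the resulting tree. All the causes listed are hypothetical placeholders; replace them with the answers from your own investigation.

```python
# Minimal sketch: an Ishikawa diagram as a nested dict, with hypothetical causes.
# Each level of nesting corresponds to one more round of "why" questions.
fishbone = {
    "defect": "High rate of defective units",
    "causes": {
        "Measurement": {"Inaccurate gauges": {}},
        "Machine": {"Overdue maintenance": {}},
        "Material": {"Below-quality raw material": {}},
        "Methods": {
            "Poorly designed product flow": {
                "No documented process map": {},  # third-level cause
            },
        },
        "Manpower": {"Insufficient training": {}},
        "Environment": {"Poor lighting on the shop floor": {}},
    },
}

def print_branch(causes, depth=1):
    """Recursively print each cause, indenting one level per "why" asked."""
    for cause, deeper_causes in causes.items():
        print("  " * depth + "- " + cause)
        print_branch(deeper_causes, depth + 1)

print("Defect:", fishbone["defect"])
print_branch(fishbone["causes"])
```
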
2.2 Control charts

The next tool to be covered here is the Control Chart. Control charts are an effective way of identifying whether a process is producing stable and reliable outputs. Their application range is not limited to repetitive processes that have a well-established number of units to be produced; in fact, they can be applied to a wide variety of processes: time measurement, frequency of change requests, and cost variances are just a few examples of how you can use control charts to compare actual and expected performance and decide whether you should take corrective actions.

Control charts have four main components:

  1. Planned target (or goal): The planned target refers to the goal defined in the planning documents. This can be a defined number of units for the output of a machine, the target cumulative hours of a team’s work, the number of defects per thousand units produced, among many other possible indicators.
  2. Upper and lower control limits: The upper and lower control limits define the boundaries of the allowed variance before any corrective measure is implemented. The control limits are normally tighter than the specification limits precisely to allow the project manager to take action before the actual performance crosses the stakeholder-defined boundaries.
  3. Upper and lower specification limits: The upper and lower specification limits are the thresholds defined by the client regarding a specific indicator. You should, at all costs, avoid allowing the respective indicator to go outside this interval.
  4. Actual performance: The actual performance contains the data collected during the execution of the project.

Let’s approach these four elements from the perspective of an actual example. Imagine we are in the process of developing a new piece of software, and we want to ensure that the number of hours worked stays within the boundaries of our initial plan. We therefore set up a simple control chart to measure the cumulative number of hours worked and compare it with our initial schedule estimates. Take a look at Figure 03, which includes both planned and actual data from several measurements carried out during the initial phases of the software development project.

Figure 03: Practical Example of a Control Chart

As you can see in our example (Figure 03), the team’s initial performance in terms of cumulative hours is close to the initial goal, but shortly after the beginning of the project execution there is a sharp increase and a trend that threatens to cross the upper control limit very soon. As a manager, you should pay close attention to how the trend develops over time: it might be the case that the higher hours are due to an isolated incident that needed extensive revision and fixing, but it might also be the case that the amount of work was wrongly estimated and needs to be revised.

Control charts have the advantage of eliminating guesswork by providing hard limits for implementing corrective measures in the production flow. A common approach among project managers is to “wait and see” how an unwanted trend develops, in the hope that, at some point, the trend will reverse back towards the planned targets of the project. This approach is extremely inefficient, as it delays corrective actions, reduces the quality of the project output, and increases waste in the project. Control charts are a great way of avoiding this trap by requiring the project manager to take action as soon as the actual performance of an indicator falls outside of the control interval.
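
As a minimal illustration of this logic, the Python sketch below checks a series of hypothetical weekly measurements against made-up control and specification limits; every number here is an assumption you would replace with the values from your own quality management plan.

```python
# Minimal sketch of the control-chart logic, with entirely hypothetical limits.
PLANNED_TARGET = 400                     # planned team hours per week
LOWER_CONTROL, UPPER_CONTROL = 370, 430  # act once these are crossed
LOWER_SPEC, UPPER_SPEC = 350, 450        # stakeholder-defined boundaries

actual_hours = [398, 405, 412, 428, 437, 455]  # made-up weekly measurements

for week, hours in enumerate(actual_hours, start=1):
    if not LOWER_SPEC <= hours <= UPPER_SPEC:
        print(f"Week {week}: {hours}h breached the specification limits!")
    elif not LOWER_CONTROL <= hours <= UPPER_CONTROL:
        print(f"Week {week}: {hours}h left the control interval - take action.")
    else:
        print(f"Week {week}: {hours}h is within the control interval.")
```
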

2.3 Histogram

Histograms are an efficient way of quickly identifying the most recurrent events in the project execution process. Put simply, a histogram is a vertical bar chart that lists the different attributes of a problem or situation on its x-axis and their frequencies on its y-axis. Figure 04 gives a simple example of what a histogram looks like.

Figure 04: Practical Example of a Histogram

A histogram doesn’t need to be exclusively about the frequencies of events. Perhaps you are interested in tracking the financial impact of several events instead, as might be the case with critical events that hit your project’s budget hard. You can use histograms to track the total financial impact on a per-event basis and get a clear picture of which events are draining the most resources.
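
If you want to build such a chart yourself, here is a minimal Python sketch using matplotlib; the event names and counts are invented purely for illustration.

```python
# Minimal sketch of a frequency histogram with matplotlib (hypothetical data).
import matplotlib.pyplot as plt

events = ["Late delivery", "Wrong material", "Machine failure", "Rework"]
frequencies = [14, 9, 6, 4]  # how often each event occurred

plt.bar(events, frequencies)
plt.xlabel("Event")
plt.ylabel("Frequency")
plt.title("Most Recurrent Events During Project Execution")
plt.tight_layout()
plt.show()
```
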

That said, histograms don’t offer much more than a quick comparison between the consequences of different attributes of a problem or situation. They don’t provide any time dimension, so it’s fairly hard to track progress only using histograms. Although they can be helpful in certain situations, we suggest you don’t rely solely on them as the main tool for your quality monitoring.

2.4 Pareto chart

Pareto charts, on the other hand, are a more complete and insightful form of histogram. They are based on the Pareto Principle, which states that, in many situations, roughly 80% of the outcomes are generated by roughly 20% of the causes. From a project management perspective, you can think of it as roughly 80% of results coming from roughly 20% of the effort. If you think about it (and if you already have some experience as a Project Manager), you will easily identify many situations where your team reported fast progress in the beginning but a much slower pace afterwards. The fast progress is normally related to the development of the core features of your project, while the slower progress is normally related to fine-tuning the project output to implement all the details and technical requirements defined by the final customers.

Figure 05 shows a simple chart with some mock data to help us understand how the Pareto Chart works. In the context of this article, Pareto Charts will be used to identify the “20%” of events that are behind the “80%” of the defects in our project execution process. Notice that we are using quotation marks because these numbers will not always reflect reality; instead, the important realization is that we are looking for the critical few events that are responsible for a large share of quality defects.

Figure 05: Complete Example of a Pareto Chart with Cumulative Data

Visualizing the critical causes becomes much easier when we use Pareto Charts. The information is ordered by magnitude, from largest to smallest, on the scale being used to build the chart, and it quickly shows which few events are responsible for the greatest share of defects or inefficiencies in your project.

Pareto Charts have three main components:

  • Columns for each of the attributes of a situation (exactly the same as with histograms). They represent the individual frequencies of each event, and they should be ordered from highest to lowest frequency. Notice that our chart has a column named “Others”, which is a catch-all variable to account for other causes that are not so relevant to our analysis. It’s OK to keep the “Others” column at the end, even if it violates the ordering principle. However, if your “Others” column is disproportionately high, you should spend some time understanding why there are so many different causes for the problem being analyzed.
  • A table with the individual frequencies for each event and the cumulative percentage. At the bottom of our chart, we present the numerical dataset on which we based it. It’s important to include this to avoid any confusion when interpreting the data presented in the chart.
  • A line that represents the cumulative percentage of the variable under study. This is the unique contribution of the Pareto Chart, and it allows you to visualize how many attributes of a situation are responsible, when accounted together, for a certain percentage of defects or issues.

Building a Pareto Chart doesn’t require much effort apart from calculating the individual and cumulative percentages for each event and putting them together as an ordered dataset. If you are using Excel to generate your reports, Microsoft itself offers a detailed tutorial about how to use it to build a Pareto Chart.
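
If you prefer to script it instead, the Python sketch below performs the same calculation: it orders hypothetical event counts, computes the cumulative percentages, and draws the bars together with the cumulative line.

```python
# Minimal sketch of a Pareto chart in matplotlib; all counts are hypothetical.
import matplotlib.pyplot as plt

counts = {"Cause A": 57, "Cause B": 23, "Cause C": 11, "Cause D": 5, "Others": 4}
ordered = sorted(counts.items(), key=lambda item: item[1], reverse=True)
labels = [name for name, _ in ordered]
values = [value for _, value in ordered]

total = sum(values)
cumulative, running = [], 0
for value in values:
    running += value
    cumulative.append(100 * running / total)  # cumulative percentage

fig, ax = plt.subplots()
ax.bar(labels, values)
ax.set_ylabel("Frequency")
ax2 = ax.twinx()  # secondary y-axis for the cumulative line
ax2.plot(labels, cumulative, marker="o", color="tab:red")
ax2.set_ylabel("Cumulative %")
ax2.set_ylim(0, 110)
ax.set_title("Pareto Chart")
plt.tight_layout()
plt.show()
```
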

2.5 Scatter Diagram

Scatter Diagrams show the relationship between two variables and study whether they are correlated or not (and, if so, to what extent). The process for building a Scatter Diagram starts with choosing a phenomenon or variation you wish to study. This is known as the dependent variable. Ideally, it should be numerically measurable or easily converted to a numerical scale, as this makes plotting the scatter diagram considerably easier. The dependent variable is normally the defect or issue under observation, and you choose it because you want to understand how it relates to several possible causes.

The second element to choose is the independent variable. The independent variable is normally thought of as the “potential cause” of the variation in the dependent variable. When choosing the dependent-independent pair of variables, you should keep two questions in mind:

  1. “What do I want to study in detail?” - That’s your dependent variable (because its variation depends on some other phenomenon).
  2. “What might be a potential cause for the observed variation?” - That’s your independent variable.

Once you have chosen the two variables, the next step is to collect and clean the data. Since this stage varies greatly from business to business and from industry to industry, we will not discuss it in detail. Once you have collected the data, the process is as simple as plotting a basic chart in Excel or any other similar software. The independent variable is normally placed on the x-axis and the dependent variable on the y-axis. Figure 06 presents a simple Scatter Diagram.

Figure 06: Practical Example of a Scatter Diagram
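
As a minimal example of how such a chart can be produced in code, the Python sketch below plots invented data for overtime hours (independent variable) against defects found (dependent variable) and prints their correlation coefficient; both series are assumptions for illustration only.

```python
# Minimal sketch of a scatter diagram with hypothetical data.
import matplotlib.pyplot as plt
from statistics import correlation  # available in Python 3.10+

overtime_hours = [2, 4, 5, 7, 8, 10, 12, 13]  # independent variable (x)
defects_found = [1, 2, 2, 4, 4, 6, 7, 9]      # dependent variable (y)

print(f"Pearson correlation: {correlation(overtime_hours, defects_found):.2f}")

plt.scatter(overtime_hours, defects_found)
plt.xlabel("Overtime hours per week (independent variable)")
plt.ylabel("Defects found (dependent variable)")
plt.title("Scatter Diagram")
plt.show()
```
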
Interpreting the results from a Scatter Diagram

Scatter Diagrams generally assume one of the three main shapes presented in Figure 07.

Figure 07: The Three Possible Trends in a Scatter Diagram

In the first case (Figure 07, chart A), there is a clear positive correlation between the dependent and the independent variable. As we move towards higher values on the x-axis, the values on the y-axis also increase. What we can say for sure is that the dependent and the independent variables move in the same direction, but we cannot conclude that the independent variable is the cause of the dependent variable’s behavior. We will explore this discussion of correlation versus causation in the next section.

The second case (Figure 07, chart B) occurs when there is no clear relationship between the dependent and the independent variable. As you can see in the second chart of Figure 07, we cannot conclude that higher values on the x-axis are related to higher or lower values on the y-axis. There is not much information that can be extracted from uncorrelated variables, except for the fact that the independent variable is unlikely to be behind the variation in the dependent variable.

Last but not least, in the third case there is a clear negative relationship between the two variables. As the value on the x-axis increases, the value on the y-axis decreases. This could represent the interaction between performance bonuses (on the x-axis) and slacking off at work (on the y-axis): the higher the performance bonuses, the less time employees spend slacking off.

Scatter Plots, however, fail to offer more comprehensive information about the different aspects of the relationship between the two variables. If they are correlated, by how much do we expect the dependent variable to vary if the independent variable increases or decreases by one unit? What is the expected value of the dependent variable when the independent variable equals zero? These questions (and several others) can be highly relevant if you are trying to optimize the quality of your production processes.

You can address such questions by conducting Linear Regressions. Explaining all the details of how to conduct a linear regression is outside the scope of this article, but here is a great article with a very thorough explanation of the process, its many different components, and how to use Excel to extract all the information from your data.
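
As a minimal illustration (not a substitute for that article), the Python sketch below fits a simple linear regression with SciPy to the same hypothetical overtime/defects data used in the scatter example above.

```python
# Minimal sketch of a simple linear regression with scipy (hypothetical data).
from scipy import stats

overtime_hours = [2, 4, 5, 7, 8, 10, 12, 13]
defects_found = [1, 2, 2, 4, 4, 6, 7, 9]

result = stats.linregress(overtime_hours, defects_found)
print(f"Slope: {result.slope:.2f} extra defects per additional overtime hour")
print(f"Intercept: {result.intercept:.2f} expected defects at zero overtime")
print(f"R-squared: {result.rvalue ** 2:.2f}")
```
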

A word of caution

Before we move to the next tool, we must discuss the fact that correlation does not imply causation. Correlation happens when two variables move together in the same (positive correlation) or in opposite directions (negative correlation). Causation, on the other hand, is much stronger and happens when one variable is clearly the cause behind the behavior of another variable.

A Scatter Diagram can show only whether two variables are correlated or not. It cannot show whether the independent variable is causing the variation in the dependent variable. It might be the case that there is an underlying and common cause for the variation in both the dependent and independent variables and that they just happen to move in the same direction but don’t really impact each other.

Here is an example for us to think about. Imagine that we want to check whether higher machine downtime leads to more defective products. We choose defective products as our dependent variable and machine downtime as our independent variable. We then collect the data and plot the following chart (Figure 08).

Figure 08: Example of a Misleading Scatter Diagram

When we look at it, we can clearly see a positive correlation between machine downtime and defective parts. By only looking at the chart, we are inclined to think that more machine downtime implies a higher defect rate. But is that so? If we stop to think about it, will reducing the machine downtime actually solve our problem? Will a machine that produces defective products suddenly produce a high-quality output if it starts running for more time?

As you can see, there is a problem with our choice of variables. Machine downtime is not the cause of the high defect rate. In fact, both defect rate and machine downtime are very likely the consequence of another phenomenon - for example, poorly maintained machines. The cause is somewhere else, and looking only at our variables will not give us the right information about how to solve the problem.

This is why we urge you to think carefully before choosing your variables and making inferences from your chart analysis. There must be clear reasoning behind the choice of variables; otherwise, you might end up falling into the spurious correlation trap.

2.6 Inspection

Manual inspection is one of the most efficient ways of controlling and monitoring the quality of your project and its output. As the name suggests, this technique consists of manually comparing the expected and the actual work outcomes of each phase of the production process. Notice that it is recommended to have several manual inspection checkpoints throughout the execution of the project rather than a single inspection at the very end. This way, you prevent defects and inefficiencies from piling up and completely ruining your final product or service.

Manual inspection is best suited for cases when the project outcome is highly customized to very specific customer needs. Automating quality control and monitoring in this scenario could require a considerable amount of time and resources since all the automated quality checks would have to be adapted for each new unit of the output being produced. If the cost of adjusting your quality control systems is higher than having experts perform the quality checks, you should resort to manual inspection.

Manually inspecting items doesn’t require you to abandon automated checks. On the contrary, technological advances are making it considerably easier to integrate humans and machines when controlling the quality of a project’s output. In manufacturing industries, for example, computerized quality checks can ensure that each and every unit of the output meets the quality standards in terms of material quality, physical dimensions, and resistance, among other relevant metrics. The services world can also benefit from automation. Think about software development businesses: automated deployment tests, Continuous Integration and Deployment (CI and CD), and lint checks (related to the syntax of the code) are just a few examples of how far the influence of automated testing can reach.
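
To make the idea of a computerized quality check tangible, here is a minimal Python sketch that verifies whether each measured dimension of a produced unit falls within its tolerance band; the attribute names and tolerances are hypothetical.

```python
# Minimal sketch of an automated dimensional quality check (hypothetical specs).
TOLERANCES = {          # attribute: (lower bound, upper bound), in millimeters
    "length": (99.5, 100.5),
    "width": (49.8, 50.2),
}

def inspect_unit(measurements):
    """Return the attributes of a unit that fail their tolerance check."""
    failures = []
    for attribute, (low, high) in TOLERANCES.items():
        value = measurements.get(attribute)
        if value is None or not low <= value <= high:
            failures.append(attribute)
    return failures

unit = {"length": 100.1, "width": 50.4}  # width is out of tolerance
failed = inspect_unit(unit)
print("Failed checks:", failed if failed else "none - unit passes")
```
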

2.7 Statistical sampling

The last method we will analyze is statistical sampling. Even though inspecting every unit of the project’s output is the most thorough way of identifying nonconformities between the project plan and the real output, manually comparing each output unit might be too costly or simply not feasible. This is where statistical sampling comes into play. By correctly selecting and analyzing a limited sample of the entire population, statistical sampling allows you to infer the general output quality with a high degree of accuracy.

The concept of statistical sampling is fairly straightforward. It involves selecting a sample from a target population, running several quality control and monitoring tests, and generalizing these tests to the entire population.

The implementation, however, must be carefully executed to ensure that several important steps are followed. Otherwise, the results produced by statistical sampling methods might be heavily biased or completely meaningless. To help us better understand the important steps of using statistical sampling for quality control and monitoring, as well as the consequences of not following the procedure properly, let’s think about a simple project with four small teams working on different functionalities of a new mobile app. Each team is expected to work a certain number of hours per week and to have a revision-to-approval rate in testing smaller than 20% (meaning that less than 20% of the features sent to testing need rework or present bugs). While this example is about software development, you can also think from the perspective of producing physical goods: a set of ten machines working simultaneously, each with a well-defined expected output and defect rate, is a good example of how statistical sampling can be modeled in manufacturing industries.

Now that we have an example in mind, let’s move forward to discussing the crucial aspects of Statistical Sampling. The first element to think about is how you will choose your sample. The mandatory keyword here is randomness. When selecting a sample among your population, you must ensure that each and every element has the same probability of being chosen and that the final sample is indeed representative (has the same general characteristics) of your population.

Before choosing the sample, however, you must ask yourself several questions related to the nature of the analysis being conducted. For example: do you want to monitor the output quality of one specific team? Or do you want a general overview of how all four teams are performing? Also, is there any team with unique characteristics that might affect your analysis? An example would be a team whose revision-to-approval ratio is already expected to be much higher than that of the other teams. These questions will help you define the population from which you will later extract a representative sample.

Once you have defined the population, it’s time to move on to selecting the sample and verifying whether the sample is similar enough to the population. Assuming that we want to run quality control checks on the overall status of our software development project and that all teams are expected to have the same productivity rates, a representative sample should contain work data from all teams, in roughly the same proportion.
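
A minimal Python sketch of this idea, assuming four hypothetical teams of ten members each, is shown below: it draws the same fraction from every team at random so the sample mirrors the population’s composition (a stratified random sample).

```python
# Minimal sketch of stratified random sampling across four hypothetical teams.
import random

teams = {
    "Team A": [f"A{i}" for i in range(1, 11)],  # ten members per team
    "Team B": [f"B{i}" for i in range(1, 11)],
    "Team C": [f"C{i}" for i in range(1, 11)],
    "Team D": [f"D{i}" for i in range(1, 11)],
}

SAMPLING_FRACTION = 0.5  # the fraction of each team to include in the sample

sample = []
for team, members in teams.items():
    size = round(len(members) * SAMPLING_FRACTION)
    sample.extend(random.sample(members, size))  # uniform, without replacement

print(sample)
```
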

Another crucial element is the size of your sample, which has a direct impact on the reliability and confidence of your results. Simply put, you want a sample large enough that you can generalize the results to the entire population. If you have 10 members in each team, selecting the work of one person from each group is not enough: you will very likely produce unreliable results. So how big should the sample be? That depends largely on how big your population is. For cases when the population is small (assuming 10 people per team, we would have a total of 40 people), the ideal sample size is at least half of the population. If, however, your population is larger (imagine a machine that produces 5,000 units per day), the sample size doesn’t need to grow proportionally. This sample size calculator provided by SurveyMonkey, despite being intended for determining the sample size of market surveys, can be used to determine the sample size for most types of statistical sampling (including the one we are discussing here).
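
If you would rather compute it yourself, a standard approach is Cochran’s formula with a finite-population correction, the same kind of calculation behind most online sample size calculators. The Python sketch below implements it with the conventional default values (95% confidence, 5% margin of error); treat it as an illustrative sketch, not a replacement for proper statistical advice.

```python
# Minimal sketch: sample size via Cochran's formula with a finite-population
# correction. Defaults: 95% confidence (z = 1.96) and a 5% margin of error.
import math

def sample_size(population, z=1.96, margin_of_error=0.05, proportion=0.5):
    """Sample size needed to estimate a proportion in a finite population."""
    n0 = (z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    n = n0 / (1 + (n0 - 1) / population)  # finite-population correction
    return math.ceil(n)

print(sample_size(5000))  # large population: around 357 units
print(sample_size(40))    # small population: nearly everyone must be sampled
```
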

The next step after the sample selection is to perform the adequate statistical tests. Covering each test in detail is outside the scope of this article, but we recommend you take a closer look at this source, which provides an overview of the most common statistical tests and their general intent. The most relevant statistical tests for project managers are t-tests (which compare the means of two different variables, in our case the actual and the expected performance) and simple and multiple linear regression. Performing the calculations by hand requires a lot of effort and can quickly become infeasible, so we recommend using statistical software or Excel (in addition to the previous article about how to perform linear regression in Excel, here is a guide to conducting t-tests).
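
As a minimal, hypothetical example of the first test, the Python sketch below runs a one-sample t-test with SciPy to check whether the sampled weekly hours deviate significantly from a planned 40 hours per person.

```python
# Minimal sketch of a one-sample t-test with scipy (hypothetical sample data).
from scipy import stats

sampled_weekly_hours = [38.5, 41.0, 44.5, 39.0, 42.5, 45.0, 40.5, 43.0]
PLANNED_HOURS = 40.0  # the expected mean defined in the plan

t_stat, p_value = stats.ttest_1samp(sampled_weekly_hours, PLANNED_HOURS)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Actual hours deviate significantly from the plan - investigate.")
else:
    print("No statistically significant deviation from the planned hours.")
```
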

3. Conclusion

Properly implementing quality control processes will give you a granular overview of whether the cost and time inputs are being properly translated into valuable deliverables. While cost and schedule monitoring tell you whether you are within budget and time, it is the quality reports that will provide a detailed picture of how efficient and valuable the work invested in your project is.

This is why the three elements - cost, schedule, and quality - form the three pillars of a successful project. If you want to increase your chances of success and reduce the probability of failure due to unhandled risks and unforeseen events, make sure to thoroughly address each of these three areas.
