If, however, you elect self-checkout then you stand in a single line and step up to the next available machine when your turn comes. In my case there are 4 machines in operation. This is called a single-queue, multiple-server system and is denoted M|M|4. This is also referred to as demand pooling or variability pooling, since all the natural variation in processing times is combined into a single pool. Intuitively this seems faster, if only for the seemingly universal principle that I will always choose the slowest checkout lane. But Murphy's Law aside, what does queueing theory say about this?
Let's look at an example. Beginning with the M|M|1 "traditional" example, let's say you have customers showing up at an average rate of 24 per hour. (This means a customer every 2.5 minutes on average.) The arrival rate (lambda) is thus:
λ = 24
Let's further assume that the average time it takes to process a customer through the checkout process (not counting wait time) is 2 minutes. Therefore the checker can process an average of 30 customers per hour. This is called the service rate, and is expressed as mu:
μ = 30
We know then that the server utilization (rho) is the arrival rate divided by the service rate:
ρ = λ / μ = 0.8
Thanks to some magical queueing theory math, we also know that the average number of customers waiting in line will be:
E(Lq) = ρ²/(1-ρ) = 3.2
And the total time in the system (the time a customer spends waiting in line plus checking out), also called the sojourn time, is:
E(S) = ρ/(λ (1-ρ)) = 0.1667 hrs. = 10 min.
So there you have it. Given the above assumptions about arrival rate and process time, in a traditional M|M|1 checkout queue, whichever lane you choose, on average about 3.2 people will be waiting in line and you'll spend about 10 minutes total in the process.
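For what it's worth, these single-server numbers are easy to check in a few lines of Python. Here's a quick sketch using the rates assumed above:

```python
# M|M|1 sanity check, using the arrival and service rates assumed above.
lam = 24.0  # arrival rate: customers per hour
mu = 30.0   # service rate: customers per hour (2-minute average checkout)

rho = lam / mu                 # server utilization
Lq = rho**2 / (1 - rho)        # expected number of customers waiting
W = rho / (lam * (1 - rho))    # sojourn time in hours (same as 1/(mu - lam))

print(rho, Lq, W * 60)         # utilization 0.8, about 3.2 waiting, about 10 minutes
```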
Now it turns out that the math for an M|M|4 system, like the one for the self-checkout machines, is much more intense and gets into some messy probability stuff that I'm still trying to wrap my head around. So, I found this neat little web app that does the job for you. You enter the arrival rate, service rate, and the number of servers, and it spits out the answers. One catch: the app wants the service rate normalized to 1, which just means time is measured in units of the mean service time; so keep in mind that any outputs in time must be multiplied by our mean service time of 2 minutes from above.
In order to make a fair comparison I assume that the rate of customers arriving in our self-checkout area is 4 times what it would be at any single checkout lane (since you have 4 times the capacity). Therefore you have:
λ = 96 and μ = 30
Divide both by 30 in order to normalize μ to 1 and you get:
λ = 3.2 and μ = 1
This will make the total server utilization (also called "occupation rate") equal to 0.8, just as it was in the previous analysis.
So, the output of the app looks like this:
Notice the number of waiting customers is now about 2.39 (compared with 3.2) and the sojourn time is 1.746 (remember to multiply by 2). So the total time through the system is about 3.5 minutes (compared with 10). That's a pretty big difference and a compelling case for demand pooling.
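If you'd rather not trust a black box, the standard closed form behind this kind of calculator is the Erlang C formula for M|M|c queues. Here's a rough Python sketch of it (my own reconstruction, not the app's actual code), fed the normalized rates from above:

```python
from math import factorial

def mmc_stats(lam, mu, c):
    """Expected M|M|c queue stats via the Erlang C formula."""
    a = lam / mu                    # offered load
    rho = a / c                     # per-server utilization (must be < 1)
    tail = (a**c / factorial(c)) / (1 - rho)
    # Erlang C: probability an arriving customer has to wait
    p_wait = tail / (sum(a**k / factorial(k) for k in range(c)) + tail)
    Lq = p_wait * rho / (1 - rho)   # expected number waiting in line
    W = Lq / lam + 1 / mu           # sojourn time (Little's law + service time)
    return rho, Lq, W

rho, Lq, W = mmc_stats(3.2, 1.0, 4)  # normalized: mu = 1, so one time unit = 2 min
print(rho, Lq, W)  # utilization 0.8; W comes out near 1.75, i.e. about 3.5 minutes
```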
Okay, so I know what you're thinking. You're thinking that I can't possibly check myself out as fast as a trained checker who's processed thousands of customers and knows all the tricks of the trade. Therefore it's unrealistic to maintain that the service time would be the same. So let's take a look.
Assume the arrival rate remains the same, but the service time increases by 15%. This means that customers are still arriving every 2.5 minutes at each lane's worth of demand (λ = 96 pooled), but now the service time is 2.3 minutes (μ = 26.1). This makes the effective server utilization 0.92, compared with 0.8 before. As utilization approaches capacity, i.e. 100%, queues grow dramatically, roughly in proportion to 1/(1-ρ). (But that's a whole topic unto itself.) So we should expect this to lead to more people in line and longer wait times.
Plug these numbers into our handy app, and here's what we get:
Now we have on average more than 9 people in line, not altogether surprising given the occupation rate. However, notice that the sojourn time is now 3.6. Remember that we must multiply this by our new service time of 2.3 minutes, which gives 8.3 minutes. So, even with a slower service rate and a much longer queue, we still spend less time overall in the process. Quite a remarkable result.
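The same Erlang C sketch handles the slower-service case; feed it the raw hourly rates and no normalization is needed:

```python
from math import factorial

def mmc_stats(lam, mu, c):
    """Expected M|M|c queue stats via the Erlang C formula."""
    a = lam / mu                    # offered load
    rho = a / c                     # per-server utilization (must be < 1)
    tail = (a**c / factorial(c)) / (1 - rho)
    p_wait = tail / (sum(a**k / factorial(k) for k in range(c)) + tail)
    Lq = p_wait * rho / (1 - rho)   # expected number waiting in line
    W = Lq / lam + 1 / mu           # sojourn time in hours
    return rho, Lq, W

# Pooled arrivals of 96/hr, four machines each averaging 2.3 minutes per customer.
rho, Lq, W = mmc_stats(96.0, 60 / 2.3, 4)
print(rho, Lq, W * 60)  # about 0.92 utilization, 9.5 in line, 8.3 minutes
```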
Although this doesn't really answer the question of whether or not self-checkout is lean, it does present an interesting real-world example of the benefits of using queueing theory to improve process flow.
If you're as interested in queueing theory as I am, here are a couple good resources:
And if you're interested in the application of queueing theory to product development, this is a great book:
I'm still pretty new at this, so I'd welcome any feedback from the experts out there.