Imagine you’re a dog walker with five clients whose dogs need walking in the afternoon. On your first day, you walk the first dog, return, and start walking the second. Then you return with the second and get the third, and so on. After three hours, you have finished walking all five dogs and head home.
You repeat this daily until one day you have an idea: what if you walk all the dogs together? You perform the same task for each dog, so walking them together will save you time.
This is the idea behind Single Instruction, Multiple Data (SIMD). If the same operation is to be applied to each data element, a single instruction can be issued to act on all of the elements at once. SIMD is one approach to data-level parallelism and is almost as old as the modern computer era.
The main applications of SIMD are:
- Vector Processing
- SIMD Extensions
- Graphical Processing Units (GPUs)
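To make the idea concrete, here is a minimal sketch in Python using NumPy (not part of the original example, just an illustration). A plain loop adds array elements one at a time, like walking one dog per trip; a NumPy array operation expresses the whole addition as one call, and NumPy's compiled kernels can use the CPU's SIMD extensions to process several elements per instruction:

```python
import numpy as np

# Scalar approach: "walk one dog at a time" -- one addition per loop step.
def add_scalar(a, b):
    out = [0.0] * len(a)
    for i in range(len(a)):
        out[i] = a[i] + b[i]
    return out

# Vectorized approach: "walk all the dogs together" -- one array
# operation; NumPy's backend may apply a SIMD instruction to several
# elements at a time.
def add_vectorized(a, b):
    return a + b

a = np.arange(5, dtype=np.float64)   # [0., 1., 2., 3., 4.]
b = np.ones(5, dtype=np.float64)     # [1., 1., 1., 1., 1.]

print(add_scalar(list(a), list(b)))
print(add_vectorized(a, b))
```

Both functions compute the same result; the difference is that the vectorized version describes the work as a single operation over all the data, which is exactly the SIMD mindset.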
The dog-walking example highlights a simple concept that was used for instruction-level parallelism techniques like pipelining, and can now be used for data-level parallelism techniques like SIMD: if you are going to do the same thing multiple times, why not pool resources and do it all at once?
As we can see from the image in the workspace, the dog walker on top is getting pretty tired, while the dog walker on the bottom is happy. Why do you think that is? Perhaps they are thinking about what to do with the rest of their day, since walking all the dogs at once saves so much time.