What Is Parallel Processing, or Parallelization, in Computing?

What Is Parallel Processing, or Parallelization?

In parallel processing, “different parts of a computation are executed simultaneously on separate processor hardware,” says Tao B. Schardl, a postdoctoral associate in the electrical engineering and computer science department at the Massachusetts Institute of Technology.

“That separate processor hardware can be separate processor cores in a multicore central processing unit, or separate CPUs or other processing hardware in the same machine, or separate connected machines — such as a computing cluster or a supercomputer — or combinations thereof,” Schardl says.

In practice, parallel processing can be one of two things, says Jim McGregor, principal analyst at TIRIAS Research. It can be “the ability to run multiple tasks in parallel, such as multiple data processing batches or multiple instances of Microsoft Word, or breaking down a single task and running multiple portions of the task simultaneously — parallelizing — such as a neural network model.”
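
To make the second case concrete, here is a minimal Python sketch of parallelizing a single task: a large sum is split into chunks that worker processes compute simultaneously, using the standard multiprocessing module. The chunk size and worker count are illustrative assumptions, not recommendations.

```python
# A minimal sketch (assumptions: Python's standard multiprocessing module;
# the chunk size and worker count are illustrative, not prescriptive).
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker process handles one portion of the single, larger task.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i:i + 250_000] for i in range(0, len(data), 250_000)]

    # Four worker processes compute their portions simultaneously;
    # the partial results are combined afterward.
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))

    print(total)  # Same result as sum(data), computed in parallel
```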

How Does Parallel Processing Work?

In parallel processing, a software program is written or modified to identify what parts of the computation can be executed on separate processing hardware, Schardl says.

Those parts of the computation, or tasks, “are then run on separate processor hardware, typically using some combination of the operating system and a scheduling library,” he says.

“Parallelizing a program is often challenging, because it is generally up to the programmer writing the software program to figure out how to divide the program’s computation into tasks that can be executed simultaneously, in parallel, in a correct and efficient way,” he adds.
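
As one simplified illustration of dividing a computation into tasks and handing them to a scheduling library, the sketch below uses Python's standard concurrent.futures executor. The render_page function is a hypothetical stand-in for one independently executable piece of a program's work.

```python
# A minimal sketch (assumption: concurrent.futures from the Python standard
# library stands in for the "scheduling library" described above).
from concurrent.futures import ProcessPoolExecutor, as_completed

def render_page(page_id):
    # Hypothetical stand-in for one independently executable task.
    return f"page {page_id} done"

if __name__ == "__main__":
    # The programmer identifies the independent tasks; the executor and the
    # operating system decide which processor core runs each one, and when.
    with ProcessPoolExecutor(max_workers=4) as executor:
        futures = [executor.submit(render_page, i) for i in range(8)]
        for future in as_completed(futures):
            print(future.result())
```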

To do this effectively, software developers need to figure out when and how the different parallel tasks must synchronize and communicate with each other, Schardl says.

A wide range of technologies — including shared memory, distributed memory and various hardware synchronization operations — exist to support synchronization and communication among tasks. These technologies provide different capabilities and impose different costs, he says.
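
The sketch below, again a minimal example under stated assumptions rather than a prescription, shows one such technology in miniature: a counter in shared memory guarded by a lock, so that simultaneous updates from separate processes are not lost.

```python
# A minimal sketch (assumptions: a shared-memory counter via Python's
# multiprocessing module; the lock models one simple synchronization primitive).
from multiprocessing import Process, Value, Lock

def deposit(counter, lock, times):
    for _ in range(times):
        # The lock ensures only one task updates the shared value at a time;
        # without it, simultaneous updates could be lost.
        with lock:
            counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)  # an integer living in shared memory
    lock = Lock()
    workers = [Process(target=deposit, args=(counter, lock, 10_000)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(counter.value)  # 40000, because the updates were synchronized
```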

“Programmers must figure out how to use available parallel processing hardware and communication and synchronization technologies so that tasks can execute efficiently in parallel,” Schardl says. Running tasks in parallel also helps users avoid the long wait times that slow a computation down overall.

If a software function relies on another process to give it data before it can start, that processing will be serial in nature, and thus slower, Hoff says. “But if I write my code really smartly, I can make it so that there’s very few of those dependencies, and I can run these things parallel,” he says.
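
A short, hypothetical sketch of that idea: analyze depends on the output of fetch_data and must wait for it, while unrelated_report has no such dependency and can run in parallel. The function names are invented for illustration.

```python
# A minimal sketch (assumption: the function names are illustrative; they show
# how a data dependency forces serial execution while independent work can
# proceed side by side).
from concurrent.futures import ProcessPoolExecutor

def fetch_data():
    return [1, 2, 3, 4]

def analyze(data):
    return sum(data)

def unrelated_report(label):
    return f"{label}: ready"

if __name__ == "__main__":
    with ProcessPoolExecutor() as executor:
        data_future = executor.submit(fetch_data)

        # Independent work starts immediately; it does not wait for fetch_data.
        report_future = executor.submit(unrelated_report, "inventory")

        # analyze() depends on fetch_data()'s result, so this step is
        # inherently serial with respect to it.
        result = analyze(data_future.result())

        print(result, report_future.result())
```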
