...
doParallel lets you run certain R jobs over multiple processing cores on CRUG. doParallel is a "parallel back-end" for the foreach R package, so it lets you execute for loops in parallel. doParallel will not work for R jobs that are not structured as for loops; if you need to run such jobs in parallel, refer to the Rslurm section of this page.
You MUST have both doParallel and foreach loaded in your code to use their functionality! However, loading the doParallel library is not enough on its own: you must call the package's registerDoParallel function before using doParallel in your job, specifying either the number of cores or the cluster you would like to use (these are doParallel's workers). For example, registerDoParallel(4) would run your job over four cores, and registerDoParallel(c1) would run it over the previously defined cluster c1. If you call registerDoParallel with no argument, the result depends on your operating system: on Windows you automatically get three workers, while Unix-like systems give you approximately half of your available cores.
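As a short sketch of the registration options described above (the worker counts here are illustrative, not recommendations for CRUG):

```r
library(doParallel)    # loading doParallel also loads foreach

# Option 1: register a number of cores directly
registerDoParallel(4)            # four workers

# Option 2: register a previously created cluster
c1 <- makeCluster(4)             # explicit cluster of four workers
registerDoParallel(c1)

# confirm how many workers foreach will use
getDoParWorkers()

# Calling registerDoParallel() with no argument picks a default:
# three workers on Windows, about half the cores on Unix-like systems.
```

Remember to call stopCluster(c1) when you are done with an explicitly created cluster.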
...
For another explanation of how to use doParallel, along with code examples and documentation of the rest of the package's functionality, please visit this link.
doParallel Example:
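The example code itself does not appear in this copy of the page. The following is a sketch consistent with the line-by-line walkthrough below, laid out so its line numbers match the walkthrough; the test data, trial count, and four-worker cluster are assumptions for illustration:

```r
library(doParallel)    # line 1: also loads foreach

# lines 4-5: create test data and a large number of trials
x <- matrix(rnorm(1000), ncol = 10)
trials <- 10000

# lines 7-15: baseline benchmark, the same loop run sequentially with %do%
stime <- system.time({
  r <- foreach(i = 1:trials, .combine = cbind) %do% {
    ind <- sample(nrow(x), replace = TRUE)  # resample rows with replacement
    colMeans(x[ind, ])                      # bootstrap statistic per trial
  }
})
stime   # elapsed time on one core

# lines 17-18: create a cluster and register it with doParallel
c1 <- makeCluster(4)
registerDoParallel(c1)

# line 21: the same loop, now run in parallel with %dopar%
ptime <- system.time({
  r <- foreach(i = 1:trials, .combine = cbind) %dopar% {
    ind <- sample(nrow(x), replace = TRUE)
    colMeans(x[ind, ])
  }
})
ptime
stopCluster(c1)   # line 27: stop the cluster when finished
```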
Here's what the example is doing at important lines of code:
1) We import the doParallel library
4/5) We create a lot of trials of R data for testing
Note: The middle section (lines 7-15) runs the same work on a single core first, as a baseline benchmark for measuring the speed-up from parallel processing.
17) We create a cluster to run our job over in parallel
18) We register the cluster with doParallel
21) We write our foreach loop to run R commands on the data in parallel
27) We stop the cluster when our for loop is finished
What if I need help using parallel processing or I have other questions?
...