...
To use Rslurm, you must first create a function and a data frame of parameter values that the function can be applied to. In the data frame, each column must correspond to a specific parameter of the function, and each row must correspond to a separate function call. Rslurm works by dividing the data frame into multiple segments and then running the function on those segments simultaneously across multiple cores. This is done with the slurm_apply(f, params) command, where f is the function and params is the data frame. slurm_apply returns a Slurm job object that other commands can use later. slurm_apply can also take additional arguments, including the number of nodes used and the number of CPUs per node.
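As an illustration, here is a minimal sketch of that workflow. The toy function, parameter values, job name, and node counts below are placeholders; adjust them for your own function and cluster allocation.

    library(rslurm)

    # A toy function with two parameters
    test_func <- function(par_mu, par_sd) {
        samp <- rnorm(10^6, par_mu, par_sd)
        c(s_mu = mean(samp), s_sd = sd(samp))
    }

    # One row per function call; column names match the function's arguments
    pars <- data.frame(par_mu = 1:10, par_sd = seq(0.1, 1, length.out = 10))

    # Submit the job; here 2 nodes with 2 CPUs each (adjust for your cluster)
    sjob <- slurm_apply(test_func, pars, jobname = "test_apply",
                        nodes = 2, cpus_per_node = 2)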
To see the current status of their job, users can use the print_job_status(slr_job) command, where slr_job is a Slurm job object. To return the results of the job, use the get_slurm_out command. A single instance of a function can also be run using the slurm_call(f, params) command, where params is a named list of argument values. After a job has finished, unnecessary temporary files can be cleaned up with the cleanup_files command.
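Continuing the sketch above (sjob and test_func are the placeholder objects from the previous example; note that newer releases of Rslurm may expose the status check as get_job_status()):

    # Check whether the job has finished
    print_job_status(sjob)

    # Collect the results once the job is done; "table" returns a data frame
    res <- get_slurm_out(sjob, outtype = "table")

    # Run a single function call through Slurm with a named list of arguments
    single_job <- slurm_call(test_func, list(par_mu = 5, par_sd = 1))

    # Remove the temporary files created for the job
    cleanup_files(sjob)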
For a more detailed explanation of how to use Rslurm, documentation of all relevant Rslurm functions, and examples of code you can use, please visit the Rslurm documentation.
How do I use doParallel?
...
You may also assign or change the number of cores after calling registerDoParallel by using the options command: options(cores = 2) would set your job to run on two cores. You can check how many workers your job is currently using with getDoParWorkers().
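For example, a minimal sketch of registering a backend and running a loop on it (the worker count and the foreach loop are illustrative):

    library(doParallel)   # also loads foreach

    registerDoParallel(cores = 2)   # register a backend with two workers
    options(cores = 2)              # alternatively, set the cores option
    getDoParWorkers()               # confirm how many workers are registered

    # Iterations of a foreach loop run on the registered workers with %dopar%
    results <- foreach(i = 1:4, .combine = c) %dopar% sqrt(i)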
For another explanation of how to use doParallel along with examples of code and documentation about the rest of the package's functionality, please visit this link.
What if I need help using parallel processing or I have other questions?
If you need assistance with doParallel, Rslurm, the concept of parallel processing, or other related questions, please contact Mike Tie.
...