In bioinformatics, however, computational tasks are often too big for a single CPU, so jobs are submitted to a cluster. Then the Makefile doesn't help you much: it can't tell whether jobs are still running on the cluster. There is qmake, but it only works if all your parallel jobs are specified in the Makefile itself. I usually write my parallel scripts so that they can submit as many instances of themselves as necessary to the cluster via qsub.
Therefore, I went ahead and wrote a small Python wrapper script that runs the job submission script and sniffs the job ids from the output of qsub. It then waits, monitoring these jobs until they are all done; only then does execution of the Makefile continue.
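The actual qwrap.py isn't reproduced here, but the core idea can be sketched in a few lines. This is a minimal, hypothetical version: it assumes the standard SGE confirmation line ("Your job 12345 (...) has been submitted") and parses qstat's plain-text output, whose exact columns may differ on your installation.

```python
import re
import subprocess
import time

# Matches SGE's qsub confirmation lines, e.g.
#   Your job 12345 ("myjob") has been submitted
#   Your job-array 12346.1-10:1 ("myarray") has been submitted
JOB_ID_RE = re.compile(r"Your job(?:-array)? (\d+)")

def parse_job_ids(qsub_output):
    """Extract submitted job ids from the output of a submission script."""
    return [int(m.group(1)) for m in JOB_ID_RE.finditer(qsub_output)]

def wait_for_jobs(job_ids, poll_interval=60):
    """Poll qstat until none of the given jobs appear in its output."""
    pending = {str(j) for j in job_ids}
    while pending:
        out = subprocess.run(["qstat"], capture_output=True, text=True).stdout
        # Skip the two header lines; the first column is the job id.
        running = {line.split()[0] for line in out.splitlines()[2:] if line.strip()}
        pending &= running
        if pending:
            time.sleep(poll_interval)
```

The wrapper would run the submission script with subprocess, feed its stdout through parse_job_ids, and then block in wait_for_jobs before returning control to make.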
Here's an example of how to invoke the wrapper script from the Makefile:
pubchem_compound_inchi.tsv.gz:
	~/src/misc/qwrap.py ${SRC_DIR}/inchikeys.py
	cat ../inchikey/* | gzip > pubchem_compound_inchi.tsv.gz
You can download the code (released under a BSD License, adapted to SGE). I hope it's useful!

Addendum: Hunting around in the SGE documentation, I found the "-sync" option, which, together with job arrays, probably provides the same functionality while also checking the exit status of the jobs.
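With that approach, the Makefile rule above could be rewritten without the wrapper at all. This is an untested sketch: the task range 1-100 and the script name inchikeys.sh are hypothetical, and "qsub -sync y" blocks until the job array finishes, returning a non-zero status if any task fails, so make stops on errors.

pubchem_compound_inchi.tsv.gz:
	qsub -sync y -t 1-100 ${SRC_DIR}/inchikeys.sh
	cat ../inchikey/* | gzip > pubchem_compound_inchi.tsv.gz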