https://bioinformatics.ibers.aber.ac.uk/wiki/api.php?action=feedcontributions&user=Ibers-admin&feedformat=atomIBERS Bioinformatics and HPC Wiki - User contributions [en]2024-03-28T19:02:30ZUser contributionsMediaWiki 1.31.5https://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Main_Page&diff=54149Main Page2022-12-19T12:15:12Z<p>Ibers-admin: /* Slides and Talks */</p>
<hr />
<div>[[File:IBERS-logo-Bi.jpg]]<br />
<br />
<br />
Welcome to the HPC Wiki for the [http://bioinformatics.ibers.aber.ac.uk Bioinformatics] research group at [http://www.aber.ac.uk Aberystwyth University]. The aim of this wiki is to allow researchers in the field of Bioinformatics in IBERS at Aberystwyth to document common tasks using the High Performance Computing cluster.<br />
<br />
== Editing this Wiki ==<br />
<br />
Please email ibers-cs@aber.ac.uk to ask for access to edit this wiki.<br />
<br />
== Troubleshooting ==<br />
<br />
Please read these sections if you are having trouble. There is a chance you can resolve the problem on your own, and if not, you will be able to ask for help more effectively. Some (most) of these ideas are shamelessly taken from the [https://stackoverflow.com/help/ Stack Overflow help pages].<br />
<br />
[[Asking for Help]]<br />
<br />
== Remote Working Resources ==<br />
<br />
[[VPN and SSH via central]]<br />
<br />
[[Copying files with SCP/WinSCP/Filezilla/Cyberduck]]<br />
<br />
[[Running graphical programs remotely]]<br />
<br />
== Super Computing Wales Guides ==<br />
<br />
[[SCW Introduction]]<br />
<br />
[[SCW Training Materials]]<br />
<br />
[[SCW Access Procedures]]<br />
<br />
<br />
== IBERS HPC Guides ==<br />
<br />
'''Overview'''<br />
<br />
[[Bert and Ernie - An Overview]]<br />
<br />
[[Nodes, Cores, Slots]]<br />
<br />
[[Scheduling and queuing]]<br />
<br />
[[Getting an account]]<br />
<br />
'''Using the HPC'''<br />
<br />
[[Quick start]]<br />
<br />
[[Your disk space]]<br />
<br />
[[Module Environment]]<br />
<br />
[[Installing your own modules]]<br />
<br />
[[Submitting your job using Slurm]]<br />
<br />
[[Complex submissions]]<br />
<br />
[[Monitoring your jobs]]<br />
<br />
[[Available Software]]<br />
<br />
[[UNIX Graphical interface]]<br />
<br />
[[Array jobs]]<br />
<br />
[[ABySS]]<br />
<br />
== Tools to access the HPC (Windows) ==<br />
<br />
[[Putty]]<br />
<br />
[[MobaXterm]]<br />
<br />
[[Filezilla]]<br />
<br />
== Slides, Talks and Training Materials ==<br />
<br />
Talk on using Containers - [[File:Containers.pdf]]<br />
<br />
[https://scw-aberystwyth.github.io/IBERS-HPC-tutorial/ Carpentries Style Lesson on using the HPC]<br />
<br />
[https://bioinformatics.ibers.aber.ac.uk/training/IBERS%20HPC%20Tutorial%20Recording.mp4 Recording of the lesson above]<br />
<br />
== Bioinformatics Working Group related ==<br />
<br />
[[Meeting minutes]]<br />
<br />
== Special HPC Workflows ==<br />
<br />
[[Matlab]]<br />
<br />
[[Blast]]<br />
<br />
[[Active Perl]]<br />
<br />
[[SPRINT]]<br />
<br />
[[Progressive Cactus]]<br />
<br />
[[OrthoMCL]]<br />
<br />
[[Diamond Blast]]<br />
<br />
[[Singularity Containers]]<br />
<br />
== Other Services ==<br />
<br />
[[Blast2go]]<br />
<br />
== Best Practice ==<br />
<br />
[[Running BLAST optimally]]<br />
<br />
[[Use scratch space]]<br />
<br />
== Tutorials ==<br />
<br />
[[RNA Seq on HPC]]<br />
<br />
[[RNA Seq on HPC (using Trinity)]]<br />
<br />
[[Genome Annotation]]<br />
<br />
== Tips ==<br />
<br />
[[Splitting Multifastas]]<br />
<br />
== Workarounds ==<br />
<br />
[[Java memory allocation issues]]<br />
<br />
[[convert_fasta_to_1l_fasta.sh]]</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Main_Page&diff=54148Main Page2022-10-27T16:36:52Z<p>Ibers-admin: /* Remote Working Resources */</p>
<hr />
<div>[[File:IBERS-logo-Bi.jpg]]<br />
<br />
<br />
Welcome to the HPC Wiki for the [http://bioinformatics.ibers.aber.ac.uk Bioinformatics] research group at [http://www.aber.ac.uk Aberystwyth University]. The aim of this wiki is to allow researchers in the field of Bioinformatics in IBERS at Aberystwyth to document common tasks using the High Performance Computing cluster.<br />
<br />
== Editing this Wiki ==<br />
<br />
Please email ibers-cs@aber.ac.uk to ask for access to edit this wiki.<br />
<br />
== Troubleshooting ==<br />
<br />
Please read these sections if you are having trouble. There is a chance you can resolve the problem on your own, and if not, you will be able to ask for help more effectively. Some (most) of these ideas are shamelessly taken from the [https://stackoverflow.com/help/ Stack Overflow help pages].<br />
<br />
[[Asking for Help]]<br />
<br />
== Remote Working Resources ==<br />
<br />
[[VPN and SSH via central]]<br />
<br />
[[Copying files with SCP/WinSCP/Filezilla/Cyberduck]]<br />
<br />
[[Running graphical programs remotely]]<br />
<br />
== Super Computing Wales Guides ==<br />
<br />
[[SCW Introduction]]<br />
<br />
[[SCW Training Materials]]<br />
<br />
[[SCW Access Procedures]]<br />
<br />
<br />
== IBERS HPC Guides ==<br />
<br />
'''Overview'''<br />
<br />
[[Bert and Ernie - An Overview]]<br />
<br />
[[Nodes, Cores, Slots]]<br />
<br />
[[Scheduling and queuing]]<br />
<br />
[[Getting an account]]<br />
<br />
'''Using the HPC'''<br />
<br />
[[Quick start]]<br />
<br />
[[Your disk space]]<br />
<br />
[[Module Environment]]<br />
<br />
[[Installing your own modules]]<br />
<br />
[[Submitting your job using Slurm]]<br />
<br />
[[Complex submissions]]<br />
<br />
[[Monitoring your jobs]]<br />
<br />
[[Available Software]]<br />
<br />
[[UNIX Graphical interface]]<br />
<br />
[[Array jobs]]<br />
<br />
[[ABySS]]<br />
<br />
== Tools to access the HPC (Windows) ==<br />
<br />
[[Putty]]<br />
<br />
[[MobaXterm]]<br />
<br />
[[Filezilla]]<br />
<br />
== Slides and Talks ==<br />
<br />
[[File:Containers.pdf]]<br />
<br />
== Bioinformatics Working Group related ==<br />
<br />
[[Meeting minutes]]<br />
<br />
== Special HPC Workflows ==<br />
<br />
[[Matlab]]<br />
<br />
[[Blast]]<br />
<br />
[[Active Perl]]<br />
<br />
[[SPRINT]]<br />
<br />
[[Progressive Cactus]]<br />
<br />
[[OrthoMCL]]<br />
<br />
[[Diamond Blast]]<br />
<br />
[[Singularity Containers]]<br />
<br />
== Other Services ==<br />
<br />
[[Blast2go]]<br />
<br />
== Best Practice ==<br />
<br />
[[Running BLAST optimally]]<br />
<br />
[[Use scratch space]]<br />
<br />
== Tutorials ==<br />
<br />
[[RNA Seq on HPC]]<br />
<br />
[[RNA Seq on HPC (using Trinity)]]<br />
<br />
[[Genome Annotation]]<br />
<br />
== Tips ==<br />
<br />
[[Splitting Multifastas]]<br />
<br />
== Workarounds ==<br />
<br />
[[Java memory allocation issues]]<br />
<br />
[[convert_fasta_to_1l_fasta.sh]]</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=MobaXterm&diff=54147MobaXterm2022-10-27T16:35:59Z<p>Ibers-admin: </p>
<hr />
<div>MobaXterm is a powerful tool for accessing servers with different protocol types.<br />
MobaXterm is an "All-In-One Network Application" tool that allows you to access the server, transfer files (download and upload), split the screen into multiple terminals, access remote graphical applications and record macros (to avoid repetitive work) .<br />
<br />
MobaXterm has a free version and can be downloaded here: https://mobaxterm.mobatek.net/<br />
<br />
<br />
'''MobaXterm Interface'''<br />
[[File:Mobaxterm00.png|800px|thumb|left]]<br />
<br />
[[File:Mobaxterm split.jpg|800px|thumb|left|Split Screen]]</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Java_memory_allocation_issues&diff=54146Java memory allocation issues2022-10-27T16:34:53Z<p>Ibers-admin: </p>
<hr />
<div>= This page is out of date =<br />
<br />
<br />
Recently an issue was discovered when trying to run a Java program with a large heap allocation.<br />
<br />
In this case we were trying to allocate 2 GB to the Java VM instance. Even when specifying large memory allocations in SGE, Java would always complain that there was not enough space to allocate the heap.<br />
<br />
A few people online have reported similar issues with SGE and Java; it seems to stem from differences in the way Java and SGE allocate memory to threads. Their solution is included on this page, but for more details see https://wiki.york.ac.uk/pages/viewpage.action?pageId=66126798<br />
<br />
== Set MALLOC_ARENA_MAX ==<br />
<br />
First we need to set an environment variable; there are two ways to do this: either set it in your SGE script, or export it as an environment variable in your shell. The first is probably the best option, so that it doesn't cause issues with other scripts.<br />
<br />
=== Set as job variable ===<br />
<br />
Add the following to your SGE script<br />
<br />
#$ -v MALLOC_ARENA_MAX=1<br />
<br />
=== Set as environment variable ===<br />
<br />
To do it in your .bashrc<br />
<br />
export MALLOC_ARENA_MAX=1<br />
<br />
Then add this to your SGE script<br />
<br />
#$ -V<br />
<br />
== Limit GC threads ==<br />
<br />
Java will spawn multiple garbage collection (GC) threads which take up memory; limiting these can help. Add the following option to the java command line:<br />
<br />
-XX:ParallelGCThreads=1<br />
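Putting these settings together with the memory allocation described in the SGE Memory Allocation section, a complete SGE script might look like the following sketch (the job name, jar file and heap size are illustrative placeholders, not part of the original workaround):<br />

```shell
#!/bin/sh
# Illustrative SGE submission script combining the workarounds on this page.
#$ -S /bin/sh
#$ -N java-bigheap
# Allocate roughly twice the intended Java heap (see SGE Memory Allocation)
#$ -l h_vmem=4G
# Stop glibc creating one malloc arena per thread
#$ -v MALLOC_ARENA_MAX=1

# my_program.jar is a hypothetical placeholder for your own program
java -Xmx2G -XX:ParallelGCThreads=1 -jar my_program.jar
```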
<br />
== SGE Memory Allocation ==<br />
<br />
To get the 2 GB heap to work I had to allocate 4 GB of memory in Sun Grid Engine:<br />
<br />
#$ -l h_vmem=4G</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Main_Page&diff=54145Main Page2022-10-27T16:34:09Z<p>Ibers-admin: /* Workarounds */</p>
<hr />
<div>[[File:IBERS-logo-Bi.jpg]]<br />
<br />
<br />
Welcome to the HPC Wiki for the [http://bioinformatics.ibers.aber.ac.uk Bioinformatics] research group at [http://www.aber.ac.uk Aberystwyth University]. The aim of this wiki is to allow researchers in the field of Bioinformatics in IBERS at Aberystwyth to document common tasks using the High Performance Computing cluster.<br />
<br />
== Editing this Wiki ==<br />
<br />
Please email ibers-cs@aber.ac.uk to ask for access to edit this wiki.<br />
<br />
== Troubleshooting ==<br />
<br />
Please read these sections if you are having trouble. There is a chance you can resolve the problem on your own, and if not, you will be able to ask for help more effectively. Some (most) of these ideas are shamelessly taken from the [https://stackoverflow.com/help/ Stack Overflow help pages].<br />
<br />
[[Asking for Help]]<br />
<br />
== Remote Working Resources ==<br />
<br />
[[VPN and SSH via central]]<br />
<br />
[[Copying files with SCP/WinSCP/Filezilla/Cyberduck]]<br />
<br />
[[Video conferencing and collaboration tools]]<br />
<br />
[[Running graphical programs remotely]]<br />
<br />
== Super Computing Wales Guides ==<br />
<br />
[[SCW Introduction]]<br />
<br />
[[SCW Training Materials]]<br />
<br />
[[SCW Access Procedures]]<br />
<br />
<br />
== IBERS HPC Guides ==<br />
<br />
'''Overview'''<br />
<br />
[[Bert and Ernie - An Overview]]<br />
<br />
[[Nodes, Cores, Slots]]<br />
<br />
[[Scheduling and queuing]]<br />
<br />
[[Getting an account]]<br />
<br />
'''Using the HPC'''<br />
<br />
[[Quick start]]<br />
<br />
[[Your disk space]]<br />
<br />
[[Module Environment]]<br />
<br />
[[Installing your own modules]]<br />
<br />
[[Submitting your job using Slurm]]<br />
<br />
[[Complex submissions]]<br />
<br />
[[Monitoring your jobs]]<br />
<br />
[[Available Software]]<br />
<br />
[[UNIX Graphical interface]]<br />
<br />
[[Array jobs]]<br />
<br />
[[ABySS]]<br />
<br />
== Tools to access the HPC (Windows) ==<br />
<br />
[[Putty]]<br />
<br />
[[MobaXterm]]<br />
<br />
[[Filezilla]]<br />
<br />
== Slides and Talks ==<br />
<br />
[[File:Containers.pdf]]<br />
<br />
== Bioinformatics Working Group related ==<br />
<br />
[[Meeting minutes]]<br />
<br />
== Special HPC Workflows ==<br />
<br />
[[Matlab]]<br />
<br />
[[Blast]]<br />
<br />
[[Active Perl]]<br />
<br />
[[SPRINT]]<br />
<br />
[[Progressive Cactus]]<br />
<br />
[[OrthoMCL]]<br />
<br />
[[Diamond Blast]]<br />
<br />
[[Singularity Containers]]<br />
<br />
== Other Services ==<br />
<br />
[[Blast2go]]<br />
<br />
== Best Practice ==<br />
<br />
[[Running BLAST optimally]]<br />
<br />
[[Use scratch space]]<br />
<br />
== Tutorials ==<br />
<br />
[[RNA Seq on HPC]]<br />
<br />
[[RNA Seq on HPC (using Trinity)]]<br />
<br />
[[Genome Annotation]]<br />
<br />
== Tips ==<br />
<br />
[[Splitting Multifastas]]<br />
<br />
== Workarounds ==<br />
<br />
[[Java memory allocation issues]]<br />
<br />
[[convert_fasta_to_1l_fasta.sh]]</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Main_Page&diff=54144Main Page2022-10-27T16:33:53Z<p>Ibers-admin: /* Workarounds */</p>
<hr />
<div>[[File:IBERS-logo-Bi.jpg]]<br />
<br />
<br />
Welcome to the HPC Wiki for the [http://bioinformatics.ibers.aber.ac.uk Bioinformatics] research group at [http://www.aber.ac.uk Aberystwyth University]. The aim of this wiki is to allow researchers in the field of Bioinformatics in IBERS at Aberystwyth to document common tasks using the High Performance Computing cluster.<br />
<br />
== Editing this Wiki ==<br />
<br />
Please email ibers-cs@aber.ac.uk to ask for access to edit this wiki.<br />
<br />
== Troubleshooting ==<br />
<br />
Please read these sections if you are having trouble. There is a chance you can resolve the problem on your own, and if not, you will be able to ask for help more effectively. Some (most) of these ideas are shamelessly taken from the [https://stackoverflow.com/help/ Stack Overflow help pages].<br />
<br />
[[Asking for Help]]<br />
<br />
== Remote Working Resources ==<br />
<br />
[[VPN and SSH via central]]<br />
<br />
[[Copying files with SCP/WinSCP/Filezilla/Cyberduck]]<br />
<br />
[[Video conferencing and collaboration tools]]<br />
<br />
[[Running graphical programs remotely]]<br />
<br />
== Super Computing Wales Guides ==<br />
<br />
[[SCW Introduction]]<br />
<br />
[[SCW Training Materials]]<br />
<br />
[[SCW Access Procedures]]<br />
<br />
<br />
== IBERS HPC Guides ==<br />
<br />
'''Overview'''<br />
<br />
[[Bert and Ernie - An Overview]]<br />
<br />
[[Nodes, Cores, Slots]]<br />
<br />
[[Scheduling and queuing]]<br />
<br />
[[Getting an account]]<br />
<br />
'''Using the HPC'''<br />
<br />
[[Quick start]]<br />
<br />
[[Your disk space]]<br />
<br />
[[Module Environment]]<br />
<br />
[[Installing your own modules]]<br />
<br />
[[Submitting your job using Slurm]]<br />
<br />
[[Complex submissions]]<br />
<br />
[[Monitoring your jobs]]<br />
<br />
[[Available Software]]<br />
<br />
[[UNIX Graphical interface]]<br />
<br />
[[Array jobs]]<br />
<br />
[[ABySS]]<br />
<br />
== Tools to access the HPC (Windows) ==<br />
<br />
[[Putty]]<br />
<br />
[[MobaXterm]]<br />
<br />
[[Filezilla]]<br />
<br />
== Slides and Talks ==<br />
<br />
[[File:Containers.pdf]]<br />
<br />
== Bioinformatics Working Group related ==<br />
<br />
[[Meeting minutes]]<br />
<br />
== Special HPC Workflows ==<br />
<br />
[[Matlab]]<br />
<br />
[[Blast]]<br />
<br />
[[Active Perl]]<br />
<br />
[[SPRINT]]<br />
<br />
[[Progressive Cactus]]<br />
<br />
[[OrthoMCL]]<br />
<br />
[[Diamond Blast]]<br />
<br />
[[Singularity Containers]]<br />
<br />
== Other Services ==<br />
<br />
[[Blast2go]]<br />
<br />
== Best Practice ==<br />
<br />
[[Running BLAST optimally]]<br />
<br />
[[Use scratch space]]<br />
<br />
== Tutorials ==<br />
<br />
[[RNA Seq on HPC]]<br />
<br />
[[RNA Seq on HPC (using Trinity)]]<br />
<br />
[[Genome Annotation]]<br />
<br />
== Tips ==<br />
<br />
[[Splitting Multifastas]]<br />
<br />
== Workarounds ==<br />
<br />
[[Java memory allocation issues]]<br />
<br />
[[convert_fasta_to_1l_fasta.sh]]<br />
<br />
[[Your software/script uses threads and works locally, but crashes if run under the SGE's queue]]</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Singularity_Containers&diff=54143Singularity Containers2022-10-27T16:33:20Z<p>Ibers-admin: </p>
<hr />
<div>== What is Singularity ==<br />
<br />
Singularity [https://www.sylabs.io/] is a container system that doesn't need root access. A container is a set of applications and a minimal operating system bundled together in a single file. Containers are a good way to ensure that anyone running the software is using the exact same set of libraries and dependencies, which helps reproducible science: we want somebody else to be able to recreate our results. They also allow us to run newer or different operating systems than the one installed on the HPC.<br />
<br />
<br />
== Loading singularity ==<br />
<br />
Singularity has now been renamed Apptainer, but the singularity program is still available if you load the apptainer module.<br />
<br />
Run the command (or add it to your submission script):<br />
<br />
module load apptainer <br />
<br />
<br />
== Obtaining Containers ==<br />
<br />
Singularity Hub [https://singularity-hub.org/collections] contains a number of premade containers which you can download and use. To download these run:<br />
<br />
singularity pull shub://<username>/<imagename>:<tag> <br />
<br />
e.g.<br />
<br />
singularity pull shub://SupercomputingWales/singularity_hub:base_image <br />
<br />
This will then download the file to <username>-singularity_hub-master-<image name>.simg <br />
<br />
== Running a shell in a container ==<br />
<br />
The singularity shell command will run a shell inside the container, from which you can execute any commands provided by software installed in the container.<br />
<br />
singularity shell <image name><br />
<br />
e.g.<br />
<br />
singularity shell SupercomputingWales-singularity_hub-master-base_image.simg <br />
<br />
== Running the container's default actions ==<br />
<br />
Most containers specify a default command which they will run. The "singularity run" command will execute this.<br />
<br />
singularity run <image name><br />
<br />
<br />
== Accessing the host file system from inside the container ==<br />
<br />
singularity shell -B /ibers/ernie/home:/home <imagename><br />
<br />
This will mount the directory /ibers/ernie/home from the host under /home in the container. Your own home directory will be under /home/<userid>. Note that accessing the ~ or ~<userid> directories won't work.<br />
<br />
== Writing your own containers ==<br />
<br />
To make your own containers you'll probably have to install singularity on your own computer, as building a container requires root access.<br />
<br />
This example takes an Ubuntu 16.04 image as a base, then installs the program cowsay. The installation happens when the container is built; when the container is run, it executes cowsay with the arguments given on the command line.<br />
<br />
<br />
bootstrap: docker<br />
From: ubuntu:16.04<br />
<br />
%help<br />
Example container for Cowsay<br />
<br />
%labels<br />
MAINTAINER IBERS Admin<br />
<br />
%environment<br />
#configure our locale, without this we'll get locale errors<br />
export LC_ALL=C<br />
#cowsay installs to /usr/games, but this isn't in the path by default<br />
export PATH=/usr/games:$PATH<br />
<br />
%post <br />
apt-get update<br />
apt-get -y install cowsay<br />
<br />
%runscript<br />
cowsay $@<br />
<br />
<br />
To build the container save the above example in a file called Singularity and run:<br />
<br />
sudo singularity build cowsay.simg Singularity<br />
<br />
This will create an image file called cowsay.simg containing all the required software to run cowsay in an Ubuntu 16.04 operating system.<br />
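Assuming the build succeeded, the new image can then be used like any other container (cowsay.simg as created above; the message is just an example):<br />

```shell
# Run the container's %runscript, passing the arguments through to cowsay
singularity run cowsay.simg "Moo from inside the container"

# Or open an interactive shell inside the image
singularity shell cowsay.simg
```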
<br />
== Publishing a Container ==<br />
<br />
* Publish the Singularity file on Github.<br />
* Create an account on Singularity Hub.<br />
* Add this repository to Singularity Hub; it will be built automatically and made available for download using the singularity pull command.<br />
<br />
== More Information ==<br />
<br />
* [[:file:Containers.pdf|Presentation on containers]] from the July 2018 Bioinformatics workshop.<br />
* [https://www.sylabs.io/guides/2.5.1/user-guide Official Documentation]</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Main_Page&diff=54142Main Page2022-10-27T16:32:14Z<p>Ibers-admin: /* Slides and Talks */</p>
<hr />
<div>[[File:IBERS-logo-Bi.jpg]]<br />
<br />
<br />
Welcome to the HPC Wiki for the [http://bioinformatics.ibers.aber.ac.uk Bioinformatics] research group at [http://www.aber.ac.uk Aberystwyth University]. The aim of this wiki is to allow researchers in the field of Bioinformatics in IBERS at Aberystwyth to document common tasks using the High Performance Computing cluster.<br />
<br />
== Editing this Wiki ==<br />
<br />
Please email ibers-cs@aber.ac.uk to ask for access to edit this wiki.<br />
<br />
== Troubleshooting ==<br />
<br />
Please read these sections if you are having trouble. There is a chance you can resolve the problem on your own, and if not, you will be able to ask for help more effectively. Some (most) of these ideas are shamelessly taken from the [https://stackoverflow.com/help/ Stack Overflow help pages].<br />
<br />
[[Asking for Help]]<br />
<br />
== Remote Working Resources ==<br />
<br />
[[VPN and SSH via central]]<br />
<br />
[[Copying files with SCP/WinSCP/Filezilla/Cyberduck]]<br />
<br />
[[Video conferencing and collaboration tools]]<br />
<br />
[[Running graphical programs remotely]]<br />
<br />
== Super Computing Wales Guides ==<br />
<br />
[[SCW Introduction]]<br />
<br />
[[SCW Training Materials]]<br />
<br />
[[SCW Access Procedures]]<br />
<br />
<br />
== IBERS HPC Guides ==<br />
<br />
'''Overview'''<br />
<br />
[[Bert and Ernie - An Overview]]<br />
<br />
[[Nodes, Cores, Slots]]<br />
<br />
[[Scheduling and queuing]]<br />
<br />
[[Getting an account]]<br />
<br />
'''Using the HPC'''<br />
<br />
[[Quick start]]<br />
<br />
[[Your disk space]]<br />
<br />
[[Module Environment]]<br />
<br />
[[Installing your own modules]]<br />
<br />
[[Submitting your job using Slurm]]<br />
<br />
[[Complex submissions]]<br />
<br />
[[Monitoring your jobs]]<br />
<br />
[[Available Software]]<br />
<br />
[[UNIX Graphical interface]]<br />
<br />
[[Array jobs]]<br />
<br />
[[ABySS]]<br />
<br />
== Tools to access the HPC (Windows) ==<br />
<br />
[[Putty]]<br />
<br />
[[MobaXterm]]<br />
<br />
[[Filezilla]]<br />
<br />
== Slides and Talks ==<br />
<br />
[[File:Containers.pdf]]<br />
<br />
== Bioinformatics Working Group related ==<br />
<br />
[[Meeting minutes]]<br />
<br />
== Special HPC Workflows ==<br />
<br />
[[Matlab]]<br />
<br />
[[Blast]]<br />
<br />
[[Active Perl]]<br />
<br />
[[SPRINT]]<br />
<br />
[[Progressive Cactus]]<br />
<br />
[[OrthoMCL]]<br />
<br />
[[Diamond Blast]]<br />
<br />
[[Singularity Containers]]<br />
<br />
== Other Services ==<br />
<br />
[[Blast2go]]<br />
<br />
== Best Practice ==<br />
<br />
[[Running BLAST optimally]]<br />
<br />
[[Use scratch space]]<br />
<br />
== Tutorials ==<br />
<br />
[[RNA Seq on HPC]]<br />
<br />
[[RNA Seq on HPC (using Trinity)]]<br />
<br />
[[Genome Annotation]]<br />
<br />
== Tips ==<br />
<br />
[[Splitting Multifastas]]<br />
<br />
== Workarounds ==<br />
<br />
[[Java memory allocation issues]]<br />
<br />
[[convert_fasta_to_1l_fasta.sh]]<br />
<br />
[[Your software/script uses threads and works locally, but crashes if run under the SGE's queue]]<br />
<br />
[[Intermittent login failures on the HPC and some VMs]]<br />
<br />
[[error importing function definition for `BASH_FUNC_module']]</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Main_Page&diff=54141Main Page2022-10-27T16:31:59Z<p>Ibers-admin: /* IBERS HPC Guides */</p>
<hr />
<div>[[File:IBERS-logo-Bi.jpg]]<br />
<br />
<br />
Welcome to the HPC Wiki for the [http://bioinformatics.ibers.aber.ac.uk Bioinformatics] research group at [http://www.aber.ac.uk Aberystwyth University]. The aim of this wiki is to allow researchers in the field of Bioinformatics in IBERS at Aberystwyth to document common tasks using the High Performance Computing cluster.<br />
<br />
== Editing this Wiki ==<br />
<br />
Please email ibers-cs@aber.ac.uk to ask for access to edit this wiki.<br />
<br />
== Troubleshooting ==<br />
<br />
Please read these sections if you are having trouble. There is a chance you can resolve the problem on your own, and if not, you will be able to ask for help more effectively. Some (most) of these ideas are shamelessly taken from the [https://stackoverflow.com/help/ Stack Overflow help pages].<br />
<br />
[[Asking for Help]]<br />
<br />
== Remote Working Resources ==<br />
<br />
[[VPN and SSH via central]]<br />
<br />
[[Copying files with SCP/WinSCP/Filezilla/Cyberduck]]<br />
<br />
[[Video conferencing and collaboration tools]]<br />
<br />
[[Running graphical programs remotely]]<br />
<br />
== Super Computing Wales Guides ==<br />
<br />
[[SCW Introduction]]<br />
<br />
[[SCW Training Materials]]<br />
<br />
[[SCW Access Procedures]]<br />
<br />
<br />
== IBERS HPC Guides ==<br />
<br />
'''Overview'''<br />
<br />
[[Bert and Ernie - An Overview]]<br />
<br />
[[Nodes, Cores, Slots]]<br />
<br />
[[Scheduling and queuing]]<br />
<br />
[[Getting an account]]<br />
<br />
'''Using the HPC'''<br />
<br />
[[Quick start]]<br />
<br />
[[Your disk space]]<br />
<br />
[[Module Environment]]<br />
<br />
[[Installing your own modules]]<br />
<br />
[[Submitting your job using Slurm]]<br />
<br />
[[Complex submissions]]<br />
<br />
[[Monitoring your jobs]]<br />
<br />
[[Available Software]]<br />
<br />
[[UNIX Graphical interface]]<br />
<br />
[[Array jobs]]<br />
<br />
[[ABySS]]<br />
<br />
== Tools to access the HPC (Windows) ==<br />
<br />
[[Putty]]<br />
<br />
[[MobaXterm]]<br />
<br />
[[Filezilla]]<br />
<br />
== Slides and Talks ==<br />
<br />
[[File:Sge queue changes.pdf]]<br />
<br />
[[File:Containers.pdf]]<br />
<br />
== Bioinformatics Working Group related ==<br />
<br />
[[Meeting minutes]]<br />
<br />
== Special HPC Workflows ==<br />
<br />
[[Matlab]]<br />
<br />
[[Blast]]<br />
<br />
[[Active Perl]]<br />
<br />
[[SPRINT]]<br />
<br />
[[Progressive Cactus]]<br />
<br />
[[OrthoMCL]]<br />
<br />
[[Diamond Blast]]<br />
<br />
[[Singularity Containers]]<br />
<br />
== Other Services ==<br />
<br />
[[Blast2go]]<br />
<br />
== Best Practice ==<br />
<br />
[[Running BLAST optimally]]<br />
<br />
[[Use scratch space]]<br />
<br />
== Tutorials ==<br />
<br />
[[RNA Seq on HPC]]<br />
<br />
[[RNA Seq on HPC (using Trinity)]]<br />
<br />
[[Genome Annotation]]<br />
<br />
== Tips ==<br />
<br />
[[Splitting Multifastas]]<br />
<br />
== Workarounds ==<br />
<br />
[[Java memory allocation issues]]<br />
<br />
[[convert_fasta_to_1l_fasta.sh]]<br />
<br />
[[Your software/script uses threads and works locally, but crashes if run under the SGE's queue]]<br />
<br />
[[Intermittent login failures on the HPC and some VMs]]<br />
<br />
[[error importing function definition for `BASH_FUNC_module']]</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Main_Page&diff=54140Main Page2022-10-27T16:31:33Z<p>Ibers-admin: /* IBERS HPC Guides */</p>
<hr />
<div>[[File:IBERS-logo-Bi.jpg]]<br />
<br />
<br />
Welcome to the HPC Wiki for the [http://bioinformatics.ibers.aber.ac.uk Bioinformatics] research group at [http://www.aber.ac.uk Aberystwyth University]. The aim of this wiki is to allow researchers in the field of Bioinformatics in IBERS at Aberystwyth to document common tasks using the High Performance Computing cluster.<br />
<br />
== Editing this Wiki ==<br />
<br />
Please email ibers-cs@aber.ac.uk to ask for access to edit this wiki.<br />
<br />
== Troubleshooting ==<br />
<br />
Please read these sections if you are having trouble. There is a chance you can resolve the problem on your own, and if not, you will be able to ask for help more effectively. Some (most) of these ideas are shamelessly taken from the [https://stackoverflow.com/help/ Stack Overflow help pages].<br />
<br />
[[Asking for Help]]<br />
<br />
== Remote Working Resources ==<br />
<br />
[[VPN and SSH via central]]<br />
<br />
[[Copying files with SCP/WinSCP/Filezilla/Cyberduck]]<br />
<br />
[[Video conferencing and collaboration tools]]<br />
<br />
[[Running graphical programs remotely]]<br />
<br />
== Super Computing Wales Guides ==<br />
<br />
[[SCW Introduction]]<br />
<br />
[[SCW Training Materials]]<br />
<br />
[[SCW Access Procedures]]<br />
<br />
<br />
== IBERS HPC Guides ==<br />
<br />
'''Overview'''<br />
<br />
[[Bert and Ernie - An Overview]]<br />
<br />
[[Nodes, Cores, Slots]]<br />
<br />
[[Scheduling and queuing]]<br />
<br />
[[Getting an account]]<br />
<br />
'''Using the HPC'''<br />
<br />
[[Quick start]]<br />
<br />
[[Your disk space]]<br />
<br />
[[Module Environment]]<br />
<br />
[[Installing your own modules]]<br />
<br />
[[Submitting your job using Slurm]]<br />
<br />
[[Complex submissions]]<br />
<br />
[[Monitoring your jobs]]<br />
<br />
[[Available Software]]<br />
<br />
[[UNIX Graphical interface]]<br />
<br />
[[Array jobs]]<br />
<br />
[[ABySS]]<br />
<br />
[[WinSCP from external]]<br />
<br />
== Tools to access the HPC (Windows) ==<br />
<br />
[[Putty]]<br />
<br />
[[MobaXterm]]<br />
<br />
[[Filezilla]]<br />
<br />
== Slides and Talks ==<br />
<br />
[[File:Sge queue changes.pdf]]<br />
<br />
[[File:Containers.pdf]]<br />
<br />
== Bioinformatics Working Group related ==<br />
<br />
[[Meeting minutes]]<br />
<br />
== Special HPC Workflows ==<br />
<br />
[[Matlab]]<br />
<br />
[[Blast]]<br />
<br />
[[Active Perl]]<br />
<br />
[[SPRINT]]<br />
<br />
[[Progressive Cactus]]<br />
<br />
[[OrthoMCL]]<br />
<br />
[[Diamond Blast]]<br />
<br />
[[Singularity Containers]]<br />
<br />
== Other Services ==<br />
<br />
[[Blast2go]]<br />
<br />
== Best Practice ==<br />
<br />
[[Running BLAST optimally]]<br />
<br />
[[Use scratch space]]<br />
<br />
== Tutorials ==<br />
<br />
[[RNA Seq on HPC]]<br />
<br />
[[RNA Seq on HPC (using Trinity)]]<br />
<br />
[[Genome Annotation]]<br />
<br />
== Tips ==<br />
<br />
[[Splitting Multifastas]]<br />
<br />
== Workarounds ==<br />
<br />
[[Java memory allocation issues]]<br />
<br />
[[convert_fasta_to_1l_fasta.sh]]<br />
<br />
[[Your software/script uses threads and works locally, but crashes if run under the SGE's queue]]<br />
<br />
[[Intermittent login failures on the HPC and some VMs]]<br />
<br />
[[error importing function definition for `BASH_FUNC_module']]</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=ABySS&diff=54139ABySS2022-10-27T16:30:53Z<p>Ibers-admin: </p>
<hr />
<div>= This page is out of date and all information is incorrect. =<br />
<br />
ABySS is a de novo assembler for short-read data.<br />
<br />
== Installing ==<br />
<br />
I used the following script to install ABySS in my home folder (/home/rov). I first had to install the sparsehash and Boost libraries.<br />
<nowiki><br />
#!/bin/sh<br />
<br />
#make and install abyss in my local directory<br />
#cd into abyss source distribution dir<br />
#and run this script using ../build_abyss.sh<br />
<br />
module load openmpi<br />
<br />
MYDIR="/ibers/ernie/home/rov/programs"<br />
<br />
#boost does not need to be compiled<br />
RJVBOOST="--with-boost=${MYDIR}/boost_1_54_0"<br />
<br />
#found using find / 2> /dev/null | grep openmpi<br />
RJVOPENMPI="--with-mpi=/cm/shared/apps/openmpi/gcc/64/1.4.4"<br />
<br />
#see https://groups.google.com/forum/#!msg/abyss-users/6NXwP959RTI/tqLtO14a4A8J<br />
RJVLDFLAGS="LDFLAGS=-L/cm/shared/apps/openmpi/gcc/64/1.4.4/lib64"<br />
<br />
#probably only one of these required<br />
RJVCPPFLAGS="-I${MYDIR}/sparsehash-2.0.2/src/google/sparsehash -I${MYDIR}/sparsehash-2.0.2/src/google -I${MYDIR}/sparsehash-2.0.2/src"<br />
<br />
#where to put abyss binaries<br />
RJVINSTALL="--prefix=${MYDIR}/abyss-local"<br />
<br />
#ensure max kmer size is > 64<br />
RJVMAXK='--enable-maxk=96'<br />
<br />
./configure ${RJVINSTALL} ${RJVBOOST} ${RJVOPENMPI} ${RJVMAXK} ${RJVLDFLAGS} CPPFLAGS="${RJVCPPFLAGS}"<br />
make<br />
make install<br />
</nowiki><br />
<br />
==preparing reads==<br />
Before assembling my Illumina reads, I first used trimmomatic to remove adapter sequences. I then organised the fastq files into libraries, one for each fragment size.<br />
<br />
I then used a custom python script to process the mate pair libraries to filter out read pairs which did not contain a valid mate pair.<br />
<br />
I then renamed the reads from the default Illumina name to ABySS compatible names. ABySS expected the first read of a pair to end with /1, and the second read to have the same name but end with /2. I used my own python script to do this renaming.<br />
<br />
==running==<br />
<br />
I used the following script to assemble my Illumina paired end and mate pair data.<br />
<br />
I found that using /dev/shm as the temporary directory (TMPDIR in the script below) increased the running speed. This folder is actually a RAM based file system, so the temporary files stay in RAM. The default temporary folder (probably /tmp, only 2GB) is too small. Using a folder on the scratch drive means that data has to cross the network, which makes it too slow.<br />
<br />
Using /dev/shm means that the temporary files are actually taking up RAM on the node. I found that for my data set only 25G of temporary files are needed, therefore this can fit into RAM okay. /dev/shm is limited to half the total RAM on the machine (i.e. 500/2 = 250G for the large node). Be sure to factor this into your -l h_maxvmem in the SGE script. Files in /dev/shm are not automatically deleted when the job ends, so be sure to explicitly delete them somehow, even if the job aborts. /dev/shm is shared between all jobs running on the node, so put your files under a folder called /dev/shm/[your username] or similar to avoid interfering with other users' files.<br />
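One way to get the cleanup right is shown in the sketch below: create a per-user temporary directory and remove it with a trap, which fires even if the script exits early. This is an illustrative sketch only; the path under /dev/shm is an example, not a required location.<br />

```shell
#!/bin/sh
# Illustrative sketch: per-user temp dir in /dev/shm, removed on exit.
# The path below is an example; substitute your own username/job name.
TMPDIR="/dev/shm/${USER:-demo}/abyss-tmp"
mkdir -p "$TMPDIR"

# Delete the RAM-backed files when the script ends, even on failure,
# so they do not keep taking up RAM on the node.
trap 'rm -rf "$TMPDIR"' EXIT

export TMPDIR
```

The trap runs on any exit from the script, so the files are cleaned up whether the job finishes normally or aborts.<br />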
<br />
<nowiki><br />
#$ -S /bin/sh<br />
#$ -N abyss<br />
#$ -o ../logs/$JOB_NAME.out.$JOB_ID<br />
#$ -e ../logs/$JOB_NAME.err.$JOB_ID<br />
#$ -cwd<br />
#$ -l h_vmem=245G<br />
<br />
module load openmpi<br />
<br />
SLOTS=16<br />
RDIR='/ibers/ernie/scratch/rov/large_data/avena_renamed/'<br />
ABYSSPE='/ibers/ernie/home/rov/programs/abyss-local/bin/abyss-pe'<br />
KMER=51<br />
OUT="k${KMER}-001"<br />
HOSTFILE='./myhostfile'<br />
export PATH=${PATH}:/ibers/ernie/home/rov/programs/abyss-local/bin<br />
#export TMPDIR=/ibers/ernie/scratch/rov/abyss_atlantica_assembly_2013-08-08/tmp<br />
export TMPDIR=/dev/shm/rov-abyss-tmp/k51_001<br />
<br />
rm -rf ${TMPDIR}<br />
mkdir -p ${TMPDIR}<br />
<br />
mkdir -p $OUT<br />
cd $OUT<br />
echo `hostname` slots=${SLOTS} > ${HOSTFILE}<br />
<br />
${ABYSSPE} \<br />
np=${SLOTS} mpirun="mpirun -hostfile ${HOSTFILE}" \<br />
k=${KMER} \<br />
name=${OUT} \<br />
lib='pe200 pe700 peMP1' \<br />
mp='mpMP1' \<br />
pe200="${RDIR}pe200-R1.fq ${RDIR}pe200-R2.fq" \<br />
pe700="${RDIR}pe700-R1.fq ${RDIR}pe700-R2.fq" \<br />
peMP1="${RDIR}peMP1-R1.fq ${RDIR}peMP1-R2.fq" \<br />
mpMP1="${RDIR}mpMP1-R1.fq ${RDIR}mpMP1-R2.fq" \<br />
se="${RDIR}se.fq"<br />
<br />
rm -rf ${TMPDIR}<br />
</nowiki><br />
<br />
Any questions, email rov@aber.ac.uk<br />
<br />
Below is an example of another working script, as seen ABySS and openmpi had to be loaded in script before ABySS would function. This example shows a paired end fragmented library being assembled by ABySS with the k-mer size 25.<br />
<br />
<nowiki><br />
#$ -S /bin/sh<br />
#$ -N [running name]<br />
#$ -j y<br />
#$ -M [email address]<br />
#$ -q intel.q<br />
#$ -cwd<br />
<br />
module load openmpi/open64<br />
module load abyss/1.3.7 <br />
<br />
abyss-pe -j6 k=25 n=10 name=McCabeABySS lib='L001 L002 L003 L004 L006 L007' \<br />
L001='McCabe-gDNA-RichardDewhurst-LAB_NoIndex_L001_R1_001_qualtrim.trimmed5P.fastq.oneline.matched McCabe-gDNA-RichardDewhurst-LAB_NoIndex_L001_R2_001_qualtrim.trimmed5P.fastq.oneline.matched' \<br />
L002='McCabe-gDNA-RichardDewhurst-LAB_NoIndex_L002_R1_001_qualtrim.trimmed5P.fastq.oneline.matched McCabe-gDNA-RichardDewhurst-LAB_NoIndex_L002_R2_001_qualtrim.trimmed5P.fastq.oneline.matched' \<br />
L003='McCabe-gDNA-RichardDewhurst-LAB_NoIndex_L003_R1_001_qualtrim.trimmed5P.fastq.oneline.matched McCabe-gDNA-RichardDewhurst-LAB_NoIndex_L003_R2_001_qualtrim.trimmed5P.fastq.oneline.matched' \<br />
L004='McCabe-gDNA-RichardDewhurst-SAB_NoIndex_L004_R1_001_qualtrim.trimmed5P.fastq.oneline.matched McCabe-gDNA-RichardDewhurst-SAB_NoIndex_L004_R2_001_qualtrim.trimmed5P.fastq.oneline.matched' \<br />
L006='McCabe-gDNA-RichardDewhurst-SAB_NoIndex_L006_R1_001_qualtrim.trimmed5P.fastq.oneline.matched McCabe-gDNA-RichardDewhurst-SAB_NoIndex_L006_R2_001_qualtrim.trimmed5P.fastq.oneline.matched' \<br />
L007='McCabe-gDNA-RichardDewhurst-SAB_NoIndex_L007_R1_001_qualtrim.trimmed5P.fastq.oneline.matched McCabe-gDNA-RichardDewhurst-SAB_NoIndex_L007_R2_001_qualtrim.trimmed5P.fastq.oneline.matched'<br />
</nowiki></div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Array_jobs&diff=54138Array jobs2022-10-27T16:30:03Z<p>Ibers-admin: </p>
<hr />
<div>= This page is out of date and all information is incorrect. It will be updated shortly. =<br />
<br />
Array jobs allow you to submit thousands of tasks at once without sending thousands of separate jobs, reducing the load placed on the scheduler and keeping it responsive.<br />
<br />
An example of an array job is <br />
<br />
<nowiki><br />
#$ -S /bin/sh<br />
#$ -N Usearch<br />
#$ -j y<br />
#$ -m e<br />
#$ -M USER@aber.ac.uk<br />
#$ -q large.q,intel.q,amd.q<br />
#$ -cwd<br />
#$ -pe multithread 8<br />
#$ -l h_vmem=1G<br />
#$ -l h_rt=5:00:00<br />
#$ -V<br />
#$ -t 1-1000<br />
<br />
<br />
echo $1<br />
<br />
echo $2<br />
<br />
base=$( basename $1)<br />
<br />
i=$(expr $SGE_TASK_ID - 1)<br />
<br />
echo $i<br />
<br />
usearch -ublast ../../split-Contigs_${2}.fa-${i}.fa -db $1 -evalue 0.01 -accel 0.5 -blast6out ${i}.${base}.usearch.txt <br />
<br />
</nowiki><br />
<br />
The "-t 1-1000" directive converts this job from a normal job into an array job and provides the variable $SGE_TASK_ID. <br />
<br />
In this example usearch -ublast is run 1000 times on input files whose names differ only by a number. Using $SGE_TASK_ID you can step through all the input files.<br />
<br />
However, $SGE_TASK_ID can only start from 1, so if your files are numbered from 0 you need code like the example above, which creates the variable $i as $SGE_TASK_ID - 1 so that the numbering starts from 0.<br />
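The offset can be sketched on its own as follows (the task ID is hard-coded here for illustration; under SGE the scheduler sets $SGE_TASK_ID for each task, and the file name pattern is a stand-in for your own):<br />

```shell
#!/bin/sh
# Under SGE this variable is set automatically for each task;
# it is fixed here only to show the arithmetic.
SGE_TASK_ID=1

# Shift the 1-based task ID down to match 0-based file numbering.
i=$(expr $SGE_TASK_ID - 1)

# The input file this task will process (hypothetical name pattern).
file="split-Contigs_example.fa-${i}.fa"
echo "$file"
```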
<br />
Using an array job still provides an output file for each task in the array, so you can still monitor each one separately and check whether one fails while the others succeed.<br />
<br />
One of the main benefits is that it reduces the number of entries you see when using 'qstat': instead of having to scroll through thousands of jobs, the array appears as a single entry plus each of the currently running tasks, which are shown separately.<br />
<br />
<br />
For a more detailed look at the abilities of array jobs look at: http://wiki.gridengine.info/wiki/index.php/Simple-Job-Array-Howto<br />
<br />
<br />
== Limiting the number of tasks that run at any one time ==<br />
<br />
In the above example, there are 1000 tasks that need to be run. If there is nothing running on the HPC at the time, over 500 may start at once. While this may seem like a good thing, depending on the type of task it may cause problems: if each task is very IO intensive, for example, you may not be able to run 500 at the same time. So although a task array is easier to manage as a single submission, it can cause issues. <br />
<br />
One way around this is to amend the task line in the sun grid engine script to say that you wish 1-1000 to run, but only 20 to run at any one time. The syntax for this is as follows;<br />
<br />
#$ -t 1-1000 -tc 20<br />
<br />
NOTE: the -tc 20 which limits to 20 concurrent tasks.</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Monitoring_your_jobs&diff=54137Monitoring your jobs2022-10-27T16:28:23Z<p>Ibers-admin: </p>
<hr />
<div>There are various ways for you to monitor and check up on your running and completed jobs.<br />
<br />
<br />
=== Check on what you've submitted ===<br />
<br />
Once you have submitted your job scripts, you may want to check on the progress of what is running. This is achieved using the <nowiki>squeue</nowiki> command. This will show you your jobs. It might look something like;<br />
<br />
<nowiki><br />
[user@login01(aber) ~]$ squeue <br />
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)<br />
200133 amd myScript cos R 0:03 1 node008<br />
200134 amd myScript cos R 0:01 1 node008<br />
200135 amd myScript cos R 0:01 1 node008<br />
200136 amd myScript cos R 0:01 1 node008<br />
200137 amd myScript cos R 0:01 1 node008<br />
200138 amd myScript cos R 0:02 1 node008<br />
200139 amd myScript cos R 0:02 1 node008<br />
200140 amd myScript cos R 0:02 1 node008<br />
<br />
</nowiki><br />
<br />
<br />
=== Check the status of a job ===<br />
<br />
You can use the <nowiki>squeue -j JOB_ID</nowiki> command to get information about a running or queued job. Below is what you might find on a running job.<br />
<br />
<nowiki><br />
[user@bert ~]$ squeue -j 200133<br />
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)<br />
200133 amd myScript cos R 0:03 1 node008<br />
</nowiki><br />
<br />
<br />
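If you have many jobs queued, a per-state count is often more useful than the full listing. The sketch below assumes squeue supports the -h (no header) and -o "%t" (compact state code) options; since squeue only exists on the cluster, sample state codes are piped in its place here:<br />

```shell
#!/bin/sh
# On the cluster you would run:
#   squeue -u $USER -h -o "%t" | sort | uniq -c
# Sample squeue state codes (two running, one pending) stand in for
# the real command output here.
counts=$(printf "R\nR\nPD\n" | sort | uniq -c | awk '{print $2"="$1}' | paste -sd, -)
echo "$counts"
```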
=== Job States ===<br />
<br />
R = Job is running <br />
PD = Job is waiting to run<br />
CG = Job is completing</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Complex_submissions&diff=54136Complex submissions2022-10-27T16:21:30Z<p>Ibers-admin: </p>
<hr />
<div>In [[Submitting your job using Slurm]] we looked at the following script;<br />
<br />
<nowiki><br />
#!/bin/bash --login<br />
<br />
# Specify the queue (also known as a partition)<br />
#SBATCH --partition=amd<br />
<br />
# run a single task, using a single CPU core<br />
#SBATCH --ntasks=1<br />
<br />
#SBATCH --output=myScript.o%J<br />
<br />
#run a program command to print hostname and uptime<br />
/bin/hostname && /bin/uptime<br />
</nowiki><br />
<br />
As mentioned in [[Submitting your job using Slurm]], unless the HPC is quiet, you may find it difficult to get this job to run or that you need more resources. This section discusses how to request resources using limits and gives some example scripts.<br />
<br />
== Slurm and Limits ==<br />
<br />
Slurm does not know what you're attempting to do until you tell it. There are three pieces of information the scheduler needs to best load balance your job in the queue: how much memory you need, how many CPU cores (also known as slots) you need, and how long the job is expected to run for. This information is passed to the scheduler from your job script. If you fail to specify the number of CPU cores, memory or time required, it will use the scheduler defaults, which may not be the best for your task.<br />
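Putting those three limits together, a job script skeleton might look like the following. The values are illustrative only and should be tuned to your job; each directive is explained in the sections below.<br />

```shell
#!/bin/bash --login

# Which queue (partition) to use
#SBATCH --partition=amd

# How many CPU cores (tasks) are needed
#SBATCH --ntasks=4

# How much memory is needed on each node
#SBATCH --mem=8G

# How long the job may run (hours:minutes:seconds)
#SBATCH --time=02:00:00

# The actual work would go here; a placeholder for illustration.
msg="limits requested: 4 cores, 8G RAM, 2 hours"
echo "$msg"
```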
<br />
<br />
=== Requesting CPU Cores ===<br />
<br />
In the example above we only requested one task, which Slurm will map to using one core. We can request more cores by increasing this number, however there is no guarantee that these will be on the same node (and for many tasks we don't care if the cores are on the same node as each other). Just increasing the number of tasks Slurm allocates doesn't make multiple copies of our job run. To run multiple copies we have to use the srun command to launch our program and tell it how many copies we'd like. <br />
<br />
<nowiki><br />
#!/bin/bash --login<br />
<br />
# Specify the queue (also known as a partition)<br />
#SBATCH --partition=amd<br />
<br />
# run 8 tasks, each using a single CPU core<br />
#SBATCH --ntasks=8<br />
<br />
#SBATCH --output=myScript.o%J<br />
<br />
#run 8 copies of the program to print hostname and uptime<br />
srun --ntasks=8 bash -c "/bin/hostname && /bin/uptime"<br />
</nowiki><br />
<br />
<br />
This will submit your job and require 8 cores on any number of nodes (in the same partition) to run. At this point, if you submit this job, it will wait until 8 cores become available. <br />
<br />
<br />
<br />
=== Requesting all jobs run on a single node ===<br />
<br />
Adding the --nodes=1 parameter to your script will force all tasks to run on the same node. Note that the number of tasks requested must be less than or equal to the number of CPU cores available on a node. <br />
<br />
<br />
=== Requesting Memory ===<br />
<br />
Requesting memory is achieved by using the --mem option in your script. <br />
<br />
You can specify the memory limits like this:<br />
<br />
<nowiki><br />
#SBATCH --mem=40G<br />
</nowiki><br />
<br />
This will request 40G of RAM is allocated on each node the job runs on. If the job exceeds this memory usage then it will be killed by Slurm. For information about the specification of the nodes available, please see [[Bert and Ernie - An Overview]]. <br />
<br />
If you submit a job requesting more memory than is available on a single node in the queue, your job will fail to run. e.g. if you submit a job requiring 512GB RAM to the intel queue, it will fail to run as the maximum amount of memory a node has in the intel queue is 192GB. <br />
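To check how much memory the nodes in each partition actually have, sinfo can report memory per node in megabytes. The sketch below assumes sinfo supports the -h (no header) and -o "%P %m" (partition and memory) options; since sinfo only exists on the cluster, sample output is piped in its place here:<br />

```shell
#!/bin/sh
# On the cluster you would run:
#   sinfo -h -o "%P %m"
# which prints "<partition> <memory-per-node-in-MB>"; sample values
# stand in for the real command output here.
fits=$(printf "amd 257000\nintel 192000\n" | awk '$2/1024 >= 200 {print $1}')
echo "$fits"
```

The awk filter keeps only partitions whose nodes have at least roughly 200GB of RAM, which is a quick way to see where a large --mem request can actually be satisfied.<br />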
<br />
=== Requesting Time ===<br />
<br />
You can limit how long a job is allowed to take. When this time limit is exceeded Slurm will stop the job. <br />
<br />
Requesting time limits is achieved by using the --time option in your script. The format options are either minutes, minutes:seconds, hours:minutes:seconds, days-hours, days-hours:minutes or days-hours:minutes:seconds. <br />
<br />
For example, to request a 24 hour time limit you could use either:<br />
<br />
<nowiki><br />
#SBATCH --time=24:00:00<br />
</nowiki><br />
<br />
or<br />
<br />
<nowiki><br />
#SBATCH --time=1-00<br />
</nowiki><br />
<br />
<br />
== An example script ==<br />
<br />
Here is a more realistic script that you may wish to run.<br />
<br />
<nowiki><br />
#!/bin/bash --login<br />
<br />
#specify which queue you wish to use<br />
#SBATCH --partition=amd<br />
<br />
#the job name as it appears in the queue<br />
#SBATCH --job-name=blast<br />
<br />
#output and error files<br />
#SBATCH --output=blast.o%J<br />
#SBATCH --error=blast.e%J<br />
<br />
<br />
#specify the number of tasks you require<br />
#SBATCH --ntasks=32<br />
<br />
#request all tasks run on the same node<br />
#SBATCH --nodes=1<br />
<br />
#specify how long the job will take<br />
#SBATCH --time=720:00:00<br />
<br />
#SBATCH --mem=100G<br />
<br />
#Load the blast module and then run the blast<br />
module load BLAST/blast-2.2.26<br />
blastall -p blastn -d /ibers/repository/public/blast_db/blast_june/nt -i myfile.fasta -o myfile.blast -a $SLURM_NTASKS -m 7<br />
</nowiki><br />
<br />
This script uses the amd queue, which we know has four nodes with 64 CPU cores and 256GB RAM and three nodes with 32 CPU cores and 98GB RAM. Since a large sequence is to be run, the time limit is set to 720 hours, or 30 days. I have requested 100GB of memory and 32 cores for this job, which means that of the 7 AMD nodes available, only the four 256GB nodes are actually able to run it.<br />
<br />
To run this script:<br />
<br />
<nowiki><br />
sbatch myscript.slurm<br />
</nowiki><br />
<br />
Finally, you may notice that the blast command uses the -a flag to specify the number of CPU cores required. The $SLURM_NTASKS variable is set by Slurm from the number of tasks you request. Using this notation you only need to specify the number of CPU cores in one place.</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Main_Page&diff=54135Main Page2022-10-27T15:14:24Z<p>Ibers-admin: /* IBERS HPC Guides */</p>
<hr />
<div>[[File:IBERS-logo-Bi.jpg]]<br />
<br />
<br />
Welcome to the HPC Wiki for the [http://bioinformatics.ibers.aber.ac.uk Bioinformatics] research group at [http://www.aber.ac.uk Aberystwyth University]. The aim of this wiki is to allow researchers in the field of Bioinformatics in IBERS at Aberystwyth to document common tasks using the High Performance Computing cluster.<br />
<br />
== Editing this Wiki ==<br />
<br />
Please email ibers-cs@aber.ac.uk to ask for access to edit this wiki.<br />
<br />
== Troubleshooting ==<br />
<br />
Please read these sections if you are having trouble. There is a chance that you can resolve the problem on your own, or at least ask for help more effectively. Some (most) of these ideas are shamelessly taken from the [https://stackoverflow.com/help/ stackoverflow help pages].<br />
<br />
[[Asking for Help]]<br />
<br />
== Remote Working Resources ==<br />
<br />
[[VPN and SSH via central]]<br />
<br />
[[Copying files with SCP/WinSCP/Filezilla/Cyberduck]]<br />
<br />
[[Video conferencing and collaboration tools]]<br />
<br />
[[Running graphical programs remotely]]<br />
<br />
== Super Computing Wales Guides ==<br />
<br />
[[SCW Introduction]]<br />
<br />
[[SCW Training Materials]]<br />
<br />
[[SCW Access Procedures]]<br />
<br />
<br />
== IBERS HPC Guides ==<br />
<br />
'''Overview'''<br />
<br />
[[Bert and Ernie - An Overview]]<br />
<br />
[[Nodes, Cores, Slots]]<br />
<br />
[[Scheduling and queuing]]<br />
<br />
[[Getting an account]]<br />
<br />
'''Using the HPC'''<br />
<br />
[[Quick start]]<br />
<br />
[[Your disk space]]<br />
<br />
[[Module Environment]]<br />
<br />
[[Installing your own modules]]<br />
<br />
[[Submitting your job using Slurm]]<br />
<br />
[[Complex submissions]]<br />
<br />
[[Monitoring your jobs]]<br />
<br />
[[Available Software]]<br />
<br />
[[UNIX Graphical interface]]<br />
<br />
[[Array jobs]]<br />
<br />
[[ABySS]]<br />
<br />
[[Updated Scheduler]]<br />
<br />
[[WinSCP from external]]<br />
<br />
== Tools to access the HPC (Windows) ==<br />
<br />
[[Putty]]<br />
<br />
[[MobaXterm]]<br />
<br />
[[Filezilla]]<br />
<br />
== Slides and Talks ==<br />
<br />
[[File:Sge queue changes.pdf]]<br />
<br />
[[File:Containers.pdf]]<br />
<br />
== Bioinformatics Working Group related ==<br />
<br />
[[Meeting minutes]]<br />
<br />
== Special HPC Workflows ==<br />
<br />
[[Matlab]]<br />
<br />
[[Blast]]<br />
<br />
[[Active Perl]]<br />
<br />
[[SPRINT]]<br />
<br />
[[Progressive Cactus]]<br />
<br />
[[OrthoMCL]]<br />
<br />
[[Diamond Blast]]<br />
<br />
[[Singularity Containers]]<br />
<br />
== Other Services ==<br />
<br />
[[Blast2go]]<br />
<br />
== Best Practice ==<br />
<br />
[[Running BLAST optimally]]<br />
<br />
[[Use scratch space]]<br />
<br />
== Tutorials ==<br />
<br />
[[RNA Seq on HPC]]<br />
<br />
[[RNA Seq on HPC (using Trinity)]]<br />
<br />
[[Genome Annotation]]<br />
<br />
== Tips ==<br />
<br />
[[Splitting Multifastas]]<br />
<br />
== Workarounds ==<br />
<br />
[[Java memory allocation issues]]<br />
<br />
[[convert_fasta_to_1l_fasta.sh]]<br />
<br />
[[Your software/script uses threads and works locally, but crashes if run under the SGE's queue]]<br />
<br />
[[Intermittent login failures on the HPC and some VMs]]<br />
<br />
[[error importing function definition for `BASH_FUNC_module']]</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Submitting_your_job_using_Slurm&diff=54134Submitting your job using Slurm2022-10-27T15:13:58Z<p>Ibers-admin: /* Queues */</p>
<hr />
<div><br />
'''Slurm''' is the queue management software used to distribute jobs to the nodes on Bert and Ernie.<br />
<br />
There are lots of excellent guides elsewhere so all the topics will not be covered in this wiki. However some initial commands and scripts are introduced here.<br />
<br />
In order to run a Slurm job, you require a script. Simply make a file using your favourite editor (e.g. vi or nano) and give it a name (e.g. myScript.slurm). Then you can begin to give the script some commands. This is what will be executed on each node. A very simple script is given below. This will run a UNIX command to tell you how long the machine has been on-line. <br />
<br />
NOTE: This is a very simple job script, it may work if there is little running on the HPC at the time, however you may find that it will fail if you have memory and CPU requirements. It is good practice to ensure that limits are in place so that your job is correctly placed into the queue. This way it will not affect other people's work and your job will be correctly scheduled. Look at [[Complex submissions]] for more information on this.<br />
<br />
<nowiki><br />
#!/bin/bash --login<br />
# specify the shell type<br />
<br />
# Specify the queue (also known as a partition)<br />
#SBATCH --partition=amd<br />
<br />
# run a single task, using a single CPU core<br />
#SBATCH --ntasks=1<br />
<br />
# specify the file to save job output to, %J will be replaced with a unique job number<br />
#SBATCH --output=myScript.o%J<br />
<br />
# specify the file to save job errors to, %J will be replaced with a unique job number<br />
#SBATCH --error=myScript.e%J<br />
<br />
<br />
#run a program command to print hostname and uptime<br />
/bin/hostname && /bin/uptime<br />
</nowiki><br />
<br />
To submit this to the Slurm queue, simply type;<br />
<br />
<nowiki><br />
sbatch myScript.slurm<br />
</nowiki><br />
<br />
This will then submit the job to an available node and create two files, one called myScript.o1234 and myScript.e1234, where 1234 is the job number. This changes as more jobs are submitted. <br />
<br />
When one views the contents of myScript.o1234 (which contains the intended output), you will see the screen output for the program. If you view myScript.e1234, this will contain any errors that have been printed to the screen.<br />
<br />
<nowiki><br />
cat myScript.o1234<br />
node008.hpc.private<br />
16:07:03 up 17 days, 2:05, 0 users, load average: 0.08, 0.02, 0.01<br />
</nowiki><br />
<br />
As you can see, this was run on node008.<br />
<br />
This time we will run the command multiple times.<br />
<br />
<nowiki><br />
for i in {1..64}; do sbatch myScript.slurm; done<br />
</nowiki><br />
<br />
This produces 128 files, identical apart from the job numbers: 64 with .o and 64 with .e. To view the contents of every output file (.o) you can <nowiki>cat</nowiki> all of them using the following command;<br />
<br />
WARNING - If you do this and the current working directory contains other files that have *.o* as the filename, it will print those too.<br />
<br />
<nowiki><br />
[username@bert ~]$ cat myScript.o*<br />
node008.hpc.private<br />
16:10:58 up 17 days, 2:09, 0 users, load average: 0.00, 0.00, 0.00<br />
node008.hpc.private<br />
16:10:58 up 17 days, 2:09, 0 users, load average: 0.00, 0.00, 0.00<br />
node008.hpc.private<br />
16:10:58 up 17 days, 2:09, 0 users, load average: 0.00, 0.00, 0.00<br />
node009.hpc.private<br />
16:11:16 up 17 days, 2:06, 0 users, load average: 1.92, 0.40, 0.13<br />
node009.hpc.private<br />
16:11:16 up 17 days, 2:06, 0 users, load average: 1.92, 0.40, 0.13<br />
node009.hpc.private<br />
16:11:16 up 17 days, 2:06, 0 users, load average: 1.92, 0.40, 0.13<br />
.<br />
.<br />
.<br />
.<br />
</nowiki><br />
<br />
As you can see, the node names are different, meaning that the jobs were not all run on the same node.<br />
<br />
<br />
= Queues =<br />
<br />
You can choose which queue (called partitions by Slurm) your job is submitted to and this will alter which nodes your job runs on. The available queues are shown below. If in doubt use the AMD queue.<br />
<br />
<br />
{|class="wikitable"<br />
! Name <br />
! Nodes <br />
! Purpose<br />
|-<br />
|highmem || 012 || Machines with 512GB of RAM and lots of CPUs<br />
|-<br />
|fat || 002 || Machines with 1TB of RAM<br />
|-<br />
|intel || 003,004 || 8 core Intel CPUs<br />
|-<br />
|amd || 005-011 || 32/64 core AMD CPUs<br />
|-<br />
|gpu || 13 || Machines with NVIDIA Graphics Processing Unit (GPU)<br />
|-<br />
|}</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Submitting_your_job_using_Slurm&diff=54133Submitting your job using Slurm2022-10-27T15:12:17Z<p>Ibers-admin: </p>
<hr />
<div><br />
'''Slurm''' is the queue management software used to distribute jobs to the nodes on Bert and Ernie.<br />
<br />
There are lots of excellent guides elsewhere so all the topics will not be covered in this wiki. However some initial commands and scripts are introduced here.<br />
<br />
In order to run a Slurm job, you require a script. Simply make a file using your favourite editor (e.g. vi or nano) and give it a name (e.g. myScript.slurm). Then you can begin to give the script some commands. This is what will be executed on each node. A very simple script is given below. This will run a UNIX command to tell you how long the machine has been on-line. <br />
<br />
NOTE: This is a very simple job script, it may work if there is little running on the HPC at the time, however you may find that it will fail if you have memory and CPU requirements. It is good practice to ensure that limits are in place so that your job is correctly placed into the queue. This way it will not affect other people's work and your job will be correctly scheduled. Look at [[Complex submissions]] for more information on this.<br />
<br />
<nowiki><br />
#!/bin/bash --login<br />
# specify the shell type<br />
<br />
# Specify the queue (also known as a partition)<br />
#SBATCH --partition=amd<br />
<br />
# run a single task, using a single CPU core<br />
#SBATCH --ntasks=1<br />
<br />
# specify the file to save job output to, %J will be replaced with a unique job number<br />
#SBATCH --output=myScript.o%J<br />
<br />
# specify the file to save job errors to, %J will be replaced with a unique job number<br />
#SBATCH --error=myScript.e%J<br />
<br />
<br />
#run a program command to print hostname and uptime<br />
/bin/hostname && /bin/uptime<br />
</nowiki><br />
<br />
To submit this to the Slurm queue, simply type;<br />
<br />
<nowiki><br />
sbatch myScript.slurm<br />
</nowiki><br />
<br />
This will then submit the job to an available node and create two files, one called myScript.o1234 and myScript.e1234, where 1234 is the job number. This changes as more jobs are submitted. <br />
<br />
When one views the contents of myScript.o1234 (which contains the intended output), you will see the screen output for the program. If you view myScript.e1234, this will contain any errors that have been printed to the screen.<br />
<br />
<nowiki><br />
cat myScript.o1234<br />
node008.hpc.private<br />
16:07:03 up 17 days, 2:05, 0 users, load average: 0.08, 0.02, 0.01<br />
</nowiki><br />
<br />
As you can see, this was run on node008.<br />
<br />
This time we will run the command multiple times.<br />
<br />
<nowiki><br />
for i in {1..64}; do sbatch myScript.slurm; done<br />
</nowiki><br />
<br />
This produces 128 files, identical apart from the job numbers: 64 with .o and 64 with .e. To view the contents of every output file (.o) you can <nowiki>cat</nowiki> all of them using the following command;<br />
<br />
WARNING - If you do this and the current working directory contains other files that have *.o* as the filename, it will print those too.<br />
<br />
<nowiki><br />
[username@bert ~]$ cat myScript.o*<br />
node008.hpc.private<br />
16:10:58 up 17 days, 2:09, 0 users, load average: 0.00, 0.00, 0.00<br />
node008.hpc.private<br />
16:10:58 up 17 days, 2:09, 0 users, load average: 0.00, 0.00, 0.00<br />
node008.hpc.private<br />
16:10:58 up 17 days, 2:09, 0 users, load average: 0.00, 0.00, 0.00<br />
node009.hpc.private<br />
16:11:16 up 17 days, 2:06, 0 users, load average: 1.92, 0.40, 0.13<br />
node009.hpc.private<br />
16:11:16 up 17 days, 2:06, 0 users, load average: 1.92, 0.40, 0.13<br />
node009.hpc.private<br />
16:11:16 up 17 days, 2:06, 0 users, load average: 1.92, 0.40, 0.13<br />
.<br />
.<br />
.<br />
.<br />
</nowiki><br />
<br />
As you can see, the node names are different, meaning that the jobs were not all run on the same node.<br />
<br />
<br />
= Queues =<br />
<br />
You can choose which queue (called partitions by Slurm) your job is submitted to and this will alter which nodes your job runs on. The available queues are shown below. If in doubt use the AMD queue.<br />
<br />
<br />
{|class="wikitable"<br />
! Name <br />
! Nodes <br />
! Purpose<br />
|-<br />
|large || 012 || Machines with 512GB of RAM and lots of CPUs<br />
|-<br />
|fat || 002 || Machines with 1TB of RAM<br />
|-<br />
|intel || 003,004 || 8 core Intel CPUs<br />
|-<br />
|amd || 005-011 || 32/64 core AMD CPUs<br />
|-<br />
|gpu || 13 || Machines with NVIDIA Graphics Processing Unit (GPU)<br />
|-<br />
|}</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Submitting_your_job_using_Slurm&diff=54132Submitting your job using Slurm2022-10-27T12:13:27Z<p>Ibers-admin: Created page with " '''Slurm''' is the queue management software used to distribute jobs to the nodes on Bert and Ernie. There are lots of excellent guides elsewhere so all the topics will not..."</p>
<hr />
<div><br />
'''Slurm''' is the queue management software used to distribute jobs to the nodes on Bert and Ernie.<br />
<br />
There are lots of excellent guides elsewhere so all the topics will not be covered in this wiki. However some initial commands and scripts are introduced here.<br />
<br />
In order to run a Slurm job, you require a script. Simply make a file using your favourite editor (e.g. vi or nano) and give it a name (e.g. myScript.slurm). Then you can begin to give the script some commands. This is what will be executed on each node. A very simple script is given below. This will run a UNIX command to tell you how long the machine has been on-line. <br />
<br />
NOTE: This is a very simple job script, it may work if there is little running on the HPC at the time, however you may find that it will fail if you have memory and CPU requirements. It is good practice to ensure that limits are in place so that your job is correctly placed into the queue. This way it will not affect other people's work and your job will be correctly scheduled. Look at [[Complex submissions]] for more information on this.<br />
<br />
<nowiki><br />
#!/bin/bash --login<br />
# specify the shell type<br />
<br />
# Specify the queue (also known as a partition)<br />
#SBATCH --partition=amd<br />
<br />
# run a single task, using a single CPU core<br />
#SBATCH --ntasks=1<br />
<br />
#run a program command to print hostname and uptime<br />
/bin/hostname && /bin/uptime<br />
</nowiki><br />
<br />
To submit this to the Slurm queue, simply type;<br />
<br />
<nowiki><br />
sbatch myScript.slurm<br />
</nowiki><br />
<br />
This will then submit the job to an available node and create two files, one called myScript.o1234 and myScript.e1234, where 1234 is the job number. This changes as more jobs are submitted. <br />
<br />
When one views the contents of myScript.o1234 (which contains the intended output), you will see the screen output for the program. If you view myScript.e1234, this will contain any errors that have been printed to the screen.<br />
<br />
<nowiki><br />
[username@bert ~]$ cat myScript.o1234<br />
node001<br />
12:16:15 up 9 days, 3:41, 0 users, load average: 0.00, 0.00, 0.00<br />
</nowiki><br />
<br />
As you can see, this was run on node001.<br />
<br />
This time we will run the command multiple times.<br />
<br />
<nowiki><br />
[username@bert ~]$ for i in {1..10}; do sbatch myScript.slurm; done<br />
</nowiki><br />
<br />
This produces 20 files, identical with different job numbers, 10 with .o and 10 with .e. To view the contents of every output file (.o) you can <nowiki>cat</nowiki> all of them using the following command;<br />
<br />
WARNING - If the current working directory contains any other files whose names match *.o*, this will print those too.<br />
<br />
<nowiki><br />
[username@bert ~]$ cat *.o*<br />
node001<br />
12:25:00 up 9 days, 3:49, 0 users, load average: 0.00, 0.00, 0.00<br />
node007<br />
12:25:00 up 9 days, 3:49, 0 users, load average: 0.00, 0.00, 0.00<br />
node006<br />
12:25:00 up 9 days, 3:49, 0 users, load average: 0.09, 0.08, 0.07<br />
node005<br />
12:25:00 up 9 days, 3:49, 0 users, load average: 0.12, 0.11, 0.04<br />
node007<br />
12:25:00 up 9 days, 3:49, 0 users, load average: 0.00, 0.00, 0.00<br />
node001<br />
12:25:00 up 9 days, 3:49, 0 users, load average: 0.00, 0.00, 0.00<br />
node006<br />
12:25:00 up 9 days, 3:49, 0 users, load average: 0.09, 0.08, 0.07<br />
node005<br />
12:25:00 up 9 days, 3:49, 0 users, load average: 0.12, 0.11, 0.04<br />
node001<br />
12:25:00 up 9 days, 3:49, 0 users, load average: 0.00, 0.00, 0.00<br />
node007<br />
12:25:00 up 9 days, 3:49, 0 users, load average: 0.00, 0.00, 0.00<br />
</nowiki><br />
<br />
As you can see, the node names are different, meaning that the jobs were not all run on the same node.<br />
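If you want to avoid the *.o* warning above, anchor the glob to the script name; and to summarise which nodes your jobs landed on, you can filter and count the node lines. A sketch you can try anywhere - it creates mock output files (with made-up job numbers) in a scratch directory rather than using real job output:<br />

```shell
# Create mock job output files (hypothetical job numbers) in a scratch directory
mkdir -p demo
printf 'node001\n12:25:00 up 9 days\n' > demo/myScript.o101
printf 'node007\n12:25:00 up 9 days\n' > demo/myScript.o102
printf 'node001\n12:25:00 up 9 days\n' > demo/myScript.o103
echo 'not job output' > demo/notes.old   # matches *.o* but is not job output

# Anchored glob: only this script's output files, not every file matching *.o*
cat demo/myScript.o*

# Count how many jobs ran on each node (the node name is the first line of each file)
grep -h '^node' demo/myScript.o* | sort | uniq -c
```

The -h flag stops grep printing the filename in front of each match, so uniq -c can count identical node names across files.<br />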
<br />
<br />
= Queues =<br />
<br />
You can choose which queue your job is submitted to and this will alter which nodes your job runs on. The available queues are shown below. If in doubt use the AMD or Intel queues.<br />
<br />
<br />
{|class="wikitable"<br />
! Name <br />
! Nodes <br />
! Purpose<br />
|-<br />
|all.q || 003 || DON'T USE THIS!!!!<br />
|-<br />
|large.q || 001, 012 || Machines with 512GB of RAM and lots of CPUs<br />
|-<br />
|fat.q || 002 || Machines with 1TB of RAM<br />
|-<br />
|intel.q || 003,004 || 8 core Intel CPUs<br />
|-<br />
|amd.q || 005-011 || 32/64 core AMD CPUs<br />
|-<br />
|metabolomics || 013 || For the metabolomics group only, so they can access node013<br />
|-<br />
|}</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Getting_an_account&diff=54129Getting an account2022-10-27T11:47:54Z<p>Ibers-admin: </p>
<hr />
<div><br />
== IBERS Staff and Postgraduate Students ==<br />
<br />
In order to get an account on Bert & Ernie (the IBERS HPC), all you need to do is fill in this online form (while connected to the university network)<br />
<br />
[http://bioinformatics.ibers.aber.ac.uk/request.html http://bioinformatics.ibers.aber.ac.uk/request.html]<br />
<br />
You will receive an email from the System Administrator once the account has been set up.<br />
<br />
== Access ==<br />
<br />
The IBERS HPC authentication system is integrated with [http://www.aber.ac.uk/en/is/index.html Information Services], which means that the username/password used on the HPC is the same as your IS account (email etc.). With this in mind, ensure that you comply with the Aberystwyth University IS [http://www.aber.ac.uk/en/is/regulations/passwords Regulations and Guidelines] with respect to passwords.<br />
<br />
== External users, visiting staff/students and collaborations ==<br />
<br />
The HPC can only be accessed from within the Aberystwyth University firewall. This means that an Aberystwyth University IS account is required. This can be applied for by following the guidelines outlined [https://www.aber.ac.uk/en/is/access/visitors/students/ here].</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Bert_and_Ernie_-_An_Overview&diff=54128Bert and Ernie - An Overview2022-10-27T11:40:07Z<p>Ibers-admin: /* The IBERS HPC */</p>
<hr />
<div>== What is HPC? ==<br />
<br />
Scientific computing generally revolves around researchers performing complex calculations as fast as possible. The supercomputers of the 70s, 80s and 90s were custom built to give researchers access to a machine with more memory (RAM) and CPU cores than the small, slow desktop computers of the era. These large machines, such as those built by Cray and SGI, were expensive and took a huge amount of space, power and cooling.<br />
<br />
In the early 2000s the first multi-core processors were released, which dramatically changed users' ability to take advantage of parallelisation when performing calculations. Today, the term High Performance Computing (HPC) describes a form of supercomputing based upon off-the-shelf hardware. HPC clusters are often built from the same hardware found in high-end desktop workstations or web servers.<br />
<br />
== The IBERS HPC ==<br />
<br />
Named Bert and Ernie, the IBERS HPC consists of a master node (ernie), a login node (bert), 12 compute nodes and five storage nodes. <br />
<br />
The combined compute capacity is as follows:<br />
<br />
We have 416 CPU cores, 4TB of RAM, 43TB of home directory storage, 109TB of temporary scratch disk and 12TB of fast solid-state scratch. This is backed by 250TB and 680TB storage arrays for sequencing data. <br />
<br />
Currently we are in the process of upgrading some of the older compute nodes and these numbers will be changing over late 2022 and early 2023.<br />
<br />
Specs per node:<br />
<br />
node001 - currently broken and awaiting replacement<br />
<br />
node002 (fat queue) - 16 Cores, Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.1GHz, 1024GB RAM<br />
<br />
node003 (intel queue) - 8 Cores, Intel(R) Xeon(R) CPU X5647 @ 2.93GHz, 192GB RAM<br />
<br />
node004 (intel queue) - 8 Cores, Intel(R) Xeon(R) CPU X5647 @ 2.93GHz, 192GB RAM<br />
<br />
node005 (amd queue) - not currently in service, 32 Cores, 4 x AMD Opteron(TM) Processor 6220, 98GB RAM<br />
<br />
node006 (amd queue) - not currently in service, 32 Cores, 4 x AMD Opteron(TM) Processor 6220, 98GB RAM<br />
<br />
node007 (amd queue) - not currently in service, 32 Cores, 4 x AMD Opteron(TM) Processor 6220, 98GB RAM<br />
<br />
node008 (amd queue) - 64 Cores, 4 x AMD Opteron(tm) Processor 6376, 256GB RAM<br />
<br />
node009 (amd queue) - 64 Cores, 4 x AMD Opteron(tm) Processor 6376, 256GB RAM<br />
<br />
node010 (amd queue) - 64 Cores, 4 x AMD Opteron(tm) Processor 6376, 256GB RAM<br />
<br />
node011 (amd queue) - 64 Cores, 4 x AMD Opteron(tm) Processor 6376, 256GB RAM<br />
<br />
node012 (highmem queue) - 64 Cores, 4 x AMD Opteron(tm) Processor 6376, 512GB RAM<br />
<br />
node013 (GPU queue) - 32 Cores, 1 x AMD EPYC 7452 Processor, 768GB RAM, NVIDIA A100 GPU<br />
<br />
<br />
[[File:Hpc.png]]</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=VPN_and_SSH_via_central&diff=54127VPN and SSH via central2022-10-27T11:28:50Z<p>Ibers-admin: /* Generate an SSH key */</p>
<hr />
<div>= VPN =<br />
<br />
The VPN (Virtual Private Network) will securely connect your computer to the university network when off-campus. To access bert and most IBERS virtual machines you will need to connect to the VPN first. <br />
<br />
The university uses a VPN program called GlobalProtect; instructions on how to install it can be found on the [https://www.aber.ac.uk/en/is/it-services/vpn/ Information Services FAQ pages]. <br />
<br />
== More detailed notes ==<br />
<br />
Alun Jones in Computer Science support has written some detailed instructions on using the VPN; you can find these on his [https://users.dcs.aber.ac.uk/auj/vpn/ webpage].<br />
<br />
== Using the VPN on Linux ==<br />
<br />
There is an official GlobalProtect client for Linux, which is linked to on the Information Services page; however, some users have reported difficulty getting it to work. <br />
<br />
<br />
=== OpenConnect on the command line ===<br />
<br />
As an alternative the open source openconnect client can be used, but it needs to be version 8.0 or newer. If you are running Ubuntu version 16.04 or 18.04 this is not available using your normal package sources, but can be installed via this [https://launchpad.net/~dwmw2/+archive/ubuntu/openconnect PPA]. Linux Mint 19 seems to work without any extra packages. The openconnect client can also be installed from source, you can download it from [https://github.com/openconnect/openconnect github].<br />
<br />
Use the following command (replace <userid> with your Aber user ID, WITHOUT @aber.ac.uk):<br />
<br />
sudo openconnect --user=<userid> --protocol=gp pa-vpn.aber.ac.uk<br />
<br />
You will need to have set up a Multifactor Authentication token using a phone app such as Google Authenticator or [https://github.com/paolostivanin/OTPClient otpclient] (for Linux desktops), by visiting [https://mfa.aber.ac.uk mfa.aber.ac.uk] while on campus. If you can't get to campus, see the section below on SOCKS proxies as a workaround.<br />
<br />
=== OpenConnect via Network Manager ===<br />
<br />
If you want to connect using a GUI then you can create a VPN connection via Network Manager's OpenConnect plugin.<br />
<br />
This does NOT work in Ubuntu versions 16.04 or 18.04. <br />
<br />
=== More detailed Linux notes ===<br />
<br />
Alun Jones in Computer Science support has written some detailed instructions on this; you can find them on his [https://users.dcs.aber.ac.uk/auj/linux/ webpage].<br />
<br />
= SSH via Central =<br />
<br />
Central is a Linux server run by Information Services which is accessible off campus. You can login to it using SSH and then login to other machines (e.g. bert or your office PC) that are on the university network. Access to central is disabled by default unless you are part of the Computer Science department. <br />
<br />
== Enable access to central ==<br />
<br />
1. Go to the [https://myaccount.aber.ac.uk IS My Account page]<br />
2. Choose "Login to check and edit your account settings"<br />
3. Enter your university username and password when prompted<br />
4. Click "Add or remove permissions" in the Account section.<br />
5. Under the "Service Features on my own account" section, check whether "SSH access on central.aber.ac.uk" says "Remove" (access is already enabled) or "Add". If it says "Add", click the "Add" button; it will take about 15 minutes to activate.<br />
<br />
== Connecting to Central ==<br />
<br />
Connect via SSH to central.aber.ac.uk. You must be on campus/VPN the first time you do this; subsequent access off campus requires an SSH key.<br />
<br />
In Windows 10+, Linux or macOS, open a terminal and type (replacing <userid> with your university user ID):<br />
<br />
ssh <userid>@central.aber.ac.uk <br />
<br />
The first time you connect you'll see a message about the host key.<br />
<br />
The authenticity of host 'central.aber.ac.uk (144.124.16.20)' can't be established.<br />
ECDSA key fingerprint is SHA256:MAyKXGiivwSsc9ICg1PQdh1Xo92qjTAyDhuub8xMkqA.<br />
Are you sure you want to continue connecting (yes/no)?<br />
<br />
Type "yes" (just pressing y won't work) and press enter, then enter your password when prompted. Once logged in the prompt will change to:<br />
<br />
central:~ $<br />
<br />
From here you could connect to Bert by typing:<br />
<br />
ssh bert.ibers.aber.ac.uk<br />
<br />
=== Generate an SSH key ===<br />
<br />
This is required for access off campus.<br />
<br />
On your computer run the command:<br />
<br />
$ ssh-keygen <br />
<br />
<br />
This will give the output:<br />
<br />
Generating public/private rsa key pair.<br />
Enter file in which to save the key (/home/abc12/.ssh/id_rsa): <br />
<br />
Press enter to save it in the default location (/home/abc12/.ssh/id_rsa in this example)<br />
<br />
You'll now be asked to enter a passphrase to protect the key. You can leave this blank, but adding a passphrase means that should anybody else copy the key they won't be able to use it without the passphrase.<br />
<br />
Enter passphrase (empty for no passphrase): <br />
Enter same passphrase again: <br />
<br />
Two files will now be saved in your .ssh directory: id_rsa, your private key, which you keep to yourself, and id_rsa.pub, your public key, which you copy to other computers you want to connect to.<br />
<br />
Your identification has been saved in /home/abc12/.ssh/id_rsa.<br />
Your public key has been saved in /home/abc12/.ssh/id_rsa.pub.<br />
The key fingerprint is:<br />
SHA256:KovOb+BBFX3txNV9IjuFoAMviZ8ByFFtF8kxKnLj9kI abc12@localhost<br />
The key's randomart image is:<br />
+---[RSA 3072]----+<br />
|=.o+++oB+ |<br />
|o* .=o=+* . |<br />
|+.==.=.+ o |<br />
| o=+= o |<br />
| .oE .S |<br />
| = . . |<br />
| . +... |<br />
| ...oo - |<br />
| .+oo . |<br />
+----[SHA256]-----+<br />
<br />
Now, to copy the key to a system you want to use, you can either paste the contents of id_rsa.pub into the .ssh/authorized_keys file on the other system, or use the ssh-copy-id command. The latter is usually simpler, so let's use it to add our key to central by running the command:<br />
<br />
ssh-copy-id abc12@central.aber.ac.uk<br />
<br />
This should give a response like the one below, and it will prompt for your password on central (since the key isn't copied yet).<br />
<br />
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/abc12/.ssh/id_rsa.pub"<br />
The authenticity of host 'central.aber.ac.uk (144.124.255.1)' can't be established.<br />
ECDSA key fingerprint is SHA256:n5CZT+pkmlMlF7N+vqN1ybTxatdrW8Kt4Ko0BLNikc.<br />
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes<br />
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed<br />
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys<br />
abc12@central.aber.ac.uk's password: <br />
<br />
Number of key(s) added: 1<br />
<br />
Now try logging into the machine, with: "ssh 'abc12@central.aber.ac.uk'"<br />
and check to make sure that only the key(s) you wanted were added.<br />
<br />
<br />
Now if you attempt to log in to central again it shouldn't prompt for your password. If you set a passphrase on the key then you will be asked for that instead.<br />
<br />
ssh abc12@central.aber.ac.uk<br />
<br />
Connections to central should now be possible off campus/VPN too.<br />
<br />
=== Other Windows SSH clients ===<br />
<br />
If you don't have a recent version of Windows 10 you'll need to install an SSH client. Try either [https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html PuTTY] or [https://mobaxterm.mobatek.net/download-home-edition.html MobaXterm]. PuTTY is a small download and very simple; MobaXterm is bigger and has many other features.<br />
<br />
== SSH Port Forwarding == <br />
<br />
'''This no longer works with central, these instructions are left here for reference should you want to use this technique elsewhere.'''<br />
<br />
SSH port forwarding allows you to send data other than what would be on the screen/keyboard over your SSH session. It can be used to get around firewall restrictions to access computers behind a firewall or break out from behind a firewall to other parts of the internet. <br />
<br />
For a good visual guide explaining this, see [https://www.youtube.com/watch?v=AtuAdk4MwWw this youtube video]. <br />
<br />
=== Local Port Forwarding ===<br />
<br />
A local port forward allows you to access a single port on another computer that's accessible to the system you're SSH'ed into. For example you might want to SSH to central from outside the university and then have it port forward to Bert. This way you can SSH straight into Bert, using the port forward via central. <br />
<br />
Let's forward port 22 on bert (which is the port for SSH) to port 2222 on our local computer via central.aber.ac.uk. <br />
<br />
ssh -L 2222:bert.ibers.aber.ac.uk:22 <userid>@central.aber.ac.uk<br />
<br />
We can now connect to bert by running (in a different terminal window):<br />
<br />
ssh -p 2222 <userid>@localhost<br />
<br />
This feature is particularly useful for copying files between bert and your home PC as you can use it with the SCP/SFTP commands or a graphical copying utility like Filezilla.<br />
<br />
To copy a file called <localfile> to bert we can do:<br />
<br />
scp -P 2222 <localfile> <userid>@localhost:<br />
<br />
Note that scp uses a capital P to specify the port number, but ssh uses a lower case p.<br />
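As an alternative to the manual local forward above, recent versions of OpenSSH (7.3 and later) support a ProxyJump option, which hops through an intermediate host in a single command. A sketch for the ~/.ssh/config file on your own computer, with abc12 standing in for your own user ID (and assuming the intermediate host permits onward SSH):<br />

```
# ~/.ssh/config on your own computer
Host bert
    HostName bert.ibers.aber.ac.uk
    User abc12
    # Hop through central first; equivalent to: ssh -J abc12@central.aber.ac.uk bert.ibers.aber.ac.uk
    ProxyJump abc12@central.aber.ac.uk
```

With this in place, "ssh bert" and "scp <localfile> bert:" work in one step, with no second terminal needed.<br />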
<br />
=== Dynamic Port Forwarding (SOCKS proxy) ===<br />
<br />
SSH has a nice extra feature where data can be forwarded to the remote computer for it to pass on to others. This effectively means any data you send will appear to come from the remote computer. To activate this feature we have to start SSH with an extra option. SSH will create a proxy server using the SOCKS protocol; any software we want to use this feature must be told to send its data to the SOCKS proxy. <br />
<br />
To start the proxy, add the "-D" option to SSH followed by a port number between 1024 and 65535; 1080 is the default number for SOCKS, but it doesn't really matter which you use. <br />
<br />
ssh <userid>@central.aber.ac.uk -D 1080<br />
<br />
<br />
Once we've entered our password there will be a SOCKS proxy server running on our local computer listening on port 1080. Any requests sent to this will be forwarded to central, which will forward them onto their destination. <br />
<br />
==== Proxy server settings ====<br />
<br />
To use the SOCKS proxy you'll have to change the proxy server settings in each application you want to use it with. Any programs whose settings you don't change will continue to access the internet via your own internet provider. <br />
<br />
Firefox:<br />
<br />
* Click on the menu button (three horizontal lines in the top right)<br />
* Click on the cog icon (options)<br />
* Click on the Wizard hat (advanced settings) icon at the bottom of the left hand side<br />
* Choose the network tab<br />
* Under the "Connection" section at the top click "Settings" next to the "configure how Firefox connects to the internet" <br />
* Choose "Manual Proxy configuration"<br />
* Enter "localhost" in the SOCKS host section and set the port to 1080, choose "SOCKS v5" and press Ok.</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=VPN_and_SSH_via_central&diff=54126VPN and SSH via central2022-10-27T11:17:37Z<p>Ibers-admin: /* SSH Port Forwarding */</p>
<hr />
<div>= VPN =<br />
<br />
The VPN (Virtual Private Network) will securely connect your computer to the university network when off-campus. To access bert and most IBERS virtual machines you will need to connect to the VPN first. <br />
<br />
The university uses a VPN program called Global Protect, instructions on how to install it can be found at on the [https://www.aber.ac.uk/en/is/it-services/vpn/ Information Services FAQ pages]. <br />
<br />
== More detailed notes ==<br />
<br />
Alun Jones in Computer Science support has written some detailed instructions on using you can find these on his [https://users.dcs.aber.ac.uk/auj/vpn/ webpage].<br />
<br />
== Using the VPN on Linux ==<br />
<br />
There is an official GlobalProtect client for Linux which is linked to on the Information Services page, however some users have reported difficulty getting it to work. <br />
<br />
<br />
=== OpenConnect on the command line ===<br />
<br />
As an alternative the open source openconnect client can be used, but it needs to be version 8.0 or newer. If you are running Ubuntu version 16.04 or 18.04 this is not available using your normal package sources, but can be installed via this [https://launchpad.net/~dwmw2/+archive/ubuntu/openconnect PPA]. Linux Mint 19 seems to work without any extra packages. The openconnect client can also be installed from source, you can download it from [https://github.com/openconnect/openconnect github].<br />
<br />
Use the command (replace <userid> with your aber user id, WITHOUT @aber.ac.uk:<br />
<br />
sudo openconnect --user=<userid> --protocol=gp pa-vpn.aber.ac.uk<br />
<br />
You will need to have setup a Multifactor Authentication token using a phone app such as Google Authenticator or [https://github.com/paolostivanin/OTPClient otpclient] (for Linux desktop) and by visiting the webpage [mfa.aber.ac.uk] while on campus. If you can't get to campus see the section below on Socks proxies as a workaround for this.<br />
<br />
=== OpenConnect via Network Manager ===<br />
<br />
If you want to connect using a GUI then you can create a <br />
<br />
This does NOT work in Ubuntu versions 16.04 or 18.04. <br />
<br />
=== More detailed Linux notes ===<br />
<br />
Alun Jones in Computer Science support has written some detailed instructions on using you can find these on his [https://users.dcs.aber.ac.uk/auj/linux/ webpage].<br />
<br />
= SSH via Central =<br />
<br />
Central is a Linux server run by Information Services which is accessible off campus. You can login to it using SSH and then login to other machines (e.g. bert or your office PC) that are on the university network. Access to central is disabled by default unless you are part of the Computer Science department. <br />
<br />
== Enable access to central ==<br />
<br />
1. Go to the [https://myaccount.aber.ac.uk IS My Account page]<br />
2. Choose "Login to check and edit your account settings"<br />
3. Enter your university username and password when prompted<br />
4. Click "Add or remove permissions" in the Account section.<br />
5. Under the "Service Features on my own account" section ensure that "SSH access on central.aber.ac.uk" says "Remove". If it says "Add" then click on the "Add" button. It will take about 15 minutes to activate.<br />
<br />
== Connecting to Central ==<br />
<br />
Connect via SSH to central.aber.ac.uk. You must be on campus/VPN the first time you do this, subsequent access off campus requires an SSH key.<br />
<br />
In Windows 10+, Linux or MacOS open a terminal and type (replacing <userid> with your university user ID):<br />
<br />
ssh <userid>@central.aber.ac.uk <br />
<br />
The first time you connect you'll see a message about the host key.<br />
<br />
The authenticity of host 'central.aber.ac.uk (144.124.16.20)' can't be established.<br />
ECDSA key fingerprint is SHA256:MAyKXGiivwSsc9ICg1PQdh1Xo92qjTAyDhuub8xMkqA.<br />
Are you sure you want to continue connecting (yes/no)?<br />
<br />
Type "yes" (just pressing y won't work) and then press enter. Then enter your password when prompted. Once logged in the prompt will change to saying:<br />
<br />
central:~ $<br />
<br />
From here you could connect to Bert by typing:<br />
<br />
ssh bert.ibers.aber.ac.uk<br />
<br />
=== Generate an SSH key ===<br />
<br />
This is required for access off campus.<br />
<br />
<br />
=== Other Windows SSH clients ===<br />
<br />
If you don't have a recent version of Windows 10 you'll need to install an SSH client. Try either [https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html Putty] or [https://mobaxterm.mobatek.net/download-home-edition.html MobaXTerm]. Putty is a small download and very simple, MobaXterm is bigger and has many other features.<br />
<br />
== SSH Port Forwarding == <br />
<br />
'''This no longer works with central, these instructions are left here for reference should you want to use this technique elsewhere.'''<br />
<br />
SSH port forwarding allows you to send data other than what would be on the screen/keyboard over your SSH session. It can be used to get around firewall restrictions to access computers behind a firewall or break out from behind a firewall to other parts of the internet. <br />
<br />
For a good visual guide explaining this, see [https://www.youtube.com/watch?v=AtuAdk4MwWw this youtube video]. <br />
<br />
=== Local Port Forwarding ===<br />
<br />
A local port forward allows you to access a single port on another computer that's accessible to the system you're SSH'ed into. For example you might want to SSH to central from outside the university and then have it port forward to Bert. This way you can SSH straight into Bert, using the port forward via central. <br />
<br />
Lets forward port 22 on bert (which is the port for SSH) to port 2222 on our local computer via central.aber.ac.uk. <br />
<br />
ssh -L 2222:bert.ibers.aber.ac.uk:22 <userid>@central.aber.ac.uk<br />
<br />
We can now connect to bert by running (in a different terminal window):<br />
<br />
ssh -p 2222 <userid>@localhost<br />
<br />
This feature is particularly useful for copying files between bert and your home PC as you can use it with the SCP/SFTP commands or a graphical copying utility like Filezilla.<br />
<br />
To copy a file called <localfile> to bert we can do:<br />
<br />
scp -P 2222 <localfile> <userid>@localhost:<br />
<br />
Note that scp uses a capital P to specify the port number, but ssh uses a lower case p.<br />
<br />
=== Dynamic Port Forwarding (SOCKS proxy) ===<br />
<br />
SSH has a nice extra feature where data can be forwarded to the remote computer for it to forward onto others. This effectively means any data you send will appear to be from the remote computer. To activate this feature we have to start SSH with an extra option. SSH will create a proxy server using the SOCKS protocol, any software we want to use this feature will have to be told to send its data to the SOCKS proxy. <br />
<br />
To start the proxy add the "-D" option to SSH followed by a port number between 1024 and 65535, 1080 is the default number for SOCKS but it doesn't really matter what you use. <br />
<br />
ssh <userid>@central.aber.ac.uk -D 1080<br />
<br />
<br />
Once we've entered our password there will be a SOCKS proxy server running on our local computer listening on port 1080. Any requests sent to this will be forwarded to central, which will forward them onto their destination. <br />
<br />
==== Proxy server settings ====<br />
<br />
To use the SOCKS proxy you'll have to change your proxy server settings in the applications you want to use it. Any programs which you don't change the settings for will continue to access the internet via your own internet provider. <br />
<br />
Firefox:<br />
<br />
* Click on the grill menu (3 horizontal lines in the top left)<br />
* Click on the cog icon (options)<br />
* Click on the Wizard hat (advanced settings) icon at the bottom of the left hand side<br />
* Choose the network tab<br />
* Under the "Connection" section at the top click "Settings" next to the "configure how Firefox connects to the internet" <br />
* Choose "Manual Proxy configuration"<br />
* Enter "localhost" in the SOCKS host section and set the port to 1080, choose "SOCKS v5" and press Ok.</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=VPN_and_SSH_via_central&diff=54125VPN and SSH via central2022-10-27T11:16:33Z<p>Ibers-admin: /* Connecting to Central */</p>
<hr />
<div>= VPN =<br />
<br />
The VPN (Virtual Private Network) will securely connect your computer to the university network when off-campus. To access bert and most IBERS virtual machines you will need to connect to the VPN first. <br />
<br />
The university uses a VPN program called Global Protect, instructions on how to install it can be found at on the [https://www.aber.ac.uk/en/is/it-services/vpn/ Information Services FAQ pages]. <br />
<br />
== More detailed notes ==<br />
<br />
Alun Jones in Computer Science support has written some detailed instructions on using you can find these on his [https://users.dcs.aber.ac.uk/auj/vpn/ webpage].<br />
<br />
== Using the VPN on Linux ==<br />
<br />
There is an official GlobalProtect client for Linux which is linked to on the Information Services page, however some users have reported difficulty getting it to work. <br />
<br />
<br />
=== OpenConnect on the command line ===<br />
<br />
As an alternative the open source openconnect client can be used, but it needs to be version 8.0 or newer. If you are running Ubuntu version 16.04 or 18.04 this is not available using your normal package sources, but can be installed via this [https://launchpad.net/~dwmw2/+archive/ubuntu/openconnect PPA]. Linux Mint 19 seems to work without any extra packages. The openconnect client can also be installed from source, you can download it from [https://github.com/openconnect/openconnect github].<br />
<br />
Use the command (replace <userid> with your aber user id, WITHOUT @aber.ac.uk:<br />
<br />
sudo openconnect --user=<userid> --protocol=gp pa-vpn.aber.ac.uk<br />
<br />
You will need to have setup a Multifactor Authentication token using a phone app such as Google Authenticator or [https://github.com/paolostivanin/OTPClient otpclient] (for Linux desktop) and by visiting the webpage [mfa.aber.ac.uk] while on campus. If you can't get to campus see the section below on Socks proxies as a workaround for this.<br />
<br />
=== OpenConnect via Network Manager ===<br />
<br />
If you want to connect using a GUI then you can create a <br />
<br />
This does NOT work in Ubuntu versions 16.04 or 18.04. <br />
<br />
=== More detailed Linux notes ===<br />
<br />
Alun Jones in Computer Science support has written some detailed instructions on using you can find these on his [https://users.dcs.aber.ac.uk/auj/linux/ webpage].<br />
<br />
= SSH via Central =<br />
<br />
Central is a Linux server run by Information Services which is accessible off campus. You can login to it using SSH and then login to other machines (e.g. bert or your office PC) that are on the university network. Access to central is disabled by default unless you are part of the Computer Science department. <br />
<br />
== Enable access to central ==<br />
<br />
1. Go to the [https://myaccount.aber.ac.uk IS My Account page]<br />
2. Choose "Login to check and edit your account settings"<br />
3. Enter your university username and password when prompted<br />
4. Click "Add or remove permissions" in the Account section.<br />
5. Under the "Service Features on my own account" section ensure that "SSH access on central.aber.ac.uk" says "Remove". If it says "Add" then click on the "Add" button. It will take about 15 minutes to activate.<br />
<br />
== Connecting to Central ==<br />
<br />
Connect via SSH to central.aber.ac.uk. You must be on campus/VPN the first time you do this, subsequent access off campus requires an SSH key.<br />
<br />
In Windows 10+, Linux or MacOS open a terminal and type (replacing <userid> with your university user ID):<br />
<br />
ssh <userid>@central.aber.ac.uk <br />
<br />
The first time you connect you'll see a message about the host key.<br />
<br />
The authenticity of host 'central.aber.ac.uk (144.124.16.20)' can't be established.<br />
ECDSA key fingerprint is SHA256:MAyKXGiivwSsc9ICg1PQdh1Xo92qjTAyDhuub8xMkqA.<br />
Are you sure you want to continue connecting (yes/no)?<br />
<br />
Type "yes" (just pressing y won't work) and then press enter. Then enter your password when prompted. Once logged in the prompt will change to saying:<br />
<br />
central:~ $<br />
<br />
From here you could connect to Bert by typing:<br />
<br />
ssh bert.ibers.aber.ac.uk<br />
<br />
=== Generate an SSH key ===<br />
<br />
This is required for access off campus.<br />
<br />
<br />
=== Other Windows SSH clients ===<br />
<br />
If you don't have a recent version of Windows 10 you'll need to install an SSH client. Try either [https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html Putty] or [https://mobaxterm.mobatek.net/download-home-edition.html MobaXTerm]. Putty is a small download and very simple, MobaXterm is bigger and has many other features.<br />
<br />
== SSH Port Forwarding == <br />
<br />
SSH port forwarding allows you to send data other than what would be on the screen/keyboard over your SSH session. It can be used to get around firewall restrictions to access computers behind a firewall or break out from behind a firewall to other parts of the internet. <br />
<br />
For a good visual guide explaining this, see [https://www.youtube.com/watch?v=AtuAdk4MwWw this youtube video]. <br />
<br />
=== Local Port Forwarding ===<br />
<br />
A local port forward exposes, on your own machine, a single port on another computer that's reachable from the system you're SSH'ed into. For example, you might SSH to central from outside the university and have it forward a port to Bert; you can then SSH straight into Bert through the forward via central. <br />
<br />
Let's forward port 22 on bert (the SSH port) to port 2222 on our local computer via central.aber.ac.uk. <br />
<br />
ssh -L 2222:bert.ibers.aber.ac.uk:22 <userid>@central.aber.ac.uk<br />
<br />
We can now connect to bert by running (in a different terminal window):<br />
<br />
ssh -p 2222 <userid>@localhost<br />
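Conceptually, what the -L option sets up is just a relay: a listener on the local port that copies bytes to and from the real destination (real SSH also encrypts the tunnel, which this toy illustration of the relay idea does not):<br />

```python
import socket
import threading

def echo_server(listener):
    # Stand-in for the "real" destination (e.g. sshd on bert, port 22).
    conn, _ = listener.accept()
    with conn:
        conn.sendall(conn.recv(1024))

def forward(listener, target_addr):
    # Stand-in for "ssh -L": accept locally, relay bytes to the target.
    conn, _ = listener.accept()
    with conn, socket.create_connection(target_addr) as upstream:
        upstream.sendall(conn.recv(1024))   # local -> target
        conn.sendall(upstream.recv(1024))   # target -> local

# Bind both listeners on ephemeral ports (port 0 = let the OS choose).
target = socket.socket(); target.bind(("127.0.0.1", 0)); target.listen(1)
local = socket.socket(); local.bind(("127.0.0.1", 0)); local.listen(1)

threading.Thread(target=echo_server, args=(target,)).start()
threading.Thread(target=forward, args=(local, target.getsockname())).start()

# The "client" only ever talks to the local port, like "ssh -p 2222 localhost".
with socket.create_connection(local.getsockname()) as c:
    c.sendall(b"hello bert")
    reply = c.recv(1024)

print(reply)   # b'hello bert'
```

The client never needs to reach the target directly; everything goes through the relay, which is exactly the role central plays in the commands above.<br />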
<br />
This feature is particularly useful for copying files between bert and your home PC as you can use it with the SCP/SFTP commands or a graphical copying utility like Filezilla.<br />
<br />
To copy a file called <localfile> to bert we can do:<br />
<br />
scp -P 2222 <localfile> <userid>@localhost:<br />
<br />
Note that scp uses a capital P to specify the port number, but ssh uses a lower case p.<br />
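If your OpenSSH client is reasonably recent (7.3 or newer), the same hop can also be expressed with the ProxyJump feature: either as a one-liner, ssh -J <userid>@central.aber.ac.uk <userid>@bert.ibers.aber.ac.uk, or as a ~/.ssh/config entry (a sketch — the alias "bert" is just a local nickname we made up):<br />

```
# ~/.ssh/config -- sketch; replace <userid> with your university user ID
Host bert
    HostName bert.ibers.aber.ac.uk
    User <userid>
    ProxyJump <userid>@central.aber.ac.uk
```

With that in place, "ssh bert" and "scp <localfile> bert:" work directly, with no manual port forward needed.<br />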
<br />
=== Dynamic Port Forwarding (SOCKS proxy) ===<br />
<br />
SSH has a nice extra feature where data can be forwarded to the remote computer for it to forward on to other hosts. This effectively means any data you send will appear to come from the remote computer. To activate this feature we have to start SSH with an extra option: SSH will create a proxy server using the SOCKS protocol, and any software that should use this feature has to be told to send its data to the SOCKS proxy. <br />
<br />
To start the proxy, add the "-D" option to SSH followed by a port number between 1024 and 65535; 1080 is the conventional SOCKS port, but any free port will do. <br />
<br />
ssh <userid>@central.aber.ac.uk -D 1080<br />
<br />
<br />
Once we've entered our password there will be a SOCKS proxy server running on our local computer listening on port 1080. Any requests sent to this will be forwarded to central, which will forward them onto their destination. <br />
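Under the hood, SOCKS5 is a simple protocol: a greeting, a CONNECT request naming the destination, then raw byte relaying. A toy, self-contained illustration in Python (real applications such as Firefox or curl perform this handshake for you; this is only a sketch of the wire format):<br />

```python
import socket
import struct
import threading

def echo_server(listener):
    # Stand-in destination that just echoes what it receives.
    conn, _ = listener.accept()
    with conn:
        conn.sendall(conn.recv(1024))

def socks5_proxy(listener):
    # Toy single-connection SOCKS5 proxy (the role "ssh -D" plays).
    conn, _ = listener.accept()
    with conn:
        nmethods = conn.recv(2)[1]                   # VER, NMETHODS
        conn.recv(nmethods)                          # auth methods offered
        conn.sendall(b"\x05\x00")                    # choose "no auth"
        _ver, _cmd, _rsv, _atyp = conn.recv(4)       # CONNECT request header
        host = socket.inet_ntoa(conn.recv(4))        # ATYP 1 = IPv4 address
        port = struct.unpack(">H", conn.recv(2))[0]
        with socket.create_connection((host, port)) as upstream:
            conn.sendall(b"\x05\x00\x00\x01"         # success reply
                         + socket.inet_aton("0.0.0.0") + b"\x00\x00")
            upstream.sendall(conn.recv(1024))        # client -> destination
            conn.sendall(upstream.recv(1024))        # destination -> client

def listen():
    s = socket.socket()
    s.bind(("127.0.0.1", 0))                         # port 0: OS picks a port
    s.listen(1)
    return s

target, proxy = listen(), listen()
threading.Thread(target=echo_server, args=(target,)).start()
threading.Thread(target=socks5_proxy, args=(proxy,)).start()

# Client side: the handshake a SOCKS-aware application performs.
c = socket.create_connection(proxy.getsockname())
c.sendall(b"\x05\x01\x00")                           # ver 5, offer "no auth"
assert c.recv(2) == b"\x05\x00"
host, port = target.getsockname()
c.sendall(b"\x05\x01\x00\x01" + socket.inet_aton(host)
          + struct.pack(">H", port))                 # CONNECT host:port
assert c.recv(10)[1] == 0                            # REP 0 = succeeded
c.sendall(b"via the proxy")
reply = c.recv(1024)
c.close()
print(reply)   # b'via the proxy'
```

The key difference from a local port forward is that the destination is named inside the protocol by each connection, so one proxy serves any destination.<br />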
<br />
==== Proxy server settings ====<br />
<br />
To use the SOCKS proxy you'll have to change the proxy settings in each application that should use it. Programs whose settings you don't change will continue to access the internet directly via your own internet provider. <br />
<br />
Firefox:<br />
<br />
* Click on the hamburger menu (3 horizontal lines in the top right)<br />
* Click on "Settings" (the cog icon in older versions)<br />
* On the "General" page, scroll down to the "Network Settings" section (in older versions: Advanced icon, then the Network tab, then "Settings" under "Connection")<br />
* Click the "Settings..." button<br />
* Choose "Manual proxy configuration"<br />
* Enter "localhost" in the SOCKS host section and set the port to 1080, choose "SOCKS v5" and press Ok.</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=VPN_and_SSH_via_central&diff=54113VPN and SSH via central2020-03-26T19:03:22Z<p>Ibers-admin: /* Enable access to central */</p>
<hr />
<div>= VPN =<br />
<br />
The VPN (Virtual Private Network) will securely connect your computer to the university network when off-campus. To access bert and most IBERS virtual machines you will need to connect to the VPN first. <br />
<br />
The university uses a VPN program called GlobalProtect; instructions on how to install it can be found on the [https://www.aber.ac.uk/en/is/it-services/vpn/ Information Services FAQ pages]. <br />
<br />
== More detailed notes ==<br />
<br />
Alun Jones in Computer Science support has written some detailed instructions on using the VPN; you can find these on his [https://users.dcs.aber.ac.uk/auj/vpn/ webpage].<br />
<br />
== Using the VPN on Linux ==<br />
<br />
There is an official GlobalProtect client for Linux, linked from the Information Services page; however, some users have reported difficulty getting it to work. <br />
<br />
<br />
=== OpenConnect on the command line ===<br />
<br />
As an alternative, the open-source OpenConnect client can be used, but it needs to be version 8.0 or newer. If you are running Ubuntu 16.04 or 18.04 this version is not available from your normal package sources, but it can be installed via this [https://launchpad.net/~dwmw2/+archive/ubuntu/openconnect PPA]. Linux Mint 19 seems to work without any extra packages. The OpenConnect client can also be built from source; you can download it from [https://github.com/openconnect/openconnect GitHub].<br />
<br />
Use the command below (replace <userid> with your Aber user ID, WITHOUT @aber.ac.uk):<br />
<br />
sudo openconnect --user=<userid> --protocol=gp pa-vpn.aber.ac.uk<br />
<br />
You will need to have set up a Multi-factor Authentication token using a phone app such as Google Authenticator or [https://github.com/paolostivanin/OTPClient otpclient] (for the Linux desktop), registered by visiting [https://mfa.aber.ac.uk mfa.aber.ac.uk] while on campus. If you can't get to campus, see the section below on SOCKS proxies as a workaround.<br />
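For the curious: the six-digit codes these authenticator apps show are computed with RFC 6238 TOTP, which is just an HMAC-SHA1 of a shared secret and the current 30-second time step. A minimal sketch (the secret used here is the published RFC 4226 test key, not a real token):<br />

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 the counter, then dynamically truncate to 31 bits.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30) -> str:
    # RFC 6238: the counter is simply the current Unix time / 30 seconds.
    return hotp(secret, int(time.time()) // period)

# RFC 4226 Appendix D test vector for counter 0
print(hotp(b"12345678901234567890", 0))   # 755224
```

The server and the app share the secret at enrolment time, which is why the token has to be registered (via mfa.aber.ac.uk) before it can be used.<br />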
<br />
=== OpenConnect via Network Manager ===<br />
<br />
If you want to connect using a GUI, you can create a VPN connection in Network Manager using its OpenConnect plugin. <br />
<br />
This does NOT work in Ubuntu versions 16.04 or 18.04. <br />
<br />
=== More detailed Linux notes ===<br />
<br />
Alun Jones in Computer Science support has written some detailed Linux instructions; you can find these on his [https://users.dcs.aber.ac.uk/auj/linux/ webpage].<br />
<br />
= SSH via Central =<br />
<br />
Central is a Linux server run by Information Services which is accessible off campus. You can log in to it using SSH and then log in to other machines (e.g. bert or your office PC) that are on the university network. Access to central is disabled by default unless you are part of the Computer Science department. <br />
<br />
== Enable access to central ==<br />
<br />
1. Go to the [https://myaccount.aber.ac.uk IS My Account page]<br />
2. Choose "Login to check and edit your account settings"<br />
3. Enter your university username and password when prompted<br />
4. Click "Add or remove permissions" in the Account section.<br />
5. Under the "Service Features on my own account" section, find "SSH access on central.aber.ac.uk". If the button next to it says "Add", click it to enable access (once enabled, the button reads "Remove"). It will take about 15 minutes to activate.<br />
<br />
== Connecting to Central ==<br />
<br />
Connect via SSH to central.aber.ac.uk. <br />
<br />
In Windows 10, Linux or MacOS open a terminal and type (replacing <userid> with your university user ID):<br />
<br />
ssh <userid>@central.aber.ac.uk <br />
<br />
The first time you connect you'll see a message about the host key.<br />
<br />
The authenticity of host 'central.aber.ac.uk (144.124.16.20)' can't be established.<br />
ECDSA key fingerprint is SHA256:MAyKXGiivwSsc9ICg1PQdh1Xo92qjTAyDhuub8xMkqA.<br />
Are you sure you want to continue connecting (yes/no)?<br />
<br />
Type "yes" (just pressing y won't work) and then press enter. Then enter your password when prompted. Once logged in the prompt will change to saying:<br />
<br />
central:~ $<br />
<br />
From here you could connect to Bert by typing:<br />
<br />
ssh bert.ibers.aber.ac.uk<br />
<br />
<br />
=== Other Windows SSH clients ===<br />
<br />
If you don't have a recent version of Windows 10 you'll need to install an SSH client. Try either [https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html Putty] or [https://mobaxterm.mobatek.net/download-home-edition.html MobaXTerm]. Putty is a small download and very simple, MobaXterm is bigger and has many other features. <br />
<br />
<br />
== SSH Port Forwarding == <br />
<br />
SSH port forwarding allows you to send data other than what would be on the screen/keyboard over your SSH session. It can be used to get around firewall restrictions to access computers behind a firewall or break out from behind a firewall to other parts of the internet. <br />
<br />
For a good visual guide explaining this, see [https://www.youtube.com/watch?v=AtuAdk4MwWw this youtube video]. <br />
<br />
=== Local Port Forwarding ===<br />
<br />
A local port forward allows you to access a single port on another computer that's accessible to the system you're SSH'ed into. For example you might want to SSH to central from outside the university and then have it port forward to Bert. This way you can SSH straight into Bert, using the port forward via central. <br />
<br />
Lets forward port 22 on bert (which is the port for SSH) to port 2222 on our local computer via central.aber.ac.uk. <br />
<br />
ssh -L 2222:bert.ibers.aber.ac.uk:22 <userid>@central.aber.ac.uk<br />
<br />
We can now connect to bert by running (in a different terminal window):<br />
<br />
ssh -p 2222 <userid>@localhost<br />
<br />
This feature is particularly useful for copying files between bert and your home PC as you can use it with the SCP/SFTP commands or a graphical copying utility like Filezilla.<br />
<br />
To copy a file called <localfile> to bert we can do:<br />
<br />
scp -P 2222 <localfile> <userid>@localhost:<br />
<br />
Note that scp uses a capital P to specify the port number, but ssh uses a lower case p.<br />
<br />
=== Dynamic Port Forwarding (SOCKS proxy) ===<br />
<br />
SSH has a nice extra feature where data can be forwarded to the remote computer for it to forward onto others. This effectively means any data you send will appear to be from the remote computer. To activate this feature we have to start SSH with an extra option. SSH will create a proxy server using the SOCKS protocol, any software we want to use this feature will have to be told to send its data to the SOCKS proxy. <br />
<br />
To start the proxy add the "-D" option to SSH followed by a port number between 1024 and 65535, 1080 is the default number for SOCKS but it doesn't really matter what you use. <br />
<br />
ssh <userid>@central.aber.ac.uk -D 1080<br />
<br />
<br />
Once we've entered our password there will be a SOCKS proxy server running on our local computer listening on port 1080. Any requests sent to this will be forwarded to central, which will forward them onto their destination. <br />
<br />
==== Proxy server settings ====<br />
<br />
To use the SOCKS proxy you'll have to change your proxy server settings in the applications you want to use it. Any programs which you don't change the settings for will continue to access the internet via your own internet provider. <br />
<br />
Firefox:<br />
<br />
* Click on the grill menu (3 horizontal lines in the top left)<br />
* Click on the cog icon (options)<br />
* Click on the Wizard hat (advanced settings) icon at the bottom of the left hand side<br />
* Choose the network tab<br />
* Under the "Connection" section at the top click "Settings" next to the "configure how Firefox connects to the internet" <br />
* Choose "Manual Proxy configuration"<br />
* Enter "localhost" in the SOCKS host section and set the port to 1080, choose "SOCKS v5" and press Ok.</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=VPN_and_SSH_via_central&diff=54112VPN and SSH via central2020-03-26T19:02:22Z<p>Ibers-admin: /* Local Port Forwarding */</p>
<hr />
<div>= VPN =<br />
<br />
The VPN (Virtual Private Network) will securely connect your computer to the university network when off-campus. To access bert and most IBERS virtual machines you will need to connect to the VPN first. <br />
<br />
The university uses a VPN program called Global Protect, instructions on how to install it can be found at on the [https://www.aber.ac.uk/en/is/it-services/vpn/ Information Services FAQ pages]. <br />
<br />
== More detailed notes ==<br />
<br />
Alun Jones in Computer Science support has written some detailed instructions on using you can find these on his [https://users.dcs.aber.ac.uk/auj/vpn/ webpage].<br />
<br />
== Using the VPN on Linux ==<br />
<br />
There is an official GlobalProtect client for Linux which is linked to on the Information Services page, however some users have reported difficulty getting it to work. <br />
<br />
<br />
=== OpenConnect on the command line ===<br />
<br />
As an alternative the open source openconnect client can be used, but it needs to be version 8.0 or newer. If you are running Ubuntu version 16.04 or 18.04 this is not available using your normal package sources, but can be installed via this [https://launchpad.net/~dwmw2/+archive/ubuntu/openconnect PPA]. Linux Mint 19 seems to work without any extra packages. The openconnect client can also be installed from source, you can download it from [https://github.com/openconnect/openconnect github].<br />
<br />
Use the command (replace <userid> with your aber user id, WITHOUT @aber.ac.uk:<br />
<br />
sudo openconnect --user=<userid> --protocol=gp pa-vpn.aber.ac.uk<br />
<br />
You will need to have setup a Multifactor Authentication token using a phone app such as Google Authenticator or [https://github.com/paolostivanin/OTPClient otpclient] (for Linux desktop) and by visiting the webpage [mfa.aber.ac.uk] while on campus. If you can't get to campus see the section below on Socks proxies as a workaround for this.<br />
<br />
=== OpenConnect via Network Manager ===<br />
<br />
If you want to connect using a GUI then you can create a <br />
<br />
This does NOT work in Ubuntu versions 16.04 or 18.04. <br />
<br />
=== More detailed Linux notes ===<br />
<br />
Alun Jones in Computer Science support has written some detailed instructions on using you can find these on his [https://users.dcs.aber.ac.uk/auj/linux/ webpage].<br />
<br />
= SSH via Central =<br />
<br />
Central is a Linux server run by Information Services which is accessible off campus. You can login to it using SSH and then login to other machines (e.g. bert or your office PC) that are on the university network. Access to central is disabled by default unless you are part of the Computer Science department. <br />
<br />
== Enable access to central ==<br />
<br />
1. Go to the [https://myaccount.aber.ac.uk IS My Account page]<br />
2. Choose "Login to check and edit your account settings"<br />
3. Enter your university username and password when prompted<br />
4. Click "Add or remove permissions" in the Account section.<br />
5. Under the "Service Features on my own account" section ensure that "SSH access on central.aber.ac.uk" says "Remove". If it says "Add" then click on the "Add" button. It will take about 15 minutes to activate.<br />
<br />
== Connecting to Central ==<br />
<br />
Connect via SSH to central.aber.ac.uk. <br />
<br />
In Windows 10, Linux or MacOS open a terminal and type (replacing <userid> with your university user ID):<br />
<br />
ssh <userid>@central.aber.ac.uk <br />
<br />
The first time you connect you'll see a message about the host key.<br />
<br />
The authenticity of host 'central.aber.ac.uk (144.124.16.20)' can't be established.<br />
ECDSA key fingerprint is SHA256:MAyKXGiivwSsc9ICg1PQdh1Xo92qjTAyDhuub8xMkqA.<br />
Are you sure you want to continue connecting (yes/no)?<br />
<br />
Type "yes" (just pressing y won't work) and then press enter. Then enter your password when prompted. Once logged in the prompt will change to saying:<br />
<br />
central:~ $<br />
<br />
From here you could connect to Bert by typing:<br />
<br />
ssh bert.ibers.aber.ac.uk<br />
<br />
<br />
=== Other Windows SSH clients ===<br />
<br />
If you don't have a recent version of Windows 10 you'll need to install an SSH client. Try either [https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html Putty] or [https://mobaxterm.mobatek.net/download-home-edition.html MobaXTerm]. Putty is a small download and very simple, MobaXterm is bigger and has many other features. <br />
<br />
<br />
== SSH Port Forwarding == <br />
<br />
SSH port forwarding allows you to send data other than what would be on the screen/keyboard over your SSH session. It can be used to get around firewall restrictions to access computers behind a firewall or break out from behind a firewall to other parts of the internet. <br />
<br />
For a good visual guide explaining this, see [https://www.youtube.com/watch?v=AtuAdk4MwWw this youtube video]. <br />
<br />
=== Local Port Forwarding ===<br />
<br />
A local port forward allows you to access a single port on another computer that's accessible to the system you're SSH'ed into. For example you might want to SSH to central from outside the university and then have it port forward to Bert. This way you can SSH straight into Bert, using the port forward via central. <br />
<br />
Lets forward port 22 on bert (which is the port for SSH) to port 2222 on our local computer via central.aber.ac.uk. <br />
<br />
ssh -L 2222:bert.ibers.aber.ac.uk:22 <userid>@central.aber.ac.uk<br />
<br />
We can now connect to bert by running (in a different terminal window):<br />
<br />
ssh -p 2222 <userid>@localhost<br />
<br />
This feature is particularly useful for copying files between bert and your home PC as you can use it with the SCP/SFTP commands or a graphical copying utility like Filezilla.<br />
<br />
To copy a file called <localfile> to bert we can do:<br />
<br />
scp -P 2222 <localfile> <userid>@localhost:<br />
<br />
Note that scp uses a capital P to specify the port number, but ssh uses a lower case p.<br />
<br />
=== Dynamic Port Forwarding (SOCKS proxy) ===<br />
<br />
SSH has a nice extra feature where data can be forwarded to the remote computer for it to forward onto others. This effectively means any data you send will appear to be from the remote computer. To activate this feature we have to start SSH with an extra option. SSH will create a proxy server using the SOCKS protocol, any software we want to use this feature will have to be told to send its data to the SOCKS proxy. <br />
<br />
To start the proxy add the "-D" option to SSH followed by a port number between 1024 and 65535, 1080 is the default number for SOCKS but it doesn't really matter what you use. <br />
<br />
ssh <userid>@central.aber.ac.uk -D 1080<br />
<br />
<br />
Once we've entered our password there will be a SOCKS proxy server running on our local computer listening on port 1080. Any requests sent to this will be forwarded to central, which will forward them onto their destination. <br />
<br />
==== Proxy server settings ====<br />
<br />
To use the SOCKS proxy you'll have to change your proxy server settings in the applications you want to use it. Any programs which you don't change the settings for will continue to access the internet via your own internet provider. <br />
<br />
Firefox:<br />
<br />
* Click on the grill menu (3 horizontal lines in the top left)<br />
* Click on the cog icon (options)<br />
* Click on the Wizard hat (advanced settings) icon at the bottom of the left hand side<br />
* Choose the network tab<br />
* Under the "Connection" section at the top click "Settings" next to the "configure how Firefox connects to the internet" <br />
* Choose "Manual Proxy configuration"<br />
* Enter "localhost" in the SOCKS host section and set the port to 1080, choose "SOCKS v5" and press Ok.</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=VPN_and_SSH_via_central&diff=54111VPN and SSH via central2020-03-26T14:33:09Z<p>Ibers-admin: </p>
<hr />
<div>= VPN =<br />
<br />
The VPN (Virtual Private Network) will securely connect your computer to the university network when off-campus. To access bert and most IBERS virtual machines you will need to connect to the VPN first. <br />
<br />
The university uses a VPN program called Global Protect, instructions on how to install it can be found at on the [https://www.aber.ac.uk/en/is/it-services/vpn/ Information Services FAQ pages]. <br />
<br />
== More detailed notes ==<br />
<br />
Alun Jones in Computer Science support has written some detailed instructions on using you can find these on his [https://users.dcs.aber.ac.uk/auj/vpn/ webpage].<br />
<br />
== Using the VPN on Linux ==<br />
<br />
There is an official GlobalProtect client for Linux which is linked to on the Information Services page, however some users have reported difficulty getting it to work. <br />
<br />
<br />
=== OpenConnect on the command line ===<br />
<br />
As an alternative the open source openconnect client can be used, but it needs to be version 8.0 or newer. If you are running Ubuntu version 16.04 or 18.04 this is not available using your normal package sources, but can be installed via this [https://launchpad.net/~dwmw2/+archive/ubuntu/openconnect PPA]. Linux Mint 19 seems to work without any extra packages. The openconnect client can also be installed from source, you can download it from [https://github.com/openconnect/openconnect github].<br />
<br />
Use the command (replace <userid> with your aber user id, WITHOUT @aber.ac.uk:<br />
<br />
sudo openconnect --user=<userid> --protocol=gp pa-vpn.aber.ac.uk<br />
<br />
You will need to have setup a Multifactor Authentication token using a phone app such as Google Authenticator or [https://github.com/paolostivanin/OTPClient otpclient] (for Linux desktop) and by visiting the webpage [mfa.aber.ac.uk] while on campus. If you can't get to campus see the section below on Socks proxies as a workaround for this.<br />
<br />
=== OpenConnect via Network Manager ===<br />
<br />
If you want to connect using a GUI then you can create a <br />
<br />
This does NOT work in Ubuntu versions 16.04 or 18.04. <br />
<br />
=== More detailed Linux notes ===<br />
<br />
Alun Jones in Computer Science support has written some detailed instructions on using you can find these on his [https://users.dcs.aber.ac.uk/auj/linux/ webpage].<br />
<br />
= SSH via Central =<br />
<br />
Central is a Linux server run by Information Services which is accessible off campus. You can login to it using SSH and then login to other machines (e.g. bert or your office PC) that are on the university network. Access to central is disabled by default unless you are part of the Computer Science department. <br />
<br />
== Enable access to central ==<br />
<br />
1. Go to the [https://myaccount.aber.ac.uk IS My Account page]<br />
2. Choose "Login to check and edit your account settings"<br />
3. Enter your university username and password when prompted<br />
4. Click "Add or remove permissions" in the Account section.<br />
5. Under the "Service Features on my own account" section ensure that "SSH access on central.aber.ac.uk" says "Remove". If it says "Add" then click on the "Add" button. It will take about 15 minutes to activate.<br />
<br />
== Connecting to Central ==<br />
<br />
Connect via SSH to central.aber.ac.uk. <br />
<br />
In Windows 10, Linux or MacOS open a terminal and type (replacing <userid> with your university user ID):<br />
<br />
ssh <userid>@central.aber.ac.uk <br />
<br />
The first time you connect you'll see a message about the host key.<br />
<br />
The authenticity of host 'central.aber.ac.uk (144.124.16.20)' can't be established.<br />
ECDSA key fingerprint is SHA256:MAyKXGiivwSsc9ICg1PQdh1Xo92qjTAyDhuub8xMkqA.<br />
Are you sure you want to continue connecting (yes/no)?<br />
<br />
Type "yes" (just pressing y won't work) and then press enter. Then enter your password when prompted. Once logged in the prompt will change to saying:<br />
<br />
central:~ $<br />
<br />
From here you could connect to Bert by typing:<br />
<br />
ssh bert.ibers.aber.ac.uk<br />
<br />
<br />
=== Other Windows SSH clients ===<br />
<br />
If you don't have a recent version of Windows 10 you'll need to install an SSH client. Try either [https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html Putty] or [https://mobaxterm.mobatek.net/download-home-edition.html MobaXTerm]. Putty is a small download and very simple, MobaXterm is bigger and has many other features. <br />
<br />
<br />
== SSH Port Forwarding == <br />
<br />
SSH port forwarding allows you to send data other than what would be on the screen/keyboard over your SSH session. It can be used to get around firewall restrictions to access computers behind a firewall or break out from behind a firewall to other parts of the internet. <br />
<br />
For a good visual guide explaining this, see [https://www.youtube.com/watch?v=AtuAdk4MwWw this youtube video]. <br />
<br />
=== Local Port Forwarding ===<br />
<br />
A local port forward allows you to access a single port on another computer that's accessible to the system you're SSH'ed into. For example you might want to SSH to central from outside the university and then have it port forward to Bert. This way you can SSH straight into Bert, using the port forward via central. <br />
<br />
Lets forward port 22 on bert (which is the port for SSH) to port 2222 on our local computer via central.aber.ac.uk. <br />
<br />
ssh -L 2222:bert.ibers.aber.ac.uk:22 <userid>@central.aber.ac.uk<br />
<br />
We can now connect to bert by running:<br />
<br />
ssh -p 2222 <userid>@localhost<br />
<br />
=== Dynamic Port Forwarding (SOCKS proxy) ===<br />
<br />
SSH has a nice extra feature where data can be forwarded to the remote computer for it to forward onto others. This effectively means any data you send will appear to be from the remote computer. To activate this feature we have to start SSH with an extra option. SSH will create a proxy server using the SOCKS protocol, any software we want to use this feature will have to be told to send its data to the SOCKS proxy. <br />
<br />
To start the proxy, add the "-D" option to SSH followed by a port number between 1024 and 65535; 1080 is the conventional SOCKS port, but any free port will do. <br />
<br />
ssh <userid>@central.aber.ac.uk -D 1080<br />
<br />
<br />
Once we've entered our password there will be a SOCKS proxy server running on our local computer, listening on port 1080. Any requests sent to it will be forwarded to central, which will forward them on to their destination. <br />
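To check the proxy is working, you can point a command-line tool at it from another terminal. curl, for example, supports SOCKS5 directly (the URL here is just an example destination):<br />
<br />
```shell
# Send a request through the SOCKS proxy listening on localhost:1080.
# --socks5-hostname makes curl resolve DNS at the far end too, so
# hostnames that only central can resolve will also work.
curl --socks5-hostname localhost:1080 https://example.com/
```
<br />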
<br />
==== Proxy server settings ====<br />
<br />
To use the SOCKS proxy you'll have to change the proxy server settings in each application you want to use it with. Programs whose settings you don't change will continue to access the internet directly via your own internet provider. <br />
<br />
Firefox:<br />
<br />
* Click on the hamburger menu (3 horizontal lines in the top right)<br />
* Click on the cog icon (options)<br />
* Click on the Wizard hat (advanced settings) icon at the bottom of the left hand side<br />
* Choose the network tab<br />
* Under the "Connection" section at the top, click "Settings" next to "Configure how Firefox connects to the internet" <br />
* Choose "Manual proxy configuration"<br />
* Enter "localhost" in the SOCKS host section and set the port to 1080, choose "SOCKS v5" and press Ok.</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=VPN_and_SSH_via_central&diff=54110VPN and SSH via central2020-03-25T19:16:41Z<p>Ibers-admin: /* Socks proxy */</p>
<hr />
<div>= VPN =<br />
<br />
The VPN (Virtual Private Network) will securely connect your computer to the university network when off-campus. To access bert and most IBERS virtual machines you will need to connect to the VPN first. <br />
<br />
The university uses a VPN program called GlobalProtect; instructions on how to install it can be found on the [https://www.aber.ac.uk/en/is/it-services/vpn/ Information Services FAQ pages]. <br />
<br />
== More detailed notes ==<br />
<br />
Alun Jones in Computer Science support has written some detailed instructions on using the VPN; you can find these on his [https://users.dcs.aber.ac.uk/auj/vpn/ webpage].<br />
<br />
== Using the VPN on Linux ==<br />
<br />
There is an official GlobalProtect client for Linux which is linked to on the Information Services page; however, some users have reported difficulty getting it to work. <br />
<br />
<br />
=== OpenConnect on the command line ===<br />
<br />
As an alternative, the open-source OpenConnect client can be used, but it needs to be version 8.0 or newer. If you are running Ubuntu 16.04 or 18.04 this version is not available from your normal package sources, but it can be installed via this [https://launchpad.net/~dwmw2/+archive/ubuntu/openconnect PPA]. Linux Mint 19 seems to work without any extra packages. OpenConnect can also be built from source; you can download it from [https://github.com/openconnect/openconnect github].<br />
<br />
Use the command (replace <userid> with your Aber user ID, WITHOUT @aber.ac.uk):<br />
<br />
sudo openconnect --user=<userid> --protocol=gp pa-vpn.aber.ac.uk<br />
<br />
You will need to have set up a Multi-Factor Authentication token using a phone app such as Google Authenticator or [https://github.com/paolostivanin/OTPClient otpclient] (for Linux desktops), by visiting [https://mfa.aber.ac.uk mfa.aber.ac.uk] while on campus. If you can't get to campus, see the section below on SOCKS proxies for a workaround.<br />
<br />
=== OpenConnect via Network Manager ===<br />
<br />
If you want to connect using a GUI you can create a new VPN connection in Network Manager using the OpenConnect plugin.<br />
<br />
This does NOT work in Ubuntu versions 16.04 or 18.04. <br />
<br />
=== More detailed Linux notes ===<br />
<br />
Alun Jones in Computer Science support has written some detailed instructions on using Linux; you can find these on his [https://users.dcs.aber.ac.uk/auj/linux/ webpage].<br />
<br />
= SSH via Central =<br />
<br />
Central is a Linux server run by Information Services which is accessible off campus. You can login to it using SSH and then login to other machines (e.g. bert or your office PC) that are on the university network. Access to central is disabled by default unless you are part of the Computer Science department. <br />
<br />
== Enable access to central ==<br />
<br />
1. Go to the [https://myaccount.aber.ac.uk IS My Account page]<br />
2. Choose "Login to check and edit your account settings"<br />
3. Enter your university username and password when prompted<br />
4. Click "Add or remove permissions" in the Account section.<br />
5. Under the "Service Features on my own account" section, check the "SSH access on central.aber.ac.uk" entry: if the button says "Remove", access is already enabled; if it says "Add", click the "Add" button. Activation takes about 15 minutes.<br />
<br />
== Connecting to Central ==<br />
<br />
Connect via SSH to central.aber.ac.uk. <br />
<br />
On Windows 10, Linux or macOS, open a terminal and type (replacing <userid> with your university user ID):<br />
<br />
ssh <userid>@central.aber.ac.uk <br />
<br />
The first time you connect you'll see a message about the host key.<br />
<br />
The authenticity of host 'central.aber.ac.uk (144.124.16.20)' can't be established.<br />
ECDSA key fingerprint is SHA256:MAyKXGiivwSsc9ICg1PQdh1Xo92qjTAyDhuub8xMkqA.<br />
Are you sure you want to continue connecting (yes/no)?<br />
<br />
Type "yes" (just pressing y won't work) and then press Enter, then type your password when prompted. Once logged in, the prompt will change to:<br />
<br />
central:~ $<br />
<br />
From here you could connect to Bert by typing:<br />
<br />
ssh bert.ibers.aber.ac.uk<br />
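If you make this two-step login often, recent versions of OpenSSH (7.3 and later) can chain the hops into a single command with the -J (jump host) option:<br />
<br />
```shell
# Log in to bert in one step, tunnelling through central
ssh -J <userid>@central.aber.ac.uk <userid>@bert.ibers.aber.ac.uk
```
<br />
The equivalent ProxyJump directive can also be set in ~/.ssh/config if you prefer not to type the option each time.<br />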
<br />
<br />
=== Other Windows SSH clients ===<br />
<br />
If you don't have a recent version of Windows 10 you'll need to install an SSH client. Try either [https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html Putty] or [https://mobaxterm.mobatek.net/download-home-edition.html MobaXTerm]. Putty is a small download and very simple; MobaXterm is bigger and has many more features. <br />
<br />
<br />
== Socks proxy ==<br />
<br />
SSH has a useful extra feature: data can be forwarded to the remote computer, which then forwards it on to other destinations. This effectively means any data you send appears to come from the remote computer. To activate this feature we start SSH with an extra option; SSH then creates a proxy server using the SOCKS protocol, and any software that should use this feature must be told to send its data to the SOCKS proxy. <br />
<br />
To start the proxy, add the "-D" option to SSH followed by a port number between 1024 and 65535; 1080 is the conventional SOCKS port, but any free port will do. <br />
<br />
ssh <userid>@central.aber.ac.uk -D 1080<br />
<br />
<br />
Once we've entered our password there will be a SOCKS proxy server running on our local computer, listening on port 1080. Any requests sent to it will be forwarded to central, which will forward them on to their destination. <br />
<br />
=== Proxy server settings ===<br />
<br />
To use the SOCKS proxy you'll have to change the proxy server settings in each application you want to use it with. Programs whose settings you don't change will continue to access the internet directly via your own internet provider. <br />
<br />
Firefox:<br />
<br />
* Click on the hamburger menu (3 horizontal lines in the top right)<br />
* Click on the cog icon (options)<br />
* Click on the Wizard hat (advanced settings) icon at the bottom of the left hand side<br />
* Choose the network tab<br />
* Under the "Connection" section at the top, click "Settings" next to "Configure how Firefox connects to the internet" <br />
* Choose "Manual proxy configuration"<br />
* Enter "localhost" in the SOCKS host box, set the port to 1080, choose "SOCKS v5" and press OK.<br />
<br />
== SSH Port Forwarding ==</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=VPN_and_SSH_via_central&diff=54109VPN and SSH via central2020-03-25T18:49:14Z<p>Ibers-admin: Created page with "= VPN = The VPN (Virtual Private Network) will securely connect your computer to the university network when off-campus. To access bert and most IBERS virtual machines you wi..."</p>
<hr />
<div>= VPN =<br />
<br />
The VPN (Virtual Private Network) will securely connect your computer to the university network when off-campus. To access bert and most IBERS virtual machines you will need to connect to the VPN first. <br />
<br />
The university uses a VPN program called GlobalProtect; instructions on how to install it can be found on the [https://www.aber.ac.uk/en/is/it-services/vpn/ Information Services FAQ pages]. <br />
<br />
== More detailed notes ==<br />
<br />
Alun Jones in Computer Science support has written some detailed instructions on using the VPN; you can find these on his [https://users.dcs.aber.ac.uk/auj/vpn/ webpage].<br />
<br />
== Using the VPN on Linux ==<br />
<br />
There is an official GlobalProtect client for Linux which is linked to on the Information Services page; however, some users have reported difficulty getting it to work. <br />
<br />
<br />
=== OpenConnect on the command line ===<br />
<br />
As an alternative, the open-source OpenConnect client can be used, but it needs to be version 8.0 or newer. If you are running Ubuntu 16.04 or 18.04 this version is not available from your normal package sources, but it can be installed via this [https://launchpad.net/~dwmw2/+archive/ubuntu/openconnect PPA]. Linux Mint 19 seems to work without any extra packages. OpenConnect can also be built from source; you can download it from [https://github.com/openconnect/openconnect github].<br />
<br />
Use the command (replace <userid> with your Aber user ID, WITHOUT @aber.ac.uk):<br />
<br />
sudo openconnect --user=<userid> --protocol=gp pa-vpn.aber.ac.uk<br />
<br />
You will need to have set up a Multi-Factor Authentication token using a phone app such as Google Authenticator or [https://github.com/paolostivanin/OTPClient otpclient] (for Linux desktops), by visiting [https://mfa.aber.ac.uk mfa.aber.ac.uk] while on campus. If you can't get to campus, see the section below on SOCKS proxies for a workaround.<br />
<br />
=== OpenConnect via Network Manager ===<br />
<br />
If you want to connect using a GUI you can create a new VPN connection in Network Manager using the OpenConnect plugin.<br />
<br />
This does NOT work in Ubuntu versions 16.04 or 18.04. <br />
<br />
=== More detailed Linux notes ===<br />
<br />
Alun Jones in Computer Science support has written some detailed instructions on using Linux; you can find these on his [https://users.dcs.aber.ac.uk/auj/linux/ webpage].<br />
<br />
= SSH via Central =<br />
<br />
Central is a Linux server run by Information Services which is accessible off campus. You can login to it using SSH and then login to other machines (e.g. bert or your office PC) that are on the university network. Access to central is disabled by default unless you are part of the Computer Science department. <br />
<br />
== Enable access to central ==<br />
<br />
1. Go to the [https://myaccount.aber.ac.uk IS My Account page]<br />
2. Choose "Login to check and edit your account settings"<br />
3. Enter your university username and password when prompted<br />
4. Click "Add or remove permissions" in the Account section.<br />
5. Under the "Service Features on my own account" section, check the "SSH access on central.aber.ac.uk" entry: if the button says "Remove", access is already enabled; if it says "Add", click the "Add" button. Activation takes about 15 minutes.<br />
<br />
== Connecting to Central ==<br />
<br />
Connect via SSH to central.aber.ac.uk. <br />
<br />
On Windows 10, Linux or macOS, open a terminal and type (replacing <userid> with your university user ID):<br />
<br />
ssh <userid>@central.aber.ac.uk <br />
<br />
The first time you connect you'll see a message about the host key.<br />
<br />
The authenticity of host 'central.aber.ac.uk (144.124.16.20)' can't be established.<br />
ECDSA key fingerprint is SHA256:MAyKXGiivwSsc9ICg1PQdh1Xo92qjTAyDhuub8xMkqA.<br />
Are you sure you want to continue connecting (yes/no)?<br />
<br />
Type "yes" (just pressing y won't work) and then press Enter, then type your password when prompted. Once logged in, the prompt will change to:<br />
<br />
central:~ $<br />
<br />
From here you could connect to Bert by typing:<br />
<br />
ssh bert.ibers.aber.ac.uk<br />
<br />
<br />
=== Other Windows SSH clients ===<br />
<br />
If you don't have a recent version of Windows 10 you'll need to install an SSH client. Try either [https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html Putty] or [https://mobaxterm.mobatek.net/download-home-edition.html MobaXTerm]. Putty is a small download and very simple; MobaXterm is bigger and has many more features. <br />
<br />
<br />
== Socks proxy ==<br />
<br />
SSH has a useful extra feature: data destined for the internet can be sent through the SSH connection, and the computer at the other end will forward it on. This makes your traffic appear to other computers to originate from the remote machine.<br />
This effectively forms a very basic VPN.<br />
<br />
<br />
== SSH Port Forwarding ==</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Main_Page&diff=54108Main Page2020-03-25T17:59:17Z<p>Ibers-admin: /* Remote Working Resources */</p>
<hr />
<div>[[File:IBERS-logo-Bi.jpg]]<br />
<br />
<br />
Welcome to the HPC Wiki for the [http://bioinformatics.ibers.aber.ac.uk Bioinformatics] research group at [http://www.aber.ac.uk Aberystwyth University]. The aim of this wiki is to allow researchers in the field of Bioinformatics in IBERS at Aberystwyth to document common tasks using the High Performance Computing cluster.<br />
<br />
== Editing this Wiki ==<br />
<br />
Please email ibers-cs@aber.ac.uk to ask for access to edit this wiki.<br />
<br />
== Troubleshooting ==<br />
<br />
Please read these sections if you are having trouble. There is a chance that you can resolve the problem on your own; if not, you will at least be able to ask for help more effectively. Some (most) of these ideas are shamelessly taken from the [https://stackoverflow.com/help/ stackoverflow help pages].<br />
<br />
[[Asking for Help]]<br />
<br />
== Remote Working Resources ==<br />
<br />
[[VPN and SSH via central]]<br />
<br />
[[Copying files with SCP/WinSCP/Filezilla/Cyberduck]]<br />
<br />
[[Using SSH tunnels]]<br />
<br />
[[Video conferencing and collaboration tools]]<br />
<br />
[[Running graphical programs remotely]]<br />
<br />
== Super Computing Wales Guides ==<br />
<br />
[[SCW Introduction]]<br />
<br />
[[SCW Training Materials]]<br />
<br />
[[SCW Access Procedures]]<br />
<br />
<br />
== IBERS HPC Guides ==<br />
<br />
'''Overview'''<br />
<br />
[[Bert and Ernie - An Overview]]<br />
<br />
[[Nodes, Cores, Slots]]<br />
<br />
[[Scheduling and queuing]]<br />
<br />
[[Getting an account]]<br />
<br />
'''Using the HPC'''<br />
<br />
[[Quick start]]<br />
<br />
[[Your disk space]]<br />
<br />
[[Module Environment]]<br />
<br />
[[Submitting your job using SGE]]<br />
<br />
[[Complex submissions]]<br />
<br />
[[Monitoring your jobs]]<br />
<br />
[[Available Software]]<br />
<br />
[[UNIX Graphical interface]]<br />
<br />
[[Array jobs]]<br />
<br />
[[ABySS]]<br />
<br />
[[Updated Scheduler]]<br />
<br />
[[WinSCP from external]]<br />
<br />
== Tools to access the HPC (Windows) ==<br />
<br />
[[Putty]]<br />
<br />
[[MobaXterm]]<br />
<br />
[[Filezilla]]<br />
<br />
== Slides and Talks ==<br />
<br />
[[File:Sge queue changes.pdf]]<br />
<br />
[[File:Containers.pdf]]<br />
<br />
== Bioinformatics Working Group related ==<br />
<br />
[[Meeting minutes]]<br />
<br />
== Special HPC Workflows ==<br />
<br />
[[Matlab]]<br />
<br />
[[Blast]]<br />
<br />
[[Active Perl]]<br />
<br />
[[SPRINT]]<br />
<br />
[[Progressive Cactus]]<br />
<br />
[[OrthoMCL]]<br />
<br />
[[Diamond Blast]]<br />
<br />
[[Singularity Containers]]<br />
<br />
== Other Services ==<br />
<br />
[[Blast2go]]<br />
<br />
== Best Practice ==<br />
<br />
[[Running BLAST optimally]]<br />
<br />
[[Use scratch space]]<br />
<br />
== Tutorials ==<br />
<br />
[[RNA Seq on HPC]]<br />
<br />
[[RNA Seq on HPC (using Trinity)]]<br />
<br />
[[Genome Annotation]]<br />
<br />
== Tips ==<br />
<br />
[[Splitting Multifastas]]<br />
<br />
== Workarounds ==<br />
<br />
[[Java memory allocation issues]]<br />
<br />
[[convert_fasta_to_1l_fasta.sh]]<br />
<br />
[[Your software/script uses threads and works locally, but crashes if run under the SGE's queue]]<br />
<br />
[[Intermittent login failures on the HPC and some VMs]]</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Main_Page&diff=54107Main Page2020-03-25T17:58:39Z<p>Ibers-admin: /* Troubleshooting */</p>
<hr />
<div>[[File:IBERS-logo-Bi.jpg]]<br />
<br />
<br />
Welcome to the HPC Wiki for the [http://bioinformatics.ibers.aber.ac.uk Bioinformatics] research group at [http://www.aber.ac.uk Aberystwyth University]. The aim of this wiki is to allow researchers in the field of Bioinformatics in IBERS at Aberystwyth to document common tasks using the High Performance Computing cluster.<br />
<br />
== Editing this Wiki ==<br />
<br />
Please email ibers-cs@aber.ac.uk to ask for access to edit this wiki.<br />
<br />
== Troubleshooting ==<br />
<br />
Please read these sections if you are having trouble. There is a chance that you can resolve the problem on your own; if not, you will at least be able to ask for help more effectively. Some (most) of these ideas are shamelessly taken from the [https://stackoverflow.com/help/ stackoverflow help pages].<br />
<br />
[[Asking for Help]]<br />
<br />
== Remote Working Resources ==<br />
<br />
[[VPN]]<br />
<br />
[[Copying files with SCP/WinSCP/Filezilla/Cyberduck]]<br />
<br />
[[Using SSH tunnels]]<br />
<br />
[[Video conferencing and collaboration tools]]<br />
<br />
[[Running graphical programs remotely]]<br />
<br />
== Super Computing Wales Guides ==<br />
<br />
[[SCW Introduction]]<br />
<br />
[[SCW Training Materials]]<br />
<br />
[[SCW Access Procedures]]<br />
<br />
<br />
== IBERS HPC Guides ==<br />
<br />
'''Overview'''<br />
<br />
[[Bert and Ernie - An Overview]]<br />
<br />
[[Nodes, Cores, Slots]]<br />
<br />
[[Scheduling and queuing]]<br />
<br />
[[Getting an account]]<br />
<br />
'''Using the HPC'''<br />
<br />
[[Quick start]]<br />
<br />
[[Your disk space]]<br />
<br />
[[Module Environment]]<br />
<br />
[[Submitting your job using SGE]]<br />
<br />
[[Complex submissions]]<br />
<br />
[[Monitoring your jobs]]<br />
<br />
[[Available Software]]<br />
<br />
[[UNIX Graphical interface]]<br />
<br />
[[Array jobs]]<br />
<br />
[[ABySS]]<br />
<br />
[[Updated Scheduler]]<br />
<br />
[[WinSCP from external]]<br />
<br />
== Tools to access the HPC (Windows) ==<br />
<br />
[[Putty]]<br />
<br />
[[MobaXterm]]<br />
<br />
[[Filezilla]]<br />
<br />
== Slides and Talks ==<br />
<br />
[[File:Sge queue changes.pdf]]<br />
<br />
[[File:Containers.pdf]]<br />
<br />
== Bioinformatics Working Group related ==<br />
<br />
[[Meeting minutes]]<br />
<br />
== Special HPC Workflows ==<br />
<br />
[[Matlab]]<br />
<br />
[[Blast]]<br />
<br />
[[Active Perl]]<br />
<br />
[[SPRINT]]<br />
<br />
[[Progressive Cactus]]<br />
<br />
[[OrthoMCL]]<br />
<br />
[[Diamond Blast]]<br />
<br />
[[Singularity Containers]]<br />
<br />
== Other Services ==<br />
<br />
[[Blast2go]]<br />
<br />
== Best Practice ==<br />
<br />
[[Running BLAST optimally]]<br />
<br />
[[Use scratch space]]<br />
<br />
== Tutorials ==<br />
<br />
[[RNA Seq on HPC]]<br />
<br />
[[RNA Seq on HPC (using Trinity)]]<br />
<br />
[[Genome Annotation]]<br />
<br />
== Tips ==<br />
<br />
[[Splitting Multifastas]]<br />
<br />
== Workarounds ==<br />
<br />
[[Java memory allocation issues]]<br />
<br />
[[convert_fasta_to_1l_fasta.sh]]<br />
<br />
[[Your software/script uses threads and works locally, but crashes if run under the SGE's queue]]<br />
<br />
[[Intermittent login failures on the HPC and some VMs]]</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Asking_for_Help&diff=54106Asking for Help2020-03-25T17:57:47Z<p>Ibers-admin: /* Where else can I get help? */</p>
<hr />
<div><br />
== Where can I get help? ==<br />
<br />
The easiest way to get help is to send an email to ibers-cs@aber.ac.uk. You can also ask in person, but there is a good chance that I (or whoever is in a position to help at the time of asking) will forget the details of your problem a few minutes later, so it is a good idea to send a follow-up email with the details anyway.<br />
<br />
== What information do I need to provide? ==<br />
<br />
In all cases, you should provide enough information so that an admin can reproduce your problem. What is required will depend on the nature of your problem, but here is a good start:<br />
<br />
=== Context ===<br />
<br />
To paraphrase the [https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem stackexchange meta], people may ask about a failed attempt at a solution to a problem, rather than the underlying problem itself. To avoid this, include some context about your issue.<br />
<br />
Consider the following contrived example: Alice is experiencing stomach pain due to appendicitis. She tried taking ibuprofen, but it proves ineffective. Believing that she has a solution, she asks her friend Bob to buy her a stronger painkiller, without offering any context. Bob complies, but the stronger painkiller is not a solution to her problem and it does not cure her of her appendicitis. Had Alice described her actual problem Bob could have directed Alice to the correct solution (the nearest hospital).<br />
<br />
=== Command/Script ===<br />
<br />
Include the command/script that is not working as expected.<br />
<br />
=== Environment Variables ===<br />
<br />
Include the output of <br />
<br />
env<br />
<br />
just before or after you execute your misbehaving script/command. <br />
<br />
=== Output ===<br />
<br />
If execution of a program is resulting in an error, or just not producing the output you expect, you should include the '''''exact''''' output. It's ok to copy and paste here. If the output is longer than a few lines, it would be better to save it to a file and attach it to your email instead.<br />
<br />
If you are using bash, you can redirect the stdout and stderr streams of a command (`ls -l` in this case) to a file like this:<br />
<br />
ls -l >> output_of_ls.txt 2>&1<br />
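If it helps to keep the normal output separate from the error messages, each stream can be sent to its own file instead (the file names here are just examples):<br />
<br />
```shell
# stdout goes to one file, stderr to another.
# "missing_dir_xyz" deliberately doesn't exist, so both files get content.
ls -l / missing_dir_xyz > stdout.txt 2> stderr.txt || true
```
<br />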
<br />
=== Expected Output ===<br />
<br />
This is very important. An admin will probably have no experience with the program or tool you are trying to use, and will not necessarily know what the correct output looks like.<br />
<br />
=== What have you tried? ===<br />
<br />
Presumably you have tried to troubleshoot the problem yourself to the best of your ability. Describe what you tried and why it didn't work.<br />
<br />
== Where else can I get help? ==<br />
<br />
=== Hacky Hour ===<br />
<br />
Hacky Hour is an informal meetup for anyone using computing to solve data intensive research problems (such as bioinformatics). Meetings happen (approximately) every other week. For more details see <br />
https://scw-aberystwyth.github.io/HackyHour/</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Main_Page&diff=54105Main Page2020-03-25T17:56:50Z<p>Ibers-admin: </p>
<hr />
<div>[[File:IBERS-logo-Bi.jpg]]<br />
<br />
<br />
Welcome to the HPC Wiki for the [http://bioinformatics.ibers.aber.ac.uk Bioinformatics] research group at [http://www.aber.ac.uk Aberystwyth University]. The aim of this wiki is to allow researchers in the field of Bioinformatics in IBERS at Aberystwyth to document common tasks using the High Performance Computing cluster.<br />
<br />
== Editing this Wiki ==<br />
<br />
Please email ibers-cs@aber.ac.uk to ask for access to edit this wiki.<br />
<br />
== Troubleshooting ==<br />
<br />
Please read these sections if you are having trouble. There is a chance that you can resolve the problem on your own; if not, you will at least be able to ask for help more effectively. Some (most) of these ideas are shamelessly taken from the [https://stackoverflow.com/help/ stackoverflow help pages].<br />
<br />
[[Asking for Help]]<br />
<br />
[[Minimal, Verifiable, Complete example]]<br />
<br />
== Remote Working Resources ==<br />
<br />
[[VPN]]<br />
<br />
[[Copying files with SCP/WinSCP/Filezilla/Cyberduck]]<br />
<br />
[[Using SSH tunnels]]<br />
<br />
[[Video conferencing and collaboration tools]]<br />
<br />
[[Running graphical programs remotely]]<br />
<br />
== Super Computing Wales Guides ==<br />
<br />
[[SCW Introduction]]<br />
<br />
[[SCW Training Materials]]<br />
<br />
[[SCW Access Procedures]]<br />
<br />
<br />
== IBERS HPC Guides ==<br />
<br />
'''Overview'''<br />
<br />
[[Bert and Ernie - An Overview]]<br />
<br />
[[Nodes, Cores, Slots]]<br />
<br />
[[Scheduling and queuing]]<br />
<br />
[[Getting an account]]<br />
<br />
'''Using the HPC'''<br />
<br />
[[Quick start]]<br />
<br />
[[Your disk space]]<br />
<br />
[[Module Environment]]<br />
<br />
[[Submitting your job using SGE]]<br />
<br />
[[Complex submissions]]<br />
<br />
[[Monitoring your jobs]]<br />
<br />
[[Available Software]]<br />
<br />
[[UNIX Graphical interface]]<br />
<br />
[[Array jobs]]<br />
<br />
[[ABySS]]<br />
<br />
[[Updated Scheduler]]<br />
<br />
[[WinSCP from external]]<br />
<br />
== Tools to access the HPC (Windows) ==<br />
<br />
[[Putty]]<br />
<br />
[[MobaXterm]]<br />
<br />
[[Filezilla]]<br />
<br />
== Slides and Talks ==<br />
<br />
[[File:Sge queue changes.pdf]]<br />
<br />
[[File:Containers.pdf]]<br />
<br />
== Bioinformatics Working Group related ==<br />
<br />
[[Meeting minutes]]<br />
<br />
== Special HPC Workflows ==<br />
<br />
[[Matlab]]<br />
<br />
[[Blast]]<br />
<br />
[[Active Perl]]<br />
<br />
[[SPRINT]]<br />
<br />
[[Progressive Cactus]]<br />
<br />
[[OrthoMCL]]<br />
<br />
[[Diamond Blast]]<br />
<br />
[[Singularity Containers]]<br />
<br />
== Other Services ==<br />
<br />
[[Blast2go]]<br />
<br />
== Best Practice ==<br />
<br />
[[Running BLAST optimally]]<br />
<br />
[[Use scratch space]]<br />
<br />
== Tutorials ==<br />
<br />
[[RNA Seq on HPC]]<br />
<br />
[[RNA Seq on HPC (using Trinity)]]<br />
<br />
[[Genome Annotation]]<br />
<br />
== Tips ==<br />
<br />
[[Splitting Multifastas]]<br />
<br />
== Workarounds ==<br />
<br />
[[Java memory allocation issues]]<br />
<br />
[[convert_fasta_to_1l_fasta.sh]]<br />
<br />
[[Your software/script uses threads and works locally, but crashes if run under the SGE's queue]]<br />
<br />
[[Intermittent login failures on the HPC and some VMs]]</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=SCW_Access_Procedures&diff=54102SCW Access Procedures2018-09-20T15:50:54Z<p>Ibers-admin: </p>
<hr />
<div>To request access to SCW you need to fill out a user account request and project request. These will be reviewed by SCW staff in Swansea or Cardiff. Go to https://my.supercomputing.wales and sign in with your Aberystwyth ID to create an account, then choose "Create Project Application" to apply for a project. Alternately you can request to join an existing project if you know its project code, ask the principal investigator of the project for this.</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=SCW_Introduction&diff=54101SCW Introduction2018-09-20T15:48:41Z<p>Ibers-admin: </p>
<hr />
<div>= Introduction = <br />
<br />
[http://supercomputing.wales Super Computing Wales (SCW)] is a shared supercomputing facility for Aberystwyth, Bangor, Cardiff and Swansea Universities. All its computing resources are located in Cardiff or Swansea. <br />
<br />
It's considerably more powerful than the IBERS HPC, but is shared by many more users.<br />
<br />
== Current Status ==<br />
<br />
SCW is a follow-on to the earlier High Performance Computing Wales project that ran from 2010 to 2015. Super Computing Wales is available to use as of September 24th 2018.<br />
<br />
== Training ==<br />
<br />
Training courses for SCW will be running periodically. See http://jump.aber.ac.uk/?nmhpt for more details. <br />
<br />
== Research Software Engineers ==<br />
<br />
SCW has employed a group of software engineers to help write new software or adapt existing software to take advantage of SCW. Colin Sauze (cos@aber.ac.uk) is the software engineer for Aberystwyth, please email him with any SCW specific queries. <br />
<br />
== Mailing List ==<br />
<br />
There is an SCW mailing list for all Aberystwyth based users of the system. Email scw-users@aber.ac.uk to send to this list. If you want to subscribe to the list email Colin Sauze (cos@aber.ac.uk) to request to be put on the list.</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Singularity_Containers&diff=54100Singularity Containers2018-08-07T10:37:46Z<p>Ibers-admin: </p>
<hr />
<div>== What is Singularity ==<br />
<br />
Singularity [https://www.sylabs.io/] is a container system that doesn't need root access. Containers bundle a set of applications and a minimal operating system together in a single file. They are a good way to ensure that anyone running the software uses the exact same set of <br />
libraries and dependencies. This helps reproducible science, where we want somebody else to be able to recreate our results. Containers also let us run newer*/different operating systems than the one installed on the HPC.<br />
<br />
* = up to a point, the very latest operating systems (e.g. Ubuntu 18.04+) still don't work.<br />
<br />
== Loading singularity ==<br />
<br />
Run the command (or add it to your submission script):<br />
<br />
module load singularity <br />
<br />
<br />
== Obtaining Containers ==<br />
<br />
Singularity Hub [https://singularity-hub.org/collections] contains a number of premade containers which you can download and use. To download these run:<br />
<br />
singularity pull shub://<username>/<imagename>:<tag> <br />
<br />
e.g.<br />
<br />
singularity pull shub://SupercomputingWales/singularity_hub:base_image <br />
<br />
This will then download the file to <username>-singularity_hub-master-<image name>.simg <br />
<br />
== Running a shell in a container ==<br />
<br />
The "singularity shell" command runs a shell inside the container, from which you can execute any software installed in the container:<br />
<br />
singularity shell <image name><br />
<br />
e.g.<br />
<br />
singularity shell SupercomputingWales-singularity_hub-master-base_image.simg <br />
<br />
== Running the container's default actions ==<br />
<br />
Most containers specify a default command which they will run. The "singularity run" command will execute this.<br />
<br />
singularity run <image name><br />
<br />
<br />
== Accessing the host file system from inside the container ==<br />
<br />
singularity shell -B /ibers/ernie/home:/home <imagename><br />
<br />
This will mount the directory /ibers/ernie/home from the host under /home in the container. Your own home directory will be under /home/<userid>. Note that accessing the ~ or ~<userid> directories won't work.<br />
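You can also run a single command inside the container without starting an interactive shell, using "singularity exec" with the same bind option (the image name here is illustrative):<br />
<br />
```shell
# List your mounted home directory from inside the container
singularity exec -B /ibers/ernie/home:/home <imagename> ls /home/<userid>
```
<br />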
<br />
== Writing your own containers ==<br />
<br />
To make your own containers you'll probably have to install singularity on your own computer, as building a container requires root access.<br />
<br />
This example takes an ubuntu 16.04 image as a base, then installs the program cowsay. This is done when the container is built. When the container is run it executes cowsay with the arguments given on the command line.<br />
<br />
<br />
bootstrap: docker<br />
From:ubuntu:16.04<br />
<br />
%help<br />
Example container for Cowsay<br />
<br />
%labels<br />
MAINTAINER IBERS Admin<br />
<br />
%environment<br />
#configure our locale, without this we'll get locale errors<br />
export LC_ALL=C<br />
#cowsay installs to /usr/games, but this isn't in the path by default<br />
export PATH=/usr/games:$PATH<br />
<br />
%post <br />
apt-get update<br />
apt-get -y install cowsay<br />
<br />
%runscript<br />
cowsay "$@"<br />
<br />
<br />
To build the container save the above example in a file called Singularity and run:<br />
<br />
sudo singularity build cowsay.simg Singularity<br />
<br />
This will create an image file called cowsay.simg containing all the required software to run cowsay in an Ubuntu 16.04 operating system.<br />
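<br />
One detail in the %runscript above is worth knowing: whether the argument list is quoted changes how arguments containing spaces reach cowsay. A quick shell sketch of the difference:<br />
<br />
```shell
# Unquoted $@ word-splits arguments again; quoted "$@" passes them intact.
count() { printf '%s\n' "$#"; }
unquoted() { count $@; }      # re-splits on whitespace
quoted()   { count "$@"; }    # preserves each argument
unquoted "hello world"        # prints 2
quoted   "hello world"        # prints 1
```
<br />
So cowsay "$@" forwards each argument unchanged, while an unquoted cowsay $@ would split "hello world" into two words.<br />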
<br />
== Publishing a Container ==<br />
<br />
* Publish the Singularity file on Github.<br />
* Create an account on Singularity Hub.<br />
* Add the repository to Singularity Hub; it will be built automatically and made available for download with the singularity pull command.<br />
<br />
== More Information ==<br />
<br />
* [[:file:Containers.pdf|Presentation on containers]] from the July 2018 Bioinformatics workshop.<br />
* [https://www.sylabs.io/guides/2.5.1/user-guide Official Documentation]</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Singularity_Containers&diff=54099Singularity Containers2018-08-03T17:31:49Z<p>Ibers-admin: Created page with "== What is Singularity == Singularity [https://www.sylabs.io/] is a container system that doesn't need root access. Containers are a set of applications and a minimal operati..."</p>
<hr />
<div>== What is Singularity ==<br />
<br />
Singularity [https://www.sylabs.io/] is a container system that doesn't need root access to run containers. Containers bundle a set of applications and a minimal operating system together in a single file. They are a good way to ensure that anyone running the software uses exactly the same set of <br />
libraries and dependencies. This helps reproducible science, where we want somebody else to be able to recreate our results. They also allow us to run newer* or different operating systems than the one installed on the HPC.<br />
<br />
* = up to a point, the very latest operating systems (e.g. Ubuntu 18.04+) still don't work.<br />
<br />
== Loading singularity ==<br />
<br />
Run the command (or add it to your submission script):<br />
<br />
module load singularity <br />
<br />
<br />
== Obtaining Containers ==<br />
<br />
Singularity Hub [https://singularity-hub.org/collections] contains a number of premade containers which you can download and use. To download these run:<br />
<br />
singularity pull shub://<username>/<imagename>:<tag> <br />
<br />
e.g.<br />
<br />
singularity pull shub://SupercomputingWales/singularity_hub:base_image <br />
<br />
This will download the file to <username>-<imagename>-master-<tag>.simg (for the example above, SupercomputingWales-singularity_hub-master-base_image.simg).<br />
<br />
== Running a shell in a container ==<br />
<br />
The "singularity shell" command runs a shell inside the container, from which you can execute any of the software installed in it.<br />
<br />
singularity shell <image name><br />
<br />
e.g.<br />
<br />
singularity shell SupercomputingWales-singularity_hub-master-base_image.simg <br />
<br />
== Running the container's default actions ==<br />
<br />
Most containers define a default command to run; the "singularity run" command executes it.<br />
<br />
singularity run <image name><br />
<br />
<br />
== Accessing the host file system from inside the container ==<br />
<br />
singularity shell -B /ibers/ernie/home:/home <imagename><br />
<br />
This will mount the host directory /ibers/ernie/home at /home inside the container; your own home directory will then be at /home/<userid>. Note that the ~ and ~<userid> shortcuts won't work inside the container.<br />
<br />
== Writing your own containers ==<br />
<br />
To make your own containers you'll probably have to install singularity on your own computer, as building a container requires root access.<br />
<br />
This example uses an Ubuntu 16.04 image as a base and installs the program cowsay at build time. When the container is run, it executes cowsay with the arguments given on the command line.<br />
<br />
<br />
Bootstrap: docker<br />
From: ubuntu:16.04<br />
<br />
%help<br />
Example container for Cowsay<br />
<br />
%labels<br />
MAINTAINER IBERS Admin<br />
<br />
%environment<br />
#configure our locale, without this we'll get locale errors<br />
export LC_ALL=C<br />
#cowsay installs to /usr/games, but this isn't in the path by default<br />
export PATH=/usr/games:$PATH<br />
<br />
%post <br />
apt-get update<br />
apt-get -y install cowsay<br />
<br />
%runscript<br />
cowsay "$@"<br />
<br />
<br />
To build the container save the above example in a file called Singularity and run:<br />
<br />
sudo singularity build cowsay.simg Singularity<br />
<br />
This will create an image file called cowsay.simg containing all the required software to run cowsay in an Ubuntu 16.04 operating system.<br />
<br />
== Publishing a Container ==<br />
<br />
* Publish the Singularity file on Github.<br />
* Create an account on Singularity Hub.<br />
* Add this repository to Singularity Hub and it will be automatically built by singularity hub and made available for download using the singularity pull command.</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Main_Page&diff=54098Main Page2018-08-03T17:06:14Z<p>Ibers-admin: /* Special HPC Workflows */</p>
<hr />
<div>[[File:IBERS-logo-Bi.jpg]]<br />
<br />
<br />
Welcome to the HPC Wiki for the [http://bioinformatics.ibers.aber.ac.uk Bioinformatics] research group at [http://www.aber.ac.uk Aberystwyth University]. The aim of this wiki is to allow researchers in the field of Bioinformatics in IBERS at Aberystwyth to document common tasks using the High Performance Computing cluster.<br />
<br />
== Editing this Wiki ==<br />
<br />
Please email ibers-cs@aber.ac.uk to ask for access to edit this wiki.<br />
<br />
== Troubleshooting ==<br />
<br />
Please read these sections if you are having trouble. There is a chance that you can resolve the problem on your own and ask for help more effectively otherwise. Some (most) of these ideas are shamelessly taken from the [https://stackoverflow.com/help/ stackoverflow help pages].<br />
<br />
[[Asking for Help]]<br />
<br />
[[Minimal, Verifiable, Complete example]]<br />
<br />
== Super Computing Wales Guides ==<br />
<br />
[[SCW Introduction]]<br />
<br />
[[SCW Training Materials]]<br />
<br />
[[SCW Access Procedures]]<br />
<br />
<br />
== IBERS HPC Guides ==<br />
<br />
'''Overview'''<br />
<br />
[[Bert and Ernie - An Overview]]<br />
<br />
[[Nodes, Cores, Slots]]<br />
<br />
[[Scheduling and queuing]]<br />
<br />
[[Getting an account]]<br />
<br />
'''Using the HPC'''<br />
<br />
[[Quick start]]<br />
<br />
[[Your disk space]]<br />
<br />
[[Module Environment]]<br />
<br />
[[Submitting your job using SGE]]<br />
<br />
[[Complex submissions]]<br />
<br />
[[Monitoring your jobs]]<br />
<br />
[[Available Software]]<br />
<br />
[[UNIX Graphical interface]]<br />
<br />
[[Array jobs]]<br />
<br />
[[ABySS]]<br />
<br />
[[Updated Scheduler]]<br />
<br />
[[WinSCP from external]]<br />
<br />
== Tools to access the HPC (Windows) ==<br />
<br />
[[Putty]]<br />
<br />
[[MobaXterm]]<br />
<br />
[[Filezilla]]<br />
<br />
== Slides and Talks ==<br />
<br />
[[File:Sge queue changes.pdf]]<br />
<br />
[[File:Containers.pdf]]<br />
<br />
== Bioinformatics Working Group related ==<br />
<br />
[[Meeting minutes]]<br />
<br />
== Special HPC Workflows ==<br />
<br />
[[Matlab]]<br />
<br />
[[Blast]]<br />
<br />
[[Active Perl]]<br />
<br />
[[SPRINT]]<br />
<br />
[[Progressive Cactus]]<br />
<br />
[[OrthoMCL]]<br />
<br />
[[Diamond Blast]]<br />
<br />
[[Singularity Containers]]<br />
<br />
== Other Services ==<br />
<br />
[[Blast2go]]<br />
<br />
== Best Practice ==<br />
<br />
[[Running BLAST optimally]]<br />
<br />
[[Use scratch space]]<br />
<br />
== Tutorials ==<br />
<br />
[[RNA Seq on HPC]]<br />
<br />
[[RNA Seq on HPC (using Trinity)]]<br />
<br />
[[Genome Annotation]]<br />
<br />
== Tips ==<br />
<br />
[[Splitting Multifastas]]<br />
<br />
== Workarounds ==<br />
<br />
[[Java memory allocation issues]]<br />
<br />
[[convert_fasta_to_1l_fasta.sh]]<br />
<br />
[[Your software/script uses threads and works locally, but crashes if run under the SGE's queue]]<br />
<br />
[[Intermittent login failures on the HPC and some VMs]]</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Main_Page&diff=54097Main Page2018-08-03T17:02:38Z<p>Ibers-admin: /* Slides and Talks */</p>
<hr />
<div>[[File:IBERS-logo-Bi.jpg]]<br />
<br />
<br />
Welcome to the HPC Wiki for the [http://bioinformatics.ibers.aber.ac.uk Bioinformatics] research group at [http://www.aber.ac.uk Aberystwyth University]. The aim of this wiki is to allow researchers in the field of Bioinformatics in IBERS at Aberystwyth to document common tasks using the High Performance Computing cluster.<br />
<br />
== Editing this Wiki ==<br />
<br />
Please email ibers-cs@aber.ac.uk to ask for access to edit this wiki.<br />
<br />
== Troubleshooting ==<br />
<br />
Please read these sections if you are having trouble. There is a chance that you can resolve the problem on your own and ask for help more effectively otherwise. Some (most) of these ideas are shamelessly taken from the [https://stackoverflow.com/help/ stackoverflow help pages].<br />
<br />
[[Asking for Help]]<br />
<br />
[[Minimal, Verifiable, Complete example]]<br />
<br />
== Super Computing Wales Guides ==<br />
<br />
[[SCW Introduction]]<br />
<br />
[[SCW Training Materials]]<br />
<br />
[[SCW Access Procedures]]<br />
<br />
<br />
== IBERS HPC Guides ==<br />
<br />
'''Overview'''<br />
<br />
[[Bert and Ernie - An Overview]]<br />
<br />
[[Nodes, Cores, Slots]]<br />
<br />
[[Scheduling and queuing]]<br />
<br />
[[Getting an account]]<br />
<br />
'''Using the HPC'''<br />
<br />
[[Quick start]]<br />
<br />
[[Your disk space]]<br />
<br />
[[Module Environment]]<br />
<br />
[[Submitting your job using SGE]]<br />
<br />
[[Complex submissions]]<br />
<br />
[[Monitoring your jobs]]<br />
<br />
[[Available Software]]<br />
<br />
[[UNIX Graphical interface]]<br />
<br />
[[Array jobs]]<br />
<br />
[[ABySS]]<br />
<br />
[[Updated Scheduler]]<br />
<br />
[[WinSCP from external]]<br />
<br />
== Tools to access the HPC (Windows) ==<br />
<br />
[[Putty]]<br />
<br />
[[MobaXterm]]<br />
<br />
[[Filezilla]]<br />
<br />
== Slides and Talks ==<br />
<br />
[[File:Sge queue changes.pdf]]<br />
<br />
[[File:Containers.pdf]]<br />
<br />
== Bioinformatics Working Group related ==<br />
<br />
[[Meeting minutes]]<br />
<br />
== Special HPC Workflows ==<br />
<br />
[[Matlab]]<br />
<br />
[[Blast]]<br />
<br />
[[Active Perl]]<br />
<br />
[[SPRINT]]<br />
<br />
[[Progressive Cactus]]<br />
<br />
[[OrthoMCL]]<br />
<br />
[[Diamond Blast]]<br />
<br />
== Other Services ==<br />
<br />
[[Blast2go]]<br />
<br />
== Best Practice ==<br />
<br />
[[Running BLAST optimally]]<br />
<br />
[[Use scratch space]]<br />
<br />
== Tutorials ==<br />
<br />
[[RNA Seq on HPC]]<br />
<br />
[[RNA Seq on HPC (using Trinity)]]<br />
<br />
[[Genome Annotation]]<br />
<br />
== Tips ==<br />
<br />
[[Splitting Multifastas]]<br />
<br />
== Workarounds ==<br />
<br />
[[Java memory allocation issues]]<br />
<br />
[[convert_fasta_to_1l_fasta.sh]]<br />
<br />
[[Your software/script uses threads and works locally, but crashes if run under the SGE's queue]]<br />
<br />
[[Intermittent login failures on the HPC and some VMs]]</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Main_Page&diff=54096Main Page2018-08-03T17:02:15Z<p>Ibers-admin: /* Slides and Talks */</p>
<hr />
<div>[[File:IBERS-logo-Bi.jpg]]<br />
<br />
<br />
Welcome to the HPC Wiki for the [http://bioinformatics.ibers.aber.ac.uk Bioinformatics] research group at [http://www.aber.ac.uk Aberystwyth University]. The aim of this wiki is to allow researchers in the field of Bioinformatics in IBERS at Aberystwyth to document common tasks using the High Performance Computing cluster.<br />
<br />
== Editing this Wiki ==<br />
<br />
Please email ibers-cs@aber.ac.uk to ask for access to edit this wiki.<br />
<br />
== Troubleshooting ==<br />
<br />
Please read these sections if you are having trouble. There is a chance that you can resolve the problem on your own and ask for help more effectively otherwise. Some (most) of these ideas are shamelessly taken from the [https://stackoverflow.com/help/ stackoverflow help pages].<br />
<br />
[[Asking for Help]]<br />
<br />
[[Minimal, Verifiable, Complete example]]<br />
<br />
== Super Computing Wales Guides ==<br />
<br />
[[SCW Introduction]]<br />
<br />
[[SCW Training Materials]]<br />
<br />
[[SCW Access Procedures]]<br />
<br />
<br />
== IBERS HPC Guides ==<br />
<br />
'''Overview'''<br />
<br />
[[Bert and Ernie - An Overview]]<br />
<br />
[[Nodes, Cores, Slots]]<br />
<br />
[[Scheduling and queuing]]<br />
<br />
[[Getting an account]]<br />
<br />
'''Using the HPC'''<br />
<br />
[[Quick start]]<br />
<br />
[[Your disk space]]<br />
<br />
[[Module Environment]]<br />
<br />
[[Submitting your job using SGE]]<br />
<br />
[[Complex submissions]]<br />
<br />
[[Monitoring your jobs]]<br />
<br />
[[Available Software]]<br />
<br />
[[UNIX Graphical interface]]<br />
<br />
[[Array jobs]]<br />
<br />
[[ABySS]]<br />
<br />
[[Updated Scheduler]]<br />
<br />
[[WinSCP from external]]<br />
<br />
== Tools to access the HPC (Windows) ==<br />
<br />
[[Putty]]<br />
<br />
[[MobaXterm]]<br />
<br />
[[Filezilla]]<br />
<br />
== Slides and Talks ==<br />
<br />
[[File:Sge queue changes.pdf]]<br />
[[File:Containers.pdf]]<br />
<br />
== Bioinformatics Working Group related ==<br />
<br />
[[Meeting minutes]]<br />
<br />
== Special HPC Workflows ==<br />
<br />
[[Matlab]]<br />
<br />
[[Blast]]<br />
<br />
[[Active Perl]]<br />
<br />
[[SPRINT]]<br />
<br />
[[Progressive Cactus]]<br />
<br />
[[OrthoMCL]]<br />
<br />
[[Diamond Blast]]<br />
<br />
== Other Services ==<br />
<br />
[[Blast2go]]<br />
<br />
== Best Practice ==<br />
<br />
[[Running BLAST optimally]]<br />
<br />
[[Use scratch space]]<br />
<br />
== Tutorials ==<br />
<br />
[[RNA Seq on HPC]]<br />
<br />
[[RNA Seq on HPC (using Trinity)]]<br />
<br />
[[Genome Annotation]]<br />
<br />
== Tips ==<br />
<br />
[[Splitting Multifastas]]<br />
<br />
== Workarounds ==<br />
<br />
[[Java memory allocation issues]]<br />
<br />
[[convert_fasta_to_1l_fasta.sh]]<br />
<br />
[[Your software/script uses threads and works locally, but crashes if run under the SGE's queue]]<br />
<br />
[[Intermittent login failures on the HPC and some VMs]]</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=File:Containers.pdf&diff=54095File:Containers.pdf2018-08-03T17:01:51Z<p>Ibers-admin: Introduction to containers</p>
<hr />
<div>Introduction to containers</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=SCW_Training_Materials&diff=54094SCW Training Materials2018-07-05T15:24:06Z<p>Ibers-admin: </p>
<hr />
<div>* HPC Wales/Super Computing Wales Support pages - http://portal.supercomputing.wales<br />
* Super Computing Wales tutorial - https://supercomputingwales.github.io/SCW-tutorial/</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Intermittent_login_failures_on_the_HPC_and_some_VMs&diff=54093Intermittent login failures on the HPC and some VMs2018-04-12T15:39:33Z<p>Ibers-admin: </p>
<hr />
<div>This issue has been affecting users of Bert, the repository and some virtual machines in early 2018. <br />
<br />
'''Update - April 12th 2018, Information Services say the problem is now fixed. You should not need to do any of the following anymore. If you deployed the permanent fix for windows, please remove it by following the instructions in the "Removing the entry" section.'''<br />
<br />
= Background =<br />
<br />
A number of you have reported intermittent problems logging into bert, or having your sessions disconnected. This shows up as timeouts when trying to log in, or "network error: Software caused connection abort" messages in PuTTY when a working login gets dropped. The cause has been identified as a problem with a network switch in the Visualisation Centre that serves the HPC, the Repository and some VMs. Information Services are aware of the problem, have been in discussion with the switch manufacturer, and hope to have a fix soon.<br />
<br />
= Workarounds =<br />
<br />
There are two workarounds for this problem, one temporary and one (semi) permanent. <br />
These fixes will NOT work on wireless/eduroam connections, VPN connections, or any other connections from outside the IBERS network.<br />
<br />
== Temporary fix ==<br />
<br />
* Login to central.aber.ac.uk<br />
* Login to bert.ibers.aber.ac.uk<br />
* ping the IP address of your computer (which will be of the format 144.124.1XX.XXX)<br />
* Login directly to Bert<br />
<br />
This will probably time out at some point; depending on your OS it might only last a few minutes. Leaving the login via central open with ping running should prevent this. If you're having problems with a VM, then log in to the VM instead of bert.<br />
<br />
<br />
== Permanent(ish) fixes: ==<br />
<br />
These are for bert only; if you've got a problem with a VM host then you'll have to change the addresses. Email ibers-cs@aber.ac.uk if you need help finding them.<br />
<br />
=== Linux/Mac ===<br />
<br />
* Open a terminal and run the command:<br />
sudo arp -s bert.ibers.aber.ac.uk d4:be:d9:b3:b7:45<br />
<br />
This only lasts until you reboot.<br />
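<br />
Note that the same MAC address is written colon-separated for the Linux/Mac arp command but dash-separated for the Windows commands. A one-liner to convert between the two notations:<br />
<br />
```shell
# Convert a colon-separated MAC (Linux/Mac style) to the dash-separated
# form used by the Windows netsh/arp commands:
mac_linux="d4:be:d9:b3:b7:45"
mac_windows="$(printf '%s' "$mac_linux" | tr ':' '-')"
echo "$mac_windows"   # d4-be-d9-b3-b7-45
```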
<br />
<br />
=== Windows ===<br />
<br />
==== Permanent Fix ====<br />
<br />
* Open an administrator command prompt (https://www.howtogeek.com/194041/how-to-open-the-command-prompt-as-administrator-in-windows-8.1/)<br />
* Type the command: <br />
netsh -c interface ipv4 add neighbors "Local Area Connection" "144.124.106.138" "d4-be-d9-b3-b7-45" store=persistent<br />
* On some systems the network interface won't be called "Local Area Connection". Go to the "Network Connections" page from the "Network and Internet" section in Control Panel or run the "ipconfig" command to find the name of your network interface. It seems that on Windows 8/10 it will just be called "Ethernet" instead of "Local Area Connection".<br />
* This should permanently fix the problem.<br />
<br />
===== Removing the entry =====<br />
<br />
When IS have fixed the network this entry can be removed by doing:<br />
<br />
netsh -c interface ipv4 delete neighbors "Local Area Connection" "144.124.106.138"<br />
<br />
<br />
==== Temporary Alternative Method ====<br />
<br />
If the method above fails you can temporarily add an entry by running the command:<br />
<br />
arp -s bert.ibers.aber.ac.uk d4-be-d9-b3-b7-45<br />
<br />
This will be reset when your system is rebooted.</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Intermittent_login_failures_on_the_HPC_and_some_VMs&diff=54092Intermittent login failures on the HPC and some VMs2018-03-23T17:36:47Z<p>Ibers-admin: /* Windows */</p>
<hr />
<div>This issue has been affecting users of Bert, the repository and some virtual machines in early 2018. <br />
<br />
= Background =<br />
<br />
A number of you have reported intermittent problems logging into bert, or having your sessions disconnected. This shows up as timeouts when trying to log in, or "network error: Software caused connection abort" messages in PuTTY when a working login gets dropped. The cause has been identified as a problem with a network switch in the Visualisation Centre that serves the HPC, the Repository and some VMs. Information Services are aware of the problem, have been in discussion with the switch manufacturer, and hope to have a fix soon.<br />
<br />
= Workarounds =<br />
<br />
There are two workarounds for this problem, one temporary and one (semi) permanent. <br />
These fixes will NOT work on wireless/eduroam connections, VPN connections, or any other connections from outside the IBERS network.<br />
<br />
== Temporary fix ==<br />
<br />
* Login to central.aber.ac.uk<br />
* Login to bert.ibers.aber.ac.uk<br />
* ping the IP address of your computer (which will be of the format 144.124.1XX.XXX)<br />
* Login directly to Bert<br />
<br />
This will probably time out at some point; depending on your OS it might only last a few minutes. Leaving the login via central open with ping running should prevent this. If you're having problems with a VM, then log in to the VM instead of bert.<br />
<br />
<br />
== Permanent(ish) fixes: ==<br />
<br />
These are for bert only; if you've got a problem with a VM host then you'll have to change the addresses. Email ibers-cs@aber.ac.uk if you need help finding them.<br />
<br />
=== Linux/Mac ===<br />
<br />
* Open a terminal and run the command:<br />
sudo arp -s bert.ibers.aber.ac.uk d4:be:d9:b3:b7:45<br />
<br />
This only lasts until you reboot.<br />
<br />
<br />
=== Windows ===<br />
<br />
==== Permanent Fix ====<br />
<br />
* Open an administrator command prompt (https://www.howtogeek.com/194041/how-to-open-the-command-prompt-as-administrator-in-windows-8.1/)<br />
* Type the command: <br />
netsh -c interface ipv4 add neighbors "Local Area Connection" "144.124.106.138" "d4-be-d9-b3-b7-45" store=persistent<br />
* On some systems the network interface won't be called "Local Area Connection". Go to the "Network Connections" page from the "Network and Internet" section in Control Panel or run the "ipconfig" command to find the name of your network interface. It seems that on Windows 8/10 it will just be called "Ethernet" instead of "Local Area Connection".<br />
* This should permanently fix the problem.<br />
<br />
===== Removing the entry =====<br />
<br />
When IS have fixed the network this entry can be removed by doing:<br />
<br />
netsh -c interface ipv4 delete neighbors "Local Area Connection" "144.124.106.138"<br />
<br />
<br />
==== Temporary Alternative Method ====<br />
<br />
If the method above fails you can temporarily add an entry by running the command:<br />
<br />
arp -s bert.ibers.aber.ac.uk d4-be-d9-b3-b7-45<br />
<br />
This will be reset when your system is rebooted.</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Intermittent_login_failures_on_the_HPC_and_some_VMs&diff=54091Intermittent login failures on the HPC and some VMs2018-03-23T17:35:55Z<p>Ibers-admin: /* Temporary Alternative Method = */</p>
<hr />
<div>This issue has been affecting users of Bert, the repository and some virtual machines in early 2018. <br />
<br />
= Background =<br />
<br />
A number of you have reported intermittent problems logging into bert, or having your sessions disconnected. This shows up as timeouts when trying to log in, or "network error: Software caused connection abort" messages in PuTTY when a working login gets dropped. The cause has been identified as a problem with a network switch in the Visualisation Centre that serves the HPC, the Repository and some VMs. Information Services are aware of the problem, have been in discussion with the switch manufacturer, and hope to have a fix soon.<br />
<br />
= Workarounds =<br />
<br />
There are two workarounds for this problem, one temporary and one (semi) permanent. <br />
These fixes will NOT work on wireless/eduroam connections, VPN connections, or any other connections from outside the IBERS network.<br />
<br />
== Temporary fix ==<br />
<br />
* Login to central.aber.ac.uk<br />
* Login to bert.ibers.aber.ac.uk<br />
* ping the IP address of your computer (which will be of the format 144.124.1XX.XXX)<br />
* Login directly to Bert<br />
<br />
This will probably time out at some point; depending on your OS it might only last a few minutes. Leaving the login via central open with ping running should prevent this. If you're having problems with a VM, then log in to the VM instead of bert.<br />
<br />
<br />
== Permanent(ish) fixes: ==<br />
<br />
These are for bert only; if you've got a problem with a VM host then you'll have to change the addresses. Email ibers-cs@aber.ac.uk if you need help finding them.<br />
<br />
=== Linux/Mac ===<br />
<br />
* Open a terminal and run the command:<br />
sudo arp -s bert.ibers.aber.ac.uk d4:be:d9:b3:b7:45<br />
<br />
This only lasts until you reboot.<br />
<br />
<br />
=== Windows ===<br />
<br />
* Open an administrator command prompt (https://www.howtogeek.com/194041/how-to-open-the-command-prompt-as-administrator-in-windows-8.1/)<br />
* Type the command: <br />
netsh -c interface ipv4 add neighbors "Local Area Connection" "144.124.106.138" "d4-be-d9-b3-b7-45" store=persistent<br />
* On some systems the network interface won't be called "Local Area Connection". Go to the "Network Connections" page from the "Network and Internet" section in Control Panel or run the "ipconfig" command to find the name of your network interface. It seems that on Windows 8/10 it will just be called "Ethernet" instead of "Local Area Connection".<br />
* This should permanently fix the problem.<br />
<br />
==== Temporary Alternative Method ====<br />
<br />
If the method above fails you can temporarily add an entry by running the command:<br />
<br />
arp -s bert.ibers.aber.ac.uk d4-be-d9-b3-b7-45<br />
<br />
This will be reset when your system is rebooted.<br />
<br />
==== Removing the entry ====<br />
<br />
When IS have fixed the network this entry can be removed by doing:<br />
<br />
netsh -c interface ipv4 delete neighbors "Local Area Connection" "144.124.106.138"</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Intermittent_login_failures_on_the_HPC_and_some_VMs&diff=54090Intermittent login failures on the HPC and some VMs2018-03-23T17:35:46Z<p>Ibers-admin: /* Windows */</p>
<hr />
<div>This issue has been affecting users of Bert, the repository and some virtual machines in early 2018. <br />
<br />
= Background =<br />
<br />
A number of you have reported intermittent problems logging into bert, or having your sessions disconnected. This shows up as timeouts when trying to log in, or "network error: Software caused connection abort" messages in PuTTY when a working login gets dropped. The cause has been identified as a problem with a network switch in the Visualisation Centre that serves the HPC, the Repository and some VMs. Information Services are aware of the problem, have been in discussion with the switch manufacturer, and hope to have a fix soon.<br />
<br />
= Workarounds =<br />
<br />
There are two workarounds for this problem, one temporary and one (semi) permanent. <br />
These fixes will NOT work on wireless/eduroam connections, VPN connections, or any other connections from outside the IBERS network.<br />
<br />
== Temporary fix ==<br />
<br />
* Login to central.aber.ac.uk<br />
* Login to bert.ibers.aber.ac.uk<br />
* ping the IP address of your computer (which will be of the format 144.124.1XX.XXX)<br />
* Login directly to Bert<br />
<br />
This will probably time out at some point; depending on your OS it might only last a few minutes. Leaving the login via central open with ping running should prevent this. If you're having problems with a VM, then log in to the VM instead of bert.<br />
<br />
<br />
== Permanent(ish) fixes: ==<br />
<br />
These are for bert only; if you've got a problem with a VM host then you'll have to change the addresses. Email ibers-cs@aber.ac.uk if you need help finding them.<br />
<br />
=== Linux/Mac ===<br />
<br />
* Open a terminal and run the command:<br />
sudo arp -s bert.ibers.aber.ac.uk d4:be:d9:b3:b7:45<br />
<br />
This only lasts until you reboot.<br />
<br />
<br />
=== Windows ===<br />
<br />
* Open an administrator command prompt (https://www.howtogeek.com/194041/how-to-open-the-command-prompt-as-administrator-in-windows-8.1/)<br />
* Type the command: <br />
netsh -c interface ipv4 add neighbors "Local Area Connection" "144.124.106.138" "d4-be-d9-b3-b7-45" store=persistent<br />
* On some systems the network interface won't be called "Local Area Connection". Go to the "Network Connections" page from the "Network and Internet" section in Control Panel or run the "ipconfig" command to find the name of your network interface. It seems that on Windows 8/10 it will just be called "Ethernet" instead of "Local Area Connection".<br />
* This should permanently fix the problem.<br />
<br />
==== Temporary Alternative Method ====<br />
<br />
If the method above fails you can temporarily add an entry by running the command:<br />
<br />
arp -s bert.ibers.aber.ac.uk d4-be-d9-b3-b7-45<br />
<br />
This will be reset when your system is rebooted.<br />
<br />
<br />
==== Removing the entry ====<br />
<br />
When IS have fixed the network this entry can be removed by doing:<br />
<br />
netsh -c interface ipv4 delete neighbors "Local Area Connection" "144.124.106.138"</div>Ibers-adminhttps://bioinformatics.ibers.aber.ac.uk/wiki/index.php?title=Intermittent_login_failures_on_the_HPC_and_some_VMs&diff=54089Intermittent login failures on the HPC and some VMs2018-03-23T16:58:02Z<p>Ibers-admin: /* Windows */</p>
<hr />
<div>This issue has been affecting users of Bert, the repository and some virtual machines in early 2018. <br />
<br />
= Background =<br />
<br />
A number of you have reported intermittent problems logging into bert, or having your sessions disconnected. This shows up as timeouts when trying to log in, or "network error: Software caused connection abort" messages in PuTTY when a working login gets dropped. The cause has been identified as a problem with a network switch in the Visualisation Centre that serves the HPC, the Repository and some VMs. Information Services are aware of the problem, have been in discussion with the switch manufacturer, and hope to have a fix soon.<br />
<br />
= Workarounds =<br />
<br />
There are two workarounds for this problem, one temporary and one (semi) permanent. <br />
These fixes will NOT work on wireless/eduroam connections, VPN connections, or any other connections from outside the IBERS network.<br />
<br />
== Temporary fix ==<br />
<br />
* Login to central.aber.ac.uk<br />
* Login to bert.ibers.aber.ac.uk<br />
* ping the IP address of your computer (which will be of the format 144.124.1XX.XXX)<br />
* Login directly to Bert<br />
<br />
This will probably time out at some point; depending on your OS it might only last a few minutes. Leaving the login via central open with ping running should prevent this. If you're having problems with a VM, then log in to the VM instead of bert.<br />
<br />
<br />
== Permanent(ish) fixes: ==<br />
<br />
These are for bert only; if you've got a problem with a VM host then you'll have to change the addresses. Email ibers-cs@aber.ac.uk if you need help finding them.<br />
<br />
=== Linux/Mac ===<br />
<br />
* Open a terminal and run the command:<br />
sudo arp -s bert.ibers.aber.ac.uk d4:be:d9:b3:b7:45<br />
<br />
This only lasts until you reboot.<br />
<br />
<br />
=== Windows ===<br />
<br />
* Open an administrator command prompt (https://www.howtogeek.com/194041/how-to-open-the-command-prompt-as-administrator-in-windows-8.1/)<br />
* Type the command: <br />
netsh -c interface ipv4 add neighbors "Local Area Connection" "144.124.106.138" "d4-be-d9-b3-b7-45" store=persistent<br />
* This should permanently fix the problem.<br />
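To confirm the entry was stored, you can list the neighbor cache for the interface. "Local Area Connection" is the usual interface name on older Windows versions; on newer systems it may be "Ethernet" instead, and you can check yours with `netsh interface ipv4 show interfaces`.<br />

```shell
netsh interface ipv4 show neighbors "Local Area Connection"
```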
<br />
<br />
==== Removing the entry ====<br />
<br />
When IS have fixed the network, this entry can be removed by running (again in an administrator command prompt):<br />
<br />
netsh -c interface ipv4 delete neighbors "Local Area Connection" "144.124.106.138"</div>