Hey everyone,
RC has installed CUDA 7.5 on all of the aagk80 GPU nodes; it can be found
at /usr/local/cuda-7.5 on the nodes themselves. If you want to compile your
program and link against these libraries, you will need to run an
interactive job to get onto one of the nodes. Unfortunately, it appears that
there are issues doing so from the rclogin nodes, so please ssh to
rcnx01 before trying to launch an interactive session. Once you log
onto Odyssey, your commands to start an interactive session would be:
ssh rcnx01
srun -p aagk80 --pty --mem 500 -t 0-06:00 /bin/bash
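Once the interactive shell opens on a node, a compile-and-link session might look like the sketch below. This is just one way to do it, assuming the /usr/local/cuda-7.5 location above; the source file names (vector_add.cu, my_prog.c) are placeholders, not real files on the cluster.

```shell
# Put the CUDA 7.5 toolchain and runtime libraries on your paths.
export PATH=/usr/local/cuda-7.5/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-7.5/lib64:$LD_LIBRARY_PATH

# Compile a CUDA source file with nvcc...
nvcc -o vector_add vector_add.cu

# ...or link a host-side C program against the CUDA runtime directly.
gcc -o my_prog my_prog.c \
    -I/usr/local/cuda-7.5/include \
    -L/usr/local/cuda-7.5/lib64 -lcudart
```

Setting LD_LIBRARY_PATH matters at run time too, so if you submit the resulting binary as a batch job, export it in the job script as well.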
If you have any issues please let me know, and I'll do my best to help.
Sam
On Fri, Sep 11, 2015 at 11:01 AM, Alan Aspuru-Guzik <alan(a)aspuru.com> wrote:
Dear all, Info below! Let's surprise RC and get
this thing computing ASAP
Okay, aagk80 is ready for use:
[root@holy-slurm01 log]# scontrol show partition aagk80
PartitionName=aagk80
AllowGroups=rc_admin,aspuru-guzik_lab AllowAccounts=ALL AllowQos=ALL
AllocNodes=ALL Default=NO
DefaultTime=00:10:00 DisableRootJobs=NO GraceTime=0 Hidden=NO
MaxNodes=UNLIMITED MaxTime=UNLIMITED MinNodes=1 LLN=NO
MaxCPUsPerNode=UNLIMITED
Nodes=aagk80gpu[01-64]
Priority=10 RootOnly=NO ReqResv=NO Shared=NO PreemptMode=REQUEUE
State=UP TotalCPUs=768 TotalNodes=64 SelectTypeParameters=N/A
DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED
Let me know if you have any questions and we will let you know when we are
ready to try again with HPL. In the meantime enjoy.
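For anyone who wants to kick the tires right away, a minimal batch script for the new partition might look like this sketch. The --gres=gpu:1 line is an assumption on my part; the partition settings above don't show whether GPUs must be requested explicitly, so check with RC if it is rejected.

```shell
#!/bin/bash
#SBATCH -p aagk80          # the new partition shown above
#SBATCH -t 0-01:00         # one hour; DefaultTime is only 10 minutes
#SBATCH --mem 500          # memory in MB
#SBATCH --gres=gpu:1       # assumption: GPUs may need an explicit request

# Sanity check that the job actually landed on a GPU node.
nvidia-smi
```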
--
Alán Aspuru-Guzik | Professor of Chemistry and Chemical Biology
Harvard University | 12 Oxford Street, Room M113 | Cambridge, MA 02138
(617)-384-8188 |
http://aspuru.chem.harvard.edu |
http://about.me/aspuru
_____________________________________________
Aspuru-list mailing list
Aspuru-list(a)lists.fas.harvard.edu
https://lists.fas.harvard.edu/mailman/listinfo/aspuru-list