Where is everything

From genomewiki
Latest revision as of 20:38, 28 November 2016

  • Behind the VPN:
    • 1 PB Hitachi pseudo-parallel NFS file system, usually mounted as /pod/pstore (400 TB) and /pos/podstore
    • ceph file system: object store, currently 400 TB, soon >1 PB, ~60% full; S3 API (bodo module)
    • stacker: 1.5 TB RAM, 160 cores, general use, /pod/pstore is mounted
    • bazaar: 256 GB RAM, 32 cores, Ubuntu operating system, general use
    • openstack cluster
      • managed via http://podcloud.pod/
      • 2240 cores (70 nodes x 32 cores), 256 GB RAM per node, no /pod mount; openstack block store available
      • each node has 10 TB local storage
      • CIRM-01: 64GB RAM, 24 cores, /pod/pstore is mounted as NFS, only accessible to CIRM group
    • traditional clusters:
      • parasol cluster: 608 cores (19 nodes x 32 cores), 256 GB RAM per node, headnode "podk", /pod/pstore mounted as NFS, ~2 TB local hard disk
      • SGI cluster: 1600 cores (50 nodes x 32 cores), 256 GB RAM per node, headnode "podk", /pod/pstore mounted as NFS, local hard disk size unknown
  • Before the VPN:
    • GPFS massively parallel file system: 1.3 PB, 30 file servers, mounted on /hive everywhere
    • hgwdev: 1 TB RAM, 64 cores, ~1 TB hard disk on /scratch, small NFS volume as /cluster/software
    • parasol "ku" cluster: 960 cores (30 nodes x 32 cores), 256 GB RAM per node, 2TB of local /scratch
    • kolossus and juggernaut: like hgwdev, 1 TB RAM and 64 cores each
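
The ceph object store above is reached through its S3 API. A minimal, hypothetical sketch of addressing objects on such a store — the endpoint host "ceph.pod", bucket "mylab-data", and key below are invented placeholders; the real endpoint and credentials come from the cluster admins (the page mentions a "bodo" module):

```python
# Hedged sketch: path-style S3 URLs, as ceph radosgw typically serves them.
# ENDPOINT and BUCKET are invented placeholders, not real cluster names.
from urllib.parse import urlunsplit

ENDPOINT = "ceph.pod"    # placeholder S3 endpoint behind the VPN
BUCKET = "mylab-data"    # placeholder bucket name

def object_url(key: str) -> str:
    """Build the path-style URL for one object in the bucket."""
    return urlunsplit(("https", ENDPOINT, f"/{BUCKET}/{key}", "", ""))

print(object_url("reads/sample1.fastq.gz"))
# -> https://ceph.pod/mylab-data/reads/sample1.fastq.gz

# With boto3 installed, the same endpoint works with the standard S3 client:
#   import boto3
#   s3 = boto3.client("s3", endpoint_url=f"https://{ENDPOINT}")
#   s3.list_objects_v2(Bucket=BUCKET)
```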
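The parasol clusters above take a plain-text jobList (one shell command per line), submitted on the headnode with `para create` and `para push`. A minimal sketch of generating such a file — the program name "myJob" and the .fa input files are hypothetical placeholders:

```python
# Sketch: write a parasol jobList, one command per line.
# "myJob" and the .fa inputs are invented for illustration.
from pathlib import Path

in_dir, out_dir = Path("in"), Path("out")
in_dir.mkdir(exist_ok=True)
out_dir.mkdir(exist_ok=True)
for name in ("a.fa", "b.fa"):           # stand-in input files
    (in_dir / name).touch()

lines = [f"myJob {f} out/{f.stem}.out" for f in sorted(in_dir.glob("*.fa"))]
Path("jobList").write_text("\n".join(lines) + "\n")
print(Path("jobList").read_text(), end="")
# On the headnode ("podk" above):
#   para create jobList && para push   # register and start the batch
#   para check                         # monitor; `para time` for stats
```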