Where is everything

From genomewiki

Revision as of 17:47, 28 September 2016

  • Behind the VPN:
    • 1 PB Hitachi pseudo-parallel-NFS file system, usually mounted as /pod/pstore and /pod/podstore
    • Ceph file system: object store, currently 400 TB (soon >1 PB), roughly 60% full, S3 API (bodo module)
    • CIRM-01: 64GB RAM, 24 cores, for CIRM users, /pod/pstore is mounted as NFS
    • CIRM-01 cluster (headnode: "podk"), only for CIRM users, /pod/pstore is mounted as NFS, limited local storage
      • 19 nodes, 32 cores, 256 GB per node, /pod/pstore is mounted
    • stacker: 1.5TB RAM, 160 cores, general use, /pod/pstore is mounted
    • bazaar: 256GB, 32 cores, general use
    • openstack cluster
      • 70 nodes × 32 cores, 256 GB per node; no /pod, but OpenStack block store available
      • each node has 10 TB local storage
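The machines behind the VPN differ mainly in whether /pod/pstore is NFS-mounted, so it is worth verifying before launching a job. A minimal sketch (stdlib only; the mount path comes from the list above, and the device name in the sample is hypothetical) that checks a mount point by parsing /proc/mounts-style text:

```python
def is_mounted(mount_point, mounts_text):
    """Return True if mount_point appears as a mount target.

    mounts_text is the content of a /proc/mounts-style file, where
    each line is: device mountpoint fstype options dump pass
    """
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[1] == mount_point:
            return True
    return False

if __name__ == "__main__":
    # On a real host you would read the live table:
    #   mounts_text = open("/proc/mounts").read()
    sample = "pod-server:/pstore /pod/pstore nfs rw,hard 0 0"
    print(is_mounted("/pod/pstore", sample))
```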
  • Before the VPN:
    • GPFS massively parallel file system: 1 PB, 30 file servers
    • hgwdev: 1 TB RAM, 64 cores
    • ku cluster: parasol batch system
    • kolossus and juggernaut: 1 TB RAM each, 64 cores each
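The ku cluster runs parasol, which takes a joblist file of one command per job; `{check ...}` clauses let parasol verify that inputs and outputs exist. A hedged sketch of generating such a joblist (program name `myProg` and the file layout are hypothetical; adapt to your batch):

```python
def make_joblist(program, inputs, out_dir):
    """Return parasol joblist lines, one job per input file.

    Each line uses {check in exists ...} / {check out exists ...}
    clauses so parasol can validate the job's files.
    """
    lines = []
    for i, inp in enumerate(inputs):
        out = f"{out_dir}/part{i}.out"
        lines.append(
            f"{program} {{check in exists {inp}}} {{check out exists {out}}}"
        )
    return lines

if __name__ == "__main__":
    for line in make_joblist("myProg", ["in0.txt", "in1.txt"], "out"):
        print(line)
```

With the joblist written to a file, the usual pattern on the headnode is `para create jobList` followed by `para push`, then `para check` to monitor progress.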