Where is everything
From genomewiki
Revision as of 17:59, 28 September 2016
- Behind the VPN:
  - 1 PB Hitachi pseudo-parallel-NFS file system, usually mounted as /pod/pstore and /pos/podstore
  - ceph file system: object store, 400 TB (soon >1 PB), ~60% full?, S3 API (bodo module)
  - CIRM-01: 64 GB RAM, 24 cores, for CIRM users; /pod/pstore is mounted via NFS
  - CIRM-01 parasol cluster (headnode: "podk"), only for CIRM users; /pod/pstore is mounted via NFS; not much local storage
    - 19 nodes, 32 cores and 256 GB RAM per node, /pod/pstore is mounted
  - stacker: 1.5 TB RAM, 160 cores, general use, /pod/pstore is mounted
  - bazaar: 256 GB RAM, 32 cores, general use
  - openstack cluster
    - 70 nodes, 32 cores and 256 GB RAM per node; no /pod, but openstack block storage is available
    - each node has 10 TB of local storage
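The ceph object store above is reachable through its S3 API. A minimal sketch of path-style S3 object addressing, as used by S3-compatible gateways such as Ceph's RADOS Gateway; the hostname `ceph-gw.example.edu` is a placeholder, since the real endpoint (and the bodo module's configuration) is not given on this page:

```python
# Build a path-style S3 object URL for an S3-compatible gateway.
# Path-style addressing puts the bucket in the path rather than the
# hostname, which avoids wildcard-DNS requirements on self-hosted setups.

def s3_object_url(endpoint: str, bucket: str, key: str) -> str:
    """Path-style addressing: https://<endpoint>/<bucket>/<key>."""
    return f"https://{endpoint}/{bucket}/{key}"

# Hypothetical endpoint, bucket, and key for illustration only:
print(s3_object_url("ceph-gw.example.edu", "mylab-data", "reads/sample1.fastq.gz"))
```

Ceph's gateway also supports virtual-hosted-style addressing (bucket as a subdomain), but path-style is the safer default for a self-hosted endpoint.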
- Before the VPN:
  - GPFS massively parallel file system: 1 PB, 30 file servers
  - hgwdev: 1 TB RAM, 64 cores
  - ku cluster: parasol
  - kolossus and juggernaut: 1 TB RAM and 64 cores each
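Several of the hosts above differ mainly in whether /pod/pstore is NFS-mounted, so it is worth checking before launching jobs. A stdlib-only sketch; nothing here is specific to these machines:

```python
import os

def storage_available(path: str = "/pod/pstore") -> bool:
    """True if the path is a live mount point, or at least an existing
    directory (for hosts where it sits below another mount)."""
    return os.path.ismount(path) or os.path.isdir(path)

# Prints True only on hosts where /pod/pstore is actually present.
print(storage_available())
```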