Where is everything
Revision as of 18:25, 28 September 2016
* Behind the VPN:
** 1 PB Hitachi pseudo-parallel-NFS file system, usually mounted as /pod/pstore (400 TB) and /pos/podstore
** ceph file system: object store, 400 TB and soon to be >1 PB, ~60% full?, S3 API via the boto module (see the sketch after this list)
** CIRM-01: 64 GB RAM, 24 cores, for CIRM users, /pod/pstore is mounted as NFS
** CIRM-01 parasol cluster (headnode: "podk"), only for CIRM users, /pod/pstore is mounted as NFS, 2 TB local hard disk?
*** 19 nodes, 32 cores and 256 GB RAM per node, /pod/pstore is mounted
** stacker: 1.5 TB RAM, 160 cores, general use, /pod/pstore is mounted
** bazaar: 256 GB RAM, 32 cores, general use
** openstack cluster:
*** 70 nodes, 32 cores and 256 GB RAM per node, no /pod, openstack block store available
*** each node has 10 TB local storage
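
Since the ceph store speaks the standard S3 protocol, any S3 client can talk to it. Below is a minimal sketch using boto3 (the successor of the boto module named above); the gateway URL, bucket name and credentials are placeholders, not the real cluster settings, so ask the admins for those:

<pre>
# Minimal sketch: talking to the ceph object store through its S3 API.
# The endpoint URL, bucket name and credentials are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ceph-gateway.example.edu",  # hypothetical gateway
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# List the buckets you own on the object store.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

# Upload and fetch a file, as with any S3-compatible store.
s3.upload_file("results.tab", "mybucket", "results.tab")
s3.download_file("mybucket", "results.tab", "/tmp/results.tab")
</pre>
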
* Before the VPN:
** GPFS massively parallel file system: 1.3 PB, 30 file servers, mounted on /hive
** hgwdev: 1 TB RAM, 64 cores, ~1 TB hard disk on /scratch, small NFS volume as /cluster/software
** parasol "ku" cluster: 30 nodes, 256 GB RAM each (see the parasol sketch after this list)
** kolossus and juggernaut: 1 TB RAM each, 64 cores each
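
Both the podk and ku clusters are driven by parasol, the UCSC batch system, through the para wrapper on the headnode. A minimal sketch of a batch submission follows; the myTool command and the part*.fa inputs are made-up examples, while para create/push/time and the {check out exists ...} job-list syntax are the real parasol conventions:

<pre>
# Minimal sketch: submitting a batch to a parasol cluster (podk or ku).
# Run on the cluster headnode. "myTool" and the part*.fa files are
# hypothetical; replace them with your own command and inputs.
import subprocess

# One command per line; "{check out exists FILE}" tells parasol how to
# verify that a job actually produced its output file.
jobs = [
    f"myTool part{i:03d}.fa {{check out exists part{i:03d}.out}}"
    for i in range(100)
]
with open("jobList", "w") as fh:
    fh.write("\n".join(jobs) + "\n")

subprocess.run(["para", "create", "jobList"], check=True)  # register the batch
subprocess.run(["para", "push"], check=True)               # submit all jobs
subprocess.run(["para", "time"], check=True)               # show run statistics
</pre>
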