
This page provides

  • a general hardware overview
  • information and formal rules on how to get access to the system
  • hints you should read before starting to work

Hardware and Architecture

The HLRS DGRID/BW-Grid cluster platform consists of two frontend nodes for interactive access (frbw / frwab) and several compute nodes for the execution of parallel programs.

Compute nodes with three different CPU types are installed:

  • IBM Cell
  • Intel Xeon 5150 (Woodcrest)
  • Intel Xeon E5440 (Harpertown)


  • Operating system: Scientific Linux 5.0 on the Intel-based nodes
  • Operating system: Fedora Core 8 on the Cell-based nodes
  • Batch system: Torque/Maui
  • Node-node interconnect: InfiniBand + GigE
  • Global disk: 100 TB
  • Filesystem: GPFS
  • MPI: Open MPI
  • Compilers: Intel, GCC, Java

Short overview of installed nodes
Function       | Name                          | CPU                            | Sockets | Memory | Disk   | PBS properties | Interconnect
---------------+-------------------------------+--------------------------------+---------+--------+--------+----------------+----------------
Compute nodes  | n010201 - n110414 (498 nodes) | Intel Xeon E5440, 2.83 GHz     | 2       | 16 GB  | -      | bwgrid         | GigE/InfiniBand
Compute nodes  | node01 - node28 (28 nodes)    | Intel Xeon 5150, 2.66 GHz      | 2       | 8 GB   | 160 GB | dgrid          | GigE/InfiniBand
Compute nodes  | cell01 - cell07 (7 nodes)     | Cell Broadband Engine, 3.2 GHz | -       | 1 GB   | 40 GB  | cell           | GigE/InfiniBand
Login nodes    | frbw / frwab                  | Intel Xeon E5440, 2.83 GHz     | 2       | 18 GB  | 900 GB | -              | GigE/InfiniBand
I/O servers    | data1 / data2                 | Intel Xeon 5130, 2.0 GHz       | -       | 16 GB  | 100 TB | -              | GigE/InfiniBand
Infrastructure (NTP, PBS, DNS, DHCP, FTP, GT4, ...) | some virtual Xen-based hosts | - | - | - | -   | -              | GigE

Figure: Hardware in the bwGRiD cluster



For an introduction to the access and usage of the BW-Grid cluster, see:


Grid User Certificate

Before applying for access to the resources of the D-Grid infrastructure, the user must own or obtain a Grid User Certificate. One possibility is to use a grid certificate of the DFN-Verein, which can be requested through the Grid RAs of the DFN-PKI members. The Forschungszentrum Jülich Grid Certificate page with its web interface for user certificates is one of several possibilities. Your local contact person has to certify the request (identity card needed). Members of the Universität Stuttgart might therefore prefer the DFN-PKI interface for users and administrators ("Schnittstelle für Nutzer und Administratoren - Zertifikate") for the Universität Stuttgart (RA_ID=123)

for user certificates, and should visit Thomas Beisel ((0711) 685-87220) at the HLRS for activation (make an appointment!).
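
Certificates exported from a browser usually come as a PKCS#12 file. As a minimal sketch, assuming the exported file is called usercert.p12 (the file name is a placeholder), it is typically converted for use with Globus-style middleware like this:

    # Assumption: the certificate was exported from the browser as usercert.p12.
    mkdir -p ~/.globus
    # Extract the public certificate (no private key).
    openssl pkcs12 -in usercert.p12 -clcerts -nokeys -out ~/.globus/usercert.pem
    # Extract the private key (you will be asked for a passphrase).
    openssl pkcs12 -in usercert.p12 -nocerts -out ~/.globus/userkey.pem
    # Grid middleware refuses private keys that are readable by others.
    chmod 400 ~/.globus/userkey.pem
    chmod 644 ~/.globus/usercert.pem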

VO membership

To get membership in an existing Virtual Organization (VO), such as BW-Grid or InGrid (a D-Grid project), it is necessary to register:

Access through Grid Middleware
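
GT4 is listed among the infrastructure services in the hardware table above. Purely as a hedged sketch, assuming a WS-GRAM service is reachable on the frontend (both the factory host frbw and the availability of WS-GRAM here are assumptions), a middleware-based test submission could look like:

    # Create a short-lived proxy from the certificate in ~/.globus.
    grid-proxy-init
    # Submit a trivial job through WS-GRAM; -s streams the job's stdout back.
    globusrun-ws -submit -F frbw -s -c /bin/hostname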

Direct Access

The only way to access frbw / frwab (the frontend nodes of the HLRS DGRID cluster) from outside is through ssh.
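
A minimal sketch of such a login (your user name is a placeholder; the fully qualified host name is omitted here):

    # Log in to one of the frontend nodes via ssh.
    ssh <username>@frbw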


The frontend nodes frbw and frwab
are intended as the single point of access to the entire cluster. Here you can set up your environment, move your data, edit and compile your programs, and create batch scripts. Any interactive usage of the frontend nodes that causes a high CPU/memory load (production runs) is NOT allowed.
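
Open MPI and the GCC and Intel compilers are installed (see the software list above). A minimal sketch of building an MPI program on a frontend node, assuming the Open MPI compiler wrapper is on the default PATH (the source file name is a placeholder):

    # Compile an MPI program with the Open MPI compiler wrapper.
    mpicc -O2 -o my_program my_program.c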

The compute nodes for running parallel jobs are available only through the batch system on the frontend nodes.
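
The batch system is Torque/Maui, and the node classes are selected via the PBS properties from the table above (bwgrid, dgrid, cell). A minimal sketch of a job script and its submission, assuming 8 cores per bwgrid node and a default Open MPI setup (node count, walltime, and program name are placeholders):

    #!/bin/bash
    # job.pbs - request 2 nodes with the "bwgrid" property, 8 cores each.
    #PBS -l nodes=2:bwgrid:ppn=8
    #PBS -l walltime=00:30:00
    #PBS -N my_job

    # Torque starts the script in $HOME; change to the submission directory.
    cd $PBS_O_WORKDIR
    # Open MPI picks up the Torque host list automatically; start 16 ranks.
    mpirun -np 16 ./my_program

The script is submitted with "qsub job.pbs", and "qstat" shows the queue; the dgrid and cell nodes are selected analogously via nodes=...:dgrid or nodes=...:cell.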

Filesystem Policy

IMPORTANT! NO BACKUP!! There is NO backup of any user data located on the HLRS DGRID/BW-Grid systems. The only protection of your data is the redundant disk subsystem. This RAID system (RAID 5) is able to handle the failure of one component (e.g. a single disk or a controller). There is NO way to recover inadvertently removed data. Users have to back up critical data at their local site!
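
A minimal sketch of pulling critical results back to your local machine (user name and both paths are placeholders):

    # Run on your LOCAL machine: copy a results directory from the cluster.
    # -a preserves permissions/timestamps, -v is verbose, -z compresses in transit.
    rsync -avz <username>@frbw:~/results/ ~/backup/results/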

Support / Feedback

Please report all problems to:

System Administrators:
    1. Thomas Beisel
    2. Bernd Krischok
    3. Martin Hecht
User Management
    1. Martin Hecht
    2. Jochen Buchholz
Grid Technology
    1. Martin Hecht
    2. Jochen Buchholz
Software Tools / Development / Parallelisation
    1. Rainer Keller
    2. Uwe Küster
    3. Alexander Schulz
    4. Shiqing Fan
    5. Martin Bernreuther
    6. Martin Winter