
DEISA Glossary

A collection of technical terms related to High-Performance Computing (HPC) and the DEISA infrastructure.

Definitions have been collected from all over the Web. For more detailed explanations and definitions, the interested reader is referred to the following useful links:

Authentication, Authorisation and Accounting.
Policy and methods for computing usage data and handling user credits.
IBM's proprietary UNIX Operating System.
Americas GridPMA
The Americas Grid Policy Management Authority (TAGPMA) is a federation of authentication providers and relying parties headed by a Policy Management Authority of those responsible for grids in North and South America. The goal is to foster the cross-domain trust relationships that are needed to deploy grids in the Americas and around the world.
The Asia Pacific Grid Policy Management Authority (APGrid PMA) supports Grid communities in Asia Pacific to implement a common trust domain across organizations. The main activity of the APGrid PMA is to coordinate Public Key Infrastructure for use with Grid authentication. The APGrid PMA is expected to be referred to as the representative policy management authority in Asia Pacific.
Applications Task Force (ATASKF)
The DEISA team of leading experts in high performance and Grid computing whose major objective is to provide the consultancy needed to enable users' adoption of the DEISA research infrastructure. The application support service is provided on different levels. Early adopters of the DEISA infrastructure from different scientific communities (e.g., Materials Science, Cosmology, Plasma Physics, Life Sciences, Engineering and Industry) are individually supported.
Process of identifying and confirming the identity of a previously registered user.
Process of granting or denying access to a resource for an authenticated user.
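The distinction between the two steps can be sketched in a few lines of Python; the user names, credentials and resource names below are purely illustrative:

```python
# Toy sketch of the authentication/authorisation distinction.
# All names and data here are invented for illustration.

REGISTERED_USERS = {"alice": "s3cret"}      # credential store
PERMISSIONS = {"alice": {"hpc-cluster"}}    # per-user resource grants

def authenticate(username, password):
    """Confirm the identity of a previously registered user."""
    return REGISTERED_USERS.get(username) == password

def authorise(username, resource):
    """Grant or deny access to a resource for an authenticated user."""
    return resource in PERMISSIONS.get(username, set())

# Alice can log in, and may use the cluster but not the tape archive.
assert authenticate("alice", "s3cret")
assert authorise("alice", "hpc-cluster")
assert not authorise("alice", "tape-archive")
```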
Autonomic Computing
The capability of computer systems and networks to automatically configure themselves to changing conditions and heal themselves in the event of failure. Autonomy implies that less human intervention is required for operation under such conditions.
Batch Scheduler
Middleware tool which schedules user's batch jobs to run on the best suited and least loaded system(s) in the network (e.g. a cluster of servers, compute clusters, parallel systems, grid), according to user-set parameters like priority, run time, system architecture, operating system, memory requirements, etc. Popular batch schedulers are for example Load Leveler, LSF, PBS, and Oracle Grid Engine.
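The core matching idea (run each job on the least-loaded machine that satisfies its requirements) can be sketched as follows; machine names and parameters are invented, and real schedulers such as LoadLeveler or PBS apply far richer policies:

```python
# Toy batch scheduler: pick the least-loaded machine that satisfies
# the job's user-set requirements (operating system, memory, etc.).

machines = [
    {"name": "cluster-a", "os": "linux",   "mem_gb": 64,  "load": 0.70},
    {"name": "cluster-b", "os": "linux",   "mem_gb": 128, "load": 0.20},
    {"name": "sparc-1",   "os": "solaris", "mem_gb": 32,  "load": 0.05},
]

def schedule(job, machines):
    """Return the best-suited, least-loaded machine for the job, or None."""
    candidates = [m for m in machines
                  if m["os"] == job["os"] and m["mem_gb"] >= job["mem_gb"]]
    if not candidates:
        return None
    return min(candidates, key=lambda m: m["load"])  # least loaded wins

job = {"os": "linux", "mem_gb": 96}
assert schedule(job, machines)["name"] == "cluster-b"
```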
(Bringing Europe's eLectronic Infrastructures to Expanding Frontiers - Phase II) is an EU FP7 project spanning 24 months from 1 April 2008, with the aim of supporting the goals of e-Infrastructure projects to maximise synergies in specific application areas between research, scientific and industrial communities. The project builds on the success achieved in FP6 BELIEF [2005-2007] and has the strategic objective to coordinate the efficient & effective communication of results, networking and knowledge between e-Infrastructure projects and their users to promote worldwide development and exploitation.
A set of computer programs, usually characterising the workload of a computer centre, used to measure the performance of a computer system or environment. An example is the DEISA Benchmark Suite of parallel scientific application codes from a wide range of disciplines, which can be used to quantify the performance of parallel supercomputers. These codes have been chosen as being representative of the applications of the scientific projects performed on the DEISA HPC facilities. The codes and associated datasets have been selected to be useful in benchmarking machines with peak performances in the regime of hundreds of teraflops.
The OGSA Basic Execution Services are a draft standard created by the Open Grid Forum. It defines basic services for job submission and management. The current version of the standard is supported in UNICORE 6, which is used in DEISA.
Best Practices
The most efficient (least amount of effort) and effective (best results) way of accomplishing a task, based on repeatable procedures that have proven themselves over time for large numbers of people. Best Practice is considered by some as a business buzzword used to describe the process of developing and following a standard way of doing things that multiple organizations can use for management, policy, and especially software systems.
The Berkeley Open Infrastructure for Network Computing is a software platform for volunteer computing and desktop Grid computing, developed at UC Berkeley.
Capability Computing
Computing paradigm aiming at solving grand-challenge, big-science problems, based on highly optimized application codes running on high-end supercomputers.
Capacity Computing
Computing paradigm aiming at maximizing workload throughput.
The information issued by a trusted party, often published in a directory with public access, and used to identify an individual or a system. A certificate contains at least a subject, a unique serial number and a validity period.
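The fields just named can be mirrored in a small Python model; real certificates are signed X.509 structures, and the values below are invented for illustration:

```python
# Toy model of the certificate fields named above: subject, unique
# serial number and validity period. Real certificates are X.509
# structures signed by the issuing CA; this only mirrors the layout.
from datetime import date

certificate = {
    "subject": "CN=Alice,O=Example Org",   # who the certificate identifies
    "serial": 4711,                        # unique serial number
    "not_before": date(2010, 1, 1),        # validity period start
    "not_after": date(2011, 1, 1),         # validity period end
}

def is_valid_on(cert, day):
    """Check whether a day falls inside the certificate's validity period."""
    return cert["not_before"] <= day <= cert["not_after"]

assert is_valid_on(certificate, date(2010, 6, 1))
assert not is_valid_on(certificate, date(2012, 1, 1))
```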
Certification Authority (CA)
An internal entity or trusted third party that issues, signs, revokes and manages digital certificates.
The Common Gateway Interface (CGI) is a standard for interfacing external applications with information servers, such as HTTP or Web servers. A plain HTML document that the Web daemon retrieves is static, which means it exists in a constant state: a text file that doesn't change. A CGI program, on the other hand, is executed in real-time, so that it can output dynamic information. Example of a gateway: to "hook up" a Unix database to the World Wide Web, to allow people from all over the world to query it, one needs to create a CGI program that the Web daemon will execute to transmit information to the database engine, and receive the results back again and display them to the client.
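A minimal CGI program can be sketched in Python; the Web daemon executes the script and relays its standard output (header, blank line, body) to the client:

```python
#!/usr/bin/env python3
# Minimal CGI sketch: the Web daemon executes this program on each
# request and relays its standard output to the client. The
# "Content-Type" header and the blank line after it are required by
# the CGI convention; the body is generated dynamically per request.
import datetime

def render_page():
    """Build a CGI response: header, blank separator line, HTML body."""
    now = datetime.datetime.now().isoformat()
    return ("Content-Type: text/html\r\n"
            "\r\n"
            f"<html><body><p>Generated at {now}</p></body></html>")

if __name__ == "__main__":
    print(render_page())
```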
Cloud Computing
Computing paradigm focusing on provisioning of metered services related to the use of hardware, software platforms, and applications, billed on a pay-per-use basis, and pushed by vendors such as Amazon, Google, Microsoft, Salesforce, Oracle, and others. Accordingly, there are many different (but similar) definitions (as with Grid Computing). Examples:
  • Gartner Group: "Cloud Computing is a style of computing where massive scalable IT-enabled capabilities are delivered as a service to external customers using Internet technologies."
  • Amazon: "Easy, secure, flexible, on demand, pay per use, self serve."
  • Other definitions of Cloud Computing from the web:
    • Increase capacity or add capability on the fly without investing in new infrastructure, training new personnel, or licensing new software.
    • Cloud Computing is the realization of earlier Utility Computing without the technical complexities or complicated deployments.
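The pay-per-use model the definitions above describe amounts to metering usage per service and billing it at a unit rate; the rates and usage figures in this sketch are invented for illustration:

```python
# Toy pay-per-use metering sketch: usage is metered per resource and
# billed at a unit rate. Rates and usage figures are invented.

rates = {"cpu_hours": 0.10, "gb_stored": 0.15, "gb_transferred": 0.12}

def bill(usage):
    """Sum metered usage multiplied by the unit rate for each service."""
    return sum(rates[service] * amount for service, amount in usage.items())

monthly_usage = {"cpu_hours": 100, "gb_stored": 50, "gb_transferred": 25}
total = bill(monthly_usage)
assert round(total, 2) == 20.50   # 10.00 + 7.50 + 3.00
```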
Cluster Computing
Connecting two or more computers together in such a way that they appear to be a single computing resource. Clustering is used for parallel processing, load balancing, and fault tolerance. Clustering is a popular strategy for implementing grid computing, since it is relatively easy to add new CPUs simply by adding a new server or blade to the overall cluster. Clusters are typically transparent to users and applications.
Common Production Environment
See DCPE, DEISA Common Production Environment.
A specialized workload management system for compute-intensive jobs. Like other full-featured batch systems, Condor provides a job queuing mechanism, scheduling policy, priority scheme, resource monitoring, and resource management. Condor is a grid middleware component developed at UW Madison, packaged and distributed as part of the Virtual Data Toolkit (VDT).
Condor for Globus. The Condor-G system leverages recent advances in two distinct areas: (1) security and resource access in multi-domain environments, as supported within the Globus Toolkit, and (2) management of computation and harnessing of resources within a single administrative domain, embodied within the Condor system. Condor-G combines the inter-domain resource management protocols of the Globus Toolkit and the intra-domain resource and job management methods of Condor to allow the user to harness multi-domain resources as if they all belong to one personal domain.
Community Research and Development Information Service (CORDIS), the gateway to European research and development
HPC hardware vendor, offering Cray XT4 supercomputing systems.
Evidence asserting the user's right to access certain systems (e.g. username, password, etc).
The Data Access and Integration Services standard describes how different kinds of data sources, ranging from relational (SQL) databases to plain XML file repositories, can be accessed remotely via Web Services.
The DEISA Accounting Report Tool (DART) is an easy-to-use client application which gathers accounting information provided by every DEISA site in a standardized manner, and generates reports about your usage of the (distributed) DEISA computing resources. To use DART you need a valid account in the DEISA network.
Data Grid
A grid computing system that deals with data - the controlled sharing and management of large amounts of distributed data. These are often, but not always, combined with computational grid computing systems. Many scientific and engineering applications require access to large amounts of distributed data (terabytes or petabytes). The size and number of these data collections has been growing rapidly in recent years and will continue to grow as new experiments and sensors come on-line, the costs of computation and data storage decrease and performances increase, and new computational science applications are developed.
A disk pool management system with a SRM interface, jointly developed by DESY and Fermilab. dCache is a software system for storing and retrieving huge amounts of data, distributed among a large number of heterogeneous server nodes, under a single virtual file system tree with a variety of standard access methods. dCache provides methods for exchanging data with backend (tertiary) Storage Systems as well as space management, pool attraction, dataset replication, hot spot determination and recovery from disk or node failures. Connected to a tertiary storage system, the cache simulates unlimited direct access storage space. Data exchanges to and from the underlying Hierarchical Storage Management system are performed automatically and invisibly to the user. File system namespace operations may be performed through a standard NFS interface.
DEISA Common Production Environment. DEISA provides a software stack to the user, comprising compilers, shells, tools, libraries, middleware components and applications. The organization of this software is usually site and system specific. The Common Production Environment service offers a unified interface to access the software stack. This service allows compilers, libraries, tools and applications to be accessed at any site in a coherent way, using a mechanism based on Environment Modules. DEISA provides a system of such modules which has been designed according to the requirements of DEISA user communities. Appropriate curation of the DEISA environment modules at each affected site ensures that the software access interface remains consistent in case of system software upgrades or due to the evolution of the DEISA software stack.
The Distributed European Infrastructure for Supercomputing Applications is a consortium of leading national supercomputing centres that currently deploys and operates a persistent, production quality, distributed supercomputing environment with continental scope. The purpose of this EU funded research infrastructure is to enable scientific discovery across a broad spectrum of science and technology, by enhancing and reinforcing European capabilities in the area of high performance computing. This becomes possible through a deep integration of existing national high-end platforms, tightly coupled by a dedicated network and supported by innovative system and grid software.
The previous DEISA project funded in FP6.
The current DEISA project funded in FP7, and successor of DEISA1.
DEISA Extreme Computing Initiative (DECI)
The purpose of the DEISA Extreme Computing Initiative (DECI) is to enhance the impact of the DEISA research infrastructure on leading European science and technology. DECI identifies, enables, deploys and operates "flagship" applications in selected areas of science and technology. These leading, ground breaking applications must deal with complex, demanding, innovative simulations that would not be possible without the DEISA infrastructure, and which would benefit from the exceptional resources of the Consortium.
An open standards-based means for accessing the DEISA consortium's heterogeneous supercomputing Grid.
The Distributed Resource Management Application API is a high-level Open Grid Forum API specification for the submission and control of jobs to one or more Distributed Resource Management Systems (DRMS) within a Grid architecture. The scope of the API covers all the high level functionality required for Grid applications to submit, control, and monitor jobs to local Grid DRM systems. DRMAA is (beside GridRPC) one of the first specifications that reached the full recommendation status in the Open Grid Forum.
Distributed Resource Management System. Examples are Condor, Grid Engine, Loadleveler, LSF, PBS, Slurm, and Torque.
An extension of DEISA1 in FP6.
The European e-Infrastructure Forum is a forum for the discussion of principles and practices to create synergies for distributed Infrastructures. The goal of the European e-Infrastructure Forum is the achievement of seamless interoperation of leading e-Infrastructures serving the European Research Area. The focus of the forum is the needs of the user communities that require services which can only be achieved by collaborating Infrastructures. The current members are EGEE, EGI, DEISA, PRACE, Terena, and GEANT.
The European Fusion Development Agreement (EFDA) is an agreement between European fusion research institutions and the European Commission to strengthen their coordination and collaboration, and to participate in collective activities. Its activities include coordination of physics and technology in EU laboratories, the exploitation of the world's largest fusion experiment, the Joint European Torus (JET) in the UK, training and career development in fusion and EU contributions to international collaborations. EFDA is supported by DEISA.
The Enabling Grids for E-sciencE project brought together scientists and engineers from more than 280 institutions in 54 countries world-wide to provide a seamless Grid infrastructure for e-Science that is available to scientists 24 hours a day. Conceived from the start as a four-year project, the second two-year phase started on 1 April 2006, and was funded by the European Commission. Expanding from originally two scientific fields, high energy physics and life sciences, EGEE integrated applications from many other scientific fields, ranging from geology to computational chemistry. Generally, the EGEE Grid infrastructure is ideal for any scientific research, especially where the time and resources needed for running the applications are considered impractical when using traditional IT infrastructures. At the end of 2009, the EGEE Grid consisted of 110,000 CPUs available to users 24 hours a day, 7 days a week, in addition to about 20 PB (20 million gigabytes) of disk storage plus a tape MSS, and maintained 250,000 concurrent jobs.
European Grid Initiative, to establish a sustainable grid infrastructure in Europe. Driven by the needs and requirements of the research community, it is expected to enable the next leap in research infrastructures, thereby supporting collaborative scientific discoveries in the European Research Area (ERA).
Refers to a new research environment in which all researchers - whether working in the context of their home institutions or in national or multinational scientific initiatives - are sharing access to unique or distributed scientific facilities (including data, instruments, computing and communications), regardless of their type and location in the world.
e-infrastructure Reflection Group (e-IRG)
The e-Infrastructure Reflection Group was founded to define and recommend best practices for the pan-European electronic infrastructure efforts. It consists of official government delegates from all the EU countries. The e-IRG produces white papers, roadmaps and recommendations, and analyses the future foundations of the European Knowledge Society.
The European Middleware Initiative (EMI) is a close collaboration of the three major middleware providers, ARC, gLite and UNICORE, and other specialized software providers like dCache. The project's mission is to (1) deliver a consolidated set of middleware components for deployment in EGI (as part of the Unified Middleware Distribution - UMD), PRACE and other DCIs, (2) extend the interoperability and integration with emerging computing models, (3) strengthen the reliability and manageability of the services and establish a sustainable support model, and (4) harmonise and evolve the middleware, ensuring it responds effectively to the requirements of the scientific communities relying on it.
The European Network for Earth System Modelling ENES is intended to help in the development and evaluation of state-of-the-art climate and Earth system models, to facilitate focused model inter-comparisons in order to assess and improve these models, to encourage exchanges of software and model results, and to help in the development of high performance computing facilities dedicated to long high-resolution multi-model ensemble integrations. ENES is supported by DEISA.
Enterprise Grid
A collection of networked grid components under the control of a grid management entity, typically managed by a single enterprise - i.e. a business entity is responsible for managing the assignment of resources to services in order to meet its business goals. What defines the boundaries of the enterprise grid is management responsibility and control. The services that run on an enterprise grid may range from the traditional commercial enterprise applications, such as ERP or CRM packaged applications, to newer, distributed applications or services. Enterprise grids are typically differentiated from more traditional data centers by management practices and technology, which make management service or application centric, rather than component centric, enable the pooling and sharing of networked resources, and enable agility through rapid and automated service provisioning. An enterprise grid may be confined to a single data center or it may extend across several.
Enterprise Grid Computing
A form of grid computing, specifically within an enterprise, resulting in an enterprise grid that includes commercial enterprise applications as a type of workload/service.
ESA's Planck mission will be the first European space observatory whose main goal is the study of the Cosmic Microwave Background (CMB) – the relic radiation from the Big Bang. Observing at microwave wavelengths, ESA's Planck observatory is the third space mission of its kind. It will measure tiny fluctuations in the CMB with unprecedented accuracy, providing the sharpest picture ever of the young Universe — when it was only 380 000 years old — and zeroing-in on theories that describe its birth and evolution. PLANCK is supported by DEISA.
European Strategy Forum on Research Infrastructures.
The EU Fusion fOR Iter Applications (EUFORIA) project is funded by the European Union under FP7. It will provide a comprehensive framework and infrastructure for core and edge transport and turbulence simulation, linking grid and HPC, to the fusion modelling community. The EUFORIA project will enhance the modelling capabilities for ITER-sized plasmas through the adaptation, optimization and integration of a set of critical applications for edge and core transport modelling, targeting different computing paradigms as needed (serial and parallel grid computing and HPC). Deployment of both grid and High Performance Computing services is essential to the project. A novel aspect is the dynamic coupling and integration of codes and applications running on a set of heterogeneous platforms into a single coupled framework through a workflow engine, a mechanism needed to provide the necessary level of integration in the physics applications. This strongly enhances the integrated modelling capabilities of fusion plasmas and will at the same time provide new computing infrastructure and tools to the fusion community in general. EUFORIA is supported by DEISA.
The international organisation to coordinate the trust fabric for e-Science grid authentication in Europe.
File Staging
When the job submit host and the execution host in a distributed environment do not have access to a common, shared file system, or are simply using different root directories, File Staging ensures that the necessary job data is copied (via remote file copy or a similar utility) to the execution host, and that the job results are copied back to the submit host.
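The stage-in, execute, stage-out cycle can be sketched as follows; the two hosts are simulated here with local temporary directories, whereas a real deployment would replace the copies with a remote file transfer such as scp or GridFTP:

```python
# File-staging sketch, simulated with local directories: stage the
# input to the execution host, run the job, copy results back.
import pathlib
import shutil
import tempfile

submit = pathlib.Path(tempfile.mkdtemp())   # stands in for the submit host
execute = pathlib.Path(tempfile.mkdtemp())  # stands in for the execution host

(submit / "input.dat").write_text("42\n")

shutil.copy(submit / "input.dat", execute / "input.dat")    # stage in
result = int((execute / "input.dat").read_text()) * 2       # "the job" runs
(execute / "output.dat").write_text(f"{result}\n")
shutil.copy(execute / "output.dat", submit / "output.dat")  # stage out

assert (submit / "output.dat").read_text() == "84\n"
```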
Sixth Research Framework Programme of the EU.
Seventh Research Framework Programme of the EU.
A file transfer protocol for exchanging and manipulating files over a Transmission Control Protocol (TCP) computer network such as the Internet. An FTP client may connect to an FTP server to manipulate files on that server.
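A client session can be sketched with Python's standard ftplib module; the host name and credentials below are placeholders, and no connection is made when the function is merely defined:

```python
# Sketch of an FTP client session using Python's standard ftplib.
# Host and credentials are placeholders; nothing connects until the
# function is actually called.
from ftplib import FTP

def list_remote_dir(host, user, password, path="/"):
    """Connect to an FTP server and return the names in a directory."""
    with FTP(host) as ftp:        # opens the TCP control connection
        ftp.login(user, password)
        return ftp.nlst(path)     # NLST: plain name listing

# Example call (not executed here):
# list_remote_dir("ftp.example.org", "anonymous", "guest@")
```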
A scalable distributed monitoring system for clusters and Grids. It is based on a hierarchical design targeted at federations of clusters. It leverages widely used technologies such as XML for data representation, XDR for compact, portable data transport, and RRDtool for data storage and visualisation. It uses carefully engineered data structures and algorithms to achieve very low per-node overheads and high concurrency. The implementation is robust and has been ported to an extensive set of operating systems and processor architectures. It has been used to link clusters across university campuses and around the world and can scale to handle clusters with 2000 nodes.
The Gauss Centre for Supercomputing (GCS) is an alliance of the three national supercomputing centres - Jülich Supercomputing Centre (JSC), the Leibniz-Rechenzentrum (LRZ), and the Höchstleistungsrechenzentrum Stuttgart (HLRS) - into a virtual organisation enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen from July 2006. This alliance provides one of the largest and most powerful supercomputer infrastructures in Europe.
GÉANT2 is the high-bandwidth, academic Internet serving Europe’s research and education community. Connecting over 30 million researchers with a multi-domain topology spanning 34 European countries and links to a number of other world regions, GÉANT2 is at the heart of global research networking. GÉANT2 is co-funded by the European Commission and Europe's national research and education networks, and is managed by DANTE.
The purpose of this Grid Interoperation Now Community Group of the Open Grid Forum is to coordinate a set of interoperation efforts among production Grids interested in interoperating in support of applications that require resources in multiple Grids.
Grid Information Service, the yellow pages of a Grid. Formerly MDS, the Metacomputing Directory Service.
Grid Index Information Service (a component of MDS) maintains a current index of all the registered resources known in the grid. It provides a means of knitting together arbitrary GRIS services to provide a coherent system image that can be explored or searched by grid applications. GIISes thus provide a mechanism for identifying "interesting" resources, where "interesting" can be defined arbitrarily. For example, a GIIS could list all of the computational resources available within a confederation of laboratories, or all of the distributed data storage systems owned by a particular agency.
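The knitting-together of per-site registries into one searchable image can be sketched in Python; the registries and resource attributes below are invented for illustration:

```python
# Toy GIIS: an index that merges several GRIS-style per-site resource
# registries into one coherent, searchable system image.
# Site and resource names are invented for illustration.

gris_lab_a = {"cluster-a": {"type": "compute", "cpus": 128}}
gris_lab_b = {"cluster-b": {"type": "compute", "cpus": 512},
              "store-b":   {"type": "storage", "tb": 500}}

def build_index(*registries):
    """Merge per-site resource views into one index."""
    index = {}
    for registry in registries:
        index.update(registry)
    return index

def find(index, **criteria):
    """Search the index for resources matching all given attributes."""
    return [name for name, attrs in index.items()
            if all(attrs.get(k) == v for k, v in criteria.items())]

index = build_index(gris_lab_a, gris_lab_b)
assert sorted(find(index, type="compute")) == ["cluster-a", "cluster-b"]
```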
A middleware for grid computing. Born from the collaborative efforts of more than 80 experts in 12 different academic and industrial research centres as part of the EGEE Project, gLite provides a framework for building grid applications tapping into the power of distributed computing and storage resources across the Internet.
Globus Alliance
A group that conducts research and development for academic grids. The alliance, creators of the Globus Toolkit, includes representatives from Argonne National Laboratory, the University of Southern California Information Sciences Institute, the University of Chicago, the University of Edinburgh, and the Swedish Center for Parallel Computers.
Globus Toolkit
A kit designed by the Globus Alliance to provide a set of tools based on standard grid APIs. Its latest development version, GT5, is based on standards drafted by the Open Grid Forum.
The GLUE schema is intended to describe the components of an e-Infrastructure as well as their state. In its older versions, though, it was not able to describe the DEISA infrastructure due to its strong Globus/gLite disposition. Now, with GLUE being developed into a real standard in the OGF, DEISA takes part in this process and tries to put the schema on a broader basis.
The IBM General Parallel File System (GPFS) is a true distributed, clustered file system. Multiple servers are used to manage the data and metadata of a single file system. Individual files are broken into multiple blocks and striped across multiple disks, and multiple servers, which eliminates bottlenecks.
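The striping idea can be illustrated in a few lines of Python: a file is split into fixed-size blocks which are distributed round-robin across disks. The block size and disk count are invented, and GPFS itself is of course far more sophisticated:

```python
# Toy illustration of file striping: split a file into fixed-size
# blocks and distribute them round-robin across several disks.
# Block size and disk count are tiny, invented values.

BLOCK_SIZE = 4   # bytes per block (tiny, for the example)
DISKS = 3

def stripe(data):
    """Map each block of a file to a (disk, slot, payload) triple."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    return [(i % DISKS, i // DISKS, b) for i, b in enumerate(blocks)]

layout = stripe(b"abcdefghijkl")  # 12 bytes -> 3 blocks on 3 disks
assert layout == [(0, 0, b"abcd"), (1, 0, b"efgh"), (2, 0, b"ijkl")]
```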
Grid Resource Allocation and Management service provides a single interface for requesting and using remote system resources for the execution of jobs. The most common use of GRAM is remote job submission and control. It is designed to provide a uniform, flexible interface to job scheduling systems. See also WS GRAM.
A service for sharing computer power and data storage capacity over the Internet, unlike the Web which is a service just for sharing information over the Internet. The Grid goes well beyond simple communication between computers, and aims ultimately to turn the global network of computers into one vast computational resource. Today, the Grid is a "work in progress", with the underlying technology still in a prototype phase, and being developed by hundreds of researchers and software engineers around the world.
Grid Computing
Enables organisations to share computing and information resources across department and organisational boundaries in a secure, highly efficient manner. Organisations around the world are utilizing grid computing today in such diverse areas as collaborative scientific research, drug discovery, financial risk analysis, and product design. Grid computing enables research-oriented organisations to solve problems that were infeasible to solve due to computing and data-integration constraints. Grids also reduce costs through automation and improved IT resource utilisation. Finally, grid computing can increase an organisation's agility enabling more efficient business processes and greater responsiveness to change. Over time grid computing will enable a more flexible, efficient and utility-like global computing infrastructure.
Grid Engine
An open source batch-queuing and workload management system. Grid Engine is typically used on a compute farm or compute cluster and is responsible for accepting, scheduling, dispatching, and managing the remote execution of large numbers of standalone, parallel or interactive user jobs. It also manages and schedules the allocation of distributed resources such as processors, memory, disk space, and software licenses.
A protocol defined by several Open Grid Forum recommendations, and a draft before the IETF FTP working group. The GridFTP protocol provides for the secure, robust, fast and efficient transfer of (especially bulk) data. The Globus Toolkit provides the most commonly used implementation of that protocol, though others do exist (primarily tied to proprietary internal systems).

GRDI2020 - Towards a 10-Year Vision for Global Research Data Infrastructures - is a Europe-driven initiative proposed as a Coordination Action under the Seventh Framework Programme FP7, funded by the GÉANT & Infrastructure unit of the EU DG-INFSO (February 2010 - 24 months). GRDI2020 supports the objectives of the call “INFRA-2009.3: Studies conferences & coordination actions supporting policy development, including international co-operation for e-Infrastructures” by specifically contributing to the definition and development of a European policy for sustainable Global Research Data Infrastructures, thereby aiding in the build-up of the European Research Area.

Grid Portal
A gateway for a Web site that is a major starting site for users (end-users and administrators) when they get connected to the Grid and Grid resources such as computers, storage, network, applications, data, and other services. An example is EnginFrame, which in DEISA served as the portal for the Life Science community.
An integrating federated authorisation infrastructure (Shibboleth) with Grid technology (the Globus Toolkit) to provide attribute-based authorisation for distributed scientific communities. Project GridShib is funded by the NSF Middleware Initiative (NMI). The goal of GridShib is to allow interoperability between the Globus Toolkit from the Globus Alliance with Shibboleth from Internet2. As a result, GridShib enables secure attribute sharing between Grid virtual organisations and higher-educational institutions.
GridTalk brings the success stories of Europe's e- infrastructure to a wider audience. The project coordinates the dissemination outputs of EGEE and other European grid computing efforts, ensuring their results and influence are reported in print and online.
A Metascheduler that enables large-scale, reliable and efficient sharing of computing resources (clusters, computing farms, servers, supercomputers), managed by different LRM (Local Resource Management) systems, such as PBS, SGE, LSF, Condor, within a single organisation (enterprise grid) or scattered across several administrative domains (partner or supply-chain grid). With GridWay, a Grid infrastructure can be exploited and managed in the same way as a local computing cluster. GridWay is a Globus project.
Grid Resource Information Service, queries resources for their current configuration, capabilities, and status.
Grid Security Infrastructure overcomes the security challenges posed by grid applications by offering programmers the following three features: a complete public-key system, mutual authentication through digital certificates, and credential delegation and single sign-on. GSI is composed of a set of command-line tools to manage certificates, and a set of Java classes to easily integrate security into grid services. It is based on standard technologies, such as TLS (formerly SSL) and secure Web Services specifications (XML-Signature, XML-Encryption, etc.).
Graphical User Interface.
The High Performance Computing in Europe Taskforce was formed by representatives of European countries interested in shaping the European High Performance Computing Infrastructure. HET has set the framework of a European policy in the area of High Performance Computing. HET reports to the countries represented in the present Taskforce and to the e-IRG. HET was the predecessor of PRACE.
Hierarchical Storage Management (HSM)
A data storage technique which automatically moves data between high-cost and low-cost storage media. HSM systems exist because high-speed storage devices, such as hard disk drive arrays, are more expensive (per byte stored) than slower devices, such as optical discs and magnetic tape drives. While it would be ideal to have all data available on high-speed devices all the time, this is prohibitively expensive for many organizations. Instead, HSM systems store the bulk of the enterprise's data on slower devices, and then copy data to faster disk drives when needed. In effect, HSM turns the fast disk drives into caches for the slower mass storage devices. The HSM system monitors the way data is used and makes best guesses as to which data can safely be moved to slower devices and which data should stay on the fast devices.
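The caching behaviour described above can be sketched with a least-recently-used policy; tier sizes and file names are invented, and real HSM systems use far more elaborate heuristics:

```python
# Toy HSM policy: keep recently used files on the fast tier, and
# demote the least recently used one to the slow tier when the fast
# tier is full. Capacities and file names are invented.

FAST_CAPACITY = 2   # files the fast tier can hold

fast, slow, last_used = {}, {}, {}
clock = 0

def access(name, data=None):
    """Read or write a file, keeping it on the fast tier afterwards."""
    global clock
    clock += 1
    if data is None:
        data = fast.pop(name, None) or slow.pop(name)   # recall if needed
    if len(fast) >= FAST_CAPACITY and name not in fast:
        victim = min(fast, key=last_used.get)           # least recently used
        slow[victim] = fast.pop(victim)                 # demote to slow tier
    fast[name] = data
    last_used[name] = clock

access("a.dat", b"aa"); access("b.dat", b"bb"); access("c.dat", b"cc")
assert "a.dat" in slow                    # oldest file was demoted
assert set(fast) == {"b.dat", "c.dat"}    # recent files stay fast
```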
High-performance computing is the use of parallel processing for running advanced application programs efficiently, reliably and quickly. The term applies especially to systems that function above a teraflop or 10^12 floating-point operations per second. The term HPC is occasionally used as a synonym for supercomputing, although technically a supercomputer is a system that performs at or near the currently highest operational rate for computers. Some supercomputers work at more than a petaflop or 10^15 floating-point operations per second. The most common users of HPC systems are scientific researchers, engineers and academic institutions. Some government agencies, particularly the military, also rely on HPC for complex applications. High-performance systems often use custom-made components in addition to so-called commodity components.
The Transnational Access activity enables researchers working in any eligible country in Europe to carry out a collaborative research visit of up to 3 months' duration in any other eligible country, and to gain access to some of the most powerful High Performance Computing facilities in Europe.
The High Performance Computing Basic Profile is a recommendation which defines how to submit, monitor and manage jobs using standard mechanisms across different job schedulers or Grid middleware from different software providers. It is naturally important to DEISA, and the DEISA Technology Work Package works on implementing it thoroughly.
HPC in the Cloud
HPC in the Cloud is the only portal dedicated to covering technical Cloud computing in science, industry and the data center. The publication is written to provide technology decision-makers and stakeholders in the High Performance Computing industry (spanning government, industry and academia) with the most accurate and current information on developments happening where Cloud Computing and HPC intersect.
HPC-Hardware vendor offering HPC systems such as BlueGene/P, IBM PowerPC, Power5 and Power6 systems, IBM BladeCenter, and IBM p690.
Information & Communication Technologies (ICT)
ICT - Projects in FP7
An integrated development environment (IDE) is a programming environment that has been packaged as an application program, typically consisting of a code editor, a compiler, a debugger, and a graphical user interface (GUI) builder. The IDE may be a standalone application or may be included as part of one or more existing and compatible applications.
The Internet Engineering Task Force, is a large open international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet. The actual technical work of the IETF is done in its working groups, which are organized by topic into several areas (e.g., routing, transport, security, etc.). The mission of the IETF is to produce high quality, relevant technical and engineering documents that influence the way people design, use, and manage the Internet in such a way as to make the Internet work better. These documents include protocol standards, best current practices, and informational documents of various kinds.
In order to continuously support the European computing infrastructures and to exploit possible synergies, the Initiative for Globus in Europe (IGE) coordinates European Globus activities, drives forward Globus developments according to the requirements of European users, and strengthens the influence of European developers in the Globus Alliance. The IGE will serve as a comprehensive service provider for the European e-infrastructures regarding the development, customisation, provisioning, support, and maintenance of components of the Globus Toolkit.
Information Society Technologies (IST)
Information Society Technologies.
International Grid Trust Federation (IGTF)
The body with the goal to harmonize and synchronize the PMAs' policies to establish and maintain global trust relationships in e-Science.
iRODS is the integrated Rule-Oriented Data-management System, a community-driven, open source, data grid software solution. It helps researchers, archivists and others manage (organize, share, protect, and preserve) large sets of computer files. Collections can range in size from moderate to a hundred million files or more totalling petabytes of data. The requirements to manage large collections of data include both a number of generic capabilities and diverse features that depend on the details of different applications. iRODS is highly configurable and easily extensible for a very wide range of use cases through user-defined Micro-services, without having to modify core code. iRODS has been deployed by DEISA on a testbed.
An international, weekly, on-line science-computing newsletter that shows the importance of distributed computing, grid computing, cloud computing and high-performance computing. It does so by reporting about the people and projects involved in these fields, and how these types of computing technologies are being applied to make scientific advances.
Information Technology Infrastructure Library. ITIL is a consistent and comprehensive documentation of best practice for IT Service Management. Used by many hundreds of organisations around the world, an ITIL philosophy has grown up around the guidance contained within the ITIL books and the supporting professional qualification scheme. ITIL consists of a series of books giving guidance on the provision of quality IT services, and on the accommodation and environmental facilities needed to support IT. ITIL has been developed in recognition of organisations' growing dependency on IT and embodies best practices for IT Service Management.
Joint Research Activity. In DEISA2 there are two JRAs: JRA1, Integrated DEISA Development Environment, i.e. provision of an integrated environment for scientific application development, based on a software infrastructure for tools integration, which provides a common user interface across multiple computing platforms. And JRA2, Enhancing Scalability, i.e. enabling supercomputer applications for the efficient exploitation of current and future supercomputers.
The Job Submission Description Language (JSDL) is an established standard created by the Open Grid Forum (OGF). It is a language for formulating the resource requirements of computational job requests to be submitted to resource providers such as Compute Centres or, more generally, Grid Infrastructures. JSDL is currently supported in DEISA via GRIDSAM / UNICORE5 and UNICORE6.
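As a rough illustration, a minimal JSDL 1.0 document requesting 16 CPUs might look as follows (the executable path and the CPU count are placeholders):

```xml
<jsdl:JobDefinition
    xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl"
    xmlns:jsdl-posix="http://schemas.ggf.org/jsdl/2005/11/jsdl-posix">
  <jsdl:JobDescription>
    <jsdl:Application>
      <jsdl-posix:POSIXApplication>
        <jsdl-posix:Executable>/home/user/simulation</jsdl-posix:Executable>
      </jsdl-posix:POSIXApplication>
    </jsdl:Application>
    <jsdl:Resources>
      <jsdl:TotalCPUCount>
        <jsdl:Exact>16</jsdl:Exact>
      </jsdl:TotalCPUCount>
    </jsdl:Resources>
  </jsdl:JobDescription>
</jsdl:JobDefinition>
```

A middleware layer such as UNICORE translates such a document into the submission syntax of the local batch system.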
The Job Submission Description Language Working Group in the Open Grid Forum, see OGF.
The Joint Security Policy Group aims to prepare simple and general policies which are not only applicable to the primary stakeholders but that are also of use to other Grid infrastructures (NGIs, etc.). The adoption of common policies by multiple Grids can ease the problems of interoperability.
Juelich Benchmark Environment for running HPC benchmarks, such as the DEISA benchmark computer programs. It provides a script based framework to easily create benchmark sets, run those sets on different computer systems and evaluate the results. It is actively developed by the Juelich Supercomputing Centre of Forschungszentrum Juelich, Germany.
LDAP
Short for Lightweight Directory Access Protocol, a set of protocols for accessing information directories. LDAP is based on a subset of the X.500 standard and, unlike X.500, supports TCP/IP, which is necessary for any type of Internet access.
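LDAP clients locate entries by distinguished name (DN) and query them with string filters (RFC 4515). The small helper below, using a hypothetical DEISA-style entry, sketches the escaping such filters require; it is an illustration, not a full LDAP client:

```python
def ldap_filter(attribute, value):
    """Build an RFC 4515 search filter, escaping the special characters."""
    escaped = "".join(
        "\\%02x" % ord(c) if c in "\\*()\0" else c for c in value
    )
    return "(%s=%s)" % (attribute, escaped)

# A hypothetical directory entry and a filter matching it.
dn = "uid=jdoe,ou=People,dc=deisa,dc=eu"
assert ldap_filter("uid", "jdoe") == "(uid=jdoe)"
# A literal '*' must be escaped so it is not read as a wildcard.
assert ldap_filter("cn", "a*b") == "(cn=a\\2ab)"
```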
LoadLeveler
IBM's parallel job scheduling and workload management system that allows users to run more jobs in less time by matching each job's processing needs and priority with the available resources, thereby maximizing resource utilization.
LSF
Load Sharing Facility is a batch scheduling system from Platform Computing that helps manage and optimize complex IT environments delivering higher IT efficiency, faster time to business results, reduced cost of computing and guaranteed service execution. The LSF family of products delivers a grid-enabled solution that is optimized for solving technical computing problems. LSF fully utilizes all IT resources regardless of operating system, including desktops, servers and mainframes to ensure policy-driven, prioritized service levels for always-on access to resources.
Lustre is a scalable, secure, robust, highly-available cluster file system. The central goal is the development of a next-generation cluster file system which can serve clusters with tens of thousands of nodes, provide petabytes of storage, and move hundreds of GB/s with state-of-the-art security and management infrastructure.
Multi-Cluster GPFS. See GPFS.
Multi-Cluster LoadLeveler. See LoadLeveler.
A policy-based job scheduler and event engine that enables utility-based computing for clusters. It simplifies management across one or multiple hardware, operating system, storage, network, license and resource manager environments to increase the ROI of clustered resources and improve system utilization.
Monitoring
The process of dynamic collection, interpretation and presentation of information about hardware and software resources.
A language-independent communications protocol for point-to-point and collective communication, used to program parallel computers. MPI is a message-passing application programmer interface, together with protocol and semantic specifications for how its features must behave in any implementation. MPI's goals are high performance, scalability, and portability. MPI remains the dominant model used in high-performance computing today.
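MPI itself is specified for C and Fortran (with bindings for other languages). Purely to illustrate the blocking point-to-point send/receive model it standardises, the Python sketch below fakes two "ranks" with one queue per rank as a mailbox; all names here are invented for the example and are not part of any MPI API:

```python
import threading
import queue

# One mailbox per "rank"; a blocking get() mimics a blocking MPI receive.
mailboxes = {0: queue.Queue(), 1: queue.Queue()}

def send(dest, msg):
    mailboxes[dest].put(msg)

def recv(rank):
    return mailboxes[rank].get()   # blocks until a message arrives

def worker():
    # "Rank 1": receive a value, transform it, send the result back.
    data = recv(1)
    send(0, data * 2)

t = threading.Thread(target=worker)
t.start()
send(1, 21)        # "rank 0" sends to rank 1
result = recv(0)   # ... and blocks until rank 1 replies
t.join()
```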
The National Research Grid Initiative of Japan, created in 2003 by the Ministry of Education, Culture, Sports, Science and Technology (MEXT). Its objective is to research and develop grid middleware according to global standards to a level that can support practical operation, so that a large-scale computing environment (the Science Grid) can be implemented for widely-distributed, advanced research and education. NAREGI is carrying out R&D from two directions: through the grid middleware R&D at the National Institute of Informatics (NII), and through applied experimental study using nano-applications, at the Institute for Molecular Science (IMS). These two organisations advance the project in cooperation with industry, universities and public research facilities. From 2006, under the Science Grid NAREGI Program, research and development will continue to build on current results, while expanding in scope to include application environments for next-generation, peta-scale supercomputers.
HPC-Hardware Vendor, offering NEC SX-8 and SX-9 vector-technology based HPC systems.
Network Job Supervisor. A Grid site supporting the UNICORE middleware is required to run at least one instance of an NJS which is the entry point for incoming jobs. Underneath the NJS, there exists an abstraction layer, the TSI (Target System Interface) which maps UNICORE language to a language the underlying batch system server can understand. The NJS communicates with the UNICORE gateway and authenticates users at the Grid site by querying the UUDB (UNICORE User Database).
Research and Education Network (national network providers).
The Network File System version 4 is a protocol defined by the IETF. NFSv4 is a distributed file system protocol which owes heritage to NFS protocol version 2, RFC 1094, and version 3, RFC 1813. Unlike earlier versions, the NFS version 4 protocol supports traditional file access while integrating support for file locking and the mount protocol. In addition, support for strong security (and its negotiation), compound operations, client caching, and internationalization have been added, and attention has been paid to making NFS version 4 operate well in an Internet environment.
The Organisation for the Advancement of Structured Information Standards is a not-for-profit consortium that drives the development, convergence and adoption of open standards for the global information society. The consortium produces more Web services standards than any other organisation along with standards for security, e-business, and standardisation efforts in the public sector and for application-specific markets. Founded in 1993, OASIS has more than 5,000 participants representing over 600 organisations and individual members in 100 countries.
The Open Grid Forum is a community of users, developers, and vendors leading the global standardisation effort for grid computing. OGF accelerates grid adoption to enable business value and scientific discovery by providing an open forum for grid innovation and developing open standards for grid software interoperability. The work of OGF is carried out through community-initiated working groups, which develop standards and specifications in cooperation with other leading standards organisations, software vendors, and users. The OGF community consists of thousands of individuals in industry and research, representing over 400 organisations in more than 50 countries. OGF is the organisation that resulted from the merger of the Global Grid Forum (GGF) and the Enterprise Grid Alliance (EGA).
Aims to play a key role in influencing the drive towards global standardisation efforts and in bringing best practices back into the EU computing environment. OGF-Europe provides the resources needed to deeply engage with European grid users and providers and to respond to their unique set of requirements for Applied Distributed Computing. OGF-Europe is interacting with diverse organisations across a wide range of verticals & domains to help create awareness of distributed computing & the major benefits it delivers. Funded by the EC, OGF-Europe comes at a crucial time in IT development where the use of distributed computing is transforming the IT landscape in the shift towards an increasingly global, knowledge-based economy.
The Open Grid Services Architecture, describes an architecture for a service-oriented grid computing environment for business and scientific use, developed within the Open Grid Forum. OGSA is based on several Web service technologies, notably WSDL and SOAP. Briefly, OGSA is a distributed interaction and computing architecture based around services, assuring interoperability on heterogeneous systems so that different types of resources can communicate and share information. OGSA has been described as a refinement of the emerging Web Services architecture, specifically designed to support Grid requirements.
See BES.
This project develops middleware to assist with access and integration of data from separate sources via the grid. OGSA-DAI is a middleware product which supports the exposure of data resources, such as relational or XML databases, on to grids. Various interfaces are provided and many popular database management systems are supported. The software also includes a collection of components for querying, transforming and delivering data in different ways, and a simple toolkit for developing client applications. OGSA-DAI is designed to be extendable, so users can provide their own additional functionality.
The Open Middleware Infrastructure Institute, provides software and support to enable a sustained future for the UK e-Science community and its international collaborators. Many researchers would benefit from access to emerging e-Infrastructure - such as the Grid - but are put off by software that is difficult to implement and use. OMII-UK's software solutions solve these problems by providing the functionality needed by researchers in an easy-to-use and easy-to-install package. In addition to our Software Solutions, OMII-UK also provides the OMII-UK Development Kit - an easy to install and use open-source software package that provides a secure web service hosting environment, web services and the necessary tools and environments to access these services.
OMII-Europe
OMII-Europe ran from May 2006 until April 2008 and was funded under Framework 6 of the European Union's Research Infrastructures priority. The project was established to source key software components that can interoperate across several heterogeneous Grid middleware platforms. OMII-Europe endorses both the use of open standards and open source, and has chosen particular open standards for the Grid that it believes are essential to interoperability across global resources.
OpenID
OpenID eliminates the need for multiple usernames across different websites, simplifying the user's online experience. Users choose the OpenID provider that best meets their needs and, most importantly, that they trust; their OpenID stays with them even if they move to another provider. The OpenID technology is not proprietary and is completely free. OpenID and other means of authentication are of great concern for DEISA.
Open Multi-Processing is an application programming interface (API) that supports multi-platform shared memory multiprocessing programming in C/C++ and Fortran on many architectures, including Unix and Microsoft Windows platforms. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behaviour. Jointly defined by a group of major computer hardware and software vendors, OpenMP is a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications for platforms ranging from the desktop to the supercomputer. An application built with the hybrid model of parallel programming can run on a computer cluster using both OpenMP and Message Passing Interface (MPI).
The Open Science Grid is a consortium of software, service and resource providers and researchers, from universities, national laboratories and computing centres across the U.S., who together build and operate the OSG project. OSG brings together computing and storage resources from campuses and research communities into a common, shared grid infrastructure over research networks via a common set of middleware. OSG's mission is to help satisfy the ever-growing computing and data management requirements of scientific researchers, especially collaborative science requiring high throughput computing. The project is funded by the NSF and DOE, and provides staff for managing various aspects of the OSG.
The Portable Batch System is job scheduling software. Its primary task is to allocate computational tasks, i.e., batch jobs, among the available computing resources. It is often used in conjunction with UNIX cluster environments. Several spin-offs of this software exist under various names, such as PBS Pro, the commercial version distributed by Altair; however, the overall architecture and command-line interface remain essentially the same.
1000 Teraflop/s. 10^15 floating point operations per second.
A Cluster, Grid or Cloud Portal provides a single secure web interface to computational resources (computing, storage, network, data, applications), while hiding the complexity of the underlying hardware and software of the distributed computing environment. See also Grid Portal.
The Partnership for Advanced Computing in Europe is a project funded in part by the EU's 7th Framework Programme which prepares the creation of a persistent pan-European HPC service, consisting of several tier-0 centres providing European researchers with access to capability computers and forming the top level of the European HPC ecosystem.
The PRACE 1IP project is designed to support the accelerated implementation of the RI. The project supports the evolution of the RI by refining and extending the administrative, legal and financial framework with focus on the specific requirements of industry. To enable world-class science on novel systems the project assists users in porting, optimising and petascaling applications to the different architectures and deploys consistent services across the RI. The tools and techniques will be selected to have broad applicability across many disciplines. This is accompanied by advanced training in modern programming methods and paradigms, establishing a permanent distributed training infrastructure.
Private Key
One of two keys used in public key cryptography. The private key is known only to the owner; it is used to "sign" outgoing messages and to decrypt incoming messages.
The aim of the Parallel Tools Platform project is to produce an open- source industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. The project will provide a standard, portable parallel integrated development environment that supports a wide range of parallel architectures and runtime systems; a scalable parallel debugger; support for the integration of a wide range of parallel tools; and an environment that simplifies the end-user interaction with parallel systems.
Public Key
One of two keys used in public key cryptography. The public key can be known to anyone; it is used to verify "signatures" on incoming messages, or to encrypt a file or message so that only the holder of the corresponding private key can decrypt it.
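The division of labour between the two keys can be shown with a toy RSA example; the parameters are tiny and deliberately insecure, chosen only to make the arithmetic visible:

```python
# Toy RSA for illustration only -- never use such small parameters in practice.
p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

def sign(message_hash):
    """Signing uses the PRIVATE key (d)."""
    return pow(message_hash, d, n)

def verify(message_hash, signature):
    """Verification uses only the PUBLIC key (e, n)."""
    return pow(signature, e, n) == message_hash

sig = sign(42)
assert verify(42, sig)        # anyone holding the public key can check this
assert not verify(43, sig)    # a forged message fails verification
```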
Public Key Infrastructure (PKI)
Processes and technologies used to issue and manage digital certificates, enabling third parties to authenticate individual users, services and hosts.
A registration authority (RA) is an authority in a network that verifies user requests for a digital certificate and tells the certificate authority (CA) to issue it. RAs are part of a public key infrastructure (PKI), a networked system that enables companies and users to exchange information and money safely and securely. The digital certificate contains a public key that is used to encrypt and decrypt messages and digital signatures.
Research Infrastructures in FP6
Usually physical or logical components, onto which services are provisioned. Resources are considered to be Grid Components within the context of the Grid reference model. Examples include server hardware, network switches, disc arrays, software media and so forth.
Resource Broker
Workload component that finds the best matching resource supplier for the submitted job execution.
SOAP-based web services are comparatively heavyweight and hence best suited for rich client applications and server-to-server (B2B) communication. Lightweight clients, such as JavaScript- and Flex-based rich internet applications, are much better suited to interacting with lightweight services based on the REST model. For instance, the DEISA accounting information services are implemented following the REST model.
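The contrast can be made concrete with a toy dispatcher for a hypothetical accounting resource: in the REST model, state transfer happens through uniform HTTP verbs applied to resource identifiers, rather than through named RPC operations as in SOAP. Everything below is invented for the sketch:

```python
# A toy REST-style handler: the verb (PUT/GET/DELETE) carries the intent,
# the resource id names the entity, and the responses reuse HTTP status codes.
records = {}

def handle(method, record_id, body=None):
    if method == "PUT":        # create or replace the resource
        records[record_id] = body
        return 201, body
    if method == "GET":        # read the resource
        return (200, records[record_id]) if record_id in records else (404, None)
    if method == "DELETE":     # remove the resource
        records.pop(record_id, None)
        return 204, None
    return 405, None           # method not allowed

handle("PUT", "job-17", {"cpu_hours": 128})
status, rec = handle("GET", "job-17")
```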
Risk Management Information Systems are typically computerized systems that assist in consolidating property values, claims, policy, and exposure information, and provide the tracking and management reporting capabilities needed to monitor and control the overall cost of risk.
Resource Usage Service, see Usage Record.
Service Activities are for example Grid Support, Operation and Management, including tasks such as grid monitoring and control and resource and user support, and Resource Provision which includes tasks such as policies and service level agreements.
The Security Assertion Markup Language is an XML framework for exchanging authentication and authorisation information. SAML is a standard of OASIS.
Protection of information assets from unauthorized access through the use of technology, processes, and training.
In the most general sense, one or more software components that respond to client requests. An electronic bookstore application is a service; the database component of the bookstore provides a database service to the bookstore. Thus, high-level, abstract services may be recursively decomposed into lower-level constituent services. In general, a service and all of its decomposable sub-services are considered to be grid components; the term should be qualified, or its meaning made obvious through context.
HPC-Hardware Vendor, offering e.g. SGI-Altix systems based on Intel Itanium.
An architecture and an open source software developed by Internet2/MACE (Middleware Architecture Committee for Education). Shibboleth is based on SAML and allows the implementation of an AAI.
Single Sign-On (SSO)
Enables the user to gain access to multiple resources by authenticating only once.
Service Level Agreement is an agreement between the provider and consumer of a service which stipulates a set of properties or attributes that the service must satisfy, possibly together with a definition of the payment and/or penalties associated with meeting or failing to meet the agreed criteria.
Service Level Objectives are the specific quantifiable and measurable attributes of a service that form the basis of a Service Level Agreement.
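The relationship between SLOs and an SLA check can be sketched in a few lines; the metric names and thresholds below are hypothetical:

```python
# Hypothetical SLOs for a compute service; an SLA would bundle objectives
# like these with payment and/or penalty terms.
slos = {
    "availability_pct": 99.0,   # must be at least this
    "max_queue_wait_h": 24.0,   # must be at most this
}

def slo_met(measured):
    """True when every measured attribute satisfies its objective."""
    return (measured["availability_pct"] >= slos["availability_pct"]
            and measured["max_queue_wait_h"] <= slos["max_queue_wait_h"])
```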
Simple Linux Utility for Resource Management. SLURM is an open-source resource manager designed for Linux clusters of all sizes. It provides three key functions. First it allocates exclusive and/or non- exclusive access to resources (computer nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (typically a parallel job) on a set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.
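A minimal SLURM batch script looks like the following; the job size, time limit, and application name are placeholders, and real sites typically add partition and account directives:

```shell
#!/bin/bash
# Minimal SLURM batch script, submitted with: sbatch job.sh
#SBATCH --job-name=demo
#SBATCH --nodes=2                # allocate two compute nodes ...
#SBATCH --ntasks-per-node=16     # ... with 16 tasks each
#SBATCH --time=01:00:00          # wall-clock limit

srun ./my_parallel_app           # launch the job step on the allocation
```

This covers all three SLURM functions: the `#SBATCH` directives request the allocation, `srun` starts and monitors the work, and the pending script waits in the queue until resources are free.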
Symmetric multiprocessing or SMP involves a multiprocessor computer architecture where two or more identical processors can connect to a single shared main memory. Most common multiprocessor systems today use an SMP architecture. In the case of multi-core processors, the SMP architecture applies to the cores, treating them as separate processors. SMP systems allow any processor to work on any task no matter where the data for that task are located in memory; with proper operating system support, SMP systems can easily move tasks between processors to balance the workload efficiently.
A Service Oriented Architecture is a set of principles and practices for sharing, reusing, and orchestrating business logic represented as services or components. SOA is widely regarded as an efficient and cost-effective way to deliver applications deployed on grid-based infrastructures.
The Storage Resource Broker allows users to access files across a distributed environment. The actual physical location and the way the data is stored is abstracted from the user.
Storage Resource Manager provides the technology needed to manage the rapidly growing distributed data volumes that result from faster and larger computational facilities. SRMs are Grid storage services providing interfaces to storage resources, as well as advanced functionality such as dynamic space allocation and file management on shared storage systems.
Secure Socket Layer, see TLS Transport Layer Security.
Standardisation is the key to realizing the benefits of grid computing, allowing the diverse resources that make up a modern computing environment to be discovered, accessed, allocated, monitored, and in general managed as a single virtual system, even when provided by different vendors and/or operated by different organisations. The Open Grid Forum (OGF) is the organisation leading the global standardisation effort for grid computing.
1000 Gigaflop/s. 10^12 floating point operations per second.
The US open scientific discovery infrastructure combining leadership class resources at eleven partner sites to create an integrated, persistent computational resource. Using high-performance network connections, the TeraGrid integrates high-performance computers, data resources and tools, and high-end experimental facilities around the country. Currently, TeraGrid resources include more than 250 teraflops of computing capability and more than 30 petabytes of online and archival data storage, with rapid access and retrieval over high-performance networks. Researchers can also access more than 100 discipline-specific databases. With this combination of resources, the TeraGrid is the world's largest, most comprehensive distributed cyberinfrastructure for open scientific research. The TeraGrid project is funded by the National Science Foundation and includes 11 partners: Indiana, LONI, NCAR, NCSA, JICS, ORNL, PSC, Purdue, SDSC, TACC and UC/ANL.
Current National Science Foundation (NSF) awards for the operation, user support, and enhancement of the TeraGrid expire in 2010. TeraGrid Extreme Digital Resources for Science and Engineering (XD) is the next phase in the NSF's ongoing effort to build a cyberinfrastructure that delivers high-end digital services, providing US researchers and educators with the capability to work with extremely large amounts of digitally represented information. The primary goal of TeraGrid XD is to enable major advances in science and engineering research, in the integration of research and education, and in broadening participation in science and engineering by under-represented groups, by providing researchers and educators with usable access to extreme-scale digital resources beyond those typically available on a campus, together with the interfaces, consulting support, and training necessary to facilitate their use.
Tier-0, Tier-1, Tier-2
Different levels of HPC capabilities (PetaFlop/s, TeraFlop/s, lower).
Transport Layer Security, and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications on the Internet for such things as web browsing, e-mail, Internet faxing, messaging and other data transfers. There are slight differences between SSL and TLS, but the protocol remains substantially the same.
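Applications rarely implement TLS themselves; Python's standard ssl module, for example, provides it, and a default client-side context enables certificate verification out of the box:

```python
import ssl

# A default client context verifies the server certificate chain and
# checks that the certificate matches the requested hostname.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True
# Wrapping a socket with ctx.wrap_socket(sock, server_hostname="example.org")
# would then perform the TLS handshake before any application data flows.
```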
An open source resource manager providing control over batch jobs and distributed compute nodes. It is a community effort based on the original PBS project and, with more than 1,200 patches, has incorporated significant advances in the areas of scalability, fault tolerance, and feature extensions. This version may be freely modified and redistributed subject to the constraints of the included license.
Trouble-Ticket System
A ticketing system for tracking and resolution of grid-wide support centre issues. The ticketing system supports automatic exchange of tickets among the various support centres that comprise the collective operations activity of DEISA. The system includes capabilities for reception and aggregation of alert notifications originating from various deployed validation services within the DEISA facilities.
The Uniform Interface to Computing Resources offers a ready-to-run Grid system including client and server software. UNICORE makes distributed computing and data resources available in a seamless and secure way in intranets and the internet. The UNICORE project created software that allows users to submit jobs to remote high performance computing resources without having to learn details of the target operating system, data storage conventions and techniques, or administrative policies and procedures at the target site.
Utility Computing
A term often applied to IT infrastructure and technology which may be paid for, or may deliver services that are paid for, based on use or value rather than on component cost. Enterprise Grids, by their nature, will be service-centric and will require telemetry that enables the reconciliation of resources with the services that consume them, i.e. of cost with value, and thus enables utility computing.
UR and RUS
The Usage Record standard is a document definition on how to describe accounting records. This standard has been adopted by DEISA for exchanging accounting information across the infrastructure. Furthermore, DEISA is involved in the development of this standard by actively contributing to the UR-WG within the OGF. The development of a Resource Usage Service (RUS) recommendation describing services for dealing with usage records is also followed closely by DEISA.
ViroLab
ViroLab was an EU FP6 project that developed a Grid-based virtual laboratory for infectious diseases, facilitating medical knowledge discovery and decision support for, e.g., HIV drug resistance, using large, high-quality in-vitro and clinical patient databases. At the core of the ViroLab virtual laboratory is a rule-based ranking system. Using a Grid-based service-oriented architecture, the vertical integration of biomedical information from viruses (proteins and mutations), patients (e.g. viral load) and literature (drug-resistance experiments) results in a rule-based decision support system for drug ranking. The virtual laboratory provides tools for statistical analysis, visualisation, modelling and simulation, to predict the temporal virological and immunological response of viruses with complex mutation patterns to drug therapy. It offers medical doctors a decision support system for ranking drugs targeted at individual patients, and virologists an advanced environment for studying trends at the individual, population and epidemiological levels. ViroLab was supported by DEISA.
Virtual Organisation (VO)
A group of people with similar interests who interact primarily via communication media such as newsletters, telephone, email and online social networks, rather than face to face, for social, professional, educational or other purposes. In Grid computing, a VO is a group whose members share the same computing resources.
Virtualisation
Adding a layer onto some entity so that the new entity exhibits the interface properties of the original. This layer hides the true implementation of the virtualized object so that the original can be changed or replaced without fundamentally impacting the interaction of entities that have a dependency on it. An example is disk LUNs, which present the interface of a disk, yet may be implemented as a whole disk, a partition of a disk or perhaps an aggregation such as a RAID stripe.
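The disk-LUN example above can be sketched in a few lines of code: a consumer depends only on a block-device interface, while the backing implementation may be a whole disk or a RAID-style aggregation. The class and method names here are invented for illustration.

```python
# Hypothetical sketch of virtualisation: the interface stays fixed
# while the backing implementation varies, as with disk LUNs.
from abc import ABC, abstractmethod

class BlockDevice(ABC):
    """The interface that consumers depend on."""
    @abstractmethod
    def read(self, block: int) -> bytes: ...

class WholeDisk(BlockDevice):
    """A 'LUN' backed by a whole disk."""
    def __init__(self, blocks):
        self._blocks = blocks
    def read(self, block):
        return self._blocks[block]

class StripedVolume(BlockDevice):
    """A 'LUN' backed by a RAID-0-style stripe over several disks."""
    def __init__(self, disks):
        self._disks = disks
    def read(self, block):
        disk = self._disks[block % len(self._disks)]
        return disk.read(block // len(self._disks))

def checksum(dev: BlockDevice, n_blocks: int) -> int:
    # The consumer sees only the interface; the implementation can be
    # swapped without fundamentally impacting this code.
    return sum(sum(dev.read(i)) for i in range(n_blocks))
```

Because `checksum` is written against `BlockDevice`, replacing a `WholeDisk` with a `StripedVolume` requires no change to the consumer, which is exactly the point of the virtualisation layer.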
VOMRS
The VO Membership Registration Service (VOMRS) is a server that provides the means for registering members of a Virtual Organization, and coordination of this process among the various VO and grid resource administrators. VOMRS consists of a database to maintain user registration and institutional information, and a web user interface (web UI) for input of data into the database and manipulation of that data.
VOMS
The Virtual Organisation Membership Service (VOMS) is an authorisation system for Virtual Organisations on a Grid.
VPH
The Virtual Physiological Human (VPH) Network of Excellence is a project which aims to help support and progress European research in biomedical modelling and simulation of the human body. This will improve our ability to predict, diagnose and treat disease, and have a dramatic impact on the future of healthcare and the pharmaceutical and medical device industries. VPH is supported by DEISA.
Web Service
A software system designed to support interoperable machine-to-machine interaction over a network. It has an interface described in a machine-processable format (specifically WSDL). Other systems interact with the Web service in a manner prescribed by its description, using SOAP messages typically conveyed over HTTP with an XML serialisation, in conjunction with other Web-related standards.
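The description above can be made concrete with a minimal SOAP 1.1 envelope assembled using the Python standard library. The SOAP envelope namespace is the standard one; the service namespace, operation and parameters are invented for illustration only.

```python
# Minimal SOAP 1.1 request envelope, built with the standard library.
# The operation name, service namespace and parameters are hypothetical.
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"  # SOAP 1.1
ET.register_namespace("soapenv", SOAP_ENV)

def soap_request(operation: str, ns: str, params: dict) -> str:
    """Serialise a one-operation SOAP request body as an XML string."""
    env = ET.Element(f"{{{SOAP_ENV}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_ENV}}}Body")
    op = ET.SubElement(body, f"{{{ns}}}{operation}")
    for name, value in params.items():
        ET.SubElement(op, f"{{{ns}}}{name}").text = str(value)
    return ET.tostring(env, encoding="unicode")

msg = soap_request("SubmitJob", "http://example.org/jobservice",
                   {"executable": "/bin/hostname", "cpus": 4})
print(msg)
```

In practice such a message would be POSTed over HTTP to the endpoint advertised in the service's WSDL, and the response would arrive as a SOAP envelope of the same shape.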
Workflow applications
Simulations where several independent codes act successively on a stream of data, the output of one code being the input of the next one in the chain.
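The chained structure described above can be sketched as a toy in-process pipeline; the step names are invented, and in a real DEISA workflow each "code" would be a separate batch job exchanging files rather than a Python function.

```python
# Toy workflow: independent steps run in sequence, the output of one
# being the input of the next. Step names are hypothetical.
def preprocess(data):
    """First code in the chain: scale the raw input."""
    return [x * 2 for x in data]

def simulate(data):
    """Second code: consume the preprocessed data, produce a result."""
    return sum(data)

def postprocess(total):
    """Final code: format the simulation result."""
    return f"result={total}"

def run_workflow(data, steps):
    # Each step's output becomes the next step's input.
    for step in steps:
        data = step(data)
    return data

print(run_workflow([1, 2, 3], [preprocess, simulate, postprocess]))
# → result=12
```

A workflow engine adds to this skeleton what the sketch omits: staging each step's output files to the site where the next code runs, and restarting failed steps.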
World Community Grid
An effort to create the world's largest public computing grid to tackle scientific research projects that benefit humanity. Launched in November 2004, it is funded and operated by IBM with client software currently available for Windows, Linux, Mac OS X and FreeBSD operating systems. Using the idle time of computers around the world, World Community Grid's research projects have analyzed aspects of the human genome, HIV, dengue, muscular dystrophy, and cancer. The organisation has so far partnered with over 350 other companies and organisations to assist in the work and has over 300,000 registered user accounts.
WS-* standards
When dealing with SOAP-based Web Services, the so-called WS-* standards and recommendations are indispensable for the interoperation of services, regardless of whether those services are used for intra- or inter-infrastructure purposes. Since DEISA is a production infrastructure, taking part in the development of such standards would be well beyond its goals. Nevertheless, DEISA considers it important to look ahead at what may be coming, in order to be prepared and to minimise the effort of uptake and integration.
WS GRAM
The Web Services Grid Resource Allocation and Management (WS GRAM) component comprises a set of WSRF-compliant Web services to locate, submit, monitor, and cancel jobs on Grid computing resources. WS GRAM is not a job scheduler, but rather a set of services and clients for communicating with a range of different batch/cluster job schedulers using a common protocol. WS GRAM is meant to address a range of jobs where reliable operation, stateful monitoring, credential management, and file staging are important. The proprietary, pre-Web service protocol from GT2 (termed "Pre-WS GRAM") is also included.
WS-I
The Web Services Interoperability organisation is an open industry organisation chartered to promote Web services interoperability across platforms, operating systems and programming languages. The organisation's diverse community of Web services leaders helps customers to develop interoperable Web services by providing guidance, recommended practices and supporting resources. Specifically, WS-I creates, promotes and supports generic protocols for the interoperable exchange of messages between Web services. In this context, "generic protocols" are protocols that are independent of any action indicated by a message, other than those actions necessary for its secure, reliable and efficient delivery, and "interoperable" means suitable for multiple operating systems and multiple programming languages. WS-I is now part of OASIS.
WSRF
The Web Services Resource Framework defines a generic and open framework for modeling and accessing stateful resources using Web services. This includes mechanisms to describe views on the state, to support management of the state through properties associated with the Web service, and to describe how these mechanisms are extensible to groups of Web services.
W3C
The World Wide Web Consortium (W3C) develops interoperable technologies (specifications, guidelines, software and tools) to lead the Web to its full potential. W3C is a forum for information, commerce, communication and collective understanding; its mission is to develop protocols and guidelines that ensure the long-term growth of the Web.
X.509 Certificate
The standard for public key infrastructures. It defines, among other things, standard formats for public key certificates.