InfiniBand

[Image: The panel of an InfiniBand switch with CX4/SFF-8470 connectors]

InfiniBand is a switched fabric communications link used in high-performance computing and enterprise data centers. Its features include high throughput, low latency, quality of service and failover, and it is designed to be scalable. The InfiniBand architecture specification defines a connection between processor nodes and high-performance I/O nodes such as storage devices. InfiniBand host bus adapters and network switches are manufactured by Mellanox and Intel (which acquired QLogic's InfiniBand business in January 2012[1]).

InfiniBand forms a superset of the Virtual Interface Architecture (VIA).

  Description

Effective unidirectional theoretical throughput
(actual data rate, not signaling rate)

        SDR        DDR        QDR        FDR-10         FDR            EDR
1X      2 Gbit/s   4 Gbit/s   8 Gbit/s   10.3 Gbit/s    13.64 Gbit/s   25 Gbit/s
4X      8 Gbit/s   16 Gbit/s  32 Gbit/s  41.2 Gbit/s    54.54 Gbit/s   100 Gbit/s
12X     24 Gbit/s  48 Gbit/s  96 Gbit/s  123.6 Gbit/s   163.64 Gbit/s  300 Gbit/s

Like Fibre Channel, PCI Express, Serial ATA, and many other modern interconnects, InfiniBand offers point-to-point bidirectional serial links intended for the connection of processors with high-speed peripherals such as disks. On top of these point-to-point capabilities, InfiniBand also offers multicast operations. It supports several signaling rates and, as with PCI Express, links can be bonded together for additional throughput.

  Signaling rate

An InfiniBand link is a serial link operating at one of five data rates: single data rate (SDR), double data rate (DDR), quad data rate (QDR), fourteen data rate (FDR), and enhanced data rate (EDR).

The SDR connection's signaling rate is 2.5 gigabits per second (Gbit/s) in each direction per connection. DDR is 5 Gbit/s and QDR is 10 Gbit/s. FDR is 14.0625 Gbit/s and EDR is 25.78125 Gbit/s per lane.

For SDR, DDR and QDR, links use 8B/10B encoding (every 10 bits sent carry 8 bits of data), making the effective data transmission rate four-fifths the raw rate. Thus single, double, and quad data rates carry 2, 4, or 8 Gbit/s of useful data per lane, respectively. For FDR and EDR, links use 64B/66B encoding (every 66 bits sent carry 64 bits of data). Neither of these calculations takes into account the additional physical-layer overhead for comma characters or protocol requirements such as StartOfFrame and EndOfFrame.

Implementers can aggregate links in units of 4 or 12, called 4X or 12X. A 12X QDR link therefore carries 120 Gbit/s raw, or 96 Gbit/s of useful data. As of 2009 most systems use a 4X aggregate, implying a 10 Gbit/s (SDR), 20 Gbit/s (DDR) or 40 Gbit/s (QDR) connection. Larger systems with 12X links are typically used for cluster and supercomputer interconnects and for inter-switch connections.
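
For illustration only (this is not taken from the InfiniBand specification), the short C sketch below reproduces the figures in the table above by combining the per-lane signaling rate, the line-code efficiency (8b/10b or 64b/66b) and the link width; the vendor-defined FDR-10 column is left out.

    #include <stdio.h>

    /* Effective throughput = link width x per-lane signaling rate x line-code
     * efficiency. The rates and encodings below are the ones described in the
     * text above. */
    int main(void)
    {
        struct { const char *name; double gbaud; double efficiency; } rates[] = {
            { "SDR",  2.5,      8.0 / 10.0 },   /* 8b/10b encoding  */
            { "DDR",  5.0,      8.0 / 10.0 },
            { "QDR", 10.0,      8.0 / 10.0 },
            { "FDR", 14.0625,  64.0 / 66.0 },   /* 64b/66b encoding */
            { "EDR", 25.78125, 64.0 / 66.0 },
        };
        int widths[] = { 1, 4, 12 };            /* 1X, 4X and 12X aggregates */

        for (size_t i = 0; i < sizeof rates / sizeof rates[0]; i++)
            for (size_t w = 0; w < sizeof widths / sizeof widths[0]; w++)
                printf("%-3s %2dX: raw %8.3f Gbit/s, effective %8.3f Gbit/s\n",
                       rates[i].name, widths[w],
                       widths[w] * rates[i].gbaud,
                       widths[w] * rates[i].gbaud * rates[i].efficiency);
        return 0;
    }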

The InfiniBand future roadmap also has "HDR" (High Data Rate), due in 2014, and "NDR" (Next Data Rate), due "some time later", but as of June 2010 these data rates were not yet tied to specific speeds.[2]

  Latency

Single data rate switch chips have a latency of 200 nanoseconds, DDR switch chips 140 nanoseconds and QDR switch chips 100 nanoseconds. The end-to-end MPI latency ranges from 1.07 microseconds (Mellanox ConnectX QDR HCAs) to 1.29 microseconds (QLogic InfiniPath HCAs) to 2.6 microseconds (Mellanox InfiniHost DDR III HCAs).[citation needed] As of 2009, various InfiniBand host channel adapters (HCAs) exist on the market, each with different latency and bandwidth characteristics. InfiniBand also provides RDMA capabilities for low CPU overhead. The latency for RDMA operations is less than 1 microsecond (Mellanox[3] ConnectX HCAs).
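
MPI latency figures such as those above are typically obtained from ping-pong benchmarks that report half of the average round-trip time between two processes. The C/MPI sketch below is a generic illustration of that technique, not the benchmark cited; run it with at least two ranks (e.g. "mpirun -np 2 ./a.out").

    #include <stdio.h>
    #include <mpi.h>

    /* Generic ping-pong latency sketch: rank 0 and rank 1 bounce a 1-byte
     * message back and forth; half of the average round-trip time is printed
     * as the one-way latency. Illustrative only. */
    int main(int argc, char **argv)
    {
        const int iters = 10000;
        char byte = 0;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("average one-way latency: %.2f microseconds\n",
                   (t1 - t0) / iters / 2.0 * 1e6);

        MPI_Finalize();
        return 0;
    }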

  Topology

InfiniBand uses a switched fabric topology, as opposed to a hierarchical switched network like traditional Ethernet architectures, although emerging Ethernet fabric architectures propose many benefits which could see Ethernet replace InfiniBand.[4] Most network topologies used are fat-tree, mesh or 3D torus. Recent papers (ISCA'10) have demonstrated butterfly (Clos) topologies as well.[5]
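
To make the fat-tree option concrete, here is a small illustrative C sizing sketch (an assumption-laden exercise, not something defined by InfiniBand itself): in a non-blocking two-level fat-tree built from switches of radix r, each leaf switch devotes half its ports to hosts and half to spine uplinks, giving at most r leaf switches, r/2 spine switches and r*r/2 hosts.

    #include <stdio.h>

    /* Illustrative sizing of a non-blocking two-level fat-tree built from
     * identical switches of radix r: each leaf uses r/2 ports for hosts and
     * r/2 for uplinks, so at most r leaves, r/2 spines and r*r/2 hosts.
     * The radices below are assumed example values. */
    int main(void)
    {
        int radix[] = { 24, 36, 48 };

        for (int i = 0; i < 3; i++) {
            int r = radix[i];
            printf("radix %2d: %2d spine switches, %2d leaf switches, up to %4d hosts\n",
                   r, r / 2, r, r * r / 2);
        }
        return 0;
    }

A radix of 36, for example, gives at most 648 host ports from two switch levels.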

All transmissions begin or end at a "channel adapter." Each processor contains a host channel adapter (HCA) and each peripheral has a target channel adapter (TCA). These adapters can also exchange information for security or quality of service (QoS).

  Messages

InfiniBand transmits data in packets of up to 4 KB that are taken together to form a message. A message can be:

  • a remote direct memory access (RDMA) read from, or write to, a remote node
  • a channel send or receive
  • a transaction-based operation (that can be reversed)
  • a multicast transmission
  • an atomic operation
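
As a hedged sketch of how one of these message types is expressed in software, the C fragment below posts a single RDMA-write work request using the OpenFabrics libibverbs API; the queue pair, the registered memory region and the peer's remote address and rkey are assumed to have been created and exchanged beforehand, and none of that setup is shown here.

    #include <stddef.h>
    #include <stdint.h>
    #include <infiniband/verbs.h>

    /* Sketch only: post one RDMA-write message on an already connected queue
     * pair. The queue pair (qp), the locally registered memory region (mr)
     * and the peer's remote_addr/rkey are assumed to have been set up and
     * exchanged out of band. */
    static int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                               void *local_buf, size_t len,
                               uint64_t remote_addr, uint32_t rkey)
    {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)local_buf,   /* local source buffer          */
            .length = (uint32_t)len,
            .lkey   = mr->lkey,               /* key of the registered region */
        };
        struct ibv_send_wr wr = {
            .wr_id      = 1,
            .sg_list    = &sge,
            .num_sge    = 1,
            .opcode     = IBV_WR_RDMA_WRITE,  /* one-sided RDMA write         */
            .send_flags = IBV_SEND_SIGNALED,  /* request a completion entry   */
        };
        struct ibv_send_wr *bad_wr = NULL;

        wr.wr.rdma.remote_addr = remote_addr;
        wr.wr.rdma.rkey        = rkey;

        return ibv_post_send(qp, &wr, &bad_wr);
    }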

  Applications

InfiniBand has been adopted in enterprise datacenters (for example, Oracle Exadata and Exalogic machines), the financial sector, cloud computing (an InfiniBand-based system won the Best of VMworld award for cloud computing) and more. InfiniBand has mostly been used for high-performance computing (HPC) cluster applications. A number of the TOP500 supercomputers have used InfiniBand, including the former[6] reigning fastest supercomputer, the IBM Roadrunner.

SGI, LSI, DDN, Oracle and Rorke Data, among others, have also released storage utilizing InfiniBand "target adapters". These products essentially compete with architectures such as Fibre Channel, SCSI, and other more traditional connectivity methods. Such target-adapter-based disks can become a part of the fabric of a given network, in a fashion similar to DEC VMS clustering. The advantage of this configuration is lower latency and higher availability to nodes on the network (because of the fabric nature of the network). In 2009, the Oak Ridge National Laboratory Spider storage system used this type of InfiniBand-attached storage to deliver over 240 gigabytes per second of bandwidth.

  Physical Interconnection

InfiniBand uses copper CX4 cable for SDR and DDR rates, also commonly used to connect SAS (Serial Attached SCSI) HBAs to external (SAS) disk arrays. With SAS, this is known as an SFF-8470 connector and is referred to as an "InfiniBand-style" connector. The latest connectors used with QDR and FDR are QSFP (Quad SFP) and can be copper or fiber, depending on the length required.

  Programming

InfiniBand has no standard programming API within the specification. The standard only lists a set of "verbs": functions that must exist. The syntax of these functions is left to the vendors. The de facto standard to date has been the syntax developed by the OpenFabrics Alliance, which has been adopted by most of the InfiniBand vendors, for GNU/Linux, FreeBSD, and MS Windows. The InfiniBand software stack developed by the OpenFabrics Alliance is released as the "OpenFabrics Enterprise Distribution (OFED)" under a choice of two licenses (GPL2 or BSD) for Linux and FreeBSD, and as "WinOF" under a BSD license for Windows.
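
As a minimal sketch of the verbs API in practice (assuming a host with the OpenFabrics libibverbs library installed; link with -libverbs), the program below enumerates the local InfiniBand devices and prints the state and LID of port 1 on each; error handling is kept to a bare minimum.

    #include <stdio.h>
    #include <infiniband/verbs.h>

    /* List the local InfiniBand devices and print the state and LID of
     * port 1 of each one. */
    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devs = ibv_get_device_list(&num_devices);

        if (!devs) {
            perror("ibv_get_device_list");
            return 1;
        }
        for (int i = 0; i < num_devices; i++) {
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            struct ibv_port_attr port;

            if (!ctx)
                continue;
            if (ibv_query_port(ctx, 1, &port) == 0)
                printf("%s: port 1 state %d, LID 0x%04x\n",
                       ibv_get_device_name(devs[i]), port.state, port.lid);
            ibv_close_device(ctx);
        }
        ibv_free_device_list(devs);
        return 0;
    }

A real application would go on to allocate a protection domain, register memory, create completion queues and queue pairs, and exchange addressing information with its peers before posting work requests.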

  History

InfiniBand originated from the 1999 merger of two competing designs:

  1. Future I/O, developed by Compaq, IBM, and Hewlett-Packard
  2. Next Generation I/O (ngio), developed by Intel, Microsoft, and Sun

From the Compaq side, the roots of the technology derived from Tandem's ServerNet. For a short time before the group came up with a new name, InfiniBand was called System I/O.[7]

InfiniBand was originally envisioned[by whom?] as a comprehensive "system area network" that would connect CPUs and provide all high speed I/O for "back-office" applications. In this role it would potentially replace just about every datacenter I/O standard including PCI, Fibre Channel, and various networks like Ethernet. Instead, all of the CPUs and peripherals would be connected into a single pan-datacenter switched InfiniBand fabric. This vision offered a number of advantages in addition to greater speed, not the least of which was that the I/O workload would be largely lifted from the computers and storage. In theory, this should make the construction of clusters much easier, and potentially less expensive, because more devices could be shared and they could be easily moved around as workloads shifted. Proponents of a less comprehensive vision saw InfiniBand as a pervasive, low latency, high bandwidth, low overhead interconnect for commercial datacenters, albeit one that might perhaps only connect servers and storage to each other, while leaving more local connections to other protocols and standards such as PCI.[citation needed]

As of 2009 InfiniBand has become a popular interconnect for high-performance computing, and its adoption as seen in the TOP500 supercomputers list has grown faster than that of Ethernet.[8] In recent years InfiniBand has been increasingly adopted in enterprise datacenters.

In 2008 Oracle Corporation released its HP Oracle Database Machine, built as a RAC database (Real Application Clusters) with storage provided by its Exadata Storage Server, which utilizes InfiniBand as the backend interconnect for all I/O and interconnect traffic. Updated versions of the Exadata storage system, now using Sun computing hardware, continue to utilize InfiniBand infrastructure.

In 2009, IBM announced a December 2009 release date for its DB2 pureScale offering, a shared-disk clustering scheme (inspired by Parallel Sysplex for DB2 on z/OS) that uses a cluster of IBM System p servers (POWER6/7) communicating with each other over an InfiniBand interconnect.

In 2010, scale-out network storage manufacturers increasingly adopted InfiniBand as the primary cluster interconnect for modern NAS designs, such as Isilon IQ and IBM SONAS.[9] Since scale-out systems run distributed metadata operations without a master node, low-latency internal communication is a critical success factor for scalability and performance.

In 2010, Oracle released its Exadata and Exalogic machines, which implement QDR InfiniBand at 40 Gbit/s (32 Gbit/s effective) using Sun switches (the Sun Network QDR InfiniBand Gateway Switch). The InfiniBand fabric is used to connect the compute nodes to each other and to the storage, and also to interconnect several Exadata and Exalogic machines.

In June 2011, FDR switches and adapters were announced at the International Supercomputing Conference.[10]

