Server/DC-MHS

==Welcome==
 
Welcome to the OCP Data Center – Modular Hardware System (DC-MHS) Sub-Project.
 
DC-MHS R1 envisions interoperability between key elements of datacenter, edge and enterprise infrastructure by providing consistent interfaces and form factors among modular building blocks.
 
DC-MHS R1 standardizes a collection of HPM (Host Processor Modules) form-factors and supporting ingredients to allow interoperability of HPMs and platforms.
 
DC-MHS comprises six workstreams, with the following objectives:
 
*M-HPM (Host Processor Modules) Workstream, which comprises three specifications:
**M-FLW (FulL Width HPM)
***Specifies the requirements of a Full Width Host Processor Module (HPM), for use in products designed for a minimum 19” rack (i.e., compliant with EIA-310-E) while also accommodating larger 21” racks. This form factor enables full width HPM usage for CPUs, DIMMs, and related features.
**M-DNO (DeNsity Optimized HPM)
***Outlines the requirements of a family of partial width, DeNsity Optimized Host Processor Module (HPM) form factors (M-DNO for short) within the OCP Modular 240 hardware system group of specifications. The M-DNO specification embodies design considerations for CPUs, DIMMs, and other server processor related features commonly used by the industry today, but is not limited to those functions.
**M-SDNO (Modular Hardware System Scalable DeNsity Optimized Specification)
***Builds upon M-DNO to define partial and full width HPMs with variable board outlines covering 19” EIA-310 and 21” OpenRack v3 infrastructure.
*M-XIO/PESTI (eXtended I/O Connectivity/PEripheral SideBand Tunneling Interface)
**Outlines the Modular Extensible I/O (M-XIO) source connector hardware strategy. An M-XIO source connector provides entry and exit points between sources (such as motherboards, Host Processor Modules, and RAID controllers) and peripheral subsystems (such as PCIe risers and backplanes). M-XIO covers the connector, the high speed and management signal interface details, and the supported pinouts. The workstream also defines the base requirements of the PEripheral SideBand Tunneling Interface (M-PESTI) for electrical and protocol compatibility between components of a DC-MHS platform. The M-PESTI protocol overloads a common PRSNT# signal with additional capabilities beyond simple presence/absence of a peripheral.
*M-PIC (Platform Infrastructure Connectivity)
**Defines and standardizes common elements needed to interface a Host Processor Module (HPM) to the platform/chassis infrastructure elements/subsystems within the DC-MHS 1.0 family of OCP servers. Standardization of the common interfaces and connectors enables hardware compatibility between DC-MHS HPMs and various DC-MHS system components.
*M-CRPS (Common Redundant Power Supply)
**Defines the requirements for an M-CRPS internal redundant power supply for Open Compute Project systems, usable in environments such as home/office, datacenter, and high-performance computing. The goal is to harmonize server power supply requirements across the industry into a single standard specification that Enterprise and Hyperscale customers and vendors can use for their products.
*M-SIF (Shared InFrastructure)
**Improves interoperability for shared infrastructure enclosures with multiple, serviceable modules. Modules containing elements (HPMs, DC-SCMs, peripherals, etc.) are blind-matable and hot-pluggable into a shared infrastructure enclosure.
*M-PnP (Modular Plug and Play)
**Eases system firmware and platform enablement integration. Defines requirements for DC-MHS supporting systems to use industry standard processes, accepted algorithms, and protocols to discover, manage, secure, and maintain systems based on DC-MHS ingredients.
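The M-PESTI idea above, overloading a single presence signal so it carries more than a present/absent bit, can be illustrated with a small sketch. This is not the actual M-PESTI wire protocol; the class names and the simplified capability query are hypothetical, chosen only to show the concept of a host first sampling an active-low PRSNT# line and then reusing the same connection to learn more about the attached peripheral.

```python
# Illustrative model only -- not the real M-PESTI framing or timing.
class PestiLikePeripheral:
    """A peripheral on a single-wire presence interface."""

    def __init__(self, capabilities):
        self.capabilities = capabilities  # e.g. ["hot-plug", "telemetry"]

    def assert_present(self):
        # Active-low PRSNT#: a connected peripheral pulls the line low (0).
        return 0


class HostPort:
    """Host side: samples presence first, then (in the real protocol)
    reuses the same wire as a sideband channel to query the peripheral."""

    def __init__(self, peripheral=None):
        self.peripheral = peripheral

    def sample_prsnt(self):
        # Line is pulled up (reads 1) when nothing is attached.
        if self.peripheral is None:
            return 1
        return self.peripheral.assert_present()

    def discover(self):
        if self.sample_prsnt() == 1:
            return None  # nothing attached
        # Stand-in for the tunneled capability exchange.
        return self.peripheral.capabilities


port = HostPort(PestiLikePeripheral(["hot-plug", "telemetry"]))
print(port.sample_prsnt())  # 0 -> peripheral present (active low)
print(port.discover())      # ['hot-plug', 'telemetry']
```

The point of the sketch is the design choice: because presence is signaled by a level (low) rather than data, the same conductor is free to carry a tunneled protocol once presence has been established, which is what lets M-PESTI add capabilities without adding pins.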
 
 
Disclaimer: Please do not submit any confidential information to the Project Community. All presentation materials, proposals, meeting minutes and/or supporting documents are published by OCP and are open to the public in accordance with OCP's Bylaws and IP Policy. This can be found on the [http://www.opencompute.org/about/ocp-policies/ OCP Policies] page. If you have any questions please contact OCP.


==Project Leadership==