Main Page: Difference between revisions

From formulasearchengine
Jump to navigation Jump to search
No edit summary
No edit summary
Line 1: Line 1:
[[File:NetworkOperations.jpg|thumb|right|An operation engineer overseeing a network operations control room of a data center]]
{{Continuum mechanics}}
The expression '''Gay-Lussac's law''' is used for each of the two relationships named after the French chemist [[Joseph Louis Gay-Lussac]] and which concern the properties of [[gases]], though it is more usually applied to his law of combining volumes, the first listed here. One law relates to volumes before and after a chemical reaction while the other concerns the pressure and temperature relationship for a sample of gas.


A '''data center''' or '''computer centre''' (also '''datacenter''') is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and security devices.
== Law of combining volumes ==
[[Image:Law_of_combining_volumes.svg|thumb|150px|right|Under [[Standard conditions for temperature and pressure|STP]], a reaction between three cubic meters of hydrogen gas and one cubic meter of nitrogen gas will produce circa two cubic meters of [[ammonia]]]]
The law of combining volumes states that, when gases react together to form other gases, and all volumes are measured at the same temperature and pressure:
<blockquote>
'''The ratio between the volumes of the reactant gases and the products can be expressed in simple whole numbers.'''
</blockquote>
For example, Gay-Lussac found that 2 volumes of Hydrogen and 1 volume of Oxygen would react to form 2 volume of gaseous water. In addition to Gay-Lussac's results, Amedeo Avogadro theorized that, at the same temperature and pressure, equal volumes of gas contain equal numbers of molecules ([[Avogadro's law]]). This hypothesis meant that the previously stated result
:2 volumes of Hydrogen + 1 volume of Oxygen = 2 volumes of gaseous water
could also be expressed as
:2 molecules of Hydrogen + 1 molecule of Oxygen = 2 molecules of water.


==History==
The law of combining gases was published by Joseph Louis Gay-Lussac in 1808.<ref>http://www.chemistryexplained.com/Fe-Ge/Gay-Lussac-Joseph-Louis.html</ref> Avogadro's hypothesis, however, was not initially accepted by chemists until the Italian chemist [[Stanislao Cannizzaro]] was able to convince the First International Chemical Congress in 1860.<ref>Hartley, Harold (1966). "Stanislao Cannizzaro, F.R.S. (1826 – 1910) and the First International Chemical Conference at Karlsruhe". Notes and Records of the Royal Society of London 21: 56–63. {{doi|10.1098/rsnr.1966.0006}}.</ref>


Data centers have their roots in the huge computer rooms of the early ages of the computing industry. Early computer systems were complex to operate and maintain, and required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised, such as standard [[19-inch rack|rack]]s to mount equipment, [[Raised floor|elevated floor]]s, and [[cable tray]]s (installed overhead or under the elevated floor). Also, a single mainframe required a great deal of power, and had to be cooled to avoid overheating. Security was important&nbsp;– computers were expensive, and were often used for military purposes. Basic design guidelines for controlling access to the computer room were therefore devised.
== Pressure-temperature law ==


During the boom of the microcomputer industry, and especially during the 1980s, computers started to be deployed everywhere, in many cases with little or no care about operating requirements. However, as [[information technology]] (IT) operations started to grow in complexity, companies grew aware of the need to control IT resources. With the advent of [[Client–server model|client-server]] computing, during the 1990s, microcomputers (now called "[[Server (computing)|servers]]") started to find their places in the old computer rooms. The availability of inexpensive [[Networking hardware|networking]] equipment, coupled with new standards for network [[structured cabling]], made it possible to use a hierarchical design that put the servers in a specific room inside the company. The use of the term "data center," as applied to specially designed computer rooms, started to gain popular recognition about this time.
Gay-Lussac's name is also associated in physics with another gas law, the so-called pressure law, which states that:
<blockquote>
'''The pressure of a gas of fixed [[mass]] and fixed [[volume]] is [[Proportionality (mathematics)|directly proportional]] to the gas' absolute temperature.'''
</blockquote>
[[File:Temperature Pressure law.svg|thumb|Illustration of pressure varying with temperature.]]
Simply put, if a gas' temperature increases then so does its pressure, if the mass and volume of the gas are held constant. The law has a particularly simple mathematical form if the temperature is measured on an absolute scale, such as in [[kelvin]]s. The law can then be expressed mathematically as:


The boom of data centers came during the [[dot-com bubble]]. Companies needed fast Internet connectivity and nonstop operation to deploy systems and establish a presence on the Internet. Installing such equipment was not viable for many smaller companies. Many companies started building very large facilities, called Internet data centers (IDCs), which provide businesses with a range of solutions for systems deployment and operation. New technologies and practices were designed to handle the scale and the operational requirements of such large-scale operations. These practices eventually migrated toward the private data centers, and were adopted largely because of their practical results.
:<math>{P}\propto{T}</math>
or
:<math>\frac{P}{T}=k</math>


With an increase in the uptake of [[cloud computing]], business and government organizations are scrutinizing data centers to a higher degree in areas such as security, availability, environmental impact and adherence to standards. Standard Documents from accredited professional groups, such as the [[Telecommunications Industry Association]], specify the requirements for data center design. Well-known operational metrics for data center availability can be used to evaluate the business impact of a disruption. There is still a lot of development being done in operation practice, and also in environmentally friendly data center design. Data centers are typically very expensive to build and maintain.
where:


==Requirements for modern data centers==
:''P'' is the [[pressure]] of the gas (measured in ATM).
[[File:Datacenter-telecom.jpg|thumb|left|Racks of telecommunications equipment in part of a data center]]
:''T'' is the [[temperature]] of the gas (measured in Kelvin).
IT operations are a crucial aspect of most organizational operations. One of the main concerns is '''business continuity'''; companies rely on their information systems to run their operations. If a system becomes unavailable, company operations may be impaired or stopped completely. It is necessary to provide a reliable infrastructure for IT operations, in order to minimize any chance of disruption. Information security is also a concern, and for this reason a data center has to offer a secure environment which minimizes the chances of a security breach. A data center must therefore keep high standards for assuring the integrity and functionality of its hosted computer environment. This is accomplished through redundancy of both fiber optic cables and power, which includes emergency backup power generation.
:''k'' is a [[Constant (mathematics)|constant]].


The [[Telecommunications Industry Association]]'s [http://global.ihs.com/doc_detail.cfm?currency_code=USD&customer_id=2125442B200A&oshid=2125442B200A&shopping_cart_id=292558332D4B404849594D5B260A&country_code=US&lang_code=ENGL&item_s_key=00414811&item_key_date=940819&input_doc_number=TIA-942&input_doc_title= TIA-942 Telecommunications Infrastructure Standard for Data Centers], which specifies the minimum requirements for telecommunications infrastructure of data centers and computer rooms including single tenant enterprise data centers and multi-tenant Internet hosting data centers. The topology proposed in this document is intended to be applicable to any size data center.<ref>http://www.tiaonline.org/standards/</ref>
This law holds true because temperature is a measure of the average [[kinetic energy]] of a substance; as the kinetic energy of a gas increases, its particles collide with the container walls more rapidly, thereby exerting increased pressure.


Telcordia  [http://telecom-info.telcordia.com/site-cgi/ido/docs.cgi?ID=SEARCH&DOCUMENT=GR-3160& GR-3160, ''NEBS Requirements for Telecommunications Data Center Equipment and Spaces''], provides guidelines for data center spaces within telecommunications networks, and environmental requirements for the equipment intended for installation in those spaces. These criteria were developed jointly by Telcordia and industry representatives. They may be applied to data center spaces housing data processing or Information Technology (IT) equipment. The equipment may be used to:
For comparing the same substance under two different sets of conditions, the law can be written as:
*Operate and manage a carrier’s telecommunication network
*Provide data center based applications directly to the carrier’s customers
*Provide hosted applications for a third party to provide services to their customers
*Provide a combination of these and similar data center applications.


Effective data center operation requires a balanced investment in both the facility and the housed equipment. The first step is to establish a baseline facility environment suitable for equipment installation.  Standardization and modularity can yield savings and efficiencies in the design and construction of telecommunications data centers.
:<math>\frac{P_1}{T_1}=\frac{P_2}{T_2} \qquad \mathrm{or} \qquad {P_1}{T_2}={P_2}{T_1}.</math>


Standardization means integrated building and equipment engineering. Modularity has the benefits of scalability and easier growth, even when planning forecasts are less than optimal. For these reasons, telecommunications data centers should be planned in repetitive building blocks of equipment, and associated power and support (conditioning) equipment when practical. The use of dedicated centralized systems requires more accurate forecasts of future needs to prevent expensive over construction, or perhaps worse&nbsp;— under construction that fails to meet future needs.
'''Amontons' Law of Pressure-Temperature:''' The pressure law described above should actually be attributed to [[Guillaume Amontons]], who, between 1700 and 1702<ref>{{citation | author = Barnett, Martin K. | year = Aug 1941 | title = A brief history of thermometry | journal = Journal of Chemical Education | volume = 18 | issue = 8 | page = 358|bibcode = 1941JChEd..18..358B |doi = 10.1021/ed018p358 }}. [http://pubs.acs.org/doi/abs/10.1021/ed018p358 Extract.]</ref><ref>http://web.fccj.org/~ethall/gaslaw/gaslaw.htm</ref>, discovered that the pressure of a fixed mass of gas kept at a constant volume is proportional to the temperature.  Amontons discovered this while building an "air thermometer".  Calling it Gay-Lussac's law is simply incorrect as Gay-Lussac investigated the relationship between volume and temperature (i.e. Charles' Law), not pressure and temperature.


The "lights-out" data center, also known as a darkened or a dark data center, is a data center that, ideally, has all but eliminated the need for direct access by personnel, except under extraordinary circumstances. Because of the lack of need for staff to enter the data center, it can be operated without lighting. All of the devices are accessed and managed by remote systems, with automation programs used to perform unattended operations. In addition to the energy savings, reduction in staffing costs and the ability to locate the site further from population centers, implementing a lights-out data center reduces the threat of malicious attacks upon the infrastructure.<ref>{{cite book | first=Victor | last=Kasacavage | year=2002 | page=227 | title=Complete book of remote access: connectivity and security | series=The Auerbach Best Practices Series | publisher=CRC Press | isbn=0-8493-1253-1
[[Charles' Law]] was also known as the Law of Charles and Gay-Lussac, because Gay-Lussac published it in 1802 using much of Charles's unpublished data from 1787. However, in recent years the term has fallen out of favor, and Gay-Lussac's name is now generally associated with the law of combining volumes. [[Amontons' Law]], [[Charles' Law]], and [[Boyle's law]] form the [[combined gas law]]. The three gas laws in combination with [[Avogadro's Law]] can be generalized by the [[ideal gas law]].
}}</ref><ref>{{cite book | author=Burkey, Roxanne E.; Breakfield, Charles V. | year=2000 | title=Designing a total data solution: technology, implementation and deployment | page=24 | series=Auerbach Best Practices | publisher=CRC Press | isbn=0-8493-0893-3 }}</ref>


There is a trend to modernize data centers in order to take advantage of the performance and energy efficiency increases of newer IT equipment and capabilities, such as [[cloud computing]]. This process is also known as data center transformation.<ref name="mspmentor.net">Mukhar, Nicholas. "HP Updates Data Center Transformation Solutions," August 17, 2011 [http://www.mspmentor.net/2011/08/17/hp-updates-data-transformation-solutions/]</ref>
==See also==
 
{{portal|Underwater diving}}
Organizations are experiencing rapid IT growth but their data centers are aging. Industry research company [[International Data Corporation]] (IDC) puts the average age of a data center at nine-years-old.<ref name="mspmentor.net"/> [[Gartner]], another research company says data centers older than seven years are obsolete.<ref>[http://www.forbes.com/2010/03/12/cloud-computing-ibm-technology-cio-network-data-centers.html Sperling, Ed. "Next-Generation Data Centers," Forbes, March 15. 2010]</ref>
* [[Avogadro's law]]
 
* [[Boyle's law]]
In May 2011, data center research organization [[Uptime Institute]], reported that 36 percent of the large companies it surveyed expect to exhaust IT capacity within the next 18 months.<ref>Niccolai, James. "Data Centers Turn to Outsourcing to Meet Capacity Needs," CIO.com, May 10, 2011 [http://www.cio.com/article/681897/Data_Centers_Turn_to_Outsourcing_to_Meet_Capacity_Needs]</ref>
* [[Charles' law]]
 
* [[Combined gas law]]
Data center transformation takes a step-by-step approach through integrated projects carried out over time. This differs from a traditional method of data center upgrades that takes a serial and siloed approach.<ref>Tang, Helen. "Three Signs it's time to transform your data center," August 3, 2010, Data Center Knowledge [http://www.datacenterknowledge.com/archives/2010/08/03/three-signs-it%E2%80%99s-time-to-transform-your-data-center/]</ref> The typical projects within a data center transformation initiative include standardization/consolidation, virtualization, [[automation]] and security.
 
*Standardization/consolidation: The purpose of this project is to reduce the number of data centers a large organization may have. This project also helps to reduce the number of hardware, software platforms, tools and processes within a data center. Organizations replace aging data center equipment with newer ones that provide increased capacity and performance. Computing, networking and management platforms are standardized so they are easier to manage.<ref name="datacenterknowledge.com">Miller, Rich. "Complexity: Growing Data Center Challenge," Data Center Knowledge, May 16, 2007
[http://www.datacenterknowledge.com/archives/2007/05/16/complexity-growing-data-center-challenge/]</ref>
 
*Virtualize: There is a trend to use IT virtualization technologies to replace or consolidate multiple data center equipment, such as servers. Virtualization helps to lower capital and operational expenses,<ref>Sims, David. "Carousel's Expert Walks Through Major Benefits of Virtualization," TMC Net, July 6, 2010
[http://virtualization.tmcnet.com/topics/virtualization/articles/193652-carousels-expert-walks-through-major-benefits-virtualization.htm]</ref> and reduce energy consumption.<ref>Delahunty, Stephen. "The New urgency for Server Virtualization," InformationWeek, August 15, 2011. [http://www.informationweek.com/news/government/enterprise-architecture/231300585]</ref> Virtualization technologies are also used to create virtual desktops, which can then be hosted in data centres and rented out on a subscription basis.<ref>{{cite web|title=HVD: the cloud's silver lining|url=http://www.intrinsictechnology.co.uk/FileUploads/HVD_Whitepaper.pdf|publisher=Intrinsic Technology|accessdate=30 August 2012}}</ref>  Data released by investment bank Lazard Capital Markets reports that 48 percent of enterprise operations will be virtualized by 2012. Gartner views virtualization as a catalyst for modernization.<ref>Miller, Rich. "Gartner: Virtualization Disrupts Server Vendors," Data Center Knowledge, December 2, 2008 [http://www.datacenterknowledge.com/archives/2008/12/02/gartner-virtualization-disrupts-server-vendors/]</ref>
 
*Automating: Data center automation involves automating tasks such as [[provisioning]], configuration, [[Patch (computing)|patching]], release management and compliance. As enterprises suffer from few skilled IT workers,<ref name="datacenterknowledge.com"/> automating tasks make data centers run more efficiently.
 
*Securing: In modern data centers, the security of data on virtual systems is integrated with existing security of physical infrastructures.<ref>Ritter, Ted. Nemertes Research, "Securing the Data-Center Transformation Aligning Security and Data-Center Dynamics," [http://lippisreport.com/2011/05/securing-the-data-center-transformation-aligning-security-and-data-center-dynamics/]</ref> The security of a modern data center must take into account physical security, network security, and data and user security.
 
==Carrier neutrality==
Today many data centers are run by [[Internet service provider]]s solely for the purpose of hosting their own and third party [[Server (computing)|server]]s.
 
However traditionally data centers were either built for the sole use of one large company (i.e. Google, Amazon etc.) or as [[carrier hotel]]s or [[Network-neutral data center]]s.
 
These facilities enable interconnection of carriers and act as regional fiber hubs serving local business in addition to hosting content [[Server (computing)|server]]s.
 
==Data center tiers==
The [[Telecommunications Industry Association]] is a trade association accredited by ANSI (American National Standards Institute). In 2005 it published [http://global.ihs.com/doc_detail.cfm?currency_code=USD&customer_id=2125452B2C0A&oshid=2125452B2C0A&shopping_cart_id=292558332C4A2020495A4D3B200A&country_code=US&lang_code=ENGL&item_s_key=00414811&item_key_date=940819&input_doc_number=TIA-942&input_doc_title= ANSI/TIA-942], Telecommunications Infrastructure Standard for Data Centers, which defined four levels (called tiers) of data centers in a thorough, quantifiable manner. TIA-942 was amended in 2008 and again in 2010. [http://www.adc.com/Attachment/1270711929361/102264AE.pdf ''TIA-942:Data Center Standards Overview''] describes the requirements for the data center infrastructure. The simplest is a Tier 1 data center, which is basically a [[server room]], following basic guidelines for the installation of computer systems. The most stringent level is a Tier 4 data center, which is designed to host mission critical computer systems, with fully redundant subsystems and compartmentalized security zones controlled by [[biometric]] access controls methods. Another consideration is the placement of the data center in a subterranean context, for data security as well as environmental considerations such as cooling requirements.<ref>A ConnectKentucky article mentioning Stone Mountain Data Center Complex {{cite web|title=Global Data Corp. to Use Old Mine for Ultra-Secure Data Storage Facility|url=http://connectkentucky.org/_documents/connected_fall_FINAL.pdf|format=PDF|publisher=ConnectKentucky|accessdate=2007-11-01|date=2007-11-01}}</ref>
 
The German [[Datacenter star audit]] program uses an auditing process to certify 5 levels of "gratification" that affect Data Center criticality.
 
Independent from the ANSI/TIA-942 standard, the [[Uptime Institute]], a think tank and professional-services organization based in [[Santa Fe, New Mexico|Santa Fe]], [[New Mexico]], has defined its own four levels on which it holds a copyright. The levels describe the availability of data from the hardware at a location. The higher the tier, the greater the availability. The levels are:
<ref>A document from the Uptime Institute describing the different tiers (click through the download page) {{cite web|title=Data Center Site Infrastructure Tier Standard: Topology
|url=http://uptimeinstitute.org/index.php?option=com_docman&task=doc_download&gid=82|format=PDF|publisher=Uptime Institute|accessdate=2010-02-13|date=2010-02-13}}</ref>
<ref>The rating guidelines from the Uptime Institute {{cite web|title=Data Center Site Infrastructure  Tier Standard: Topology
|url=http://professionalservices.uptimeinstitute.com/UIPS_PDF/TierStandard.pdf|format=PDF|publisher=Uptime Institute|accessdate=2010-02-13|date=2010-02-13}}</ref>
 
{| class="wikitable"
|-
! Tier Level
! Requirements
|-
| 1
|
* Single non-redundant distribution path serving the IT equipment
* Non-redundant capacity components
* Basic site infrastructure with expected availability of 99.671%
|-
| 2
|
* Meets or exceeds all Tier 1 requirements
* Redundant site infrastructure capacity components with expected availability of 99.741%
|-
| 3
|
* Meets or exceeds all Tier 1 and Tier 2 requirements
* Multiple independent distribution paths serving the IT equipment
* All IT equipment must be dual-powered and fully compatible with the topology of a site's architecture
* Concurrently maintainable site infrastructure with expected availability of 99.982%
|-
| 4
|
* Meets or exceeds all Tier 1, Tier 2 and Tier 3 requirements
* All cooling equipment is independently dual-powered, including chillers and heating, ventilating and air-conditioning (HVAC) systems
* Fault-tolerant site infrastructure with electrical power storage and distribution facilities with expected availability of 99.995%
|}
 
NOTE: The difference between 99.982% and 99.995%, 0.013%, while seemingly nominal, it could be significant depending on the application. Looking at one (1) year or 525,600 minutes, T3 will be unavailable 94.608 minutes whereas T4 will only be unavailable 26.28 minutes. Therefore, T4 in a year will be available for 68.328 more minutes than T3. One can only imagine how many more credit cards transactions can take place when the services are run by a T4 as opposed to a T3 center. Finally, when one considers T3 vs. T2, one would find T3 designed to be 22.6 hrs more available than T2.


==Design considerations==
== References ==
[[File:Rack001.jpg|thumb|right|A typical server rack, commonly seen in [[colocation center|colocation]]]]
{{Reflist}}
A data center can occupy one room of a building, one or more floors, or an entire building. Most of the equipment is often in the form of servers mounted in [[19 inch rack]] cabinets, which are usually placed in single rows forming corridors (so-called aisles) between them. This allows people access to the front and rear of each cabinet. Servers differ greatly in size from [[Rack unit|1U server]]s to large freestanding storage silos which occupy many tiles on the floor. Some equipment such as [[mainframe computer]]s and [[computer storage|storage]] devices are often as big as the racks themselves, and are placed alongside them. Very large data centers may use [[intermodal container|shipping container]]s packed with 1,000 or more servers each;<ref>{{cite web|url=http://www.youtube.com/watch?v=zRwPSFpLX8I|title=Google Container Datacenter Tour (video)}}</ref> when repairs or upgrades are needed, whole containers are replaced (rather than repairing individual servers).<ref>{{cite web| title=Walking the talk: Microsoft builds first major container-based data center| url=http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9075519| archiveurl=http://web.archive.org/web/20080612193106/http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9075519| archivedate=2008-06-12| accessdate=2008-09-22}}</ref>
http://www.ausetute.com.au/gaylusac.html


Local building codes may govern the minimum ceiling heights.
http://chemed.chem.purdue.edu/genchem/topicreview/bp/ch4/gaslaws3.html#amonton


===Design Programming===
http://www.bookrags.com/biography/joseph-louis-gay-lussac-wsd/
Design programming, also known as architectural programming, is the process of researching and making decisions to identify the scope of a design project.<ref>Cherry, Edith. “Architectural Programming: Introduction”, Whole Building Design Guide, Sept. 2, 2009</ref> Other than the architecture of the building itself there are three elements to design programming for data centers: facility topology design (space planning), engineering infrastructure design (mechanical systems such as cooling and electrical systems including power) and technology infrastructure design (cable plant). Each will be influenced by performance assessments and modelling to identify gaps pertaining to the owner’s performance wishes of the facility over time.


Various vendors who provide data center design services define the steps of data center design slightly differently, but all address the same basic aspects as detailed below.
== Further reading ==
 
===Modeling Criteria===
Modeling criteria is used to develop future-state scenarios for space, power, cooling, and costs.<ref>Mullins, Robert. “Romonet Offers Predictive Modelling Tool For Data Center Planning”, Network Computing, June 29, 2011 [http://www.networkcomputing.com/data-center/231000669]</ref> The aim is to create a master plan with parameters such as number, size, location, topology, IT floor system layouts, and power and cooling technology and configurations.
 
===Design Recommendations===
Design recommendations/plans generally follow the modelling criteria phase. The optimal technology infrastructure is identified and planning criteria is developed, such as critical power capacities, overall data center power requirements using an agreed upon PUE (power utilization efficiency), mechanical cooling capacities, kilowatts per cabinet, raised floor space, and the resiliency level for the facility.
 
===Conceptual Design===
Conceptual designs embody the design recommendations or plans and should take into account “what-if” scenarios to ensure all operational outcomes are met in order to future-proof the facility. Conceptual floor layouts should be driven by IT performance requirements as well as lifecycle costs associated with IT demand, energy efficiency, cost efficiency and availability. Future-proofing will also include expansion capabilities, often provided in modern data centers through modularity.
 
===Detail Design===
Detail design is undertaken once the appropriate conceptual design is determined, typically including a proof of concept. The detail design phase should include the development of facility schematics and construction documents as well as schematic of technology infrastructure, detailed IT infrastructure design and IT infrastructure documentation.
 
===Mechanical Engineering Infrastructure Design===
Mechanical engineering infrastructure design addresses mechanical systems involved in maintaining the interior environment of a data center, such as heating, ventilation and air conditioning (HVAC); humidification and dehumidification equipment; pressurization; and so on.<ref name="nxtbook.com">Jew, Jonathan. “BICSI Data Center Standard: A Resource for Today’s Data Center Operators and Designers,” BICSI News Magazine, May/June 2010, page 28. [http://www.nxtbook.com/nxtbooks/bicsi/news_20100506/#/26]</ref>
This stage of the design process should be aimed at saving space and costs, while ensuring business and reliability objectives are met as well as achieving PUE and green requirements.<ref>Data Center Energy Management: Best Practices Checklist: Mechanical, Lawrence Berkeley National Laboratory [http://hightech.lbl.gov/dctraining/strategies/mam.html]</ref> Modern designs include modularizing and scaling IT loads, and making sure capital spending on the building construction is optimized.
 
===Electrical Engineering Infrastructure Design===
Electrical Engineering infrastructure design is focused on designing electrical configurations that accommodate various reliability requirements and data center sizes. Aspects may include utility service planning; distribution, switching and bypass from power sources; uninterruptable power source (UPS) systems; and more.<ref name="nxtbook.com"/>
 
These designs should dovetail to energy standards and best practices while also meeting business objectives. Electrical configurations should be optimized and operationally compatible with the data center user’s capabilities. Modern electrical design is modular and scalable,<ref>Clark, Jeff. “Hedging Your Data Center Power”, The Data Center Journal, Oct. 5, 2011. [http://www.datacenterjournal.com/design/hedging-your-data-center-power/]</ref> and is available for low and medium voltage requirements as well as DC (direct current).
 
===Technology Infrastructure Design===
Technology infrastructure design addresses the telecommunications cabling systems that run throughout data centers. There are cabling systems for all data center environments, including horizontal cabling, voice, modem, and facsimile telecommunications services, premises switching equipment, computer and telecommunications management connections, keyboard/video/mouse connections and data communications.<ref>Jew, Jonathan. “BICSI Data Center Standard: A Resource for Today’s Data Center Operators and Designers,” BICSI News Magazine, May/June 2010, page 30. [http://www.nxtbook.com/nxtbooks/bicsi/news_20100506/#/26]</ref> Wide area, local area, and storage area networks should link with other building signaling systems (e.g. fire, security, power, HVAC, EMS).
 
===Availability expectations===
The higher the availability needs of a data center, the higher the capital and operational costs of building and managing it. Business needs should dictate the level of availability required and should be evaluated based on characterization of the criticality of IT systems estimated cost analyses from modeled scenarios. In other words, how can an appropriate level of availability best be met by design criteria to avoid financial and operational risks as a result of downtime?
If the estimated cost of downtime within a specified time unit exceeds the amortized capital costs and operational expenses, a higher level of availability should be factored into the data center design. If the cost of avoiding downtime greatly exceeds the cost of downtime itself, a lower level of availability should be factored into the design.<ref>Clark, Jeffrey. “The Price of Data Center Availability—How much availability do you need?”, Oct. 12, 2011, The Data Center Journal [http://www.datacenterjournal.com/home/news/languages/item/2792-the-price-of-data-center-availability]</ref>
 
===Site selection===
Aspects such as proximity to available power grids, telecommunications infrastructure, networking services, transportation lines and emergency services can affect costs, risk, security and other factors to be taken into consideration for data center design. Location affects data center design also because the climatic conditions dictate what cooling technologies should be deployed. In turn this impacts uptime and the costs associated with cooling.<ref>Tucci, Linda. “Five tips on selecting a data center location”, May 7, 2008, SearchCIO.com [http://searchcio.techtarget.com/news/1312614/Five-tips-on-selecting-a-data-center-location]</ref> For example, the topology and the cost of managing a data center in a warm, humid climate will vary greatly from managing one in a cool, dry climate.
 
===Modularity and flexibility===
{{main|Modular data center}}
 
Modularity and flexibility are key elements in allowing for a data center to grow and change over time. Data center modules are pre-engineered, standardized building blocks that can be easily configured and moved as needed.<ref>Niles, Susan. “Standardization and Modularity in Data Center Physical Infrastructure,” 2011, Schneider Electric, page 4. [http://www.apcmedia.com/salestools/VAVR-626VPD_R1_EN.pdf]</ref>
 
A modular data center may consist of data center equipment contained within shipping containers or similar portable containers.<ref>Pitchaikani, Bala. “Strategies for the Containerized Data Center,” DataCenterKnowledge.com, Sept. 8, 2011. [http://www.datacenterknowledge.com/archives/2011/09/08/strategies-for-the-containerized-data-center/]</ref> But it can also be described as a design style in which components of the data center are prefabricated and standardized so that they can be constructed, moved or added to quickly as needs change.<ref>Niccolai, James. “HP says prefab data center cuts costs in half,” InfoWorld, July 27, 2010. [http://www.infoworld.com/d/green-it/hp-says-prefab-data-center-cuts-costs-in-half-837?page=0,0]</ref>
 
===Environmental control===
{{main|Data center environmental control}}
The physical environment of a data center is rigorously controlled.
[[Air conditioning]] is used to control the temperature and humidity in the data center. [[ASHRAE]]'s "Thermal Guidelines for Data Processing Environments"<ref>{{cite web|url=http://tc99.ashraetcs.org/documents/ASHRAE_Extended_Environmental_Envelope_Final_Aug_1_2008.pdf|format=PDF|title=ASHRAE's "Thermal Guidelines for Data Processing Environments"}}</ref> recommends a temperature range of {{convert|16|–|24|C|F}} and humidity range of 40–55% with a maximum dew point of 15&nbsp;°C as optimal for data center conditions.<ref>{{cite web|url=http://www.serverscheck.com/blog/2008/07/why-monitor-humidity-in-computer-rooms.html|title=ServersCheck's Blog on Why Humidity Monitoring|date=July 1, 2008}}</ref>  The temperature in a data center will naturally rise because the electrical power used heats the air. Unless the heat is removed, the ambient temperature will rise, resulting in electronic equipment malfunction. By controlling the air temperature, the server components at the board level are kept within the manufacturer's specified temperature/humidity range. Air conditioning systems help control [[humidity]] by cooling the return space air below the [[dew point]]. Too much humidity, and water may begin to [[condensation|condense]] on internal components. In case of a dry atmosphere, ancillary humidification systems may add water vapor if the humidity is too low, which can result in [[electrostatics|static electricity]] discharge problems which may damage components. Subterranean data centers may keep computer equipment cool while expending less energy than conventional designs.
 
Modern data centers try to use economizer cooling, where they use outside air to keep the data center cool. At least one data center (located in [[Upstate New York]]) will cool servers using outside air during the winter. They do not use chillers/air conditioners, which creates potential energy savings in the millions.<ref>{{cite news| url=http://www.reuters.com/article/pressRelease/idUS141369+14-Sep-2009+PRN20090914 | work=Reuters | title=tw telecom and NYSERDA Announce Co-location Expansion | date=2009-09-14}}</ref>
 
Telcordia [http://telecom-info.telcordia.com/site-cgi/ido/docs.cgi?ID=SEARCH&DOCUMENT=GR-2930& GR-2930, ''NEBS: Raised Floor Generic Requirements for Network and Data Centers''], presents generic engineering requirements for raised floors that fall within the strict NEBS guidelines.
 
There are many types of commercially available floors that offer a wide range of structural strength and loading capabilities, depending on component construction and the materials used. The general types of raised floors include stringerless, stringered, and structural platforms, all of which are discussed in detail in GR-2930 and summarized below.
 
*'''''Stringerless Raised Floors''''' - One non-earthquake type of raised floor generally consists of an array of pedestals that provide the necessary height for routing cables and also serve to support each corner of the floor panels. With this type of floor, there may or may not be provisioning to mechanically fasten the floor panels to the pedestals. This stringerless type of system (having no mechanical attachments between the pedestal heads) provides maximum accessibility to the space under the floor. However, stringerless floors are significantly weaker than stringered raised floors in supporting lateral loads and are not recommended.
 
*'''''Stringered Raised Floors''''' - This type of raised floor generally consists of a vertical array of steel pedestal assemblies (each assembly is made up of a steel base plate, tubular upright, and a head) uniformly spaced on two-foot centers and mechanically fastened to the concrete floor. The steel pedestal head has a stud that is inserted into the pedestal upright and the overall height is adjustable with a leveling nut on the welded stud of the pedestal head.
 
*'''''Structural Platforms''''' - One type of structural platform consists of members constructed of steel angles or channels that are welded or bolted together to form an integrated platform for supporting equipment. This design permits equipment to be fastened directly to the platform without the need for toggle bars or supplemental bracing. Structural platforms may or may not contain panels or stringers.
 
====Metal whiskers====
Raised floors and other metal structures such as cable trays and ventilation ducts have caused many problems with [[zinc whiskers]] in the past, and likely are still present in many data centers. This happens when microscopic metallic filaments form on metals such as zinc or tin that protect many metal structures and electronic components from corrosion. Maintenance on a raised floor or installing of cable etc. can dislodge the whiskers, which enter the airflow and may short circuit server components or power supplies, sometimes through a high current metal vapor [[plasma arc]]. This phenomenon is not unique to data centers, and has also caused catastrophic failures of satellites and military hardware.<ref>{{cite web|title=NASA - metal whiskers research|url=http://nepp.nasa.gov/whisker/other_whisker/index.htm|publisher=NASA|accessdate=1 August 2011}}</ref>
 
===Electrical power===
 
[[File:Datacenter Backup Batteries.jpg|thumb|right|A bank of batteries in a large data center, used to provide power until diesel generators can start]]
 
Backup power consists of one or more [[uninterruptible power supply|uninterruptible power supplies]], battery banks, and/or [[Electrical generator|diesel generator]]s.<ref>Detailed expanation of UPS topologies {{cite web|url=http://www.emersonnetworkpower.com/en-US/Brands/Liebert/Documents/White%20Papers/Evaluating%20the%20Economic%20Impact%20of%20UPS%20Technology.pdf|format=PDF|title=EVALUATING THE ECONOMIC IMPACT OF UPS TECHNOLOGY}}</ref>
 
To prevent [[single point of failure|single points of failure]], all elements of the electrical systems, including backup systems, are typically fully duplicated, and critical servers are connected to both the "A-side" and "B-side" power feeds. This arrangement is often made to achieve [[N+1 redundancy]] in the systems. Static switches are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure.
 
Data centers typically have [[raised floor]]ing made up of {{convert|60|cm|ft|abbr=on|0}} removable square tiles. The trend is towards {{convert|80|-|100|cm|in|abbr=on}} void to cater for better and uniform air distribution. These provide a [[plenum space|plenum]] for air to circulate below the floor, as part of the air conditioning system, as well as providing space for power cabling.
 
===Low-voltage cable routing===
Data cabling is typically routed through overhead [[cable tray]]s in modern data centers. But some{{Who|date=May 2012}} are still recommending under raised floor cabling for security reasons and to consider the addition of cooling systems above the racks in case this enhancement is necessary. Smaller/less expensive data centers without raised flooring may use anti-static tiles for a flooring surface. Computer cabinets are often organized into a [[hot aisle]] arrangement to maximize airflow efficiency.
 
===Fire protection===
Data centers feature [[fire protection]] systems, including [[passive fire protection|passive]] and [[active fire protection|active]] design elements, as well as implementation of [[fire prevention]] programs in operations. [[Smoke detectors]] are usually installed to provide early warning of a developing fire by detecting particles generated by smoldering components prior to the development of flame. This allows investigation, interruption of power, and manual fire suppression using hand held fire extinguishers before the fire grows to a large size. A [[fire sprinkler system]] is often provided to control a full scale fire if it develops. Fire sprinklers require {{convert|18|in|cm|abbr=on}} of clearance (free of cable trays, etc.) below the sprinklers. [[Aspirating smoke detector]]s and [[clean agent]] fire suppression gaseous systems are sometimes installed to suppress a fire earlier than the fire sprinkler system. Passive fire protection elements include the installation of [[Firewall (construction)|fire walls]] around the data center, so a fire can be restricted to a portion of the facility for a limited time in the event of the failure of the active fire protection systems, such as making sure the door is not left open or if they are not installed. For critical facilities these firewalls are often insufficient to protect heat-sensitive electronic equipment, however, because conventional firewall construction is only rated for flame penetration time, not heat penetration. There are also deficiencies in the protection of vulnerable entry points into the server room, such as cable penetrations, coolant line penetrations and air ducts. For mission critical data centers [[fireproofing|fireproof vaults]] with a [[fireproofing|Class 125]] rating are necessary to meet [[library fires|NFPA 75]]<ref>Fixen, Edward L. and Vidar S. Landa,"Avoiding the Smell of Burning Data," Consulting-Specifying Engineer, May 2006, Vol. 39 Issue 5, p47-51
</ref> standards.
 
===Security===
Physical security also plays a large role with data centers. Physical access to the site is usually restricted to selected personnel, with controls including [[bollard]]s and [[mantrap]]s.<ref>[http://www.csoonline.com/article/220665 19 Ways to Build Physical Security Into a Data Center]</ref> [[Video camera]] surveillance and permanent [[security guard]]s are almost always present if the data center is large or contains sensitive information on any of the systems within. The use of finger print recognition man traps is starting to be commonplace.
 
==Energy use==
[[File:Google Data Center, The Dalles.jpg|thumb|right|Google Data Center, The Dalles]]
{{main|IT energy management}}
 
Energy use is a central issue for data centers. Power draw for data centers ranges from a few kW for a rack of servers in a closet to several tens of MW for large facilities. Some facilities have power densities more than 100 times that of a typical office building.<ref>{{cite web|url=http://www1.eere.energy.gov/femp/program/dc_energy_consumption.html|title=Data Center Energy Consumption Trends|publisher=U.S. Department of Energy|accessdate=2010-06-10}}</ref> For higher power density facilities, electricity costs are a dominant [[operating expense]] and account for over 10% of the [[total cost of ownership]] (TCO) of a data center.<ref>J Koomey, C. Belady, M. Patterson, A. Santos, K.D. Lange. [http://www.intel.com/assets/pdf/general/servertrendsreleasecomplete-v25.pdf Assessing Trends Over Time in Performance, Costs, and Energy Use for Servers] Released on the web August 17th, 2009.</ref> By 2012 the cost of power for the data center is expected to exceed the cost of the original capital investment.<ref>{{cite web|url=http://www1.eere.energy.gov/femp/pdfs/data_center_qsguide.pdf|title=Quick Start Guide to Increase Data Center Energy Efficiency|publisher=U.S. Department of Energy|accessdate=2010-06-10}}</ref>
 
===Greenhouse gas emissions===
In 2007 the entire [[information and communication technologies]] or ICT sector was estimated to be responsible for roughly 2% of global [[Greenhouse gas|carbon emissions]] with data centers accounting for 14% of the ICT footprint.<ref name="smart1">{{cite web|url=http://www.smart2020.org/_assets/files/03_Smart2020Report_lo_res.pdf|title=Smart 2020: Enabling the low carbon economy in the information age|publisher=The Climate Group for the Global e-Sustainability Initiative|accessdate=2008-05-11}}</ref> The US EPA estimates that servers and data centers are responsible for up to 1.5% of the total US electricity consumption,<ref name="energystar1">{{cite web|url=http://www.energystar.gov/ia/partners/prod_development/downloads/EPA_Datacenter_Report_Congress_Final1.pdf|title=Report to Congress on Server and Data Center Energy Efficiency|publisher=U.S. Environmental Protection Agency ENERGY STAR Program}}</ref> or roughly .5% of US GHG emissions,<ref>A calculation of data center electricity burden cited in the [http://www.energystar.gov/ia/partners/prod_development/downloads/EPA_Datacenter_Report_Congress_Final1.pdf Report to Congress on Server and Data Center Energy Efficiency] and electricity generation contributions to green house gas emissions published by the EPA in the [http://epa.gov/climatechange/emissions/downloads10/US-GHG-Inventory-2010_ExecutiveSummary.pdf Greenhouse Gas Emissions Inventory Report]. Retrieved 2010-06-08.</ref>  for 2007. Given a business as usual scenario greenhouse gas emissions from data centers is projected to more than double from 2007 levels by 2020.<ref name="smart1"/>
 
Siting is one of the factors that affect the energy consumption and environmental effects of a datacenter. In areas where climate favors cooling and lots of renewable electricity is available the environmental effects will be more moderate. Thus countries with favorable conditions, such as: Canada,<ref>[http://www.theglobeandmail.com/report-on-business/canada-called-prime-real-estate-for-massive-data-computers/article2071677/ Canada Called Prime Real Estate for Massive Data Computers - Globe & Mail] Retrieved June 29, 2011.</ref> Finland,<ref>[http://www.fincloud.freehostingcloud.com/ Finland - First Choice for Siting Your Cloud Computing Data Center.]. Retrieved 4 August 2010.</ref> Sweden<ref>[http://www.stockholmbusinessregion.se/templates/page____41724.aspx?epslanguage=EN Stockholm sets sights on data center customers.] Accessed 4 August 2010. {{Dead link|date=October 2010|bot=H3llBot}}</ref> and Switzerland,<ref>[http://www.greenbiz.com/news/2010/06/30/swiss-carbon-neutral-servers-hit-cloud Swiss Carbon-Neutral Servers Hit the Cloud.]. Retrieved 4 August 2010.</ref> are trying to attract cloud computing data centers.
 
In an 18-month investigation by scholars at Rice University’s Baker Institute for Public Policy in Houston and the Institute for Sustainable and Applied Infodynamics in Singapore, data center-related emissions will more than triple by 2020.
<ref>{{Cite news
| author = Katrice R. Jalbuena
| title = Green business news.
| quote = 
| publisher = EcoSeed
| date = October 15, 2010
| pages =
| url = http://ecoseed.org/en/business-article-list/article/1-business/8219-i-t-industry-risks-output-cut-in-low-carbon-economy
| accessdate = November 11, 2010
}}</ref>
 
===Energy efficiency===
The most commonly used metric to determine the energy efficiency of a data center is [[power usage effectiveness]], or PUE. This simple ratio is the total power entering the data center divided by the power used by the IT equipment.
 
:<math> \mathrm{PUE}  =  {\mbox{Total Facility Power} \over \mbox{IT Equipment Power}} </math>
 
Power used by support equipment, often referred to as overhead load, mainly consists of cooling systems, power delivery, and other facility infrastructure like lighting. The average data center in the US has a PUE of 2.0,<ref name="energystar1"/> meaning that the facility uses one Watt of overhead power for every Watt delivered to IT equipment. State-of-the-art data center energy efficiency is estimated to be roughly 1.2.<ref>{{cite web|url=https://microsite.accenture.com/svlgreport/Documents/pdf/SVLG_Report.pdf|title=Data Center Energy Forecast|publisher=Silicon Valley Leadership Group}}</ref> Some large data center operators like [[Microsoft]] and [[Yahoo!]] have published projections of PUE for facilities in development; [[Google]] publishes quarterly actual efficiency performance from data centers in operation.<ref>{{cite web|url=http://www.datacenterknowledge.com/archives/2009/10/15/google-efficiency-update-pue-of-1-22/|title=Google Efficiency Update|publisher=Data Center Knowledge|accessdate=2010-06-08}}</ref>
 
The [[U.S. Environmental Protection Agency]] has an [[Energy Star]] rating for standalone or large data centers. To qualify for the ecolabel, a data center must be within the top quartile of energy efficiency of all reported facilities.<ref>Commentary on introduction of Energy Star for Data Centers {{cite web|title=Introducing EPA ENERGY STAR for Data Centers|url=http://www.emerson.com/edc/post/2010/06/15/Introducing-EPA-ENERGY-STARc2ae-for-Data-Centers.aspx|format=Web site|publisher=Jack Pouchet|accessdate=2010-09-27|date=2010-09-27}}</ref>
 
European Union also has a similar initiative: EU Code of Conduct for Data Centres<ref>http://re.jrc.ec.europa.eu/energyefficiency/html/standby_initiative_data_centers.htm EU Code of Conduct for Data Centres</ref>
 
===Energy use analysis===
Often, the first step toward curbing energy use in a data center is to understand how energy is being used in the data center. Multiple types of analysis exist to measure data center energy use. Aspects measured include not just energy used by IT equipment itself, but also by the data center facility equipment, such as chillers and fans.<ref>Sweeney, Jim. "Reducing Data Center Power and Energy Consumption: Saving Money and 'Going Green,' " GTSI Solutions, pages 2–3. [http://www.gtsi.com/cms/documents/white-papers/green-it.pdf]</ref>
 
===Power and cooling analysis===
Power is the largest recurring cost to the user of a data center.<ref name=DRJ_Choosing>{{Citation
| title = Choosing a Data Center
| url = http://www.atlantic.net/images/pdf/choosing_a_data_center.pdf
| publication = Disaster Recovery Journal
| year = 2009
| author = Cosmano, Joe
| accessdate = 2012-07-21
}}</ref>  A power and cooling analysis, also referred to as a thermal assessment, measures the relative temperatures in specific areas of your data center, as well as the ability of your data center to tolerate specific temperatures.<ref>Needle, David. “HP's Green Data Center Portfolio Keeps Growing,” InternetNews, July 25, 2007. [http://www.internetnews.com/xSP/article.php/3690651/HPs+Green+Data+Center+Portfolio+Keeps+Growing.htm]</ref> Among other things, a power and cooling analysis can help to identify hot spots, over-cooled areas that can handle greater power use density, the breakpoint of equipment loading, the effectiveness of a raised-floor strategy, and optimal equipment positioning (such as AC units) to balance temperatures across the data center.  Power cooling density is a measure of how much square footage the center can cool at maximum capacity.<ref name=Inc_Howtochoose>{{Citation
| title = How to Choose a Data Center
| url = http://www.inc.com/guides/2010/11/how-to-choose-a-data-center_pagen_2.html
| year = 2010
| author = Inc. staff
| accessdate = 2012-07-21
}}</ref>
 
===Energy efficiency analysis===
An energy efficiency analysis measures the energy use of data center IT and facilities equipment. A typical energy efficiency analysis measures factors such as a data center’s power use effectiveness (PUE) against industry standards, identifies mechanical and electrical sources of inefficiency, and identifies air-management metrics.<ref>Siranosian, Kathryn. “HP Shows Companies How to Integrate Energy Management and Carbon Reduction,” TriplePundit, April 5, 2011. [http://www.triplepundit.com/2011/04/hp-launches-program-companies-integrate-manage-energy-carbon-reduction-strategies/]</ref>
 
===Computational fluid dynamics (CFD) analysis===
{{main|Computational fluid dynamics}}
 
This type of analysis uses sophisticated tools and techniques to understand the unique thermal conditions present in each data center—predicting the temperature, airflow, and pressure behavior of a data center to assess performance and energy consumption, using numerical modeling.<ref>Bullock, Michael. “Computation Fluid Dynamics - Hot topic at Data Center World,” Transitional Data Services,” March 18, 2010. [http://blog.transitionaldata.com/aggregate/bid/37840/Seeing-the-Invisible-Data-Center-with-CFD-Modeling-Software]</ref> By predicting the effects of these environmental conditions, CFD analysis in the data center can be used to predict the impact of high-density racks mixed with low-density racks<ref>Bouley, Dennis (editor). “Impact of Virtualization on Data Center Physical Infrastructure,” The Green grid, 2010. [http://www.thegreengrid.org/~/media/WhitePapers/White_Paper_27_Impact_of_Virtualization_Data_On_Center_Physical_Infrastructure_020210.pdf?lang=en]</ref> and the onward impact on cooling resources, poor infrastructure management practices and AC failure of AC shutdown for scheduled maintenance.
 
===Thermal zone mapping===
Thermal zone mapping uses sensors and computer modeling to create a three-dimensional image of the hot and cool zones in a data center.<ref>Fontecchio, Mark. “HP Thermal Zone Mapping plots data center hot spots,” SearchDataCenter, July 25, 2007. [http://searchdatacenter.techtarget.com/news/1265634/HP-Thermal-Zone-Mapping-plots-data-center-hot-spots]</ref>
 
This information can help to identify optimal positioning of data center equipment. For example, critical servers might be placed in a cool zone that is serviced by redundant AC units.
 
===Green datacenters===
Datacenters use a lot of power, consumed by two main usages: the power required to run the actual equipment (CPU's, memory, etc.) and then the power required to cool the equipment. The first category is addressed by designing computers and storage systems that are more and more power-efficient. And to bring down the cooling costs datacenter designers try to use natural ways to cool the equipment. Many datacenters have to be located near people-concentrations to manage the equipment, but there are also many circumstances where the datacenter can be miles away from the users and don't need a lot of local management. Examples of this are the 'mass' datacenters like Google or Facebook: these DC's are built around many standarised servers and storage-arrays and the actual users of the systems are located all around the world. After the initial build of a datacenter there is not much staff required to keep it running: especially datacenters that provide mass-storage or computing power don't need to be near population centers. Datacenters in arctic locations where outside air provides all cooling are getting more popular as cooling and electricity are the two main variable cost. components<ref>Gizmag [http://www.gizmag.com/fjord-cooled-data-center/20938/ Fjord-cooled DC in Norway claims to be greenest], 23 December 2011. Visited: 1 April 2012</ref>
 
==Network infrastructure==
[[File:Paris servers DSC00190.jpg|thumb|left|An example of "rack mounted" servers]]
Communications in data centers today are most often based on [[computer network|networks]] running the [[Internet protocol|IP]] [[protocol (computing)|protocol]] suite. Data centers contain a set of [[Router (computing)|router]]s and [[Network switch|switch]]es that transport traffic between the servers and to the outside world. [[Redundancy (engineering)|Redundancy]] of the Internet connection is often provided by using two or more upstream service providers (see [[Multihoming]]).
 
Some of the servers at the data center are used for running the basic [[Internet]] and [[intranet]] services needed by internal users in the organization, e.g., [[e-mail]] servers, [[proxy server]]s, and [[Domain Name System|DNS]] servers.
 
Network security elements are also usually deployed: [[firewall (networking)|firewalls]], [[VPN]] [[Gateway (computer networking)|gateways]], [[intrusion detection system]]s, etc. Monitoring systems for the network and for some of the applications are also common. Additional off-site monitoring systems are typical as well, in case communications inside the data center fail.
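
As a small illustration of the redundancy and monitoring ideas above, the sketch below checks that each upstream provider's gateway is still reachable, in the way an off-site monitor might. The addresses, port and alert logic are placeholders, not a description of any real deployment.

<syntaxhighlight lang="python">
# Reachability check across redundant upstream gateways (illustrative only).
# Addresses use documentation ranges; the port and names are placeholders.
import socket

UPSTREAMS = {
    "provider-a": ("192.0.2.1", 179),      # TEST-NET address, BGP port, as examples
    "provider-b": ("198.51.100.1", 179),
}

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

status = {name: reachable(*addr) for name, addr in UPSTREAMS.items()}
if not any(status.values()):
    print("ALERT: no upstream reachable - data center may be cut off:", status)
elif not all(status.values()):
    print("WARNING: redundancy degraded:", status)
else:
    print("All upstream links reachable:", status)
</syntaxhighlight>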
 
==Data Center Infrastructure Management==
[[Data center infrastructure management]] (DCIM) is the integration of [[information technology]] (IT) and facility management disciplines to centralize monitoring, management and intelligent capacity planning of a data center's critical systems. Achieved through the implementation of specialized software, hardware and sensors, DCIM enables a common, real-time monitoring and management platform for all interdependent systems across IT and facility infrastructures.
 
Depending on the type of implementation, DCIM products can help data center managers identify and eliminate sources of risk and thereby increase the availability of critical IT systems. DCIM products can also be used to identify interdependencies between facility and IT infrastructures, to alert the facility manager to gaps in system redundancy, and to provide dynamic, holistic benchmarks on power consumption and efficiency that measure the effectiveness of “green IT” initiatives.
 
Measuring and understanding important data center efficiency metrics is also part of DCIM. Much of the discussion in this area has focused on energy issues, but other metrics beyond PUE can give a more detailed picture of data center operations. Server, storage, and staff utilization metrics can contribute to a more complete view of an enterprise data center. In many cases, disk capacity goes unused, and many organizations run their servers at 20% utilization or less.<ref>{{cite web|url= http://content.dell.com/us/en/enterprise/d/large-business/measure-data-center-efficiency.aspx |title= Measuring Data Center Efficiency: Easier Said Than Done|publisher=Dell.com | accessdate=2012-06-25}}</ref> More effective automation tools can also increase the number of servers or virtual machines that a single administrator can handle.
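
A minimal sketch of the kind of utilization roll-up such tooling can produce follows; the server inventory and figures are invented, and the arithmetic is simple averages and ratios.

<syntaxhighlight lang="python">
# Toy utilization roll-up of the kind a DCIM report might show.
# Inventory figures are hypothetical.
servers = [
    {"name": "app-01", "cpu_util": 0.15, "disk_used_tb": 0.8, "disk_total_tb": 4.0},
    {"name": "app-02", "cpu_util": 0.22, "disk_used_tb": 1.2, "disk_total_tb": 4.0},
    {"name": "db-01",  "cpu_util": 0.55, "disk_used_tb": 3.1, "disk_total_tb": 8.0},
]

avg_cpu = sum(s["cpu_util"] for s in servers) / len(servers)
disk_used = sum(s["disk_used_tb"] for s in servers)
disk_total = sum(s["disk_total_tb"] for s in servers)
underused = [s["name"] for s in servers if s["cpu_util"] <= 0.20]

print(f"Average CPU utilization: {avg_cpu:.0%}")
print(f"Disk capacity in use:    {disk_used / disk_total:.0%}")
print(f"Consolidation candidates (<=20% CPU): {underused}")
</syntaxhighlight>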
 
==Applications==
[[File:IBMPortableModularDataCenter.jpg|thumb|right|A 40-foot [[Portable Modular Data Center]]]]
 
The main purpose of a data center is running the applications that handle the core business and operational data of the organization. Such systems may be proprietary and developed internally by the organization, or bought from [[enterprise software]] vendors. Common examples of such applications are [[Enterprise resource planning|ERP]] and [[Customer relationship management|CRM]] systems.
 
A data center may be concerned with just [[operations architecture]] or it may provide other services as well.
 
Often these applications will be composed of multiple hosts, each running a single component. Common components of such applications are [[database]]s, [[file server]]s, [[application server]]s, [[middleware]], and various others.
 
Data centers are also used for off-site backups. Companies may subscribe to backup services provided by a data center, often in conjunction with [[Tape drive|backup tapes]]. Backups of servers can be taken locally onto tapes; however, tapes stored on site pose a security risk and are susceptible to fire and flooding. Larger companies may therefore send their backups off site for added security, for example by backing up to a data center: encrypted backups can be sent over the Internet to another data center, where they can be stored securely.
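
One minimal way to produce an encrypted backup artifact before shipping it off site is sketched below, assuming the third-party Python cryptography package. The file names and key handling are placeholders; a production setup would also need key storage, integrity verification and a transfer mechanism.

<syntaxhighlight lang="python">
# Encrypt a backup archive before sending it to an off-site data center.
# Sketch only: key management, transfer, and retention policy are out of scope.
from cryptography.fernet import Fernet  # pip install cryptography

def encrypt_backup(archive_path: str, encrypted_path: str, key: bytes) -> None:
    """Write an encrypted copy of archive_path to encrypted_path."""
    fernet = Fernet(key)
    with open(archive_path, "rb") as src:
        ciphertext = fernet.encrypt(src.read())   # authenticated encryption
    with open(encrypted_path, "wb") as dst:
        dst.write(ciphertext)

key = Fernet.generate_key()          # in practice, load this from a key store
encrypt_backup("nightly-backup.tar", "nightly-backup.tar.enc", key)
# The .enc file can now be transferred over the Internet to the remote site.
</syntaxhighlight>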
 
For quick deployment or [[disaster recovery]], several large hardware vendors have developed mobile solutions that can be installed and made operational in a very short time. Companies such as [[Cisco Systems]],<ref>{{cite web|title=Info and video about Cisco's solution|url=http://www.datacenterknowledge.com/archives/2008/May/15/ciscos_mobile_emergency_data_center.html|publisher=Datacentreknowledge|accessdate=2008-05-11|date=May 15, 2007}}</ref> [[Sun Microsystems]] ([[Sun Modular Datacenter]]),<ref>{{cite web|url=http://www.sun.com/products/sunmd/s20/specifications.jsp|archiveurl=http://web.archive.org/web/20080513090300/http://www.sun.com/products/sunmd/s20/specifications.jsp|archivedate=2008-05-13|title=Technical specs of Sun's Blackbox|accessdate=2008-05-11}}</ref><ref>See also the English Wikipedia article on [[Sun Modular Datacenter|Sun's modular datacentre]]</ref> [[Groupe Bull|Bull]],<ref>{{cite web|title=Mobull Plug and Boot Datacenter|url=http://www.bull.com/extreme-computing/mobull.html|publisher=Bull|first=Daniel|last=Kidger|accessdate=2011-05-24}}</ref> [[IBM]] ([[Portable Modular Data Center]]), [[HP]], and [[Google]] ([[Google Modular Data Center]]) have developed systems that could be used for this purpose.<ref>{{cite web|url=http://www.crn.com/hardware/208403225|publisher=ChannelWeb|accessdate=2008-05-11|title=IBM's Project Big Green Takes Second Step|first=Brian|last=Kraemer|date=June 11, 2008}}</ref><ref>[http://hightech.lbl.gov/documents/data_centers/modular-dc-procurement-guide.pdf Modular/Container Data Centers Procurement Guide: Optimizing for Energy Efficiency and Quick Deployment]</ref>
 
==See also==
* [[Central apparatus room]]
* [[Colocation center]]
* [[Data center infrastructure management]]
* [[Disaster recovery]]
* [[Dynamic Infrastructure]]
* [[Electrical network]]
* [[HVAC]]
* [[Internet exchange point]]
* [[Modular data center]]
* [[Network operations center]]
* [[Peering]]
* [[Server farm]]
* [[Server room]]
* [[Server Room Environment Monitoring System]]
* [[Server sprawl]]
* [[Sun Modular Datacenter]]
* [[Telecommunications network]]
* [[Vendor-neutral data centre|Vendor-neutral data center]]
* [[Web hosting service]]
* [[Internet hosting service]]
* [[Neher–McGrath]]
* [[Datacenter star audit]]


==References==
{{reflist|colwidth=30em}}


==External links==
{{Commons category|Data centers}}
{{wikibooks|The Design and Organization of Data Centers}}
* [http://hightech.lbl.gov/datacenters.html Lawrence Berkeley Lab] - Research, development, demonstration, and deployment of energy-efficient technologies and practices for data centers
* [http://www.uptimeinstitute.org/ The Uptime Institute] - Organization that defines data center reliability and conducts site certifications.
* [https://weblog.bit.nl/blog/2009/07/29/timelapse-video-bouw-bit-2bcd/ Timelapse BIT2BCD] - timelapse video of a data centre building in [[Ede, Netherlands]].
* [http://www.neher-mcgrath.org/ The Neher-McGrath Institute] - Organization that certifies data center underground duct bank installation to provide lower operating cost and increased up-time.
* [http://hightech.lbl.gov/dc-powering/faq.html DC Power For Data Centers Of The Future] - FAQ: 380VDC testing and demonstration at a Sun data center.
* [http://www.atlantic.net/images/pdf/choosing_a_data_center.pdf Choosing a Data Center] - article from Disaster Recovery Journal.


{{DEFAULTSORT:Data Center}}
[[Category:Networks]]
[[Category:Applications of distributed computing]]
[[Category:Cloud storage]]
[[Category:Data management]]
[[Category:Distributed data storage]]
[[Category:Distributed data storage systems]]
[[Category:Servers (computing)]]
[[Category:Data centers| ]]


[[ar:مركز بيانات]]
[[bg:Дата център]]
[[ca:Centre de càlcul]]
[[cs:Serverovna]]
[[de:Rechenzentrum]]
[[es:Centro de procesamiento de datos]]
[[fa:مرکز داده]]
[[fr:Centre de traitement de données]]
[[gl:Centro de procesamento de datos]]
[[ko:데이터 센터]]
[[id:Pusat data]]
[[is:Netþjónabú]]
[[it:Centro elaborazione dati]]
[[lt:Duomenų centras]]
[[nl:Rekencentrum]]
[[ja:データセンター]]
[[uz:Axborot markazi]]
[[pt:Centro de processamento de dados]]
[[ru:Дата-центр]]
[[sv:Datorhall]]
[[th:ศูนย์ข้อมูล]]
[[uk:Дата-центр]]
[[zh:数据中心]]


Pressure-temperature law

Gay-Lussac's name is also associated in physics with another gas law, the so-called pressure law, which states that:

The pressure of a gas of fixed mass and fixed volume is directly proportional to the gas' absolute temperature.

[File:Temperature Pressure law.svg: illustration of pressure varying with temperature.]

Simply put, if a gas's temperature increases then so does its pressure, provided the mass and volume of the gas are held constant. The law has a particularly simple mathematical form if the temperature is measured on an absolute scale, such as in kelvins. The law can then be expressed mathematically as

    P / T = k,    or equivalently    P = k T,

where:

    P is the pressure of the gas (in any consistent pressure unit, such as atmospheres),
    T is the absolute temperature of the gas (measured in kelvins), and
    k is a constant.

This law holds because temperature is a measure of the average kinetic energy of a substance; as the kinetic energy of a gas increases, its particles strike the container walls more often and with greater force, thereby exerting increased pressure.

For comparing the same substance under two different sets of conditions, the law can be written as

    P1 / T1 = P2 / T2.
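
As a worked example (the figures are illustrative, not taken from the original text): a sealed, rigid container holds a gas at P1 = 2.00 atm and T1 = 300 K. If it is heated to T2 = 450 K, the pressure rises to P2 = P1 × (T2 / T1) = 2.00 atm × (450 K / 300 K) = 3.00 atm.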

Amontons' law of pressure-temperature: the pressure law described above should actually be attributed to Guillaume Amontons, who, between 1700 and 1702,[3][4] discovered that the pressure of a fixed mass of gas kept at constant volume is proportional to its temperature. Amontons discovered this while building an "air thermometer". Calling it Gay-Lussac's law is therefore a misattribution, since Gay-Lussac investigated the relationship between volume and temperature (i.e. Charles's law), not pressure and temperature.

Charles' Law was also known as the Law of Charles and Gay-Lussac, because Gay-Lussac published it in 1802 using much of Charles's unpublished data from 1787. However, in recent years the term has fallen out of favor, and Gay-Lussac's name is now generally associated with the law of combining volumes. Amontons' Law, Charles' Law, and Boyle's law form the combined gas law. The three gas laws in combination with Avogadro's Law can be generalized by the ideal gas law.


References

  1. http://www.chemistryexplained.com/Fe-Ge/Gay-Lussac-Joseph-Louis.html
  2. Hartley, Harold (1966). "Stanislao Cannizzaro, F.R.S. (1826 – 1910) and the First International Chemical Conference at Karlsruhe". Notes and Records of the Royal Society of London 21: 56–63.
  3. http://chemed.chem.purdue.edu/genchem/topicreview/bp/ch4/gaslaws3.html#amonton
  4. http://web.fccj.org/~ethall/gaslaw/gaslaw.htm
  http://www.ausetute.com.au/gaylusac.html
  http://www.bookrags.com/biography/joseph-louis-gay-lussac-wsd/
