Resources

Webinar: 2019 Data Center Industry Survey Results

Presentation Title:
Uptime Institute 2019 Data Center Industry Survey Results

Presented by:
Andy Lawrence, Executive Director, Research
Rhonda Ascierto, Vice President, Research
Christopher Brown, Chief Technical Officer

Summary:
The data center industry’s largest and most influential survey results are in. Join us to see what’s trending. Are data centers getting more efficient? How are outages changing? What proportion of workloads are running in the cloud? Will rack density rise at last?

Questions and answers we weren't able to address in the live session:
Q: The numbers show PUE hasn't improved recently, but rack density is increasing. I would assume greater rack density would improve PUE. Any thoughts on this?

A: In theory, an increase in rack density raises the return air temperature, which increases the air-side temperature differential, and that can allow the cooling systems to run more efficiently. In practice, however, I have not really seen it translate into a lower PUE, because so much of the design comes into play: whether there is containment to limit mixing of supply and return air, for example, and how the electrical system is operating.
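A minimal sketch of the physics behind that answer: at a fixed airflow, a denser rack produces a larger supply-to-return air temperature rise (delta-T), which is what can let the cooling plant run more efficiently. The rack powers and airflow figure below are illustrative assumptions, not measured data.

```python
# Sensible heat equation: P = rho * Q * cp * dT, solved for dT.
# Constants are typical values for air at data center conditions.

AIR_DENSITY = 1.2           # kg/m^3 (assumption)
AIR_SPECIFIC_HEAT = 1005.0  # J/(kg*K) (assumption)

def air_side_delta_t(rack_power_w: float, airflow_m3_per_s: float) -> float:
    """Return the supply-to-return air temperature rise in kelvin."""
    mass_flow = AIR_DENSITY * airflow_m3_per_s  # kg/s
    return rack_power_w / (mass_flow * AIR_SPECIFIC_HEAT)

# Doubling rack power at the same airflow doubles delta-T -- but only if
# containment keeps supply and return air from mixing, as the answer notes.
low = air_side_delta_t(5_000, 0.47)    # ~5 kW rack (illustrative)
high = air_side_delta_t(10_000, 0.47)  # ~10 kW rack, same airflow
print(f"{low:.1f} K -> {high:.1f} K")  # prints "8.8 K -> 17.6 K"
```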
Q: 1. What about efficiency-based metrics: Data Center infrastructure Efficiency (DCiE)?

2. Also resource-based metrics:
- Carbon intensity (CUE)
- Water utilization (WUE)

A: As humans we like metrics. They can be measured and reported on automatically, and they are good indicators of performance as long as the metrics in use are tailored to the individual data center's and company's needs. For example, if a data center is located where potable water is plentiful and inexpensive, monitoring WUE can show how responsibly the site is using water, but it may not matter much to overall data center costs. This is why I cautioned against using PUE alone as a measure of how efficiently the data center is operating: it can show an efficient operation while other metrics reveal over-consumption of power or IT assets that drives up operating costs. It is important to always keep in mind what the company's goals around efficiency are.
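The four metrics discussed above all normalize some resource input against IT energy, per their standard Green Grid definitions (DCiE is simply the reciprocal of PUE). A sketch with made-up annual figures:

```python
# Standard Green Grid metric definitions; all inputs are annual totals.
# The example figures at the bottom are illustrative assumptions.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy per unit IT energy."""
    return total_facility_kwh / it_kwh

def dcie(total_facility_kwh: float, it_kwh: float) -> float:
    """Data Center infrastructure Efficiency: reciprocal of PUE, as a fraction."""
    return it_kwh / total_facility_kwh

def cue(total_co2_kg: float, it_kwh: float) -> float:
    """Carbon Usage Effectiveness: kg CO2-equivalent per kWh of IT energy."""
    return total_co2_kg / it_kwh

def wue(water_liters: float, it_kwh: float) -> float:
    """Water Usage Effectiveness: liters of site water per kWh of IT energy."""
    return water_liters / it_kwh

total_kwh, it_kwh = 8_000_000, 5_000_000  # hypothetical small facility
print(f"PUE  = {pue(total_kwh, it_kwh):.2f}")            # prints "PUE  = 1.60"
print(f"DCiE = {dcie(total_kwh, it_kwh):.1%}")           # prints "DCiE = 62.5%"
print(f"CUE  = {cue(2_500_000, it_kwh):.2f} kgCO2/kWh")  # prints "CUE  = 0.50 kgCO2/kWh"
print(f"WUE  = {wue(9_000_000, it_kwh):.2f} L/kWh")      # prints "WUE  = 1.80 L/kWh"
```

As the answer notes, no single one of these tells the whole story: a site can post a good PUE while over-consuming water, carbon, or IT capacity.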
Q: What do you define as mission critical loads?

A: Data centers are built to support a business need. Mission critical loads are those loads that are necessary to the operation of the business and without which the business would be negatively impacted or even halted.
Q: What is your definition of "Enterprise"?

A: I believe you are referring to our use of the term Enterprise Data Center. In this context, an Enterprise Data Center is a data center owned by a company to support its own IT needs. The definition is not based on the size of the company; it only requires that the company owns the data center and uses it to serve its own IT needs.
Q: What do companies, colocation providers, and the key players within data centers that rely on traditional virtualization and consolidation need to hear to become more open-minded about machine learning and predictive analytics that could positively impact their power infrastructure and potentially increase their rack density?

A: Uptime Institute's 2018 Global Annual Survey showed that 70% of suppliers, consultants and vendors surveyed believe that AI will be widely used in the data center. The complexity and scale of modern data centers, coupled with the rapid speed of change in IT environments, is becoming too great for humans to manage effectively. For computers and artificial intelligence, however, the task can be relatively simple. A key goal is to predict and prevent incidents and to detect and remediate inefficiencies and capacity shortfalls. Given that many data centers are suboptimally utilized, many operators are likely to benefit from the technology.
Q: You mention networks are the second biggest reason for downtime. What is the issue? Is it a problem with the quality of the equipment that network equipment manufacturers are providing?

A: In this survey and other research, Uptime Institute has seen a clear increase in failures due to networking incidents. The data also suggests that IT/network-related disruptions can take longer to fix than power/data center facilities disruptions.

Because more workloads are cloud-based and distributed, the network has become critical, and network issues more problematic (causing cascading issues across sites). We haven't seen an uptick in quality issues with specialist networking equipment. However, with software-defined networking, more networking functions are moving off specialist equipment and onto 'standard' x86 servers. While the network itself may be fully configured and redundant, the servers may not be, and an issue with the server hosting the networking functions may be erroneously classified by some as a 'networking failure.'

Q: Given that 12% of your respondents were from software/cloud services, 16% were telecom, and 26% were colo/multi-tenant, do you feel the discussion about cloud needs to be more nuanced? A colo provider responding to "where do we place critical workloads" isn't responding in the same way as enterprise respondents. Possibly this has already been accounted for?

A: Yes, our in-depth analysis is more nuanced. During the webinar, we shared only a sample of top-line data points. More detailed survey results, and the opportunity to discuss findings directly with Uptime Institute Intelligence staff, are available to Uptime Institute Network members.
Q: What techniques are available for proactively managing for performance and efficiency?

A: DCIM (data center infrastructure management), DMaaS (data center management as a service) and other AI-based approaches are all good ways to proactively manage for performance and efficiency.
Q: While the issue of resiliency is clearly a major concern and challenges continue, do you find executives are also looking at utilization of data center assets the way they do other resources in their companies, whether physical or human?

A: Yes, although it is not uncommon for executives to accept low utilization rates of data center assets as a trade-off for high resiliency or for the control and data governance that come with owning a private facility. In both private and third-party facilities, such as colocation, it is also common that utilization rates are not tracked at all. We are, however, slowly beginning to see more data centers adopt software-defined power approaches that enable higher utilization of assets (via load shifting/shedding and/or transactive agreements with local utilities). Human resources are, generally speaking, in short supply across the sector; low utilization is not an issue that we hear about there.
Q: On the slide regarding the data center skills shortage, is the forcing function personnel supply or organizational budget? Based on the "cuts" language in this slide, are we possibly seeing a management expectation of greater automation and pursuit of those opportunities?

A: What we hear repeatedly throughout the sector is that personnel supply is the forcing function. This is of particular concern given the greying of the sector. Staff cuts are often the result of moving from on-premises, privately owned data centers to third-party facilities, such as colo and cloud. However, even hyperscalers are struggling to fill open positions. In terms of automation, we asked only about AI (closely related). In response to the question, "Do you believe artificial intelligence (AI) will reduce your data center operations staffing levels in the next 5 years?", 29% said yes, 42% said yes but in more than 5 years, and 29% said no. Generally speaking, AI and automation are short- or mid-term goals for many data center operators.
Q: How are you defining "outage"?

A: Broadly, we define an outage as a service interruption. Within that broad definition, we also rate specific outages by their impact. We define these ratings in more detail in the Uptime Institute Outage Severity Rating table here: https://uptimeinstitute.com/resources/outage-severity-rating
Have questions about the webinar? Email us at info@uptimeinstitute.com
