Dave Williams, Senior Principal at PAE Engineers, has 20 years of experience in mechanical engineering. Through his extensive work with data centers, laboratories, and healthcare facilities, he has become exceptionally skilled at designing controlled environments. Dave focuses on reducing operating and maintenance costs and increasing energy efficiency. He counts Kaiser Permanente, Legacy, Tuality, and PeaceHealth among his long-term clients. A LEED Accredited Professional, Dave has worked on many sustainable projects, including several LEED and Net-Zero Energy certified buildings.
PAE Engineers specializes in designing energy-efficient, high-performance data centers that prioritize sustainability and operational efficiency. Their work integrates innovative cooling solutions and advanced engineering to optimize performance while reducing environmental impact.
Can you take us back to your first experience designing a data center? What was the project, and what were some of the biggest challenges or learning moments you encountered?
My first major project was memorable and, honestly, one of my favorites. It was a conversion of an old semiconductor facility into a data center. That conversion, as you can imagine, had multiple challenges, from repurposing equipment for different operating conditions to adding resiliency and ensuring proper operation with reused equipment. Commissioning was quite challenging as well, and ensuring all components, especially the cooling equipment and controllers, could ramp up quickly on a power failure created some interesting and fun problems to solve. In the end, it all worked great, and the team really bonded through the collaboration of many players to execute and turn over a well-functioning data center facility.
How have data center design priorities shifted in the last 20 years? What are the biggest changes you’ve seen in cooling, power distribution, and redundancy?
Priorities have shifted quite a bit. Data centers exist to provide a resilient, high-uptime environment for servers. As servers have become significantly more powerful and more tolerant of their environments, the facilities have changed. They have become very efficient, pushing the envelope of energy efficiency in the transformation and transportation of power to the racks, and the HVAC systems that provide a stable operating environment have changed from large power-consuming equipment to innovative high-efficiency systems that allow for much lower PUEs than ever before.
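For context, PUE (power usage effectiveness) is the ratio of total facility energy to the energy that actually reaches the IT equipment, so a lower value means less cooling and distribution overhead per unit of compute. The figures below are a purely illustrative example, not numbers from any PAE project:

\[
\mathrm{PUE} = \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}}, \qquad
\text{e.g. } \frac{12\ \text{MW total}}{10\ \text{MW IT load}} = 1.2 .
\]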
What are the biggest challenges in designing data centers right now? How do factors like power availability, cooling capacity, and stricter regulations impact design decisions?
The availability of reliable power is typically the largest factor in locating a data center. Land availability, proximity to fiber, security, and climate concerns all factor in, but finding power seems to be a big driver.
How does PAE Engineers integrate sustainability into data center projects from the initial planning stages?
Starting with the end in mind is always a good reminder. Sustainability is a broad term that covers more than just energy, but with the amount of energy needed, energy typically gets the most attention. Setting goals for low PUE and sticking to them through the course of design is a guiding light that helps shape decisions. As noted before, the goal of a data center is to power the servers. Any other energy used at the facility supports that, and that is where we want to focus our reduction efforts. Cooling systems are the main offenders here, so that is where we like to start. Considering the climate the facility will be located in, optimizing the system to maximize free cooling, and looking for ways to avoid refrigeration systems with large compressor motors really help efficiency. Subsequent review of heat recovery between the data halls and support spaces, water reduction measures, controls optimization, and even considerations like renewables and rainwater capture are additional items considered when designing for a high level of sustainability in the facility.
What are the most promising innovations driving energy and water efficiency in data centers today?
As servers become more resilient and able to handle warmer temperatures, the ability to increase the hours in the year for free cooling has really had an impact. Because of this, many of the data centers deployed now have some of the most innovative and energy-efficient systems installed. Liquid cooling with more dense capacity servers brings new challenges and opportunities, as water has much more potential for large-scale heat transfer than air.
How do you balance high-performance computing demands with sustainability goals? Are there trade-offs when lowering PUE (Power Usage Effectiveness) or water usage while maintaining reliability?
Not if done correctly. The infrastructure must support the needs of the server racks with a resilient deployment, but if done thoughtfully, it can be optimized to function very efficiently, reducing the PUE and water consumption without compromising the compute’s performance demands.
What challenges do you face when implementing renewable energy, advanced cooling, or other sustainability measures in high-density data centers?
The challenges vary from project to project. Sometimes, there may be local code or jurisdictional hurdles to overcome, land may not be available to deploy a certain type of system, or honestly, based on the cost to implement an innovative design, the benefit doesn’t justify implementing it. What works well for one climate or site may not be the best for another, so each project should consider this.
How are AI workloads changing data center design? What adjustments are needed for power and cooling to support high-density AI racks?
Densification can affect both the power and cooling systems quite a bit. Larger feeds, breakers, piping, etc., all supporting a much more localized power load create challenges, but also opportunities to rethink how the infrastructure should be laid out, located, and deployed in a way that is safe, resilient and functional.
What role do you see AI playing in optimizing energy efficiency and real-time operations in data centers?
It is massive. The ability to create a machine learning facility that deploys massive computing power toward the optimization of energy-consuming systems, serving not just data centers but the built environment in its totality, is enormous. Peak shaving, understanding how each facility varies in its operation, and optimization on a specific case-by-case basis is a huge opportunity, as is rolling compute across multiple facilities in real time with adjustments for varying climate zones. I can only imagine how much AI could learn about operations; the suggestions and optimizations it could deploy would have a massive effect on how the systems that serve these facilities are adjusted and tuned.
What might the next generation of AI-driven data centers look like? Do you anticipate widespread adoption of immersion cooling, onsite renewable power, or fully automated facilities?
Continually higher-density racks are the trend and will likely continue for a few years. That densification creates interesting opportunities for the physical size of data centers, with potentially smaller footprints, but also new things to consider regarding infrastructure, like the larger breakers, piping, etc., needed to serve them. Immersion cooling is interesting and has been discussed in the industry for quite some time; right now it is not prevalent and not being used much, but as densities increase, it may become more viable. Onsite renewables are always a great idea on any project, data center or not, but again, the cost, ROI, viability of the site, consideration of the climate, etc., all need to be weighed when selecting which type of renewable to deploy and how much should be installed.
Thank you for the great interview; readers who wish to learn more should visit PAE Engineers.