Downtown Data Center
As part of the University of Utah’s CHPC team, I’m building a low-poly, high-readability virtual twin of a downtown data center—optimized for remote debugging and live monitoring. Leveraging Unreal Engine’s capabilities, the simulation integrates over 200 uniquely textured server units, offering technicians instant visual clarity on rack layouts, occupancy, and alert states.
The environment is visually streamlined and alert-optimized, so key information pops without clutter. Hosted on the web, it’s accessible across all OS platforms—no downloads required—making it an ideal live-service tool that stays updated and accessible. I also architected a conversion pipeline and onboarding workflow, including a simplified process guide, to empower team scalability and maintain real-time infrastructure accuracy.
Game Type
Live-Service Simulation
Date
December 2023 - Present
Location
Salt Lake City, Utah, US
Role
Game Designer Generalist
WALKTHROUGH
CREATION PROCESS
Mapped Workflows & User Research

- Legacy Integration: Reused and adapted assets from a prior 3D touring project to accelerate development, ensuring continuity while building the new interactive simulation on a fresh foundation.
- User Research: Surveyed data center staff to learn the key metrics they track and studied their information-seeking patterns, then mirrored those behaviors in the simulation so monitoring feels authentic.
- Environment Mapping: Walked and mapped the data center to familiarize myself with the environment, focusing on the areas most important to staff so the simulation reflects their real monitoring needs.
Building Core Systems & Assets

- Optimized 3D Assets: Created functional low-poly 3D models of servers and pods in Maya, balancing performance with visual accuracy through precise measurements and details such as screw points.
- Inventory Integration: Integrated common servers from inventory data by sourcing textures through research, creating individual blueprints, and adding them as assets to the project.
- Data Verification: Tested the engineering team's systems that mapped server locations and racks from portal-generated data, verifying accuracy by identifying issues such as overlapping servers and inconsistencies with the real-world layout.
Expanded Systems, Optimized Workflows, Improved Usability

- Stakeholder Engagement: Individually presented the project to the department, recruited volunteer testers, addressed feedback, and integrated familiar server data pages to improve usability.
- Engine Migration: Led the transition from Unreal Engine 5.2 to 4.23 for HTML5 deployment after evaluating alternatives such as NVIDIA WebGPU and Pixel Streaming. Migrated assets, optimized scenes, and built supporting blueprints for the engineering team, improving performance and accessibility.
- UI/UX Design: Designed clear UI/UX representations of sensor data across multiple areas of the simulation to improve readability and user interaction.
- Server Blueprints: Developed blueprints for 200+ unique servers by creating size-accurate 3D models, sourcing textures through online research, and capturing reference imagery onsite at the data center.
Final Refinements for Scalability and Clarity

- Pipeline Documentation: Authored in-depth systems and pipeline documentation to onboard new team members, ensuring a clear understanding of the workflows and tools I established at the project's start.
- Polishing: Resolved UV issues and created additional models to enhance environmental detail and deliver a more complete, immersive simulation.
- System Optimization: Migrated from individual server blueprints to a dynamic system built on Unreal Engine data tables, enabling real-time spawning based on server material, size, and name.
Taking the Next Steps

- Engine Transition: Leading the shift from Unreal to Unity for long-term scalability, leveraging its Web build capabilities while evaluating alternatives such as Godot.
- Team Mentorship: Onboarding and mentoring new team members, guiding them through the pipeline systems, and shaping a collaborative work culture.
- User Testing: Collecting user data and observing interactions with the Unreal application while continuously identifying and fixing bugs.
BEHIND THE BUILD
DESIGNING FOR USABILITY
My primary goal wasn't to create a photorealistic replica of the data center; it was to make a clear, functional tool that staff would actually use. To achieve that, I focused on usability through several key design choices:
- Analyzed workflows - observed how staff interacted with servers and identified where physical methods slowed them down.
- Identified friction points - mapped common struggles such as clutter, poor visibility, and difficulty spotting critical alerts.
- Delivered clarity through design:
  - Low-poly environment to avoid visual noise and keep performance high.
  - Visually loud alerts so warnings stand out instantly, even across the room.
  - Purposeful lighting that naturally highlights server fronts for quick recognition.
  - Web-hosted deployment to eliminate download barriers and make access universal across systems.

The outcome is an environment that feels simple, responsive, and approachable: something technicians can rely on instead of defaulting back to older, physical methods.
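The "visually loud alerts" choice above can be sketched as a simple state-to-color mapping. The states, colors, and function names below are illustrative stand-ins, not the project's actual implementation:

```python
# Illustrative sketch: map each server's alert state to a display color so
# warnings read at a glance. States and RGB values here are hypothetical,
# not the simulation's actual palette.

ALERT_COLORS = {
    "ok":       (0.1, 0.8, 0.1),  # calm green, blends into the scene
    "warning":  (1.0, 0.8, 0.0),  # bright amber, noticeable at a distance
    "critical": (1.0, 0.0, 0.0),  # saturated red, visible "across the room"
}

def display_color(alert_state: str) -> tuple:
    """Return an RGB tuple for a server's front-panel highlight.

    Unknown states fall back to critical so a bad data feed never
    silently hides a problem.
    """
    return ALERT_COLORS.get(alert_state, ALERT_COLORS["critical"])
```

Failing loud on unknown states mirrors the design goal: the visualization should surface problems rather than mask them.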
SERVER INTEGRATION & PIPELINE
To accurately represent the physical infrastructure, I integrated 200+ unique server units into the Unreal Engine project. Each asset was:

- Assigned a distinct front-panel texture to visually differentiate models.
- Scaled to the real-world dimensions provided by the inventory management team.

This ensured that rack occupancy, spacing, and visibility matched operational reality while maintaining a low-clutter layout optimized for alert detection and remote monitoring. The result is a simulated environment that feels authentic to technicians, supports fast visual parsing, and scales to the full size of the center's hardware footprint.
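As a hedged sketch of the scaling step: assuming the inventory reports server heights in standard rack units (1U = 44.45 mm per the EIA-310 rack standard) and that meshes are scaled in Unreal's native centimeters, the conversion might look like this (the function and its parameters are hypothetical, not the project's actual pipeline):

```python
# Converting inventory dimensions to scene units. Assumes rack-unit heights
# and a standard 19-inch rack width; Unreal's native unit is the centimeter.

RACK_UNIT_MM = 44.45       # EIA-310 rack pitch: 1U = 44.45 mm
STANDARD_WIDTH_MM = 482.6  # 19-inch rack width in mm

def server_dimensions_cm(height_u: int, depth_mm: float) -> tuple:
    """Convert an inventory entry to (width, height, depth) in centimeters,
    ready to drive the scale of a low-poly server mesh."""
    return (
        STANDARD_WIDTH_MM / 10.0,
        height_u * RACK_UNIT_MM / 10.0,
        depth_mm / 10.0,
    )
```

For example, a 2U server with a 700 mm chassis depth comes out to roughly 48.26 cm wide, 8.89 cm tall, and 70 cm deep.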

Output: Distinct servers being loaded into their designated rack locations.

Input: Data table optimizing RAM usage; contains server name, texture, material, and physical size.
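The data-table approach can be modeled in miniature: a single spawner reads rows of (name, texture, material, size) and places each server in its rack slot, rather than maintaining one blueprint per server. The rows, names, and placement logic below are illustrative stand-ins for the Unreal data table and blueprint system:

```python
# Simplified model of data-table-driven spawning: one routine consumes rows
# (name, texture, material, size in rack units) and stacks servers into rack
# slots. Row contents are hypothetical examples, not real inventory data.

SERVER_TABLE = [
    {"name": "web-01", "texture": "front_panel_a", "material": "matte", "size_u": 1},
    {"name": "db-01",  "texture": "front_panel_b", "material": "matte", "size_u": 2},
    {"name": "gpu-01", "texture": "front_panel_c", "material": "gloss", "size_u": 2},
]

def spawn_rack(rows):
    """Stack servers bottom-up in a rack, returning (name, slot) placements.

    Each server occupies `size_u` consecutive slots, so the next server
    starts where the previous one ends -- the data table alone determines
    layout, with no per-server blueprint logic.
    """
    placements, next_slot = [], 0
    for row in rows:
        placements.append((row["name"], next_slot))
        next_slot += row["size_u"]
    return placements
```

In the actual project an Unreal Engine data table plays the role of `SERVER_TABLE`, so adding a server means editing data rather than authoring a new blueprint.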

Behind the scenes, adding a server is an involved, multi-step process. I worked with a large team to design a workflow pipeline that streamlines integration. To support long-term scalability, I also created a documentation guide that explains the process step by step in simple terms, empowering new team members to add servers independently.
The Legwork: making sure everyone knows what's in the project.