On May 11, 2026, the University of Arizona, in conjunction with hardware firm Post-Quantum Tek (PQT), unveiled a commercially viable diffractive optical switch that routes data center traffic entirely through dynamic, computer-generated holograms. The High-Speed Optical Switch (PQT-HOS) completely eliminates the need to convert optical signals into electrical signals for routing, achieving an all-optical, or "Optical-Optical-Optical" (OOO) architecture.
In bench tests evaluated and verified by Microsoft Labs, the computer science department at the University of California, Berkeley, and Texas Instruments, the hardware reconfigures 1,000 times faster than existing optical circuit switches. Crucially, it consumes a thousandth of the energy of traditional electronic switches. By projecting microsecond-reconfigurable diffraction patterns onto a spatial light modulator, the system bends incoming laser light directly to its target output fiber.
The release of the PQT-HOS marks the culmination of more than 15 years of research funded by the National Science Foundation’s Center for Integrated Access Networks (CIAN) and recent Department of Energy SBIR grants. Dr. Pierre-Alexandre Blanche, a leading researcher in diffraction optics at the University of Arizona and the core architect behind the switch, said at the unveiling: "Solving the energy crisis in data centers is central to my mission. The diffractive (holographic) optical switch sits at the core of a more sustainable path for cloud computing and AI infrastructure."
This announcement lands at a critical threshold for the hyperscale computing industry. Traditional data center networking relies heavily on electronic packet switching, a process that is running headlong into hard physical and thermal limits. As the deployment of the PQT-HOS moves from academic verification into early commercial prototyping, the architecture of the modern artificial intelligence data center is facing a structural rewrite.
The Impending Grid Collapse and the "OEO Tax"
To understand the necessity of this technology, one must examine the baseline mechanics of how modern AI clusters communicate. When tens of thousands of Graphics Processing Units (GPUs) are networked together to train a massive artificial intelligence model, they operate as a single distributed computer. This requires continuous, synchronized data sharing across the entire cluster through collective operations such as "all-to-all" exchanges and "ring all-reduce" reductions.
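For readers unfamiliar with the collective, the sketch below shows the ring all-reduce pattern with plain numpy arrays standing in for GPU gradient buffers (production frameworks use libraries such as NCCL); every assignment inside the two loops corresponds to a neighbor-to-neighbor transfer that the switching fabric must carry.

```python
import numpy as np

def ring_all_reduce(grads):
    """Sum equal-length gradient vectors across n workers via ring exchanges."""
    n = len(grads)
    chunks = [np.array_split(g.astype(float), n) for g in grads]

    # Reduce-scatter: after n-1 neighbor exchanges, worker i holds the
    # fully summed chunk (i + 1) % n.
    for step in range(n - 1):
        for i in range(n):
            idx = (i - step) % n
            chunks[(i + 1) % n][idx] += chunks[i][idx]
    # All-gather: circulate the completed chunks around the ring.
    for step in range(n - 1):
        for i in range(n):
            idx = (i + 1 - step) % n
            chunks[(i + 1) % n][idx] = chunks[i][idx]
    return [np.concatenate(c) for c in chunks]

workers = [np.full(8, i + 1.0) for i in range(4)]   # worker i holds all (i+1)s
print(ring_all_reduce(workers)[0])                  # every worker ends with 1+2+3+4 = 10
```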
Currently, this communication relies on fiber optic cables connecting servers to massive racks of electronic Ethernet or InfiniBand switches. When a pulse of light carrying data arrives at a conventional switch, it cannot be routed as light. It must undergo an Optical-Electrical-Optical (OEO) conversion.
Inside the transceiver, a photodetector absorbs the photons and converts them into an electrical current. This analog electrical signal is digitized by a Digital Signal Processor (DSP), fed into a massive switching Application-Specific Integrated Circuit (ASIC) that reads the packet headers, and electrically routed to the correct output port. Finally, a laser modulates this electrical signal back into light to send it down the next fiber.
This OEO conversion is colloquially known in the networking industry as the "OEO tax." It taxes the system in two ways: latency and power. Every time a packet is converted from light to electricity and back again, it adds nanoseconds or microseconds of delay. In a highly synchronized AI training run, these micro-delays compound, leaving multi-million-dollar GPUs sitting idle waiting for data to arrive.
More critically, the OEO process is highly energy-intensive. A modern 800-Gigabit optical transceiver consumes roughly 15 watts of power. In a data center containing 100,000 GPUs, there may be hundreds of thousands of transceivers. The power required just to run these transceivers and the accompanying switching ASICs cascades into megawatts. Furthermore, these electrical components generate immense heat, requiring energy-hungry liquid cooling systems and massive HVAC units to prevent the silicon from melting.
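A back-of-envelope tally shows how quickly this reaches megawatts; the module wattage comes from the figures above, while the fleet size and cooling multiplier are illustrative assumptions:

```python
# Illustrative back-of-envelope for transceiver power in an AI cluster.
transceiver_watts = 15        # one 800G pluggable module (from the text)
transceiver_count = 400_000   # "hundreds of thousands" in a 100,000-GPU cluster (assumed)
cooling_multiplier = 1.4      # assumed overhead to remove the heat they dissipate

direct_mw = transceiver_watts * transceiver_count / 1e6
print(f"transceivers alone: {direct_mw:.1f} MW")                          # 6.0 MW
print(f"with cooling overhead: {direct_mw * cooling_multiplier:.1f} MW")  # 8.4 MW
```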
The macroscopic impact of this architecture is severe. In 2024, data centers accounted for approximately 4% of total electricity consumption in the United States. Due almost entirely to the density and networking requirements of AI training, aggressive forecasts project that figure will reach 12% by 2030. Utility companies are already delaying new hyperscale builds due to grid constraints. The PQT-HOS bypasses this bottleneck entirely by keeping the data in the optical domain from end to end, generating virtually zero heat at the switching layer.
Inside the Machine: The Physics of Diffractive Routing
To bypass the OEO tax, researchers have long sought a purely optical switch. The concept of holographic data center interconnects relies on the physics of diffraction rather than physical mechanics or electrical logic.
Traditional electronic switching uses logic gates to move bits. Early optical switching, like Google's Apollo platform, uses physical mechanics: micro-electro-mechanical systems (MEMS) with tiny motorized mirrors that physically tilt to bounce a beam of light from one port to another.
The PQT-HOS discards both logic gates and physical mirror tilting in favor of dynamic holography. At the heart of the switch sits a Spatial Light Modulator (SLM), which can be based on technologies like a Texas Instruments Digital Light Processing (DLP) chip or a Liquid Crystal on Silicon (LCoS) array. Instead of moving a mirror to redirect a beam, the SLM chip displays a Computer Generated Hologram (CGH).
When a laser beam carrying hundreds of gigabits of data strikes this chip, the light interacts with the microscopic diffraction pattern displayed on the surface. The physics of constructive and destructive interference instantly bends the beam at a precise angle, directing it perfectly into the correct output fiber.
Because the hologram is entirely software-defined and generated digitally on the SLM, the switch can be reconfigured by simply displaying a new image on the chip. The light never stops, never converts to electricity, and never waits for a network packet header to be read. The data flows continuously at the speed of light.
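The steering itself follows the classic grating equation, sin θ = λ/Λ, where λ is the wavelength and Λ the period of the displayed pattern. A quick sketch, assuming a 1.55 µm telecom wavelength and an 8 µm pixel pitch (both illustrative figures, not PQT specifications), shows the range of angles a pixelated hologram can command:

```python
import math

wavelength_um = 1.55    # C-band telecom laser (assumed)
pixel_pitch_um = 8.0    # illustrative LCoS pixel pitch

for pixels_per_period in (2, 4, 8, 16):     # coarser gratings steer less
    period_um = pixels_per_period * pixel_pitch_um
    theta = math.degrees(math.asin(wavelength_um / period_um))
    print(f"{pixels_per_period:2d} px/period -> first-order angle {theta:5.2f} deg")
# The finest writable grating (2 px/period) sets the maximum deflection, ~5.6 deg here;
# every intermediate angle is reachable by reprogramming the period in software.
```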
Engineers at the University of Arizona discovered that using amplitude modulation (turning pixels on and off, as seen in standard DLP projectors) resulted in poor diffraction efficiency, losing roughly 90% of the light. The breakthrough that made the PQT-HOS viable was the shift to multi-level phase control using phase-modulation MEMS or LCoS. By altering the phase of the light rather than its amplitude, the switch can direct over 90% of the optical energy into the first diffraction order—meaning the signal arrives at the output port strong, clear, and without unacceptable insertion loss.
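The efficiency gap between the two modulation schemes can be reproduced numerically with a toy one-dimensional model: compare the far field (computed via FFT) of a binary amplitude grating against an 8-level blazed phase grating, and the roughly 10% versus 90%-plus first-order figures fall directly out of the Fourier coefficients.

```python
import numpy as np

period = 32                        # samples per grating period
x = np.arange(period * 64)         # 64 periods of a one-dimensional grating

# Binary amplitude grating: pixels simply on or off (DLP-style projection).
amplitude = ((x % period) < period // 2).astype(float)
# Blazed phase grating: a 0..2*pi staircase ramp with 8 phase levels (LCoS-style).
levels = 8
phase_ramp = np.floor((x % period) / period * levels) / levels * 2 * np.pi
blazed = np.exp(1j * phase_ramp)

def first_order_fraction(field):
    """Fraction of the incident light landing in the +1 diffraction order."""
    spectrum = np.abs(np.fft.fft(field)) ** 2
    return spectrum[64] / len(field) ** 2   # 64 periods -> +1 order sits in FFT bin 64

print(f"amplitude grating:     {first_order_fraction(amplitude):.1%}")  # ~10%
print(f"8-level phase grating: {first_order_fraction(blazed):.1%}")     # ~95%
```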
This non-blocking N x N architecture is also inherently scalable and resilient. In traditional reflective MEMS switches, if a single motorized mirror fails, that entire network port is permanently dead. In a diffractive holographic switch, the routing is performed by a distributed pattern of thousands of pixels across the chip. If a cluster of pixels fails, the algorithm simply recalculates the computer-generated hologram to compensate, producing a slightly modified pattern that still routes the light cleanly.
Why Millisecond Switching Failed the AI Era
The concept of Optical Circuit Switching (OCS) is not new, but its historical limitations prevented it from replacing traditional Ethernet in dynamic environments.
In the early 2020s, Google achieved massive success by deploying its Apollo OCS platform across its data center backbones. Apollo utilized custom-developed motorized MEMS mirrors, optical circulators, and specialized Wavelength-Division Multiplexing (WDM) transceivers. By deploying OCS, Google managed to slash power consumption in its network backbone by 30% and significantly reduce capital expenditures, proving that optical switching could operate reliably at a hyperscale level.
However, Apollo and similar 3D MEMS optical switches suffer from a severe operational constraint: physical inertia. Because they rely on moving mechanical mirrors, the time required to reconfigure a connection from one server to another is measured in milliseconds.
For static network backbones, where massive trunks of data flow between data center buildings for hours or days at a time, a millisecond switching time is perfectly acceptable. But inside an AI training cluster, data flows are highly dynamic. GPUs communicate in rapid, bursty "elephant flows" that last only microseconds. If an optical switch takes 10 milliseconds to physically rotate a mirror to a new position, the GPUs sit idle during that window. In the economics of AI, where GPU compute time is among the most expensive commodities in the industry, idling for 10 milliseconds millions of times a day is financially catastrophic.
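Some rough arithmetic makes the gap concrete; the daily reconfiguration count below is an assumption, not measured fleet data:

```python
# Illustrative comparison of daily switching dead time.
reconfigs_per_day = 5_000_000     # "millions of times a day" across the fabric (assumed)
mems_stall_s = 0.010              # ~10 ms mechanical mirror settling
holo_stall_s = 12e-6              # 12 us holographic refresh (per the PQT-HOS figure)

for name, stall in (("MEMS OCS", mems_stall_s), ("holographic", holo_stall_s)):
    link_hours = reconfigs_per_day * stall / 3600
    print(f"{name:12s}: {link_hours:8.2f} link-hours of dead time per day")
# MEMS OCS    :    13.89 link-hours/day of circuits sitting dark mid-reconfiguration
# holographic :     0.02 link-hours/day
```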
This is exactly where the PQT-HOS deployment changes the paradigm. Because a liquid crystal array or a high-speed micro-mirror array can refresh its pixels rapidly, the diffractive holographic switch boasts a reconfiguration time of just 12 microseconds, a speedup of roughly three orders of magnitude over mechanical OCS platforms.
At 12 microseconds, holographic data center interconnects cross the threshold of viability for dynamic, intra-cluster AI workloads. They are fast enough to integrate directly into the leaf and spine layers of the data center, dynamically provisioning high-bandwidth optical pipes between specific GPU racks precisely when the scheduling algorithm demands it.
The Cost of Electrons: Calculating the Megawatt Savings
The commercialization of the PQT-HOS is driven almost entirely by power economics. By eliminating the DSPs, the network switching ASICs, and the OEO conversion process, holographic switching fundamentally alters the power footprint of a data center.
Traditional electronic switching technologies consume roughly 20 picojoules of energy per bit of data transferred (20 pJ/bit). At the scale of modern AI networking, where switches must push hundreds of terabits per second, that per-bit cost compounds into kilowatts per switch and megawatts per facility. According to benchmarking verified by the University of Arizona's College of Optical Sciences, the holographic optical switch reduces that energy footprint to just 1 picojoule per bit (1 pJ/bit).
This 95% to 99.9% reduction in switching energy translates into massive absolute numbers. If a traditional data center network consumes 20 megawatts of power to support a massive AI cluster, swapping the core networking fabric to diffractive optical switches could reclaim over 18 megawatts of power.
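Per switching element, the arithmetic looks like this; the 51.2 Tb/s capacity is an assumed figure, typical of a current top-of-line switching ASIC, while the pJ/bit values come from the text:

```python
# Energy-per-bit arithmetic for a single 51.2 Tb/s switching element.
def switch_watts(pj_per_bit, tbps):
    return pj_per_bit * 1e-12 * tbps * 1e12   # (J/bit) * (bit/s) = W

for name, pj in (("electronic ASIC", 20.0), ("holographic", 1.0)):
    print(f"{name:16s}: {switch_watts(pj, 51.2):7.1f} W per 51.2 Tb/s element")
# electronic ASIC :  1024.0 W
# holographic     :    51.2 W
# Repeated across the hundreds of such elements in a 20 MW fabric, the ~95% delta
# is where the "over 18 megawatts" reclaimed figure comes from.
```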
That reclaimed power does not simply lower the utility bill. In a power-constrained hyperscale environment, every megawatt saved on networking overhead is a megawatt that can be redirected to powering additional GPUs. Furthermore, because optical routing generates minimal heat compared to the intense thermal output of switching ASICs, the secondary energy savings from reduced cooling requirements—fewer chiller plants, lower water consumption, and reduced air handling—often equal the primary electrical savings.
Dr. Blanche previously noted that adopting microsecond optical switching across domestic data centers could theoretically save hundreds of terawatt-hours of electricity over a decade. As AI models scale from billions to trillions of parameters, this transition moves from an ecological goal to a strict mathematical requirement for continued scaling.
Erasing the Clos Network: A New Topology for Intelligence
The integration of high-speed holographic routing allows data center architects to redesign the physical layout of the network. For decades, data centers have been built using a "folded-Clos" or "Fat-Tree" topology.
In a Clos network, servers are connected to "Top of Rack" (ToR) leaf switches, which are connected to spine switches, which are in turn connected to core switches. When Server A needs to talk to Server B in another rack, the data packet must hop up the hierarchy, be electronically processed and routed at every layer, and then travel back down. This introduces latency, requires massive over-provisioning of expensive switches, and results in a highly rigid infrastructure.
With holographic data center interconnects, this rigid hierarchy can be flattened. Because the optical switch is agnostic to data rates and protocols—meaning it doesn't care if the light is carrying 100 Gigabits, 800 Gigabits, or 3.2 Terabits, nor does it care if the protocol is Ethernet or InfiniBand—it operates as a pure, transparent pipe.
Simulations using realistic data center workloads have shown that augmenting or replacing a folded-Clos network with a diffractive optical cross-connect improves mean flow completion time by 30% to 95%, while simultaneously reducing the capital expenditure on network hardware by up to 40%.
Instead of forcing data to climb a hierarchical tree, the central software-defined controller can observe the traffic demands of the AI cluster in real-time. If Rack 1 suddenly needs to dump petabytes of training data to Rack 15, the network controller simply updates the hologram on the central switch. Instantly, a direct, dedicated, speed-of-light optical pipe is established between the two racks. When the transfer is complete 12 microseconds later, the hologram shifts again, reallocating that bandwidth elsewhere.
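A hypothetical controller loop for this pattern might look like the following sketch. The device API, port-to-angle map, and SLM resolution are all invented for illustration and imply nothing about PQT's actual software:

```python
import numpy as np

SLM_SHAPE = (1080, 1920)        # assumed LCoS resolution
PITCH = 8e-6                    # assumed pixel pitch, meters
WAVELENGTH = 1.55e-6            # C-band laser, meters

# Toy mapping from destination rack to a deflection angle (radians).
port_angle = {rack: (0.001 * rack, 0.0) for rack in range(64)}

def steering_hologram(theta_x, theta_y):
    """Linear phase ramp (mod 2*pi) that deflects the beam by the given angles."""
    yy, xx = np.indices(SLM_SHAPE)
    k = 2 * np.pi / WAVELENGTH
    return np.mod(k * PITCH * (np.sin(theta_x) * xx + np.sin(theta_y) * yy), 2 * np.pi)

class FakeSLM:
    """Stand-in for a real device driver; this API is invented for illustration."""
    def display(self, phase):
        print(f"hologram updated: {phase.shape}, mean {phase.mean():.2f} rad")

def on_demand(src_rack, dst_rack, slm):
    # A real fabric would address the hologram region lit by src_rack's input
    # beam; here one pattern stands in for the whole switch.
    slm.display(steering_hologram(*port_angle[dst_rack]))

on_demand(src_rack=1, dst_rack=15, slm=FakeSLM())   # ~12 us later, bandwidth moves on
```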
This creates a dynamic, liquid network topology where bandwidth is summoned on demand, rather than being statically wired in a permanent, inefficient hierarchy.
Holographic Data Center Interconnects: The Supply Chain Reality
Transitioning a breakthrough from a university optical bench to a global supply chain requires robust manufacturing ecosystems. The viability of the PQT-HOS rests heavily on the fact that it leverages existing, mature semiconductor manufacturing techniques.
The core component—the spatial light modulator—does not require exotic, unproven fabrication methods. Texas Instruments has been refining its Digital Light Processing (DLP) technology for decades, primarily for use in digital projectors and cinema displays. Similarly, Liquid Crystal on Silicon (LCoS) technology, provided by companies like Holoeye, is a mature industry standard used in everything from augmented reality headsets to telecommunications wavelength-selective switches (WSS).
By utilizing these existing foundries, Post-Quantum Tek and its partners bypass the typical hardware manufacturing "valley of death." The rapid advancement in consumer display technology, which has pushed LCoS and DMD chips to 4K and 8K resolutions, directly benefits holographic networking. In a diffractive optical switch, the number of addressable output locations (and therefore the number of input/output ports) scales with the pixel count of the SLM. Therefore, an 8K LCoS chip originally designed for consumer virtual reality can be adapted to route thousands of concurrent optical channels in a data center.
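The scaling argument can be made concrete with a rough spot-count estimate; the 8-pixel minimum grating period below is an assumption chosen to keep phase quantization and crosstalk acceptable:

```python
# Rough port-count scaling with SLM resolution.
min_px_per_period = 8   # assumed minimum pixels per grating period

for name, (w, h) in {"1080p": (1920, 1080), "4K": (3840, 2160), "8K": (7680, 4320)}.items():
    spots = (w // min_px_per_period) * (h // min_px_per_period)
    print(f"{name:5s} SLM: ~{spots:,} addressable far-field spots")
# 1080p SLM: ~32,400 ... 8K SLM: ~518,400 -> thousands of usable ports
# remain even after generous guard bands between fiber positions.
```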
Furthermore, the supporting infrastructure is actively expanding. Companies like Princeton Optronics are scaling up production of low-power vertical-cavity surface-emitting lasers (VCSELs) suited for data center environments, with teaming partnerships listed through the ARPA-E eXCHANGE program. Chiral Photonics is simultaneously pushing the boundaries of optoelectronic packaging, building the high-density optical input/output (I/O) fiber couplers needed to feed thousands of fiber strands into these central holographic switches.
This convergence of existing display technology, mature laser foundries, and high-density fiber packaging suggests that commercial scale-up could occur much faster than historical networking hardware transitions.
Integration Hurdles: Diffraction Efficiency and Insertion Loss
Despite the successful bench validations by Microsoft and Texas Instruments, scaling holographic switching from a controlled laboratory to a chaotic hyperscale environment presents severe engineering challenges.
The primary physical obstacle is optical insertion loss. When a beam of light passes through any medium or reflects off any surface, a percentage of the photons are lost. In a diffractive system, light that is not perfectly redirected into the primary target port becomes "stray light" or crosstalk. If this stray light enters a neighboring fiber port, it creates noise that degrades the signal integrity.
Early prototypes utilizing amplitude-modulated DLP chips struggled with a diffraction efficiency of roughly 10%, meaning 90% of the laser power was wasted. While the transition to LCoS phase modulators pushed that efficiency far higher, maintaining strict multi-level phase control across fluctuating data center temperatures remains difficult. The liquid crystal alignment in an LCoS chip is highly sensitive to thermal variations. In a server room where ambient temperatures swing with server load and cooling cycles, the phase delay introduced by the liquid crystals can drift, altering the diffraction angle and causing the laser beam to miss the microscopic core of the output fiber.
Additionally, LCoS technology is typically polarization-dependent. Laser light traveling through standard single-mode optical fiber naturally rotates and changes its polarization state over distance. If the holographic switch only operates on one specific polarization of light, the incoming signal must be split, managed, and recombined, which adds physical complexity, cost, and further insertion loss.
Beyond the physics, the software integration poses a massive hurdle. Modern Network Operating Systems (NOS) and Software Defined Networking (SDN) controllers are built to manage electronic packet switches using protocols like BGP (Border Gateway Protocol). Instructing a network controller to stop reading packet headers and instead calculate and project a Fourier transform diffraction pattern onto an LCoS chip requires a total rewrite of network control planes.
Startups and researchers have had to develop custom software platforms, similar in concept to early tools like SPIDER (Software Package for Interconnect Design, Evaluation and Reconstruction) originally developed for optical neural networks, just to generate the holograms fast enough to keep up with network demands. For hyperscalers to adopt this technology fully, these custom control planes must integrate flawlessly into existing orchestration systems like Kubernetes.
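As an illustration of what such a control plane must compute, the sketch below runs a minimal Gerchberg-Saxton loop, a standard textbook phase-retrieval algorithm rather than PQT's proprietary method, to build a phase-only hologram that concentrates light on two chosen output ports. The dead-pixel mask also demonstrates the failure compensation described earlier; all sizes are illustrative.

```python
import numpy as np

N = 256
target = np.zeros((N, N))
target[64, 64] = target[64, 192] = 1.0                  # two desired output ports
dead = np.random.default_rng(0).random((N, N)) < 0.02   # 2% failed pixels, masked out

phase = np.random.default_rng(1).uniform(0, 2 * np.pi, (N, N))
for _ in range(50):
    field = np.where(dead, 0, np.exp(1j * phase))   # SLM plane: unit amplitude, dead pixels dark
    far = np.fft.fft2(field)                        # far field = Fourier transform of SLM plane
    far = target * np.exp(1j * np.angle(far))       # keep the phase, impose the target amplitude
    phase = np.angle(np.fft.ifft2(far))             # propagate back, keep phase only

power = np.abs(np.fft.fft2(np.where(dead, 0, np.exp(1j * phase)))) ** 2
print(f"light on target ports: {(power[64, 64] + power[64, 192]) / power.sum():.1%}")
```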
Co-Packaged Optics and the Elimination of the Pluggable
The introduction of the PQT-HOS diffractive switch accelerates another inevitable shift in hardware: the move toward Co-Packaged Optics (CPO).
Currently, optical transceivers are built as modular plugs—Small Form-factor Pluggable (SFP) or Quad Small Form-factor Pluggable (QSFP) modules—that slot into the front faceplate of a server. Electrical traces run from the GPU or CPU across the motherboard to the faceplate, where the signal is handed off to the pluggable transceiver.
As data rates surpass 800 Gigabits and push toward 1.6 and 3.2 Terabits per second, driving electrical signals across a large motherboard results in unacceptable signal degradation. The industry solution is CPO, where the silicon photonics (the lasers and photodetectors) are moved directly onto the same substrate package as the GPU itself.
When Co-Packaged Optics are combined with holographic data center interconnects, the true potential of photonic computing emerges. If the GPU package emits light directly, and that light travels over a fiber straight into a holographic optical switch, the data center achieves a purely optical path from the processor core of one machine directly to the processor core of another.
This eliminates front-panel pluggable transceivers entirely. The combination strips out multiple layers of electronic retiming and signal amplification, driving latency down toward its absolute theoretical floor: the speed of light in glass.
In this architecture, the data center ceases to look like a collection of distinct, networked computers. Instead, unified by microsecond optical switching, the entire warehouse functions as a single, contiguous processor. High-speed memory protocols like Compute Express Link (CXL) could operate across the optical fabric, allowing a GPU in Rack A to access the high-bandwidth memory of a GPU in Rack Z with the same latency as if they were physically soldered to the same motherboard.
The Algorithmic Impact: Synchronizing Trillion-Parameter Models
The implications of this hardware overhaul extend deep into the mathematics of artificial intelligence training. Training a multi-trillion parameter Large Language Model (LLM) requires distributing the neural network weights across tens of thousands of GPUs.
These distributed environments rely on techniques such as "pipeline parallelism" and "tensor parallelism." GPUs must constantly share their calculated gradients with their peers before the next cycle of computation can begin. If the network is congested, the GPUs enter a state of "blocking": they stop calculating and wait.
Because traditional electronic networks use packet switching, large data flows are broken down into thousands of tiny packets, interleaved, and buffered in switch memory queues. When network traffic spikes, these queues fill up, leading to packet drops, retransmissions, and highly unpredictable latency tails. A single dropped packet in an InfiniBand network can stall a critical Ring All-Reduce operation, halting the entire 100,000-GPU cluster for precious milliseconds.
Diffractive optical switching solves this by providing deterministic latency. When the holographic switch establishes a connection between two nodes, it provides a dedicated, unbuffered physical pipeline of light. There are no packets queuing in a switch buffer, no headers to inspect, and no possibility of a network bottleneck within the switch itself. The latency is simply the propagation time of light through the fiber, roughly 5 nanoseconds per meter in glass, and it remains constant regardless of load.
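That floor is easy to compute, since the only remaining variable is fiber length:

```python
C = 299_792_458        # speed of light in vacuum, m/s
N_GLASS = 1.468        # refractive index of standard single-mode fiber

def fiber_latency_us(meters):
    return meters * N_GLASS / C * 1e6

for span_m in (10, 100, 500):    # typical intra-building rack-to-rack spans
    print(f"{span_m:4d} m of fiber -> {fiber_latency_us(span_m):5.2f} us, at any load")
# 10 m -> 0.05 us, 100 m -> 0.49 us, 500 m -> 2.45 us
```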
This predictability allows AI software engineers to write highly optimized communication primitives. When the networking hardware behaves with perfect mathematical precision, training frameworks like PyTorch or JAX can schedule distributed tensor operations without having to account for unpredictable network jitter. The result is a dramatic increase in GPU utilization rates, allowing a company to train the exact same foundational AI model in a fraction of the time, burning a fraction of the power.
The Path to 2030: Commercialization and Deployment Timelines
With bench validation complete and strong academic and corporate verification backing the PQT-HOS, the industry timeline is shifting rapidly. Dr. Blanche and the Post-Quantum Tek team are expected to move through Tech Launch Arizona, the university's commercialization office, to develop full-scale commercial prototypes aimed at beta deployments by late 2027.
The transition will not occur overnight as a wholesale rip-and-replace of existing Ethernet fabrics. Data center operators traditionally build facilities on a 20-year horizon and cannot afford to rebuild that infrastructure around unproven topologies.
The immediate next step involves parallel data center implementations. Pilot programs, heavily subsidized by joint government and private equity partnerships such as the DOE ARPA-E initiatives and venture capital firms like Tantric Technologies Investments, will test these switches in hybrid architectures. In these initial rollouts, traditional electronic switches will continue to handle small, latency-sensitive control packets, while the diffractive optical switches will be dynamically assigned to carry the massive, high-bandwidth "elephant flows" that make up the bulk of AI training traffic.
Standardization bodies, including the ITU-T and IEEE, are actively evaluating the integration of microsecond optical circuit switching into broader network architectures for the 2030 horizon. As 6G wireless backhaul and real-time volumetric video processing demand terabit-scale edge-to-cloud connectivity, the necessity for ultra-low power routing will expand far beyond the walls of the hyperscaler.
The successful commercialization of this technology fundamentally decouples the growth of compute from the limitations of electrical power transmission. By manipulating phase and diffraction at a microscopic level, data center engineering has finally found a method to route the world's information at the speed of light, ensuring the continued acceleration of artificial intelligence without collapsing the electrical grid that powers it.