The artificial intelligence (AI) boom is redefining digital infrastructure and exposing the limits of traditional approaches to data center space, power, and connectivity. The financial sector’s experience in navigating similar trading infrastructure challenges offers a blueprint for companies building out AI capabilities. By looking at the evolution of data center strategy in financial markets, these companies can identify ways to develop scalable, cost-effective, and resilient infrastructure. Here’s how.
Why Traditional Co-Location Models Aren’t Ideal for AI Servers
AI workloads have introduced unprecedented demands for power and cooling, rendering many premium data centers either obsolete or cost-prohibitive. Modern AI servers can draw three to four times more power than previous generations of hardware, which means firms hit power limits long before they run out of physical space. The financial industry’s experience with prime exchange co-location facilities offers a direct parallel: these sites impose constraints that are now becoming apparent to the broader AI market.
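To make the power-versus-space trade-off concrete, here is a minimal sketch using assumed, illustrative numbers (the site budget, rack counts, and per-rack draws below are hypothetical, not vendor or facility specs). It shows why a three-to-four-fold jump in per-rack draw turns power, not floor space, into the binding constraint.

```python
# Illustrative sketch: how many racks a site can actually host is the
# lesser of its power limit and its physical floor capacity.
# All numbers below are assumptions for the sake of the example.

def racks_supported(site_power_kw: float, rack_draw_kw: float,
                    floor_capacity_racks: int) -> int:
    """Racks a site can host: the lesser of the power and space limits."""
    power_limited = int(site_power_kw // rack_draw_kw)
    return min(power_limited, floor_capacity_racks)

SITE_POWER_KW = 2_000   # assumed site power budget
FLOOR_RACKS = 200       # assumed physical rack positions

# Legacy gear at ~10 kW/rack: the floor fills up before power runs out.
legacy = racks_supported(SITE_POWER_KW, rack_draw_kw=10,
                         floor_capacity_racks=FLOOR_RACKS)
# Dense AI gear at ~40 kW/rack: power caps out with most of the floor empty.
ai = racks_supported(SITE_POWER_KW, rack_draw_kw=40,
                     floor_capacity_racks=FLOOR_RACKS)

print(legacy)  # 200 racks (space-limited)
print(ai)      # 50 racks (power-limited)
```

At four times the per-rack draw, the same power envelope supports only a quarter of the racks, which is the dynamic driving firms out of capacity-constrained premium facilities.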
First is the exorbitant cost: a premium paid for ultra-low latency that many AI training and inference applications simply don’t need. Second is the shortage of power and space. Many of these prime facilities are at capacity and need further investment to expand, a situation seen in hubs such as Secaucus NY4 and the NYSE Mahwah data center. Third is slow innovation. Changes in exchange-owned and -managed data centers require regulatory approval from bodies like the SEC and CFTC, a process that can slow the adoption of new technologies needed to support dense AI hardware.
Finding the Right Balance
When organizations face these constraints, a common reaction is to seek out remote locations with cheap power. However, this approach introduces latency and reliability risks. A data strategy that relies entirely on long-haul subsea cables is vulnerable to outages that can create gaps in operations, and an AI model is only as good as the data it can reliably access. The network connecting a data center is just as important as the computing infrastructure within it.
This becomes critical during periods of high market volatility (events like elections or economic announcements) when data volumes can spike sharply. During one such period, exchanges proactively warned infrastructure partners to upgrade their network capacity to handle an anticipated data surge, demonstrating that even the exchanges anticipate moments when unprepared networks will fail.
A happy medium is the “proximity” model: establishing a data center presence close enough to major financial and connectivity hubs to ensure low latency and reliable network performance, yet outside the premium-cost radius of primary exchange data centers. Not every workload requires microsecond-level latency, but all need dependable, high-bandwidth connectivity to function effectively. Companies can implement this approach through proximity data center migrations, leveraging dark fiber to connect to exchange co-location sites. This lets them reduce power costs while remaining within microseconds of critical market data and applications. The surge in market data volumes, with exchanges reporting exponential growth in data feeds, only increases the need for this kind of efficient infrastructure.
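The “within microseconds” claim follows directly from physics. Light in single-mode fiber travels at roughly c divided by the fiber’s refractive index (about 1.47), i.e. around 5 microseconds per kilometer one way. The sketch below estimates one-way propagation delay for a few assumed route lengths; route distances and the exact refractive index are illustrative, and real routes add latency for fiber slack, amplifiers, and switching.

```python
# Back-of-envelope propagation delay over dark fiber.
# Physical constants are standard; route lengths are assumed examples.

C_VACUUM_KM_PER_S = 299_792.458
FIBER_REFRACTIVE_INDEX = 1.47  # typical single-mode fiber
FIBER_SPEED_KM_PER_S = C_VACUUM_KM_PER_S / FIBER_REFRACTIVE_INDEX  # ~204,000 km/s

def one_way_latency_us(route_km: float) -> float:
    """One-way propagation delay in microseconds for a fiber route."""
    return route_km / FIBER_SPEED_KM_PER_S * 1e6

# A proximity site a few tens of km from an exchange hub stays in the
# tens-to-low-hundreds of microseconds, one way.
for km in (5, 30, 80):
    print(f"{km} km route: ~{one_way_latency_us(km):.0f} us one-way")
```

A 5 km route adds roughly 25 microseconds one way; even an 80 km route stays well under half a millisecond, which is why proximity sites can serve all but the most latency-critical workloads.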
Right-Sizing Infrastructure for AI Scale
The current AI infrastructure challenge is a problem the financial industry has already solved. The trading industry initially made the costly error of concentrating everything in the most expensive co-location space. Latency-sensitive trading engines sat alongside back-office systems and data normalization servers, all consuming premium power and space.
Over time, firms learned a valuable lesson and adopted a more intelligent, hybrid approach. Today, they place only the most latency-sensitive applications in premium co-location facilities. Compute-heavy workloads that are less latency-sensitive are offloaded to more cost-effective proximity locations. This “right-sizing” strategy is directly applicable to companies building out AI infrastructure. It allows them to scale powerful systems without paying an unnecessary premium for every workload.
With AI, everything comes down to scale and cost-effectiveness. The financial markets’ data center expansion showed that clients who had placed non-latency-sensitive applications and back-office compute functions in premium exchange co-location facilities faced higher costs and reduced access to power.
The lessons are straightforward. Adopting a thoughtful, tiered approach to infrastructure allows companies to build more resilient and scalable AI operations cost-effectively. The financial industry’s journey provides a proven approach for supporting the next generation of AI technology.
Rick Gilbody is Global Head of Sales and Marketing at TNS Financial Markets. His expertise includes data center colocation, low latency exchange infrastructure, network design and real-time market data.