Biotechnology

How AI Is Helping Decode the Tumor Microenvironment — and What It Means for Cancer Care

A closer look at how machine intelligence is helping doctors see cancer in an entirely new light.

Updated

November 28, 2025 4:18 PM

Serratia marcescens colonies on BTB agar medium. PHOTO: UNSPLASH

Artificial intelligence is beginning to change how scientists understand cancer at the cellular level. In a new collaboration, Bio-Techne Corporation, a global life sciences tools provider, and Nucleai, an AI company specializing in spatial biology for precision medicine, have unveiled data from the SECOMBIT clinical trial that could reshape how doctors predict cancer treatment outcomes. The results, presented at the Society for Immunotherapy of Cancer (SITC) 2025 Annual Meeting, highlight how AI-powered analysis of tumor environments can reveal which patients are more likely to benefit from specific therapies.

Conducted in collaboration with Professor Paolo Ascierto of the University of Naples Federico II and the Istituto Nazionale Tumori IRCCS Fondazione Pascale, the study explores how spatial biology — the science of mapping where and how cells interact within tissue — can uncover subtle immune behaviors linked to survival in melanoma patients.

Using Bio-Techne’s COMET platform and a 28-plex multiplex immunofluorescence panel, researchers analyzed 42 pre-treatment biopsies from patients with metastatic melanoma, an advanced stage of skin cancer. Nucleai’s multimodal AI platform integrated these imaging results with pathology and clinical data to trace patterns of immune cell interactions inside tumors.
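
To make that kind of workflow concrete, the short Python sketch below is a toy illustration rather than Bio-Techne's COMET pipeline or Nucleai's platform: it generates synthetic per-cell marker intensities, thresholds them into phenotypes such as PD-L1+ CD8 T-cells, and reports a density per square millimeter. The marker names, positivity cutoff and field size are all assumptions made for the example.

```python
# Toy sketch (not Bio-Techne's or Nucleai's pipeline): phenotype cells from
# synthetic multiplex-immunofluorescence-style data by thresholding marker
# intensities, then count marker-positive T-cell populations per mm^2.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_cells = 5_000
cells = pd.DataFrame({
    "x_um": rng.uniform(0, 2_000, n_cells),    # cell centroid, micrometers
    "y_um": rng.uniform(0, 2_000, n_cells),
    "CD8":  rng.lognormal(0.0, 1.0, n_cells),  # simulated marker intensities
    "CD4":  rng.lognormal(0.0, 1.0, n_cells),
    "PD1":  rng.lognormal(0.0, 1.0, n_cells),
    "PDL1": rng.lognormal(0.0, 1.0, n_cells),
})

THRESH = 2.0  # illustrative positivity cutoff; real panels are gated per marker

# Call simple phenotypes from marker co-positivity.
cells["pdl1_pos_cd8"] = (cells["CD8"] > THRESH) & (cells["PDL1"] > THRESH)
cells["pd1_pos_cd8"]  = (cells["CD8"] > THRESH) & (cells["PD1"] > THRESH)
cells["pdl1_pos_cd4"] = (cells["CD4"] > THRESH) & (cells["PDL1"] > THRESH)

area_mm2 = 2.0 * 2.0  # assumed 2 mm x 2 mm synthetic field
for phenotype in ["pdl1_pos_cd8", "pd1_pos_cd8", "pdl1_pos_cd4"]:
    density = cells[phenotype].sum() / area_mm2
    print(f"{phenotype}: {density:.1f} cells/mm^2")
```

Real panels involve cell segmentation, per-marker gating and quality control that this sketch omits; the point is only to show the kind of per-cell table that spatial analysis starts from.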

The findings revealed that therapy sequencing significantly influences immune activity and patient outcomes. Patients who received targeted therapy followed by immunotherapy showed stronger immune activation, marked by higher levels of PD-L1+ CD8 T-cells and ICOS+ CD4 T-cells. Those who began with immunotherapy benefited most when PD-1+ CD8 T-cells engaged closely with PD-L1+ CD4 T-cells along the tumor’s invasive edge. Meanwhile, in patients alternating between targeted and immune treatments, beneficial antigen-presenting cell (APC) and T-cell interactions appeared near tumor margins, whereas macrophage activity in the outer tumor environment pointed to poorer prognosis.
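
As a rough illustration of how a cell-to-cell "interaction" near the invasive edge might be quantified, the toy Python sketch below (not the SECOMBIT analysis itself) measures the fraction of simulated PD-1+ CD8 cells near an assumed margin that lie within an arbitrary 30-micrometer radius of a PD-L1+ CD4 cell. The margin definition, radius and coordinates are invented for the example.

```python
# Toy spatial "interaction" score (illustrative only, not the SECOMBIT
# analysis): the fraction of PD-1+ CD8 cells that have at least one
# PD-L1+ CD4 cell within an interaction radius, restricted to cells
# near an assumed invasive margin at y = 0.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
pd1_cd8  = rng.uniform(0, 1_000, size=(400, 2))  # (x, y) in micrometers
pdl1_cd4 = rng.uniform(0, 1_000, size=(300, 2))

MARGIN_BAND_UM = 200.0  # "near the margin" = within 200 um of y = 0 (assumption)
INTERACTION_UM = 30.0   # illustrative cell-cell interaction radius

near_margin = pd1_cd8[pd1_cd8[:, 1] < MARGIN_BAND_UM]
tree = cKDTree(pdl1_cd4)

# For each margin-adjacent PD-1+ CD8 cell, find its nearest PD-L1+ CD4 neighbor.
dist, _ = tree.query(near_margin, k=1)
engaged = (dist <= INTERACTION_UM).mean() if len(near_margin) else 0.0
print(f"{len(near_margin)} PD-1+ CD8 cells near margin; "
      f"{engaged:.1%} within {INTERACTION_UM:.0f} um of a PD-L1+ CD4 cell")
```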

“This study exemplifies how our innovative spatial imaging and analysis workflow can be applied broadly to clinical research to ultimately transform clinical decision-making in immuno-oncology”, said Matt McManus, President of the Diagnostics and Spatial Biology Segment at Bio-Techne.

The collaboration between the two companies underscores how AI and high-plex imaging together can help decode complex biological systems. As Avi Veidman, CEO of Nucleai, explained, “Our multimodal spatial operating system enables integration of high-plex imaging, data and clinical information to identify predictive biomarkers in clinical settings. This collaboration shows how precision medicine products can become more accurate, explainable and differentiated when powered by high-plex spatial proteomics – not limited by low-plex or H&E data alone”.

Dr. Ascierto described the SECOMBIT trial as “a milestone in demonstrating the possible predictive power of spatial biomarkers in patients enrolled in a clinical study”.

The study’s broader message is clear: understanding where immune cells are and how they interact inside a tumor could become just as important as knowing what they are. As AI continues to map these microscopic landscapes, oncology may move closer to genuinely personalized treatment — one patient, and one immune network, at a time.

AI

The Real Cost of Scaling AI: How Supermicro and NVIDIA Are Rebuilding Data Center Infrastructure

The hidden cost of scaling AI: infrastructure, energy, and the push for liquid cooling.

Updated

December 16, 2025 3:43 PM

The inside of a data centre, with rows of server racks. PHOTO: FREEPIK

As artificial intelligence models grow larger and more demanding, the quiet pressure point isn’t the algorithms themselves—it’s the AI infrastructure that has to run them. Training and deploying modern AI models now requires enormous amounts of computing power, which creates a different kind of challenge: heat, energy use and space inside data centers. This is the context in which Supermicro and NVIDIA’s collaboration on AI infrastructure begins to matter.

Supermicro designs and builds large-scale computing systems for data centers. It has now expanded its support for NVIDIA’s Blackwell generation of AI chips with new liquid-cooled server platforms built around the NVIDIA HGX B300. The announcement isn’t just about faster hardware. It reflects a broader effort to rethink how AI data center infrastructure is built as facilities strain under rising power and cooling demands.

At a basic level, the systems are designed to pack more AI chips into less space while using less energy to keep them running. Instead of relying mainly on air cooling (fans, chillers and large amounts of electricity), these liquid-cooled AI servers circulate liquid directly across critical components. That approach removes heat more efficiently, allowing servers to run denser AI workloads without overheating or wasting energy.
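
A back-of-the-envelope calculation, using assumed textbook values rather than Supermicro specifications, shows why. With the standard relation Q = ṁ × c_p × ΔT, carrying away a hypothetical 100 kW rack's heat at a 10-degree coolant temperature rise takes only a couple of liters of water per second, versus thousands of liters of air:

```python
# Back-of-the-envelope comparison (assumed textbook values, not Supermicro
# specs): coolant flow needed to carry away a 100 kW rack's heat at a 10 K
# temperature rise, using Q = m_dot * c_p * delta_T.
RACK_HEAT_W = 100_000.0  # assumed rack heat load (W)
DELTA_T_K = 10.0         # assumed coolant temperature rise (K)

coolants = {
    #         c_p (J/kg*K)  density (kg/m^3)
    "air":   (1005.0,       1.2),
    "water": (4186.0,       997.0),
}

for name, (cp, rho) in coolants.items():
    mass_flow = RACK_HEAT_W / (cp * DELTA_T_K)  # kg/s
    volume_flow = mass_flow / rho               # m^3/s
    print(f"{name:>5}: {mass_flow:6.2f} kg/s  ({volume_flow * 1000:8.2f} L/s)")
```

Under those assumptions, water moves the same heat with thousands of times less volumetric flow, which is the basic physics behind direct liquid cooling.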

Why does that matter outside a data center? Because AI doesn’t scale in isolation. As models become more complex, the cost of running them rises quickly: not just in hardware budgets, but in electricity use, water consumption and physical footprint. Traditional air cooling is increasingly a bottleneck, limiting how far AI systems can grow before energy and infrastructure costs spiral.

This is where the Supermicro–NVIDIA partnership fits in. NVIDIA supplies the computing engines—the Blackwell-based GPUs designed to handle massive AI workloads. Supermicro focuses on how those chips are deployed in the real world: how many GPUs can fit in a rack, how they are cooled, how quickly systems can be assembled and how reliably they can operate at scale in modern data centers. Together, the goal is to make high-density AI computing more practical, not just more powerful.

The new liquid-cooled designs are aimed at hyperscale data centers and so-called AI factories—facilities built specifically to train and run large AI models continuously. By increasing GPU density per rack and removing most of the heat through liquid cooling, these systems aim to ease a growing tension in the AI boom: the need for more computing power without an equally dramatic rise in energy waste.
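
To put numbers on that tension, here is a simple, hypothetical power-usage-effectiveness (PUE) comparison; the 1.5 and 1.1 ratios and the 10 MW IT load are illustrative assumptions, not figures reported by Supermicro or NVIDIA:

```python
# Illustrative PUE arithmetic (assumed values, not vendor figures): annual
# facility energy for a fixed IT load under an air-cooled vs a liquid-cooled
# overhead ratio. Facility energy = IT energy * PUE.
IT_LOAD_MW = 10.0
HOURS_PER_YEAR = 8_760

scenarios = {"air-cooled (PUE 1.5)": 1.5, "liquid-cooled (PUE 1.1)": 1.1}

for label, pue in scenarios.items():
    facility_mwh = IT_LOAD_MW * pue * HOURS_PER_YEAR
    overhead_mwh = IT_LOAD_MW * (pue - 1.0) * HOURS_PER_YEAR
    print(f"{label}: {facility_mwh:,.0f} MWh/yr total, "
          f"{overhead_mwh:,.0f} MWh/yr of cooling and other overhead")
```

Under these assumed numbers, the lower overhead ratio alone avoids on the order of 35,000 MWh of non-compute energy per year for the same IT load.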

Just as important is speed. Large organizations don’t want to spend months stitching together custom AI infrastructure. Supermicro’s approach packages compute, networking and cooling into pre-validated data center building blocks that can be deployed faster. In a world where AI capabilities are advancing rapidly, time to deployment can matter as much as raw performance.

Stepping back, this development says less about one product launch and more about a shift in priorities across the AI industry. The next phase of AI growth isn’t only about smarter models—it’s about whether the physical infrastructure powering AI can scale responsibly. Efficiency, power use and sustainability are becoming as critical as speed.