Graph Foundation Model for Scientific Porous Media Data

GIFT-KASTL

Graph-In-Fracture Transformer with a Kolmogorov-Arnold Superposition Layer

A graph foundation model for discrete fracture networks in porous media.

GIFT-KASTL combines a graph transformer backbone for graph-wide representation learning with a structured downstream refinement layer inspired by the Kolmogorov-Arnold Superposition Theorem. The goal is to model complex fracture-network data in a way that is expressive, scientifically meaningful, and architecturally disciplined.

GIFT-KASTL system overview
Project abstract

A graph foundation model with structured nonlinear refinement.

GIFT-KASTL is a graph foundation model for discrete fracture networks in porous media. The system combines a graph transformer backbone for global graph representation learning with a Kolmogorov-Arnold Superposition Layer (KASTL) applied at the output stage for structured nonlinear refinement. This design aims to produce scientifically meaningful predictions on graph-structured fracture data while preserving a clear mathematical interpretation of the downstream refinement mechanism.

Why this matters

Scientific fracture systems are graph-structured, global, and hard to model well.

Discrete fracture networks in porous media naturally form graph-structured scientific data, where prediction depends on both local relationships and long-range graph connectivity. Standard pipelines often struggle to capture that structure cleanly. GIFT-KASTL is built to address this challenge with a two-stage design: a graph transformer learns graph-wide representations, and a KASTL refinement layer introduces structured nonlinear composition at the output stage.

Scientific setting

The project targets porous media and fracture-network data, where geometry, connectivity, and interaction structure matter simultaneously.

System design

The graph transformer acts as the global representation learner, aggregating information across the fracture graph before prediction refinement.

Mathematical identity

KASTL gives the model a distinctive downstream refinement mechanism inspired by the Kolmogorov-Arnold Superposition Theorem.

Architecture

Two stages: graph-wide learning first, structured refinement second.

GIFT-KASTL begins with graph fracture data and preprocessing, then uses a graph transformer backbone to learn global graph representations. The final prediction is refined through a Kolmogorov-Arnold-inspired downstream layer applied at the output stage.

Proposed architecture

A closer view of the graph foundation model pipeline.

This view highlights the internal flow of the proposed foundation model. Graph-structured fracture data is first encoded into node and edge representations, processed through a message-passing and graph transformer backbone, and then refined by the KASTL module to produce the final scientific prediction.

Proposed foundation model architecture for GIFT-KASTL

Proposed foundation model architecture. The pipeline begins with graph fracture data, builds node and edge representations, applies graph transformer message passing and training, and uses KASTL as the final structured refinement layer before prediction. This figure emphasizes the two-part system design: graph-wide representation learning first, theorem-inspired nonlinear refinement second.
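The two-stage flow can be sketched in plain PyTorch. The names below (`ToyGraphBackbone`, `GiftKastlSketch`) are hypothetical stand-ins for illustration, not the project's actual implementation; the backbone is reduced to a single round of adjacency-weighted message passing, and the refinement step uses the truncated inner-outer composition described in the KASTL section.

```python
import torch
import torch.nn as nn


class ToyGraphBackbone(nn.Module):
    """Stand-in for the graph transformer backbone: one round of
    adjacency-weighted message passing followed by a linear readout."""

    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, hidden_dim)
        self.readout = nn.Linear(hidden_dim, 1)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.proj(adj @ x))   # aggregate neighbor features
        return self.readout(h.mean(dim=0))   # graph-level scalar prediction


class GiftKastlSketch(nn.Module):
    """Two-stage sketch: backbone prediction, then KASTL-style refinement."""

    def __init__(self, in_dim: int, hidden_dim: int = 16):
        super().__init__()
        self.backbone = ToyGraphBackbone(in_dim, hidden_dim)

    def refine(self, y: torch.Tensor) -> torch.Tensor:
        # Inner/outer composition from the KASTL formulation.
        z = torch.atan(torch.sinh(y))
        return z + z**3 / 6 + z**5 / 24 + 61 * z**7 / 5040

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        return self.refine(self.backbone(x, adj))
```

Any real backbone producing a per-graph prediction could replace `ToyGraphBackbone`; the point of the sketch is only the ordering: representation learning first, structured refinement second.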

KASTL

Kolmogorov–Arnold Superposition Layer

The final stage of GIFT-KASTL introduces a structured refinement layer inspired by the Kolmogorov–Arnold Superposition Theorem. Rather than relying only on the raw transformer output, the model applies a second stage of nonlinear functional composition, giving the system a mathematically structured way to refine graph-based predictions.

Mathematical Formulation

Kolmogorov-style superposition as a refinement principle

For a continuous multivariate function \( f : [0,1]^n \to \mathbb{R} \), the Kolmogorov superposition principle gives the representation

\[ f(x_1,\ldots,x_n) = \sum_{j=0}^{2n} \mathcal{O}_j \left( \sum_{i=1}^{n}\mathcal{I}_{ij}(x_i) \right), \]

where \( \mathcal{O}_j \) are one-dimensional outer functions and \( \mathcal{I}_{ij} \) are one-dimensional inner functions. In GIFT-KASTL, the inner and outer functions are chosen as follows:

\[ \mathcal{I}(x)=\arctan(\sinh x), \qquad \mathcal{O}(x)=\sum_{k=0}^{N}\frac{|E_k|}{(k+1)!}\,x^{k+1}, \qquad N < \infty. \]

Here \(E_k\) denote the Euler secant numbers; the odd-indexed Euler numbers vanish, so only odd powers of \(x\) appear in the expansion. These choices define the nonlinear composition mechanism used in the KASTL refinement stage. The superposition is applied to a single scalar input, so each predicted value is refined by one inner-outer composition; this keeps the tuning of predictions simple and avoids unnecessary functional complexity. The pairing of inner and outer functions is specific to this application, and the choice is critical: an ill-suited pair can distort the refined forecasts and introduce large errors.
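The definitions above can be checked numerically in pure Python. The snippet below evaluates the inner and outer functions with the series truncated at \(k = 8\); near the origin the composition is close to the identity, since the outer series truncates the inverse of the inner map.

```python
import math


def inner(x: float) -> float:
    # Inner function: arctan(sinh x).
    return math.atan(math.sinh(x))


def outer(x: float) -> float:
    # Outer series truncated at k = 8; coefficients are |E_k| / (k+1)!
    # with Euler secant numbers |E_0|, |E_2|, |E_4|, |E_6|, |E_8|
    # = 1, 1, 5, 61, 1385 (odd-index Euler numbers vanish).
    return (x + x**3 / 6 + x**5 / 24
            + 61 * x**7 / 5040 + 1385 * x**9 / 362880)


# Sanity check: the truncated composition is near-identity for small inputs.
for x in (0.1, 0.3, 0.5):
    assert abs(outer(inner(x)) - x) < 1e-6
```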

Inner function used in the KASTL layer

Inner function. The inner function introduces bounded nonlinear shaping before aggregation, transforming latent graph features into a structured intermediate form.

Outer function used in the KASTL layer

Outer function. The outer function composes the transformed representation and governs the final output geometry of the refinement stage.

Heatmap of inner function behavior

Inner-function heatmap. A visual summary of how the inner-stage transformation behaves across its input domain.

Heatmap of outer function behavior

Outer-function heatmap. A complementary view of the output-stage composition used by the KASTL refinement layer.

Architecture diagram of the KASTL layer

KASTL architecture. Multiple KASTL units are composed and aggregated through a structured summation-and-adjustment stage to refine the final output of the graph transformer backbone.

Why KASTL

A structured alternative to arbitrary nonlinear layers.

Structured nonlinear refinement

Standard pipelines rely on generic feedforward layers for nonlinear transformation. KASTL replaces this with a structured composition of inner and outer functions, providing a disciplined way to refine predictions after the graph transformer stage.

Theorem-inspired design

The KASTL layer is motivated by the Kolmogorov–Arnold Superposition Theorem, which shows that multivariate functions can be represented through compositions of one-dimensional functions. This gives the architecture a principled mathematical foundation.

Scientific inductive bias

Graph-based scientific data often exhibits structured interactions rather than arbitrary nonlinear behavior. KASTL introduces a functional composition bias that aligns naturally with physical and spatial processes in fracture networks.

Instead of learning arbitrary nonlinear mappings, GIFT-KASTL introduces a structured functional decomposition that aligns with both mathematical theory and scientific modeling needs.

Why KASTL

Why use KASTL instead of a generic nonlinear refinement layer?

KASTL is not introduced as a decorative add-on. It is a structured refinement stage designed to bring theorem-inspired nonlinear composition into the output of the graph transformer. In this project, the layer is motivated by both mathematical form and practical modeling needs in scientific graph data.

Why KASTL figure showing adaptability, ease-of-use, high-end accuracies, stability, probabilistic analysis, and finite approximations

The figure highlights six reasons for adopting the KASTL layer: adaptability, ease of use, high predictive accuracy, stability, probabilistic interpretability, and finite approximation structure. Together, these properties make KASTL a compelling downstream refinement mechanism for graph-based scientific machine learning.

1. Adaptability
2. Ease of use
3. High predictive accuracy
4. Stability
5. Probabilistic analysis
6. Finite approximations
Results

A clean research system with room for stronger empirical additions.

GIFT-KASTL is designed as a two-stage scientific graph learning pipeline. A graph transformer backbone first learns graph-wide representations from discrete fracture network data, and the KASTL layer then refines the final prediction through a structured nonlinear composition inspired by the Kolmogorov-Arnold Superposition Theorem. The current project page focuses on the architectural and mathematical identity of the system, while broader experiment panels and ablations can be layered in as the project evolves.

Core implementation · KASTL layer
import torch
import torch.nn as nn

class KASTLLayer(nn.Module):
    """
    Kolmogorov-Arnold Superposition Theorem Layer
    used as a structured refinement stage after the graph transformer.
    """

    def __init__(self, scale: float = 1.0):
        super().__init__()
        self.scale = scale

    def inner_function(self, x: torch.Tensor) -> torch.Tensor:
        return torch.atan(torch.sinh(x))

    def outer_function(self, x: torch.Tensor) -> torch.Tensor:
        # Truncated outer expansion: sum over k of |E_k| / (k+1)! * x^(k+1),
        # keeping the nonzero terms k = 0, 2, 4, 6, 8 with Euler secant
        # numbers 1, 1, 5, 61, 1385 (note 1385 / 9! = 277 / 72576).
        return (
            x
            + x**3 / 6
            + x**5 / 24
            + 61 * x**7 / 5040
            + 277 * x**9 / 72576
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.inner_function(x)
        y = self.outer_function(z)
        return self.scale * y
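A minimal usage sketch follows. The class is restated in condensed form so the snippet runs standalone, and the backbone outputs here are random stand-ins rather than real model predictions.

```python
import torch
import torch.nn as nn


class KASTLLayer(nn.Module):
    """Condensed restatement of the KASTL layer defined above."""

    def __init__(self, scale: float = 1.0):
        super().__init__()
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = torch.atan(torch.sinh(x))                      # inner function
        y = (z + z**3 / 6 + z**5 / 24
             + 61 * z**7 / 5040 + 277 * z**9 / 72576)      # truncated outer series
        return self.scale * y


# Refine a batch of backbone outputs (random stand-in values here).
backbone_out = torch.randn(8, 1)   # e.g. one scalar prediction per graph
refined = KASTLLayer(scale=1.0)(backbone_out)
assert refined.shape == backbone_out.shape
```

Because the layer is elementwise and parameter-free apart from `scale`, it can be appended to any backbone output without changing tensor shapes.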
Poster and project assets

Use the poster as the deeper technical companion.

The poster provides a broader view of the system design, mathematical formulation, and current experimental material for GIFT-KASTL. The project page acts as the polished entry point; the poster can carry the denser technical details until additional experiments and benchmarks are added here.

Data and scaling

Interactive views of graph size, wall time, and prediction behavior.

These interactive plots summarize the footprint of the fracture-network datasets and the scaling behavior of the pipeline with graph size and connectivity.

Nodes vs edges. Each point is a graph instance, colored by wall time.

Wall time vs nodes. Interactive scaling with graph size.

Wall time vs edges. Scaling with graph connectivity.

Prediction error by horizon. Distribution of prediction error across targets.

Reference

Project reference

@misc{giftkastl,
  title  = {GIFT-KASTL: Graph-In-Fracture Transformer tuned by Kolmogorov-Arnold Superposition Theorem Layer, a novel graph foundation model for scientific porous media data},
  author = {Himanshu Singh},
  note   = {Project website and technical materials},
  year   = {2026},
  url    = {https://himanshuvnm.github.io/gift-kastl/}
}