Technologies
23/2/2026

%%LARA 9.0 Delivers:%% GPU Support, OpenTelemetry & Faster Terraform

Everyone who has built a platform on top of Kubernetes knows it is not trivial. You are orchestrating dozens of moving parts - control plane versions, addons, ingress controllers, observability stacks, IAM integrations, networking layers. All of them evolve independently. All of them introduce change.

Now imagine maintaining that not for a single cluster, but across many customers with different workloads, compliance requirements, and scaling patterns.

That is why every LARA release requires focused engineering effort. Platform engineering is not just about adding features. It is about tracking upstream changes, evaluating impact, testing compatibility, and ensuring that upgrades remain predictable.

Changes like ingress controller deprecations or shifts in Helm chart maintenance and licensing models are not theoretical concerns. They directly affect architecture decisions, upgrade paths, and operational cost. Ignoring them is not an option.

Continuous investment in LARA ensures that these ecosystem shifts are handled at the platform level - once & properly - instead of being solved repeatedly by every team on their own.

Labyrinth Labs LARA 9.0.0 

This is a release shaped by how modern platform teams actually work. As platforms grow, the real wins come from smoother workflows, clearer signals, and infrastructure that quietly does the right thing. This release focuses exactly there.

We upgraded core foundations, unlocked new capabilities in Kubernetes, and invested heavily in developer experience, observability, and traffic efficiency. The result is a platform that scales more naturally, costs less to operate, and is easier to reason about day to day.

Labyrinth Labs LARA 9 brings smarter routing, faster Terraform workflows, deeper observability integrations, and new options for traffic management, security, and compute. Not flashy for the sake of it. Practical improvements that add up once your platform is under real load.

With this release, named Byenami, we finalized the transition away from Bitnami images and Helm charts, which are no longer distributed freely.

Kubernetes 1.33 and a Smarter Ingress Path

We upgraded all EKS clusters to Kubernetes 1.33 (Octarine). That alone brings the usual batch of stability, security fixes, and upstream improvements. But the real value comes from what we built on top of it. A few select updates:

  • Sidecar containers (Stable)
  • In-place resource resize for vertical scaling of Pods (Beta)
  • New configuration option for kubectl with .kuberc for user preferences (Alpha)
  • Backoff limits per index for indexed Jobs (Stable)
  • Job success policy (Stable)
  • Subresource support in kubectl (Stable)
  • Multiple Service CIDRs (Stable)
  • Dynamic Resource Allocation (DRA) for network interfaces (Beta)
  • nftables backend for kube-proxy (Stable)
  • Configurable container restart delay (Alpha)
  • Configurable tolerance for HorizontalPodAutoscalers (Alpha)
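
To illustrate one of the items above: native sidecar containers (stable in 1.33) are declared as init containers with restartPolicy: Always. The sketch below uses hypothetical names and images; the pattern itself is the upstream Kubernetes one.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-with-sidecar   # hypothetical workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      initContainers:
        # restartPolicy: Always turns this init container into a native
        # sidecar: it starts before the main container and keeps running
        # alongside it for the Pod's whole lifetime.
        - name: log-shipper
          image: fluent/fluent-bit:3.0
          restartPolicy: Always
      containers:
        - name: app
          image: nginx:1.27
```

Unlike the old "sidecar as a regular container" pattern, the kubelet now guarantees ordering: the sidecar is up before the app starts and is terminated after it exits.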

We immediately leveraged topology-aware routing to optimize the ingress path into clusters. Traffic now prefers the shortest, cheapest route instead of bouncing unnecessarily across availability zones. That means lower Cross-AZ costs and more predictable latency, especially for clusters that sit behind shared ingress layers or are accessed from other environments.
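
Upstream Kubernetes exposes this preference through the trafficDistribution field on Services. As a minimal sketch (service name and ports are hypothetical; the exact wiring in LARA spans Route53 and the load balancer layer as well):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: checkout   # hypothetical service
spec:
  selector:
    app: checkout
  ports:
    - port: 80
      targetPort: 8080
  # Prefer endpoints in the same zone as the client; fall back to
  # other zones only when no local endpoint is available.
  trafficDistribution: PreferClose
```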

Alongside the control plane upgrade, we refreshed both EKS managed addons and LARA-managed addons to their latest compatible versions. Karpenter now supports spot-to-spot consolidation, which further squeezes waste out of spot-heavy workloads. We also migrated external-secrets to the v1 API, removing a long-standing source of noise and future-proofing secret management going forward.

Ingress Cross-AZ Traffic Optimization, Done Properly

Ingress traffic optimization is easy to talk about and easy to get wrong. In LARA 9 we reworked the ingress path using topology-aware routing across Route53, AWS load balancers, and the Ingress NGINX controller.

The result is simple. Traffic enters the cluster closer to where it is served, Cross-AZ hops are reduced, and you pay less for data transfer that adds no value. This is already live, and we are not done. Support for additional ingress and Gateway API controllers like Traefik and Envoy Gateway is coming next.

Terraform Plans That Respect Your Time

If you have ever run terraform plan on a large state and gone for coffee while it figured itself out, this part is for you.

We focused hard on improving the base state Terraform experience. Plans are now significantly faster, especially for larger installations. Output is more concise, easier to read, and far less likely to scare you with changes that are not actually changes. We also stabilized base state outputs to reduce unnecessary drift and plan noise.

This is not the end of the story. Next up is even clearer diffs and better signal-to-noise across more parts of the LARA Terraform stack.

OpenTelemetry, Your Way

Observability is not a checkbox. It is the ability to understand what your system is doing without guessing.

In LARA 9 we introduced platform-level support for OpenTelemetry across EKS workloads. The goal was not to force a new stack, but to make telemetry emission standardized, configurable, and consistent with the rest of the platform.

Here is what that means in practice.

Workloads running in EKS can now emit logs, metrics, and traces using OpenTelemetry, with routing controlled via LARA-level configuration. We introduced new configuration parameters and labels that allow teams to explicitly opt in to OpenTelemetry signal collection per workload or per namespace.

At the platform layer, LARA deploys and manages the necessary OpenTelemetry components so that:

  • telemetry signals can be collected inside the cluster
  • signals can be forwarded either to the LARA observability stack or to an external backend
  • routing is configurable without rewriting application code
  • signal pipelines remain consistent across environments
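
The exact components and parameters are managed by LARA, but conceptually the routing choice boils down to an OpenTelemetry Collector pipeline like the sketch below. The endpoints here are hypothetical placeholders, not LARA defaults:

```yaml
# Minimal Collector pipeline: receive OTLP from workloads, batch,
# then export to an in-cluster backend or an external one.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
processors:
  batch: {}
exporters:
  # In-cluster observability stack (hypothetical endpoint).
  otlphttp/internal:
    endpoint: http://tempo.observability.svc:4318
  # External backend (hypothetical endpoint).
  otlphttp/external:
    endpoint: https://otel.example.com:4318
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/internal]
```

Switching a pipeline's exporters is a configuration change, which is exactly why routing can change without touching application code.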

This design keeps things backward compatible. Existing workloads that rely on the current observability setup continue to work unchanged. OpenTelemetry is an additive capability, not a breaking change.

Teams can run both models in parallel:

  • continue using the existing logging and metrics pipelines
  • gradually adopt OpenTelemetry where it makes sense
  • or fully standardize on OpenTelemetry across services

From an operational perspective, this also aligns application telemetry with platform telemetry. Instead of separate worlds for cluster metrics and app traces, you get a unified, standards-based approach that scales with complexity.

The important part is not just that OpenTelemetry is “supported”. It is integrated in a way that makes adoption incremental and predictable.

Today, this enables clean telemetry emission and flexible routing. In future releases, we will build deeper integrations into the LARA observability stack, making correlation, enrichment, and cross-signal analysis even tighter.

Observability should evolve with your system. LARA now gives you a cleaner path to do exactly that.

Atlantis Integration, Now in Preview

We added Atlantis integration to automate Terraform workflows directly from pull requests. Plans and applies become part of the review process, collaboration improves, and infrastructure changes stop living in someone’s terminal history.

This is a preview feature. We are testing it internally, gathering feedback, and refining the experience before making it generally available. If you care about Terraform automation, this is one to watch closely.
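
For context, Atlantis is driven by a repo-level atlantis.yaml. A minimal sketch with a hypothetical project layout (the LARA integration's defaults may differ):

```yaml
# atlantis.yaml at the repository root.
version: 3
projects:
  - name: base-state          # hypothetical project name
    dir: terraform/base       # hypothetical directory
    autoplan:
      when_modified: ["*.tf"]
      enabled: true
```

With this in place, opening a pull request triggers an automatic plan, and commenting `atlantis apply` on the PR applies it after review.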

Envoy Gateway Joins the Platform

LARA 9 introduces Envoy Gateway as an alternative ingress solution with native Gateway API support and multiple proxy deployments.

Ingress is not going away. Ingress and Gateway API can happily coexist in the same cluster, and you can choose what fits each workload best. We are also very aware of the Ingress NGINX controller deprecation timeline and are actively working on a clean migration path for existing users.
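
For readers new to Gateway API, routing splits into a Gateway (the listener) and HTTPRoutes (the rules) instead of a single Ingress object. A minimal sketch; names are hypothetical and the gatewayClassName depends on how Envoy Gateway is installed:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway              # hypothetical
spec:
  gatewayClassName: envoy-gateway   # depends on your installation
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route                # hypothetical
spec:
  parentRefs:
    - name: web-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: app
          port: 8080
```

This split is what lets platform teams own the Gateway while application teams own their own routes.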

When an Ingress Controller Becomes a Strategic Risk

Ingress NGINX has been one of the most widely adopted ingress controllers in Kubernetes. For many companies, it quietly became the default traffic layer.

On November 11, 2025, the Kubernetes project officially announced its retirement. Maintenance, bug fixes, and security patches will end in March 2026. After that, no new releases and no CVE fixes.

That is not a cosmetic change. It means:

  • A critical traffic component becomes unsupported
  • Security exposure increases over time
  • Future Kubernetes upgrades may break compatibility
  • Migration becomes unavoidable, not optional

And migration is not a Helm upgrade.

Ingress controllers sit directly in the request path. They terminate TLS, apply routing logic, enforce annotations deeply embedded in application manifests, and often integrate with authentication or rate limiting setups. Replacing them requires configuration rewrites, traffic testing, rollout planning, and careful validation.

For many companies, this becomes a sudden platform initiative they did not budget for. This is exactly where LARA creates leverage.

We track ecosystem changes like this early. We introduce supported alternatives such as Envoy Gateway and Gateway API. We design migration paths and validate coexistence strategies. We absorb the architectural shift at the platform level.

Our customers do not need to assemble a migration task force or freeze product work to redesign their ingress layer. That is the advantage of LARA as a managed platform. When foundational components shift, it becomes our engineering problem - not theirs.

Cloudflare Zero Trust for Secure Access

We replaced Firezone with Cloudflare Zero Trust for access to internal services. This brings stronger identity provider integration and a more flexible security model out of the box.

WireGuard-based VPN access is still available and remains the baseline option. Zero Trust is an addition, not a replacement, giving teams more choice in how they secure internal access.

EKS NVIDIA GPU Support - AI and ML Ready by Design

Running GPU workloads on Kubernetes follows a different operational model than standard CPU workloads. Even on EKS, where the control plane is managed and AWS provides optimized GPU AMIs, there are additional layers that must be configured correctly.

GPU support in Kubernetes requires:

  • NVIDIA drivers compatible with the underlying AMI
  • the NVIDIA device plugin to expose GPUs as schedulable resources
  • proper node labeling and taints
  • workload scheduling constraints to prevent inefficient placement
  • GPU-level metrics collection beyond standard node telemetry

Unlike CPU and memory, GPU resources are discrete and expensive. Scheduling mistakes are not minor inefficiencies - they directly translate into cost or stalled workloads. At the same time, standard node metrics are not enough to understand GPU utilization, memory pressure, or thermal and power conditions.
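
Under the standard device-plugin model, a workload requests whole GPUs and is steered onto GPU nodes via taints and tolerations. A sketch of the pattern; the image tag is hypothetical, and the taint key is a common convention that may differ per setup:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: trainer            # hypothetical workload
spec:
  containers:
    - name: trainer
      image: nvcr.io/nvidia/pytorch:24.08-py3   # hypothetical tag
      resources:
        limits:
          # GPUs are requested in whole units via the device plugin;
          # they cannot be overcommitted like CPU or memory.
          nvidia.com/gpu: 1
  # A matching NoSchedule taint on GPU nodes keeps ordinary
  # workloads off the expensive hardware.
  tolerations:
    - key: nvidia.com/gpu
      operator: Exists
      effect: NoSchedule
```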

LARA 9 adds structured, production-ready GPU support for EKS clusters.

We integrated:

  • NVIDIA device plugin for proper resource discovery and scheduling
  • DCGM exporter for detailed GPU metrics
  • dashboards and alerts aligned with the rest of the LARA observability stack
  • support for GPU-enabled node groups following standard LARA scaling patterns

This ensures that GPU workloads behave as first-class citizens in the cluster. They are schedulable, observable, and manageable using the same operational model as the rest of your platform.

EKS provides a solid foundation for GPU workloads. LARA builds the operational layer on top of it - making accelerated compute predictable, visible, and aligned with platform best practices.

Observability Upgrades Across the Stack

We migrated from Bitnami to the official community Prometheus Helm chart and upgraded Prometheus to 3.1. Grafana is now at 12.3 and Tempo at 2.8. These upgrades bring better performance, long-term support, and fewer surprises when upstream changes land.

More Database Options

RDS Instance support is now available alongside RDS Aurora and RDS Clusters. This gives teams more flexibility when choosing the right database shape for their workloads without bending the platform to fit.

Change Summary: A Quick Look Back

In this cycle we delivered:

  • 49 features
  • 17 bugfixes
  • 11 chores and internal improvements

On paper, that is just a release delta.

In reality, it reflects something more important - consistent, deliberate investment into LARA as a product.

LARA is not a one-off platform template. It is not a frozen reference architecture. It is a living system that evolves with Kubernetes, AWS, observability tooling, security models, and real-world customer usage. Every release is part of a long-term compounding effect.

Labyrinth Labs LARA 9 is already rolling out to our customers - starting with dev and staging environments, and then gradually into production. Most upgrades are fully handled by our platform team, but if you manage LARA yourself, the migration guide is ready and waiting for you.

If you’re not on LARA yet, but you’re curious what a platform like this could do for your business - whether it’s faster delivery, less operational noise, or just happier DevOps teams - let’s talk.

We’re always up for a conversation about real challenges, real solutions, and how a solid platform can give your developers more time to build and your business more room to grow.

Reach out at lablabs.io/contact

Let’s build something reliable together.
