AI Is a Force Multiplier, Not a Shortcut

AI has fundamentally changed how we create, and it’s something we actively discuss with our team. Not just in terms of how it helps us work, but also what we stand for and how we want to use it. Sort of an AI manifesto, if you will.

AI as a Force Multiplier

In general, it's remarkable what AI has made possible. The range of things we can now create and explore has expanded in ways that would have felt unrealistic just a few years ago.

From that perspective, AI works as a genuine force multiplier. If you have knowledge or a skill, it amplifies your ability to use it. As a developer, you already produce and create things; AI just gives you superpowers on top of that. Honestly, I sometimes have to stop myself from jumping between too many ideas at once, because I still have a job to focus on. But overall, the tools we have right now are genuinely exciting.

Burnout and Content Overload

That said, it also brings real challenges. I know several programmers who are simultaneously working on 15 different apps and backends. That path leads to burnout fast.

You can see the same pattern in content. LinkedIn, blogs, newsletters: there's an endless stream of AI-generated material coming at you every single day, with very little meaningful substance behind it. Whenever I go online to research a topic or evaluate an idea, I typically find 15 different articles that are mostly AI-generated. They look polished at first glance. You get the impression there might be something useful in there. But once you start reading, it becomes clear there's nothing underneath: no real experience, no actual perspective.

Today, anyone can generate an article about almost any topic. I could open an AI tool right now and produce something about biochemistry or genetics, even though I know almost nothing about either.

And even within our own field, you see it everywhere. Go to Reddit, look at SaaS or Kubernetes communities, and every day there are multiple posts that are obviously AI-generated. Real interaction has practically disappeared. It's become spam. And I'm genuinely concerned that it won't be long before it's bots generating content for other bots, because no one will actually be reading it anymore.

What Makes Content Valuable

Whenever I write something, and I do it far more often than I ever expected in my line of work, I try to bring my personal opinions, experience, and some genuine emotion into it.

Blogs, case studies, and posts shouldn't read like technical documentation stripped of context and perspective, simply listing steps: "you do this, you do that, and this is the result." Even documentation should have a point of view. "This is how we do things. You can agree with it or not, but this is our approach."

Whenever we write an article, case study, or even a LinkedIn post, we try to bring a human element into it, and I think that comes through. Even in something like our end-of-year article, I aim to include my perspective, not just a list of events. Not just "we had a hackathon, we did this, we did that," but what it actually meant to us.

That's what creates value. That's what makes content feel alive.

AI as an Enhancer, Not a Replacement

There's an important distinction here. Truthfully, grammar isn't my strong suit. My English is manageable, but my Slovak grammar, not so much. [My Slovak & Czech readers will know exactly what I mean.]

So I primarily use AI to ensure that when I express my opinions, they come out clearly, without mistakes, and structured in a way that's easy to follow. I use it for refactoring, rephrasing, and improving structure.

That's where AI creates real opportunity. It helps you write faster, produce better output, and make sure what you create is readable and well-organized.

AI for Research and Thinking

AI is also genuinely useful for research. When I'm figuring out what to write about or which angle to take on a topic, I use it to explore different perspectives and see what connects and what doesn't.

A good example is something like scaling in Kubernetes. It's a complex topic that you need to approach from multiple directions. You might start from one angle, but there are always other approaches worth considering. For research and structuring ideas, AI is a real asset.

The Role of the Human in Writing

But then there's the actual writing. At that point, you as a human need to sit down and write the article. Because it's not only bots or algorithms reading it. There are real people on the other side.

You should write about what you've done, how you did it, what your experience was, and why you chose one approach over another. That's what makes an article worth reading. That's what gives it value that AI-generated content simply can't replicate.

Content, Value, and Company Perspective

I'm not here to judge what counts as a good or bad use of AI. Everyone gets to decide that for themselves.

But if we as a company want to produce something with real value, it has to be grounded in our expertise and our relationships with customers.

When we write a case study, for example, it's not just about us. It's also about the client. We've worked together and built something together, not just infrastructure but trust and collaboration. Every case study carries that history. It reflects that relationship.

The same goes for posts about our technology or how we work with LARA. It shouldn't just explain what LARA is. It should reflect our experience, how we developed it, and how we arrived at certain decisions. We shouldn't just show the result. We should take the reader through how we got there.

What Should [and Shouldn't] Be Delegated to AI

I've thought about this a lot, and it's genuinely hard to give a clear answer. Things are moving so fast that whatever I say today might be outdated in a year, or even sooner.

AI tools are evolving rapidly, and technically there will likely be very little we can't delegate. But the real question isn't whether we can. It's whether we want to. And as a company, we don't want to delegate everything.

We still sell to humans. And when we do, we want our content written for humans, not for machines.

Responsibility and Reputation

This applies to every company. We all set our own standards. What feels wrong to one organization might be perfectly acceptable to another. There's also a reputational dimension to this. When we deploy infrastructure for an AI project, we want to make sure it doesn't produce harmful content, that it genuinely brings value to end users, and that it's highly scalable and reliable for the client.

It's hard to tell customers you deliver high-quality work and care about relationships, and then put out low-quality, AI-generated marketing content. If your strategy is to generate articles and hope for the best, the disconnect is obvious. It's similar to a company that fights spam while acquiring its own customers through spam. And as AI-generated content continues to grow, it will become harder and harder to find anything genuinely useful online.

The AI Manifesto: Human for Human

That's why we started talking about an AI manifesto. We want to clearly state that the content on our website is human-generated.

When you read a blog post, it was written by a person. It carries real value, for us and for our customers. Yes, we use AI to prepare, improve structure, and refine wording. That's completely fine. But we don't generate content blindly, without supervision or real substance behind it.

The core message is simple: human for human.

The Future and the Risk of Over-Reliance

We believe in AI. It has enormous potential: for humanity, for engineers, and for us as a company. We want to use it wherever it makes sense. But we also recognize the risks. Not just technical ones, but things like AI-generated content that's becoming increasingly difficult to distinguish from reality.

From our perspective, volume matters far less than quality. That standard applies to infrastructure, marketing, business development, and how we communicate.

I do think AI will affect engineering craftsmanship. There will be a meaningful difference between engineers who developed their skills before AI and those who came after. It may make learning harder, because people will lean on AI before building their own understanding. Though perhaps they'll learn how to work alongside it in ways we can't fully anticipate yet.

The real risk is when people rely on AI for everything without ever building their own skills. Because when the tool goes down, everything stops. Historically, development was decentralized: different engineers with different skill levels contributing independently. If that gets centralized around AI and that central point fails, everyone is blocked at once.

That's not a resilient place to be.
