
We're building the culture of AI work right now

Right now, every way we talk about AI at work is helping define what ownership, competence, and judgment look like in the AI era. We should be more deliberate about that.

Across the industry, engineers are figuring out how to work with AI. How to use it for brainstorming, for drafting, for research, for code. How much to trust it. When to override it. How to fold it into workflows that existed before any of this showed up. That part is obvious, and everyone is talking about it.

What fewer people are paying attention to is the other thing that’s happening at the same time. We’re also building a culture around how we collectively perceive, talk about, and judge work that involves AI. Every casual phrase we attach to the things we share is helping set norms. Norms about ownership. About what counts as real work. About whether AI involvement is something to apologize for or something to own.

Most of this culture-building is happening by accident. That’s exactly why it matters.

Everyone is navigating this differently

If you’ve shared AI-assisted work recently, you’ve probably thought about how to frame it. And there’s no established playbook. The norms are genuinely unsettled.

Some people say “Claude helped with this” because their workplace expects disclosure. Some say it because they want to be transparent about their process. Some say it because they’re excited about what AI can do and want to show it off. Some say it because hedging feels safer than ownership while the social rules are still being written.

All of that is understandable. Nobody taught us how to talk about this. But some of those improvisations are becoming habits, and some of those habits are shaping the culture in ways worth examining.

The pattern I keep noticing

I keep seeing experienced engineers share work with a little disclaimer attached to it.

“This was generated by Codex.”

“Claude helped with this.”

“Most of this came from AI.”

Sometimes the phrasing is transparent and informative. But often, something else is going on. The disclaimer isn’t really explaining the process. It’s creating distance between the person and the work. Quietly asking the reader to lower their standards.

It can also run in the opposite direction. I’ve seen senior engineers mention AI not to hedge, but almost to flex. The subtext is less “don’t judge me too hard” and more “I’m so secure in my own competence that I can openly admit AI helped.” That’s not defensive. But the sentence is still doing social work around the author rather than clarifying the artifact.

Whether the motive is insecurity, transparency, status, or genuine uncertainty, the phrase “AI helped with this” often ends up doing cultural work beyond the literal words.

Is AI use something to apologize for? Is it a way to lower standards? Is it an excuse to stop thinking? Is it a workflow detail? Is it evidence of modern competence? The answer depends on the norms we set now, and those norms are being built out of a thousand little repeated moves that nobody bothers to question.

We already knew how this worked

The easiest way to see what I mean is to remove AI from the story.

In the pre-AI world, I might work on something with a coworker. Maybe I bounced ideas off a peer. Maybe a junior engineer drafted the first version. Maybe I pair-programmed with Mike and he was typing for most of the session.

When I go to post the result, it sometimes makes sense to mention the other person. Credit them for a real contribution. Acknowledge collaboration. That’s normal.

But what I would not do is mention them as a way of hedging responsibility.

I would not say “Mike wrote most of this code” in a tone that really means “so if this is bad, direct some of that at Mike.” I would not say “Sarah came up with a lot of this” in a tone that means “please don’t judge me too hard.”

That would be weak. I’m still the one posting it. I’m still the one saying: this represents something I’m willing to put forward.

Engineers have always worked collaboratively. We use docs. We use Google. We use Stack Overflow. We use examples. We ask coworkers. We brainstorm in chats. We sketch a bad version and get feedback. We review each other’s pull requests. We pair. We copy a pattern from another codebase. We have one person navigate while another types. We argue through trade-offs. We reformulate an idea six times before landing on the version that works.

None of this has ever threatened the basic idea of ownership. The artifact belongs to the people who stand behind it.

AI is now part of that same family of collaboration. Sometimes it plays a small role, sometimes a large one. Either way, I think its involvement should be treated the same way we treat other forms of collaboration: mention it when it’s relevant, but don’t use it to quietly disclaim your own responsibility.

Human credit and AI “credit” are different things

The coworker analogy works, but it also reveals something deeper.

When I mention a human collaborator, there’s often a real moral and social reason for it. They deserve recognition. They may care whether their work gets erased. There’s a relationship there, and professional visibility and fairness are at stake. If I pair-programmed with Mike and his contribution was substantial, there’s a genuine social reason to say so. Mike is a person. He has feelings. He’s building a career. Not crediting him would be a kind of erasure.

When I mention Claude or Codex, that logic doesn’t carry over.

Claude doesn’t care. Claude is not hoping this work helps it get promoted, and it’s not a colleague whose effort I’m morally obliged to recognize.

So when AI “credit” looks like human credit but doesn’t serve the same purpose, it’s worth asking what purpose it is serving. Often, it signals “this was not fully me,” which easily turns into “please calibrate your judgment accordingly.”

Where I think we should go: say what you mean

Mentioning AI is sometimes exactly the right thing to do. The goal isn’t silence about AI use. Silence has its own problems, and a culture where everyone quietly uses AI but never talks about it would be a different kind of bad. The goal is precision.

Maybe you want to show that these tools are no longer toys. Say that. “I used Claude heavily here because I wanted to show what this kind of collaboration can produce.” That’s a real statement. Or maybe you’re explaining workflow: “This was vibe-coded.” “The model generated most of the first pass and I refined it.” Those are useful descriptions. Or maybe you’re communicating the status of the artifact: “This is a quick PoC, not production code” is clearer and more honest than “AI wrote this lol.”

But be precise. “I’m highlighting AI capability” is not the same as “this is not production-quality.” “I collaborated heavily with Claude” is not the same as “please lower the standard you apply to me.”

Those are different claims. Say the one you actually mean.

A lot of people say “AI wrote this” when what they actually mean is something else. “This is rough.” “This is a PoC.” “I’m showing what AI can do.” “I moved quickly and didn’t polish every edge.” “The model made a big contribution.” “I’m excited that AI is capable of participating meaningfully in this kind of work.”

All of those are valid things to say. So say them.

The moment “AI wrote this” becomes a vague proxy for all of them, it stops being informative and starts being social insulation. That’s when the phrase begins doing cultural work beyond what anyone meant by it.

And yes, sometimes AI provenance is itself useful information. AI has specific failure modes. Reviewers might want to know. Fair enough. But if what you’re really trying to say is “I’m not sure this is fully reliable,” the answer is more review, not a disclaimer. The disclaimer doesn’t fix the problem. It just transfers the burden to the reader.

The norms are still wet cement

We are still early in AI-era work. The norms are still forming. They will harden. And they’re being shaped right now by the small, repeated ways we talk about AI involvement in our work.

If vague AI attribution becomes the default, I think we risk teaching people two bad lessons at once. The first: that AI-assisted work is something you should subtly distance yourself from. AI is already too useful and too normal for that line to hold. The second: that the machine’s involvement dissolves your responsibility. That one teaches people to stop thinking, stop editing, stop reviewing, and stop owning.

This matters because habits of speech become habits of judgment. If we normalize vague AI disclaimers, we teach people that AI use weakens legitimacy, that responsibility becomes blurry once a model is involved, and that precision about process is optional. Those are bad norms, and they will compound.

There’s also something that happens to the person doing the work. If every time you use AI you mentally frame the output as something slightly outside yourself, you make it easier to skip the hard part, which is judgment. You become a courier instead of an editor. A presenter of outputs instead of an owner of decisions.

The real value of AI is not that it lifts responsibility off you. It’s that it changes where your effort goes: less raw drafting, more evaluation, direction, and judgment. If people don’t internalize that, they will use AI in the laziest way possible and wonder why the results feel hollow.

And we are in the middle of redefining what competence looks like. In the old world, people could pretend the highest form of competence was doing everything manually. That was never fully true, but AI makes the myth much harder to defend. The question now is: what counts as real skill? I think the answer is sound judgment over increasingly powerful tools. Directing, evaluating, integrating, and deciding. The competent person is the one who can guide AI toward something worth standing behind, not the one who avoids it.

The healthier norm is simpler. AI is a legitimate collaborator, its contribution can be large, disclosure should be precise, and judgment stays with the human.

Own the work

If you think something is too undercooked to be associated with you, don’t post it. If you think it’s worth posting, own the level at which it should be judged. Ownership doesn’t mean pretending everything is finished. It means standing behind what you chose to share.

If you read the work, approved it, and chose to post it under your name, own it. Whether the AI contributed five percent or ninety-five percent, what matters is that you adopted it.

Treat AI more like a coworker and less like a contamination warning.