Nobody Has GTM Engineering Figured Out (And That's the Point)

GTM Engineering is better understood as a set of principles than a defined role. That's the thesis of this piece, and a recent conversation sharpened it for me.

I had a conversation last week with a technical founder of an early-stage regulatory SaaS company. They're between 10 and 50 employees, growing fast, and starting to think seriously about top-of-funnel. They wanted to know what I thought about hiring GTM engineering talent, how I think about the role, and what my own process looks like day to day.

We covered a lot of ground, and the conversation reflects where a lot of founders are right now: they know AI is changing how sales and marketing operate, but they're not sure what to actually do about it from a hiring and infrastructure perspective.

Should You Hire a GTM Engineer With a Technical Background or a Sales Background?

My position is pretty straightforward: bias heavily toward sales and business acumen.

We're betting on the frontier models and the tooling around them to keep getting better. Every month, the bar for what you can accomplish without deep technical knowledge drops lower. You don't need to write Apex or SOQL to interact with Salesforce through Claude Code. You need to know what actually needs to happen in the sales process, what data matters, what workflows create leverage.

You can literally just ask the tool if it knows how to solve your problem. Open Cursor or Claude Code and type "do you know how to bulk update Salesforce opportunity stages based on last activity date?" and it'll either do it or tell you it can't. The figuring-out portion of this work is not that hard if you understand what questions to ask. And understanding what questions to ask comes from knowing the sales domain, not from knowing how to code.
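
To make that concrete, here's a minimal Python sketch of the decision logic behind a task like that bulk update. The field names mirror Salesforce's standard Opportunity fields, but the records, the threshold, and the helper function are invented for illustration; in practice the tool would fetch and write the records through the API on your behalf.

```python
from datetime import date, timedelta

# Hypothetical sketch: decide which opportunities are "stale" before a bulk
# stage update. Field names (Id, StageName, LastActivityDate) mirror
# Salesforce's standard Opportunity fields; the records and the 30-day
# threshold are made up for the example.

def find_stale_opportunities(records, today, max_idle_days=30):
    """Return the Ids of open opportunities with no activity in max_idle_days."""
    cutoff = today - timedelta(days=max_idle_days)
    stale = []
    for rec in records:
        if rec["StageName"] in ("Closed Won", "Closed Lost"):
            continue  # already closed, nothing to update
        last = rec.get("LastActivityDate")
        if last is None or last < cutoff:
            stale.append(rec["Id"])
    return stale

opportunities = [
    {"Id": "006A", "StageName": "Negotiation", "LastActivityDate": date(2024, 1, 5)},
    {"Id": "006B", "StageName": "Closed Won", "LastActivityDate": date(2024, 1, 5)},
    {"Id": "006C", "StageName": "Discovery", "LastActivityDate": date(2024, 2, 20)},
]

print(find_stale_opportunities(opportunities, today=date(2024, 3, 1)))
```

The point isn't the code itself. It's that the hard part is knowing that closed deals should be excluded and that a missing activity date counts as stale, which is sales knowledge, not engineering knowledge.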

The jobs-to-be-done knowledge is what matters. And there's a trap I think a lot of people fall into.

There's a category of AI-powered work that I'd describe as dopamine-hit workflows. Building a CRUD app with a sleek frontend by prompting Cursor is genuinely cool. It feels productive. You get that rush of "look what I just made." But does it help close deals? Does it drive pipeline? Almost certainly not.

Someone with deep sales experience will naturally gravitate toward workflows that solve actual business problems because they've lived those problems. They know what the bottleneck is at each stage of the funnel. Someone who's primarily a technical builder, without that sales context, is going to be drawn toward building things that are impressive but not necessarily impactful. They'll optimize for the build, not the outcome.

How Realistic Is It to Get Salespeople Using Git and Claude Code?

I had two answers.

First, the mechanics are not nearly as intimidating as people assume. Using GitHub is not a mysterious process: you clone a repository, and the files land where they need to go. With Claude Code, it's even simpler. You paste a repo URL and tell it to download. Imagine onboarding a new rep: they clone a company repo that has all the pre-built skills, slash commands, and context files baked in. The setup is a five-minute conversation with the tool, not a two-week bootcamp.

Second, and more importantly, the best operators and sellers are already using tools like this. Whether it's Claude Code, ChatGPT, or any of the other options out there, the people who are going to thrive in these roles are already experimenting. So instead of asking "how do we train people to use these tools," the better question is "how do we hire people who are already using them?"

It's just a question you ask in an interview. What have you built with AI tools? How are you using them in your current workflow? If the answer is "nothing," that's a hard no for me. Not because everyone needs to be a power user on day one, but because if someone isn't using any of these tools at all, we'd be speaking a completely different language. The gap between "I've been experimenting and here's what I've tried" and "I haven't touched any of this" is not a training gap. It's a curiosity gap. And curiosity is not something you can teach.

How Should You Handle Permissioning and Security When Plugging AI Into Sales Tools?

Security and access control are real questions once you start plugging AI into production systems, and I want to be honest about the limits of my knowledge here because there's a lot of hand-waving in this space.

Salesforce provides a decent mental model. When you use the Salesforce CLI, you only have access to the permissions that your user profile grants by default. If you're a sales rep, you see what a sales rep sees. If you're an admin, you see what an admin sees. The tool doesn't elevate your privileges just because you're accessing data programmatically.

That's roughly how I think about plugging APIs into Claude Code in an ideal state. Each user has their own API key or MCP connection. The applications reflect those same permission boundaries in their API access. Your AI assistant can only touch what you'd be able to touch yourself.
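
A rough sketch of that boundary, with invented role names and grants: the AI-facing client is constructed from one user's permissions, so every call is checked against what that user could already do. A real implementation would delegate the check to the platform's own permission model rather than duplicating it.

```python
# Illustrative permission model: the client is scoped to a role at
# construction time, and every read/write is checked against that role's
# grants. Roles, objects, and grants here are invented for the example.

ROLE_GRANTS = {
    "sales_rep": {"Opportunity": {"read"}, "Contact": {"read", "write"}},
    "admin": {"Opportunity": {"read", "write"}, "Contact": {"read", "write"}},
}

class ScopedClient:
    def __init__(self, role):
        self.grants = ROLE_GRANTS[role]

    def _check(self, obj, action):
        if action not in self.grants.get(obj, set()):
            raise PermissionError(f"{action} on {obj} not permitted for this user")

    def read(self, obj, record_id):
        self._check(obj, "read")
        return {"Id": record_id}  # stand-in for a real API call

    def write(self, obj, record_id, fields):
        self._check(obj, "write")
        return True  # stand-in for a real API call

rep = ScopedClient("sales_rep")
rep.read("Opportunity", "006A")           # allowed: reps can view pipeline
try:
    rep.write("Opportunity", "006A", {})  # blocked: reps can't edit opportunities
except PermissionError as e:
    print(e)
```

The design choice worth noting is that the AI never holds a privileged service account; it inherits the human's scope, so a prompt gone wrong can't do anything the human couldn't.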

I don't have all the answers here, especially for regulated environments. I'd be dishonest if I presented it as a solved problem.

Should You Build an ETL Pipeline for Sales Data or Just Give AI Tools Direct API Access?

The specific scenario: aggregating data from your CRM, call recorder, and sales engagement platform into something like Snowflake, versus giving Claude Code direct API access to each tool.

I haven't personally built that infrastructure, so I can't speak from direct experience. But I can see multiple reasons it would be valuable.

Structured data in a warehouse is better suited for AI consumption. You use fewer tokens. You have more flexibility in how you query and combine datasets. And critically, it creates a buffer between the AI tool and your production systems. When you plug Claude Code directly into Salesforce with full read/write access, the potential for unintended consequences is obvious. Porting your data to a dedicated system of record designed for this kind of analysis could prevent a lot of problems before they start.

The tradeoff is that you're adding another layer between your system of record and your system of action. Your AI reads from Snowflake, but then it still needs to go act in Salesforce, or your engagement platform, or wherever the work actually happens. And if there's any delay in the mirroring between those systems, the information your AI is working with may not be current. It could be reading stale pipeline data, outdated contact info, or activity records that haven't synced yet. I haven't figured out the right way to handle that latency problem, but it's something anyone considering this architecture needs to think through carefully.
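
One possible mitigation, sketched here with placeholder table names and an assumed sync log: gate any action on a freshness check, and fall back to the live API when the mirror is behind. This doesn't solve the latency problem, it just makes the staleness explicit instead of silent.

```python
from datetime import datetime, timedelta

# Sketch of a freshness gate: before the AI acts on warehouse data, check
# when that table was last mirrored from the source system. Table names,
# timestamps, and the 6-hour threshold are placeholders.

SYNC_LOG = {
    "sf_opportunities": datetime(2024, 3, 1, 8, 0),
    "call_transcripts": datetime(2024, 2, 28, 23, 0),
}

def is_fresh(table, now, max_lag=timedelta(hours=6)):
    """True if the mirrored table synced within max_lag of now."""
    last_sync = SYNC_LOG.get(table)
    return last_sync is not None and (now - last_sync) <= max_lag

now = datetime(2024, 3, 1, 12, 0)
print(is_fresh("sf_opportunities", now))  # synced 4 hours ago
print(is_fresh("call_transcripts", now))  # synced 37 hours ago: use live API
```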

Is Anyone Actually Hiring GTM Engineers With a Clear Job Description?

This is the part of the conversation that I keep coming back to. The founder asked whether anyone has definitively scoped what GTM engineering is, what the responsibilities should be, how much you should pay someone. The honest answer is no. Anyone who claims otherwise is selling something or trying to prop themselves up as a thought leader.

I'll use my own progression as an example. Early last year, I started using Cursor to experiment with building apps and interfaces. By the second half of the year, I realized Claude Code was better at the things I needed most, which weren't building applications at all. They were executing functions. Go build me this report. Scrub this list against these criteria. Upload this data to this system. Pull the contact information for everyone who attended this event and create records in Salesforce.

Cursor still has advantages for building static applications and interfaces because the VS Code fork gives you a much better visual development experience. But what I needed in my day-to-day was an assistant that could operate across all my tools without me logging into twelve different platforms. Claude Code turned out to be the better fit for that.

The point of that story isn't "use Claude Code." It's that the answer to "what tools should a GTM engineer use?" is "it depends on what you're trying to accomplish," and what you're trying to accomplish will shift as the tools evolve and as you learn what's possible.

People ask whether GTM engineers should learn Claude Code or just stick with Clay and SmartLead. I think that question misses the point. I think of GTM Engineering as a senior IC role, analogous to a staff engineer on a software development team. These people should have wide leverage, latitude, and access across the business. This is not an entry-level position.

If all you need is someone to send a high volume of outbound emails, there are talented, cost-effective people all over the world who do that work well. There's nothing wrong with that function, and those folks add real value. But it's not economically sensible to pay someone $150k a year to do it. At that compensation level, you want someone operating at a higher altitude.

Is GTM Engineering a Role or a Set of Principles?

GTM Engineering is better understood as a set of principles than a defined role. The core principle: apply software solutions to sales problems. Not "go buy a tool." Where can we insert code, no-code workflows, AI, and automation to create real leverage in the revenue function?

What that looks like depends entirely on the stage of company you're at.

At a very early stage (under 20 people), you want a generalist. Someone who can run outbound, close deals, stand up a Salesforce instance, build automations, and has broad technical proficiency across a range of tools. Their job is to build the foundation that scales as the business grows. They need to be comfortable doing hands-on IC work while also thinking about what systems will hold up at 10x the current volume.

At a larger organization, the skill set shifts. You need someone who can leverage political capital internally to get things done. This is a gap I've noticed in a lot of GTM engineering talent: they come from agency backgrounds or junior-level positions where they could just build things independently. In a larger org, the ability to navigate stakeholders, get buy-in, and drive change management is just as important as the technical chops.

And if you hire someone for this kind of role, you need to give them the latitude to make changes. Otherwise you're going to be disappointed, because you've hired someone whose entire value proposition is transforming how things work, and then you've put them in a box.

What Does It Actually Mean to Build the "AI Brain" of a Sales Organization?

As the conversation went on, the founder's questions kept getting more abstract, and so did my answers. We started by talking about hiring a GTM engineer, but by the end we were really talking about something bigger, and about where my own work has been heading.

I've been spending less time on individual tools and workflows, and more time on what I'd call the AI brain of a sales organization. What does the architecture look like? What access does the brain need? What systems does it plug into? What should it be able to do autonomously, and where does it need a human to intervene?

And once it processes information and generates insights, where do those outputs go? Because an AI that can analyze your pipeline is interesting. An AI that can analyze your pipeline and then draft the follow-up emails, update the CRM records, flag the at-risk deals, and brief the AE before their next call, all without anyone logging into anything, that's a fundamentally different thing. That's what I mean by "where do the thoughts go."
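
Here's a toy sketch of that routing layer, with invented action names: each insight the brain produces is dispatched to an action, and anything that mutates a system of record is queued for human review rather than executed autonomously.

```python
# Toy routing layer for "where do the thoughts go": safe actions run
# unattended, risky ones wait for approval. All action names are
# illustrative, not a real taxonomy.

AUTONOMOUS = {"draft_followup", "brief_ae"}    # read-only or reversible
NEEDS_REVIEW = {"update_crm", "flag_at_risk"}  # a human approves first

def route(insights):
    """Split insights into actions executed now vs. queued for review."""
    executed, review_queue = [], []
    for insight in insights:
        action = insight["action"]
        if action in AUTONOMOUS:
            executed.append(action)
        elif action in NEEDS_REVIEW:
            review_queue.append(action)
        else:
            raise ValueError(f"unknown action: {action}")
    return executed, review_queue

done, pending = route([
    {"action": "draft_followup"},
    {"action": "update_crm"},
    {"action": "brief_ae"},
])
print(done, pending)
```

Where you draw the line between the two sets is exactly the read-access versus write-access question below, and I'd expect it to move as trust in the system grows.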

What's the right balance between read access and write access? How do you give an AI enough context to be useful without giving it enough rope to cause damage? How do you build institutional knowledge into the system so it gets smarter as the team uses it, rather than starting from scratch every session?

I don't have clean answers to most of these questions. But I think they're the right questions. The conversation about GTM engineering, what tools to use, who to hire, how to scope the role, is really a conversation about how your organization relates to AI as an operational layer. The specific tools and titles will keep shifting; the principles underneath them are what's worth getting right.