Build vs Buy AI Tooling: A Practical Comparison for Mid-Market Firms
How to evaluate no-code platforms, agencies, in-house hires, and solo specialists when building custom software and AI tooling for an operating business.
Updated April 2026 · Written by Amiya Sekhar, founder of Internal Systems
Overview
Mid-market firms (roughly US$500,000 and up in annual profit) have a few realistic paths to custom software and AI tooling: no-code platforms, full-service agencies, an in-house hire, or a solo specialist. Each path has a real failure mode. The right choice depends on how custom the workflow is, how much domain context the build requires, and how willing the firm is to own the system after delivery.
This comparison is written from the perspective of someone who has shipped this kind of AI tooling and has extensive experience in software development. It is not a marketing piece for any single option. Each section below names the strongest version of the argument for that path and the failure mode that path tends to produce.
The four options
1. No-code platforms (Bubble, Retool, Zapier, n8n, Airtable)
The no-code label is partly a marketing frame. There is still code underneath. The user is just constrained to a visual layer that the platform exposes, which is fine until the workflow needs something the platform did not anticipate.
No-code is genuinely useful for a narrow set of cases: a basic internal form, a Zapier flow stitching two SaaS tools together, a weekend prototype that proves a concept before anyone writes real code. Past that, the abstraction breaks. Custom logic, real data modeling, integrations with anything that does not have a clean API, audit-grade lineage, performance at scale, none of it fits cleanly into a no-code platform.
Using no-code for serious software is like using a fork to pick up a single grain of rice: it can sort of be done, but chopsticks or your fingers are better suited to the job. The pattern is consistent: a no-code tool gets to 70% of the workflow in two weeks and then sits at 70% for nine months, because the last 30% is the part that actually matters and the platform cannot do it.
2. Full-service agencies
Choose carefully. The agency landscape for AI and custom software is currently full of operators who lead with technical jargon, publish case studies with numbers that do not survive scrutiny, and charge $80K to $200K for builds that are mostly prompt engineering on top of off-the-shelf APIs. The pattern is familiar: heavy front-end sales motion, light back-end domain understanding.
A real agency engagement involves people who can sit with your operators, understand how the work actually gets done day to day, and build software that fits how the business runs. Most agencies serving this market do not have that depth. They have a sales team and a delivery team that are structurally separated, and the people scoping the project are not the people building it.
The way to evaluate an agency is to ask three questions: who specifically will write the code, can I see a codebase from a prior engagement, and what went wrong on the last project. Vague answers to any of these are a tell.
3. In-house hire (full-time engineer or ML engineer)
Best for: firms with a multi-year roadmap of internal tooling and the management capacity to direct an engineer who has no domain context on day one.
Real strength: compounding institutional knowledge. An engineer who stays for three years and absorbs how the business actually runs becomes more valuable each year.
Failure mode: ramp time and management overhead. A strong ML engineer in a tier-one market costs $180K to $260K all-in and takes six to nine months to be productive on domain-specific work. For a firm with one or two well-defined tooling needs, this is the most expensive option per shipped feature. The second failure mode is retention: a single engineer at a non-tech firm is structurally lonely, has no peer review, and tends to leave inside two years.
4. Solo specialist (independent operator with domain depth)
Best for: firms with one to three well-scoped tooling needs, a preference for direct communication with the person doing the work, and willingness to own the system after delivery.
Real strength: domain fit and communication compression. A specialist who has shipped this type of tool before skips the discovery phase that an agency or new hire has to go through. The conversation is with the person writing the code, which removes the game of telephone that agency engagements introduce.
Failure mode: single point of failure. A solo operator gets sick, takes on another client, or shifts focus. There is no second engineer to cover. There is no SOC 2 attestation, no E&O insurance at agency scale, and no formal SLA. For workflows that touch regulated data or need 24/7 operational support, this is a real constraint. The mitigation is documentation, source-code handover, and a clear runbook, but the constraint does not disappear.
Decision matrix
| Criterion | No-code | Agency | In-house | Solo specialist |
|---|---|---|---|---|
| Time to first working version | 1-2 weeks | 8-16 weeks | 6-9 months | 3-6 weeks |
| Total cost (12 months) | $5K-$30K | $200K-$600K | $200K-$280K | $40K-$120K |
| Custom ML / model work | Limited | Yes | Yes | Yes |
| Domain fit at start | N/A | Low | Low | High (if specialist matches) |
| Ongoing ownership burden | Low | Medium | High | Medium |
| Bench depth / redundancy | N/A | High | Low | Low |
| Best fit project size | Small | Large | Continuous | Small to medium |
How to actually choose
The decision usually comes down to three questions:
- Is the workflow standard or custom? If it is standard (basic CRM extension, simple dashboard, lightweight automation), no-code is the right answer. If it is custom (proprietary scoring logic, document extraction across messy formats, multi-system orchestration), no-code will cap out.
- How many tools does the firm need over the next 24 months? One or two: solo specialist or agency. Five or more on a continuous roadmap: in-house hire becomes economical.
- How much domain context does the build require? Low domain context: any of the four works. High domain context (the tool needs to understand how the business actually operates day to day): agencies and new hires both struggle. The realistic options narrow to a specialist with relevant background or an in-house engineer who already has it.
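The three questions above can be condensed into a rough heuristic. This is a sketch for illustration only: the function name, parameters, and thresholds are assumptions made for this example, not a formal model, and real decisions still need judgment about budget, timelines, and ownership capacity.

```python
def recommend_build_path(workflow_is_custom: bool,
                         tools_next_24_months: int,
                         needs_deep_domain_context: bool,
                         specialist_available: bool) -> str:
    """Illustrative encoding of the three decision questions."""
    # Q1: standard workflows fit no-code; custom ones cap out there.
    if not workflow_is_custom:
        return "no-code"

    # Q2: a continuous roadmap (five or more tools) makes an
    # in-house hire economical.
    if tools_next_24_months >= 5:
        return "in-house hire"

    # Q3: high domain context narrows the field to a specialist with
    # relevant background, or an engineer who already has that context.
    if needs_deep_domain_context:
        return "solo specialist" if specialist_available else "in-house hire"

    # One to three well-scoped builds, low domain context: either works.
    # A specialist is cheaper; an agency adds bench depth.
    return "solo specialist or agency"


print(recommend_build_path(workflow_is_custom=True,
                           tools_next_24_months=2,
                           needs_deep_domain_context=True,
                           specialist_available=True))  # → solo specialist
```

The point of writing it down is not precision; it is that each branch forces an explicit answer to one of the three questions before procurement comfort can take over.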
Frequently asked questions
What if we want to start with no-code and migrate later?
This is reasonable for prototypes and discovery. It is not reasonable for production systems. Migration from a no-code platform to custom code is rarely a port. It is a rebuild, because the data model, business logic, and workflow assumptions baked into the no-code tool are not portable. Plan for a rebuild, not a migration.
How do we evaluate a solo specialist when there is no firm behind them?
Three signals matter more than firm size: shipped work relevant to your operation, ability to articulate the failure modes of the work (not just the successes), and clear documentation and handover practices. A specialist who cannot show you the codebase from a prior engagement, cannot explain what went wrong on a previous project, or cannot describe how they hand off a system at the end is a higher risk than a small agency.
What does an agency do better than a specialist?
Concurrent multi-track work and continuity. If the project requires design, frontend, backend, ML, and QA running in parallel, an agency can staff that. If the firm needs the same vendor available for the next three years, an agency has more institutional continuity than any individual.
What does a specialist do better than an agency?
Domain fit and communication directness. If the specialist has built this exact type of tool before, the discovery phase is compressed from weeks to days. The conversation is with the person doing the work, which removes the account-management layer where most scope drift originates.
How do we de-risk a solo specialist engagement?
Source code in your GitHub from day one, weekly working demos, written architecture documentation, a runbook for operating the system after handover, and a defined exit plan. If any of these are missing from the proposal, the risk is real.
What size firms is this comparison aimed at?
Mid-market operating businesses doing roughly $500K and up in annual profit. Below that, no-code or off-the-shelf SaaS usually wins on cost. Above $50M in profit, the firm typically has internal engineering capacity and the question shifts from build vs buy at the tooling level to platform decisions at the architecture level.
Closing note
The honest version of this comparison is that no single option dominates. No-code wins on speed for simple workflows. Agencies win on bench depth for large concurrent builds. In-house hires win on long-term continuity for firms with continuous tooling needs. Solo specialists win on domain fit and communication directness for one-to-three well-scoped builds.
The wrong choice usually comes from picking based on procurement comfort rather than fit. A firm that picks an agency because the procurement process is familiar, when the actual need is a single domain-specific tool, will overpay and underdeliver. A firm that picks a solo specialist for a five-tool roadmap when they should be hiring will burn out the specialist and miss timelines.
Match the option to the shape of the work.