Frequently Asked Questions about Vibe Coding

Vibe coding has attracted much attention in recent weeks with the release of many AI-driven tools. This blog answers some of the Frequently Asked Questions (FAQ) around vibe coding.
Background
Vibe coding is gaining popularity as large language models (LLMs) continue to mature and AI-driven development tools become increasingly available. The FAQ below covers what vibe coding is, its benefits and risks, and how to get started with it responsibly.
FAQ
What is “Vibe Coding”?
The term ‘vibe coding’ was coined in a tweet from Andrej Karpathy. It describes a method of developing code with AI in which the AI takes instructions, writes code and fixes errors, all with minimal human review. This often means blindly accepting whatever code the AI has written and whatever changes it suggests. It frequently also involves a speech-to-text application so the coder can talk directly to the AI.
There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper…
— Andrej Karpathy (@karpathy) February 2, 2025
There’s currently some debate around the definition of ‘vibe coding.’ As originally coined, it means using AI tools for code development and blindly accepting what the AI creates without vetting it. There is some semantic diffusion, though, and the term is becoming synonymous with AI-assisted development more broadly, where AI tools support an otherwise normal, reviewed development process.
What are the benefits to vibe coding?
Vibe coding is incredibly powerful for churning out proofs of concept (PoCs), minimum viable products (MVPs) and other prototype projects. In a more organized approach, it’s great at helping make focused changes to existing codebases.
What are the risks of vibe coding?
Currently, vibe coding tends to be fairly myopic. The following are a few of the risks we’ve observed with vibe coding:
- Incomplete refactoring. Refactoring restructures code without changing its behavior, and it typically touches many places throughout a codebase. An AI coding tool will often suggest a refactor but miss several of the places that need to be updated, leaving the codebase inconsistent or broken.
- Large codebases exceed context windows. The entire codebase is usually larger than the LLM’s context window (the amount of text it can hold in its ‘memory’ at once), so the application must correctly identify which parts of the code are relevant to read and understand.
- Introduction of security flaws. Generated code can contain vulnerabilities, such as missing input validation or insecure defaults, and these are easy to miss when changes are accepted without review.
- Slopsquatting. Slopsquatting describes attackers publishing malicious packages under names that LLMs commonly hallucinate, so that a vibe-coded project installing a suggested dependency without checking it pulls in the attacker’s code instead. A minimal dependency-vetting sketch follows this list.
- Poorly written or difficult-to-maintain code. The application may write good code for a specific area of the project, but it may not fit well with the overall style or structure of the project.
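
To illustrate the slopsquatting point, here is a minimal sketch, in Python, of vetting AI-suggested dependencies against PyPI’s public JSON API before installing them. It only confirms that a package name is registered and has a release history; it does not prove the package is safe, and the package names in the example are placeholders.

```python
# Sketch: check that AI-suggested dependencies actually exist on PyPI before
# installing them. A 404 from the PyPI JSON API suggests a hallucinated name.
import json
import sys
import urllib.error
import urllib.request


def pypi_metadata(package):
    """Return PyPI metadata for a package, or None if it is not registered."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None  # Name not on PyPI -- possibly hallucinated.
        raise


def vet_packages(packages):
    for name in packages:
        meta = pypi_metadata(name)
        if meta is None:
            print(f"[!] {name}: not found on PyPI -- possible slopsquatting target")
        else:
            print(f"[ok] {name}: {len(meta.get('releases', {}))} releases published")


if __name__ == "__main__":
    # Example: python vet_deps.py requests some-suggested-package
    vet_packages(sys.argv[1:])
```

Names that are flagged should be investigated (or simply not installed) rather than trusted because the AI suggested them.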
How can I mitigate these risks?
Here are a few steps you can take to mitigate the risks of vibe coding:
- Conduct a code review of anything vibe coded. Code review is paramount. Ensure that you have engineers who understand the vibe code that was written and can perform a comprehensive review. Never blindly accept the results of your vibe coding for production.
- Lean on your Secure Software Development Life Cycle (SSDLC) and development, security, and operations tools. Don’t abandon your SSDLC or DevSecOps solutions. Continue to use tools like Snyk, Veracode and SonarQube when vibe coding.
- Test, test, test. Continue to test vibe-coded software and scripts in lower environments, and perform end-to-end, integration and unit testing (a small illustrative test module follows this list).
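
As a simplified example of that last point, suppose the AI generated a small text-slugifying helper; a short pytest module like the one below pins down the expected behavior so later AI-driven edits that break it get caught. The module path and function are hypothetical stand-ins for whatever your assistant actually produced.

```python
# Sketch: pytest cases that lock in the behavior of an AI-generated helper.
# "myapp.text_utils.slugify" is a hypothetical example; substitute your own code.
import pytest

from myapp.text_utils import slugify


def test_basic_slug():
    assert slugify("Hello, World!") == "hello-world"


def test_collapses_whitespace():
    assert slugify("  many   spaces  ") == "many-spaces"


@pytest.mark.parametrize("bad_input", [None, 123, b"bytes"])
def test_rejects_non_strings(bad_input):
    with pytest.raises(TypeError):
        slugify(bad_input)
```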
How do I get started?
Just feel the vibes and let the AI do the work. While we say that in jest, successful (and less risky) methods involve using multiple AI tools to draft a specification, refine it and then pass it on to the coding agent. Harper Reed’s blog explains this process as part of a very good workflow. Here's our summary of his guidance:
- Use an LLM to draft a detailed plan first. Give the LLM a prompt indicating you’d like it to ask you detailed questions about the project design and architecture until you have a useful specification.
- Ask for prompts. Ask the LLM to generate a series of prompts from that specification that you can pass to an AI coding tool.
- Walk through the prompts with an AI agent. Ask your AI coding agent to walk through the prompts. Accept the changes as-is if you like (or if you’re feeling lucky).
- Routinely test after each prompt. Test after every prompt and ask the AI coding tool to fix any errors or tweak any issues as they arise.
- Use a version control system like git to take snapshots. Take a snapshot (commit) after each testing cycle. The agent can alter your code drastically, so it’s very useful to have a way to roll back changes.
- Now you have a new application!
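
The first two steps above can even be scripted. The sketch below uses the OpenAI Python client purely as an example of a chat-capable LLM API; the model name, prompts and project idea are placeholders, and in practice step 1 is an interactive back-and-forth rather than a single call.

```python
# Sketch of the "spec first, then prompts" workflow, using the OpenAI Python
# client as one example LLM API. Model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(system: str, user: str, model: str = "gpt-4o") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return response.choices[0].message.content


# Step 1: have the model produce a detailed specification for the project.
spec = ask(
    "You are a software architect. Ask clarifying questions where needed, then "
    "produce a detailed specification for the project the user describes.",
    "I want a CLI tool that finds and removes duplicate photos in a folder.",
)

# Step 2: turn the specification into an ordered series of prompts that can be
# handed, one at a time, to an AI coding agent.
prompts = ask(
    "Break the following specification into small, ordered implementation "
    "prompts suitable for an AI coding agent. Number each prompt.",
    spec,
)

print(prompts)
```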
What types of applications are available to help with vibe coding?
Several types of applications are available for vibe coding. There are Integrated Development Environments (IDEs) and IDE extensions, tools that integrate with a continuous integration/continuous delivery (CI/CD) pipeline, and then there are the LLMs and LLM desktop applications themselves. Some examples of each:
IDEs & IDE extensions:
- Cursor
- Cline
- GitHub Copilot
- Bolt.new
- Lovable
- Codeium Windsurf
- Replit
CI/CD Integrations:
- CodeRabbit AI
- Graphite (graphite.dev)
LLM Desktop Apps:
- Claude Desktop
- Aider
So I can get rid of all of my junior engineers?
No! That’s a terrible idea. The tools listed above are great at augmenting and enhancing the development process, but they make mistakes and need trained eyes to ensure quality engineering. They are great at helping to write code, but right now, not great at engineering products. Plus, if there are no more junior engineers, there won’t be anyone to promote to senior engineers in a few years.
How do vibe coding apps work?
These apps work much like any other AI application. They include a set of ‘system prompts’ instructing an LLM on how to act. The prompt then includes the names and contents of the files open in the IDE, structured in a way the LLM can understand. Additional information, such as closed files, the directory structure, etc., might be included as well. The result is one large prompt that is sent to the selected LLM. For instance, one popular extension’s system prompt includes:
You are an AI programming assistant. When asked for your name, you must respond with
"GitHub Copilot". Follow the user's requirements carefully & to the letter. Follow
Microsoft content policies. Avoid content that violates copyrights. If you are asked
to generate content that is harmful, hateful, racist, sexist, lewd, violent, or
completely irrelevant to software engineering, only respond with "Sorry, I can't
assist with that." Keep your answers short and impersonal. You can answer general
programming questions and perform the following tasks:
* Ask a question about the files in your current workspace
* Explain how the code in your active editor works
* Make changes to existing code
* Review the selected code in your active editor
* Generate unit tests for the selected code
* Propose a fix for the problems in the selected code
* Scaffold code for a new file or project in a workspace
* Create a new Jupyter Notebook
* Ask questions about VS Code
* Generate query parameters for workspace search
* Ask how to do something in the terminal
* Explain what just happened in the terminal
The applications then expose several “tools” the LLM can use to edit files and run commands. The same popular extension includes this passage, giving insight into how it interacts with the user:
The active document is the source code the user is looking at right now. You have
read access to the code in the active document, files the user has recently worked
with and open tabs. You are able to retrieve, read and use this code to answer
questions. You cannot retrieve code that is outside of the current project. You can
only give one reply for each conversation turn.
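
To make the above more concrete, here is a deliberately simplified sketch of how such an extension might assemble its prompt from the system instructions, the active document and other open tabs before sending it to an LLM. Real tools use richer structured formats and tool-calling APIs; all names and file contents below are illustrative.

```python
# Simplified sketch: assembling an IDE assistant's prompt from a system prompt
# plus the active document and open tabs. Real extensions are far more elaborate.
from dataclasses import dataclass

SYSTEM_PROMPT = "You are an AI programming assistant. ..."  # truncated example


@dataclass
class OpenFile:
    path: str
    contents: str
    is_active: bool = False


def build_messages(user_request, open_files):
    """Return a chat-style message list ready to send to an LLM API."""
    parts = []
    for f in open_files:
        tag = "ACTIVE DOCUMENT" if f.is_active else "OPEN TAB"
        parts.append(f"[{tag}] {f.path}\n-----\n{f.contents}\n-----")
    context = "\n\n".join(parts)
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"{context}\n\nUser request: {user_request}"},
    ]


# Example usage with hypothetical files:
messages = build_messages(
    "Add input validation to parse_config.",
    [
        OpenFile("src/config.py", "def parse_config(path): ...", is_active=True),
        OpenFile("src/main.py", "from config import parse_config"),
    ],
)
```

The tool-use side works similarly: the LLM’s reply can include structured requests (edit this file, run this command) that the application executes, with the results fed back into the next prompt.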
What configuration options are available?
Many tools have customizable settings where you can point them at specific libraries or documentation. Some tools have preferred libraries and versions set in their system prompts. You can also simply ask the tool to use specific libraries and languages.
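
For example, several tools read a project-level rules file (Cursor’s .cursorrules is one such mechanism) containing plain-language instructions that are added to every request. The contents below are purely illustrative.

```
# Illustrative project rules for an AI coding assistant
- Use Python 3.12 with type hints throughout.
- Prefer the libraries already in requirements.txt; flag any new dependency.
- Follow the repository's existing style: black formatting, pytest for tests.
- Consult the docs/ directory before changing public APIs.
```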
Is Tenable looking into safety and security concerns around vibe coding?
Yes, Tenable Research is actively researching vibe coding methods, tools and results, and will be sharing more of our findings in future publications on the Tenable blog.