{"id":35844,"date":"2026-04-28T17:56:59","date_gmt":"2026-04-28T15:56:59","guid":{"rendered":"https:\/\/www.kaspersky.co.za\/blog\/safer-vibe-coding-2026\/35844\/"},"modified":"2026-04-28T17:56:59","modified_gmt":"2026-04-28T15:56:59","slug":"safer-vibe-coding-2026","status":"publish","type":"post","link":"https:\/\/www.kaspersky.co.za\/blog\/safer-vibe-coding-2026\/35844\/","title":{"rendered":"How to mitigate vibe-coding risks"},"content":{"rendered":"<p>The entry barriers for app development have plummeted in recent times \u2014 with nearly anyone now able to build a professional website, personal news bot, or dashboard simply by giving a chatbot or AI agent a few instructions in natural English. Unfortunately, a massive gap exists between a slick prototype and a reliable, production-ready, secure application. To avoid becoming the subject of another <a href=\"https:\/\/www.kaspersky.com\/blog\/vibe-coding-2025-risks\/54584\/\" target=\"_blank\" rel=\"noopener nofollow\">AI fail story<\/a>, or losing money and sensitive data, follow these straightforward tips. These are intended specifically for non-technical creators and very small teams. Larger enterprises should follow <a href=\"https:\/\/www.kaspersky.com\/blog\/ai-safe-deployment-guidelines\/52789\/\" target=\"_blank\" rel=\"noopener nofollow\">more sophisticated recommendations<\/a>.<\/p>\n<h2>The primary risks of AI-generated code<\/h2>\n<p>While vibe coding can deliver a seemingly functional app in just a few hours, it will likely contain dangerous flaws. AI models are trained on code samples from across the internet, which often include suboptimal tutorials, buggy snippets, and outright junk. Sometimes this code simply fails to run, but more often the situation is subtler and more hazardous: the app appears to work, yet under the hood, it might rely on a crude imitation of the required logic or contain critical vulnerabilities. 
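To make this concrete, here is a purely illustrative Python sketch (all names are hypothetical) of how an app can pass every casual test while missing a critical access-control check:

```python
# Purely illustrative sketch (all names hypothetical): an AI-generated
# handler that "works" in every demo yet contains an Insecure Direct
# Object Reference (IDOR) flaw.

DOCUMENTS = {
    1: {"owner": "alice", "text": "Alice's contract"},
    2: {"owner": "bob", "text": "Bob's payslip"},
}

def get_document_insecure(current_user: str, doc_id: int) -> str:
    # Looks functional: it returns the requested document every time...
    # ...but never checks WHO is asking, so any user can read any record.
    return DOCUMENTS[doc_id]["text"]

def get_document_secure(current_user: str, doc_id: int) -> str:
    doc = DOCUMENTS.get(doc_id)
    # The easily-omitted step: verify the requester actually owns the record.
    if doc is None or doc["owner"] != current_user:
        raise PermissionError("access denied")
    return doc["text"]
```

The insecure version passes any quick manual test, which is exactly why such flaws survive into production.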
According to a <a href="https:\/\/labs.cloudsecurityalliance.org\/research\/csa-research-note-ai-generated-code-vulnerability-surge-2026\/#:~:text=algorithms%20%28CWE,case%20flaws" target="_blank" rel="noopener nofollow">study by the Cloud Security Alliance AI Safety Initiative<\/a>, keep the following findings in mind when using AI for coding:<\/p>\n<ul>\n<li>At least 45% of AI-generated code contains dangerous vulnerabilities, such as failing to verify the user before granting access to sensitive data.<\/li>\n<li>A professional developer using AI can write code three to four times faster, but may introduce 10 times as many vulnerabilities.<\/li>\n<li>20% of AI-generated code attempts to use external libraries and modules that don\u2019t actually exist.<\/li>\n<li>Even when an application handles confidential data \u2014 such as payments, private messages, or documents \u2014 AI-generated code sometimes skips credential verification entirely. This can leave the app\u2019s data open for anyone on the internet to read.<\/li>\n<li>In other instances, the app might correctly prompt for a username and password but fail to enforce access controls, allowing any registered user to view everyone else\u2019s data.<\/li>\n<li>Access keys (tokens) for databases and AI services may be embedded directly in the source code, where they\u2019re easy to steal and hard to rotate after a data breach or cyberattack.<\/li>\n<li>Project code or critical build outputs are often deployed to servers without proper access restrictions, leaving both the application logic and sensitive access keys vulnerable to theft.<\/li>\n<li>AI may implement insecure database access patterns, which can allow attackers to bypass the application to steal data or execute arbitrary code on the database server.<\/li>\n<li>Apps that include API functionality often suffer from insecure API implementations, lacking both user permission checks and rate limiting.<\/li>\n<\/ul>\n<h2>Core principles of 
securing vibe code<\/h2>\n<p><strong>Always verify.<\/strong> Treat AI-generated code as a rough draft. It should always be reviewed and rigorously tested. Ideally, professional developers should handle this; however, if none are available, the vibe-coder should at least test the application themselves, have friends or colleagues poke around the live app, and ask them to review key code snippets. It\u2019s also possible to evaluate code quality by submitting a separate prompt to the AI: \u201cReview this code for secure development best practices and check for OWASP Top 10 vulnerabilities\u201d.<\/p>\n<p><strong>Protect secrets.<\/strong> Never include passwords, API keys, or any other sensitive data in AI prompts. Instead, instruct the AI to write code that securely stores all secrets in environment variables (special hidden settings).<\/p>\n<p><strong>Prioritize efforts.<\/strong> The main risks emerge when an application is network-accessible to outsiders, processes valuable data, or runs on infrastructure that would be useful to attackers. The components of an app or system that meet these criteria are precisely what needs to be protected first. A static website composed of three HTML pages faces significantly lower risk than a loyalty program integrated into an online store.<\/p>\n<p><strong>Make security an explicit requirement.<\/strong> Even a single straightforward line in the prompt, like \u201cFollow industry standards and security best practices when generating this code\u201d, improves the output. Providing more specific requirements for critical code snippets makes the results even better.<\/p>\n<p><strong>Don\u2019t trust default settings.<\/strong> Often, the danger in vibe coding lies in the configuration rather than the code itself. For example, an app processing sensitive company data might be deployed on a public vibe-coding platform (Lovable or the like) and remain accessible to the entire internet by default. 
Even if the code is flawless, making that information public is a critical security failure. Because of this, every component \u2014 from hosting and database settings to the deployment pipeline \u2014 must be manually reviewed and properly configured. If the purpose of a setting is unclear, ask a chatbot for the optimal values, making it clear that the goal is to maximize security, and describing who the app is intended for.<\/p>\n<p><strong>Security is a continuous process.<\/strong> Securing the app should not be treated as a one-off task. Every time an application is updated, hosting providers are changed, or a project undergoes any other major shift, every security step should be revisited and the risks reassessed.<\/p>\n<h2>Tips for securing vibe code<\/h2>\n<p>It\u2019s natural to want an app built from broad prompts like \u201cMake me a beautiful, user-friendly, fast, reliable, and secure app for [use case].\u201d However, for the results to actually be effective, each of those requirements needs to be fleshed out. Below, we\u2019ve outlined recommendations for building standard components that will make vibe code more secure. It\u2019s important to emphasize that \u201cmore secure\u201d doesn\u2019t mean \u201cperfectly secure\u201d \u2014 these approaches lower the risk, but that risk remains well above zero.<\/p>\n<p><strong>Demand security from the AI.<\/strong> When assigning a task to a neural network, be explicit: \u201cwrite secure code, validate data, hash passwords\u201d. Each type of task requires its own security prompt. For instance, don\u2019t just ask it to \u201cbuild a login form\u201d. Instead, ask for a \u201csecure login form with credential validation, authentication and authorization (user permissions) controls, brute-force protection, password hashing according to modern standards, transmission strictly over HTTPS, and no hardcoded secrets\u201d. It makes sense to use these secure requirement templates every time. 
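To illustrate just one item from such a prompt, here is a hedged standard-library sketch of what "password hashing according to modern standards" can look like in Python. The iteration count is an assumption in line with current OWASP guidance for PBKDF2-SHA256, and dedicated libraries such as argon2-cffi or bcrypt remain the better production choice:

```python
# Sketch of password hashing using only the standard library
# (PBKDF2-HMAC-SHA256). In production, a dedicated library such as
# argon2-cffi or bcrypt is generally preferable; the iteration count
# below is an assumption based on current OWASP guidance.
import hashlib
import hmac
import os

ITERATIONS = 600_000

def hash_password(password: str) -> tuple:
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)
```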
It\u2019s also helpful to keep a short cheat sheet of standard requirements for AI prompts: \u201cvalidate all external data and user input before processing\u201d, \u201cno secrets in code\u201d, \u201cprotect APIs from abuse\u201d, \u201crestrict user permissions\u201d, and \u201csecure default settings\u201d.<\/p>\n<p><strong>Use off-the-shelf solutions.<\/strong> If an app needs a user management system, insist on using a popular, reputable library, such as NextAuth or Auth0, rather than inventing a new and vulnerable solution. Homemade authentication is one of the most common causes of data breaches. This applies to more than just login and registration; for other high-risk actions like file uploads and API call processing, it\u2019s better to use established frameworks and libraries with built-in protections rather than building everything from scratch.<\/p>\n<p><strong>Don\u2019t trust the AI blindly; verify open-source components.<\/strong> Neural networks often try to introduce non-existent components and libraries into a project or suggest outdated versions. Always search for the suggested names online to ensure they are real, widely used, and secure \u2014 and make sure the latest versions are used.<\/p>\n<p><strong>Demand robust encryption.<\/strong> Explicitly state that modern industry standards must be used for both data transmission and storage: TLS 1.3 (for example, via an up-to-date OpenSSL) for network traffic; Argon2 or bcrypt for hashing credentials; and so on.<\/p>\n<p><strong>Never trust user input.<\/strong> Always instruct the AI to include validation for any data entered by users, whether in forms or search bars. Use terms like \u201cparameterization\u201d and \u201csanitization\u201d to emphasize that the app needs protection against malicious actors, not just users\u2019 typos.<\/p>\n<p><strong>Set limits on user actions.<\/strong> Require the AI to implement rate limiting for login attempts or general requests. 
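As a sketch of what that requirement means in practice, here is a minimal in-memory sliding-window limiter (hypothetical names; a real deployment would typically use framework middleware or a shared store such as Redis):

```python
# Hedged sketch (hypothetical names): a tiny in-memory sliding-window
# rate limiter for login attempts. Production apps would normally rely
# on framework middleware or a shared store such as Redis instead.
import time
from collections import defaultdict

WINDOW_SECONDS = 60.0
MAX_ATTEMPTS = 5

_attempts = defaultdict(list)  # client id -> timestamps of recent attempts

def allow_attempt(client_ip, now=None):
    now = time.monotonic() if now is None else now
    # Keep only attempts that fall inside the current window.
    recent = [t for t in _attempts[client_ip] if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_ATTEMPTS:
        _attempts[client_ip] = recent
        return False  # throttled: too many recent attempts
    recent.append(now)
    _attempts[client_ip] = recent
    return True
```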
Rate limiting protects a project from automated attacks like DoS and brute-force password guessing.<\/p>\n<p><strong>Hide the system\u2019s inner workings.<\/strong> If the site crashes, users should see a simple apology page rather than a detailed error report containing snippets of the code. That kind of information is a goldmine for hackers.<\/p>\n<p><strong>Remember that you\u2019re a developer, and you need to protect development-related digital assets.<\/strong> All related accounts \u2014 such as access to GitHub, project hosting, and other resources \u2014 are prime targets for attackers. Be sure to enable two-factor authentication (2FA) on all work accounts.<\/p>\n<p><strong>Make backups.<\/strong> Regularly back up a project both locally and to the cloud to protect it against critical AI errors as well as cyberattacks. These backups should include both the application\u2019s source code and its databases.<\/p>\n<p><strong>Set up a sandbox.<\/strong> Test new features and app versions in a secure environment using a clone of an active site or app and a copy of its database. Always run thorough tests before pushing an update live. This lets you catch issues without putting users or their data at risk.<\/p>\n<p><strong>Update dependencies and scan them for vulnerabilities.<\/strong> A vibe-coded app will almost certainly rely on third-party libraries and components, known as dependencies. It\u2019s wise to update these regularly by rebuilding the app with the latest versions, even if the app\u2019s own code hasn\u2019t changed. This helps patch known security flaws in the packages it relies on.<\/p>\n<p><strong>Check for secrets leaking into the repository.<\/strong> Use secrets scanners like TruffleHog to audit the resulting code. Even when instructed not to, the AI might slip up and include an API key or password in the source code. 
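For a sense of what such scanners look for, here is a deliberately crude illustration; real tools like TruffleHog ship many more detectors and can even verify whether a found credential is live. The two patterns below are simplified examples, not a complete rule set:

```python
# Toy illustration only: a crude pattern check for obvious hardcoded
# secrets. Real scanners such as TruffleHog use far more detectors and
# can verify found credentials; these patterns are simplified examples.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)(?:api[_-]?key|password)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def find_secrets(source: str) -> list:
    """Return every substring that looks like a hardcoded secret."""
    return [m.group(0) for p in SECRET_PATTERNS for m in p.finditer(source)]
```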
A scanner ensures that files containing keys and passwords don\u2019t end up in Git or get published alongside the project.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Building a functional app without programming skills is now a possibility, but maintaining it and ensuring cybersecurity remains a challenge. Here are several protective measures that even non-technical creators can implement. <\/p>\n","protected":false},"author":2722,"featured_media":35845,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1999,3021],"tags":[1140,3778,1876,97,3833],"class_list":{"0":"post-35844","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business","8":"category-smb","9":"tag-ai","10":"tag-llm","11":"tag-machine-learning","12":"tag-security-2","13":"tag-vibe-coding"},"hreflang":[{"hreflang":"en-za","url":"https:\/\/www.kaspersky.co.za\/blog\/safer-vibe-coding-2026\/35844\/"},{"hreflang":"en-in","url":"https:\/\/www.kaspersky.co.in\/blog\/safer-vibe-coding-2026\/30462\/"},{"hreflang":"en-ae","url":"https:\/\/me-en.kaspersky.com\/blog\/safer-vibe-coding-2026\/25508\/"},{"hreflang":"en-gb","url":"https:\/\/www.kaspersky.co.uk\/blog\/safer-vibe-coding-2026\/30306\/"},{"hreflang":"ru","url":"https:\/\/www.kaspersky.ru\/blog\/safer-vibe-coding-2026\/41778\/"},{"hreflang":"x-default","url":"https:\/\/www.kaspersky.com\/blog\/safer-vibe-coding-2026\/55677\/"},{"hreflang":"ru-kz","url":"https:\/\/blog.kaspersky.kz\/safer-vibe-coding-2026\/30613\/"},{"hreflang":"en-au","url":"https:\/\/www.kaspersky.com.au\/blog\/safer-vibe-coding-2026\/36193\/"}],"acf":[],"banners":"","maintag":{"url":"https:\/\/www.kaspersky.co.za\/blog\/tag\/ai\/","name":"AI"},"_links":{"self":[{"href":"https:\/\/www.kaspersky.co.za\/blog\/wp-json\/wp\/v2\/posts\/35844","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.kaspers
ky.co.za\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.kaspersky.co.za\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.kaspersky.co.za\/blog\/wp-json\/wp\/v2\/users\/2722"}],"replies":[{"embeddable":true,"href":"https:\/\/www.kaspersky.co.za\/blog\/wp-json\/wp\/v2\/comments?post=35844"}],"version-history":[{"count":0,"href":"https:\/\/www.kaspersky.co.za\/blog\/wp-json\/wp\/v2\/posts\/35844\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.kaspersky.co.za\/blog\/wp-json\/wp\/v2\/media\/35845"}],"wp:attachment":[{"href":"https:\/\/www.kaspersky.co.za\/blog\/wp-json\/wp\/v2\/media?parent=35844"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.kaspersky.co.za\/blog\/wp-json\/wp\/v2\/categories?post=35844"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.kaspersky.co.za\/blog\/wp-json\/wp\/v2\/tags?post=35844"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}