Use Jinja2 SandboxedEnvironment to prevent SSTI/RCE in azure-ai-evaluation #46182
w-javed wants to merge 2 commits into Azure:main
Conversation
…ation

Add SandboxedEnvironment from jinja2.sandbox to both vulnerable files identified in MSRC-110257:

- `_legacy/prompty/_utils.py`: `render_jinja_template()`
- `simulator/_conversation/__init__.py`: `ConversationBot` template creation

Sandbox is enabled by default (matching PromptFlow behavior). Set `PF_USE_SANDBOX_FOR_JINJA=false` to opt out if needed.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
19 tests covering:

- `render_jinja_template` sandbox (SSTI blocked, normal renders, opt-out)
- `_create_jinja_template` sandbox (SSTI blocked, StrictUndefined preserved)
- `ConversationBot` integration (template + starter both sandboxed)

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Pull request overview
This PR mitigates Jinja2 Server-Side Template Injection (SSTI) / potential RCE in azure-ai-evaluation by switching template creation to Jinja2’s SandboxedEnvironment by default (with an env-var opt-out for compatibility).
Changes:
- Added a sandboxed Jinja2 template factory in the simulator conversation module and replaced direct `jinja2.Template(...)` usage.
- Updated legacy prompty Jinja rendering to use `SandboxedEnvironment.from_string()` by default, gated by `PF_USE_SANDBOX_FOR_JINJA`.
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| `sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/simulator/_conversation/__init__.py` | Introduces `_create_jinja_template()` using `SandboxedEnvironment` by default and routes conversation/starter templates through it. |
| `sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/_legacy/prompty/_utils.py` | Updates `render_jinja_template()` to use `SandboxedEnvironment` by default with an env-var opt-out. |
```python
def _create_jinja_template(template_content: str) -> jinja2.Template:
    """Create a Jinja2 template, using SandboxedEnvironment by default to prevent SSTI attacks.

    Set env var PF_USE_SANDBOX_FOR_JINJA=false to opt out (not recommended).
    """
    use_sandbox = os.environ.get("PF_USE_SANDBOX_FOR_JINJA", "true")
    if use_sandbox.lower() == "false":
        return jinja2.Template(template_content, undefined=jinja2.StrictUndefined)
    sandbox_env = SandboxedEnvironment(undefined=jinja2.StrictUndefined)
    return sandbox_env.from_string(template_content)
```
Consider adding a focused regression test for the sandboxed Jinja path (default) to ensure a known SSTI payload (e.g., accessing class/subclasses) raises jinja2.exceptions.SecurityError, and that PF_USE_SANDBOX_FOR_JINJA=false opts out as expected. This helps prevent accidental reintroduction of unsandboxed rendering in the simulator conversation templates.
```python
def _create_jinja_template(template_content: str) -> jinja2.Template:
    """Create a Jinja2 template, using SandboxedEnvironment by default to prevent SSTI attacks.

    Set env var PF_USE_SANDBOX_FOR_JINJA=false to opt out (not recommended).
    """
    use_sandbox = os.environ.get("PF_USE_SANDBOX_FOR_JINJA", "true")
    if use_sandbox.lower() == "false":
```
The PF_USE_SANDBOX_FOR_JINJA parsing/behavior toggle is now duplicated here and in _legacy/prompty/_utils.py. To reduce the chance of the two implementations drifting (default value, accepted falsey values, StrictUndefined config), consider centralizing this into a shared helper or constant in a common module.
Suggested change:

```diff
-def _create_jinja_template(template_content: str) -> jinja2.Template:
-    """Create a Jinja2 template, using SandboxedEnvironment by default to prevent SSTI attacks.
-    Set env var PF_USE_SANDBOX_FOR_JINJA=false to opt out (not recommended).
-    """
-    use_sandbox = os.environ.get("PF_USE_SANDBOX_FOR_JINJA", "true")
-    if use_sandbox.lower() == "false":
+_PF_USE_SANDBOX_FOR_JINJA_ENV_VAR = "PF_USE_SANDBOX_FOR_JINJA"
+_PF_USE_SANDBOX_FOR_JINJA_DEFAULT = "true"
+_PF_USE_SANDBOX_FOR_JINJA_FALSEY_VALUES = {"false"}
+
+
+def _use_sandbox_for_jinja() -> bool:
+    """Return whether sandboxed Jinja rendering is enabled."""
+    use_sandbox = os.environ.get(_PF_USE_SANDBOX_FOR_JINJA_ENV_VAR, _PF_USE_SANDBOX_FOR_JINJA_DEFAULT)
+    return use_sandbox.lower() not in _PF_USE_SANDBOX_FOR_JINJA_FALSEY_VALUES
+
+
+def _create_jinja_template(template_content: str) -> jinja2.Template:
+    """Create a Jinja2 template, using SandboxedEnvironment by default to prevent SSTI attacks.
+    Set env var PF_USE_SANDBOX_FOR_JINJA=false to opt out (not recommended).
+    """
+    if not _use_sandbox_for_jinja():
```
```python
sanitized_template = sandbox_env.from_string(template_str)
return sanitized_template.render(**kwargs)
```
Variable name "sanitized_template" is misleading here: SandboxedEnvironment.from_string() returns a compiled template but does not sanitize the template content. Renaming to something like "sandboxed_template"/"template" would better reflect what the value represents and avoid confusion when reading security-related code.
Suggested change:

```diff
-    sanitized_template = sandbox_env.from_string(template_str)
-    return sanitized_template.render(**kwargs)
+    sandboxed_template = sandbox_env.from_string(template_str)
+    return sandboxed_template.render(**kwargs)
```
```diff
 def render_jinja_template(template_str, *, trim_blocks=True, keep_trailing_newline=True, **kwargs) -> str:
     try:
-        template = Template(template_str, trim_blocks=trim_blocks, keep_trailing_newline=keep_trailing_newline)
-        return template.render(**kwargs)
+        use_sandbox_env = os.environ.get("PF_USE_SANDBOX_FOR_JINJA", "true")
+        if use_sandbox_env.lower() == "false":
+            template = Template(template_str, trim_blocks=trim_blocks, keep_trailing_newline=keep_trailing_newline)
+            return template.render(**kwargs)
+        else:
+            sandbox_env = SandboxedEnvironment(
+                trim_blocks=trim_blocks, keep_trailing_newline=keep_trailing_newline
+            )
+            sanitized_template = sandbox_env.from_string(template_str)
+            return sanitized_template.render(**kwargs)
```
Consider adding a unit/regression test that verifies render_jinja_template uses SandboxedEnvironment by default and blocks a representative SSTI payload with jinja2.exceptions.SecurityError. Given this is a security fix, a test would help catch future regressions (including the PF_USE_SANDBOX_FOR_JINJA opt-out behavior).
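A minimal version of such a test, using a simplified re-implementation of the patched `render_jinja_template()` for illustration (the real function lives in `_legacy/prompty/_utils.py`):

```python
import os

from jinja2 import Template
from jinja2.exceptions import SecurityError
from jinja2.sandbox import SandboxedEnvironment


def render_jinja_template(template_str, *, trim_blocks=True, keep_trailing_newline=True, **kwargs) -> str:
    # Simplified re-implementation of the patched function for this sketch.
    if os.environ.get("PF_USE_SANDBOX_FOR_JINJA", "true").lower() == "false":
        template = Template(template_str, trim_blocks=trim_blocks, keep_trailing_newline=keep_trailing_newline)
        return template.render(**kwargs)
    sandbox_env = SandboxedEnvironment(trim_blocks=trim_blocks, keep_trailing_newline=keep_trailing_newline)
    return sandbox_env.from_string(template_str).render(**kwargs)


os.environ.pop("PF_USE_SANDBOX_FOR_JINJA", None)

# Benign templates still render under the sandbox.
assert render_jinja_template("Hello {{ name }}!", name="world") == "Hello world!"

# A representative SSTI payload is rejected by default.
try:
    render_jinja_template("{{ ().__class__.__base__.__subclasses__() }}")
    raise AssertionError("expected SecurityError")
except SecurityError:
    pass
```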
Problem
ICM: 31000000565443 (MSRC-110257)
Unsandboxed Jinja2 template rendering in `azure-ai-evaluation` allows Server-Side Template Injection (SSTI) that can escalate to arbitrary Remote Code Execution (RCE). Two files use `jinja2.Template()` directly without sandboxing:

- `_legacy/prompty/_utils.py` — `render_jinja_template()`
- `simulator/_conversation/__init__.py` — `ConversationBot` template creation (2 call sites)

An attacker can exploit this via crafted template strings containing `__class__.__base__.__subclasses__()` chains to execute arbitrary OS commands.

Fix
Replace raw `jinja2.Template()`/`jinja2.Environment` with `jinja2.sandbox.SandboxedEnvironment`, matching the existing pattern in PromptFlow (`promptflow-core` and `promptflow-tools`).

`_legacy/prompty/_utils.py`: `render_jinja_template()` now checks the `PF_USE_SANDBOX_FOR_JINJA` env var (defaults to `"true"`) and uses `SandboxedEnvironment.from_string()` when enabled.

`simulator/_conversation/__init__.py`: Added a `_create_jinja_template()` helper that conditionally creates templates via `SandboxedEnvironment`, and replaced both `jinja2.Template()` call sites:

- `ConversationBot.__init__()` — conversation template
- `ConversationBot.__init__()` — conversation starter template

Behavior

| `PF_USE_SANDBOX_FOR_JINJA` | Result |
|---|---|
| `true` (default) | `SandboxedEnvironment` blocks unsafe attribute access (`__class__`, `__subclasses__`, etc.) |
| `false` | plain `Template` (opt-out, not recommended) |

Tests

`{{ ().__class__.__base__.__subclasses__() }}` is blocked with `SecurityError`