OPA is purpose built for reasoning about information represented in structured documents. The data that your service and its users publish can be inspected and transformed using OPA’s native query language Rego.
What is Rego?
Rego was inspired by Datalog, which is a well understood, decades old query language. Rego extends Datalog to support structured document models such as JSON.
Rego queries are assertions on data stored in OPA. These queries can be used to define policies that enumerate instances of data that violate the expected state of the system.
Why use Rego?
Use Rego for defining policy that is easy to read and write.
Rego focuses on providing powerful support for referencing nested documents and ensuring that queries are correct and unambiguous.
Rego is declarative so policy authors can focus on what queries should return rather than how queries should be executed. These queries are simpler and more concise than the equivalent in an imperative language.
Like other applications which support declarative query languages, OPA is able to optimize queries to improve performance.
Learning Rego
While reviewing the examples below, you might find it helpful to follow along using the online OPA playground. The playground also allows sharing of examples via URL, which can be helpful when asking questions on the OPA Slack. In addition to these official resources, you may also want to check out the community learning materials and tools listed on the OPA Ecosystem page as related to learning Rego.
The Basics
This section introduces the main aspects of Rego.
The simplest rule is a single expression and is defined in terms of a Scalar Value:
pi := 3.14159
Rules define the content of documents. We can query for the content of the pi document generated by the rule above:
pi
3.14159
Rules can also be defined in terms of Composite Values:
rect := {"width": 2, "height": 4}
The result:
rect
{
  "height": 4,
  "width": 2
}
You can compare two scalar or composite values, and when you do so you are checking if the two values are the same JSON value.
rect == {"height": 4, "width": 2}
true
You can define a new concept using a rule. For example, v below is true if the equality expression is true.
v if "hello" == "world"
If we evaluate v, the result is undefined because the body of the rule never evaluates to true. As a result, the document generated by the rule is not defined.
undefined decision
Expressions that refer to undefined values are also undefined. This includes comparisons such as !=.
v == true
undefined decision
v != true
undefined decision
We can define rules in terms of Variables as well:
t if { x := 42; y := 41; x > y }
The formal syntax uses the semicolon character ; to separate expressions. Rule bodies can separate expressions with newlines and omit the semicolon:
t2 if {
    x := 42
    y := 41
    x > y
}
Note that the future keyword if is optional. We could have written v and t2 like this:
v { "hello" == "world" }
t2 {
    x := 42
    y := 41
    x > y
}
When evaluating rule bodies, OPA searches for variable bindings that make all of the expressions true. There may be multiple sets of bindings that make the rule body true. The rule body can be understood intuitively as:
expression-1 AND expression-2 AND ... AND expression-N
The rule itself can be understood intuitively as:
rule-name IS value IF body
If the value is omitted, it defaults to true.
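To make that default explicit, the rule t2 from above could equivalently be written with a value (a sketch; both forms produce the same document):

```rego
t2 := true if {
    x := 42
    y := 41
    x > y
}
```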
When we query for the value of t2 we see the obvious result:
true
Rego References help you refer to nested documents. For example, with:
sites := [{"name": "prod"}, {"name": "smoke1"}, {"name": "dev"}]
And
r if {
    some site in sites
    site.name == "prod"
}
The rule r above asserts that there exists (at least) one document within sites where the name attribute equals "prod".
The result:
true
We can generalize the example above with a rule that defines a set document instead of a boolean document:
q contains name if {
    some site in sites
    name := site.name
}
The value of q is a set of names:
[
  "dev",
  "prod",
  "smoke1"
]
We can re-write the rule r from above to make use of q. We will call the new rule p:
p if q["prod"]
Querying p will have the same result:
true
As you can see, rules that define sets can also be queried with input values; if the value is not a member of the set, the result is undefined:
q["smoke2"]
undefined decision
If you made it this far, congratulations!
This section introduced the main aspects of Rego. The rest of this document walks through each part of the language in more detail.
For a concise reference, see the Policy Reference document.
Scalar Values
Scalar values are the simplest type of term in Rego. Scalar values can be Strings, numbers, booleans, or null.
Documents can be defined solely in terms of scalar values. This is useful for defining constants that are referenced in multiple places. For example:
greeting := "Hello"
max_height := 42
pi := 3.14159
allowed := true
location := null
These documents can be queried like any other:
[greeting, max_height, pi, allowed, location]
[
  "Hello",
  42,
  3.14159,
  true,
  null
]
Strings
Rego supports two different types of syntax for declaring strings. The first is likely to be the most familiar: characters surrounded by double quotes. In such strings, certain characters must be escaped to appear in the string, such as double quotes themselves, backslashes, etc. See the Policy Reference for a formal definition.
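For illustration (the variable name here is hypothetical), a double-quoted string containing escaped quotes and a newline escape looks like this:

```rego
quoted := "She said \"hello\".\nThen she left."
```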
The other type of string declaration is a raw string declaration. These are made of characters surrounded by backticks (`), with the exception that raw strings may not contain backticks themselves. Raw strings are what they sound like: escape sequences are not interpreted, but instead taken as the literal text inside the backticks. For example, the raw string `hello\there` will be the text “hello\there”, not “hello” and “here” separated by a tab. Raw strings are particularly useful when constructing regular expressions for matching, as they eliminate the need to double-escape special characters.
A simple example is a regex to match a valid Rego variable. With a regular string, the regex is "[a-zA-Z_]\\w*", but with raw strings, it becomes `[a-zA-Z_]\w*`.
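Putting the raw-string form to use with the regex.match built-in (the function name and anchors below are illustrative; anchoring ensures the whole value must match):

```rego
is_valid_var_name(name) if regex.match(`^[a-zA-Z_]\w*$`, name)
```

With this sketch, is_valid_var_name("my_var1") evaluates to true, while a name such as "1bad" leaves the function undefined.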
Composite Values
Composite values define collections. In simple cases, composite values can be treated as constants like Scalar Values:
cube := {"width": 3, "height": 4, "depth": 5}
The result:
cube.width
3
Composite values can also be defined in terms of Variables or References. For example:
a := 42
b := false
c := null
d := {"a": a, "x": [b, c]}
+----+-------+------+---------------------------+
| a | b | c | d |
+----+-------+------+---------------------------+
| 42 | false | null | {"a":42,"x":[false,null]} |
+----+-------+------+---------------------------+
By defining composite values in terms of variables and references, rules can define abstractions over raw data and other rules.
Objects
Objects are unordered key-value collections. In Rego, any value type can be used as an object key. For example, the following assignment maps port numbers to a list of IP addresses (represented as strings).
ips_by_port := {
80: ["1.1.1.1", "1.1.1.2"],
443: ["2.2.2.1"],
}
ips_by_port[80]
[
  "1.1.1.1",
  "1.1.1.2"
]
some port; ips_by_port[port][_] == "2.2.2.1"
+------+
| port |
+------+
| 443 |
+------+
When Rego values are converted to JSON non-string object keys are marshalled as strings (because JSON does not support non-string object keys).
ips_by_port
{
  "443": [
    "2.2.2.1"
  ],
  "80": [
    "1.1.1.1",
    "1.1.1.2"
  ]
}
Sets
In addition to arrays and objects, Rego supports set values. Sets are unordered collections of unique values. Just like other composite values, sets can be defined in terms of scalars, variables, references, and other composite values. For example:
s := {cube.width, cube.height, cube.depth}
+---------+
| s |
+---------+
| [3,4,5] |
+---------+
Set documents are collections of values without keys. OPA represents set documents as arrays when serializing to JSON or other formats that do not support a set data type. The important distinction between sets and arrays or objects is that sets are unkeyed while arrays and objects are keyed, i.e., you cannot refer to the index of an element within a set.
When comparing sets, the order of elements does not matter:
{1,2,3} == {3,1,2}
true
Because sets are unordered, variables inside sets must be unified with a ground value outside of the set. If the variable is not unified with a ground value outside the set, OPA will complain:
{1,2,3} == {3,x,2}
1 error occurred: 1:1: rego_unsafe_var_error: var x is unsafe
Because sets share curly-brace syntax with objects, and an empty object is defined with {}, an empty set has to be constructed with a different syntax:
count(set())
0
Variables
Variables are another kind of term in Rego. They appear in both the head and body of rules.
Variables appearing in the head of a rule can be thought of as input and output of the rule. Unlike many programming languages, where a variable is either an input or an output, in Rego a variable is simultaneously an input and an output. If a query supplies a value for a variable, that variable is an input, and if the query does not supply a value for a variable, that variable is an output.
For example:
sites := [
{"name": "prod"},
{"name": "smoke1"},
{"name": "dev"}
]
q contains name if {
    some site in sites
    name := site.name
}
In this case, we evaluate q with a variable x (which is not bound to a value). As a result, the query returns all of the values for x and all of the values for q[x], which are always the same because q is a set.
q[x]
+----------+----------+
| x | q[x] |
+----------+----------+
| "dev" | "dev" |
| "prod" | "prod" |
| "smoke1" | "smoke1" |
+----------+----------+
On the other hand, if we evaluate q with an input value for name we can determine whether name exists in the document defined by q:
q["dev"]
"dev"
Variables appearing in the head of a rule must also appear in a non-negated equality expression within the same rule. This property ensures that if the rule is evaluated and all of the expressions evaluate to true for some set of variable bindings, the variable in the head of the rule will be defined.
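For example, this deliberately broken rule (illustrative only) declares x in the head without binding it in any equality expression in the body, so the compiler rejects it as unsafe:

```rego
bad contains x if {
    1 == 1
}
```

OPA reports an error of the form rego_unsafe_var_error: var x is unsafe.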
References
References are used to access nested documents.
The examples in this section use the data defined in the Examples section.
The simplest reference contains no variables. For example, the following reference returns the hostname of the second server in the first site document from our example data:
sites[0].servers[1].hostname
"helium"
References are typically written using the “dot-access” style. The canonical form does away with . and closely resembles dictionary lookup in a language such as Python:
sites[0]["servers"][1]["hostname"]
"helium"
Both forms are valid, however, the dot-access style is typically more readable. Note that there are four cases where brackets must be used:
- String keys containing characters other than [a-z], [A-Z], [0-9], or _ (underscore).
- Non-string keys such as numbers, booleans, and null.
- Variable keys, which are described later.
- Composite keys, which are described later.
The prefix of a reference identifies the root document for that reference. In the example above this is sites. The root document may be:
- a local variable inside a rule.
- a rule inside the same package.
- a document stored in OPA.
- a document temporarily provided to OPA as part of a transaction.
- an array, object or set, e.g. [1, 2, 3][0].
- a function call, e.g. split("a.b.c", ".")[1].
- a comprehension.
Variable Keys
References can include variables as keys. References written this way are used to select a value from every element in a collection.
The following reference will select the hostnames of all the servers in our example data:
sites[i].servers[j].hostname
+---+---+------------------------------+
| i | j | sites[i].servers[j].hostname |
+---+---+------------------------------+
| 0 | 0 | "hydrogen" |
| 0 | 1 | "helium" |
| 0 | 2 | "lithium" |
| 1 | 0 | "beryllium" |
| 1 | 1 | "boron" |
| 1 | 2 | "carbon" |
| 2 | 0 | "nitrogen" |
| 2 | 1 | "oxygen" |
+---+---+------------------------------+
Conceptually, this is the same as the following imperative (Python) code:
def hostnames(sites):
    result = []
    for site in sites:
        for server in site.servers:
            result.append(server.hostname)
    return result
In the reference above, we effectively used variables named i and j to iterate the collections. If the variables are unused outside the reference, we prefer to replace them with an underscore (_) character. The reference above can be rewritten as:
sites[_].servers[_].hostname
+------------------------------+
| sites[_].servers[_].hostname |
+------------------------------+
| "hydrogen" |
| "helium" |
| "lithium" |
| "beryllium" |
| "boron" |
| "carbon" |
| "nitrogen" |
| "oxygen" |
+------------------------------+
The underscore is special because it cannot be referred to by other parts of the rule, e.g., the other side of the expression, another expression, etc. The underscore can be thought of as a special iterator. Each time an underscore is specified, a new iterator is instantiated.
Under the hood, OPA translates the _ character to a unique variable name that does not conflict with variables and rules that are in scope.
Composite Keys
References can include Composite Values as keys if the key is being used to refer into a set. Composite keys may not be used in refs for base data documents, they are only valid for references into virtual documents.
This is useful for checking for the presence of composite values within a set, or extracting all values within a set matching some pattern. For example:
s := {[1, 2], [1, 4], [2, 6]}
s[[1, 2]]
[
  1,
  2
]
s[[1, x]]
+---+-----------+
| x | s[[1, x]] |
+---+-----------+
| 2 | [1,2] |
| 4 | [1,4] |
+---+-----------+
Multiple Expressions
Rules are often written in terms of multiple expressions that contain references to documents. In the following example, the rule defines a set of arrays where each array contains an application name and a hostname of a server where the application is deployed.
apps_and_hostnames[[name, hostname]] {
    some i, j, k
    name := apps[i].name
    server := apps[i].servers[_]
    sites[j].servers[k].name == server
    hostname := sites[j].servers[k].hostname
}
The result:
apps_and_hostnames[x]
+----------------------+-----------------------+
| x | apps_and_hostnames[x] |
+----------------------+-----------------------+
| ["mongodb","oxygen"] | ["mongodb","oxygen"] |
| ["mysql","carbon"] | ["mysql","carbon"] |
| ["mysql","lithium"] | ["mysql","lithium"] |
| ["web","beryllium"] | ["web","beryllium"] |
| ["web","boron"] | ["web","boron"] |
| ["web","helium"] | ["web","helium"] |
| ["web","hydrogen"] | ["web","hydrogen"] |
| ["web","nitrogen"] | ["web","nitrogen"] |
+----------------------+-----------------------+
Don’t worry about understanding everything in this example right now. There are just two important points:
- Several variables appear more than once in the body. When a variable is used in multiple locations, OPA will only produce documents for the rule with the variable bound to the same value in all expressions.
- The rule is joining the apps and sites documents implicitly. In Rego (and other languages based on Datalog), joins are implicit.
Self-Joins
Using a different key on the same array or object provides the equivalent of self-join in SQL. For example, the following rule defines a document containing apps deployed on the same site as "mysql":
same_site[apps[k].name] {
    some i, j, k
    apps[i].name == "mysql"
    server := apps[i].servers[_]
    server == sites[j].servers[_].name
    other_server := sites[j].servers[_].name
    server != other_server
    other_server == apps[k].servers[_]
}
The result:
same_site[x]
+-------+--------------+
| x | same_site[x] |
+-------+--------------+
| "web" | "web" |
+-------+--------------+
Comprehensions
Comprehensions provide a concise way of building Composite Values from sub-queries.
Like Rules, comprehensions consist of a head and a body. The body of a comprehension can be understood in exactly the same way as the body of a rule, that is, one or more expressions that must all be true in order for the overall body to be true. When the body evaluates to true, the head of the comprehension is evaluated to produce an element in the result.
The body of a comprehension is able to refer to variables defined in the outer body. For example:
region := "west"
names := [name | sites[i].region == region; name := sites[i].name]
+-----------------+--------+
| names | region |
+-----------------+--------+
| ["smoke","dev"] | "west" |
+-----------------+--------+
In the above query, the second expression contains an Array Comprehension that refers to the region
variable. The region variable will be bound in the outer body.
When a comprehension refers to a variable in an outer body, OPA will reorder expressions in the outer body so that variables referred to in the comprehension are bound by the time the comprehension is evaluated.
Comprehensions are similar to the same constructs found in other languages like Python. For example, we could write the above comprehension in Python as follows:
# Python equivalent of Rego comprehension shown above.
names = [site.name for site in sites if site.region == "west"]
Comprehensions are often used to group elements by some key. A common use case for comprehensions is to assist in computing aggregate values (e.g., the number of containers running on a host).
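As a sketch of the aggregate use case, assume a hypothetical containers array (not part of the example data) where each element records the host it runs on. A set comprehension collects the distinct hosts and an object comprehension counts the containers per host:

```rego
containers := [
    {"host": "helium", "name": "web-1"},
    {"host": "helium", "name": "db-0"},
    {"host": "boron", "name": "web-2"},
]

# The set of distinct hosts.
hosts := {c.host | c := containers[_]}

# Map each host to the number of containers running on it.
containers_per_host := {host: n |
    host := hosts[_]
    n := count([c | c := containers[_]; c.host == host])
}
```

Querying containers_per_host would yield {"boron": 1, "helium": 2}.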
Array Comprehensions
Array Comprehensions build array values out of sub-queries. Array Comprehensions have the form:
[ <term> | <body> ]
For example, the following rule defines an object where the keys are application names and the values are hostnames of servers where the application is deployed. The hostnames of servers are represented as an array.
app_to_hostnames[app_name] := hostnames if {
    app := apps[_]
    app_name := app.name
    hostnames := [hostname | name := app.servers[_]
                             s := sites[_].servers[_]
                             s.name == name
                             hostname := s.hostname]
}
The result:
app_to_hostnames[app]
+-----------+------------------------------------------------------+
| app | app_to_hostnames[app] |
+-----------+------------------------------------------------------+
| "mongodb" | ["oxygen"] |
| "mysql" | ["lithium","carbon"] |
| "web" | ["hydrogen","helium","beryllium","boron","nitrogen"] |
+-----------+------------------------------------------------------+
Object Comprehensions
Object Comprehensions build object values out of sub-queries. Object Comprehensions have the form:
{ <key>: <term> | <body> }
We can use Object Comprehensions to write the rule from above as a comprehension instead:
app_to_hostnames := {app.name: hostnames |
    app := apps[_]
    hostnames := [hostname |
        name := app.servers[_]
        s := sites[_].servers[_]
        s.name == name
        hostname := s.hostname]
}
The result is the same:
app_to_hostnames[app]
+-----------+------------------------------------------------------+
| app | app_to_hostnames[app] |
+-----------+------------------------------------------------------+
| "mongodb" | ["oxygen"] |
| "mysql" | ["lithium","carbon"] |
| "web" | ["hydrogen","helium","beryllium","boron","nitrogen"] |
+-----------+------------------------------------------------------+
Object comprehensions are not allowed to have conflicting entries, similar to rules:
{"foo": y | z := [1, 2, 3]; y := z[_] }
1 error occurred: "foo": eval_conflict_error: object keys must be unique
Set Comprehensions
Set Comprehensions build set values out of sub-queries. Set Comprehensions have the form:
{ <term> | <body> }
For example, to construct a set from an array:
a := [1, 2, 3, 4, 3, 4, 3, 4, 5]
b := {x | x = a[_]}
+---------------------+-------------+
| a | b |
+---------------------+-------------+
| [1,2,3,4,3,4,3,4,5] | [1,2,3,4,5] |
+---------------------+-------------+
Rules
Rules define the content of Virtual Documents in OPA. When OPA evaluates a rule, we say OPA generates the content of the document that is defined by the rule.
The sample code in this section make use of the data defined in Examples.
Generating Sets
The following rule defines a set containing the hostnames of all servers:
hostnames contains name if {
    name := sites[_].servers[_].hostname
}
Note that the (future) keywords contains and if are optional here.
If future keywords are not available to you, you can define the same rule as follows:
hostnames[name] {
    name := sites[_].servers[_].hostname
}
When we query for the content of hostnames we see the same data as we would if we queried using the sites[_].servers[_].hostname reference directly:
hostnames[name]
+-------------+-----------------+
| name | hostnames[name] |
+-------------+-----------------+
| "beryllium" | "beryllium" |
| "boron" | "boron" |
| "carbon" | "carbon" |
| "helium" | "helium" |
| "hydrogen" | "hydrogen" |
| "lithium" | "lithium" |
| "nitrogen" | "nitrogen" |
| "oxygen" | "oxygen" |
+-------------+-----------------+
This example introduces a few important aspects of Rego.
First, the rule defines a set document where the contents are defined by the variable name. We know this rule defines a set document because the head only includes a key. All rules have the following form (where key, value, and body are all optional):
<name> <key>? <value>? <body>?
For a more formal definition of the rule syntax, see the Policy Reference document.
Second, the sites[_].servers[_].hostname fragment selects the hostname attribute from all of the objects in the servers collection. From reading the fragment in isolation we cannot tell whether the fragment refers to arrays or objects. We only know that it refers to a collection of values.
Third, the name := sites[_].servers[_].hostname expression binds the value of the hostname attribute to the variable name, which is also declared in the head of the rule.
Generating Objects
Rules that define objects are very similar to rules that define sets.
apps_by_hostname[hostname] := app if {
    some i
    server := sites[_].servers[_]
    hostname := server.hostname
    apps[i].servers[_] == server.name
    app := apps[i].name
}
The rule above defines an object that maps hostnames to app names. The main difference between this rule and one which defines a set is the rule head: in addition to declaring a key, the rule head also declares a value for the document.
The result:
apps_by_hostname["helium"]
"web"
Using the (future) keyword if is optional here.
The same rule can be defined as follows:
apps_by_hostname[hostname] := app {
    some i
    server := sites[_].servers[_]
    hostname := server.hostname
    apps[i].servers[_] == server.name
    app := apps[i].name
}
Incremental Definitions
A rule may be defined multiple times with the same name. When a rule is defined this way, we refer to the rule definition as incremental because each definition is additive. The document produced by incrementally defined rules is the union of the documents produced by each individual rule.
For example, we can write a rule that abstracts over our servers and containers data as instances:
instances contains instance if {
    server := sites[_].servers[_]
    instance := {"address": server.hostname, "name": server.name}
}

instances contains instance if {
    container := containers[_]
    instance := {"address": container.ipaddress, "name": container.name}
}
If the head of the rule is the same, we can chain multiple rule bodies together to obtain the same result. We don’t recommend using this form anymore.
instances contains instance if {
    server := sites[_].servers[_]
    instance := {"address": server.hostname, "name": server.name}
} {
    container := containers[_]
    instance := {"address": container.ipaddress, "name": container.name}
}
An incrementally defined rule can be intuitively understood as <rule-1> OR <rule-2> OR ... OR <rule-N>.
The result:
instances[x]
+-----------------------------------------------+-----------------------------------------------+
| x | instances[x] |
+-----------------------------------------------+-----------------------------------------------+
| {"address":"10.0.0.1","name":"big_stallman"} | {"address":"10.0.0.1","name":"big_stallman"} |
| {"address":"10.0.0.2","name":"cranky_euclid"} | {"address":"10.0.0.2","name":"cranky_euclid"} |
| {"address":"beryllium","name":"web-1000"} | {"address":"beryllium","name":"web-1000"} |
| {"address":"boron","name":"web-1001"} | {"address":"boron","name":"web-1001"} |
| {"address":"carbon","name":"db-1000"} | {"address":"carbon","name":"db-1000"} |
| {"address":"helium","name":"web-1"} | {"address":"helium","name":"web-1"} |
| {"address":"hydrogen","name":"web-0"} | {"address":"hydrogen","name":"web-0"} |
| {"address":"lithium","name":"db-0"} | {"address":"lithium","name":"db-0"} |
| {"address":"nitrogen","name":"web-dev"} | {"address":"nitrogen","name":"web-dev"} |
| {"address":"oxygen","name":"db-dev"} | {"address":"oxygen","name":"db-dev"} |
+-----------------------------------------------+-----------------------------------------------+
Note that the (future) keywords contains and if are optional here.
If future keywords are not available to you, you can define the same rule as follows:
instances[instance] {
    server := sites[_].servers[_]
    instance := {"address": server.hostname, "name": server.name}
}

instances[instance] {
    container := containers[_]
    instance := {"address": container.ipaddress, "name": container.name}
}
Complete Definitions
In addition to rules that partially define sets and objects, Rego also supports so-called complete definitions of any type of document. Rules provide a complete definition by omitting the key in the head. Complete definitions are commonly used for constants:
pi := 3.14159
Documents produced by rules with complete definitions can only have one value at a time. If evaluation produces multiple values for the same document, an error will be returned.
For example:
# Define user "bob" for test input.
user := "bob"
# Define two sets of users: power users and restricted users. Accidentally
# include "bob" in both.
power_users := {"alice", "bob", "fred"}
restricted_users := {"bob", "kim"}
# Power users get 32GB memory.
max_memory := 32 if power_users[user]
# Restricted users get 4GB memory.
max_memory := 4 if restricted_users[user]
Error:
1 error occurred: module.rego:16: eval_conflict_error: complete rules must not produce multiple outputs
OPA returns an error in this case because the rule definitions are in conflict. The value produced by max_memory cannot be 32 and 4 at the same time.
The documents produced by rules with complete definitions may still be undefined:
max_memory with user as "johnson"
undefined decision
In some cases, having an undefined result for a document is not desirable. In those cases, policies can use the Default Keyword to provide a fallback value.
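Sketching what that fallback looks like for the max_memory example above (the fallback value 2 is illustrative):

```rego
# With a default, users matching neither rule get a fallback
# instead of an undefined decision.
default max_memory := 2

max_memory := 32 if power_users[user]
max_memory := 4 if restricted_users[user]
```

With this default in place, evaluating max_memory with user as "johnson" yields 2 rather than undefined.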
Note that the (future) keyword if is optional here.
If future keywords are not available to you, you can define complete rules like this:
max_memory := 32 { power_users[user] }
max_memory := 4 { restricted_users[user] }
Rule Heads containing References
As a shorthand for defining nested rule structures, it’s valid to use references as rule heads:
fruit.apple.seeds = 12
fruit.orange.color = "orange"
This module defines two complete rules, data.example.fruit.apple.seeds and data.example.fruit.orange.color:
data.example
{
  "fruit": {
    "apple": {
      "seeds": 12
    },
    "orange": {
      "color": "orange"
    }
  }
}
Variables in Rule Head References
Any term, except the very first, in a rule head’s reference can be a variable. These variables can be assigned within the rule, just as for any other partial rule, to dynamically construct a nested collection of objects.
Example
Input:
{
  "users": [
    {
      "id": "alice",
      "role": "employee",
      "country": "USA"
    },
    {
      "id": "bob",
      "role": "customer",
      "country": "USA"
    },
    {
      "id": "dora",
      "role": "admin",
      "country": "Sweden"
    }
  ],
  "admins": [
    {
      "id": "charlie"
    }
  ]
}
Module:
package example
import future.keywords
# A partial object rule that converts a list of users to a mapping by "role" and then "id".
users_by_role[role][id] := user if {
    some user in input.users
    id := user.id
    role := user.role
}

# Partial rule with an explicit "admin" key override
users_by_role.admin[id] := user if {
    some user in input.admins
    id := user.id
}

# Leaf entries can be partial sets
users_by_country[country] contains user.id if {
    some user in input.users
    country := user.country
}
Output:
{
  "users_by_country": {
    "Sweden": [
      "dora"
    ],
    "USA": [
      "alice",
      "bob"
    ]
  },
  "users_by_role": {
    "admin": {
      "charlie": {
        "id": "charlie"
      },
      "dora": {
        "country": "Sweden",
        "id": "dora",
        "role": "admin"
      }
    },
    "customer": {
      "bob": {
        "country": "USA",
        "id": "bob",
        "role": "customer"
      }
    },
    "employee": {
      "alice": {
        "country": "USA",
        "id": "alice",
        "role": "employee"
      }
    }
  }
}
Conflicts
The first variable declared in a rule head’s reference divides the reference into a leading constant portion and a trailing dynamic portion. Other rules are allowed to overlap with the dynamic portion (dynamic extent) without causing a compile-time conflict.
package example
# R1
p[x].r := y {
    x := "q"
    y := 1
}
# R2
p.q.r := 2
Error:
1 error occurred: module.rego:10: eval_conflict_error: object keys must be unique
In the above example, rule R2 overlaps with the dynamic portion of rule R1’s reference ([x].r), which is allowed at compile-time, as these rules aren’t guaranteed to produce conflicting output. However, as R1 defines x as "q" and y as 1, a conflict will be reported at evaluation-time.
Conflicts are detected at compile-time, where possible, between rules even if they are within the dynamic extent of another rule.
package example
# R1
p[x].r := y {
    x := "foo"
    y := 1
}
# R2
p.q.r := 2
# R3
p.q.r.s := 3
Error:
1 error occurred: module.rego:10: rego_type_error: rule data.example.p.q.r conflicts with [data.example.p.q.r.s]
Above, R2 and R3 are within the dynamic extent of R1, but are in conflict with each other, which is detected at compile-time.
Rules are not allowed to overlap with object values of other rules.
package example
# R1
p.q.r := {"s": 1}
# R2
p[x].r.t := 2 {
    x := "q"
}
Error:
1 error occurred: module.rego:4: eval_conflict_error: object keys must be unique
In the above example, R1 is within the dynamic extent of R2 and a conflict cannot be detected at compile-time. However, at evaluation-time R2 will attempt to inject a value under key t in an object value defined by R1. This is a conflict, as rules are not allowed to modify or replace values defined by other rules.
We won’t get a conflict if we update the policy to the following:
package example
# R1
p.q.r.s := 1
# R2
p[x].r.t := 2 {
    x := "q"
}
As R1 is now instead defining a value within the dynamic extent of R2’s reference, which is allowed:
{
  "p": {
    "q": {
      "r": {
        "s": 1,
        "t": 2
      }
    }
  }
}
Functions
Rego supports user-defined functions that can be called with the same semantics as Built-in Functions. They have access to both the data Document and the input Document.
For example, the following function will return the result of trimming the spaces from a string and then splitting it by periods.
trim_and_split(s) := x if {
    t := trim(s, " ")
    x := split(t, ".")
}
trim_and_split(" foo.bar ")
[
  "foo",
  "bar"
]
Note that the (future) keyword if is optional here.
If future keywords are not available to you, you can define the same function as follows:
trim_and_split(s) := x {
    t := trim(s, " ")
    x := split(t, ".")
}
Functions may have an arbitrary number of inputs, but exactly one output. Function arguments may be any kind of term. For example, suppose we have the following function:
foo([x, {"bar": y}]) := z if {
    z := {x: y}
}
The following calls would produce the logical mappings given:
| Call | x | y |
| --- | --- | --- |
| z := foo(a) | a[0] | a[1].bar |
| z := foo(["5", {"bar": "hello"}]) | "5" | "hello" |
| z := foo(["5", {"bar": [1, 2, 3, ["foo", "bar"]]}]) | "5" | [1, 2, 3, ["foo", "bar"]] |
If you need multiple outputs, write your functions so that the output is an array, object or set containing your results. If the output term is omitted, it is equivalent to having the output term be the literal true. Furthermore, if can be used to write shorter definitions. That is, the function declarations below are equivalent:
f(x) { x == "foo" }
f(x) if { x == "foo" }
f(x) if x == "foo"
f(x) := true { x == "foo" }
f(x) := true if { x == "foo" }
f(x) := true if x == "foo"
The outputs of user functions have some additional limitations, namely that they must resolve to a single value. If you write a function that has multiple possible bindings for an output variable, you will get a conflict error:
p(x) := y if {
y := x[_]
}
p([1, 2, 3])
1 error occurred: module.rego:4: eval_conflict_error: functions must not produce multiple outputs for same inputs
It is possible in Rego to define a function more than once, to achieve a conditional selection of which definition to execute. In other words, functions can be defined incrementally:
q(1, x) := y if {
y := x
}
q(2, x) := y if {
y := x*4
}
q(1, 2)
2
q(2, 2)
8
A given function call will execute all functions that match the signature given. If a call matches multiple functions, they must produce the same output, or else a conflict error will occur:
r(1, x) := y if {
y := x
}
r(x, 2) := y if {
y := x*4
}
r(1, 2)
1 error occurred: module.rego:4: eval_conflict_error: functions must not produce multiple outputs for same inputs
On the other hand, if a call matches no functions, then the result is undefined.
s(x, 2) := y if {
y := x * 4
}
s(5, 2)
20
s(5, 3)
undefined decision
Function overloading
Rego does not currently support the overloading of functions by the number of parameters. If two function definitions are given with the same function name but different numbers of parameters, a compile-time type error is generated.
r(x) := result if {
result := 2*x
}
r(x, y) := result if {
result := 2*x + 3*y
}
1 error occurred: module.rego:4: rego_type_error: conflicting rules data.example.r found
The error can be avoided by using different function names.
r_1(x) := result if {
result := 2*x
}
r_2(x, y) := result if {
result := 2*x + 3*y
}
[
r_1(10),
r_2(10, 1)
]
[
20,
23
]
In the unusual case that it is critical to use the same name, the function could be made to take the list of parameters as a single array. However, this approach is not generally recommended because it sacrifices some helpful compile-time checking and can be quite error-prone.
r(params) := result if {
count(params) == 1
result := 2*params[0]
}
r(params) := result if {
count(params) == 2
result := 2*params[0] + 3*params[1]
}
[
r([10]),
r([10, 1])
]
[
20,
23
]
Negation
To generate the content of a Virtual Document, OPA attempts to bind variables in the body of the rule such that all expressions in the rule evaluate to True.
This generates the correct result when the expressions represent assertions about what states should exist in the data stored in OPA. In some cases, you want to express that certain states should not exist in the data stored in OPA. In these cases, negation must be used.
For safety, a variable appearing in a negated expression must also appear in another non-negated equality expression in the rule.
OPA will reorder expressions to ensure that negated expressions are evaluated after other non-negated expressions with the same variables. OPA will reject rules containing negated expressions that do not meet the safety criteria described above.
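As a sketch of the safety requirement (rule and input names here are hypothetical), a variable used only inside a negated expression is rejected, while binding it first makes the rule safe:

```rego
package example

import future.keywords.if
import future.keywords.in

# Unsafe: x appears only in a negated expression. If uncommented,
# the compiler rejects this rule with "var x is unsafe".
# bad if {
#     not input.denied[x]
# }

# Safe: x is first bound by a non-negated expression,
# then used inside the negation.
ok if {
    some x in ["alice", "bob"]
    not input.denied[x]
}
```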
The simplest use of negation involves only scalar values or variables and is equivalent to complementing the operator:
t if {
greeting := "hello"
not greeting == "goodbye"
}
The result:
t
true
Negation is required to check whether some value does not exist in a collection. That is, complementing the operator in an expression such as p[_] == "foo" yields p[_] != "foo". However, this is not equivalent to not p["foo"].
For example, we can write a rule that defines a document containing names of apps not deployed on the "prod"
site:
prod_servers contains name if {
some site in sites
site.name == "prod"
some server in site.servers
name := server.name
}
apps_in_prod contains name if {
some site in sites
some app in apps
name := app.name
some server in app.servers
prod_servers[server]
}
apps_not_in_prod contains name if {
some app in apps
name := app.name
not apps_in_prod[name]
}
The result:
apps_not_in_prod[name]
+-----------+------------------------+
| name | apps_not_in_prod[name] |
+-----------+------------------------+
| "mongodb" | "mongodb" |
+-----------+------------------------+
Universal Quantification (FOR ALL)
Rego allows for several ways to express universal quantification.
For example, imagine you want to express a policy that says (in English):
There must be no apps named "bitcoin-miner".
The most expressive way to state this in Rego is using the every
keyword:
import future.keywords.every
no_bitcoin_miners_using_every if {
every app in apps {
app.name != "bitcoin-miner"
}
}
Variables in Rego are existentially quantified by default: when you write
array := ["one", "two", "three"]; array[i] == "three"
The query will be satisfied if there is an i such that the query's expressions are simultaneously satisfied.
+-----------------------+---+
| array | i |
+-----------------------+---+
| ["one","two","three"] | 2 |
+-----------------------+---+
Therefore, there are other ways to express the desired policy.
For this policy, you can also define a rule that finds if there exists a bitcoin-mining app (which is easy using the some keyword), and then use negation to check that there is NO bitcoin-mining app. Technically, you're using two negations and an existential quantifier, which is logically the same as a universal quantifier.
For example:
no_bitcoin_miners_using_negation if not any_bitcoin_miners
any_bitcoin_miners if {
some app in apps
app.name == "bitcoin-miner"
}
no_bitcoin_miners_using_negation with apps as [{"name": "web"}]
true
no_bitcoin_miners_using_negation with apps as [{"name": "bitcoin-miner"}, {"name": "web"}]
undefined decision
A common mistake is to try encoding the policy with a rule named no_bitcoin_miners
like so:
no_bitcoin_miners if {
app := apps[_]
app.name != "bitcoin-miner" # THIS IS NOT CORRECT.
}
It becomes clear that this is incorrect when you use the some
keyword, because the rule is true whenever there is SOME app that is not a
bitcoin-miner:
no_bitcoin_miners if {
some app in apps
app.name != "bitcoin-miner"
}
You can confirm this by querying the rule:
no_bitcoin_miners with apps as [{"name": "bitcoin-miner"}, {"name": "web"}]
true
The reason the rule is incorrect is that variables in Rego are existentially
quantified. This means that rule bodies and queries express FOR ANY and not FOR
ALL. To express FOR ALL in Rego complement the logic in the rule body (e.g.,
!=
becomes ==
) and then complement the check using negation (e.g.,
no_bitcoin_miners
becomes not any_bitcoin_miners
).
Alternatively, we can implement the same kind of logic inside a single rule using Comprehensions.
no_bitcoin_miners_using_comprehension if {
bitcoin_miners := {app | some app in apps; app.name == "bitcoin-miner"}
count(bitcoin_miners) == 0
}
Modules
In Rego, policies are defined inside modules. Modules consist of:
- exactly one Package declaration
- zero or more Import statements
- zero or more Rule definitions
Modules are typically represented in Unicode text and encoded in UTF-8.
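For illustration, a minimal module showing each of these parts might look like the following (the package name and rules are hypothetical):

```rego
package example.authz        # exactly one package declaration

import future.keywords.if    # zero or more import statements

default allow := false       # zero or more rule definitions

allow if input.user == "root"
```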
Comments
Comments begin with the #
character and continue until the end of the line.
Packages
Packages group the rules defined in one or more modules into a particular namespace. Because rules are namespaced they can be safely shared across projects.
Modules contributing to the same package do not have to be located in the same directory.
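For example, two files in different directories may both contribute rules to the same package (file paths and rules here are hypothetical):

```rego
# file: policies/allow.rego
package acme.authz

import future.keywords.if

allow if input.user == "alice"
```

```rego
# file: extra/deny.rego -- different directory, same package
package acme.authz

import future.keywords.if

deny if input.user == "mallory"
```

Queries against data.acme.authz see the union of rules from both files.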
The rules defined in a module are automatically exported. That is, they can be queried under OPA’s Data API provided the appropriate package is given. For example, given the following module:
package opa.examples
pi := 3.14159
The pi
document can be queried via the Data API:
GET https://example.com/v1/data/opa/examples/pi HTTP/1.1
Valid package names are variables or references that only contain string operands. For example, these are all valid package names:
package foo
package foo.bar
package foo.bar.baz
package foo["bar.baz"].qux
These are invalid package names:
package 1foo # not a variable
package foo[1].bar # contains non-string operand
For more details see the language Grammar.
Imports
Import statements declare dependencies that modules have on documents defined outside the package. By importing a document, the identifiers exported by that document can be referenced within the current module.
All modules contain implicit statements which import the data
and input
documents.
Modules use the same syntax to declare dependencies on Base and Virtual Documents.
package opa.examples
import future.keywords # uses 'in' and 'contains' and 'if'
import data.servers
http_servers contains server if {
some server in servers
"http" in server.protocols
}
Similarly, modules can declare dependencies on query arguments by specifying an import path that starts with input
.
package opa.examples
import future.keywords
import input.user
import input.method
# allow alice to perform any operation.
allow if user == "alice"
# allow bob to perform read-only operations.
allow if {
user == "bob"
method == "GET"
}
# allows users assigned a "dev" role to perform read-only operations.
allow if {
method == "GET"
input.user in data.roles["dev"]
}
# allows user catherine access on Saturday and Sunday
allow if {
user == "catherine"
day := time.weekday(time.now_ns())
day in ["Saturday", "Sunday"]
}
Imports can include an optional as
keyword to handle namespacing issues:
package opa.examples
import future.keywords
import data.servers as my_servers
http_servers contains server if {
some server in my_servers
"http" in server.protocols
}
Future Keywords
To ensure backwards-compatibility, new keywords (like every
) are introduced slowly.
In the first stage, users can opt-in to using the new keywords via a special import:
import future.keywords introduces all future keywords, and import future.keywords.x only introduces the x keyword – see below for all known future keywords.
At some point in the future, the keyword will become standard, and the import will become a no-op that can safely be removed. This should give all users ample time to update their policies, so that the new keyword will not cause clashes with existing variable names.
Note that some future keyword imports have consequences on pretty-printing:
If contains
or if
are imported, the pretty-printer will use them as applicable
when formatting the modules.
This is the list of all future keywords known to OPA:
future.keywords.in
More expressive membership and existential quantification keyword:
deny {
some x in input.roles # iteration
x == "denylisted-role"
}
deny {
"denylisted-role" in input.roles # membership check
}
in
was introduced in v0.34.0.
See the keywords docs for details.
future.keywords.every
Expressive universal quantification keyword:
allowed := {"customer", "admin"}
allow {
every role in input.roles {
role.name in allowed
}
}
There is no need to also import future.keywords.in
, that is implied by importing future.keywords.every
.
every
was introduced in v0.38.0.
See Every Keyword for details.
future.keywords.if
This keyword allows more expressive rule heads:
deny if input.token != "secret"
if
was introduced in v0.42.0.
future.keywords.contains
This keyword allows more expressive rule heads for partial set rules:
deny contains msg { msg := "forbidden" }
contains
was introduced in v0.42.0.
Some Keyword
The some
keyword allows queries to explicitly declare local variables. Use the
some
keyword in rules that contain unification statements or references with
variable operands if variables contained in those statements are not
declared using :=
.
Statement | Example | Variables |
---|---|---|
Unification | input.a = [["b", x], [y, "c"]] | x and y |
Reference with variable operands | data.foo[i].bar[j] | i and j |
For example, the following rule generates tuples of array indices for servers in the “west” region that contain “db” in their name. The first element in the tuple is the site index and the second element is the server index.
tuples contains [i, j] if {
some i, j
sites[i].region == "west"
server := sites[i].servers[j] # note: 'server' is local because it's declared with :=
contains(server.name, "db")
}
If we query for the tuples we get two results:
[
[
1,
2
],
[
2,
1
]
]
Since we have declared i
, j
, and server
to be local, we can introduce
rules in the same package without affecting the result above:
# Define a rule called 'i'
i := 1
If we had not declared i
with the some
keyword, introducing the i
rule
above would have changed the result of tuples
because the i
symbol in the
body would capture the global value. Try removing some i, j
and see what happens!
The some
keyword is not required but it’s recommended to avoid situations like
the one above where introduction of a rule inside a package could change
behaviour of other rules.
For using the some
keyword with iteration, see
the documentation of the in
operator.
Every Keyword
import future.keywords.every
names_with_dev if {
some site in sites
site.name == "dev"
every server in site.servers {
endswith(server.name, "-dev")
}
}
names_with_dev
true
The every
keyword takes an (optional) key argument, a value argument, a domain, and a
block of further queries, its “body”.
The keyword is used to explicitly assert that its body is true for any element in the domain. It will iterate over the domain, bind its variables, and check that the body holds for those bindings. If one of the bindings does not yield a successful evaluation of the body, the overall statement is undefined.
If the domain is empty, the overall statement is true.
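A small sketch of the empty-domain case (the rule name is hypothetical):

```rego
import future.keywords.every
import future.keywords.if

empty_domain if {
    every x in [] { x == "unreachable" }  # vacuously true: no elements to check
}
```

Querying empty_domain yields true, since there is no element for which the body could fail.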
Evaluating every
does not introduce new bindings into the rule evaluation.
Used with a key argument, the index, or property name (for objects), comes into the scope of the body evaluation:
import future.keywords.every
array_domain if {
every i, x in [1, 2, 3] { x-i == 1 } # array domain
}
object_domain if {
every k, v in {"foo": "bar", "fox": "baz" } { # object domain
startswith(k, "f")
startswith(v, "b")
}
}
set_domain if {
every x in {1, 2, 3} { x != 4 } # set domain
}
{
"array_domain": true,
"object_domain": true,
"set_domain": true
}
Semantically, every x in xs { p(x) }
is equivalent to, but shorter than, a “not-some-not”
construct using a helper rule:
import future.keywords.every
xs := [2, 2, 4, 8]
larger_than_one(x) := x > 1
rule_every if {
every x in xs { larger_than_one(x) }
}
not_less_or_equal_one if not lte_one
lte_one if {
some x in xs
not larger_than_one(x)
}
{
"not_less_or_equal_one": true,
"rule_every": true,
"xs": [
2,
2,
4,
8
]
}
Negating every
is forbidden. If you desire to express not every x in xs { p(x) }
please use some x in xs; not p(x)
instead.
With Keyword
The with
keyword allows queries to programmatically specify values nested
under the input Document or the
data Document, or built-in functions.
For example, given the simple authorization policy in the Imports section, we can write a query that checks whether a particular request would be allowed:
allow with input as {"user": "alice", "method": "POST"}
true
allow with input as {"user": "bob", "method": "GET"}
true
not allow with input as {"user": "bob", "method": "DELETE"}
true
allow with input as {"user": "charlie", "method": "GET"} with data.roles as {"dev": ["charlie"]}
true
not allow with input as {"user": "charlie", "method": "GET"} with data.roles as {"dev": ["bob"]}
true
allow with input as {"user": "catherine", "method": "GET"}
with data.roles as {"dev": ["bob"]}
with time.weekday as "Sunday"
true
The with
keyword acts as a modifier on expressions. A single expression is
allowed to have zero or more with
modifiers. The with
keyword has the
following syntax:
<expr> with <target-1> as <value-1> [with <target-2> as <value-2> [...]]
The <target>
s must be references to values in the input document (or the input
document itself) or data document, or references to functions (built-in or not).
The with
keyword only affects the attached expression. Subsequent expressions
will see the unmodified value. The exception to this rule is when multiple
with
keywords are in-scope like below:
inner := [x, y] if {
x := input.foo
y := input.bar
}
middle := [a, b] if {
a := inner with input.foo as 100
b := input
}
outer := result if {
result := middle with input as {"foo": 200, "bar": 300}
}
When <target>
is a reference to a function, like http.send
, then
its <value>
can be any of the following:
- a value:
with http.send as {"body": {"success": true }}
- a reference to another function:
with http.send as mock_http_send
- a reference to another (possibly custom) built-in function:
with custom_builtin as less_strict_custom_builtin
- a reference to a rule that will be used as the value.
When the replacement value is a function, its arity needs to match the replaced function's arity, and the types must be compatible.
Replacement functions can call the function they’re replacing without causing recursion. See the following example:
f(x) := count(x)
mock_count(x) := 0 if "x" in x
mock_count(x) := count(x) if not "x" in x
f([1, 2, 3]) with count as mock_count
3
f(["x", "y", "z"]) with count as mock_count
0
Each replacement function evaluation will start a new scope: it’s valid to use
with <builtin1> as ...
in the body of the replacement function – for example:
f(x) := count(x) if {
rule_using_concat with concat as "foo,bar"
}
mock_count(x) := 0 if "x" in x
mock_count(x) := count(x) if not "x" in x
rule_using_concat if {
concat(",", input.x) == "foo,bar"
}
f(["x", "y", "z"]) with count as mock_count with input.x as ["baz"]
0
Note that function replacement via with
does not affect the evaluation of
the function arguments: if input.x
is undefined, the replacement of concat
does not change the result of the evaluation:
count(input.x) with count as 3 with input.x as ["x"]
3
count(input.x) with count as 3 with input as {}
undefined decision
Default Keyword
The default
keyword allows policies to define a default value for documents
produced by rules with Complete Definitions. The
default value is used when all of the rules sharing the same name are undefined.
For example:
default allow := false
allow if {
input.user == "bob"
input.method == "GET"
}
allow if input.user == "alice"
When the allow
document is queried, the return value will be either true
or false
.
{
"user": "bob",
"method": "POST"
}
false
Without the default definition, the allow
document would simply be undefined for the same input.
When the default
keyword is used, the rule syntax is restricted to:
default <name> := <term>
The term may be any scalar, composite, or comprehension value but it may not be a variable or reference. If the value is a composite then it may not contain variables or references. Comprehensions however may, as the result of a comprehension is never undefined.
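To illustrate these restrictions (names here are hypothetical):

```rego
import future.keywords.in

default max_memory := 100                   # scalar: allowed
default tags := {"env": "dev"}              # composite without variables: allowed
default admins := {u | some u in ["root"]}  # comprehension: allowed

# Not allowed -- the default value may not be a variable or reference.
# If uncommented, this fails to compile:
# default limit := input.limit
```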
Similar to rules, the default
keyword can be applied to functions as well.
For example:
default clamp_positive(_) := 0
clamp_positive(x) = x {
x > 0
}
When clamp_positive
is queried, the return value will be either the argument provided to the function or 0
.
The value of a default
function follows the same conditions as that of a default
rule. In addition, a default
function satisfies the following properties:
- same arity as other functions with the same name
- arguments should only be plain variables, i.e., no composite values
- argument names should not be repeated
Else Keyword
The else
keyword is a basic control flow construct that gives you control
over rule evaluation order.
Rules grouped together with the else
keyword are evaluated until a match is
found. Once a match is found, rule evaluation does not proceed to rules further
in the chain.
The else
keyword is useful if you are porting policies into Rego from an
order-sensitive system like IPTables.
authorize := "allow" if {
input.user == "superuser" # allow 'superuser' to perform any operation.
} else := "deny" if {
input.path[0] == "admin" # disallow 'admin' operations...
input.source_network == "external" # from external networks.
} # ... more rules
In the example below, evaluation stops immediately after the first rule even though the input matches the second rule as well.
{
"path": [
"admin",
"exec_shell"
],
"source_network": "external",
"user": "superuser"
}
"allow"
In the next example, the input matches the second rule (but not the first) so evaluation continues to the second rule before stopping.
{
"path": [
"admin",
"exec_shell"
],
"source_network": "external",
"user": "alice"
}
"deny"
The else
keyword may be used repeatedly on the same rule and there is no
limit imposed on the number of else
clauses on a rule.
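As a sketch, a rule chaining several else clauses (the field names are hypothetical):

```rego
import future.keywords.if

classify := "low" if {
    input.score < 10
} else := "medium" if {
    input.score < 100
} else := "high"
```

Evaluation tries each clause in order and stops at the first whose body succeeds; the bare trailing else acts as a catch-all.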
Operators
Membership and iteration: in
The membership operator in
lets you check if an element is part of a collection (array, set, or object). It always evaluates to true
or false
:
import future.keywords.in
p := [x, y, z] if {
x := 3 in [1, 2, 3] # array
y := 3 in {1, 2, 3} # set
z := 3 in {"foo": 1, "bar": 3} # object
}
{
"p": [
true,
true,
true
]
}
When providing two arguments on the left-hand side of the in
operator,
and an object or an array on the right-hand side, the first argument is
taken to be the key (object) or index (array), respectively:
import future.keywords.in
p := [x, y] if {
x := "foo", "bar" in {"foo": "bar"} # key, val with object
y := 2, "baz" in ["foo", "bar", "baz"] # key, val with array
}
{
"p": [
true,
true
]
}
Note that in list contexts, like set or array definitions and function arguments, parentheses are required to use the form with two left-hand side arguments – compare:
import future.keywords.in
p := x if {
x := { 0, 2 in [2] }
}
q := x if {
x := { (0, 2 in [2]) }
}
w := x if {
x := g((0, 2 in [2]))
}
z := x if {
x := f(0, 2 in [2])
}
f(x, y) := sprintf("two function arguments: %v, %v", [x, y])
g(x) := sprintf("one function argument: %v", [x])
{
"p": [
true,
0
],
"q": [
true
],
"w": "one function argument: true",
"z": "two function arguments: 0, true"
}
Combined with not
, the operator can be handy when asserting that an element is not
member of an array:
import future.keywords.in
deny if not "admin" in input.user.roles
test_deny {
deny with input.user.roles as ["operator", "user"]
}
{
"test_deny": true
}
Note that expressions using the in
operator always return true
or false
, even
when called in non-collection arguments:
import future.keywords.in
q := x if {
x := 3 in "three"
}
{
"q": false
}
Using the some
variant, it can be used to introduce new variables based on a collections’ items:
import future.keywords.in
p[x] {
some x in ["a", "r", "r", "a", "y"]
}
q[x] {
some x in {"s", "e", "t"}
}
r[x] {
some x in {"foo": "bar", "baz": "quz"}
}
{
"p": [
"a",
"r",
"y"
],
"q": [
"e",
"s",
"t"
],
"r": [
"bar",
"quz"
]
}
Furthermore, passing a second argument allows you to work with object keys and array indices:
import future.keywords.in
p[x] {
some x, "r" in ["a", "r", "r", "a", "y"] # key variable, value constant
}
q[x] = y if {
some x, y in ["a", "r", "r", "a", "y"] # both variables
}
r[y] = x if {
some x, y in {"foo": "bar", "baz": "quz"}
}
{
"p": [
1,
2
],
"q": {
"0": "a",
"1": "r",
"2": "r",
"3": "a",
"4": "y"
},
"r": {
"bar": "foo",
"quz": "baz"
}
}
Any argument to the some
variant can be a composite, non-ground value:
import future.keywords.in
p[x] = y if {
some x, {"foo": y} in [{"foo": 100}, {"bar": 200}]
}
p[x] = y if {
some {"bar": x}, {"foo": y} in {{"bar": "b"}: {"foo": "f"}}
}
{
"p": {
"0": 100,
"b": "f"
}
}
Equality: Assignment, Comparison, and Unification
Rego supports three kinds of equality: assignment (:=
), comparison (==
), and unification =
. We recommend using assignment (:=
) and comparison (==
) whenever possible for policies that are easier to read and write.
Assignment :=
The assignment operator (:=
) is used to assign values to variables. Variables assigned inside a rule are locally scoped to that rule and shadow global variables.
x := 100
p if {
x := 1 # declare local variable 'x' and assign value 1
x != 100 # true because 'x' refers to local variable
}
Assigned variables are not allowed to appear before the assignment in the query. For example, the following policy will not compile:
p if {
x != 100
x := 1 # error because x appears earlier in the query.
}
q if {
x := 1
x := 2 # error because x is assigned twice.
}
2 errors occurred:
module.rego:6: rego_compile_error: var x referenced above
module.rego:11: rego_compile_error: var x assigned above
A simple form of destructuring can be used to unpack values from arrays and assign them to variables:
address := ["3 Abbey Road", "NW8 9AY", "London", "England"]
in_london if {
[_, _, city, country] := address
city == "London"
country == "England"
}
{
"address": [
"3 Abbey Road",
"NW8 9AY",
"London",
"England"
],
"in_london": true
}
Comparison ==
Comparison checks if two values are equal within a rule. If the left or right hand side contains a variable that has not been assigned a value, the compiler throws an error.
p if {
x := 100
x == 100 # true because x refers to the local variable
}
{
"p": true
}
y := 100
q if {
y == 100 # true because y refers to the global variable
}
{
"q": true,
"y": 100
}
r if {
z == 100 # compiler error because z has not been assigned a value
}
1 error occurred: module.rego:5: rego_unsafe_var_error: var z is unsafe
Unification =
Unification (=
) combines assignment and comparison. Rego will assign variables to values that make the comparison true. Unification lets you ask for values for variables that make an expression true.
# Find values for x and y that make the equality true
[x, "world"] = ["hello", y]
+---------+---------+
| x | y |
+---------+---------+
| "hello" | "world" |
+---------+---------+
sites[i].servers[j].name = apps[k].servers[m]
+---+---+---+---+
| i | j | k | m |
+---+---+---+---+
| 0 | 0 | 0 | 0 |
| 0 | 1 | 0 | 1 |
| 0 | 2 | 1 | 0 |
| 1 | 0 | 0 | 2 |
| 1 | 1 | 0 | 3 |
| 1 | 2 | 1 | 1 |
| 2 | 0 | 0 | 4 |
| 2 | 1 | 2 | 0 |
+---+---+---+---+
As opposed to when assignment (:=
) is used, the order of expressions in a rule does not affect the document’s content.
s if {
x > y
y = 41
x = 42
}
Best Practices for Equality
Here is a comparison of the three forms of equality.
Equality | Applicable | Compiler Errors | Use Case |
---|---|---|---|
:= | Everywhere | Var already assigned | Assign variable |
== | Everywhere | Var not assigned | Compare values |
= | Everywhere | Values cannot be computed | Express query |
Best practice is to use assignment :=
and comparison ==
wherever possible. The additional compiler checks help avoid errors when writing policy, and the additional syntax helps make the intent clearer when reading policy.
Under the hood :=
and ==
are syntactic sugar for =
, local variable creation, and additional compiler checks.
Comparison Operators
The following comparison operators are supported:
a == b # `a` is equal to `b`.
a != b # `a` is not equal to `b`.
a < b # `a` is less than `b`.
a <= b # `a` is less than or equal to `b`.
a > b # `a` is greater than `b`.
a >= b # `a` is greater than or equal to `b`.
None of these operators bind variables contained in the expression. As a result, if either operand is a variable, the variable must appear in another expression in the same rule that would cause the variable to be bound, i.e., an equality expression or the target position of a built-in function.
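For example (hypothetical rule), a comparison alone cannot make its operands safe:

```rego
import future.keywords.if

# Unsafe: y is never bound by any other expression. If uncommented,
# the compiler rejects this with "var y is unsafe".
# p if { y > 10 }

# Safe: y is bound by an assignment before the comparison.
p if {
    y := count(input.items)
    y > 10
}
```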
Built-in Functions
In some cases, rules must perform simple arithmetic, aggregation, and so on. Rego provides a number of built-in functions (or “built-ins”) for performing these tasks.
Built-ins can be easily recognized by their syntax. All built-ins have the following form:
<name>(<arg-1>, <arg-2>, ..., <arg-n>)
Built-ins usually take one or more input values and produce one output value. Unless stated otherwise, all built-ins accept values or variables as output arguments.
If a built-in function is invoked with a variable as input, the variable must be safe, i.e., it must be assigned elsewhere in the query.
Built-ins can include “.” characters in the name. This allows them to be
namespaced. If you are adding custom built-ins to OPA, consider namespacing
them to avoid naming conflicts, e.g., org.example.special_func
.
See the Policy Reference document for details on each built-in function.
Errors
By default, built-in function calls that encounter runtime errors evaluate to
undefined (which can usually be treated as false
) and do not halt policy
evaluation. This ensures that built-in functions can be called with invalid
inputs without causing the entire policy to stop evaluating.
In most cases, policies do not have to implement any kind of error handling logic. If error handling is required, the built-in function call can be negated to test for undefined. For example:
allow if {
io.jwt.verify_hs256(input.token, "secret")
[_, payload, _] := io.jwt.decode(input.token)
payload.role == "admin"
}
reason contains "invalid JWT supplied as input" if {
not io.jwt.decode(input.token)
}
{
"token": "a poorly formatted token"
}
{
"reason": [
"invalid JWT supplied as input"
]
}
If you wish to disable this behaviour and instead have built-in function call errors treated as exceptions that halt policy evaluation, enable “strict built-in errors” in the caller:
API | Flag |
---|---|
POST v1/data (HTTP) | strict-builtin-errors query parameter |
GET v1/data (HTTP) | strict-builtin-errors query parameter |
opa eval (CLI) | --strict-builtin-errors |
opa run (REPL) | > strict-builtin-errors |
rego Go module | rego.StrictBuiltinErrors(true) option |
Wasm | Not Available |
Example Data
The rules below define the content of documents describing a simplistic deployment environment. These documents are referenced in other sections above.
sites := [
{
"region": "east",
"name": "prod",
"servers": [
{
"name": "web-0",
"hostname": "hydrogen"
},
{
"name": "web-1",
"hostname": "helium"
},
{
"name": "db-0",
"hostname": "lithium"
}
]
},
{
"region": "west",
"name": "smoke",
"servers": [
{
"name": "web-1000",
"hostname": "beryllium"
},
{
"name": "web-1001",
"hostname": "boron"
},
{
"name": "db-1000",
"hostname": "carbon"
}
]
},
{
"region": "west",
"name": "dev",
"servers": [
{
"name": "web-dev",
"hostname": "nitrogen"
},
{
"name": "db-dev",
"hostname": "oxygen"
}
]
}
]
apps := [
{
"name": "web",
"servers": ["web-0", "web-1", "web-1000", "web-1001", "web-dev"]
},
{
"name": "mysql",
"servers": ["db-0", "db-1000"]
},
{
"name": "mongodb",
"servers": ["db-dev"]
}
]
containers := [
{
"image": "redis",
"ipaddress": "10.0.0.1",
"name": "big_stallman"
},
{
"image": "nginx",
"ipaddress": "10.0.0.2",
"name": "cranky_euclid"
}
]
Metadata
The package and individual rules in a module can be annotated with a rich set of metadata.
# METADATA
# title: My rule
# description: A rule that determines if x is allowed.
# authors:
# - John Doe <john@example.com>
# entrypoint: true
allow {
...
}
Annotations are grouped within a metadata block, and must be specified as YAML within a comment block that must start with # METADATA
.
Also, every line in the comment block containing the annotation must start at Column 1 in the module/file, or otherwise, they will be ignored.
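For example (illustrative module), only the block whose lines all start at column 1 is treated as an annotation; the indented block remains an ordinary comment:

```rego
package example

import future.keywords.if

# METADATA
# title: Recognized (each line starts at column 1)
allow if input.admin

  # METADATA
  # title: Ignored (indented, so not treated as an annotation)
deny if input.banned
```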
Annotations
Name | Type | Description |
---|---|---|
scope | string; one of package , rule , document , subpackages | The scope on which the schemas annotation is applied. Read more here. |
title | string | A human-readable name for the annotation target. Read more here. |
description | string | A description of the annotation target. Read more here. |
related_resources | list of URLs | A list of URLs pointing to related resources/documentation. Read more here. |
authors | list of strings | A list of authors for the annotation target. Read more here. |
organizations | list of strings | A list of organizations related to the annotation target. Read more here. |
schemas | list of object | A list of associations between value paths and schema definitions. Read more here. |
entrypoint | boolean | Whether or not the annotation target is to be used as a policy entrypoint. Read more here. |
custom | mapping of arbitrary data | A custom mapping of named parameters holding arbitrary data. Read more here. |
Scope
Annotations can be defined at the rule or package level. The scope annotation in a metadata block determines how that metadata block will be applied. If the scope field is omitted, it defaults to the scope for the statement that immediately follows the annotation. The scope values that are currently supported are:
- rule - applies to the individual rule statement (within the same file). Default when the metadata block precedes a rule.
- document - applies to all of the rules with the same name in the same package (across multiple files).
- package - applies to all of the rules in the package (across multiple files). Default when the metadata block precedes a package.
- subpackages - applies to all of the rules in the package and all subpackages (recursively, across multiple files).
Since the document scope annotation applies to all rules with the same name in the same package, and the package and subpackages scope annotations apply to all packages with a matching path, metadata blocks with these scopes are applied across all files with applicable package and rule paths. As there is no ordering across files in the same package, the document, package, and subpackages scope annotations can only be specified once per path.
The document scope annotation can be applied to any rule in the set (i.e., ordering does not matter).
Example
# METADATA
# scope: document
# description: A set of rules that determines if x is allowed.
# METADATA
# title: Allow Ones
allow {
x == 1
}
# METADATA
# title: Allow Twos
allow {
x == 2
}
Title
The title annotation is a string value giving a human-readable name to the annotation target.
Example
# METADATA
# title: Allow Ones
allow {
x == 1
}
# METADATA
# title: Allow Twos
allow {
x == 2
}
Description
The description annotation is a string value describing the annotation target, such as its purpose.
Example
# METADATA
# description: |
# The 'allow' rule...
# Is about allowing things.
# Not denying them.
allow {
...
}
Related Resources
The related_resources annotation is a list of related-resource entries, where each entry links to some related external resource, such as RFCs and other reading material.
A related-resource entry can either be an object or a short-form string holding a single URL.
Object Related-resource Format
When a related-resource entry is presented as an object, it has two fields:
- ref: a URL pointing to the resource (required).
- description: a text describing the resource.
String Related-resource Format
When a related-resource entry is presented as a string, it needs to be a valid URL.
Examples
# METADATA
# related_resources:
# - ref: https://example.com
# ...
# - ref: https://example.com/foo
# description: A text describing this resource
allow {
...
}
# METADATA
# related_resources:
# - https://example.com/foo
# ...
# - https://example.com/bar
allow {
...
}
Authors
The authors annotation is a list of author entries, where each entry denotes an author.
An author entry can either be an object or a short-form string.
Object Author Format
When an author entry is presented as an object, it has two fields:
- name: the name of the author
- email: the email of the author
At least one of the above fields is required for a valid author entry.
String Author Format
When an author entry is presented as a string, it has the format { name } [ "<" email ">" ], where the name of the author is a sequence of whitespace-separated words. Optionally, the last word may represent an email, if enclosed in <>.
Examples
# METADATA
# authors:
# - name: John Doe
# ...
# - name: Jane Doe
# email: jane@example.com
allow {
...
}
# METADATA
# authors:
# - John Doe
# ...
# - Jane Doe <jane@example.com>
allow {
...
}
Organizations
The organizations annotation is a list of string values representing the organizations associated with the annotation target.
Example
# METADATA
# organizations:
# - Acme Corp.
# ...
# - Tyrell Corp.
allow {
...
}
Schemas
The schemas annotation is a list of key-value pairs, associating schemas to data values.
In-depth information on this topic can be found here.
Schema Reference Format
Schema files can be referenced by path, where each path starts with the schema namespace, and trailing components specify the path of the schema file (sans file ending) relative to the root directory specified by the --schema flag on applicable commands.
If the --schema flag is not present, referenced schemas are ignored during type checking.
# METADATA
# schemas:
# - input: schema.input
# - data.acl: schema["acl-schema"]
allow {
access := data.acl["alice"]
access[_] == input.operation
}
Inlined Schema Format
Schema definitions can be inlined by specifying the schema structure as a YAML or JSON map.
Inlined schemas are always used to inform type checking for the eval, check, and test commands, in contrast to by-reference schema annotations, which require the --schema flag to be present in order to be evaluated.
# METADATA
# schemas:
# - input.x: {type: number}
allow {
input.x == 42
}
Entrypoint
The entrypoint annotation is a boolean used to mark rules and packages that should be used as entrypoints for a policy. This value is false by default, and can only be used at rule or package scope.
The build and eval CLI commands will automatically pick up annotated entrypoints; you do not have to specify them with --entrypoint.
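For example, a package's main decision rule can be annotated as an entrypoint directly in its metadata block (a minimal sketch; the rule name and logic are illustrative):

```rego
package example

# METADATA
# title: Main authorization decision
# entrypoint: true
allow {
    input.user == "admin"
}
```

When this module is passed to opa build or opa eval, the annotated rule is picked up as an entrypoint without it having to be named on the command line.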
Custom
The custom annotation is a mapping of user-defined data, mapping string keys to arbitrarily typed values.
Example
# METADATA
# custom:
# my_int: 42
# my_string: Some text
# my_bool: true
# my_list:
# - a
# - b
# my_map:
# a: 1
# b: 2
allow {
...
}
Accessing annotations
Rego
In the example below, you can see how to access an annotation from within a policy.
Given the input:
{
"number": 11,
"subject": {
"name": "John doe",
"role": "customer"
}
}
The following policy
package example
# METADATA
# title: Deny invalid numbers
# description: Numbers may not be higher than 5
# custom:
# severity: MEDIUM
output := decision {
input.number > 5
annotation := rego.metadata.rule()
decision := {
"severity": annotation.custom.severity,
"message": annotation.description,
}
}
will output
{
"output": {
"message": "Numbers may not be higher than 5",
"severity": "MEDIUM"
}
}
More examples and information on this can be found in the Rego policy reference.
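In addition to rego.metadata.rule(), which returns the metadata block of the rule it is called from, the built-in rego.metadata.chain() returns the chain of metadata blocks in scope, ordered from the calling rule outward to its ancestor packages. A minimal sketch (the rule and message are illustrative):

```rego
package example

# METADATA
# title: Deny large numbers
deny[msg] {
    input.number > 5
    # chain[0] is this rule's own metadata entry; later entries
    # belong to the package and ancestor packages, if annotated
    chain := rego.metadata.chain()
    msg := sprintf("denied by rule %q", [chain[0].annotations.title])
}
```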
Inspect command
Annotations can be listed through the inspect command by using the -a flag:
opa inspect -a
Go API
The ast.AnnotationSet is a collection of all ast.Annotations declared in a set of modules.
An ast.AnnotationSet can be created from a slice of compiled modules:
var modules []*ast.Module
...
as, err := ast.BuildAnnotationSet(modules)
if err != nil {
// Handle error.
}
or can be retrieved from an ast.Compiler instance:
var modules []*ast.Module
...
compiler := ast.NewCompiler()
compiler.Compile(modules)
as := compiler.GetAnnotationSet()
The ast.AnnotationSet can be flattened into a slice of ast.AnnotationsRef, which is a complete, sorted list of all annotations, grouped by the path and location of their targeted package or rule.
flattened := as.Flatten()
for _, entry := range flattened {
fmt.Printf("%v at %v has annotations %v\n",
entry.Path,
entry.Location,
entry.Annotations)
}
// Output:
// data.foo at foo.rego:5 has annotations {"scope":"subpackages","organizations":["Acme Corp."]}
// data.foo.bar at mod:3 has annotations {"scope":"package","description":"A couple of useful rules"}
// data.foo.bar.p at mod:7 has annotations {"scope":"rule","title":"My Rule P"}
//
// For modules:
// # METADATA
// # scope: subpackages
// # organizations:
// # - Acme Corp.
// package foo
// ---
// # METADATA
// # description: A couple of useful rules
// package foo.bar
//
// # METADATA
// # title: My Rule P
// p := 7
Given an ast.Rule, the ast.AnnotationSet can return the chain of annotations declared for that rule and its path ancestry.
The returned slice is ordered starting with the annotations for the rule, going outward to the farthest node with declared annotations in the rule’s path ancestry.
var rule *ast.Rule
...
chain := ast.Chain(rule)
for _, link := range chain {
fmt.Printf("link at %v has annotations %v\n",
link.Path,
link.Annotations)
}
// Output:
// data.foo.bar.p at mod:7 has annotations {"scope":"rule","title":"My Rule P"}
// data.foo.bar at mod:3 has annotations {"scope":"package","description":"A couple of useful rules"}
// data.foo at foo.rego:5 has annotations {"scope":"subpackages","organizations":["Acme Corp."]}
//
// For modules:
// # METADATA
// # scope: subpackages
// # organizations:
// # - Acme Corp.
// package foo
// ---
// # METADATA
// # description: A couple of useful rules
// package foo.bar
//
// # METADATA
// # title: My Rule P
// p := 7
Schema
Using schemas to enhance the Rego type checker
You can provide one or more input schema files and/or data schema files to opa eval to improve static type checking and get more precise error reports as you develop Rego code.
The -s flag can be used to upload schemas for input and data documents in JSON Schema format. You can either load a single JSON schema file for the input document or a directory of schema files.
-s, --schema string set schema file path or directory path
Passing a single file with -s
When a single file is passed, it is a schema file associated with the input document globally. This means that for all rules in all packages, the input has a type derived from that schema. There is no constraint on the name of the file; it could be anything.
Example:
opa eval data.envoy.authz.allow -i opa-schema-examples/envoy/input.json -d opa-schema-examples/envoy/policy.rego -s opa-schema-examples/envoy/schemas/my-schema.json
Passing a directory with -s
When a directory path is passed, annotations will be used in the code to indicate what expressions map to what schemas (see below). Both input schema files and data schema files can be provided in the same directory, with different names. The directory of schemas may have any sub-directories. Notice that when a directory is passed the input document does not have a schema associated with it globally. This must also be indicated via an annotation.
Example:
opa eval data.kubernetes.admission -i opa-schema-examples/kubernetes/input.json -d opa-schema-examples/kubernetes/policy.rego -s opa-schema-examples/kubernetes/schemas
Schemas can also be provided for policy and data files loaded via opa eval --bundle.
Example:
opa eval data.kubernetes.admission -i opa-schema-examples/kubernetes/input.json -b opa-schema-examples/bundle.tar.gz -s opa-schema-examples/kubernetes/schemas
Samples provided at: https://github.com/aavarghese/opa-schema-examples/
Usage scenario with a single schema file
Consider the following Rego code, which assumes as input a Kubernetes admission review. For resources that are Pods, it checks that the image name starts with a specific prefix.
pod.rego
package kubernetes.admission
deny[msg] {
input.request.kind.kinds == "Pod"
image := input.request.object.spec.containers[_].image
not startswith(image, "hooli.com/")
msg := sprintf("image '%v' comes from untrusted registry", [image])
}
Notice that this code has a typo in it: input.request.kind.kinds is undefined and should have been input.request.kind.kind.
Consider the following input document:
input.json
{
"kind": "AdmissionReview",
"request": {
"kind": {
"kind": "Pod",
"version": "v1"
},
"object": {
"metadata": {
"name": "myapp"
},
"spec": {
"containers": [
{
"image": "nginx",
"name": "nginx-frontend"
},
{
"image": "mysql",
"name": "mysql-backend"
}
]
}
}
}
}
Clearly there are 2 image names that are in violation of the policy. However, when we evaluate the erroneous Rego code against this input we obtain:
% opa eval data.kubernetes.admission --format pretty -i opa-schema-examples/kubernetes/input.json -d opa-schema-examples/kubernetes/policy.rego
[]
The empty value returned is indistinguishable from a situation where the input did not violate the policy. This error is therefore causing the policy not to catch violating inputs appropriately.
If we fix the Rego code and change input.request.kind.kinds to input.request.kind.kind, then we obtain the expected result:
[
"image 'nginx' comes from untrusted registry",
"image 'mysql' comes from untrusted registry"
]
With this feature, it is possible to pass a schema to opa eval, written in JSON Schema. Consider the admission review schema provided at:
https://github.com/aavarghese/opa-schema-examples/blob/main/kubernetes/schemas/input.json
We can pass this schema to the evaluator as follows:
% opa eval data.kubernetes.admission --format pretty -i opa-schema-examples/kubernetes/input.json -d opa-schema-examples/kubernetes/policy.rego -s opa-schema-examples/kubernetes/schemas/input.json
With the erroneous Rego code, we now obtain the following type error:
1 error occurred: ../../aavarghese/opa-schema-examples/kubernetes/policy.rego:5: rego_type_error: undefined ref: input.request.kind.kinds
input.request.kind.kinds
^
have: "kinds"
want (one of): ["kind" "version"]
This indicates the error to the Rego developer right away, without the need to observe the results of runs on actual data, thereby improving productivity.
Schema annotations
When passing a directory of schemas to opa eval, schema annotations come in handy to associate a Rego expression with a corresponding schema within a given scope:
# METADATA
# schemas:
# - <path-to-value>:<path-to-schema>
# ...
# - <path-to-value>:<path-to-schema>
allow {
...
}
See the annotations documentation for general information relating to annotations.
The schemas field specifies an array associating schemas to data values. Paths must start with input or data (i.e., they must be fully-qualified).
The type checker derives a Rego Object type for the schema and an appropriate entry is added to the type environment before type checking the rule. This entry is removed upon exit from the rule.
Example:
Consider the following Rego code which checks if an operation is allowed by a user, given an ACL data document:
package policy
import data.acl
default allow := false
# METADATA
# schemas:
# - input: schema.input
# - data.acl: schema["acl-schema"]
allow {
access := data.acl["alice"]
access[_] == input.operation
}
allow {
access := data.acl["bob"]
access[_] == input.operation
}
Consider a directory named mySchemasDir with the following structure, provided via opa eval --schema opa-schema-examples/mySchemasDir:
mySchemasDir/
├── input.json
└── acl-schema.json
For actual code samples, see https://github.com/aavarghese/opa-schema-examples/tree/main/acl.
In the first allow rule above, the input document has the schema input.json, and data.acl has the schema acl-schema.json. Note that we use the relative path inside the mySchemasDir directory to identify a schema, omit the .json suffix, and use the global variable schema to stand for the top-level of the directory.
Schemas in annotations are proper Rego references. So schema.input is also valid, but schema.acl-schema is not.
If we had the expression data.acl.foo in this rule, it would result in a type error because the schema contained in acl-schema.json only defines object properties "alice" and "bob" in the ACL data document.
On the other hand, this annotation does not constrain other paths under data. What it says is that we know the type of data.acl statically, but not that of other paths. So, for example, data.foo is not a type error and gets assigned the type Any.
Note that the second allow rule doesn’t have a METADATA comment block attached to it, and hence will not be type checked with any schemas.
Schema annotations can also be added to policy files that are part of a bundle loaded via opa eval --bundle along with the --schema parameter, for type checking a set of *.rego policy files.
The scope of the schema annotation can be controlled through the scope annotation. In case of overlap, schema annotations override each other as follows:
- rule overrides document
- document overrides package
- package overrides subpackages
The following sections explain how the different scopes affect schema annotation overriding for type checking.
Rule and Document Scopes
In the example above, the second rule does not include an annotation so type checking of the second rule would not take schemas into account. To enable type checking on the second (or other rules in the same file) we could specify the annotation multiple times:
# METADATA
# scope: rule
# schemas:
# - input: schema.input
# - data.acl: schema["acl-schema"]
allow {
access := data.acl["alice"]
access[_] == input.operation
}
# METADATA
# scope: rule
# schemas:
# - input: schema.input
# - data.acl: schema["acl-schema"]
allow {
access := data.acl["bob"]
access[_] == input.operation
}
This is obviously redundant and error-prone. To avoid this problem, we can define the annotation once on a rule with scope document:
# METADATA
# scope: document
# schemas:
# - input: schema.input
# - data.acl: schema["acl-schema"]
allow {
access := data.acl["alice"]
access[_] == input.operation
}
allow {
access := data.acl["bob"]
access[_] == input.operation
}
In this example, the annotation with document scope has the same effect as the two rule scoped annotations in the previous example.
Package and Subpackage Scopes
Annotations can be defined at the package level and then applied to all rules within the package:
# METADATA
# scope: package
# schemas:
# - input: schema.input
# - data.acl: schema["acl-schema"]
package example
allow {
access := data.acl["alice"]
access[_] == input.operation
}
allow {
access := data.acl["bob"]
access[_] == input.operation
}
package scoped schema annotations are useful when all rules in the same package operate on the same input structure. In some cases, when policies are organized into many sub-packages, it is useful to declare schemas recursively for them using the subpackages scope. For example:
# METADATA
# scope: subpackages
# schemas:
# - input: schema.input
package kubernetes.admission
This snippet would declare the top-level schema for input for the kubernetes.admission package as well as all subpackages. If admission control rules were defined inside packages like kubernetes.admission.workloads.pods, they would be able to pick up that one schema declaration.
Overriding
JSON Schemas are often incomplete specifications of the format of data. For example, a Kubernetes Admission Review resource has a field object which can contain any other Kubernetes resource. A schema for Admission Review has a generic type object for that field, with no further specification. To allow more precise type checking in such cases, we support overriding existing schemas.
Consider the following example:
package kubernetes.admission
# METADATA
# scope: rule
# schemas:
# - input: schema.input
# - input.request.object: schema.kubernetes.pod
deny[msg] {
input.request.kind.kind == "Pod"
image := input.request.object.spec.containers[_].image
not startswith(image, "hooli.com/")
msg := sprintf("image '%v' comes from untrusted registry", [image])
}
In this example, the input is associated with an Admission Review schema, and furthermore input.request.object is set to have the schema of a Kubernetes Pod. In effect, the second schema annotation overrides the first one. Overriding is a schema transformation feature and combines existing schemas. In this case, we are combining the Admission Review schema with that of a Pod.
Notice that the order of schema annotations matters for overriding to work correctly.
Given a schema annotation, if a prefix of the path already has a type in the environment, then the annotation has the effect of merging and overriding the existing type with the type derived from the schema. In the example above, the prefix input already has a type in the type environment, so the second annotation overrides this existing type. Overriding affects the type of the longest prefix that already has a type. If no such prefix exists, the new path and type are added to the type environment for the scope of the rule.
In general, consider the existing Rego type:
object{a: object{b: object{c: C, d: D, e: E}}}
If we override this type with the following type (derived from a schema annotation of the form a.b.e: schema-for-E1):
object{a: object{b: object{e: E1}}}
It results in the following type:
object{a: object{b: object{c: C, d: D, e: E1}}}
Notice that b still has its fields c and d, so overriding has a merging effect as well. Moreover, the type of the expression a.b.e is now E1 instead of E.
We can also use overriding to add new paths to an existing type, so if we override the initial type with the following:
object{a: object{b: object{f: F}}}
we obtain the following type:
object{a: object{b: object{c: C, d: D, e: E, f: F}}}
We use schemas to enhance the type checking capability of OPA, and not to validate the input and data documents against desired schemas. This burden is still on the user and care must be taken when using overriding to ensure that the input and data provided are sensible and validated against the transformed schemas.
Multiple input schemas
It is sometimes useful to have different input schemas for different rules in the same package. This can be achieved as illustrated by the following example:
package policy
import data.acl
default allow := false
# METADATA
# scope: rule
# schemas:
# - input: schema["input"]
# - data.acl: schema["acl-schema"]
allow {
access := data.acl[input.user]
access[_] == input.operation
}
# METADATA for whocan rule
# scope: rule
# schemas:
# - input: schema["whocan-input-schema"]
# - data.acl: schema["acl-schema"]
whocan[user] {
access := acl[user]
access[_] == input.operation
}
The directory that is passed to opa eval is the following:
mySchemasDir/
├── input.json
└── acl-schema.json
└── whocan-input-schema.json
In this example, we associate the schema input.json with the input document in the rule allow, and the schema whocan-input-schema.json with the input document for the rule whocan.
Translating schemas to Rego types and dynamicity
Rego has a gradual type system meaning that types can be partially known statically. For example, an object could have certain fields whose types are known and others that are unknown statically. OPA type checks what it knows statically and leaves the unknown parts to be type checked at runtime. An OPA object type has two parts: the static part with the type information known statically, and a dynamic part, which can be nil (meaning everything is known statically) or non-nil and indicating what is unknown.
When we derive a type from a schema, we try to match what is known and unknown in the schema. For example, an object that has no specified fields becomes the Rego type Object{Any: Any}. However, currently additionalProperties and additionalItems are ignored. When a schema is fully specified, we derive a type with its dynamic part set to nil, meaning that we take a strict interpretation in order to get the most out of static type checking. This is the case even if additionalProperties is set to true in the schema. In the future, we will take this feature into account when deriving Rego types.
When overriding existing types, the dynamicity of the overridden prefix is preserved.
Supporting JSON Schema composition keywords
JSON Schema provides keywords such as anyOf and allOf to structure a complex schema. For anyOf, at least one of the subschemas must be true, and for allOf, all subschemas must be true. The type checker is able to identify such keywords and derive a more robust Rego type for more complex schemas.
anyOf
Specifically, anyOf acts as a Rego Or type, where at least one (and possibly more than one) of the subschemas is true. Consider the following Rego and schema file containing anyOf:
policy-anyOf.rego
package kubernetes.admission
# METADATA
# scope: rule
# schemas:
# - input: schema["input-anyOf"]
deny {
input.request.servers.versions == "Pod"
}
input-anyOf.json
{
"$schema": "http://json-schema.org/draft-07/schema",
"type": "object",
"properties": {
"kind": {"type": "string"},
"request": {
"type": "object",
"anyOf": [
{
"properties": {
"kind": {
"type": "object",
"properties": {
"kind": {"type": "string"},
"version": {"type": "string" }
}
}
}
},
{
"properties": {
"server": {
"type": "object",
"properties": {
"accessNum": {"type": "integer"},
"version": {"type": "string"}
}
}
}
}
]
}
}
}
We can see that request is an object with two options, as indicated by the choices under anyOf:
- contains property kind, which has properties kind and version
- contains property server, which has properties accessNum and version
The type checker finds the first error in the Rego code, suggesting that servers should be either kind or server.
input.request.servers.versions
^
have: "servers"
want (one of): ["kind" "server"]
Once this is fixed, the second typo is highlighted, prompting the user to choose between accessNum and version.
input.request.server.versions
^
have: "versions"
want (one of): ["accessNum" "version"]
allOf
Specifically, the allOf keyword implies that all conditions under allOf within a schema must be met by the given data. allOf is implemented by merging the types from all of the JSON subschemas listed under allOf before parsing the result to convert it to a Rego type. Merging of the JSON subschemas essentially combines the passed-in subschemas based on what types they contain. Consider the following Rego and schema file containing allOf:
policy-allOf.rego
package kubernetes.admission
# METADATA
# scope: rule
# schemas:
# - input: schema["input-allof"]
deny {
input.request.servers.versions == "Pod"
}
input-allof.json
{
"$schema": "http://json-schema.org/draft-07/schema",
"type": "object",
"properties": {
"kind": {"type": "string"},
"request": {
"type": "object",
"allOf": [
{
"properties": {
"kind": {
"type": "object",
"properties": {
"kind": {"type": "string"},
"version": {"type": "string" }
}
}
}
},
{
"properties": {
"server": {
"type": "object",
"properties": {
"accessNum": {"type": "integer"},
"version": {"type": "string"}
}
}
}
}
]
}
}
}
We can see that request is an object with properties as indicated by the elements listed under allOf:
- contains property kind, which has properties kind and version
- contains property server, which has properties accessNum and version
The type checker finds the first error in the Rego code, suggesting that servers should be server.
input.request.servers.versions
^
have: "servers"
want (one of): ["kind" "server"]
Once this is fixed, the second typo is highlighted, informing the user that versions should be one of accessNum or version.
input.request.server.versions
^
have: "versions"
want (one of): ["accessNum" "version"]
Because the properties kind, version, and accessNum are all under the allOf keyword, the resulting schema that the given data must be validated against will contain the types contained in these properties’ children (string and integer).
Remote references in JSON schemas
It is valid for JSON schemas to reference other JSON schemas via URLs, like this:
{
"description": "Pod is a collection of containers that can run on a host.",
"type": "object",
"properties": {
"metadata": {
"$ref": "https://kubernetesjsonschema.dev/v1.14.0/_definitions.json#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta",
"description": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata"
}
}
}
OPA’s type checker will fetch these remote references by default.
To control the remote hosts schemas will be fetched from, pass a capabilities file to your opa eval or opa check call.
Starting from the capabilities.json of your OPA version (which can be found in the repository), add an allow_net key to it: its values are the IP addresses or host names that OPA is supposed to connect to for retrieving remote schemas.
{
"builtins": [ ... ],
"allow_net": [ "kubernetesjsonschema.dev" ]
}
Note
To forbid all network access in schema checking, set allow_net to [].
Host names are checked against the list as-is, so adding 127.0.0.1 to allow_net and referencing a schema from http://localhost/ will fail.
Metaschemas for different JSON Schema draft versions are not subject to this constraint, as they are already provided by OPA’s schema checker without requiring network access. These are:
http://json-schema.org/draft-04/schema
http://json-schema.org/draft-06/schema
http://json-schema.org/draft-07/schema
Limitations
Currently this feature admits schemas written in JSON Schema but does not support every feature available in this format. In particular the following features are not yet supported:
- additional properties for objects
- pattern properties for objects
- additional items for arrays
- contains for arrays
- oneOf, not
- enum
- if/then/else
A note of caution: overriding is a powerful capability that must be used carefully. For example, the user is allowed to write:
# METADATA
# scope: rule
# schemas:
# - data: schema["some-schema"]
In this case, we are overriding the root of all documents to have some schema. Since all Rego code lives under data
as virtual documents, this in practice renders all of them inaccessible (resulting in type errors). Similarly, assigning a schema to a package name is not a good idea and can cause problems. Care must also be taken when defining overrides so that the transformation of schemas is sensible and data can be validated against the transformed schema.
References
For more examples, please see https://github.com/aavarghese/opa-schema-examples
This contains samples for Envoy, Kubernetes, and Terraform including corresponding JSON Schemas.
For a reference on JSON Schema please see: http://json-schema.org/understanding-json-schema/reference/index.html
For a tool that generates JSON Schema from JSON samples, please see: https://jsonschema.net/home
Strict Mode
The Rego compiler supports strict mode, where additional constraints and safety checks are enforced during compilation.
Compiler rules that will be enforced by future versions of OPA, but will be a breaking change once introduced, are incubated in strict mode.
This creates an opportunity for users to verify that their policies are compatible with the next version of OPA before upgrading.
Compiler strict mode is supported by the check command, and can be enabled through the -S flag.
-S, --strict enable compiler strict mode
Strict Mode Constraints and Checks
Name | Description | Enforced by default in OPA version |
---|---|---|
Duplicate imports | Duplicate imports, where one import shadows another, are prohibited. | 1.0 |
Unused local assignments | Unused arguments or assignments local to a rule, function or comprehension are prohibited | 1.0 |
Unused imports | Unused imports are prohibited. | 1.0 |
input and data reserved keywords | input and data are reserved keywords, and may not be used as names for rules and variable assignment. | 1.0 |
Use of deprecated built-ins | Use of deprecated functions is prohibited, and these will be removed in OPA 1.0. Deprecated built-in functions: any , all , re_match , net.cidr_overlap , set_diff , cast_array , cast_set , cast_string , cast_boolean , cast_null , cast_object | 1.0 |
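As an illustration of one of these checks (hypothetical module), the following policy passes a regular opa check but is rejected by opa check -S because the import is never used:

```rego
package example

# This import is never referenced below; strict mode
# (opa check -S) reports it as an unused import.
import data.servers

allow {
    input.user == "admin"
}
```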
Ecosystem Projects
The 8 ecosystem projects related to this page can be found in the corresponding OPA Ecosystem section.