# TASK

Config path: `/<database>/<schema>/task/<name>.yaml`

Example:

```yaml
body: |-
  CALL test_procedure_1(1)

schedule: 5 MINUTE
warehouse: task_wh
```

```yaml
body: |-
  CALL test_procedure_1(2)

after:
  - test_task_1
warehouse: task_wh
```

## Schema

* **body** (str, required) - SQL statement to be executed by the task
* **schedule** (str) - schedule for periodically running the task
* **after** (list)
  * *{items}* (ident) - one or more predecessor tasks for the current task
* **finalize** (str) - name of the root task with which this finalizer task is associated
* **when** (str) - SQL expression returning `BOOLEAN`
* **warehouse** (ident) - warehouse used to execute task
* **user\_task\_managed\_initial\_warehouse\_size** (str) - initial warehouse size for serverless task execution
* **config** (str) - string representation of key-value pairs in JSON format, accessible by all tasks in the DAG
* **allow\_overlapping\_execution** (bool) - allow multiple instances of the task tree to run concurrently
* **session\_params** (dict)
  * *{key}* (ident) - session parameter name
  * *{value}* (bool, float, int, str) - session parameter value
* **user\_task\_timeout\_ms** (int) - time limit on a single run of the task before it times out
* **suspend\_task\_after\_num\_failures** (int) - number of consecutive failed task runs after which the current task is suspended automatically
* **error\_integration** (ident) - notification integration to monitor task errors
* **success\_integration** (ident) - notification integration to monitor task executions
* **log\_level** (str) - severity level of log messages ingested into the active event table
* **task\_auto\_retry\_attempts** (int) - number of automatic retry attempts for a failed task graph run
* **user\_task\_minimum\_trigger\_interval\_in\_seconds** (int) - minimum interval between consecutive runs of a triggered task
* **target\_completion\_interval** (str) - desired interval within which a serverless task run should complete
* **serverless\_task\_min\_statement\_size** (str) - minimum compute size for serverless task runs
* **serverless\_task\_max\_statement\_size** (str) - maximum compute size for serverless task runs
* **comment** (str) - comment for the task
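
A fuller hypothetical example combining several of the parameters above. All object names, the cron expression, and the stream name are illustrative placeholders; `schedule` accepts either an interval (e.g. `5 MINUTE`) or a `USING CRON` expression, as in Snowflake's `CREATE TASK`:

```yaml
body: |-
  CALL refresh_procedure()

schedule: USING CRON 0 6 * * MON-FRI UTC
when: SYSTEM$STREAM_HAS_DATA('my_stream')
warehouse: task_wh
config: '{"environment": "dev"}'
session_params:
  TIMEZONE: UTC
user_task_timeout_ms: 600000
suspend_task_after_num_failures: 3
comment: "Refresh task, runs on weekday mornings"
```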

## Usage notes

1. SnowDDL only creates tasks; they are initially suspended. You should execute `ALTER TASK ... RESUME` by other means to enable execution.
2. Tasks must be suspended via `ALTER TASK ... SUSPEND` before SnowDDL can alter them. Tasks are not suspended automatically.
3. A task is executed with the privileges of its owner, which is the `schema_owner` role. It has full access to all objects in the same schema, but no access to objects in other schemas. This behaviour may improve in the future.
4. The notification integration mentioned in the **error\_integration** parameter must additionally be listed in the [schema](https://docs.snowddl.com/basic/yaml-configs/schema) parameter **owner\_integration\_usage**. Otherwise, the schema owner role will not be able to send notifications.
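
For example, a task that reports errors through a notification integration needs both of the following (a sketch; `my_notify_int` and the procedure name are placeholders, and the second fragment belongs in the schema config described in the linked docs):

```yaml
# /<database>/<schema>/task/alert_task.yaml
body: |-
  CALL check_data_quality()

schedule: 60 MINUTE
warehouse: task_wh
error_integration: my_notify_int
```

```yaml
# schema config for the same schema
owner_integration_usage:
  - my_notify_int
```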

## Additional privileges

In order for `TASK` objects to operate properly, the following additional grants should be added to the OWNER role in the [schema config](https://docs.snowddl.com/basic/yaml-configs/schema):

* `owner_warehouse_usage` - list the warehouses used to execute tasks
* `owner_integration_usage` - if your tasks require integration objects to operate (e.g. via `PROCEDURE`), add the names of those integrations here
* `owner_account_grants` - Snowflake requires the `EXECUTE TASK` privilege to run tasks
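
Putting these together, a schema config supporting the task examples at the top of this page might look like this (a sketch; the warehouse and integration names are placeholders):

```yaml
owner_warehouse_usage:
  - task_wh
owner_integration_usage:
  - my_notify_int
owner_account_grants:
  - EXECUTE TASK
```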

## Links

* [CREATE TASK](https://docs.snowflake.com/en/sql-reference/sql/create-task.html)
* [ALTER TASK](https://docs.snowflake.com/en/sql-reference/sql/alter-task.html)
* [SHOW TASKS](https://docs.snowflake.com/en/sql-reference/sql/show-tasks.html)
* [Introduction to Tasks](https://docs.snowflake.com/en/user-guide/tasks-intro.html)
* [Parser & JSON Schema (GitHub)](https://github.com/littleK0i/SnowDDL/blob/master/snowddl/parser/task.py)
