PIPE
Config path: /<database>/<schema>/pipe/<name>.yaml
Example:
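A minimal sketch of a pipe config using the parameters described below; all object names (table, stage, file format, SNS topic, integration) are placeholders:

```yaml
copy:
  table: raw_orders                   # target table for COPY INTO (placeholder name)
  stage: orders_stage                 # source stage
  path: /incoming/orders/             # path prefix inside the stage
  pattern: '.*[.]csv'                 # only pick up CSV files
  file_format: orders_csv_format      # referenced by name (FILE_FORMAT object)
  match_by_column_name: case_insensitive
  include_metadata:
    src_filename: METADATA$FILENAME   # store source file name in this column
  options:
    ON_ERROR: SKIP_FILE               # COPY option

auto_ingest: true
aws_sns_topic: arn:aws:sns:us-east-1:123456789012:orders_bucket_topic
error_integration: pipe_error_integration

comment: Loads order CSV files from S3 into raw_orders
```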
Parameters:

copy (dict)
  table (ident) - target table for COPY INTO
  stage (ident) - source stage for COPY INTO
  path (str) - path prefix for files in stage
  pattern (str) - regular expression to filter files in stage
  file_format (ident) - file format for files in stage
  match_by_column_name (str) - case_sensitive / case_insensitive
  include_metadata (dict)
    {key} (ident) - column name in target table
    {value} (ident) - column name in metadata, e.g. METADATA$FILENAME
  transform (dict)
    {key} (ident) - column name in target table
    {value} (str) - SQL expression to extract column value from source stage
  options (dict)
    {key} (ident) - COPY option name
    {value} (bool, float, int, list, str) - COPY option value
auto_ingest (bool) - enable AUTO_INGEST
aws_sns_topic (str) - SNS topic for S3 bucket
integration (ident) - notification integration name for Azure
error_integration (ident) - notification integration to monitor ingestion errors
comment (str)
file_format can only be specified by name, referencing a FILE_FORMAT object.
Please specify the names of INTEGRATION objects used by pipes in the owner_integration_usage parameter. Pipes may work without an explicit INTEGRATION USAGE grant to the OWNER role, but it is not guaranteed that Snowflake will keep it this way forever.
Re-creating pipes correctly is complicated. Make sure you read and fully understand the Snowflake documentation on recreating pipes. SnowDDL can only suggest or apply DDL queries to change pipes; it cannot pause, monitor, or refresh pipes for you.
Maintenance of active pipes implicitly incurs additional costs.
The notification integration mentioned in the error_integration parameter must additionally be specified in the owner_integration_usage parameter. Otherwise, the schema OWNER role will not be able to send notifications.
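For example, a sketch with placeholder names, assuming owner_integration_usage lives in the schema-level params file:

```yaml
# /<database>/<schema>/pipe/my_pipe.yaml (excerpt)
error_integration: pipe_error_integration

# /<database>/<schema>/params.yaml (schema-level config; location assumed)
owner_integration_usage:
  - pipe_error_integration
```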
In order for PIPE objects to operate properly, the following additional grants should be added to the OWNER role: