Resolvers compare blueprints with the existing metadata of a Snowflake account and use the engine to generate DDL commands that apply changes. There is one resolver per object type.
All standard resolvers are located in the /resolver/ directory.


All resolvers are derived from the AbstractResolver class.
Resolvers for schema objects (TABLE, VIEW, etc.) are derived from AbstractSchemaObjectResolver, which adds logic for "sandbox" schemas and parallel metadata fetching.

Resolver workflow

Internally, each resolver goes through the following workflow:
  1. Get object blueprints (desired state).
  2. Load existing objects from Snowflake metadata (current state).
  3. Compare full names of blueprints vs. full names of existing objects:
    • "create" new objects;
    • "compare" existing objects;
    • "drop" existing objects without blueprints.
  4. Execute "create" / "compare" / "drop" operations in parallel using ThreadPoolExecutor.
  5. Update caches (if necessary).
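The name-comparison step (step 3) can be sketched as plain set arithmetic over object names; this is an illustrative sketch, not SnowDDL's actual internals, and the object names are made up:

```python
from concurrent.futures import ThreadPoolExecutor  # used by step 4 in real resolvers

def plan_operations(blueprints: dict, existing: dict):
    """Split full object names into create / compare / drop buckets
    by comparing desired state (blueprints) with current state."""
    to_create = blueprints.keys() - existing.keys()    # in blueprints only
    to_compare = blueprints.keys() & existing.keys()   # in both
    to_drop = existing.keys() - blueprints.keys()      # in Snowflake only

    return to_create, to_compare, to_drop

# Hypothetical desired vs. current state, keyed by full object name:
blueprints = {"DB1.SC1.NEW_TABLE": ..., "DB1.SC1.KEPT_TABLE": ...}
existing = {"DB1.SC1.KEPT_TABLE": ..., "DB1.SC1.OLD_TABLE": ...}

create, compare, drop = plan_operations(blueprints, existing)
```

Step 4 would then submit each bucket's operations to a ThreadPoolExecutor so independent objects are processed concurrently.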

Resolve result

Each object may "resolve" in one of the following ways:
  • CREATE - object was created, it did not exist before;
  • ALTER - existing object was updated;
  • DROP - existing object was dropped;
  • REPLACE - existing object was replaced entirely;
  • SKIP - object was not changed, it was skipped;
  • GRANT - grants were updated (used for various types of ROLES);
  • NOCHANGE - object was not changed, did not require any change;
  • ERROR - something went wrong while resolving this object, check logs;
  • UNSUPPORTED - object should be updated, but it is not possible due to lack of Snowflake support for such operation (e.g. converting TRANSIENT schema to normal schema is not possible without full data rewrite).
The resolve result for each object name is available in the .resolved_objects property.
Any exception raised for an object is available by object name in the .errors property.
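A sketch of how these outcomes might be inspected after a run; the member names follow the list above, but this enum definition and the sample data are assumptions, not SnowDDL's actual classes:

```python
from enum import Enum

class ResolveResult(Enum):  # assumed shape; SnowDDL defines its own enum
    CREATE = "create"
    ALTER = "alter"
    DROP = "drop"
    REPLACE = "replace"
    SKIP = "skip"
    GRANT = "grant"
    NOCHANGE = "nochange"
    ERROR = "error"
    UNSUPPORTED = "unsupported"

# Hypothetical contents of .resolved_objects and .errors after a run:
resolved_objects = {
    "DB1.SC1.ORDERS": ResolveResult.CREATE,
    "DB1.SC1.LEGACY": ResolveResult.ERROR,
}
errors = {"DB1.SC1.LEGACY": Exception("insufficient privileges")}

# Collect objects that failed, then look up their exceptions:
failed = [name for name, r in resolved_objects.items() if r is ResolveResult.ERROR]
```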

Methods (base)

  • __init__(engine: SnowDDLEngine) Initialize resolver with engine.
  • get_object_type() Abstract method. Return object type, which is processed by this resolver.
  • get_blueprints() Abstract method. Return blueprints to be processed by resolver. Normally it reads blueprints from config, but it may also generate blueprints on the fly based on some other blueprints. For example, "schema roles" are generated automatically based on schema blueprints.
  • get_existing_objects() Abstract method. Return a dict of objects currently existing in the Snowflake account. Normally this method executes SHOW ... metadata commands.
  • create_object(bp: AbstractBlueprint) Abstract method. Accept a blueprint. Create a new object which does not currently exist in Snowflake.
  • compare_object(bp: AbstractBlueprint, row: Dict) Abstract method. Accept a blueprint and the metadata of an existing object. Compare them and update or recreate the object. Alternatively, do nothing if the blueprint matches the existing metadata precisely.
  • drop_object(row: Dict) Abstract method. Accept the metadata of an existing object which has no corresponding blueprint. Drop this object.
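The abstract methods above can be sketched as a toy resolver. The stand-in base class and the WarehouseResolver below are illustrative assumptions; SnowDDL's real AbstractResolver has its own signatures and runs the workflow for you:

```python
from abc import ABC, abstractmethod

class AbstractResolver(ABC):  # stand-in for SnowDDL's real base class
    @abstractmethod
    def get_object_type(self): ...
    @abstractmethod
    def get_blueprints(self) -> dict: ...
    @abstractmethod
    def get_existing_objects(self) -> dict: ...
    @abstractmethod
    def create_object(self, bp): ...
    @abstractmethod
    def compare_object(self, bp, row: dict): ...
    @abstractmethod
    def drop_object(self, row: dict): ...

class WarehouseResolver(AbstractResolver):
    """Toy resolver: desired and current state are plain dicts of rows."""
    def __init__(self, config: dict, account: dict):
        self.config, self.account = config, account
        self.ddl = []  # generated DDL commands, for inspection

    def get_object_type(self):
        return "WAREHOUSE"

    def get_blueprints(self):
        return self.config

    def get_existing_objects(self):
        return self.account  # a real resolver would run SHOW WAREHOUSES

    def create_object(self, bp):
        self.ddl.append(f"CREATE WAREHOUSE {bp['name']}")

    def compare_object(self, bp, row):
        if bp != row:
            self.ddl.append(f"ALTER WAREHOUSE {bp['name']}")

    def drop_object(self, row):
        self.ddl.append(f"DROP WAREHOUSE {row['name']}")

# Drive the toy resolver through the create/drop buckets by hand:
config = {"WH1": {"name": "WH1"}}
account = {"WH2": {"name": "WH2"}}
resolver = WarehouseResolver(config, account)
for name in config.keys() - account.keys():
    resolver.create_object(config[name])
for name in account.keys() - config.keys():
    resolver.drop_object(account[name])
```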

Methods (schema objects)

  • get_existing_objects_in_schema(schema: dict) Abstract method. Used instead of get_existing_objects(). Accepts a dict describing a schema. Returns a dict in the same format as get_existing_objects().
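Fetching metadata per schema is what enables parallel loading. A minimal sketch of the expected return shape, assuming the schema dict and the SHOW-command rows look roughly like this (the exact row fields are an assumption):

```python
def existing_objects_in_schema(schema: dict, show_rows: list) -> dict:
    """Key SHOW-command rows by full object name, mirroring what
    get_existing_objects_in_schema() is expected to return."""
    prefix = f"{schema['database']}.{schema['schema']}"
    return {f"{prefix}.{row['name']}": row for row in show_rows}

# Hypothetical schema descriptor and rows from e.g. SHOW TABLES IN SCHEMA:
schema = {"database": "DB1", "schema": "SC1"}
rows = [{"name": "ORDERS"}, {"name": "CUSTOMERS"}]
objects = existing_objects_in_schema(schema, rows)
```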


Properties

  • .resolved_objects (dict) - resolve result for each processed object;
    • {key} (str) - full name of object;
    • {value} (ResolveResult) - enum value;
  • .errors (dict) - exception for each processed object;
    • {key} (str) - full name of object;
    • {value} (Exception) - exception thrown while processing object;