astro.sql.operators.cleanup

Module Contents

Functions

filter_for_temp_tables(task_outputs)

Classes

CleanupOperator

Clean up temporary tables at the end of a DAG run.

astro.sql.operators.cleanup.filter_for_temp_tables(task_outputs)
Parameters

task_outputs (List[Any]) –

Return type

List[astro.sql.table.Table]
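
Going by the class description below, this filter keeps only the Table objects that the SDK considers temporary. A minimal sketch, assuming Table exposes a temp flag for that check (an assumption, not documented on this page):

    from typing import Any, List

    from astro.sql.table import Table


    def filter_for_temp_tables(task_outputs: List[Any]) -> List[Table]:
        # Keep only Table objects; temp is assumed to be True for tables
        # without a user-supplied name or whose name starts with _tmp.
        return [t for t in task_outputs if isinstance(t, Table) and t.temp]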

class astro.sql.operators.cleanup.CleanupOperator(*, tables_to_cleanup=None, task_id='', retries=3, retry_delay=timedelta(seconds=10), run_sync_mode=False, **kwargs)

Bases: airflow.models.baseoperator.BaseOperator

Clean up temporary tables at the end of a DAG run. Temporary tables are the ones that are generated by the SDK (where you do not pass a name arg to Table) or the ones whose names start with _tmp.

By default, if no tables are passed, the task waits for all other tasks to finish before deleting all temporary tables.

If using a synchronous executor (e.g., SequentialExecutor or DebugExecutor), this task will initially fail on purpose so that the executor is unblocked and can run other tasks. Users may need to set custom values for retries and retry_delay when using one of these executors.
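
A minimal usage sketch (the aql.transform decorator and the connection id my_conn are illustrative assumptions; only CleanupOperator and its parameters are documented on this page):

    from datetime import datetime, timedelta

    from airflow import DAG

    from astro import sql as aql
    from astro.sql.operators.cleanup import CleanupOperator
    from astro.sql.table import Table

    with DAG("cleanup_example", start_date=datetime(2022, 1, 1), schedule_interval=None):

        @aql.transform
        def top_rows(source: Table):
            return "SELECT * FROM {{ source }} LIMIT 10"

        # No name is passed to Table for the output, so top_rows
        # produces a temporary table.
        top_rows(source=Table(name="my_table", conn_id="my_conn"))

        # Waits for every other task to finish, then drops all temporary
        # tables. retries/retry_delay are raised here for use with a
        # synchronous executor.
        CleanupOperator(task_id="cleanup", retries=5, retry_delay=timedelta(seconds=30))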

Parameters
  • tables_to_cleanup (Optional[List[astro.sql.table.Table]]) – List of tables to drop at the end of the DAG run

  • task_id (str) – Optional custom task id

  • retries (int) – The number of retries that should be performed before failing the task. Very relevant if using a synchronous executor. The default is 3.

  • retry_delay (datetime.timedelta) – Delay between running retries. Very relevant if using a synchronous executor. The default is 10s.

  • run_sync_mode (bool) – Whether to wait for the DAG to finish before cleaning up. Set to True to delete all temporary tables immediately instead of waiting for other tasks. Note that if you supply anything to tables_to_cleanup, this argument is ignored (see the example below).
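
For example (the table name and connection id below are illustrative):

    from astro.sql.operators.cleanup import CleanupOperator
    from astro.sql.table import Table

    # Drop only the listed tables; run_sync_mode is ignored when
    # tables_to_cleanup is supplied.
    CleanupOperator(
        task_id="cleanup_reports",
        tables_to_cleanup=[Table(name="_tmp_reports", conn_id="my_conn")],
    )

    # Or: delete all temporary tables immediately, without waiting for
    # the rest of the DAG to finish.
    CleanupOperator(task_id="cleanup_now", run_sync_mode=True)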

template_fields = ['tables_to_cleanup']

execute(context)
Parameters

context (airflow.utils.context.Context) –

Return type

None

drop_table(table)
Parameters

table (astro.sql.table.Table) –

Return type

None

_is_dag_running(task_instances)

Given a list of task instances, determine whether the DAG (minus the current cleanup task) is still running.

Parameters

task_instances (List[airflow.models.taskinstance.TaskInstance]) –

Returns

True if any task besides the current cleanup task is still running; False once all other tasks have completed

Return type

bool
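
A sketch of the check this implies (assumed, not the actual implementation): any non-cleanup task in a non-terminal state means the DAG is still running.

    from typing import List

    from airflow.models.taskinstance import TaskInstance
    from airflow.utils.state import State


    def is_dag_running(task_instances: List[TaskInstance], cleanup_task_id: str) -> bool:
        # The DAG counts as running if any task other than the cleanup
        # task itself has not reached a terminal state yet.
        return any(
            ti.state not in State.finished
            for ti in task_instances
            if ti.task_id != cleanup_task_id
        )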

wait_for_dag_to_finish(context)

If we are not given any tables, we want to wait for all other tasks to finish before we delete temporary tables. This prevents a scenario where we either a) delete temporary tables that are still in use, or b) run too early and leave behind temporary tables that never get deleted.

Eventually this function should be made asynchronous so that this operator does not take up a worker slot.

Parameters

context (airflow.utils.context.Context) – TI’s Context dictionary

Return type

None
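
Combined with the class notes above, the waiting behaviour can be pictured like this (a sketch under those assumptions, not the real method body):

    import time

    from airflow.exceptions import AirflowException


    def wait_for_dag_to_finish(self, context) -> None:
        dag_run = context["dag_run"]
        while self._is_dag_running(dag_run.get_task_instances()):
            if self._is_single_worker_mode(dag_run):
                # Fail on purpose: a synchronous executor has only one
                # slot, so blocking here would deadlock the run. The
                # task's retries re-check later with the slot freed.
                raise AirflowException("Cleanup deferred until other tasks finish.")
            time.sleep(5)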

classmethod _is_single_worker_mode(current_dagrun)
Parameters

current_dagrun (airflow.models.dagrun.DagRun) –

Return type

bool

static _get_executor_from_job_id(job_id)
Parameters

job_id (int) –

Return type

Optional[str]

get_all_task_outputs(context)

In the scenario where we are not given a list of tasks to follow, we gather all temporary tables ourselves. To avoid grabbing objects that are not tables, we only follow up on SQL operators or the dataframe operator, as these are the operators that return temporary tables.

Parameters

context (airflow.utils.context.Context) – Context of the DAGRun so we can resolve against the XCOM table

Return type

List[astro.sql.table.Table]
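
One way to picture the gathering step (a sketch, not the actual implementation): collect every other task in the DAG and hand them to resolve_tables_from_tasks, which does the narrowing described above.

    from typing import List

    from airflow.utils.context import Context

    from astro.sql.table import Table


    def get_all_task_outputs(self, context: Context) -> List[Table]:
        # Every task in the DAG except this cleanup task is a candidate.
        tasks = [t for t in self.dag.tasks if t.task_id != self.task_id]
        return self.resolve_tables_from_tasks(tasks=tasks, context=context)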

resolve_tables_from_tasks(tasks, context)

For the moment, SQL operators and the dataframe operator are the only two classes that create temporary tables. This function allows us to only resolve XCom for those objects (to reduce how much data is brought into the worker).

We also process these values one at a time so the system can garbage-collect non-table objects (otherwise we might pull in a bunch of dataframes at once and overwhelm the worker).

Parameters
  • tasks (List[airflow.models.baseoperator.BaseOperator]) – A list of Airflow operators whose output we can resolve

  • context (airflow.utils.context.Context) – Context of the DAGRun so we can resolve against the XCOM table

Return type

List[astro.sql.table.Table]
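
The one-at-a-time resolution described above might look roughly like this (a sketch; the operator-type filtering is elided):

    from typing import List

    from airflow.models.baseoperator import BaseOperator
    from airflow.utils.context import Context

    from astro.sql.operators.cleanup import filter_for_temp_tables
    from astro.sql.table import Table


    def resolve_tables_from_tasks(tasks: List[BaseOperator], context: Context) -> List[Table]:
        tables: List[Table] = []
        for task in tasks:
            # task.output is the task's XComArg; resolving it pulls only
            # this one task's return value into the worker.
            output = task.output.resolve(context)
            tables.extend(filter_for_temp_tables([output]))
            # output is rebound on the next iteration, so a large
            # non-table result (e.g. a dataframe) can be
            # garbage-collected promptly.
        return tables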