Bug fixes

  • Skip folders while processing paths in the load_file operator when a file pattern is passed. #733


  • Limit Google Protobuf for compatibility with the BigQuery client. #742


Bug fixes

  • Added a check to create a table only when if_exists is replace in aql.load_file for Snowflake. #729

  • Fix the file type for NDJSON files in the data transfer job from AWS S3 to Google BigQuery. #724

  • Create a new version of imdb.csv with lowercase column names and update the examples to use it, so this change is backwards-compatible. #721, #727

  • Skip folders while processing paths in the load_file operator when a file pattern is passed. #733


  • Updated the benchmark docs of aql.load_file for GCS to Snowflake and S3 to Snowflake. #712, #707

  • Restructured the documentation in the project.toml, quickstart, and readthedocs. #698, #704, #706

  • Make astro-sdk-python compatible with the major versions of the Google providers. #703


  • Consolidate the documentation requirements for Sphinx. #699

  • Add CI/CD triggers on release branches with dependency on tests. #672



  • Improved the performance of aql.load_file by supporting database-specific (native) load methods. This is now the default behaviour. Previously, the Astro SDK Python would always use Pandas to load files into SQL databases, which routed the data through the worker node and slowed performance. #557, #481

    Introduced new arguments to aql.load_file:

    • use_native_support: use native data transfer when the destination supports it (defaults to use_native_support=True)

    • native_support_kwargs: keyword arguments forwarded to the method that performs the native transfer.

    • enable_native_fallback: fall back to the default (Pandas) transfer if the native path fails (defaults to enable_native_fallback=True).

    Now, there are three modes (see the usage sketch below):

    • Native: the default; uses a BigQuery load job for BigQuery and COPY INTO with an external stage for Snowflake.

    • Pandas: This is how datasets were previously loaded. To enable this mode, use the argument use_native_support=False in aql.load_file.

    • Hybrid: attempts the native strategy first and, if it fails, falls back to Pandas with relevant log warnings. #557

  • Allow users to specify the table schema (column types) into which a file is loaded by setting table.columns (also shown in the sketch below). If this attribute is not set, the Astro SDK still infers the schema using Pandas, the previous behaviour. #532

  • Add an example DAG for dynamic task mapping with the Astro SDK. #377, airflow-2.3.0
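
  A minimal usage sketch combining the new native-support arguments with an explicit table schema. File paths, table and column names, connection IDs, and the native_support_kwargs key are illustrative, and import paths may vary between SDK releases:

    import sqlalchemy

    from astro import sql as aql
    from astro.files import File
    from astro.sql.table import Table

    load = aql.load_file(
        input_file=File("s3://my-bucket/data.csv"),  # illustrative path
        output_table=Table(
            name="my_table",
            conn_id="snowflake_default",
            # Optional explicit schema; if omitted, Pandas infers column types.
            columns=[
                sqlalchemy.Column("id", sqlalchemy.Integer),
                sqlalchemy.Column("name", sqlalchemy.String(256)),
            ],
        ),
        use_native_support=True,      # try the native load path first (default)
        native_support_kwargs={
            # Forwarded to the native load method; the key below is illustrative.
            "storage_integration": "my_integration",
        },
        enable_native_fallback=True,  # fall back to Pandas on failure (default)
    )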

Breaking Change

  • The aql.dataframe argument identifiers_as_lower (a boolean defaulting to False) was replaced by the argument columns_names_capitalization (a string with possible values ["upper", "lower", "original"]; the default is "lower"). #564

  • Previously, aql.load_file would uppercase all column names by default; now it lowercases them by default. The old behaviour can be restored with the argument columns_names_capitalization="upper" (see the sketches after this list). #564

  • aql.load_file attempts to load files to BigQuery and Snowflake by using native methods, which may have prerequisites to work. To disable this mode, use the argument use_native_support=False in aql.load_file. #557, #481

  • aql.dataframe will raise an exception if the default Airflow XCom backend is being used. To solve this, either use an external XCom backend, such as S3 or GCS, or set the configuration AIRFLOW__ASTRO_SDK__DATAFRAME_ALLOW_UNSAFE_STORAGE=True. #444

  • Change the declaration of the default Astro SDK temporary schema from AIRFLOW__ASTRO__SQL_SCHEMA to AIRFLOW__ASTRO_SDK__SQL_SCHEMA. #503

  • Renamed aql.truncate to aql.drop_table #554
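
  Minimal sketches for adapting to the changes above. Function and schema names are illustrative, and the environment variables can equally be set in airflow.cfg or the deployment environment:

    import os

    from astro import sql as aql

    # Restore the pre-1.0 uppercase column names:
    @aql.dataframe(columns_names_capitalization="upper")
    def summarize(df):
        return df

    # Keep using the default XCom backend for dataframes (opt-in; not
    # recommended for large data):
    os.environ["AIRFLOW__ASTRO_SDK__DATAFRAME_ALLOW_UNSAFE_STORAGE"] = "True"

    # The temporary-schema setting moved to the astro_sdk section:
    os.environ["AIRFLOW__ASTRO_SDK__SQL_SCHEMA"] = "tmp_astro"  # illustrative name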

Bug fixes

  • Add missing Airflow task terminal states to CleanupOperator. #525

  • Allow chaining aql.drop_table (previously truncate) tasks using the Task Flow API syntax. #554, #515
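
  A sketch of the chaining this enables, following the documented transform pattern; table and connection names are illustrative, and the drop_table argument name is assumed:

    from astro import sql as aql
    from astro.sql.table import Table

    @aql.transform
    def top_rows(input_table: Table):
        return "SELECT * FROM {{ input_table }} LIMIT 10"

    # Inside a DAG: drop the intermediate table once the transform has run.
    results = top_rows(input_table=Table(name="users", conn_id="postgres_default"))
    aql.drop_table(table=results)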


  • Improved the performance of aql.load_file for the following transfers:

    • From AWS S3 to Google BigQuery up to 94%. #429, #568

    • From Google Cloud Storage to Google BigQuery up to 93%. #429, #562

    • From AWS S3/Google Cloud Storage to Snowflake up to 76%. #430, #544

    • From GCS to Postgres in K8s up to 93%. #428, #531

  • Get configurations via Airflow Configuration manager. #503

  • Re-raise caught ValueError and AttributeError exceptions as DatabaseCustomError. #595

  • Unpin the pandas upper-bound dependency. #620

  • Remove markupsafe from dependencies #623

  • Added extend_existing to the SQLAlchemy Table object. #626

  • Move the config for storing dataframes in XCom to the settings file. #537

  • Make the operator names consistent #634

  • Use exc_info for exception logging #643

  • Use lazily evaluated type annotations from PEP 563 (see the note after this list). #650

  • Provide the Google Cloud credentials env var for BigQuery. #679

  • Handle breaking changes in Snowflake provider versions 3.2.0 and 3.1.0. #686
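
  For reference, the PEP 563 change amounts to the standard future import at the top of each module:

    # Annotations are stored as strings and evaluated lazily (PEP 563).
    from __future__ import annotations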


  • Allow running tests on PRs from forks when a label is applied. #179

  • Standardize language in docs files #678



  • Added Cleanup operator to clean temporary tables. #187, #436


  • Added a Pull Request template #205

  • Added Sphinx documentation for readthedocs. #276, #472


  • Fail the LoadFileOperator when input_file does not exist. #467

  • Create scripts to launch benchmark testing on Google Cloud. #432

  • Bump the Google provider version for the google extra. #294



  • Allow lists and tuples as column names in the Append & Merge operators. #343, #435

Breaking Change:

  • The aql.merge interface changed: the argument merge_table was renamed to target_table; target_columns and merge_column were combined into the columns argument; merge_keys was renamed to target_conflict_columns; and conflict_strategy was renamed to if_conflicts. A sketch follows below; more details can be found at #422, #466.
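
  A sketch of the renamed arguments, assuming a plain two-table merge; table names, columns, the conflict strategy, and the source_table argument name are illustrative:

    from astro import sql as aql
    from astro.sql.table import Table

    merged = aql.merge(
        target_table=Table(name="target", conn_id="postgres_default"),  # was merge_table
        source_table=Table(name="source", conn_id="postgres_default"),
        columns=["id", "name"],          # was target_columns + merge_column
        target_conflict_columns=["id"],  # was merge_keys
        if_conflicts="update",           # was conflict_strategy
    )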


  • Document (new) load_file benchmark datasets #449

  • Made improvements to the benchmark scripts and configurations. #458, #434, #461, #460, #437, #462

  • Performance evaluation for loading datasets with Astro Python SDK 0.9.2 into BigQuery #437


Bug fix:

  • Change export_file to return a File object. #454


Bug fix:

  • Fix Table being unable to use Airflow templated names. #413



  • Introduction of the user-facing Table, Metadata and File classes

Breaking changes:

  • The operator save_file became export_file

  • The tasks load_file, export_file (previously save_file) and run_raw_sql should be used with Table, Metadata and File instances

  • The decorators dataframe, run_raw_sql and transform should be used with Table and Metadata instances

  • The operators aggregate_check, boolean_check, render and stats_check were temporarily removed

  • The class TempTable was removed. It is possible to declare temporary tables by using Table(temp=True); see the sketch after this list. All temporary table names are prefixed with _tmp_. If the user names a Table, it is no longer temporary unless the user explicitly sets temp=True.

  • The only mandatory property of a Table instance is conn_id. If no metadata is given, the library will try to extract schema and other information from the connection object. If it is missing, it will default to the AIRFLOW__ASTRO__SQL_SCHEMA environment variable.
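
  A sketch of the new classes under these rules; paths, names, and connection IDs are illustrative, and import paths are as of this release:

    from astro.files import File
    from astro.sql.table import Metadata, Table

    f = File(path="s3://my-bucket/data.csv")

    # Named table: schema comes from Metadata or, if omitted, the connection.
    users = Table(name="users", conn_id="postgres_default",
                  metadata=Metadata(schema="public"))

    # Temporary table: the name is auto-generated with a _tmp_ prefix.
    scratch = Table(conn_id="postgres_default", temp=True)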


  • Major refactor introducing Database, File, FileType and FileLocation concepts.



  • Add support for Airflow 2.3 #367.

Breaking change:

  • We have renamed the artifacts we released to astro-sdk-python from astro-projects. 0.8.4 is the last version for which we have published both astro-sdk-python and astro-projects.


Bug fix:

  • Do not attempt to create a schema if it already exists #329.


Bug fix:

  • Support dataframes from different databases in dataframe operator #325


  • Add an integration test case for SqlDecoratedOperator to test execution of raw SQL. #316


Bug fix:

  • Fix Snowflake transform without input_table. #319



  • load_file support for nested NDJSON files #257

Breaking change:

  • aql.dataframe switches the capitalization to lowercase by default. This behaviour can be changed by using identifiers_as_lower (see the sketch below). #154
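
  A sketch of opting out of the new lowercasing; the function body is illustrative:

    from astro import sql as aql

    # Keep the original column capitalization instead of the lowercase default.
    @aql.dataframe(identifiers_as_lower=False)
    def inspect_df(df):
        return df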


  • Fix commands in #242

  • Add scripts to auto-generate Sphinx documentation


  • Improve type hints coverage

  • Improve Amazon S3 example DAG, so it does not rely on pre-populated data #293

  • Add example DAG to load/export from BigQuery #265

  • Fix usages of mutable default args #267

  • Enable DeepSource validation #299

  • Improve code quality and coverage

Bug fixes:

  • Support gcpbigquery connections #294

  • Support params argument in aql.render to override SQL Jinja template values #254

  • Fix aql.dataframe when table arg is absent #259




  • load_file to a Pandas dataframe, without SQL database dependencies #77


  • Simplify README #101

  • Add Release Guidelines #160

  • Add Code of Conduct #101

  • Add Contribution Guidelines #101


  • Add SQLite example #149

  • Allow customization of task_id when using dataframe #126

  • Use standard AWS environment variables, as opposed to AIRFLOW__ASTRO__CONN_AWS_DEFAULT #175

Bug fixes:

  • Fix merge XComArg support #183

  • Fixes to load_file:

    • file_conn_id support #137

    • sqlite_default connection support #158

  • Fixes to render:

    • conn_id is optional in SQL files #117

    • database and schema are optional in SQL files #124

  • Fix transform, so it works with SQLite #159


  • Remove transform_file #162

  • Improve integration tests coverage #174



  • Support SQLite #86

  • Support users who can’t create schemas #121

  • Ability to install optional dependencies (amazon, google, snowflake) #82


  • Change render so it creates a DAG as opposed to a TaskGroup #143

  • Allow users to specify a custom version of snowflake_sqlalchemy #127

Bug fixes:

  • Fix tasks created with dataframe so they inherit connection id #134

  • Fix snowflake URI issues #102


  • Run example DAGs as part of the CI #114

  • Benchmark tooling to validate performance of load_file #105