..
  This file is autogenerated by `docs/scripts/generate_mappings.py`. Do not edit by hand.


AthenaAccessKey
===============



    Uses the Airflow AWS connection provided to ``get_credentials()`` to generate the profile for dbt.

    https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/connections/aws.html

    This behaves similarly to other provider operators, such as the AWS Athena Operator: you pass an
    ``aws_conn_id``, and the operator generates the credentials for you.

    https://registry.astronomer.io/providers/amazon/versions/latest/modules/athenaoperator

    Information about the dbt Athena profile that is generated can be found here:

    https://github.com/dbt-athena/dbt-athena?tab=readme-ov-file#configuring-your-profile

    https://docs.getdbt.com/docs/core/connect-data-platform/athena-setup

This profile mapping translates Airflow connections with the type ``aws``
into dbt profiles. To use this profile, import it from ``cosmos.profiles``:

.. code-block:: python

    from cosmos.profiles import AthenaAccessKeyProfileMapping

    profile = AthenaAccessKeyProfileMapping(
        conn_id="my_aws_connection",
        profile_args={...},
    )
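
For this mapping to work, the Airflow connection must be of type ``aws``; the access key pair stored
on the connection is what ``get_credentials()`` uses to produce credentials for dbt. As a minimal
sketch (the connection id, credential values, and extras below are illustrative placeholders, not
values prescribed by Cosmos), such a connection could be defined in code like this:

.. code-block:: python

    import json

    from airflow.models import Connection

    # Illustrative AWS connection. The access key pair is read from the
    # connection by the profile mapping, and the "extra" JSON carries the
    # Athena-specific fields listed in the table below.
    my_aws_connection = Connection(
        conn_id="my_aws_connection",
        conn_type="aws",
        login="AKIA...",  # placeholder access key id
        password="***",  # placeholder secret access key
        extra=json.dumps(
            {
                "region_name": "us-east-1",
                "database": "my_athena_database",
                "schema": "my_athena_schema",
                "s3_staging_dir": "s3://my-bucket/athena-results/",
            }
        ),
    )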

While the profile mapping pulls fields from the Airflow connection, you may need to supplement it
with additional ``profile_args``. The table below shows which fields are required, along with the
optional fields that are pulled from the Airflow connection when present. You can also pass extra
fields in the ``profile_args`` dict; a fuller example combining connection fields with
``profile_args`` follows the table.

.. list-table::
   :header-rows: 1

   * - dbt Field Name
     - Required
     - Airflow Field Name

   
   * - ``aws_profile_name``
     - False
     - ``extra.aws_profile_name``
   * - ``database``
     - True
     - ``extra.database``
   * - ``debug_query_state``
     - False
     - ``extra.debug_query_state``
   * - ``lf_tags_database``
     - False
     - ``extra.lf_tags_database``
   * - ``num_retries``
     - False
     - ``extra.num_retries``
   * - ``poll_interval``
     - False
     - ``extra.poll_interval``
   * - ``region_name``
     - True
     - ``extra.region_name``
   * - ``s3_data_dir``
     - False
     - ``extra.s3_data_dir``
   * - ``s3_data_naming``
     - False
     - ``extra.s3_data_naming``
   * - ``s3_staging_dir``
     - True
     - ``extra.s3_staging_dir``
   * - ``schema``
     - True
     - ``extra.schema``
   * - ``seed_s3_upload_args``
     - False
     - ``extra.seed_s3_upload_args``
   * - ``work_group``
     - False
     - ``extra.work_group``
   * - ``aws_access_key_id``
     - True
     -
   * - ``aws_secret_access_key``
     - True
     -

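As referenced above, required dbt fields that are not present on the Airflow connection can be
supplied through ``profile_args``. The sketch below (connection id and all values are illustrative
placeholders) fills in the required fields from the table and hands the mapping to Cosmos via
``ProfileConfig``:

.. code-block:: python

    from cosmos import ProfileConfig
    from cosmos.profiles import AthenaAccessKeyProfileMapping

    # profile_args supplements what is pulled from the Airflow connection;
    # all values here are placeholders.
    athena_profile = AthenaAccessKeyProfileMapping(
        conn_id="my_aws_connection",
        profile_args={
            "database": "my_athena_database",
            "schema": "my_athena_schema",
            "region_name": "us-east-1",
            "s3_staging_dir": "s3://my-bucket/athena-results/",
        },
    )

    profile_config = ProfileConfig(
        profile_name="athena_profile",
        target_name="dev",
        profile_mapping=athena_profile,
    )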

Some notes about the table above:

- This table doesn't necessarily show the full list of fields you *can* pass to the dbt profile. To
  see the full list of fields, see the link to the dbt docs at the top of this page.
- If the Airflow field name starts with ``extra.``, the field is nested under the ``extra``
  field in the Airflow connection. For example, if the Airflow field name is ``extra.token``,
  the field is nested under ``extra`` in the Airflow connection, and its name is ``token``
  (see the sketch after this list).
- If there are multiple Airflow field names, the profile mapping looks at those fields in order.
  For example, if the Airflow field name is ``['password', 'extra.token']``, the profile mapping
  will first look for a field named ``password``. If that field is not present, it will look for
  ``extra.token``.
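
To make the ``extra.`` convention concrete, here is a sketch of the same connection defined as a
JSON-format environment variable (supported by Airflow 2.3+); the variable name and all values are
illustrative placeholders:

.. code-block:: python

    import json
    import os

    # Every "extra.<name>" field from the table above lives under the
    # nested "extra" object of the serialized connection.
    os.environ["AIRFLOW_CONN_MY_AWS_CONNECTION"] = json.dumps(
        {
            "conn_type": "aws",
            "login": "AKIA...",  # placeholder access key id
            "password": "***",  # placeholder secret access key
            "extra": {
                "region_name": "us-east-1",  # read as extra.region_name
                "schema": "my_athena_schema",  # read as extra.schema
                "database": "my_athena_database",
                "s3_staging_dir": "s3://my-bucket/athena-results/",
            },
        }
    )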