Connecting to Snowflake (Native Programmatic Interfaces)

Snowflake facilitates the development of applications using a variety of popular programming languages and development platforms. By leveraging the native clients (connectors, drivers, etc.) offered by Snowflake, you have the flexibility to develop applications through any of the following programmatic interfaces:

- .NET
- ODBC
- PHP
- Python
- SQLAlchemy

Connecting to Snowflake (SQL Development & Management)

Snowflake offers the following native SQL development and data querying interfaces:

1. Snowsight Worksheets

- Browser-based SQL development and editing.
- Requires no installation or configuration.
- Supports multiple, independent working environments that can be opened, closed, named, and reused across multiple sessions, with all work automatically saved.

2. SnowSQL

- Python-based client for executing all tasks in Snowflake, including querying, executing DDL/DML commands, and bulk loading/unloading of data.
- Download the installer from the SnowSQL Download page.

3. Snowflake SQL Extension for Visual Studio Code:

- Snowflake provides an extension for Visual Studio Code, allowing users to write and execute Snowflake SQL statements directly in VS Code.
- Install the extension directly from within Visual Studio Code or indirectly by downloading a specific version.

Moreover, Snowflake seamlessly integrates with various third-party SQL tools for managing the modeling, development, and deployment of SQL code in Snowflake applications. This includes, but is not limited to:

- Aginity
- DataOps.live (we are partners!)
- SeekWell
- Statsig

Please be aware:

The provided list does not encompass all SQL management tools compatible with Snowflake; rather, it includes validated tools known to work effectively with Snowflake. While other tools may be employed alongside Snowflake, we cannot assure seamless interoperability of all features and functionalities in these third-party tools with Snowflake.

Connecting to Snowflake (Security, Governance & Observability)

Security and governance tools play a crucial role in safeguarding an organization's sensitive data, preventing unauthorized access and tampering, and ensuring compliance with regulatory standards. These tools are often used alongside observability solutions or services that give organizations insight into the status, quality, and integrity of their data, helping them identify potential issues.

Collectively, these tools facilitate a diverse array of operations, including risk assessment, intrusion detection, monitoring, notification, data masking, data cataloging, data health and quality checks, as well as issue identification, troubleshooting, resolution, and more.

The subsequent list highlights security, governance, and observability tools and technologies acknowledged for their native connectivity to Snowflake:

- Acryl Data
- Alation
- Atlan
- Datadog
- Informatica
- Okera

Connecting to Snowflake (Machine Learning & Data Science)

Also known as advanced analytics, artificial intelligence (AI), and "Big Data," machine learning and data science encompass a wide range of vendors, tools, and technologies that offer sophisticated capabilities for statistical and predictive modeling.

While these tools and technologies may share some features and functionality with BI tools, their emphasis is less on analyzing and reporting past data. Instead, they concentrate on scrutinizing extensive datasets to identify patterns and extract valuable business insights that can be utilized to forecast future trends.

The subsequent list highlights machine learning and data science platforms and technologies that are recognized for their native connectivity to Snowflake:

- Alteryx
- Databricks
- Hex
- Tellius
- Zepl

Connecting to Snowflake (Business Intelligence (BI))

BI tools empower executives and managers to analyze, discover, and report on data, facilitating more informed business decision-making. A crucial feature of any BI tool is its capability to present data visually through dashboards, charts, and other graphical outputs.

While business intelligence occasionally intersects with technologies like data integration/transformation and advanced analytics, we have opted to categorize these technologies separately.

The subsequent list highlights BI tools and technologies that are recognized for their native connectivity to Snowflake:

- Adobe
- Tableau
- Sigma
- Oracle
- SAP

Connecting to Snowflake (Data Integration)

Often referred to as ETL, data integration involves the following three core operations:

1. Extract:
Retrieving data from specified data sources.

2. Transform:
Adjusting the source data, as required, through rules, merges, lookup tables, or other conversion methods to align with the target.

3. Load:
Incorporating the resulting transformed data into a target database.

More recently, the term ELT has gained prominence, highlighting that the transformation operation doesn't necessarily have to occur before loading. This is particularly relevant in systems like Snowflake, which support transformation during or after loading.
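
As a rough illustration, here is a minimal sketch of both patterns in Snowflake SQL; the stage and table names (raw_stage, events_raw, events_clean) are hypothetical and not defined elsewhere in this document:

-- Transform during loading: COPY INTO accepts a simple SELECT over the staged
-- files, for example to reorder columns and cast a string to a date while loading.
COPY INTO events_clean
  FROM (
    SELECT t.$2, t.$1, TO_DATE(t.$3)
    FROM @raw_stage t
  )
  FILE_FORMAT = (TYPE = CSV);

-- Transform after loading: land the raw data first, then reshape it with
-- standard SQL once it is in a table.
COPY INTO events_raw FROM @raw_stage FILE_FORMAT = (TYPE = CSV);

INSERT INTO events_clean
  SELECT raw_id, raw_type, TO_DATE(raw_date)
  FROM events_raw;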

Furthermore, the scope of data integration has broadened to encompass a wider array of operations, including:

- Data preparation.
- Data migration, movement, and management.
- Data warehouse automation.

Connect to Snowflake

Snowflake seamlessly integrates with a diverse range of cutting-edge tools and technologies, providing you with extensive access to Snowflake through a robust network of connectors, drivers, programming languages, and utilities. This includes:

1. Certified partners who have created both cloud-based and on-premises solutions specifically designed for connecting to Snowflake.

2. Various third-party tools and technologies that have been verified to be compatible and effective when used in conjunction with Snowflake.

3. Snowflake's own suite of clients, such as SnowSQL (a command-line interface), connectors tailored for Python and Spark, and drivers supporting Node.js, JDBC, ODBC, and more.

The following sections provide a more in-depth exploration of these solutions, presented in alphabetical order and organized by category.

Tip:

If you do not find a suitable solution here, our broad network of partners is available to assist you in seamlessly integrating with Snowflake. For additional information, refer to Solutions Partners on the Snowflake website.

Key Concepts & Architecture of Snowflake

Snowflake's Data Cloud operates on a sophisticated data platform offered as a self-managed service. It empowers faster, more user-friendly, and highly flexible data storage, processing, and analytic solutions compared to traditional alternatives.

Unlike other existing database technologies or "big data" software platforms like Hadoop, Snowflake doesn't rely on pre-existing frameworks. Instead, it integrates a groundbreaking SQL query engine with a cloud-native architecture, specifically designed for efficiency. To end-users, Snowflake offers the complete functionality of an enterprise analytic database, coupled with numerous additional special features and distinctive capabilities.

Data Platform as a Self-managed Service:

Snowflake operates as a fully self-managed service, which implies:

- No hardware (virtual or physical) needs to be selected, installed, configured, or managed.
- Virtually no software requires installation, configuration, or management on the user's part.
- Continuous maintenance, management, upgrades, and tuning are seamlessly handled by Snowflake.

The entirety of Snowflake's service operates on cloud infrastructure, with all components—excluding optional command line clients, drivers, and connectors—running within public cloud infrastructures. Snowflake relies on virtual compute instances for computation and a storage service for the persistent storage of data. It is not designed for operation on private cloud infrastructures, whether on-premises or hosted.

Snowflake stands apart from traditional packaged software offerings, as users are not responsible for software installation or updates; Snowflake manages all aspects of these processes.

Snowflake Architecture:

Snowflake's architecture seamlessly blends elements of traditional shared-disk and shared-nothing database architectures. In alignment with shared-disk architectures, Snowflake employs a central data repository where persisted data is accessible from all compute nodes within the platform. However, akin to shared-nothing architectures, Snowflake executes queries through MPP (massively parallel processing) compute clusters. In this configuration, each node in the cluster locally stores a segment of the complete dataset. This innovative approach provides the data management simplicity characteristic of shared-disk architectures while delivering the performance and scale-out advantages associated with shared-nothing architectures.

Snowflake's distinctive architecture comprises three fundamental layers:

- Database Storage
- Query Processing
- Cloud Services

Database Storage:

Upon loading data into Snowflake, the platform systematically restructures the data into an internally optimized, compressed, and columnar format. This optimized data is then stored in cloud storage.

Snowflake takes charge of every aspect of data storage, including organization, file size, structure, compression, metadata, and statistics. The data objects stored by Snowflake are not directly visible or accessible to customers; the stored data can be accessed only through SQL query operations run in Snowflake.

Query Processing:

Queries are processed in the execution layer using "virtual warehouses": MPP (massively parallel processing) compute clusters made up of multiple compute nodes allocated by Snowflake. Each virtual warehouse is independent and does not affect the performance of the others. Refer to the documentation on Virtual Warehouses for more details.
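
As a rough sketch (the warehouse name analytics_wh is hypothetical), a warehouse is created, resized, and suspended with ordinary SQL commands:

-- Create an independent compute cluster; it does not affect other warehouses.
CREATE WAREHOUSE IF NOT EXISTS analytics_wh
  WAREHOUSE_SIZE = 'MEDIUM'
  AUTO_SUSPEND = 300
  AUTO_RESUME = TRUE;

-- Scale it up for a heavier workload, then suspend it to stop consuming credits.
ALTER WAREHOUSE analytics_wh SET WAREHOUSE_SIZE = 'LARGE';
ALTER WAREHOUSE analytics_wh SUSPEND;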

Cloud Services:

The cloud services layer orchestrates activities across Snowflake, managing authentication, infrastructure, metadata, query optimization, and access control. These services run on compute instances provisioned by Snowflake from the cloud provider.

Connecting to Snowflake:

Snowflake offers various connection methods:

- Web-based User Interface: Access all aspects of Snowflake management and usage.

- Command Line Clients (e.g., SnowSQL): Comprehensive access to Snowflake management and usage.

- ODBC and JDBC Drivers: Enable other applications (e.g., Tableau) to connect to Snowflake.

- Native Connectors (e.g., Python, Spark): Develop applications connecting to Snowflake.

- Third-party Connectors: Link applications like ETL tools (e.g., Informatica) and BI tools (e.g., ThoughtSpot) to Snowflake.

Getting Started with Snowflake (Summary, clean up, and additional resources)

Congratulations on successfully finishing this introductory tutorial!

Take a moment to review a brief summary and key highlights covered in the tutorial. Additionally, consider tidying up by dropping any objects created during the tutorial. Further insights can be gained by exploring additional topics in the Snowflake Documentation.

Summary and Key Points:

In summary, the data loading process involves two main steps:

1. Stage the Data Files:

- Data files are staged for loading, either internally within Snowflake or in an external location. This tutorial stages the files in an internal stage.

2. Copy Data to the Target Table:

- The staged files are copied into an existing target table. A running warehouse is required for this step.

Key considerations for loading CSV files:

- A CSV file comprises one or more records, each containing one or more fields, and sometimes a header record.

- Records and fields in each file are separated by delimiters. The default delimiters are:
  - Records: newline characters
  - Fields: commas

In other words, Snowflake expects each record in a CSV file to be separated by a new line and the fields within each record (individual values) to be separated by commas. If different characters serve as record or field delimiters, they must be specified explicitly as part of the file format during loading (a sketch follows below).
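
For instance, here is a minimal sketch of a file format for pipe-delimited files with a header row; the names my_pipe_format and my_table are hypothetical and not part of this tutorial:

-- Declare non-default delimiters once, then reference the format when loading.
CREATE OR REPLACE FILE FORMAT my_pipe_format
  TYPE = CSV
  FIELD_DELIMITER = '|'
  RECORD_DELIMITER = '\n'
  SKIP_HEADER = 1;

COPY INTO my_table
  FROM @%my_table
  FILE_FORMAT = (FORMAT_NAME = 'my_pipe_format');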

- There is a direct relationship between the fields in the files and the columns in the target table, with regard to:
  - The number of fields in the file and columns in the target table.
  - The positions of the fields and columns within their respective file/table.
  - The data types (e.g., string, number, or date) of the fields and columns.

If the numbers, positions, or data types do not match the data, the records will not be loaded (a quick check of the target table's definition is sketched below).
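
One way to confirm the expected column count, order, and data types before loading is to describe the target table:

-- Lists each column of the target table with its position and data type.
DESCRIBE TABLE emp_basic;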

Tutorial clean up (Optional):
If the objects you created in this tutorial are no longer needed, you can remove them from the system with DROP statements.

DROP DATABASE IF EXISTS sf_tuts;

DROP WAREHOUSE IF EXISTS sf_tuts_wh;

Exit the connection:

To conclude a connection, employ the !exit command in SnowSQL (or its equivalent, !disconnect).

Executing this command drops the existing connection and terminates SnowSQL if it happens to be the last active connection.

What’s next?:

Deepen your understanding of Snowflake with the following resources:

Explore the Getting Started introductory videos and engage in additional tutorials offered by Snowflake:

- Access Tutorials and Other Resources

Getting Started with Snowflake (Query Loaded Data)

You can query the data within the emp_basic table using standard SQL alongside any supported functions and operators. Additionally, standard Data Manipulation Language (DML) commands allow you to perform operations like updating the loaded data or inserting additional data.
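
For example, a hypothetical update of a loaded row might look like the following (the new city value is purely illustrative):

-- Standard DML works on loaded data just as it does on any other table.
UPDATE emp_basic
  SET city = 'San Mateo'
  WHERE last_name = 'Talmadge';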

Retrieve all data:

Return all rows and columns from the table:

SELECT * FROM emp_basic;

The following is a partial result:

+------------+------------+--------------------------+--------------------------+------------+------------+
| FIRST_NAME | LAST_NAME  | EMAIL                    | STREETADDRESS            | CITY       | START_DATE |
|------------+------------+--------------------------+--------------------------+------------+------------|
| Arlene     | Davidovits | adavidovitsk@sf_tuts.com | 7571 New Castle Circle   | Meniko     | 2017-05-03 |
| Violette   | Shermore   | vshermorel@sf_tuts.com   | 899 Merchant Center      | Troitsk    | 2017-01-19 |
| Ron        | Mattys     | rmattysm@sf_tuts.com     | 423 Lien Pass            | Bayaguana  | 2017-11-15 |
...
...
...
| Carson     | Bedder     | cbedderh@sf_tuts.co.au   | 71 Clyde Gallagher Place | Leninskoye | 2017-03-29 |
| Dana       | Avory      | davoryi@sf_tuts.com      | 2 Holy Cross Pass        | Wenlin     | 2017-05-11 |
| Ronny      | Talmadge   | rtalmadgej@sf_tuts.co.uk | 588 Chinook Street       | Yawata     | 2017-06-02 |
+------------+------------+--------------------------+--------------------------+------------+------------+

Insert additional data rows:

Beyond loading data from staged files into a table, you can insert rows directly into a table using the INSERT Data Manipulation Language (DML) command.

As an illustration, to insert two additional rows into the table:

INSERT INTO emp_basic VALUES
('Clementine','Adamou','cadamou@sf_tuts.com','10510 Sachs Road','Klenak','2017-9-22') ,
('Marlowe','De Anesy','madamouc@sf_tuts.co.uk','36768 Northfield Plaza','Fangshan','2017-1-26');

Query rows based on email address:

Retrieve a list of email addresses containing United Kingdom top-level domains using the LIKE function:

SELECT email FROM emp_basic WHERE email LIKE '%.uk';

The following is an example result:

Query rows based on start date:

As an illustration, to determine the potential commencement date for specific employee benefits, add 90 days to the employees' start dates using the DATEADD function. Narrow the list to employees whose start date is on or before January 1, 2017:

SELECT first_name, last_name, DATEADD('day',90,start_date) FROM emp_basic WHERE start_date <= '2017-01-01';

Getting Started with Snowflake (Copy Data into target tables):

To transfer your staged data into the target table, execute the COPY INTO <table> command.

This COPY INTO command leverages the virtual warehouse created in the "Create Snowflake Objects" step to copy the files.

COPY INTO emp_basic
FROM @%emp_basic
FILE_FORMAT = (type = csv field_optionally_enclosed_by='"')
PATTERN = '.*employees0[1-5].csv.gz'
ON_ERROR = 'skip_file';

In the provided context:

- The FROM clause designates the location containing the data files, which is the internal stage designated for the table.

- The FILE_FORMAT clause defines the file type as CSV and designates the double-quote character (") for enclosing strings. Snowflake accommodates various file types and options, detailed in the CREATE FILE FORMAT documentation.

- The PATTERN clause specifies that the command should load data from filenames matching a defined regular expression (.*employees0[1-5].csv.gz).

- The ON_ERROR clause outlines the course of action when the COPY command encounters errors in the files. By default, the command halts data loading upon encountering the first error. However, in this example, any file containing an error is skipped, and the command proceeds to load the next file. It's important to note that none of the files in this tutorial contain errors; this inclusion is for illustrative purposes.

The COPY command offers an option to validate files before loading. Refer to the COPY INTO topic and other data loading tutorials for supplementary instructions on error checking and validation.
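
As a sketch of that option, the same COPY statement can be run as a dry run with VALIDATION_MODE, which reports errors in the staged files without loading any rows:

-- Validate the staged files only; no data is loaded.
COPY INTO emp_basic
  FROM @%emp_basic
  FILE_FORMAT = (type = csv field_optionally_enclosed_by='"')
  PATTERN = '.*employees0[1-5].csv.gz'
  VALIDATION_MODE = 'RETURN_ERRORS';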

Upon execution, the COPY command provides a result displaying the list of copied files along with relevant information:

Getting Started with Snowflake (Stage Data Files)

A Snowflake stage serves as a designated location in cloud storage, facilitating the loading and unloading of data from a table. Snowflake provides support for:

Internal Stages:

- Used for storing data files internally within Snowflake.
- Every user and table in Snowflake has an internal stage by default, dedicated to staging data files.

External Stages:

- Used for storing data files externally in cloud storage services such as Amazon S3, Google Cloud Storage, or Microsoft Azure.
- If your data is already hosted in one of these cloud storage platforms, external stages can be used to load it into Snowflake tables (see the sketch below).
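
As a brief sketch of how each kind of stage is referenced (the bucket URL is hypothetical, and a private external stage normally also requires credentials or a storage integration):

LIST @~;            -- the current user's internal stage
LIST @%emp_basic;   -- the internal stage that belongs to the emp_basic table

-- A named external stage pointing at cloud storage.
CREATE STAGE my_s3_stage
  URL = 's3://my-bucket/load/';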

Within this tutorial, we undertake the process of uploading sample data files (previously downloaded in the prerequisites) to the internal stage associated with the emp_basic table created earlier. The PUT command is employed for this purpose, enabling the upload of the sample data files to the designated internal stage.

Staging sample data files:

Utilize the PUT command in SnowSQL to transfer local data files to the designated table stage associated with the emp_basic table you've previously established.

PUT file://<file-path>[/\]employees0*.csv @sf_tuts.public.%emp_basic;

For example:

- Linux or macOS
PUT file:///tmp/employees0*.csv @sf_tuts.public.%emp_basic;

- Windows
PUT file://C:\temp\employees0*.csv @sf_tuts.public.%emp_basic;

Now, let's delve into the command:

file://<file-path>[/\]employees0*.csv specifies the full directory path and names of the files on your local machine to stage. Note that file system wildcards are allowed; if multiple files match the pattern, they are all staged.

@<namespace>.%<table_name> indicates the use of the stage for the specified table, in this case the internal stage of the emp_basic table.

By default, the PUT command employs gzip compression, as denoted in the TARGET_COMPRESSION column.

Listing the Staged Files (Optional):

You can list the staged files using the LIST command.

LIST @sf_tuts.public.%emp_basic;

Getting Started with Snowflake (Create Snowflake Objects)

In this step, you will create the following Snowflake objects:

- A database (sf_tuts) and a table (emp_basic) for loading the sample data.
- An X-Small virtual warehouse (sf_tuts_wh) for loading the data and querying the table. This warehouse is created specifically for this tutorial.

Upon completing the tutorial, these objects will be removed.

Create a Database:
Use the CREATE DATABASE command to generate the sf_tuts database:

CREATE OR REPLACE DATABASE sf_tuts;

For this tutorial, utilize the default schema (public) available for each database instead of creating a new schema.

Verify the active database and schema for your current session using the context functions:

SELECT CURRENT_DATABASE(), CURRENT_SCHEMA();

An example result may resemble:

+--------------------+------------------+
| CURRENT_DATABASE() | CURRENT_SCHEMA() |
|--------------------+------------------|
| SF_TUTS            | PUBLIC           |
+--------------------+------------------+

Create a Table:
Generate a table named emp_basic within sf_tuts.public using the CREATE TABLE command. The table structure corresponds to the fields in the forthcoming CSV data files:

CREATE OR REPLACE TABLE emp_basic (
first_name STRING,
last_name STRING,
email STRING,
streetaddress STRING,
city STRING,
start_date DATE
);

Create a Virtual Warehouse:
Form an X-Small warehouse called sf_tuts_wh using the CREATE WAREHOUSE command. This warehouse is initially suspended but is configured to automatically resume when SQL statements requiring compute resources are executed:

CREATE OR REPLACE WAREHOUSE sf_tuts_wh WITH
WAREHOUSE_SIZE='X-SMALL'
AUTO_SUSPEND = 180
AUTO_RESUME = TRUE
INITIALLY_SUSPENDED=TRUE;

Verify the active warehouse for your current session:

SELECT CURRENT_WAREHOUSE();

An example result may appear as:

+---------------------+
| CURRENT_WAREHOUSE() |
|---------------------|
| SF_TUTS_WH          |
+---------------------+

Getting Started with Snowflake (Prerequisites)

To successfully engage with this tutorial, it is essential to set up a database, table, and virtual warehouse for data loading and querying in Snowflake. The creation of these Snowflake objects necessitates a Snowflake user with a role endowed with the required access control privileges.

Additionally, SnowSQL is indispensable for executing SQL statements throughout the tutorial. Lastly, CSV files containing sample data are required for the data loading process.

While you have the option to use an existing Snowflake warehouse, database, table, and your local data files, it is recommended to utilize the provided Snowflake objects and the accompanying set of data.

Before proceeding, ensure the following setup for Snowflake:

Create a User:

To create the necessary database, table, and virtual warehouse, log in as a Snowflake user with a role granting the requisite privileges for object creation.

- If you are using a 30-day trial account, log in with the user created for the account, which has the necessary role for object creation.
- If you do not have a Snowflake user, you cannot perform the tutorial. Ask someone with the ACCOUNTADMIN or SECURITYADMIN role to create a user for you if needed.

Install SnowSQL:

Follow the SnowSQL installation guide to set it up.

Download Sample Data Files:

- Download the provided sample employee data files in CSV format from Snowflake.
- Unzip the sample files, preferably into one of the following directories:
  - Linux/macOS: /tmp
  - Windows: C:\temp

Each file contains five data records, with fields separated by a comma (,). No extra spaces precede or follow the commas in each record—this conforms to Snowflake’s default expectation when loading CSV data.

By completing these setup steps, you’ll be well-prepared to dive into the tutorial seamlessly.

Getting Started with Snowflake (Log in to SnowSQL)

Once SnowSQL is installed, initiate the connection to Snowflake using the following steps:

1. Open a Command Line Window:

Launch your command line interface.

2. Start SnowSQL:

Enter the following command in the command line window:

$ snowsql -a <account_identifier> -u <user_name>

Replace <account_identifier> with the unique identifier for your Snowflake account, using the preferred format organization_name-account_name. Refer to Format 1 (Preferred): Account Name in Your Organization for more details.
<user_name> is your Snowflake user login name.

Note:
If your account uses an identity provider (IdP), you can authenticate through a web browser instead. Use the following command:

$ snowsql -a <account_identifier> -u <user_name> --authenticator externalbrowser

For additional information, see Using a Web Browser for Federated Authentication/SSO.

3. Password Entry:

When prompted by SnowSQL, enter the password associated with your Snowflake user.

4. Successful Login:

Upon successful login, SnowSQL displays a command prompt containing information about your current warehouse, database, and schema.
Note:
If you encounter issues accessing your account and lack the account identifier, refer to the Welcome email sent by Snowflake upon trial account signup or collaborate with your ORGADMIN for account details. The Welcome email also provides values for locator, cloud, and region.

Additional Note:
If your Snowflake user lacks default warehouse, database, and schema settings, or if SnowSQL was not configured with defaults, the prompt will indicate:

user-name#(no warehouse)@(no database).(no schema)>
This signifies that no warehouse, database, or schema is selected for the current session. These objects will be created in the subsequent steps of the tutorial, and the prompt will automatically update to reflect their names.

For more comprehensive information, consult Connecting Through SnowSQL.

Getting Started with Snowflake (Snowflake in 20 minutes)

Introduction:

This guide utilizes the Snowflake command line client, SnowSQL, to introduce fundamental concepts and tasks, including:

1. Creating Snowflake Objects: You'll create a database and a table for storing data.

2. Loading Data: Sample CSV data files are provided for you to load into the table.

3. Querying: Finally, you'll explore sample queries.

Note:
To execute data loading and queries, Snowflake requires a virtual warehouse. Keep in mind that a running virtual warehouse consumes Snowflake credits. However, the credit consumption in this tutorial will be minimal as the entire process can be completed in under 30 minutes. Additionally, Snowflake incurs a minimal charge for on-disk storage used by the sample data in this tutorial. Nevertheless, steps are provided to drop the table and minimize storage costs.

If you are using a 30-day trial account, it's worth noting that the account comes with free credits, and you won't incur any costs during the tutorial. For detailed information on warehouse sizes and costs, refer to the Warehouse Size section.

What You Will Learn

Throughout this tutorial, you will gain proficiency in the following tasks:

- Creating Snowflake Objects: Establish a database and a table dedicated to storing data.

- Installing SnowSQL: Learn to install and use SnowSQL, the command line query tool for Snowflake. (Users of Visual Studio Code may find the Snowflake Extension for Visual Studio Code a suitable alternative to SnowSQL.)

- Loading CSV Data Files: Use various mechanisms to load data from CSV files into tables.

- Writing and Executing Sample Queries: Develop the ability to write and execute a variety of queries against the newly loaded data.

Getting Started with Snowflake (Continuation)

Trial Accounts:

A trial account with Snowflake provides the opportunity to assess and test the platform's innovative and robust features at no cost or contractual commitments. Signing up for a trial account is simple – just provide a valid email address. No payment or additional qualifying information is necessary.

Signing Up for a Trial Account:

Begin your free trial by completing the self-service form on the Snowflake website.

Upon signing up, you'll make essential selections, including your preferred cloud platform, region, and Snowflake Edition. Keep in mind that certain features in the Enterprise Edition may consume additional credits, impacting the rate at which you utilize your free usage balance.

Your free usage balance decreases as you consume credits for utilizing compute resources and incurring storage-related costs.

The trial period spans 30 days from the sign-up date or until your free usage balance is depleted, whichever comes first. At any point during the trial, you have the flexibility to cancel or convert the account to a paid subscription.

Once the trial concludes, the account enters a suspended state. Although you can still log in, you won't have access to features like running a virtual warehouse, loading data, or performing queries.

To reactivate a suspended trial account, you'll need to provide credit card information, transitioning it into a paid subscription.

Using Compute Resources:

Virtual warehouses play a crucial role in providing the computational power needed for data loading and query execution within Snowflake. These warehouses operate by consuming credits, consequently diminishing your free usage balance. To initiate this process, simply start a warehouse, and any credits utilized will be deducted from your balance. If your credit consumption exhausts your free usage balance entirely, adding a credit card to your account becomes necessary to continue utilizing Snowflake.

It's important to note that free credits are solely consumed by the virtual warehouses you create in your account, and this consumption occurs only when these warehouses are actively running.

Tip:
To avoid unintentional usage of your free credits, consider the following:

- Verify Warehouse Size: Before starting or resuming virtual warehouses, check their sizes. Larger warehouses consume more credits while running. In many cases, Small or Medium-sized warehouses are adequate for evaluating Snowflake's loading and querying capabilities.

- Auto-Suspend Setting: When creating a warehouse, do not disable auto-suspend. Choosing a short auto-suspend period, such as 5 minutes or less, can significantly reduce credit consumption (a sketch follows below).
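
For instance, here is a minimal sketch of adjusting auto-suspend on an existing warehouse (my_wh is a hypothetical name):

-- Suspend automatically after 5 minutes (300 seconds) of inactivity.
ALTER WAREHOUSE my_wh SET AUTO_SUSPEND = 300;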

Using Storage:

When you upload data to your trial account, the associated storage cost is deducted from your free usage balance, calculated according to the standard On-Demand cost of a terabyte (TB) in your cloud platform and region. Beyond storage expenses, the act of loading data also utilizes credits as it engages the compute resources of a warehouse.

Converting to a Paid Account:

To transition your trial account to a paid status, follow these steps:

Log in to Snowsight.
Navigate to Admin » Billing & Terms.
Select Payment Methods.
Click on + Credit Card.
Provide the required information and click Add Card.
Upon entering your credit card details, Snowflake will verify the card's validity by initiating a $1 (USD) charge. No other charges are applied at this stage.

Note that you can also update the credit card associated with a trial account using the same interface. Each time you add a new credit card, a $1 (USD) charge is processed for verification.

It's essential to be aware that adding a credit card to a trial account converts it into a paid account without prematurely ending the trial period. Throughout the remaining trial duration, you can continue utilizing your free credits and storage until the balance is depleted. Afterward, any additional credit consumption and storage costs will be billed.

Unused balances expire at the conclusion of the trial period. At this point, costs for credit consumption and data storage are charged to the credit card on file at the end of each billing cycle, typically on a monthly basis.

For detailed pricing information, please refer to the pricing page on the Snowflake website.

Canceling a Trial Account:

Cancellation of a trial account can be initiated by reaching out to Snowflake Support and requesting the cancellation. It's important to note that, presently, trial accounts cannot be canceled directly through the web interface. To proceed with the cancellation, you must contact Snowflake Support for assistance.

Getting Started with Snowflake is Easy (Simple Guide)

Here's a simple guide to help you get started with Snowflake, including watching a live demo, trying it for free, and participating in a virtual hands-on lab:

1. Watch a Live Demo:

- Visit the Snowflake website: https://www.snowflake.com/.
- Look for the "Resources" or "Learn" section.
- Check for upcoming webinars or live demos. These sessions are usually conducted by Snowflake experts who showcase the platform's features and capabilities.
- Register for a live demo and attend the scheduled session.

2. Try Snowflake for Free:

- Navigate to the Snowflake website.
- Find the "Free Trial" or "Sign Up for Free" button.
- Follow the registration process to create a Snowflake account.
- You may need to provide some basic information and set up a username and password.
- Once registered, log in to the Snowflake platform using your credentials.

3. Explore the Snowflake Interface:

- After logging in, take some time to explore the Snowflake interface.
- Familiarize yourself with the main components, such as the Worksheets for querying data, the Object Browser for managing databases and tables, and the Warehouses for managing compute resources.
- Snowflake has a user-friendly interface, and you can find detailed documentation on their website to help you navigate and understand each feature.

4. Load Sample Data:

- To get hands-on experience, consider loading sample data into Snowflake.
- Snowflake provides sample datasets that you can use for testing and exploration. Look for these datasets in the documentation or within the Snowflake platform.

5. Participate in a Virtual Hands-On Lab:

- Check if Snowflake offers virtual hands-on labs or workshops.
- These labs typically provide a guided, interactive experience where you can work on exercises to understand key concepts and functionalities.
- Look for announcements on the Snowflake website, community forums, or through any communication channels they provide.

6. Join the Snowflake Community:

- Connect with other Snowflake users and experts through the Snowflake Community.
- The community is a valuable resource for asking questions, sharing experiences, and learning from others' use cases.
- Participate in forums, webinars, and discussions to enhance your understanding of Snowflake.

7. Refer to Documentation and Tutorials:

- Snowflake provides extensive documentation and tutorials to help users understand and use the platform effectively.
- Refer to the official documentation for detailed information on specific features, best practices, and troubleshooting.

By following these steps, you'll be well on your way to getting started with Snowflake and gaining hands-on experience with the platform.

Submitting a Listing for Approval

Before you can publish a listing on the Snowflake Marketplace, it's necessary to submit the listing for approval by Snowflake.

If the option to "Submit for Approval" is disabled and you wish to submit your listing, please check the following:

1. Ensure that you have completed the steps to configure the listing.
2. Verify that you have the ACCOUNTADMIN role or the OWNERSHIP privilege on the data product attached to the listing.
3. Confirm that all sample SQL queries attached to the listing pass validation.

To submit a listing for approval, follow these steps:

1. Log in to Snowsight.
2. In the left navigation bar, go to Data » Provider Studio.
3. Select the Listings tab, then choose the draft listing you intend to submit for approval.
4. Click on "Submit for Approval."

After Snowflake reviews the listing, the state will change to either Approved or Denied.

If the listing is denied, update it based on the feedback provided in comments and resubmit it for approval.

Upon approval or denial of the listing, an email notification is sent to both the Business Contact and Technical Contact email addresses specified in the provider profile associated with the listing.

Publishing a Listing for an Application Package:

To make an approved listing accessible on the Snowflake Marketplace, follow these steps:

1. Log in to Snowsight.

2. In the left navigation bar, go to Data » Provider Studio.

3. Select the Listings tab, then choose the listing you wish to publish.

4. Click on "Publish."

After publishing your Snowflake Marketplace listing, you have the option to create a referral link, allowing you to share a direct link to your listing with consumers.

Creating a Listing for an Application Package for the Snowflake Marketplace

To showcase your application package on the Snowflake Marketplace, follow these steps to create a listing:

1. Log in to Snowsight.

2. Navigate to Data » Provider Studio in the left navigation bar.

3. Click on + Listing, opening the Create Listing window.

4. Provide a name for your listing.

5. In the "Who can discover the listing" section, select "Anyone on the Marketplace" to publish the listing on the Snowflake Marketplace.

6. In the "How will consumers access the data product?" section, choose "Free" or "Paid."

7. Click on "Next." A draft listing is generated.

Before publishing your draft listing, you must configure additional required and optional capabilities.

Configuring a Marketplace Listing for an Application Package:

Once you've created a listing for the Snowflake Marketplace, it's essential to configure additional details for your listing to facilitate submission for approval and subsequent publication.

To configure a listing, follow these steps:

Log in to Snowsight.

In the left navigation bar, go to Data » Provider Studio.

Select the Listings tab, then choose the draft listing you wish to configure.

Click on "Add" next to each section displayed on the page, providing the necessary information.

While inputting details for each section, consult Configuring Listings for insights into each field. The specific properties available for editing depend on the type of listing created.

If you intend to monetize your Snowflake Native App, include a pricing plan to receive compensation for your Snowflake Native App.