A Snowflake stage is a designated location in cloud storage used for loading data into, and unloading data from, tables. Snowflake supports two kinds of stages:

Internal stages store data files within Snowflake. Every user and every table in Snowflake automatically gets its own internal stage for staging data files.

External stages reference data files stored in external cloud storage services such as Amazon S3, Google Cloud Storage, or Microsoft Azure. If your data already lives in one of these platforms, an external stage lets you load it into Snowflake tables.
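As an illustration only (this tutorial uses a table stage, not a named external stage), an external stage over an S3 bucket can be created with CREATE STAGE. The stage name, bucket URL, and credential values below are placeholders, not real values:

```sql
-- Hypothetical example: an external stage pointing at an S3 bucket.
-- The URL and credentials are placeholders to be replaced with your own.
CREATE OR REPLACE STAGE my_s3_stage
  URL = 's3://my-bucket/data/'
  CREDENTIALS = (AWS_KEY_ID = '<aws_key_id>' AWS_SECRET_KEY = '<aws_secret_key>');
```

Once created, such a stage can be referenced by name (e.g. `@my_s3_stage`) in COPY INTO statements, whereas the rest of this tutorial relies on the table stage that Snowflake provides automatically.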
In this tutorial, we upload the sample data files (downloaded in the prerequisites) to the internal table stage of the emp_basic table created earlier, using the PUT command.
Staging sample data files:
Use the PUT command in SnowSQL to upload the local data files to the table stage of the emp_basic table you created earlier.
PUT file://<file-path>[/\]employees0*.csv @sf_tuts.public.%emp_basic;

-- Linux or macOS
PUT file:///tmp/employees0*.csv @sf_tuts.public.%emp_basic;

-- Windows
PUT file://C:\temp\employees0*.csv @sf_tuts.public.%emp_basic;
Let's break down the command:
file://<file-path>[/\]employees0*.csv specifies the full directory path and names of the files on your local machine to stage. File-system wildcards are allowed, so if multiple files match the pattern, they are all uploaded.
@sf_tuts.public.%emp_basic tells PUT to use the stage for the specified table; the % prefix denotes a table stage, in this case the stage of the emp_basic table.
By default, PUT compresses uploaded files with gzip, as shown in the TARGET_COMPRESSION column of the command's output.
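If you prefer to stage the files uncompressed, PUT's automatic compression can be turned off with the AUTO_COMPRESS parameter (it defaults to TRUE). A minimal sketch, using the same Linux/macOS path as above:

```sql
-- Stage the files without gzip compression
-- (AUTO_COMPRESS defaults to TRUE).
PUT file:///tmp/employees0*.csv @sf_tuts.public.%emp_basic
  AUTO_COMPRESS = FALSE;
```

With compression disabled, the TARGET_COMPRESSION column of the output reports NONE and the staged filenames keep their original .csv extension.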
Listing the Staged Files (Optional):
You can list the staged files using the LIST command.
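For example, to list the files staged for the emp_basic table:

```sql
-- List the files in the emp_basic table stage; staged filenames carry
-- a .gz suffix when PUT compressed them with the default gzip setting.
LIST @sf_tuts.public.%emp_basic;
```

The output shows each staged file's name, size, MD5 checksum, and last-modified timestamp, which is a quick way to confirm the upload succeeded before running COPY INTO.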