Streamlit is an open-source app framework for the Python language. It helps you create web apps for data science and machine learning in a short time. It is compatible with major Python libraries such as scikit-learn, Keras, PyTorch, SymPy (LaTeX), NumPy, pandas, Matplotlib, etc.
Streamlit was acquired by Snowflake in March 2022. It enables data scientists and other developers to easily build data applications with Python using its open-source framework. Following the acquisition, Snowflake is advancing its Streamlit integration (in development) so developers can bring their data and machine learning (ML) models to life as secure, interactive applications, all within Snowflake.
If you want to ADD an IDENTITY COLUMN to a Snowflake table, you just need to either:

1. CREATE the TABLE with the IDENTITY COLUMN like this:

CREATE OR REPLACE TABLE EXAMPLE_TABLE(
    YOUR_ID INT IDENTITY START 1 INCREMENT 1,
    YOUR_NAME VARCHAR
);

*Note: you can replace IDENTITY with AUTOINCREMENT - it works exactly the same way.

2. ALTER the TABLE and add the IDENTITY COLUMN [REMEMBER - you can only do this if there is NO DATA in the TABLE!]

ALTER TABLE EXAMPLE_TABLE ADD COLUMN YOUR_ID INT IDENTITY(1,1);

*Note: All of my examples show IDENTITY(1,1), where we start from the number 1 and increment by 1 for each row. You can use different values here if you want: you can start from any number and increment by any amount as well. If you do not specify a START VALUE or an INCREMENT VALUE then both default to 1.

You also have one additional method to add auto-numbering: the SEQUENCE object. A sequence is separate from the table itself, so it is great if you want a unique value ACROSS multiple tables. The syntax to CREATE a SEQUENCE is below. Again, a sequence is a separate object from the table, very similar in functionality to IDENTITY or AUTOINCREMENT, but more versatile I would say.

CREATE SEQUENCE EXAMPLE_SEQUENCE_FOR_YOU START 1 INCREMENT 1;
// this creates a sequence with a START VALUE of 1 and an INCREMENT VALUE of 1.

CREATE OR REPLACE TABLE EXAMPLE_TABLE(
    YOUR_AUTO_ID INT DEFAULT EXAMPLE_SEQUENCE_FOR_YOU.NEXTVAL,
    YOUR_NAME VARCHAR
);
// or
ALTER TABLE EXAMPLE_TABLE ALTER COLUMN YOUR_AUTO_ID SET DEFAULT EXAMPLE_SEQUENCE_FOR_YOU.NEXTVAL;

FYI: AUTOINCREMENT and IDENTITY actually use sequences behind the scenes!
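To see the auto-numbering in action, here is a quick usage sketch (the table and names are just placeholders from the example above):

```sql
-- Assuming the EXAMPLE_TABLE with an IDENTITY column defined above
INSERT INTO EXAMPLE_TABLE (YOUR_NAME) VALUES ('ALICE'), ('BOB');

-- YOUR_ID is assigned automatically by the identity column
SELECT YOUR_ID, YOUR_NAME FROM EXAMPLE_TABLE;
```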
Hey Charlie - This is definitely an annoying "gotcha". The problem is that the secondary account itself (where the Replica Database will be created) has to be ENABLED for REPLICATION, and currently ONLY the ORGADMIN role can do that. So either you do this yourself if you have access to the ORGADMIN role, or you need to find the ORGADMIN for your account and have them execute the following code:
Make sure to use your very own:
ORG_NAME and SECONDARY_ACCOUNT_NAME (where you will put the replica)
USE ROLE ORGADMIN;
SELECT SYSTEM$GLOBAL_ACCOUNT_SET_PARAMETER('ORG_NAME.SECONDARY_ACCOUNT_NAME','ENABLE_ACCOUNT_DATABASE_REPLICATION', 'true');
/* ALSO, THE RESULT SHOULD BE:
[ "SUCCESS" ]
*/
ONLY after you do that will you be able to create a replica of the primary database on your secondary account.
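Once replication is enabled, the replica itself is created on the secondary account roughly like this (a sketch; the role, organization, account, and database names are placeholders you need to swap for your own):

```sql
-- Run on the SECONDARY account
USE ROLE ACCOUNTADMIN;
CREATE DATABASE MY_DB AS REPLICA OF ORG_NAME.PRIMARY_ACCOUNT_NAME.MY_DB;

-- Pull the latest snapshot of the primary database into the replica
ALTER DATABASE MY_DB REFRESH;
```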
[I get asked this question all the time so I'm answering it.]
Snowflake Micro-partitions are immutable physical files that are automatically partitioned based on ingestion order, unless you set up auto-clustering to define how the partitions should be organized. This is how all OLAP data in Snowflake is partitioned. Snowflake divides and groups rows of tables into these compressed micro-partitions of 50 to 500 MB of uncompressed data (the stored size is smaller because the data is compressed). Ideally the micro-partitions are clustered (sorted) as efficiently as possible to allow for pruning. These micro-partitions allow the Snowflake engine to easily replicate segments evenly for distribution across compute nodes. They are also part of the architectural design that allows Snowflake to handle datasets of any size (even petabytes), since Snowflake cleverly distributes them into these micro-partitions with metadata automatically and continuously updated on them.
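You can peek at how well a table's micro-partitions are clustered for a given set of columns with a built-in system function. A quick sketch (the table and column names are placeholders):

```sql
-- Returns clustering depth / overlap statistics as JSON for the given columns
SELECT SYSTEM$CLUSTERING_INFORMATION('EXAMPLE_TABLE', '(ORDER_DATE)');
```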
YES. Streams are one of the objects you can clone. It's as simple as:

CREATE STREAM SNOWFLAKE_SOLUTIONS_STREAM_CLONE CLONE SNOWFLAKE_SOLUTIONS_STREAM_BEING_CLONED;

Other objects you can clone are: DATABASE, SCHEMA, TABLE, STAGE, FILE FORMAT, SEQUENCE, and TASK.
Yes. Secure Views can have full change tracking enabled on them by using the CHANGES clause.
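A sketch of what that looks like (the view name, base table, and time window are placeholders; change tracking must first be enabled on the view and its underlying tables):

```sql
-- Enable change tracking on the base table and the secure view
ALTER TABLE BASE_TABLE SET CHANGE_TRACKING = TRUE;
ALTER VIEW MY_SECURE_VIEW SET CHANGE_TRACKING = TRUE;

-- Query the changes captured over the last hour
SELECT *
FROM MY_SECURE_VIEW
  CHANGES(INFORMATION => DEFAULT)
  AT(TIMESTAMP => DATEADD(hour, -1, CURRENT_TIMESTAMP()));
```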
The way Snowflake Zero Copy Cloning works is based on Snowflake's Micro-partition technology. One of the nice things the Snowflake Data Cloud does is that a CLONE at first involves ONLY very, very tiny metadata changes. Since Snowflake has these immutable micro-partitions, a CLONE object at first is just a new reference to the micro-partitions at the EXACT time of the clone statement execution. So, if you NEVER change the clone then there are no additional storage costs for it. Also, a CREATE TABLE statement with a CLONE statement does not use Snowflake compute and has no compute costs. [unless in a somewhat rare case your usage of services like this exceeds Snowflake's 10% rule. See this:
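A minimal sketch of zero-copy cloning (the table names here are just placeholders):

```sql
-- Metadata-only operation at creation time: the clone references the
-- source table's existing micro-partitions, so no data is copied
CREATE TABLE ORDERS_DEV CLONE ORDERS;

-- New storage is consumed only for micro-partitions that diverge
-- after you start modifying the clone
```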
Hah. This is one of the key questions related to Snowflake. It really depends on your workloads and how you use Snowflake. The larger the organization and the number of people using Snowflake, the harder it typically is to estimate. We have tools here at the Snowflake Solutions site that let you roughly estimate your Snowflake monthly costs.

If you really need to maintain a budget and estimate costs very accurately, then you need to have very standard workloads. Loading workloads, if they are standard, can be estimated very accurately. If you have other workloads like reporting, CRM, and data science that are more ad hoc, then it gets harder and harder to estimate costs. You can use resource monitors to completely control your max costs. Be very careful, though, to have very limited access to who can change resource monitor limits. Also, you need to be very careful with time travel as well as any of the additional services with costs, such as Materialized Views, Automated Clustering, Snowpipe, and the Search Optimization Service.
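Here is a sketch of a resource monitor that caps spend (the monitor name, warehouse name, and quota are placeholders; pick values for your own budget):

```sql
-- Cap the warehouse at 100 credits per month; warn at 80%, hard-stop at 100%
CREATE RESOURCE MONITOR MONTHLY_BUDGET
  WITH CREDIT_QUOTA = 100
  FREQUENCY = MONTHLY
  START_TIMESTAMP = IMMEDIATELY
  TRIGGERS ON 80 PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;

-- Attach the monitor to a warehouse
ALTER WAREHOUSE REPORTING_WH SET RESOURCE_MONITOR = MONTHLY_BUDGET;
```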
No. Currently - 3/23/2022 - you cannot call an external function in a row access policy.
Snowflake does not clone EXTERNAL tables.
No. Snowflake Directory Tables are ONLY implicit objects tied to a Stage, and there is no way to assign privileges to them. The privileges are determined by the Snowflake Stage Object that they are tied to.
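So in practice you grant on the stage and query the directory table through it. A sketch (the stage and role names are placeholders):

```sql
-- Privileges are granted on the STAGE, never on the directory table itself
GRANT READ ON STAGE MY_STAGE TO ROLE ANALYST;

-- Query the implicit directory table through the stage
SELECT RELATIVE_PATH, SIZE, LAST_MODIFIED
FROM DIRECTORY(@MY_STAGE);
```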
Yes. This is easy to do by running the following statement. [Again, this ONLY works if there is NO DATA in the table at all.]
ALTER TABLE EXAMPLE_TABLE ADD COLUMN YOUR_ID INT IDENTITY(1,1);
or
ALTER TABLE EXAMPLE_TABLE ADD COLUMN YOUR_ID INT AUTOINCREMENT(1,1);
or
ALTER TABLE EXAMPLE_TABLE ADD COLUMN YOUR_ID INT AUTOINCREMENT START 1 INCREMENT 1;
*Note: You will notice that the syntax for AUTOINCREMENT is exactly the same as for IDENTITY.
Also, if you do not specify a START VALUE or an INCREMENT VALUE then the default values for both of them are equal to 1.
One of the awesome features of Snowflake is ZERO COPY CLONING. You can utilize: CREATE DATABASE NEW_CLONED_TEST_DATABASE CLONE EXISTING_DATABASE_NAME; It is that easy to create a clone of your production database within the same account.
Next, define a policy with one or more conditions to query the mapping table with a subquery. At query runtime, Snowflake determines whether the user executing the query matches the sales region specified in the mapping table. If a match occurs, the user can see those rows in the query result. Based on the mapping table, the expected query results are as follows:
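A sketch of such a policy (the mapping table SALES_REGION_MAP, its columns, and the admin role are assumptions; adapt them to your own mapping table):

```sql
-- Rows are visible if the current user maps to the row's region,
-- or if an administrator role is running the query
CREATE OR REPLACE ROW ACCESS POLICY REGION_EXAMPLE_POLICY
  AS (region_name varchar) RETURNS BOOLEAN ->
    'ACCOUNTADMIN' = CURRENT_ROLE()
    OR EXISTS (
      SELECT 1
      FROM SALES_REGION_MAP m
      WHERE m.SALES_USER = CURRENT_USER()
        AND m.REGION_NAME = region_name
    );
```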
No. Snowflake Directory Tables are implicit objects layered onto a Snowflake Stage Object. They only exist as a layer on a Stage Object.
Currently, this ONLY tracks inserts and it can ONLY be used on External Tables.
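A sketch of creating such an insert-only stream (the stream and external table names are placeholders):

```sql
-- Streams on external tables must be insert-only
CREATE STREAM EXT_TABLE_STREAM
  ON EXTERNAL TABLE MY_EXTERNAL_TABLE
  INSERT_ONLY = TRUE;
```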
YES. Schemas are one of the Snowflake objects that can be cloned. It is very easy to do. It's as simple as:

CREATE SCHEMA SNOWFLAKE_SOLUTIONS_SCHEMA_CLONE CLONE SNOWFLAKE_SOLUTIONS_SCHEMA;

Other objects you can clone are: DATABASE, TABLE, STAGE, FILE FORMAT, SEQUENCE, STREAM, and TASK.
There are two ways to apply a Row Access Policy (RAP) to a table or view. You either add it in the CREATE TABLE or CREATE VIEW statement, or you add it by altering the table or view, as follows:

* In a CREATE TABLE statement or a CREATE VIEW statement you add the Row Access Policy like this:

CREATE TABLE YOUR_EXAMPLE_TABLE (
example_id int,
example_customer_fname varchar,
example_customer_lname varchar,
example_region_name varchar
)
with row access policy REGION_EXAMPLE_POLICY on (example_region_name);

* If you have an existing table or view, you can apply the RAP with the ALTER TABLE or ALTER VIEW statement like this:

ALTER TABLE EXAMPLE_TABLE ADD ROW ACCESS POLICY REGION_EXAMPLE_POLICY on (example_region_name);
Snowflake's core micro-partition architecture also enables time-travel functionality as a key feature. This allows customers to go back to any point in time or timestamp for the object, ASSUMING a time travel retention period is enabled on the object.
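A sketch of what time-travel queries look like (the table name is a placeholder, and `<query_id>` is a hypothetical statement ID you would copy from your query history):

```sql
-- State of the table one hour ago
SELECT * FROM ORDERS AT(OFFSET => -60*60);

-- State of the table just before a specific statement ran
SELECT * FROM ORDERS BEFORE(STATEMENT => '<query_id>');
```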