Building Extensible Data Pipelines with Snowflake



Presentation Description:

Data pipelines are at the heart of how your organization delivers the data it needs to derive valuable insights. But building robust, reliable pipelines that augment and transform raw inputs into clean, analyzable data sets is hard, often requiring coordination between multiple systems that don’t work well together. This session will describe how data engineers can use Snowflake’s extensibility features to build simple pipelines that incorporate code and libraries written in a variety of languages and integrate naturally with third-party services. Snowflake representatives will build a real-life demo using some of its latest features, including external functions, Java UDFs, and more.

Presentation Track: Modernize Your Data Lake, Deliver Data Engineering at Scale
Presentation Date: November 29, 2020
Presentation Speaker(s): Isaac Kunen
Frank’s Comment: I worked with Isaac when the Snowflake Kafka connector was in Beta. He was a nice guy. Seemed to work hard and know his stuff.

Overview ITS:

In this session, the speakers describe how data engineers can use Snowflake’s extensibility features to build simple pipelines that incorporate code and libraries written in a variety of languages. They first talk about the challenges that cause many headaches: siloed data, reliability and performance problems, and complex pipeline architectures. The key features of the Snowflake platform, however, let you incorporate services outside of Snowflake, bring your own code into the platform, and, with Snowpark, write powerful pipelines.
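To make the external functions idea concrete, here is a minimal sketch of the remote side of an external function. When Snowflake calls an external function, it POSTs a JSON body of the form `{"data": [[row_number, arg1, ...], ...]}` to your service (typically behind an API gateway) and expects a response in the same shape, one result row per input row. The handler name and the upper-casing logic below are illustrative assumptions, not part of the session's demo:

```python
import json

def handle_external_function(request_body: str) -> str:
    """Toy remote service for a Snowflake external function.

    Snowflake sends {"data": [[row_number, arg1, ...], ...]} and expects
    {"data": [[row_number, result], ...]} back, with every input row
    answered in order. This hypothetical service upper-cases one string
    argument per row.
    """
    rows = json.loads(request_body)["data"]
    results = [[row_number, text.upper()] for row_number, text in rows]
    return json.dumps({"data": results})

# Example request/response round trip:
request = json.dumps({"data": [[0, "raw"], [1, "input"]]})
response = handle_external_function(request)
# response decodes to {"data": [[0, "RAW"], [1, "INPUT"]]}
```

On the Snowflake side, such a service would be registered with `CREATE EXTERNAL FUNCTION` and an API integration, after which it can be called from SQL like any other function inside a pipeline.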