Are all the LLMs available only for text, or does Snowflake Cortex support models for other types of data such as images?
Snowflake Cortex is expanding beyond text-based LLMs. In March 2024, Snowflake announced support for multimodal LLMs, which means Cortex is incorporating models that can handle not just text but also images, and potentially other data formats.
Here's a breakdown of what this means:
- Multimodal LLMs: These models go beyond text and can reason across different data types. For instance, a multimodal model might analyze an image together with its accompanying text description to build a more complete understanding of the content.
- Snowflake's Partnership: Snowflake has partnered with Reka AI, whose multimodal models Reka Flash and Reka Core [1] are being made available within Snowflake Cortex, opening up new possibilities for data analysis.
The full range of supported data types beyond text isn't explicitly documented yet, but the introduction of multimodal LLMs signals a shift toward handling more than just text within Cortex.
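For context, the text-based models in Cortex are already callable through the SNOWFLAKE.CORTEX.COMPLETE SQL function, and a newly added multimodal model would most likely surface the same way, as an additional model identifier. The sketch below (Python, using the snowflake-connector-python package) assumes a Snowflake account with Cortex enabled in your region; the connection parameters and the model name are placeholders, so check the current model list in the documentation.

```python
# Minimal sketch: calling a Cortex LLM from Python through the documented
# SNOWFLAKE.CORTEX.COMPLETE SQL function. Connection parameters and the model
# name are placeholders -- available models vary by region and release.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",    # placeholder
    user="my_user",          # placeholder
    password="my_password",  # placeholder
    warehouse="my_wh",       # placeholder
)
try:
    cur = conn.cursor()
    # COMPLETE(model, prompt) returns the model's text response as a string.
    cur.execute(
        "SELECT SNOWFLAKE.CORTEX.COMPLETE(%s, %s)",
        ("mistral-large", "Summarize the benefits of multimodal LLMs in two sentences."),
    )
    print(cur.fetchone()[0])
finally:
    conn.close()
```

If the Reka models are exposed the same way, switching to them would presumably just mean passing a different model identifier to COMPLETE; verify the exact names against the Cortex documentation.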
Here are some potential use cases for image-based LLMs in Snowflake Cortex:
- Automated Image Captioning: Generate captions for product images in an e-commerce platform, improving accessibility and searchability.
- Content Moderation: Identify inappropriate content within images based on pre-defined criteria (see the sketch after this list).
- Image Classification and Tagging: Automatically categorize and tag images based on their content, facilitating image organization and retrieval.
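As a concrete sketch of the content-moderation idea: until the image-input interface for the multimodal models is publicly documented, one text-only stopgap is to run COMPLETE over image captions or alt-text that already live in a table and have the model flag risky entries. The PRODUCT_IMAGES table, its columns, and the SAFE/UNSAFE label scheme below are hypothetical; only the COMPLETE call itself is a documented Cortex function.

```python
# Sketch: text-only moderation over existing image captions stored in a
# hypothetical PRODUCT_IMAGES(image_id, caption) table. Once Cortex exposes
# image input for multimodal models, the caption itself could be generated
# by the model rather than read from a column.
import snowflake.connector

PROMPT = (
    "You are a content moderator. Answer with exactly one word, "
    "SAFE or UNSAFE, for the following image description: "
)

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password",  # placeholders
    warehouse="my_wh", database="my_db", schema="public",
)
try:
    cur = conn.cursor()
    cur.execute(
        """
        SELECT image_id,
               SNOWFLAKE.CORTEX.COMPLETE('mistral-large', %s || caption) AS verdict
        FROM product_images
        LIMIT 10
        """,
        (PROMPT,),
    )
    for image_id, verdict in cur.fetchall():
        print(image_id, (verdict or "").strip())
finally:
    conn.close()
```

The same pattern extends to the captioning and tagging use cases: the prompt changes, but the call shape stays a single COMPLETE per row.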
Remember, this is a rapidly evolving field. As Snowflake Cortex progresses, we can expect even more capabilities for working with diverse data types using LLMs.