April Product Launch

April brings multiple new metadata collection capabilities to the platform, including Collector enhancements for Snowflake and Databricks, and a new Collector for Amazon Managed Streaming for Kafka (MSK).

Read on to learn about these exciting new features!

Catalog Snowflake Streamlit Apps, Databricks Tags, and Amazon Managed Streaming for Kafka (MSK) assets

We’re excited to announce updates to the Snowflake and Databricks collectors to harvest more metadata, as well as new collector support for Amazon Managed Streaming for Kafka. These updates gather more metadata from these systems and seamlessly bring it into our platform. This metadata helps both technical and non-technical users discover and understand their data quickly, govern their data with greater context, and increase trust in data by providing information about data health and transformations.

All new features are generally available.

The Snowflake Collector harvests metadata from Streamlit in Snowflake

The Snowflake Collector now catalogs metadata from Streamlit in Snowflake, facilitating better governance, discovery, and utilization of Streamlit apps across your organization.

The metadata harvested for Streamlit apps includes comments, owners, creation date, and root location. From the platform, users can discover apps and navigate directly to the app in Snowflake.

An example of a Streamlit app

Databricks Collector harvests Databricks tags

The Databricks Collector now catalogs tags from Databricks catalogs, schemas, tables, and columns. Tags are used in Databricks to simplify the search and discovery of data assets. With these tags now in the platform, users can quickly discover data assets in Databricks. For instance, product teams can now build their data products in Databricks and identify them in the catalog.

An example of Databricks Tags

Amazon Managed Streaming for Kafka (MSK) Collector

The new Amazon Managed Streaming for Kafka (MSK) Collector catalogs metadata from Amazon MSK, helping maintain a comprehensive inventory of MSK assets, facilitating better governance, discovery, and utilization of data across your organization.

This collector harvests metadata for clusters, brokers, topics, consumers, and producers.
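
For context, cluster-level metadata like this is exposed by the MSK API. Below is a minimal Python sketch of how such metadata could be summarized, assuming boto3's `kafka` client; the sample values and the flattened field names are illustrative, not the collector's actual implementation, and the parsing runs on a sample response shaped like `list_clusters_v2` output so it works offline:

```python
# Sketch: summarizing basic MSK cluster metadata.
# In a live environment you would fetch the response with:
#   import boto3
#   response = boto3.client("kafka").list_clusters_v2()
# Here we parse a sample response of the same shape instead.

sample_response = {
    "ClusterInfoList": [
        {
            "ClusterName": "orders-events",  # hypothetical cluster
            "ClusterArn": "arn:aws:kafka:us-east-1:123456789012:cluster/orders-events/abc",
            "State": "ACTIVE",
            "ClusterType": "PROVISIONED",
        },
    ]
}

def summarize_clusters(response):
    """Flatten the API's cluster list into simple metadata records."""
    return [
        {
            "name": c.get("ClusterName"),
            "arn": c.get("ClusterArn"),
            "state": c.get("State"),
            "type": c.get("ClusterType"),
        }
        for c in response.get("ClusterInfoList", [])
    ]

records = summarize_clusters(sample_response)
print(records[0]["name"])  # orders-events
```

Topic-, consumer-, and producer-level metadata comes from the Kafka protocol itself rather than the AWS API, which is why the collector handles those asset types separately.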

An example collection from Amazon MSK

Start exploring today

These new collector updates help users understand where their data is sourced from, facilitating troubleshooting for analysts and increasing trust for business end users. Learn more in our product documentation.

March Product Launch

March brings a host of new capabilities to the platform, including a new Snowflake integration for Tag Syncing, two new collectors (Power BI Report Server, Amazon QuickSight), a highly requested interface improvement to better understand relationships, and a Chrome Extension for Hoots.

Read on to learn about these exciting new features!

Snowflake Tag Sync Automation [beta]

This automation allows users to edit and create new Snowflake tags within the platform and then sync those tags back to Snowflake.

Key Features:

  • Easily edit and create new Snowflake tags using the platform’s simple user interface
  • Sync edited or new Snowflake tags back to Snowflake with the push of a button
  • Display Snowflake tags in a new section titled "Snowflake Tags" on resource pages
  • The platform becomes the source of truth for Snowflake tags when this automation is enabled

Why integrate your Snowflake tags?

Creating and editing tags is a breeze within the platform’s UI. Snowflake tags are powerful governance tools that allow users to apply policies, control access, and discover resources. An easier way for users to create and edit tags via the platform means it’s easier to govern Snowflake resources.

Inside the platform, you can view Snowflake Tags on Snowflake resource pages, like this Column page. You can also sync the tags back to Snowflake with a simple push of a button.

The Snowflake Tag Sync Automation is currently in beta and is available as part of the Data Governance Premium offering. If you are interested in this feature, please reach out to your Customer Success Director and they will help enable the feature for you. You can read our product documentation here for full details.

Relationships as Fields

You asked, we listened. Our latest improvement streamlines the enrichment experience, making it easier to build, manage, and see important relationships. This capability allows metadata fields to be built using custom relationships between resource types, providing a more intelligent way to manage metadata and inspiring users to build relationships. For example, if you cataloged your Teams and Data Products, you might want to create a relationship showing which teams govern which products (screenshot below). Your users can then see and navigate to Team resources via Data Products, or vice versa.

These new types of fields help users see, understand, and navigate relationships and show the knowledge graph at work. This enhancement complements our flexible architecture that allows you to build custom Types, Fields, and Relationships - deciding what to catalog and how best your users might want to navigate the related resources. This feature embraces the philosophy that there are many kinds of relationships - some of which have an "attribute-like" utility, rather than just being a related object.

You can read more about how you might use this feature in our documentation. This is now available to all enterprise customers using either MDP or Catalog Toolkit for configuration. 

We hope you enjoy the new opportunities this enhancement brings to your catalog.

New collectors for Power BI Report Server and Amazon QuickSight

We’re excited to announce two new collectors: Power BI Report Server Collector and Amazon QuickSight Collector, which gather metadata from these systems and seamlessly bring it into the platform. This metadata helps both technical and non-technical users discover and understand their data quickly, govern their data with greater context, and increase trust in data by providing information about data health and transformations.

Both new collectors are available in Private Preview; please contact your Customer Success Director if you are interested in participating in a Private Preview program. More information is available in our product documentation: Power BI Report Server collector, Amazon QuickSight collector.

Hoots Browser Extension for Google Chrome

The extension for Google Chrome is now available in the Chrome Web Store, with the exciting new capability of automatically displaying Hoots badges on the data products where your organization’s users are using data and making decisions.

Now the valuable data trust signals, related glossary terms, and additional context from your catalog’s Hoots configuration are more easily displayed on your BI and analytics applications. With the Hoots Browser Extension, Hoots can be shown on Tableau, Power BI, Looker, and any other web-based application, and no integration is required to embed the Hoot display.

More information about the Hoots Browser Extension is available in our product documentation: Using Google Chrome Extension for Hoots.

January Product Launch

We are excited to announce the launch of new features and latest improvements:

  • Cloud Collectors - configure and run collectors hosted by the platform NEW
  • Support for Snowflake Data Quality - collect and catalog Snowflake Data Metric Functions (DMFs) NEW
  • Bulk operations UX improvements - streamlined bulk enrichment workflow IMPROVED
  • Enrichment and discovery UX improvements - more context and default sorting IMPROVED

Read the sections below for full details on each new feature!

NEW Introducing: Cloud Collectors!

We are excited to announce the launch of Cloud Collectors, the newest way to collect metadata on the platform!

Now, you can configure and run collectors that are hosted by the platform with just a few clicks! This feature not only provides a no-code way to start bringing metadata into your catalogs faster, it also has robust scheduling and monitoring functionality to make setup more transparent and seamless. If you have cloud-accessible data sources that you're ready to bring into your catalog, this feature is for you!

👩‍💼 How can I use Cloud Collectors?

Users with Admin access will see a new option in the collector setup wizard that says "Cloud."

Once you enter your source information, you will be able to set a custom name for your collector configuration, and set a schedule for how frequently the collector should run.

After a collector completes, you will see the metadata and resource types that were collected, as well as the source information you entered while setting up the collector. Here you will also find what might have gone wrong if the collector run failed, and you'll have the ability to cancel the run as well.

You can view all of the collectors you have set up, whether they are from collectors that you host or Cloud Collectors, on the Metadata Collection tab. From here, you can view, edit, and delete collector configurations. And if you're setting up multiple collectors for one source with the same credentials, try the "Duplicate Configuration" button to quickly set all of them up.

For a full list of supported sources and more details on the feature, please refer to the documentation here.

NEW Announcing support for Snowflake Data Quality

We are thrilled to introduce an exciting addition to our existing Snowflake collector – support for Snowflake’s brand-new Data Quality feature, currently available in private preview. This enhancement empowers users to elevate their data quality assessment to new levels.

Key Features:

📊 Collect and catalog Snowflake Data Metric Functions (DMFs): Users can now measure the quality of their data using Snowflake’s powerful "data metric functions" (DMFs) and catalog this context in the platform. Example DMFs include Null Count, Unique Count, and Freshness – providing comprehensive insights into the health of your data.

🔍 Find and understand data quality metrics: The DMFs and observations (recorded metrics) are seamlessly integrated into resource pages on your platform and are also presented as Hoots associated with Snowflake tables and views. This user-friendly interface makes it easy for individuals across your organization to discover and understand data quality metrics effortlessly.
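
As a rough illustration of what two of these metrics measure, here is a local Python sketch. This is not Snowflake's implementation – in Snowflake, DMFs run as SQL functions (the system DMF names in the comment are from Snowflake's documentation; the sample column is hypothetical):

```python
# Local illustration of two data metric functions over a column of values.
# In Snowflake these would be SQL calls, e.g. the system DMFs
# SNOWFLAKE.CORE.NULL_COUNT and SNOWFLAKE.CORE.UNIQUE_COUNT.

def null_count(values):
    """Number of missing (None) entries in the column."""
    return sum(1 for v in values if v is None)

def unique_count(values):
    """Number of distinct non-null values in the column."""
    return len({v for v in values if v is not None})

# Hypothetical column with two nulls and two distinct emails.
email_column = ["a@x.com", None, "b@x.com", "a@x.com", None]
print(null_count(email_column))    # 2
print(unique_count(email_column))  # 2
```

Once cataloged, these recorded measurements are what surface as observations on resource pages and in Hoots.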

Why Snowflake Data Quality?

🌐 Compliance & Consistency: In today's data-driven landscape, ensuring compliance and consistency is paramount. This Data Quality feature & integration help you meet these standards by offering real-time insights into critical data metrics.

🔒 Build Trust: Trust is the foundation of effective data utilization. This Data Quality integration helps users trust their data by bringing metrics related to freshness, blank values, and inaccuracies to the catalog and everyday tools, such as Tableau and Power BI, via Hoots.

Who Benefits?

👩‍💼 Data Stewards, Engineers, Admins: Empower your data stewards and technical teams by providing them with a tool that gives immediate insights into the current state of their data based on specific metrics.

🚨 Data Consumers: With Hoots, you can identify and take swift action on tables and views that require attention, ensuring data quality monitoring is seamlessly integrated with considerations for cost, consistency, and performance.

Experience a new era of data quality and reliability with the platform’s support for Snowflake's Data Quality today!

Note: Snowflake Data Quality is an enhancement within the existing Snowflake collector, and is currently available to Snowflake Private Preview customers. ❄️🚀

Using Hoots, users can quickly see data quality issues, like duplicate data, and easily fix the errors.

Improvements and Enhancements

IMPROVED Improvements to Bulk Operations UX

Bulk operations are a crucial part of keeping a catalog updated and accurate. We're excited to announce some improvements that will streamline and accelerate bulk operations such as bulk editing tags and attributes and bulk moving resources between collections.

First, we have consolidated these operations into a single menu for each place you can initiate a bulk operation (the Glossary tab, the Resources tab, and the Collection Contains tab). Now you can Quick edit, Add resources to collections, and Export/Import resources from all three locations.

Next, we've added the granular selection experience, which previously existed only in Quick edit, to the Export/Import spreadsheet flow as well. This is available on all three entry points (Glossary, Resources, Collections), which should significantly reduce the time it takes to make changes via the spreadsheet option.

Finally, we've simplified and clarified the experience around moving resources between collections. Previously this experience only existed within the Quick edit flow, but now you can select 'Add to Collections' or 'Move or Add Collections' to access this functionality. From the Glossary and Resources tab, you'll be able to add resources to one or multiple collections, and from the Collection tab (example below), you'll be able to add resources to one or multiple collections, or move resources from one or all collections to one or multiple collections.

With these improvements, administrators and curators will be able to perform bulk operations on resources much more quickly. For more information, please refer to the documentation for bulk editing resources here and for bulk editing glossary here.

IMPROVED Added context in various search experiences

The suggested search dropdown now has more context, including the list of collections, owning Organization or User profile, and more. We’ve also added more context to the search experience when a user is relating one resource to another. This added context makes it easier to see and understand what has already been added.

IMPROVED Default sorting improvement + column index sort

We’ve provided a default sort experience that makes scanning the related, contained, and column resources faster. We also added column index as a sort option so users can understand the original column order from the database.

IMPROVED Expansion of the Summary field

The Summary field is now available on all resource types out-of-the-box, without the need for configuration.

IMPROVED Rich Text Editing without Markdown

Multi-line fields on catalog resources support Rich Text for more engaging and understandable content, and now these fields can be edited in a What-You-See-Is-What-You-Get (WYSIWYG) user experience rather than users having to create and edit content using Markdown.

Markdown editing is still available for users that prefer it, but now more data owners and users can create compelling rich text content.

Offline Editing for Columns by Table

Catalog administrators and stewards can now more easily operate on Column resources in bulk with the spreadsheet export/import flow. Previously, Columns could only be selected for bulk operations at the Collection level. Now, users can export all Columns of a parent Table, edit attributes in the spreadsheet, and upload changes.

The new functionality is accessible for users with correct access via the 'Columns' tab on a parent Table's resource page (example shown below).

For more information, please refer to the documentation.

Granular Filtering, Search, and Selection for Bulk Operations

We are thrilled to announce new functionality for the Quick Edit feature to support all of the bulk operations necessary to keep catalogs fresh and accurate. 

Previously for Quick Edit, users could only filter a set of resources by Resource Type. But now users can leverage search facets, advanced filtering, and text search capabilities available in other parts of the platform. Users can also perform multiple searches and apply multiple filters to continually add resources to the selection without restarting each time. This will streamline bulk operations by allowing users to more seamlessly select the exact set of resources intended for bulk enrichment and editing.

These capabilities are now available wherever Quick Edit lives: Glossary, Resources, and Collections. They will appear once you select either "Quick Edit" for Glossary, or the "Edit Multiple Resources" entry point for Resources and Collections, shown below:

In a future release, we will enable these capabilities for the Bulk Upload/Edit feature as well, making offline editing more targeted and effective.

For more information, please refer to the documentation for Glossary Quick Edit, Resources Quick Edit, and Collections Quick Edit.

A list of notable enhancements across the data catalog!

We're excited to introduce some powerful improvements and enhancements. Here's a list of our latest releases to the enterprise data catalog.

1) Archie Bots - description generator enhancement

Archie Bots can now effortlessly describe all types of catalog resources, including custom resources. This improvement saves you time enriching your catalog, improving discoverability and understandability. You can read more about Archie Bots here.

2) Improvements to UX and increased max character count of descriptions

Enjoy getting wordy! We've increased the maximum character count of the Description field to 5000, allowing for more comprehensive and detailed information. We've also included markdown support in the hover-over view of descriptions and increased the view window size in search results.

3) Improvements to the search and navigation of Glossary terms

Users can now quickly filter by the first letter, making it easier to locate and manage terms. We've also made improvements to how special characters are sorted in the glossary, ensuring a more intuitive and organized experience. 

4) Now you can query the catalog layers

Customers can now query the layers of the graph using a named graph called :current. This feature federates your source data and catalog enrichments into one queryable graph, simplifying data exploration across catalog layers and allowing for easier exploration and analysis of your data assets. You can read more about the catalog layers and how to query them here.
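
To make this concrete, here is a hedged sketch of building a SPARQL query against the :current named graph. The predicate and the helper are illustrative only – consult the product documentation for the catalog's actual vocabulary and endpoint:

```python
# Sketch: a SPARQL query over the :current named graph, which
# federates source metadata and catalog enrichments into one graph.

def build_layers_query(limit=10):
    """Return a SPARQL query string scoped to the :current graph.

    The dcterms:title predicate is an assumption for illustration;
    the :current prefix is resolved by the platform's endpoint.
    """
    return f"""
    PREFIX dcterms: <http://purl.org/dc/terms/>
    SELECT ?resource ?title
    WHERE {{
      GRAPH :current {{ ?resource dcterms:title ?title }}
    }}
    LIMIT {limit}
    """

query = build_layers_query(limit=5)
print(":current" in query)  # True
```

Because both layers live in one queryable graph, a single query like this can return a resource harvested from a source system alongside enrichments added in the catalog.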

We hope these enhancements empower you to make the most of your enterprise data catalog. Stay tuned for more exciting updates in the future!

Improvements to Metadata Collectors Page and Collector Wizard

To make collector setup faster and easier for catalog administrators, the Metadata Collectors page and Command Builder Wizard now support saving, editing, and deleting collector configurations for on-premise collectors.

Some of the new functionality in this release includes:

  • Collectors configured from the UI will be saved and viewable, even before collectors are run. Previously, collector configurations were not saved for later use.
  • Collector configurations can be edited and deleted.
  • Users can give collector configurations custom names.
  • A new “Catalog metadata sources” table shows all collectors that are bringing metadata into the catalog.

For more information, refer to the documentation here.

Announcing Enhanced Email Notification Options

Visit your notifications settings page to customize the transactional emails you receive from the platform.

You can choose to:

  • Turn off all non-essential email communications
  • Unsubscribe from a category of email notifications
  • Customize which digests you receive
  • Customize dataset and project activity notifications

Learn more

Improvements to the Metadata Collectors Page and CLI Command Builder

We are thrilled to announce the General Availability of the Metadata Collectors page and CLI Command Builder tool! In addition, we've introduced the ability for users to create, manage, and delete Service Account tokens. These three features empower catalog administrators to more quickly set up on-premises collectors so your catalog users can start discovering and understanding your data faster. In addition, seeing all the collectors (on-premises or cloud) that are bringing metadata into your catalogs allows you to maintain and govern your catalog more effectively.

For more information on these features, continue reading below.

Metadata Collectors Page: found in the Settings tab of an Organization, this page shows all of the collectors that are currently appearing in your catalog and other important information, such as the last time the collector ran. This page also includes cloud collectors set up via Connection Manager. For more information, refer to the documentation.

The CLI Command Builder allows users to step through a wizard to set up on-premises collectors. The wizard generates either a CLI command or a YAML file, so users can more quickly set up collectors during implementation. Since the BETA release, we've streamlined the form fields to more clearly differentiate required fields from optional fields. For more information, refer to the documentation (available sources are denoted as "collector wizard available").

Service Accounts: administrators can now create, refresh (edit the expiration date), and delete service accounts from the UI. From the wizard, there is a "Create a service account" link that will take you to the "Service accounts" tab in the Settings page, and clicking on the "Add service account" button will generate an API token. We recommend using service accounts when setting up a collector, so the configurations aren't tied to user accounts. For more information, refer to the documentation.

Announcing Azure Data Lake Storage Gen 2 Collector and Databricks Collector Lineage and Jobs

We’re excited to announce new enhancements to the platform’s Databricks Collector and a brand new Collector for Azure Data Lake Storage Gen 2! With the help of these additional metadata harvesting and lineage capabilities, you can now get more detailed insights into your data than ever before.

Our Databricks Collector allows you to quickly and easily collect metadata from your Databricks environment into the platform. Now, with the addition of Jobs harvesting and lineage capabilities, you can get a deeper understanding of where your data is coming from, how it’s being used, and what insights you can discover.

Our new Jobs harvesting feature allows you to collect additional information about your workflows, such as creator, description, success, schedule, and more. This lets you better understand how and why your data was transformed.

The new lineage capabilities let you track your data’s journey, from its source all the way through its transformations. This means you can easily trace your data’s history, identify potential bottlenecks or sources of errors, and quickly gain an understanding of how your data has changed over time.

Our Azure Data Lake Storage Gen 2 Collector allows you to bring insights about your data storage layer into the platform. With this Collector, you can efficiently harvest metadata about Blobs and Containers, including the owner, last modified date, path, and more. This information is vital for understanding your underlying data, leading to more trust and confidence in your data-driven decision-making.

You can learn more about these features in our Databricks documentation and our Azure Data Lake Storage documentation. Both of these Collectors are Tier 2 for Enterprise Customers.

An image showing an ADLS Blob in the platform

An example of ADLS Blob metadata in the platform
