Offline Editing for Columns by Table

Catalog administrators and stewards can now more easily operate on Column resources in bulk with the spreadsheet export/import flow. Previously, Columns could only be selected for bulk operations at the Collection level. Now, users can export all Columns of a parent Table, edit attributes in the spreadsheet, and upload changes.

The new functionality is available to users with the appropriate permissions via the 'Columns' tab on a parent Table's resource page (example shown below).

For more information, please refer to the documentation.

Granular Filtering, Search, and Selection for Bulk Operations

We are thrilled to announce new functionality for the Quick Edit feature to support all of the bulk operations necessary to keep catalogs fresh and accurate. 

Previously, Quick Edit users could only filter a set of resources by Resource Type. Now, users can leverage the search facets, advanced filtering, and text search capabilities available in other parts of the platform. Users can also perform multiple searches and apply multiple filters to continually add resources to the selection without restarting each time. This streamlines bulk operations by letting users select exactly the set of resources intended for bulk enrichment and editing.

These capabilities are now available wherever Quick Edit lives: Glossary, Resources, and Collections. They will appear once you select either "Quick Edit" for Glossary, or the "Edit Multiple Resources" entry point for Resources and Collections, shown below:

In a future release, we will enable these capabilities for the Bulk Upload/Edit feature as well, making offline editing more targeted and effective.

For more information, please refer to the documentation for Glossary Quick Edit, Resources Quick Edit, and Collections Quick Edit.

A list of notable enhancements across the data catalog!

We're excited to introduce some powerful improvements and enhancements. Here's a list of our latest releases to the enterprise data catalog:

1) Archie Bots - description generator enhancement

Archie Bots can now effortlessly describe all types of catalog resources, including custom resources. This improvement saves you time enriching your catalog, improving discoverability and understandability. You can read more about Archie Bots here.

2) Improvements to UX and increased max character count of descriptions

Enjoy getting wordy! We've increased the maximum character count of the Description field to 5000, allowing for more comprehensive and detailed information. We've also included markdown support in the hover-over view of descriptions and increased the view window size in search results.

3) Improvements to the search and navigation of Glossary terms

Users can now quickly filter Glossary terms by their first letter, making it easier to locate and manage terms. We've also improved how special characters are sorted in the glossary, ensuring a more intuitive and organized experience.

4) Now you can query the catalog layers

Customers can now query the layers of the graph using a named graph called :current. This feature federates your source data and catalog enrichments into one queryable graph, simplifying exploration and analysis of your data assets across catalog layers. You can read more about the catalog layers and how to query them here.
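
As an illustration, a minimal SPARQL query against the :current named graph might look like the sketch below. Only the :current graph name comes from this release; the triple patterns are generic:

```sparql
# Select a sample of triples from the :current named graph,
# which federates source data and catalog enrichments.
SELECT ?subject ?predicate ?object
WHERE {
  GRAPH :current {
    ?subject ?predicate ?object .
  }
}
LIMIT 10
```

Because :current is just a named graph, the same pattern can be combined with any other graph patterns or filters your analysis needs.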

We hope these enhancements empower you to make the most of your enterprise data catalog. Stay tuned for more exciting updates in the future!

Improvements to Metadata Collectors Page and Collector Wizard

To make collector setup faster and easier for catalog administrators, the Metadata Collectors page and Command Builder Wizard now support saving, editing, and deleting collector configurations for on-premise collectors.

New functionality in this release includes:

  • Collectors configured from the UI will be saved and viewable, even before collectors are run. Previously, collector configurations were not saved for later use. 
  • Collector configurations can be edited and deleted.
  • Users can give collector configurations custom names.
  • A new “Catalog metadata sources” table shows all collectors that are bringing metadata into the catalog.

For more information, refer to the documentation here.

SQL Server Reporting Services (SSRS) support for metadata collection is now live!

Announcing our newest metadata collector - SQL Server Reporting Services (SSRS)! This collector is designed to provide you with an effective solution for extracting metadata from your SSRS environment into your catalog. Our integration facilitates the automated extraction, organization, and presentation of specific metadata elements from your SSRS system. You'll gain valuable insights into your datasets, data sources, folders, KPIs, reports, and linked reports – all within your easily navigable catalog. 

With the SSRS collector, you can:

  • Learn more about your reports and data, including who created a report or dataset and when they were last updated, helping you understand and trust your data
  • See the lineage of which datasets were used in a report, allowing you a comprehensive view of the data flowing into a report
  • Keep track of KPIs from SSRS and integrate them with business metrics from other source systems, all within one easy-to-use catalog, leading to better data-informed decisions

Are you ready to unlock the potential of your SQL Server Reporting Services? You can read more about how this collector works and all it harvests in the documentation. This collector is Tier 2 for Enterprise customers and is available in dwcc version 2.151 and later.

An example of metadata from an SSRS Report, including Lineage:

Announcing Enhanced Email Notification Options

Visit your notification settings page to customize the transactional emails you receive from the platform.

You can choose to:

  • Turn off all non-essential email communications
  • Unsubscribe from a category of email notifications
  • Customize which digests you receive
  • Customize dataset and project activity notifications

Learn more in the documentation.

Improvements to the Metadata Collectors Page and CLI Command Builder

We are thrilled to announce the General Availability of the Metadata Collectors page and CLI Command Builder tool! In addition, we've introduced the ability for users to create, manage, and delete Service Account tokens. These three features empower catalog administrators to set up on-premises collectors more quickly, so your catalog users can start discovering and understanding your data faster. Seeing all the collectors (on-premises or cloud) that are bringing metadata into your catalogs also allows you to maintain and govern your catalog more effectively.

For more information on these features, continue reading below.

Metadata Collectors Page: found in the Settings tab of an Organization, this page shows all of the collectors that are currently appearing in your catalog and other important information, such as the last time the collector ran. This page also includes cloud collectors set up via Connection Manager. For more information, refer to the documentation.

The CLI Command Builder allows users to step through a wizard to set up on-premises collectors. The wizard generates either a CLI command or a YAML file, so users can more quickly set up collectors during implementation. Since the BETA release, we've streamlined the form fields to more clearly differentiate required fields from optional fields. For more information, refer to the documentation (available sources are denoted as "collector wizard available").
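
For illustration only, a wizard-generated YAML file might resemble the sketch below. Every field name here is hypothetical; the wizard's actual output is the authoritative format:

```yaml
# Hypothetical sketch of a saved collector configuration.
# Actual field names are produced by the CLI Command Builder wizard.
name: production-warehouse-collector   # custom configuration name
source-type: snowflake                 # the metadata source being harvested
schedule: weekly                       # how often the collector runs
credentials:
  service-account: svc-collector       # recommended over personal user accounts
```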

Service Accounts: administrators can now create, refresh (edit the expiration date), and delete service accounts from the UI. From the wizard, there is a "Create a service account" link that will take you to the "Service accounts" tab in the Settings page, and clicking on the "Add service account" button will generate an API token. We recommend using service accounts when setting up a collector, so the configurations aren't tied to user accounts. For more information, refer to the documentation.

Announcing support for Confluent Kafka metadata

Announcing our newest metadata collector - Confluent Kafka! We know how important it is to have the most up-to-date streaming data, so we’ve created this collector to allow you to easily monitor and collect Kafka metadata from your Confluent streaming platform. 

With Kafka metadata in your catalog, you and your teams can:

  • Easily discover and monitor streaming metadata for real-time applications
  • Understand what is being streamed from on-prem and cloud Confluent
  • Have a single source of truth for your Confluent schemas for better discovery and governance

The Confluent Collector is actually two collectors: one for Confluent Platform (on-prem) and one for Confluent Cloud. With these collectors, you can capture, store, and analyze metadata including Cluster, Consumer, Producer, Broker, Partition, Schema, Consumer Group, Topic, and Environment (for Cloud). The collectors can optionally harvest metadata from Avro, JSON-schema, and Protobuf schemas stored in Confluent Schema Registry.
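
For context, here is a small, hypothetical Avro schema of the kind the collectors can harvest from Confluent Schema Registry (the record and field names are invented for illustration):

```json
{
  "type": "record",
  "name": "PageViewEvent",
  "namespace": "com.example.streaming",
  "doc": "A hypothetical event schema registered in Confluent Schema Registry.",
  "fields": [
    { "name": "user_id", "type": "string" },
    { "name": "page_url", "type": "string" },
    { "name": "viewed_at", "type": { "type": "long", "logicalType": "timestamp-millis" } }
  ]
}
```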

These Collectors are Tier 2 for Enterprise Customers. You can read the full documentation for Confluent Platform here and for Confluent Cloud here.

An example of metadata for an Avro Schema in the platform

Announcing Azure Data Lake Storage Gen 2 Collector and Databricks Collector Lineage and Jobs

We’re excited to announce new enhancements to our Databricks Collector and a brand new Collector for Azure Data Lake Storage Gen 2! With these additional metadata harvesting and lineage capabilities, you can now get more detailed insights into your data than ever before.

Our Databricks Collector allows you to quickly and easily collect metadata from your Databricks environment into your catalog. Now, with the addition of Jobs harvesting and lineage capabilities, you can get a deeper understanding of where your data is coming from, how it’s being used, and what insights you can discover.

Our new Jobs harvesting feature allows you to collect additional information about your workflows, such as creator, description, success, schedule, and more. This lets you better understand how and why your data was transformed.

The new lineage capabilities let you track your data’s journey, from its source all the way through its transformations. This means you can easily trace your data’s history, identify potential bottlenecks or sources of errors, and quickly gain an understanding of how your data has changed over time.

Our Azure Data Lake Storage Gen 2 Collector allows you to bring insights about your data storage layer into your catalog. With this Collector, you can efficiently harvest metadata about Blobs and Containers, including the owner, last modified date, path, and more. This information is vital for understanding your underlying data, leading to more trust and confidence in your data-driven decision-making.

You can learn more about these features in our Databricks documentation and our Azure Data Lake Storage documentation. Both of these Collectors are Tier 2 for Enterprise Customers.

An example of ADLS Blob metadata in the platform

Usage and Audit Events now available as a Snowflake Marketplace Private Listing

As a Snowflake Powered By partner, we are proud to announce that usage and audit event data, previously only available as a dataset, is now available as a Snowflake Marketplace Private Listing. This gives customers access to their full history of events data via the Snowflake Data Cloud, enabling high-performance and advanced analytic functions on this data. It also makes events and logging data available via Snowflake with no ETL required, for integration in a wide variety of use cases. To read more about this capability and how to request access to a Private Listing, please see our documentation here.
