5 posts tagged with "security"

Apache Gravitino 1.1.0 - An AI-native metadata management platform

· 6 min read
Qi Yu
PMC Member

We are glad to announce the release of Apache Gravitino 1.1.0! This release builds upon the solid foundation laid by Apache Gravitino 1.0.0, introducing a range of new features, improvements, and bug fixes that enhance the platform's capabilities, performance, and security.

Highlights

  • Broader catalog support (initial Lance REST service, a reusable lakehouse-generic catalog, and Hive3) to simplify integration with diverse lakehouse deployments.
  • Stronger metadata-level authorization and security hardening for the Iceberg REST surface.
  • Multi-cluster fileset support and Python client improvements for real-world multi-region and migration workflows.
  • Stability, performance and observability work across the entity-store, caches, scan planning, connectors and CI — reducing operational friction and test flakiness.

New Features

  1. Built for the Future of AI Data: Lance REST service. #8889

As AI and ML workflows become central to data platforms, efficient access to vector data is crucial. The new Lance REST service exposes Lance datasets through a managed HTTP interface. This allows remote clients—such as inference services or notebooks—to access vector data with the high performance of the Lance format, all while adhering to Apache Gravitino's centralized security and governance policies.

  2. Generic lakehouse catalog. #8828

The lakehouse ecosystem is diverse and rapidly evolving, with new table formats and engines emerging frequently. To keep pace, we introduced a generic lakehouse catalog framework. This abstraction reduces the boilerplate code required to integrate new engines, standardizing how capabilities are negotiated and how namespaces are handled. This means faster support for new formats and a more consistent experience for developers and users alike.
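One way to picture such a framework is a small catalog interface where each engine declares its capabilities; the class and method names below are invented to illustrate the abstraction, not the framework's real classes:

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of a generic lakehouse catalog abstraction; the names
# here are illustrative only, not Gravitino's actual framework API.

class LakehouseCatalog(ABC):
    @abstractmethod
    def capabilities(self):
        """Return the set of feature names this engine supports."""

    def supports(self, feature):
        # Capability negotiation: callers probe before invoking a feature.
        return feature in self.capabilities()

class ExampleIcebergLikeCatalog(LakehouseCatalog):
    def capabilities(self):
        return {"namespaces", "snapshots", "scan-planning"}
```

A new engine then only implements the abstract surface, and capability checks stay uniform across integrations.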

  3. Access control for Iceberg REST service. #4290

The Iceberg REST catalog is becoming the standard for open table access, but production use demands robust security. We have hardened the Iceberg REST service with comprehensive authentication and authorization checks. This ensures that data accessed via standard Iceberg clients is fully protected, making Apache Gravitino a secure choice for multi-tenant and public-facing data lake deployments.

  4. Hive 3 catalog support. #5912

Many enterprises still rely on Hive 3 for their core data infrastructure, making migration a risky and complex endeavor. This feature allows users to register existing Hive 3 metastores directly as Apache Gravitino catalogs. By doing so, organizations can instantly bring their legacy data under Apache Gravitino's unified governance and management umbrella without moving data or disrupting existing workloads, paving the way for a smoother transition to modern lakehouse architectures.

  5. Multiple HDFS clusters support. #9117, #9288

In large-scale production environments, data is often distributed across multiple HDFS clusters to ensure isolation and disaster recovery. Previously, Apache Gravitino was limited in how it handled these complex topologies. With this release, users can manage filesets across multiple HDFS clusters within a single Apache Gravitino instance. This capability simplifies cross-cluster data management, improves resource isolation, and provides greater flexibility for multi-tenant architectures.
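As a rough illustration of the idea, a fileset that spans clusters can be modeled as a set of named storage locations resolved at access time; the location names, the "primary" default, and the structure below are hypothetical, not Gravitino's actual fileset API:

```python
# Hypothetical sketch: a fileset exposing named locations on different HDFS
# clusters. Cluster names and the "primary" default are illustrative only.

FILESET_LOCATIONS = {
    "primary": "hdfs://cluster-a:8020/warehouse/sales",  # main cluster
    "dr": "hdfs://cluster-b:8020/backup/sales",          # disaster-recovery cluster
}

def resolve_location(locations, name=None):
    """Return the storage URI for a named location, defaulting to 'primary'."""
    key = name or "primary"
    if key not in locations:
        raise KeyError(f"unknown location: {key}")
    return locations[key]
```

A caller that needs the disaster-recovery copy passes the location name explicitly; everything else falls through to the default.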

  6. Metadata authorization for IRC, statistics, tags, jobs, and policies. #4361, #8752, #8944, #8943

True governance requires securing every aspect of the metadata platform. We have expanded fine-grained authorization to cover auxiliary resources like tags, statistics, and background jobs. This enhancement closes previous security gaps, ensuring that all user interactions with the system—whether viewing statistics or managing tags—are strictly governed by least-privilege policies.

  7. New Iceberg REST endpoints. #6336

To support the full range of capabilities expected by modern analytics tools, we have implemented additional endpoints from the Iceberg REST specification. This improves compatibility with the latest query engines and clients, ensuring that users can leverage advanced planning and catalog operations without running into compatibility issues.

Improvements

Core & Server

  • Entity store and Cache: Fixed several performance and logic issues to improve stability and speed. #8697, #8743, #8815, #8817, #8710, #9148, #7916, #8546
  • Metrics: Expose more metrics for server and catalogs to enhance observability. #8594
  • Authorization: Refined permission checks. #7942
  • Resource management: Improved resource release and closure mechanisms to prevent leaks. #8981, #9002, #8999
  • JDBC metric store: Support storing Iceberg metrics in JDBC. #8899
  • Job system enhancement: Support job alteration. #8638, #8814

Catalogs & Connectors

  • Iceberg catalog: Support metadata cache. #8314
  • Upgrade Iceberg to 1.10.0 to support scan planning. #9046
  • Improve dynamic config provider for better usability. #8970
  • Fileset catalog: Prevented filesystem instances from hanging for a long time. #9280
  • Trino connector: Support SQL UPDATE/DELETE/MERGE. #8241
  • Fix getTableStatistics in GravitinoMetadata. #9100

Clients

  • GVFS client: Improved stability and error handling. #8752, #8882, #8948, #8953
  • Fileset bundle JARs: Refactored for a more detailed delivery strategy. #9106
  • Python client: Added support for relational catalog. #5198

Developer Experience & Operations

  • Helm chart: Enhanced configuration options and stability. #8747, #8174
  • GitHub templates: Added templates to support AI coding. #9227
  • Tests: Refactoring and enhancement of test suites. #9223, #9107
  • Docker: Changed Apache Gravitino Docker base image. #8817
  • Code Style: Upgrade Google Java Format to support JDK 17. #8792

Frontend Updates

  • Added pagination for files list. #8987
  • Displayed the index type in UI. #6997
  • Upgraded dependabot affected versions. #9357
  • Fixed routing issue where path '/' may not route to 'metalakes'. #9354

Bug Fixes

  • Create topic encounters NoSuchTopicException when Kafka is deployed with 3 brokers on EKS. #4168
  • Apache Gravitino IRC server returns java.lang.NoSuchMethodError: void org.apache.hadoop.security.HadoopKerberosName.setRuleMechanism. #8754
  • Several bugs in SQL provider. #8659, #9166
  • Unknown error when using fsspec through JNI. #8858

Many additional bug fixes are not listed here due to space constraints. Please refer to the full list of issues and pull requests merged since the 1.0.0 release for more details.

Acknowledgements

Thanks to everyone who contributed to the 1.1.0 work — code, reviews, tests, issue triage, design, and feedback. Below is a consolidated list of contributor GitHub IDs extracted from issue and PR activity.

Apache, Apache Flink, Apache Hive, Apache Hudi, Apache Iceberg, Apache Ranger, Apache Spark, Apache Paimon and Apache Gravitino are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.

Apache Gravitino 1.0.1 - Release Notes

· 3 min read
Minghuang Li
committer

We are pleased to announce the release of Gravitino 1.0.1. This version introduces comprehensive support for job template alterations, along with significant improvements and bug fixes across the core engine, various catalogs, and clients.

Major Features & Improvements

Job and Job Template

Gravitino Core

  • Refactored tag operations by leveraging the entity store's relation operations. #7916
  • Made several optimizations to the Caffeine cache, including adjusting weight values, resolving a performance issue with reverseIndex, and prioritizing the eviction of tags and policies when the cache is full. #8697, #8743, #8815, #8871, #8937

Catalogs

  • Kafka: Fixed an issue where topic creation was asynchronous, ensuring the operation is now synchronous. #4168
  • Iceberg: Fixed a failure in starting the Iceberg REST server within a Docker environment. #8733
  • Doris, StarRocks, PostgreSQL: Fixed incorrect parsing of column default values and types for these data sources. #8277

Python Client

  • Added metadata objects to the Python client. #8627
  • Fixed an incorrect credential URL and a fileset test issue on GCS. #8935, #8969

Authorization

  • Authorization is supported for the testCatalogConnection operation. #7893

Web UI

  • Fixed an issue with reconfiguring submission parameters when creating a catalog. #8694
  • Added pagination support for the fileset file list. #8987

Bug Fixes

  • Fixed a Null Pointer Exception (NPE) in TableFormat.java when a user has no roles. #8202
  • Corrected exception handling in the setPolicy operation. #8661
  • Fixed missing policy operations in the OpenAPI entry point. #8706
  • Fixed a build failure in the gvfs-fuse module. #8830
  • Fixed an issue where the hard deletion of statistics would fail. #9038
  • Corrected index names for statistics and job names in the database upgrade script. #8979
  • Fixed an error in deletePolicyAndVersionMetasByLegacyTimeline. #9031
  • Fixed an issue where roles were not updated when a table was deleted. #8824

Credits

We would like to thank the following contributors for their valuable contributions to this release:

@dyrnq @yuqi1129 @LauraXia123 @jerryshao @danhuawang @playasim @keepConcentration @KayMas2808 @jerqi @mchades @HugoSalaDev @FANNG1 @diqiu50 @hdygxsj @tsungchih

Apache Gravitino 1.0.0 - From Metadata Management to Contextual Engineering

· 8 min read
Jerry Shao
PMC Member

Apache Gravitino was designed from day one to provide a unified framework for metadata management across heterogeneous sources, regions, and clouds—what we define as the metadata lake (or metalake). Throughout its evolution, Gravitino has extended support to multiple data modalities, including tabular metadata from Apache Hive, Apache Iceberg, MySQL, and PostgreSQL; unstructured assets from HDFS and S3; streaming and messaging metadata from Apache Kafka; and metadata for machine learning models. To further strengthen governance in Gravitino, we have also integrated advanced capabilities, including tagging, audit logging, and end-to-end lineage capture.

After all enterprise metadata has been centralized through Gravitino, it forms a data brain: a structured, queryable, and semantically enriched representation of data assets. This enables not only consistent metadata access but also knowledge grounding, contextual reasoning, tool use, and more. As we approach the 1.0 milestone, our focus shifts from pure metadata storage to metadata-driven contextual engineering—a foundation we call the Metadata-driven Action System, which provides the building blocks for contextual engineering.

The release of Apache Gravitino 1.0.0 marks a significant engineering step forward, with robust APIs, extensible connectors, enhanced governance primitives, improved scalability and reliability in distributed environments. In the following sections, I will dive into the new features and architectural improvements introduced in Gravitino 1.0.0.

Metadata-driven action system

In version 1.0.0, we introduced three new components that enable us to build jobs to accomplish metadata-driven actions, such as table compaction, TTL data management, and PII identification. These three new components are: the statistics system, the policy system, and the job system.

Taking table compaction as an example:

  • Firstly, users can define the table compaction policy in Gravitino and associate this policy with the tables that need to be compacted.
  • Then, users can save the statistics of the table to Gravitino.
  • Also, users can define a job template for the compaction.
  • Lastly, users can use the statistics with the defined policy to generate the compaction parameters and use these parameters to trigger a compaction job based on the defined job templates.
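The steps above can be sketched as a small decision routine that turns statistics plus a policy into job parameters; the statistic and policy field names here are invented for illustration and are not Gravitino's real schema:

```python
# Hedged sketch of the compaction workflow: statistics + policy -> job
# parameters. Field names (small_file_count, max_small_files, ...) are
# hypothetical, not Gravitino's actual statistics/policy format.

def should_compact(stats, policy):
    """Decide whether the table's statistics violate the policy thresholds."""
    return (stats["small_file_count"] >= policy["max_small_files"]
            or stats["avg_file_size_mb"] < policy["min_avg_file_size_mb"])

def compaction_params(stats, policy):
    """Derive the parameters that would be fed into a registered job template,
    or None when no compaction is needed."""
    if not should_compact(stats, policy):
        return None
    return {"target_file_size_mb": policy["target_file_size_mb"]}
```

The returned parameters would then be passed when triggering a job from the template, closing the loop between policy, statistics, and the job system.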

Statistics system

The statistics system is a new component for storing and retrieving statistics. You can define and store table- and partition-level statistics in Gravitino, and fetch them through Gravitino for different purposes.

For the details of how we design this component, please see #7268. For instructions on using the statistics system, refer to the documentation here.
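Conceptually, the flow is store-then-fetch of name/value statistics scoped to a table or partition; here is a minimal in-memory sketch of that flow, standing in for the Gravitino statistics API, whose actual classes and methods differ:

```python
# Minimal in-memory sketch of the statistics store/fetch flow; illustrative
# only, not the Gravitino statistics API.

class StatisticsStore:
    def __init__(self):
        self._stats = {}  # (table, statistic name) -> value

    def update(self, table, stats):
        """Store or overwrite named statistics for a table."""
        for name, value in stats.items():
            self._stats[(table, name)] = value

    def fetch(self, table):
        """Return all statistics recorded for a table."""
        return {name: v for (t, name), v in self._stats.items() if t == table}
```

A consumer such as a compaction policy would call fetch and compare the values against its thresholds.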

Policy system

The policy system enables you to define action rules in Gravitino, such as compaction rules or TTL rules. A defined policy can be associated with metadata, meaning the rules will be enforced on the dedicated metadata. Users can leverage these enforced policies to decide how to trigger an action on that metadata.

Please refer to the policy system documentation to know how to use it. For more information on the policy system's implementation details, please refer to #7139.

Job system

The job system is another feature that allows you to submit and run jobs through Gravitino. Users can register a job template, then trigger a job based on that template. Gravitino submits the job to the dedicated job executor, such as Apache Airflow, manages the job lifecycle, and persists the job status. With the job system, users can run self-defined jobs to carry out metadata-driven actions.

In version 1.0.0, we have an initial version that supports running jobs as a local process. If you want to know more about the design details, you can follow issue #7154. User-facing documentation can also be found here.
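A job template is essentially a parameterized job definition that is registered once and triggered many times; the sketch below shows that register-then-trigger idea with hypothetical template fields and a ${...} parameter syntax, neither of which is Gravitino's actual format:

```python
import string

# Hypothetical job template and rendering step; the field names and ${...}
# parameter syntax are illustrative, not Gravitino's real template format.
TEMPLATE = {
    "name": "spark-compaction",
    "executable": "spark-submit",
    "arguments": ["--table", "${table}", "--target-size-mb", "${target_size}"],
}

def render_job(template, params):
    """Fill template parameters for one run before handing the job to an
    executor (e.g. a local process or Apache Airflow)."""
    return {
        "name": template["name"],
        "executable": template["executable"],
        "arguments": [string.Template(arg).substitute(params)
                      for arg in template["arguments"]],
    }
```

Each trigger supplies only the per-run parameters, while the template captures everything that stays constant across runs.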

The whole metadata-driven action system is still in an alpha phase in version 1.0.0. The community will continue to evolve the code, taking Iceberg table maintenance as a reference implementation in the next version. Please stay tuned.

Agent-ready through the MCP server

MCP is a powerful protocol to bridge the gap between human languages and machine interfaces. With MCP, users can communicate with the LLM using natural language, and the LLM can understand the context and invoke the appropriate tools.

In version 1.0.0, the community officially delivered the MCP server for Gravitino. Users can launch it as a remote or local MCP server and connect to various MCP applications, such as Cursor and Claude Desktop. Additionally, we exposed all metadata-related interfaces as tools that MCP clients can call.

With the Gravitino MCP server, users can manage and govern metadata, as well as perform metadata-driven actions using natural language. Please follow issue #7483 for more details. Additionally, you can refer to the documentation for instructions on how to start the MCP server locally or in Docker.

Unified access control framework

Gravitino introduced the RBAC system in a previous version, but it only offered the ability to grant privileges to roles and users, without enforcing access control when manipulating secure objects. In 1.0.0, we completed this missing piece in Gravitino.

Currently, users can set access control policies through our RBAC system and enforce these controls when accessing secure objects. For details, you can refer to the umbrella issue #6762.

Add support for multiple locations in model management

Model management was introduced in Gravitino 0.9.0. Users have since requested support for multiple storage locations within a single model version, allowing them to select a model version with a preferred location.

In 1.0.0, the community added support for multiple locations in model management. This feature is similar to the fileset’s support for multiple locations. Users can check the documentation here for more information. For implementation details, please refer to issue #7363.

Support the latest Apache Iceberg and Paimon versions

In Gravitino 1.0.0, we have upgraded the supported Iceberg version to 1.9.0. With the new version, we will add more feature support in the next release. Additionally, we have upgraded the supported Paimon version to 1.2.0, introducing new features for Paimon support.

See issue #6719 for the Iceberg upgrade and issue #8163 for the Paimon upgrade.

Various core features

Core:

  • Add the cache system in the Gravitino entity store #7175.
  • Add Marquez integration as a lineage sink in Gravitino #7396.

Server:

  • Add Azure AD login support for OAuth authentication #7538.

Catalogs:

  • Support StarRocks catalog management in Gravitino #3302.

Clients:

Spark connector:

  • Upgrade the supported Kyuubi version #7480.

UI:

  • Add web UI for listing files / directories under a fileset #7477.

Deployment:

  • Add Helm chart deployment for the Iceberg REST catalog #7159.

Behavior changes

Compatible changes:

  • Rename the Hadoop catalog to the fileset catalog #7184.
  • Allow event listeners to modify the Iceberg create table request #6486.
  • Support returning aliases when listing model versions #7307.

Breaking changes:

  • Change the supported Java version to JDK 17 for the Gravitino server.
  • Remove the Python 3.8 support for the Gravitino Python client #7491.
  • Fix the unnecessary double encoding and decoding issue for the fileset get location and list files interfaces #8335. This change is incompatible with older Java and Python clients; using an old client with a new server may lead to decoding issues in some unexpected scenarios.

Overall

There are still lots of features, improvements, and bug fixes that are not mentioned here. We thank the community for their continued support and valuable contributions.

Apache Gravitino 1.0.0 opens a new chapter from the data catalog to the smart catalog. We will continue to innovate and build, to add more Data and AI features. Please stay tuned!

Credits

This release acknowledges the hard work and dedication of all contributors who have helped make this release possible.

1161623489@qq.com, Aamir, Aaryan Kumar Sinha, Ajax, Akshat Tiwari, Akshat kumar gupta, Aman Chandra Kumar, AndreVale69, Ashwil-Colaco, BIN, Ben Coke, Bharath Krishna, Brijesh Thummar, Bryan Maloyer, Cyber Star, Danhua Wang, Daniel, Daniele Carpentiero, Dentalkart399, Drinkaiii, Edie, Eric Chang, FANNG, Gagan B Mishra, George T. C. Lai, Guilherme Santos, Hatim Kagalwala, Jackeyzhe, Jarvis, JeonDaehong, Jerry Shao, Jimmy Lee, Joonha, Joonseo Lee, Joseph C., Justin Mclean, KWON TAE HEON, Kang, KeeProMise, Khawaja Abdullah Ansar, Kwon Taeheon, Kyle Lin, KyleLin0927, Lord of Abyss, MaAng, Mathieu Baurin, Maxspace1024, Mikshakecere, Mini Yu, Minji Kim, Minji Ryu, Nithish Kumar S, Pacman, Peidian li, Praveen, Qian Xia, Qiang-Liu, Qiming Teng, Raj Gupta, Ratnesh Rastogi, Raveendra Pujari, Reuben George, RickyMa, Rory, Sambhavi Pandey, Sébastien Brochet, Shaofeng Shi, Spiritedswordsman, Sua Bae, Surya B, Tarun, Tian Lu, Tianhang, Timur, Viral Kachhadiya, Will Guo, XiaoZ, Xiaojian Sun, Xun, Yftach Zur, Yuhui, Yujiang Zhong, Yunchi Pang, Zhengke Zhou, _.mung, ankamde, arjun, danielyyang, dependabot[bot], fad, fanng, gavin.wang, guow34, jackeyzhe, kaghatim, keepConcentration, kerenpas, kitoha, lipeidian, liuxian, liuxian131, lsyulong, mchades, mingdaoy, predator4ann, qbhan, raveendra11, roryqi, senlizishi, slimtom95, taylor.fan, taylor12805, teo, tian bao, vishnu, yangyang zhong, youngseojeon, yuhui, yunchi, yuqi, zacsun, zhanghan, zhanghan18, 梁自强, 박용현, 배수아, 신동재, 이승주, 이준하

Apache Gravitino 0.9.1

· 2 min read
Rory Qi
committer

Model Management

  • Support updating aliases for model versions. #6814, #7158
  • Add file viewer support for filesets. #6860
  • Implement ListFilesEvent in FilesetEventDispatcher. #7314
  • Support setOwner/getOwner event operations. #7646

Trino Connector

  • Auto-load multiple metalakes in the Trino connector. #7288

JDBC Validation

  • Validate JDBC URLs during store initialization. #7547

Bug Fixes

Core & Catalogs

  • Fix H2 backend file lock issues during deletion. #7406
  • Prevent SQL session commit errors. #7403
  • Correct OAuth token refresh in the web UI. #7426
  • Validate namespace string conversions. #7516
  • Improve server force-kill shutdown logic. #7513
  • Fix bypass key handling in the Hive catalog. #7416
  • Filter empty Hadoop storage locations. #7190
  • Fix model catalog error messages. #7346

Connectors

Spark Connector

  • Remove conflicting slf4j dependency. #7287
  • Fix S3 credential test errors. #7432

Trino Connector

  • Handle unsupported catalog providers. #7322

Python Client

  • Fix storage handler mappings for S3/OSS/ABS. #7225
  • Improve Java client error messages. #7344

Filesets

  • Fix multi-location file paths. #7371

Improvements

Core & Catalogs

  • Optimize column deletion logic. #7415
  • Auto-register mappers via SPI. #7529
  • Validate JDBC entity store URLs. #7614
  • Fix catalog index existence checks. #7660

CLI & Clients

  • Remove duplicate owner field in the CLI. #7639
  • URL-encode paths in the Java client. #7686

Testing

  • Refactor Hadoop catalog test stubbing. #7280
  • Fix precondition message mismatches. #7521

Documentation

  • Add Trino REST catalog example. #7121
  • Iceberg IRC guides for StarRocks/Doris. #7368
  • OpenAPI specs for Fileset/File. #6860
  • Fix access control docs. #7195
  • Update model privilege docs. #7555
  • Typo fixes. #7448, #7647
  • Remove incubating status markers. #7492
  • Add 0.9.1 release notes. #7485

Build & Infra

  • Fix Helm chart versioning. #7129, #7134
  • Upgrade Kyuubi dependency. #7480

Credits

FANNG1 Abyss-lord jerqi jerryshao slimtom95 flaming-archer yunchipang KyleLin0927 xiaozcy diqiu50 yuqi1129 ziqiangliang carl239 LauraXia123 guov100 senlizishi fivedragon5 justinmclean Jackeyzhe Spiritedswordsman su8y

Apache Gravitino 0.9.0 - Focus on AI, data governance, and security with multi-dimensional feature upgrade

· 4 min read
Rory Qi
committer

Gravitino 0.9.0 focuses on advancements in AI, data governance, and security. Many of its new features are already being used in production environments. The release has attracted strong interest from users at well-known companies, with its AI and security capabilities drawing particular attention.

In this version, the community optimized the user experience for fileset catalogs and model catalogs, making it easier for users to manage their unstructured AI data and model data.

The community added a new data lineage interface. Users can now implement a custom data lineage plugin to adapt to their own system.

For security, the community has corrected some privilege semantics and fixed authorization plugin corner cases to make the entire system more robust.

Model Catalog

Before 0.9.0, the model catalog was immutable, which limited flexibility. In the new version, users can alter models and model versions, and add tags to them. #6626, #6222

Fileset Catalog

Gravitino now supports multiple named storage locations within a single fileset and placeholder-based path generation.

With multiple location support, users can reference data across different file systems (HDFS, S3, GCS, local, etc.) through a unified fileset interface, each with a unique location name.

The placeholder feature allows dynamic storage path generation using the {{placeholder}} syntax, automatically replacing placeholders with corresponding fileset properties.

These enhancements significantly improve flexibility for multi-cloud environments and complex data organization patterns while maintaining a clean abstraction layer for data asset management. #6681
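The {{placeholder}} substitution can be pictured like this; the helper below is only a sketch of the semantics described above, not the fileset catalog's actual implementation:

```python
import re

# Illustrative sketch of {{placeholder}} expansion against fileset
# properties; the real logic lives inside the fileset catalog.

def expand_location(template, props):
    """Replace each {{name}} in a storage-location template with the
    matching fileset property; an unknown placeholder is an error."""
    def replace(match):
        key = match.group(1)
        if key not in props:
            raise KeyError(f"no fileset property for placeholder: {key}")
        return props[key]
    return re.sub(r"\{\{(\w+)\}\}", replace, template)
```

For example, a template like "s3://bucket/{{project}}/{{user}}/data" expands using the fileset's project and user properties, so one fileset definition can fan out to per-team or per-user paths.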

GVFS (Gravitino Virtual File System)

GVFS has been enhanced to support accessing multiple locations within filesets. Users can now select which location to use through configuration properties, environment variables, or fileset default settings.

GVFS has also been refactored with a pluggable architecture allowing custom operations and hooks. This enables users to extend functionality through operations_class and hook_class configuration options for more flexible integration with their specific infrastructure #6938.
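The hook idea can be sketched as a wrapper that runs user code before and after each filesystem operation; the class and method names here are invented, not the actual operations_class/hook_class contract:

```python
# Illustrative sketch of pre/post hooks around a filesystem operation;
# names are hypothetical, not the real GVFS operations_class/hook_class API.

class AuditHook:
    """Example hook that records every operation it observes."""
    def __init__(self):
        self.calls = []

    def pre(self, op, path):
        self.calls.append(("pre", op, path))

    def post(self, op, path):
        self.calls.append(("post", op, path))

class HookedOperations:
    """Wraps an operation with the configured hook, the way a custom
    operations class might dispatch through registered hooks."""
    def __init__(self, hook):
        self.hook = hook

    def open(self, path):
        self.hook.pre("open", path)
        handle = f"<handle:{path}>"  # stand-in for the real file open
        self.hook.post("open", path)
        return handle
```

Swapping in a different hook (auditing, metrics, access checks) changes behavior without touching the core filesystem code, which is the point of the pluggable design.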

Security

The new version has added privileges for the data model and corrected some privilege semantics. It has also fixed some bugs in the Ranger path-based plugin. #6620, #6575, #6821, #6864 All user-related, group-related, and role-related events are now supported in the event system. #2969

Data Lineage

The community added a data lineage interface that follows the OpenLineage API specification. Users can implement their custom data lineage plugin to adapt to their system #6617.

Core

The community also cared about performance, which was improved by reducing the scope of locks and batch-reading data from storage. #6744, #6560, #2870

CLI

One more change is worth mentioning: users no longer need to rely on the alias command to use the CLI. The community now provides a convenient script at ./bin/gcli.sh so that users can invoke the CLI client directly. #5383

Connector

Both the Flink connector and the Spark connector added JDBC support. #6233, #6164

Chart

Gravitino can now be deployed on Kubernetes with a fully customizable configuration. #6594

Overall

Gravitino 0.9.0 focuses on advancements in AI, data governance, and security. We thank the Gravitino community for their continued support and valuable contributions. We can continue to innovate and build thanks to all our users' feedback. Thank you for taking the time to read this! To dive deeper into the Gravitino 0.9.0 release, explore the full documentation. Your feedback is greatly valued and helps shape the future of the Gravitino project and community.

Credits

JavedAbdullah AndreVale69 Brijeshthummar02 cool9850311 liuchunhao danhuawang unknowntpo FANNG1 tsungchih jerryshao justinmclean zhoukangcn Abyss-lord amazingLyche yuqi1129 Pranaykarvi puchengy LauraXia123 tengqm rud9192 antony0016 frankvicky TEOTEO520 TungYuChiang sunxiaojian xunliu LuciferYang diqiu50 zhengkezhou1 caican00 granewang yunchipang jerqi mchades rickyma Xander-run flaming-archer waukin lsyulong luoshipeng FourFriends this-user vitamin43 hdygxsj liangyouze
