CUBE-49 - Add blog post on Data Liability to Data Assets with Confidential AI #40
WashingtonKK wants to merge 5 commits into main from
Conversation
This allows enterprises to collaborate with AI providers, research partners, and even other organizations without giving up control over their data. This effectively implies that companies can train their models on sensitive datasets without exposing them, multiple parties can jointly compute insights without compromising on the privacy of their sensitive data, LLMs can be used for code completion or insights on confidential code or data.
With this in place, and hardware that supports trusted execution enviroments (Intel's TDX, Sev SNP, etc), security shifts from a policy promise to a technical guarantee that can be validated and verified.
## Cube AI's role in Securing LLMs and vLLMs
Cube AI's core competence lies in the protection of LLMs and now even vLLMs in trusted execution environments (TEEs). Cube is designed to enable enterprises to maximize the unlimited potential of LLMs in creating a competitive advantage for themselves.
The advent of LLMs has significantly bumped up the productivity of developers, with numerous organizations mandating the use of LLMs in product development. Engineers of all kinds are now able to focus on what matters, and leave the rest of the repetitive work to be done by LLMs. Financial institutions are now employing LLMs to derive insights from financial data, medical firms relying on LLMs to perform anomaly detection in medical data - all this conventionally coming with the risk of a security brecah.
Cube AI allows LLMs to be deployed in secure enclaves, allowing organizations to make use of LLMs for code completion and chat, all while ensuring the security of the data input. Cube currently supports Llama3.2:3b for chat and code completion, StarCoder2:3b for code completion, and Nomic Embed text:137M, with the possibility of adding custom models and vLLMs.
Cube is not limited to these models; the focus should be on the possible applications of Cube rather than on specific models.
Needs diagrams.
Add some use cases for enterprises that demonstrate how Cube comes in.
The blog needs more depth in general.
Pull request overview
This PR adds a new blog post titled "From Data Liability to Data Assets: How Confidential AI Changes the Game for Enterprises" to the Cube AI documentation site. The post discusses how Confidential AI transforms enterprise data handling by protecting data during processing through hardware-backed trusted execution environments.
Changes:
- New blog post content introducing confidential AI concepts and Cube AI's role in securing LLMs
- Updated sitemap.xml with new blog post entry
- Generated HTML pages for the new blog post and updated related blog pages
- Updated blog cache file
Reviewed changes
Copilot reviewed 6 out of 7 changed files in this pull request and generated 15 comments.
| File | Description |
|---|---|
| content/blogs/from-data-liability-to-data-assets.md | New markdown blog post source file with frontmatter, content about confidential AI, and references |
| blog/from-data-liability-to-data-assets/index.html | Generated HTML page for the new blog post |
| blog/index.html | Updated blog index page to include the new blog post card |
| blog/confidential-computing-meets-ai/index.html | Updated with "Next Read" section linking to the new post |
| sitemap.xml | Added new blog post URL with lastmod date and updated existing entry |
| .blogcache | Updated cache with hash for the new blog post |
|  | ||
Data has proven to be the most valuable strategic resource for all enterprises. With vast amounts of data, insightful and strategic decisions can be made by leveraging the power of Artificial Intelligence and Machine Learning. While this has been and continues to be the case, the rise of AI has transformed the same data into a potential liability.
Executives today face a difficult dilemna: the more sensitive and proprietary the data, the greater the potential for AI driven insights. Yet handling more sensitive and proprietary data poses a risk to the executives; potential lawsuits, regulatory barriers, data leaks, reputational damage, and competitive leakage.
Spelling error: "dilemna" should be "dilemma"
- Executives today face a difficult dilemna: the more sensitive and proprietary the data, the greater the potential for AI driven insights. Yet handling more sensitive and proprietary data poses a risk to the executives; potential lawsuits, regulatory barriers, data leaks, reputational damage, and competitive leakage.
+ Executives today face a difficult dilemma: the more sensitive and proprietary the data, the greater the potential for AI driven insights. Yet handling more sensitive and proprietary data poses a risk to the executives; potential lawsuits, regulatory barriers, data leaks, reputational damage, and competitive leakage.
When data can be safely used and shared, completely new revenue streams emerge. The potential to unlock additional revenue sources creates a whole new space that was hardly thought of as income generating.
Some of these include:
Data-as-a-Service (Secure Data): Organizations can monetize datasets without transferring ownership or exposing raw data to the participating parties.
Regulated AI Products: Vendors can deliver AI solutions tailored for compliance-heavy sectors that previouslt avoided automation, all with the guarantee of security.
Spelling error: "previouslt" should be "previously"
- Regulated AI Products: Vendors can deliver AI solutions tailored for compliance-heavy sectors that previouslt avoided automation, all with the guarantee of security.
+ Regulated AI Products: Vendors can deliver AI solutions tailored for compliance-heavy sectors that previously avoided automation, all with the guarantee of security.
Not all "secure AI" platforms provide true confidentiality. Executeves should evaluate vendors carefully across several dimensions:
Hardware-Backed security: Enterprises should be able to cryptographically confirm that workloads are running in protected environments.
Data sovereignity controls: Critical to confidential AI is the location where computations occur and operaitonal visibility.
Spelling error: "operaitonal" should be "operational"
- Data sovereignity controls: Critical to confidential AI is the location where computations occur and operaitonal visibility.
+ Data sovereignity controls: Critical to confidential AI is the location where computations occur and operational visibility.
## The "Data Liability" Problem with Public Cloud AI
The pubclic cloud AI has significantly accelerated innovation in the field of collaborative AI, but it has introduced several key structural risks that many enterprises underestimated. Traditional AI environments send data to AI workloads while relinquishing meaningful control over where the data is processed, access control, the potential of data retention or model improvement, and how the data is protected in memory during computation. Inasmuch as data at rest and in transit is protected, this data is typically decrypted during processing which creates a window of exposure to malicious attacks.
Spelling error: "pubclic" should be "public"
- The pubclic cloud AI has significantly accelerated innovation in the field of collaborative AI, but it has introduced several key structural risks that many enterprises underestimated. Traditional AI environments send data to AI workloads while relinquishing meaningful control over where the data is processed, access control, the potential of data retention or model improvement, and how the data is protected in memory during computation. Inasmuch as data at rest and in transit is protected, this data is typically decrypted during processing which creates a window of exposure to malicious attacks.
+ The public cloud AI has significantly accelerated innovation in the field of collaborative AI, but it has introduced several key structural risks that many enterprises underestimated. Traditional AI environments send data to AI workloads while relinquishing meaningful control over where the data is processed, access control, the potential of data retention or model improvement, and how the data is protected in memory during computation. Inasmuch as data at rest and in transit is protected, this data is typically decrypted during processing which creates a window of exposure to malicious attacks.
For regulated industries such as finance, health, and other industries which handle sensitive data - this creates serious concerns, as this data has to be handled securely. Financial institutions must safeguard transaction histories and trading strategies, healthcare providers have to protect patient records, legal firms have to preserve client confidentiality, and generally all data has to be protected against leakage.
This has led to a quite decline by companies on their AI ambitions, not for the lack of data, but for the risk of using it in public cloud AI which risks leakage.
Grammatical error: "quite decline" should be "quiet decline"
- This has led to a quite decline by companies on their AI ambitions, not for the lack of data, but for the risk of using it in public cloud AI which risks leakage.
+ This has led to a quiet decline by companies on their AI ambitions, not for the lack of data, but for the risk of using it in public cloud AI which risks leakage.
While still in early adoption, several sectors are quickly moving to adopt confidential AI to leverage the merits of confidential data, and guaranteed privacy.
Banks and financial trading firms can run risk models on highly sensitive financial data while ensuring that proprietary strategies remain confidential. This opens the door to secure AI-powered fraud and anomaly detection, insights on tailored actions to perform to increase competitive advantage, collaborative anti-money-laundering analytics, and cross-institution risk modeling without data pooling.
Healthcare organizations possess some of the richest datasets in the world, which makes it one of the most regulated dataset. Confidential AI enables privacy-preserving clinical research, secure diagnostic model training, multi-hospital data collaboration, and pharmaceutical research without exposing patient records. Confidential AI also allows aided medical practice, allowing healthcare workers to securely infer from trained models about real patient records.
Law and legal firms have historically been cautious about AI dur to privilege and confidentiality requirements. Confidential AI allows these firms to analyze large case corpora, automate contract intelligence and development, extract insights from sensitive documents, and to build proprietary legal copilots and AI agents. Firms that can move early can create differentiated knowledge banks which competitors can not replicate.
Spelling error: "dur" should be "due"
- Law and legal firms have historically been cautious about AI dur to privilege and confidentiality requirements. Confidential AI allows these firms to analyze large case corpora, automate contract intelligence and development, extract insights from sensitive documents, and to build proprietary legal copilots and AI agents. Firms that can move early can create differentiated knowledge banks which competitors can not replicate.
+ Law and legal firms have historically been cautious about AI due to privilege and confidentiality requirements. Confidential AI allows these firms to analyze large case corpora, automate contract intelligence and development, extract insights from sensitive documents, and to build proprietary legal copilots and AI agents. Firms that can move early can create differentiated knowledge banks which competitors can not replicate.
<meta property="og:image" content="https://www.ultraviolet.rs{https://www.ultraviolet.rs/img/header.avif}" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:title" content="From Data Liability to Data Assets: How Confidential AI Changes the Game for Enterprises" />
<meta name="twitter:description" content="" />
<meta name="twitter:image" content="https://www.ultraviolet.rs{https://www.ultraviolet.rs/img/header.avif}" />
The og:image and twitter:image meta tags contain malformed URLs. The content appears to be "https://www.ultraviolet.rs{https://www.ultraviolet.rs/img/header.avif}" which has an incorrect template substitution resulting in a double URL. This should be a properly formatted single URL.
- <meta property="og:image" content="https://www.ultraviolet.rs{https://www.ultraviolet.rs/img/header.avif}" />
+ <meta property="og:image" content="https://www.ultraviolet.rs/img/header.avif" />
  <meta name="twitter:card" content="summary_large_image" />
  <meta name="twitter:title" content="From Data Liability to Data Assets: How Confidential AI Changes the Game for Enterprises" />
  <meta name="twitter:description" content="" />
- <meta name="twitter:image" content="https://www.ultraviolet.rs{https://www.ultraviolet.rs/img/header.avif}" />
+ <meta name="twitter:image" content="https://www.ultraviolet.rs/img/header.avif" />
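The doubled URL pattern (`base{absolute-url}`) is characteristic of a template placeholder that was emitted literally instead of being substituted. The site generator's code is not part of this PR, so the sketch below only illustrates the bug class and a defensive fix; `BASE` and `og_image_url` are hypothetical names, not the repository's actual helpers:

```python
from urllib.parse import urljoin

BASE = "https://www.ultraviolet.rs"  # hypothetical site root

def og_image_url(image: str) -> str:
    """Build an absolute og:image URL from either a root-relative path
    ("/img/header.avif") or an already-absolute URL, without doubling
    the host the way naive string concatenation does."""
    return urljoin(BASE + "/", image)

print(og_image_url("/img/header.avif"))
# https://www.ultraviolet.rs/img/header.avif
print(og_image_url("https://www.ultraviolet.rs/img/header.avif"))
# https://www.ultraviolet.rs/img/header.avif
```

Because `urljoin` treats an absolute second argument as authoritative, the helper is idempotent: feeding it an already-built URL cannot produce the nested form seen in the broken meta tags.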
## How Confidential AI Enables Safe Data Sharing in AI Workloads
Confidential AI changes the security model entirely by ensuring that data remains encrypted even when being processed. By leveraging hardware-backed trusted execuiton environments (TEEs), workloads are executed entirely inside isolated enclaves wheredata is decrypted only within protected memory, infrastructure operators can not inspect it, and unauthorized access is cryptographically prevented. Additionally, through attestation, processing the integrity of the hardware can be remotely verified.
Spelling error: "execuiton" should be "execution"
- Confidential AI changes the security model entirely by ensuring that data remains encrypted even when being processed. By leveraging hardware-backed trusted execuiton environments (TEEs), workloads are executed entirely inside isolated enclaves wheredata is decrypted only within protected memory, infrastructure operators can not inspect it, and unauthorized access is cryptographically prevented. Additionally, through attestation, processing the integrity of the hardware can be remotely verified.
+ Confidential AI changes the security model entirely by ensuring that data remains encrypted even when being processed. By leveraging hardware-backed trusted execution environments (TEEs), workloads are executed entirely inside isolated enclaves wheredata is decrypted only within protected memory, infrastructure operators can not inspect it, and unauthorized access is cryptographically prevented. Additionally, through attestation, processing the integrity of the hardware can be remotely verified.
Data sovereignity controls: Critical to confidential AI is the location where computations occur and operaitonal visibility.
Spelling error: "sovereignity" should be "sovereignty"
- Data sovereignity controls: Critical to confidential AI is the location where computations occur and operaitonal visibility.
+ Data sovereignty controls: Critical to confidential AI is the location where computations occur and operaitonal visibility.
Performance at Scale: Security can not come at the expence of production readiness.
Spelling error: "expence" should be "expense"
- Performance at Scale: Security can not come at the expence of production readiness.
+ Performance at Scale: Security can not come at the expense of production readiness.
Signed-off-by: WashingtonKK <washingtonkigan@gmail.com>
Force-pushed 17e0449 to e92017e
Pull request overview
Copilot reviewed 5 out of 6 changed files in this pull request and generated 2 comments.
Comments suppressed due to low confidence (7)
content/blogs/from-data-liability-to-data-assets.md:56
- The list items (lines 54-56) should be formatted as a proper markdown list with bullet points or numbers for better readability and consistency. Currently, they appear as plain text paragraphs which makes them harder to scan. Consider formatting as:
- Data-as-a-Service (Secure Data): ...
- Regulated AI Products: ...
- Industry Data Consortia: ...
Data-as-a-Service (Secure Data): Organizations can monetize datasets without transferring ownership or exposing raw data to the participating parties.
Regulated AI Products: Vendors can deliver AI solutions tailored for compliance-heavy sectors that previously avoided automation, all with the guarantee of security.
Industry Data Consortia: Competitors can contribute encrypted datasets to produce a shared intelligence that benefits all participants - the dataset providers, the model owners, and the end users.
content/blogs/from-data-liability-to-data-assets.md:65
- The list items (lines 61-65) should be formatted as a proper markdown list with bullet points for better readability and consistency with markdown best practices. Currently, they appear as plain text paragraphs. Consider formatting as:
- Hardware-Backed security: ...
- Data sovereignty controls: ...
- Performance at Scale: ...
- Integration Flexibility: ...
- Remotely verifiable attestation: ...
Hardware-Backed security: Enterprises should be able to cryptographically confirm that workloads are running in protected environments.
Data sovereignty controls: Critical to confidential AI is the location where computations occur and operational visibility.
Performance at Scale: Security can not come at the expense of production readiness.
Integration Flexibility: The platforms should not limit the users on which models to use or not to use, and what platforms to integrate with.
Remotely verifiable attestation: For secure AI, attestation must be verified remotely with proper attestation reports and audits to give insights on the hardware where the models are running.
content/blogs/from-data-liability-to-data-assets.md:30
- "can not" should be written as "cannot" (one word). This is the standard English usage for the negative form.
Confidential AI changes the security model entirely by ensuring that data remains encrypted even when being processed. By leveraging hardware-backed trusted execution environments (TEEs), workloads are executed entirely inside isolated enclaves where data is decrypted only within protected memory, infrastructure operators can not inspect it, and unauthorized access is cryptographically prevented. Additionally, through attestation, processing the integrity of the hardware can be remotely verified.
content/blogs/from-data-liability-to-data-assets.md:46
- "can not" should be written as "cannot" (one word). This is the standard English usage for the negative form.
Law and legal firms have historically been cautious about AI due to privilege and confidentiality requirements. Confidential AI allows these firms to analyze large case corpora, automate contract intelligence and development, extract insights from sensitive documents, and to build proprietary legal copilots and AI agents. Firms that can move early can create differentiated knowledge banks which competitors can not replicate.
content/blogs/from-data-liability-to-data-assets.md:15
- "AI driven" should be hyphenated as "AI-driven" when used as a compound adjective modifying "insights".
Executives today face a difficult dilemma: the more sensitive and proprietary the data, the greater the potential for AI driven insights. Yet handling more sensitive and proprietary data poses a risk to the executives; potential lawsuits, regulatory barriers, data leaks, reputational damage, and competitive leakage.
content/blogs/from-data-liability-to-data-assets.md:10
- The blog post is missing optional metadata fields that are present in other blog posts: "excerpt", "description", "coverImage", and "ogImage". While not strictly required, these fields improve SEO and social media sharing. Consider adding:
- excerpt: A brief summary for the blog listing page
- description: A meta description for search engines
- coverImage: The cover image path
- ogImage: The Open Graph image for social media sharing
---
slug: from-data-liability-to-data-assets
title: "From Data Liability to Data Assets: How Confidential AI Changes the Game for Enterprises"
author:
name: "Washington Kamadi"
picture: "https://avatars.githubusercontent.com/u/43080232?v=4&size=64"
tags: [confidential-computing, ai, enterprise, security, data-privacy, "cube ai"]
image: /img/from-data-liability-to-data-assets/from_data_liability_to_data_assets_cover.png
date: 2026-02-06
---
content/blogs/from-data-liability-to-data-assets.md:63
- "can not" should be written as "cannot" (one word). This is the standard English usage for the negative form.
Performance at Scale: Security can not come at the expense of production readiness.
Confidential AI changes the security model entirely by ensuring that data remains encrypted even when being processed. By leveraging hardware-backed trusted execution environments (TEEs), workloads are executed entirely inside isolated enclaves where data is decrypted only within protected memory, infrastructure operators can not inspect it, and unauthorized access is cryptographically prevented. Additionally, through attestation, processing the integrity of the hardware can be remotely verified.
This allows enterprises to collaborate with AI providers, research partners, and even other organizations without giving up control over their data. This effectively implies that companies can train their models on sensitive datasets without exposing them, multiple parties can jointly compute insights without compromising on the privacy of their sensitive data, LLMs can be used for code completion or insights on confidential code or data.
With this in place, and hardware that supports trusted execution environments (Intel's TDX, Sev SNP, etc), security shifts from a policy promise to a technical guarantee that can be validated and verified.
Incorrect formatting: "Sev SNP" refers to AMD SEV-SNP (Secure Encrypted Virtualization with Secure Nested Paging). Based on industry-standard terminology and usage in other blog posts in this repository, this should be written as "AMD SEV-SNP" with the hyphen.
- With this in place, and hardware that supports trusted execution environments (Intel's TDX, Sev SNP, etc), security shifts from a policy promise to a technical guarantee that can be validated and verified.
+ With this in place, and hardware that supports trusted execution environments (Intel's TDX, AMD SEV-SNP, etc), security shifts from a policy promise to a technical guarantee that can be validated and verified.
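The "policy promise to technical guarantee" claim above rests on remote attestation: the hardware signs a measurement of the workload, and a remote verifier checks it before releasing data. A minimal Python sketch of that flow follows; the report layout, field names, and shared key are illustrative stand-ins, not the Intel TDX or AMD SEV-SNP wire formats and not any Cube AI API:

```python
import hashlib
import hmac
import json

# Hypothetical symmetric key standing in for the hardware vendor's signing key.
VENDOR_KEY = b"demo-vendor-key"

def make_report(measurement: str) -> dict:
    """Simulate a TEE emitting its launch measurement, signed with the vendor key."""
    body = json.dumps({"measurement": measurement}, sort_keys=True).encode()
    sig = hmac.new(VENDOR_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "signature": sig}

def verify_report(report: dict, expected_measurement: str) -> bool:
    """Verifier side: check the signature first, then compare the measurement
    against the value expected for the approved workload."""
    body = report["body"].encode()
    sig = hmac.new(VENDOR_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, report["signature"]):
        return False  # report was not produced by trusted hardware
    return json.loads(body)["measurement"] == expected_measurement

good = hashlib.sha256(b"approved-llm-workload").hexdigest()
report = make_report(good)
print(verify_report(report, good))                                     # True
print(verify_report(report, hashlib.sha256(b"tampered").hexdigest()))  # False
```

Real attestation uses asymmetric signatures chained to a vendor root of trust rather than a shared HMAC key, but the decision structure is the same: only a verified measurement of the expected enclave should unlock the sensitive data.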
## Setting yourself apart with Confidential AI & Proprietary Data
While most AI models are becoming commoditized, access alone to these models is no longer a differentiator in industry - proprietary data and access is!
Confidential AI allows organizations to fully leverage unique datasets that competitors cannot access, without increasing risk exposure. Having guaranteed and secure access to proprietary data allows a strategic position in industry where organizations and firms can securely use proprietary data, generate superior AI insights, improve products and decision-making, capture market share and collect more differentiated data. These, over time, widens the gap between enterprises that have proprietary data and those that lack.
Subject-verb agreement issue: "These, over time, widens" should be "These, over time, widen" (plural verb to match the plural subject "These").
- Confidential AI allows organizations to fully leverage unique datasets that competitors cannot access, without increasing risk exposure. Having guaranteed and secure access to proprietary data allows a strategic position in industry where organizations and firms can securely use proprietary data, generate superior AI insights, improve products and decision-making, capture market share and collect more differentiated data. These, over time, widens the gap between enterprises that have proprietary data and those that lack.
+ Confidential AI allows organizations to fully leverage unique datasets that competitors cannot access, without increasing risk exposure. Having guaranteed and secure access to proprietary data allows a strategic position in industry where organizations and firms can securely use proprietary data, generate superior AI insights, improve products and decision-making, capture market share and collect more differentiated data. These, over time, widen the gap between enterprises that have proprietary data and those that lack.
Signed-off-by: WashingtonKK <washingtonkigan@gmail.com>
- Add 1600x840 cover image for the blog
- Update og:image, twitter:image, and JSON-LD image to use the actual cover image
Signed-off-by: Washington Kamadi <washington@ultraviolet.rs>
name: "Washington Kamadi"
picture: "https://avatars.githubusercontent.com/u/43080232?v=4&size=64"
tags: [confidential-computing, ai, enterprise, security, data-privacy, "cube ai"]
image: /img/from-data-liability-to-data-assets/from_data_liability_to_data_assets_cover.png
@@ -0,0 +1,84 @@
---
slug: from-data-liability-to-data-assets
title: "From Data Liability to Data Assets: How Confidential AI Changes the Game for Enterprises"
picture: "https://avatars.githubusercontent.com/u/43080232?v=4&size=64"
tags: [confidential-computing, ai, enterprise, security, data-privacy, "cube ai"]
image: /img/from-data-liability-to-data-assets/from_data_liability_to_data_assets_cover.png
date: 2026-02-06
- date: 2026-02-06
+ date: 2026-02-20
date: 2026-02-06
---
|  |
Remove this; use the cover image for it instead.
Signed-off-by: Washington Kamadi <washington@ultraviolet.rs>
Force-pushed 19c719c to a7f7ba0
What type of PR is this?
What does this do?
Which issue(s) does this PR fix/relate to?
Have you included tests for your changes?
Did you document any new/modified features?
Notes