---
title: "Introducing XI Objects — Provenance Infrastructure for the AI Era"
description: "Content doesn't lie when it's signed. XI Objects brings cryptographic provenance, perceptual fingerprinting, and a distributed trust network to digital content, so every image, video, and document can prove where it came from."
author: "I.Livingston and K.D.Cavner - Co-Founders"
published: 2026-02-15T05:00:00+00:00
updated: 2026-02-23T15:37:27.777746+00:00
tags: ["ai", "announcement", "cryptography", "fingerprinting", "provenance", "trust"]
url: https://xiobjects.com/articles/introducing-xi-objects
source: XI Objects
---

<!-- xion:doctype xion+markdown -->
<!-- xion:metadata
{
  "version": "1.0",
  "content_type": "application/xion\u002Bmarkdown",
  "source_type": "xi-content/article",
  "generator": "xio-content-publisher/1.0.0",
  "generated": "2026-02-23T15:37:04.6476045\u002B00:00",
  "encoding": "utf-8",
  "render_intent": "html",
  "title": "Introducing XI Objects \u2014 Provenance Infrastructure for the AI Era",
  "slug": "introducing-xi-objects",
  "author": "I.Livingston and K.D.Cavner - Co-Founders",
  "published_at": "2026-02-15T00:00:00.0000000-05:00",
  "copyright": "\u00A9 2026 XI Objects Inc"
}
-->

# Introducing XI Objects

Every second, AI generates another image. Another video. Another document that looks indistinguishable from something a human made. And every second, the gap between *what's real* and *what's plausible* gets a little wider.

We've been working on that problem. Today, we're introducing **XI Objects**, provenance and attribution infrastructure built for the AI era.

Not a watermark. Not a metadata tag that gets stripped when you upload to social media. A cryptographic proof, embedded in the content itself, that says *who made this, when they made it, and whether anyone has tampered with it since*.

Human-auditable. Machine-readable. Self-verifying.

---

## The Trust Fracture

The internet was built to move information. It was never designed to prove where that information came from.

That gap didn't matter much when content creation required expensive cameras, editing suites, and publishing infrastructure. But we've crossed a threshold. Generative AI can now produce photorealistic images in seconds. Cloned voices are indistinguishable from originals. Synthetic video can place anyone, anywhere, saying anything.

The consequences are already here:

- **Creators** watch their work get scraped, remixed, and published without attribution, with no way to prove origin.
- **Newsrooms** spend hours verifying whether footage is authentic before they can report on it.
- **Enterprises** face liability questions about AI-generated content in their pipelines.
- **AI systems** train on oceans of data with no provenance trail and no way to compensate the humans who created it.

Metadata gets stripped. Watermarks get cropped. Screenshot, recompress, repost. The chain of origin evaporates.

The problem isn't that trust is broken. It's that trust infrastructure *was never built*.

![The trust gap: content moves through platforms and transformations, losing provenance at every step.](https://stxiopublic.blob.core.windows.net/content/introducing-xi-objects/0a00e1616ad09c343a5fd2576d32413f786d59b73801e11b3bb80138993f0902.webp#xi=C3066C969CFD871B521027BA286C0A0F8B28C5A5EE80C26E33B69989B0EDF1F7)

---

## What XI Objects Does

XI Objects is a complete provenance stack, from cryptographic signing to content fingerprinting to a globally distributed verification network. It gives digital content a verifiable identity that survives compression, format conversion, cropping, and redistribution.

Three properties define everything we build:

**Self-Verifying.** Every signed artifact carries its own proof. You don't need to call home, check a database, or trust a third party. The math verifies itself.

**Machine-Readable.** Provenance data is structured for automated pipelines, AI systems, and programmatic verification. Not just a human-readable label. A machine-parseable trust chain.

**Human-Auditable.** Any person can inspect the certificate chain, read the provenance manifest, and trace content back to its origin. No black boxes.

---

## The Trust Chain

At the foundation of XI Objects is a hierarchical certificate system, a purpose-built PKI designed specifically for content attribution.

```
┌──────────────────────────────────────────────┐
│        Institute of Provenance Root CA       │
│            The ultimate trust anchor         │
└────────────────────┬─────────────────────────┘
                     │ certifies
                     ▼
┌──────────────────────────────────────────────┐
│        Organization Intermediates            │
│     Enterprises, studios, newsrooms          │
└────────────────────┬─────────────────────────┘
                     │ issues
                     ▼
┌──────────────────────────────────────────────┐
│           Creator Certificates               │
│    Short-lived, scoped, revocable            │
└──────────────────────────────────────────────┘
```

The **Institute of Provenance** operates the root certificate authority, the trust anchor for the entire ecosystem. The Institute is established as a **501(c)(6) organization**, a neutral non-profit standards body whose sole purpose is to hold and administer the root of trust. It doesn't sell software, run services, or compete with anyone building on the standard. It exists to be the one entity that every participant can trust to never have a commercial motive to compromise the root.

The foundational content format, **XI Object Notation (XION)**, is patent pending, and the XION name carries a pending trademark held by the Institute. These protections exist not to restrict adoption, but to ensure the standard remains coherent and interoperable as the ecosystem grows. The Institute governs the specification; implementations are open.

**XI Objects** is the infrastructure and SDK layer built on top of this foundation. It takes the XION format, the trust hierarchy, and the Orbital protocol and makes them accessible: NuGet packages, Azure-hosted services, APIs, and developer tools. The Institute defines *what* content trust means. XI Objects delivers *how* you use it.

Organizations receive intermediate certificates that let them issue short-lived creator certificates to their people. Every certificate in the chain is cryptographically bound to the one above it.

Short-lived leaf certificates mean that even in a worst-case key compromise, the blast radius is measured in hours, not years.
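The structural checks behind chain validation can be sketched in a few lines. This is an illustrative Python model only, not the .NET SDK: certificates are plain dicts, the pinned root name is hypothetical shorthand, and a real implementation would additionally verify each certificate's Ed25519 signature against its issuer's public key per X.509 path validation.

```python
from datetime import datetime, timezone

# Toy certificate records -- the real chain uses X.509 with Ed25519 keys.
# This sketch models only the structural checks (issuer linkage, validity
# windows, pinned root); it is illustrative, not the SDK.
PINNED_ROOT = "Institute of Provenance Root CA"

def validate_chain(chain, now=None):
    """Walk a leaf-to-root chain: every cert must be within its validity
    window, each cert's issuer must match its parent's subject, and the
    chain must terminate at the pinned root."""
    now = now or datetime.now(timezone.utc)
    for cert in chain:
        if not (cert["not_before"] <= now <= cert["not_after"]):
            return False  # expired or not yet valid
    for child, parent in zip(chain, chain[1:]):
        if child["issuer"] != parent["subject"]:
            return False  # broken issuer linkage
        # A real implementation verifies child's signature with parent's key here.
    return chain[-1]["subject"] == PINNED_ROOT

chain = [
    {"subject": "alice@acme", "issuer": "Acme Corp Intermediate",
     "not_before": datetime(2026, 2, 15, tzinfo=timezone.utc),
     "not_after": datetime(2026, 2, 16, tzinfo=timezone.utc)},  # short-lived leaf
    {"subject": "Acme Corp Intermediate", "issuer": PINNED_ROOT,
     "not_before": datetime(2025, 11, 2, tzinfo=timezone.utc),
     "not_after": datetime(2030, 11, 1, tzinfo=timezone.utc)},
    {"subject": PINNED_ROOT, "issuer": PINNED_ROOT,
     "not_before": datetime(2025, 11, 2, tzinfo=timezone.utc),
     "not_after": datetime(2035, 10, 31, tzinfo=timezone.utc)},
]
print(validate_chain(chain, now=datetime(2026, 2, 15, 12, tzinfo=timezone.utc)))  # True
```

Note how the short leaf lifetime falls out naturally: once the leaf's `not_after` passes, the same chain stops validating.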

The cryptographic foundation is modern and uncompromising: **Ed25519** signatures for speed and security, **BLAKE3** hashing for integrity, and **X.509** certificate wrapping for interoperability with existing PKI infrastructure.

<!-- IMAGE SUGGESTION: Diagram of the trust hierarchy — Root CA at top with the Institute of Provenance shield, branching to organization intermediates (with example logos/names), then to individual creator leaf certs. Dark background (#0a0e1a), purple tones for the chain, orange accent for the root. -->

---

## XION: Content That Carries Its Own Proof

When you sign content with XI Objects, the result is a **XION artifact** (XI Object Notation). XION is a content format where the provenance proof is embedded directly in the document, not stored in a sidecar file or external database.


A trust block, embedded right in the content:

```
<!-- xion-example:trust
{
  "v": 1,
  "sig_alg": "ed25519",
  "hash_blake3_hex": "cd9f70ec4b7d7480...",
  "sig_b64": "B42QjM2Ro1UtTAaJ...",
  "key_id": "UA4xFIgM_asAZWa3...",
  "x509_chain_pem": [ "-----BEGIN CERTIFICATE-----\n..." ],
  "created_at": "2026-02-12T18:20:27Z"
}
-->
```

That block contains everything needed to verify the document: the BLAKE3 content hash, the Ed25519 signature, the signer's public key, and the full certificate chain back to the root. The content is canonicalized (BOM-stripped, line endings normalized, Unicode NFC-normalized) before hashing, so the same content always produces the same hash regardless of platform.
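The canonicalization steps above are simple enough to sketch. A minimal Python version, with one substitution: the stdlib has no BLAKE3, so this illustration hashes with BLAKE2b purely to demonstrate that canonicalized content hashes identically across platforms.

```python
import hashlib
import unicodedata

def canonicalize(text: str) -> bytes:
    """Normalize content before hashing so the same text always produces
    the same hash regardless of platform (sketch of the steps above)."""
    text = text.lstrip("\ufeff")                           # strip BOM
    text = text.replace("\r\n", "\n").replace("\r", "\n")  # normalize line endings
    text = unicodedata.normalize("NFC", text)              # Unicode NFC normalization
    return text.encode("utf-8")

# Same logical content, authored on different platforms:
windows = "\ufeffCafe\u0301 menu\r\nline two\r\n"   # BOM, CRLF, decomposed accent
unix = "Caf\u00e9 menu\nline two\n"                 # no BOM, LF, precomposed accent

# After canonicalization the hashes agree (BLAKE2b stands in for BLAKE3 here).
assert hashlib.blake2b(canonicalize(windows)).hexdigest() == \
       hashlib.blake2b(canonicalize(unix)).hexdigest()
```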

XION artifacts are **self-contained**. You can verify a signed document on an air-gapped machine with no network access. The trust block carries the entire chain of proof.

### The `[XION]` Badge: The Attribution Lock

In the early 2000s, the internet had a trust problem with *connections*. You typed in a banking URL and had no way to know if your traffic was being intercepted, modified, or routed through an impersonator. SSL/TLS solved that, and the browser padlock became the universal signal: *this connection is secure, and this server is who it claims to be*.

That lock changed everything. Not because encryption was new, but because the lock made trust *visible*. It gave ordinary people a single, recognizable indicator they could check without understanding cipher suites or certificate chains. Over time, the padlock went from a nice-to-have, to an expectation, to a requirement. Today, browsers actively warn you when a site *doesn't* have one.

We're at the same inflection point, one layer up. HTTPS solved *transport trust*: is my connection to this server secure? The question it never answered is *content trust*: is this content authentic, who created it, and has it been tampered with?

The **`[XION]`** badge is the attribution lock.

Where the HTTPS padlock verifies the *channel*, the XION badge verifies the *content itself*. Where the padlock tells you "you're connected to google.com," the badge tells you "Alice Chen at Acme Corp created this image under CC-BY-4.0, and no one has altered it since." And where the padlock is passive, sitting quietly in your browser chrome, the badge is interactive. Click it and the content goes through full cryptographic verification: signature validation, certificate chain resolution, attribution lookup, and perceptual fingerprint matching against the Orbital index. Not a trust *indicator*. A trust *action*.

This article you're reading right now? It's a XION artifact. It was published as `xion+markdown`. The full text was canonicalized, hashed, and signed with a certificate chain that traces back to the Institute of Provenance root. The trust block is embedded at the bottom of the document. The badge is the visible proof. Every registered image carries a BLAKE3 hash in its URL fragment (`#xi={hash}`), and clicking its badge re-fingerprints the image and matches it against the Orbital index. Signed text, signed images, signed documents, all verifiable with a single click.

The trajectory is the same one the padlock followed. First, attributed content will be *notable*, a signal of quality and credibility that separates verified creators from the noise. Then it will be *expected*. Platforms and consumers will look for the badge the way they look for the lock. Eventually, *unattributed* content will be the anomaly that raises questions, just as an HTTP site without a padlock raises browser warnings today. Not because unsigned content will be banned, but because signed content will be so easy to verify that its absence becomes a signal in itself.

The analogy goes deeper than metaphor. It becomes architectural.

When you navigate to an HTTPS site, the browser doesn't show you the page and *then* check the certificate. It verifies the TLS handshake *before the first byte of content renders*. The padlock is green before you see anything. Trust is evaluated ahead of display.

XION content works the same way. This article is delivered as `xion+markdown`. The trust block travels with the content. That means a browser, extension, or any client with our SDK can evaluate the signature, validate the certificate chain, and resolve attribution *before it ever renders the page*. The content arrives, the proof is checked, and by the time you're reading the first word the badge is already verified. No click required for the initial trust check. The same pre-render verification pattern that HTTPS established, applied to content.

Our **Chrome extension** is the first implementation of this model on the open web, the same way early browsers were the first to implement the padlock. The extension scans any page for trust blocks and attributed images, injects `[XION]` badges using Shadow DOM, and verifies content against the Orbital network in real time. But because the trust block is embedded in the content itself, verification can happen at parse time, before the DOM is painted. Browse any site, spot the badge, and know it was already checked. Click for the full attribution details: creator identity, license terms, AI permissions, and the complete cryptographic proof chain.

The SDK makes this pattern available to any client. A CMS can verify articles at ingest. A feed reader can pre-check every post. A native app can validate content before it hits the view layer. Any system that consumes `xion+markdown`, or any XION content type, can run the same pre-display verification that browsers run for TLS. The trust block is the content's certificate. The SDK is the verification engine. The badge is the lock.
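The first step of that pre-display check, locating and parsing the embedded trust block, can be sketched in a few lines of Python. This is an illustrative parser only (the field names follow the example block shown earlier); real verification continues on to signature and chain validation.

```python
import json
import re

# Matches an embedded trust block comment in an xion+markdown document.
TRUST_RE = re.compile(r"<!--\s*xion:trust\s*(\{.*?\})\s*-->", re.DOTALL)

def extract_trust_block(markdown: str):
    """Pull the embedded trust block out of a document and parse it.
    Returns None when the content carries no trust block -- the 'unsigned
    content' case a client can surface before rendering."""
    m = TRUST_RE.search(markdown)
    return json.loads(m.group(1)) if m else None

doc = """# Hello
Some signed content.
<!-- xion:trust
{ "v": 1, "sig_alg": "ed25519", "hash_blake3_hex": "cd9f70ec...", "key_id": "UA4x..." }
-->"""

block = extract_trust_block(doc)
print(block["sig_alg"])  # ed25519
```

A client would run this at parse time, then hand the block's hash, signature, and chain to the verification engine before painting the page.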

The HTTPS padlock told you: *this connection is safe*.
The `[XION]` badge tells you: *this content is real, and here's who made it*.

---

## Perceptual Fingerprinting: Identity That Survives Transformation

Cryptographic hashes are perfect for detecting byte-level tampering. Change one pixel and SHA-256 produces a completely different output. But the real world doesn't work that way. Content gets resized, compressed, converted, screenshotted, and reformatted every time it moves between platforms.

XI Objects solves this with **perceptual fingerprinting**, compact identifiers that remain stable across visual transformations.

| What happens to the content | Cryptographic hash | Perceptual fingerprint |
|---|:---:|:---:|
| Original file | ✅ Match | ✅ Match |
| Resized to 50% | ❌ No match | ✅ Match |
| JPEG recompressed | ❌ No match | ✅ Match |
| Screenshot and repost | ❌ No match | ✅ Match |
| Color-adjusted | ❌ No match | ✅ Match |
| Moderate crop | ❌ No match | ⚠️ Partial match |
| Completely different content | ❌ No match | ❌ No match |

The image fingerprinting pipeline uses **Luminance Waveform Analysis (LWA)**, a frequency-domain technique that transforms images into a 176-dimensional feature vector invariant to scaling, rotation, and compression. The resulting fingerprint is compact enough for fast similarity search yet rich enough to distinguish near-identical content from true duplicates.

For deeper verification, a second-phase composite analysis brings in multiple independent signals (structural, textural, and spatial) that must agree for a match to be confirmed.

Video fingerprinting goes further: dual-track visual and audio analysis, temporal alignment through audio landmarks, A/V sync verification, and manipulation detection across the entire timeline.
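To make the stability property concrete, here is the classic *average hash* in pure Python. This is not LWA (which is a different, frequency-domain technique); it is the simplest member of the same family, shown only to illustrate why a perceptual fingerprint survives a global brightness shift that would scramble any cryptographic hash.

```python
def average_hash(pixels):
    """64-bit average hash over an 8x8 luminance grid: each bit records
    whether a cell is brighter than the grid's mean. A toy stand-in for
    perceptual fingerprinting, not XI Objects' LWA pipeline."""
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(a, b):
    """Number of differing bits -- the fingerprint distance metric."""
    return bin(a ^ b).count("1")

# A synthetic 8x8 luminance grid and a brightness-shifted copy of it.
original = [(x * 7 + y * 13) % 97 for x in range(8) for y in range(8)]
brightened = [p + 40 for p in original]      # global brightness adjustment
inverted = [255 - p for p in original]       # a drastic edit, for contrast

# Brightness shift: every cell moves with the mean, so the hash is unchanged.
print(hamming(average_hash(original), average_hash(brightened)))  # 0
# Inversion flips the bright/dark relationship, so the distance is large.
print(hamming(average_hash(original), average_hash(inverted)) > 0)  # True
```

Real pipelines compute the grid from a downscaled image, which is what buys resilience to resizing and recompression as well.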

<!-- IMAGE SUGGESTION: Side-by-side comparison showing an original photo and 4-5 transformed versions (resized, compressed, screenshotted, color-adjusted) with fingerprint match scores. Show the spectral fingerprint as a small visual element (heatmap or vector bar) beneath each, demonstrating stability. -->

---

## The Orbital Network

Signing content is only half the problem. The other half is letting anyone, anywhere, verify it. Fast.

The **Orbital Network** is XI Objects' distributed infrastructure for content attestation and verification. Think of it as DNS for digital provenance, a globally distributed lookup system where you can resolve the identity and history of any signed content.

**Certified Orbitals** are the backbone of the network. These are authorized nodes that hold intermediate certificates and provide:

- Certificate authority functions: issuing and managing creator certificates
- Content registration: recording fingerprints and provenance manifests
- Verification services: resolving trust chains and validating signatures
- Epoch management: cryptographically advancing system state with signed checkpoints

**Mirror Orbitals** extend the network's reach. They provide read-only verification and caching at the edge, delivering sub-100ms lookups without the ability to issue certificates or modify records. Deploy them globally for performance, or deploy them **behind your firewall** for a fully local XI deployment.

That last point matters. An enterprise can run its own Certified Orbital inside its network, issuing certificates to internal teams, registering content, and verifying provenance without any data leaving the building. Internal documents, proprietary media, AI-generated assets: all signed, all attributed, all verifiable within the org's own infrastructure. When those assets need to be shared externally, the trust chain still resolves because the certificates are rooted in the same hierarchy. Local operation, global interoperability.

The result is a verification network where every lookup is backed by **Sparse Merkle Tree proofs**, cryptographic evidence that a record exists (or doesn't exist) in the current state of the system. Not a promise. A proof.
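The shape of such a proof check is small enough to sketch. A minimal binary Merkle inclusion proof in Python, with one substitution: SHA-256 stands in for BLAKE3 (not in the stdlib), and a plain 4-leaf tree stands in for the network's Sparse Merkle Tree, purely to show how a verifier recomputes the root from a leaf and its sibling path.

```python
import hashlib

def h(data: bytes) -> bytes:
    # SHA-256 stands in for BLAKE3 in this sketch.
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes, proof, root: bytes) -> bool:
    """Recompute the root from a leaf and its sibling path. `proof` is a
    list of (sibling_hash, sibling_is_on_the_right) pairs, leaf upward.
    The lookup is trusted only if the recomputed root matches."""
    node = h(leaf)
    for sibling, is_right in proof:
        node = h(node + sibling) if is_right else h(sibling + node)
    return node == root

# Build a 4-leaf tree by hand, then prove leaf b"b" is in it.
leaves = [h(x) for x in (b"a", b"b", b"c", b"d")]
n01, n23 = h(leaves[0] + leaves[1]), h(leaves[2] + leaves[3])
root = h(n01 + n23)

proof_b = [(leaves[0], False), (n23, True)]  # b's siblings on the way up
print(verify_inclusion(b"b", proof_b, root))  # True
```

A sparse variant extends this idea to a huge keyspace so that *absence* of a record is provable too: the path to an empty slot recomputes to the same signed root.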

<!-- IMAGE SUGGESTION: Globe visualization with orbital nodes connected by arcs. Core certified orbitals shown as larger nodes with purple glow, mirror orbitals as smaller cyan nodes distributed globally. Connection lines show the mesh network. Dark space background. -->

---

## Attribution: Who Made This, On Their Terms

Verification tells you *whether* content is authentic. Attribution tells you *who* is behind it: their identity, their organization, their license terms, their AI permissions. In XI Objects, attribution is a first-class concept with a unique property: **it's bound to the signing certificate, not to individual pieces of content.**

When a creator sets up their XI Objects profile, they define an **attribution manifest**: display name, organization, contact info, default license, AI training permissions, and any other metadata they choose. That manifest is bound to their certificate Key ID (KID) and published to the Orbital network as a `key-attribution` record. Every piece of content signed with that certificate automatically inherits its attribution.

This design has real consequences:

**Attribution travels with the key, not the file.** Sign a thousand images with the same certificate, and they all resolve to the same attribution manifest. No need to embed metadata in every file. No per-file attribution that can be stripped or faked.

**Attribution updates are forward-only.** A creator can update their attribution manifest at any time. Change their license, update their organization, adjust AI permissions. But that change only takes effect going forward. Content signed under the previous certificate (max 24-hour lifetime) retains whatever attribution was active when it was signed. You can't retroactively rewrite the terms of content you already published. The past is immutable.

**Any manifest format can ride alongside.** XI Objects defines its own `xi-attribution` endorsement type, but the endorsement system is open by design. A signing metadata record can carry multiple endorsements: an XI attribution manifest, a C2PA manifest, a custom enterprise format, all attached to the same signing event. The system doesn't dictate what your attribution looks like. It just guarantees it's cryptographically bound to the content and the signer.

```
Attribution Manifest (bound to KID)
├── displayName: "Alice Chen"
├── organization: "Acme Corp"
├── defaultLicense: "CC-BY-4.0"
├── aiPermissions:
│   ├── inference: "allowed"
│   ├── generativeTraining: "notAllowed"
│   └── dataMining: "constrained"
└── endorsements:
    ├── { type: "xi-attribution", payload: "..." }
    └── { type: "c2pa", payload: "..." }         ← optional
```

The result: when someone verifies a piece of content, they don't just get a boolean. They get the creator's name, their organization, their license, and their explicit permissions for how AI systems can use their work, all cryptographically tied to the certificate that signed it.
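The forward-only, key-bound semantics can be sketched as an append-only log keyed by certificate KID. Everything here is hypothetical scaffolding (the in-memory dict, the truncated key id, the field names) standing in for the Orbital `key-attribution` records; the point is the lookup rule: resolution uses the manifest that was effective at signing time.

```python
from datetime import datetime, timezone

# Hypothetical in-memory stand-in for the Orbital key-attribution index.
# Each key id maps to an append-only list of (effective_from, manifest);
# updates append, they never rewrite history.
ATTRIBUTION_LOG = {
    "UA4xFIgM": [
        (datetime(2026, 1, 1, tzinfo=timezone.utc),
         {"displayName": "Alice Chen", "defaultLicense": "CC-BY-4.0"}),
        (datetime(2026, 3, 1, tzinfo=timezone.utc),  # later license change
         {"displayName": "Alice Chen", "defaultLicense": "CC-BY-NC-4.0"}),
    ],
}

def resolve_attribution(key_id, signed_at):
    """Return the manifest active when the content was signed. A manifest
    update only affects content signed after it takes effect."""
    active = None
    for effective_from, manifest in ATTRIBUTION_LOG.get(key_id, []):
        if effective_from <= signed_at:
            active = manifest
    return active

# Content signed in February keeps its original license...
feb = resolve_attribution("UA4xFIgM", datetime(2026, 2, 15, tzinfo=timezone.utc))
print(feb["defaultLicense"])  # CC-BY-4.0
# ...while content signed after the update resolves to the new terms.
mar = resolve_attribution("UA4xFIgM", datetime(2026, 3, 15, tzinfo=timezone.utc))
print(mar["defaultLicense"])  # CC-BY-NC-4.0
```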

<!-- IMAGE SUGGESTION: A profile card / attribution overlay showing creator identity, organization, license badge, and AI permissions toggles — appearing over a verified image. Use the brand card style (#1a1a2e background, 8px radius, sand text). Show the "verified" check with the XI purple accent. -->

---

## Built for AI Pipelines

XI Objects isn't just for human creators. It's infrastructure for the entire content lifecycle, including AI training, generation, and output verification.

**At ingestion**, content entering an AI training pipeline can be verified for provenance. Does this image have a valid creator certificate? Is it licensed for training? Who should be compensated? XI Objects answers these questions programmatically.
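An ingest gate built on those answers might look like the following. This is an illustrative policy sketch, not SDK code: the input dict and its field names are hypothetical, loosely mirroring the attribution manifest fields described later in this article.

```python
def admit_for_training(verification):
    """Decide whether verified content may enter a generative-training
    pipeline. Illustrative policy over a hypothetical verification result."""
    if not verification.get("chain_valid"):
        return False, "no valid creator certificate"
    perms = verification.get("attribution", {}).get("aiPermissions", {})
    if perms.get("generativeTraining") != "allowed":
        return False, "creator has not permitted generative training"
    # Admitted: the attribution record also tells you whom to credit.
    return True, verification["attribution"].get("displayName", "unknown")

result = {
    "chain_valid": True,
    "attribution": {
        "displayName": "Alice Chen",
        "aiPermissions": {"inference": "allowed",
                          "generativeTraining": "notAllowed"},
    },
}
print(admit_for_training(result))  # (False, 'creator has not permitted generative training')
```

The same gate, run over every item at ingest, turns "is this licensed for training?" from a legal review into a programmatic check.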

**At generation**, AI-produced content can be signed at the point of creation, establishing a clear, cryptographically bound record that the output is AI-generated, by which model, under which organization's authority.

**At verification**, downstream consumers (newsrooms, platforms, end users) can check any piece of content against the Orbital network. The verification pipeline doesn't just answer "was this AI-generated?" It runs a multi-stage analysis: extract a perceptual fingerprint, search the Orbital index for candidates, compare across multiple independent signals, validate the certificate chain against pinned roots, and resolve attribution back to a specific creator and organization. One call. Full provenance.

The **distillation pipeline** takes signed XION content and transforms it into normalized, schema-validated facts with full provenance. These facts can feed search indices, vector databases, analytics platforms, and RAG systems, carrying their trust status all the way through.

---

## Under the Hood

For the engineers: XI Objects is built on .NET with zero-copy performance primitives and a composable, SOLID architecture. The stack is designed for high-throughput, low-latency operation at scale.

**Core libraries**, available as NuGet packages:

- **Xio.Crypto**: Ed25519 signatures, BLAKE3 hashing, Sparse Merkle Tree proof verification
- **Xio.Trust**: Document canonicalization and trust block signing
- **Xio.Verification**: The verification counterpart to Trust. Validates XION document signatures, runs the full media verification pipeline (fingerprint extraction → Orbital search → multi-signal comparison → certificate chain validation), and resolves attribution by certificate Key ID
- **Xio.Serialization**: XION format parsing, flattening, and reconstruction
- **Xio.Client**: SDK for querying the Orbital network, resolving domains, verifying trust blocks
- **Xio.Protocol**: Wire protocol encoding/decoding for the .xio namespace

**Services**, deployed as containers:

- **Xio.Control**: Certificate authority and record management
- **Xio.Orbital**: Content delivery and verification gateway
- **Xio.Monitoring**: Transparency logging, audit, and observability

Every component is interfaced, dependency-injected, and independently testable. The architecture separates the **distribution layer** (how trust propagates through the network) from the **operational layer** (how content is ingested, signed, and processed), so you can use the pieces you need without buying into the whole stack.

---

## Try It Now, Build What's Next

XI Objects has a hands-on **Explore** section where you can put the technology through its paces. Creating an account takes a verified email, because this is an attribution platform, and attribution starts with knowing who you are.

Once signed in, you can create and manage your **XI Attribution profile**: your display name, organization, license preferences, AI permissions, and any other metadata you want attached to your work. This is the identity that follows your content.

From there:

**Lookup.** The simplest way to experience XI Objects. Upload any image and find out if it has attribution. This is the consumer's perspective, the question every person on the internet should be able to ask: *who made this, and can I trust it?* If the image matches a registered original in the Orbital index, you'll see the creator, their organization, license terms, and AI permissions. If it doesn't match, you'll know that too.

**Media Lab.** The science side. Upload images and video to see Luminance Waveform Analysis in action. Compare originals against compressed, cropped, and reformatted copies. Inspect fingerprint vectors, similarity scores, and composite verification results. When you upload media, you're issued a short-lived signing certificate and your attribution profile is bound to the content, so every image you register in the Lab is cryptographically yours, verifiable by anyone, immediately.

**Workflows.** An LLM-powered experience built on XI Objects infrastructure. Generate and interact with content that carries provenance from the moment it's created. *(Coming soon.)*

[Explore XI Objects →](/explore)

On the roadmap:

- **Public SDK**: Xio.Client and Xio.Crypto on NuGet, so you can sign and verify in your own applications
- **Chrome extension**: `[XION]` badge verification on any website
- **Certified Orbital program**: Run your own node in the trust network
- **Enterprise integration guides**: Bring XI Objects into your CI/CD and content pipelines

The age of unverifiable content is ending. Not because we'll make it impossible to create fakes. That ship has sailed. But because we can make it trivial to prove what's real.

---

**XI Objects**: Provenance and attribution infrastructure for AI.

[Read the documentation →](/docs/xio/getting-started)  
[Explore the concepts →](/docs/xio/concepts)
<!-- xion:trust
{
  "v": 1,
  "canon_v": 1,
  "ctx": "xiobjects.com/content",
  "hash_blake3_hex": "642beb228fe613a6ef6fc88568ed4dd8f5f264218d2eb645c93d2b9c7e25a7e2",
  "hash_sha256_hex": null,
  "sig_alg": "ed25519",
  "sig_b64": "12dDGaUcrusRXl1jw1Fn3OhREe8x09sGkf5QYNxttkovorqLWHYjokNTqUZTLN4dddl7zAV5T8stBYvn0w3TCA",
  "pubkey_b64": "ff4Npz7sRQH_vUn9FY8Wrc8v_00Z49h15EyQgKVTHR0",
  "x509_chain_pem": [
    "-----BEGIN CERTIFICATE-----\r\nMIIB9TCCAaegAwIBAgIRAM4lRb8aI/FYHOJD5OYqefQwBQYDK2VwMC4xLDAqBgNV\r\nBAMMI1hJIE9iamVjdHMgSW5jIENvbnRyb2wgSW50ZXJtZWRpYXRlMB4XDTI2MDIx\r\nNTIyMDg0OFoXDTI2MDMxNzIyMDg0OFowSzEeMBwGA1UEAwwVeGlvLWNvbnRlbnQt\r\ncHVibGlzaGVyMRcwFQYDVQQKDA5YSSBPYmplY3RzIEluYzEQMA4GA1UECwwHQ29u\r\ndGVudDAqMAUGAytlcAMhAH3\u002BDac\u002B7EUB/71J/RWPFq3PL/9NGePYdeRMkIClUx0d\r\no4G8MIG5MAwGA1UdEwEB/wQCMAAwDgYDVR0PAQH/BAQDAgeAMBMGA1UdJQQMMAoG\r\nCCsGAQUFBwMkMGUGA1UdIwReMFyAFDspt5hZsP6rNX4Cq7owpMYa05OyoS6kLDAq\r\nMSgwJgYDVQQDDB9JbnN0aXR1dGUgb2YgUHJvdmVuYW5jZSBSb290IENBghRSYDf4\r\nsUJ\u002B9h\u002Bod0\u002BZRK/X/JSUBTAdBgNVHQ4EFgQUP5BTxnjCAxVKgMvFhx40ljlGOAkw\r\nBQYDK2VwA0EAjKlSBzHgXpPM2PA\u002BSJ/rMso5OEqtWIHGo/zr2QSuZRXhSWafIbk9\r\nZnl0kKZCqUB2HpCfgnpOGCPK6SlefwQsAQ==\r\n-----END CERTIFICATE-----\r\n",
    "-----BEGIN CERTIFICATE-----\r\nMIIByDCCAXqgAwIBAgIUUmA3\u002BLFCfvYfqHdPmUSv1/yUlAUwBQYDK2VwMCoxKDAm\r\nBgNVBAMMH0luc3RpdHV0ZSBvZiBQcm92ZW5hbmNlIFJvb3QgQ0EwHhcNMjUxMTAy\r\nMDMxNzEyWhcNMzAxMTAxMDMxNzEyWjAuMSwwKgYDVQQDDCNYSSBPYmplY3RzIElu\r\nYyBDb250cm9sIEludGVybWVkaWF0ZTAqMAUGAytlcAMhAFSS/pggSRmTcAMko7uc\r\nATH8OHgxVymd5mBFlPXbJkgio4GtMIGqMBIGA1UdEwEB/wQIMAYBAf8CAQAwDgYD\r\nVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBQ7KbeYWbD\u002BqzV\u002BAqu6MKTGGtOTsjBlBgNV\r\nHSMEXjBcgBQAZRTDswSVORu\u002BkUOKX6WvrOvmQKEupCwwKjEoMCYGA1UEAwwfSW5z\r\ndGl0dXRlIG9mIFByb3ZlbmFuY2UgUm9vdCBDQYIUJqoJlpiSFg\u002B7W5IJLMrLttgR\r\nQp4wBQYDK2VwA0EA5FOht7YOsVRPp/FOKMQ\u002B3Mo9JxrvGR3ylKWAWNm6OUV7N3DB\r\nI9cD62wU5I0d0EKDBy0CX9DnoqUyxv5yguraAA==\r\n-----END CERTIFICATE-----\r\n",
    "-----BEGIN CERTIFICATE-----\r\nMIIBaTCCARugAwIBAgIUJqoJlpiSFg\u002B7W5IJLMrLttgRQp4wBQYDK2VwMCoxKDAm\r\nBgNVBAMMH0luc3RpdHV0ZSBvZiBQcm92ZW5hbmNlIFJvb3QgQ0EwHhcNMjUxMTAy\r\nMDMwNTEyWhcNMzUxMDMxMDMwNTEyWjAqMSgwJgYDVQQDDB9JbnN0aXR1dGUgb2Yg\r\nUHJvdmVuYW5jZSBSb290IENBMCowBQYDK2VwAyEAEWNZl\u002Br3IC7\u002BgBh90Yo1kWk1\r\npZCVzVuFdFT7qBBU8W2jUzBRMB0GA1UdDgQWBBQAZRTDswSVORu\u002BkUOKX6WvrOvm\r\nQDAfBgNVHSMEGDAWgBQAZRTDswSVORu\u002BkUOKX6WvrOvmQDAPBgNVHRMBAf8EBTAD\r\nAQH/MAUGAytlcANBAO6QeydOFNrN75qNyftggYudsxMyl4w9qWkSdZ6hlhrRcbSr\r\niG9Si0kbrIJOwYB/LTBU0RM4Rl\u002Bo9PM3Qp0mPwo=\r\n-----END CERTIFICATE-----\r\n"
  ],
  "key_id": "-GCB4sEBzFethc5Pd0Rzyn_6ySyHB4QaqD9DAoW9ViE",
  "created_at": "2026-02-23T15:37:04Z"
}
-->