Clients submit images, videos, and audio files to DFPN for deepfake detection. The network routes your request to multiple independent workers, aggregates their results through consensus, and returns a verified verdict.
As a client, you:
- Upload media to off-chain storage (IPFS, S3, or any HTTP endpoint)
- Submit an analysis request to the network with the content hash and storage URI
- Pay a fee in SOL
- Receive an aggregated result backed by multi-worker consensus
- Get a full on-chain audit trail for every analysis
You do not need to stake tokens or run any infrastructure.
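Requests reference media by content hash. As a sketch of what that looks like (SHA-256 is an assumption here; confirm the exact digest algorithm the network expects in the SDK reference):

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Return a hex digest used to reference media in a request.

    SHA-256 is assumed for illustration; the protocol defines the
    actual digest algorithm.
    """
    return hashlib.sha256(data).hexdigest()

# Example: hash the raw bytes of an image before uploading it.
digest = content_hash(b"example image bytes")
```

The same digest is what workers recompute after downloading your media, which is how a `CONTENT_HASH_MISMATCH` error can be detected.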
| Method | Best For | Complexity |
|---|---|---|
| TypeScript SDK | Node.js backends, web apps | Low |
| Python SDK | ML pipelines, data processing | Low |
| REST API | Quick prototypes, language-agnostic | Low |
| Direct RPC | Custom Solana integrations | Medium |
```bash
npm install @dfpn/sdk @solana/web3.js
```

```typescript
import { DFPNClient, Modality } from '@dfpn/sdk';
import { Keypair } from '@solana/web3.js';

const client = new DFPNClient({
  network: 'devnet',
  wallet: Keypair.fromSecretKey(/* your keypair bytes */),
});

const request = await client.submitRequest({
  mediaPath: './photo.jpg',
  modalities: [Modality.FaceManipulation, Modality.ImageAuthenticity],
  minWorkers: 3,
  maxFee: 0.01, // SOL
  deadline: Date.now() + 300_000, // 5 minutes
});
console.log('Request ID:', request.id);

const result = await client.waitForResult(request.id, {
  timeout: 360_000, // 6 minutes
});
console.log('Verdict:', result.verdict);
console.log('Confidence:', result.confidence);
console.log('Workers:', result.workerResults.length);
```

```python
from dfpn import DFPNClient, Modality
from solders.keypair import Keypair

client = DFPNClient(
    network="devnet",
    wallet=Keypair.from_bytes(wallet_bytes),
)

request = client.submit_request(
    media_path="./photo.jpg",
    modalities=[Modality.FACE_MANIPULATION],
    min_workers=3,
    max_fee=0.01,
    deadline_seconds=300,
)

result = client.wait_for_result(request.id, timeout=360)
print(f"Verdict: {result.verdict}")
print(f"Confidence: {result.confidence}%")
```

Fees vary by media type and complexity. These are baseline per-request fees:
| Modality | Base Fee (SOL) | Typical Processing Time |
|---|---|---|
| Image Authenticity | ~0.002 | 30-60 seconds |
| Face Manipulation | ~0.003 | 45-90 seconds |
| AI-Generated Image | ~0.003 | 45-90 seconds |
| Video Authenticity | ~0.008 | 60-180 seconds |
| Voice Cloning | ~0.004 | 30-60 seconds |
!!! info "Fees are dynamic"
    Actual fees depend on worker availability, request priority, and the number of workers you require. Use `client.estimateFee()` to get a current estimate before submitting.
Fees are split across network participants:
| Recipient | Share |
|---|---|
| Workers | 65% |
| Model Developers | 20% |
| Treasury | 10% |
| Insurance Pool | 5% |
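The split above is simple percentage arithmetic. A minimal sketch (the `split_fee` helper is illustrative, not part of the SDK):

```python
# Fee split shares from the table above, in percent.
SHARES = {
    "workers": 65,
    "model_developers": 20,
    "treasury": 10,
    "insurance_pool": 5,
}

def split_fee(fee_sol: float) -> dict:
    """Split a request fee across network participants per the published shares."""
    return {name: round(fee_sol * pct / 100, 9) for name, pct in SHARES.items()}

# Example: how a 0.01 SOL fee is distributed.
split = split_fee(0.01)
```

For a 0.01 SOL request, workers collectively receive 0.0065 SOL, model developers 0.002 SOL, the treasury 0.001 SOL, and the insurance pool 0.0005 SOL.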
Here is what happens after you submit a request:
```
You submit      Workers        Workers       Network          You receive
a request  -->  analyze   -->  commit   -->  aggregates  -->  a result
                the media      & reveal      consensus
```
1. **Submit** -- Your request is recorded on-chain with the content hash, storage URI, fee, and deadline.
2. **Route** -- The network matches your request to workers who support the requested modalities.
3. **Analyze** -- Workers download your media and run their detection models.
4. **Commit** -- Each worker submits a cryptographic hash of their result (no one can see others' answers).
5. **Reveal** -- After all commits are in, workers reveal their actual results.
6. **Aggregate** -- The network combines results using reputation-weighted voting.
7. **Finalize** -- The consensus result is recorded on-chain and fees are distributed.
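The commit-reveal mechanism in steps 4-5 can be sketched as hash commitments over a result plus a private salt. This is illustrative only; the protocol defines the actual serialization and hash scheme used on-chain:

```python
import hashlib
import json
import secrets

def commit(result: dict, salt: bytes) -> str:
    """Commit phase: publish only a hash of the result plus a private salt,
    so other workers cannot copy the answer before revealing their own."""
    payload = json.dumps(result, sort_keys=True).encode() + salt
    return hashlib.sha256(payload).hexdigest()

def verify_reveal(commitment: str, result: dict, salt: bytes) -> bool:
    """Reveal phase: anyone can check the revealed result matches the commitment."""
    return commit(result, salt) == commitment

# A worker commits, then later reveals its result and salt.
salt = secrets.token_bytes(32)
result = {"verdict": "Manipulated", "confidence": 92}
commitment = commit(result, salt)
```

Because the salt is random and private until the reveal, the published commitment leaks nothing about the worker's verdict, yet any later substitution of a different result fails verification.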
Every completed request returns a structured result:
```json
{
  "verdict": "Manipulated",
  "confidence": 87,
  "consensusType": "Majority",
  "workerResults": [
    {
      "worker": "7xKw...3nPq",
      "modelId": "face-forensics-sbi",
      "verdict": "Manipulated",
      "confidence": 92,
      "detections": [
        {
          "type": "face_swap",
          "region": { "x": 120, "y": 80, "w": 200, "h": 200 },
          "confidence": 94
        }
      ]
    }
  ],
  "audit": {
    "requestTx": "5Uj2...kLm9",
    "finalizeTx": "8Pq1...wNx4",
    "workerCount": 5,
    "commitCount": 5,
    "revealCount": 5
  }
}
```

| Verdict | Meaning |
|---|---|
| Authentic | No manipulation detected; media appears genuine |
| Manipulated | Manipulation detected with supporting evidence |
| Inconclusive | Workers could not reach consensus or confidence is low |
The confidence score ranges from 0 to 100:
- 80-100: High confidence in the verdict
- 50-79: Moderate confidence; consider manual review
- Below 50: Low confidence; treat as inconclusive
The `detections` array contains specific findings from each worker, including manipulation type, affected region (for images/video), and per-detection confidence.
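The confidence bands above translate directly into triage logic. A sketch (the `triage` function and its action labels are illustrative; pick thresholds that fit your application):

```python
def triage(verdict: str, confidence: int) -> str:
    """Map a DFPN result to an action using the documented confidence bands."""
    if verdict == "Inconclusive" or confidence < 50:
        # Below 50, or no consensus: treat as inconclusive.
        return "treat_as_inconclusive"
    if confidence < 80:
        # 50-79: moderate confidence, route to a human.
        return "manual_review"
    # 80-100: high confidence, act on the verdict directly.
    return "act_on_verdict"
```

For example, a `Manipulated` verdict at confidence 65 would be routed to manual review rather than acted on automatically.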
Automatically screen user-uploaded media before publication. Flag manipulated content for human review.
```typescript
if (result.verdict === 'Manipulated' && result.confidence > 80) {
  flagForReview(mediaId, result);
}
```

Verify the authenticity of photos and videos before publication. Use the on-chain audit trail as evidence of verification.
Detect face-swapped or AI-generated photos in identity documents and selfie verification flows.
Integrate detection into upload pipelines to label or restrict synthetic media, giving users transparency about content authenticity.
```typescript
import { DFPNError, ErrorCode } from '@dfpn/sdk';

try {
  const result = await client.submitRequest({ /* ... */ });
} catch (error) {
  if (error instanceof DFPNError) {
    switch (error.code) {
      case ErrorCode.INSUFFICIENT_FUNDS:
        // Not enough SOL for the fee
        break;
      case ErrorCode.NO_WORKERS_AVAILABLE:
        // No workers online for requested modalities
        break;
      case ErrorCode.DEADLINE_TOO_SHORT:
        // Deadline must be at least 60 seconds
        break;
      case ErrorCode.CONTENT_HASH_MISMATCH:
        // Uploaded file does not match the provided hash
        break;
    }
  }
}
```

!!! tip "Set realistic deadlines"
    Video analysis can take several minutes. Set deadlines of at least 3 minutes for images and 5-10 minutes for video to give workers enough time to process and go through the commit-reveal cycle.