Merged

Commits (26)
333fbfa
use server connection manager connection count instead of rate limite…
cwilvx Feb 16, 2026
5fdf607
add debug prints
cwilvx Feb 16, 2026
b8002a7
remove connections direct read + bump conns per IP to 20
cwilvx Feb 21, 2026
3309679
debug: set max connections per peer to 1
cwilvx Feb 25, 2026
22314c8
refactor omni protocol to centralize sockets and support biderictiona…
cwilvx Feb 25, 2026
c398aff
logging
cwilvx Mar 2, 2026
ca795e1
handle socket error on pool.acquire
cwilvx Mar 2, 2026
91e216c
logging
cwilvx Mar 2, 2026
a8170ef
logging
cwilvx Mar 2, 2026
ff87e55
fix: incoherent genesis hash
cwilvx Mar 3, 2026
4eedecc
fix: parseInt radix in rateLimit config
cwilvx Mar 7, 2026
73d0c6f
fix: disable nuking older connections
cwilvx Mar 9, 2026
fb01281
fix: locating inflight requests from different connections
cwilvx Mar 10, 2026
6adfa19
fix: rate limit redundant checks
cwilvx Mar 10, 2026
0059896
put back checkOfflinePeers
cwilvx Mar 10, 2026
836d9c5
prevent writing to unwrittable socket in sendErrorResponse
cwilvx Mar 10, 2026
b710db7
try: use alternate socket to send response
cwilvx Mar 13, 2026
14d2ce0
preserve 1 peer connection during cleanup
cwilvx Mar 13, 2026
99a4a49
handle 0xff response opcode
cwilvx Mar 14, 2026
1c53ad3
Merge branch 'testnet' into fixnet-patch-2
cwilvx Mar 14, 2026
72b95f5
handle ERR_SOCKET_CLOSED_BEFORE_CONNECTION
cwilvx Mar 17, 2026
a5a09a4
use sockets for local comms
cwilvx Mar 17, 2026
a827c07
cleanup + update l2ps to use global omniprotocol connection pool
cwilvx Mar 18, 2026
f1ea0ef
Merge 'origin/stabilisation' into fixnet-patch-2
cwilvx Mar 18, 2026
3b2bf09
fix: omni protocol port config
cwilvx Mar 19, 2026
afc52a5
fix: incoherent tx on mempool merge
cwilvx Mar 19, 2026
run — 6 changes: 3 additions & 3 deletions
@@ -7,8 +7,8 @@ NO_TUI=false
EXTERNAL_DB=false

GIT_PULL=true
TLSNOTARY_DISABLED=false
MONITORING_DISABLED=false
TLSNOTARY_DISABLED=true
MONITORING_DISABLED=true
Comment on lines +10 to +11

⚠️ Potential issue | 🟠 Major

Honor operator overrides for these feature defaults.

These assignments hard-force both stacks off for every run. Because they overwrite inherited env values and there is no positive CLI flag, MONITORING_DISABLED=false ./run and TLSNOTARY_DISABLED=false ./run still leave both features disabled.

💡 Suggested fix
-TLSNOTARY_DISABLED=true
-MONITORING_DISABLED=true
+: "${TLSNOTARY_DISABLED:=true}"
+: "${MONITORING_DISABLED:=true}"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
TLSNOTARY_DISABLED=true
MONITORING_DISABLED=true
: "${TLSNOTARY_DISABLED:=true}"
: "${MONITORING_DISABLED:=true}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@run` around lines 10 - 11, The script currently force-sets TLSNOTARY_DISABLED
and MONITORING_DISABLED which overrides any environment/CLI-provided values;
change the assignments for TLSNOTARY_DISABLED and MONITORING_DISABLED so they
only set defaults when the variables are empty (e.g., use shell parameter
expansion or a test like [ -z "$TLSNOTARY_DISABLED" ] &&
TLSNOTARY_DISABLED=true), thereby honoring exported or CLI-provided values while
still providing a default.
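For illustration, the same "default only when unset" rule can be sketched on the Node/TypeScript side of the config (envFlag is a hypothetical helper, not code from this PR):

```typescript
// Hypothetical helper: apply the default only when the variable is unset,
// so exported or CLI-provided values are honored.
function envFlag(name: string, def: boolean): boolean {
    const raw = process.env[name]
    if (raw === undefined) return def
    return raw.toLowerCase() === "true"
}

// Operator override wins:
process.env.TLSNOTARY_DISABLED = "false"
const tlsnotaryDisabled = envFlag("TLSNOTARY_DISABLED", true)

// Unset variable falls back to the default:
delete process.env.MONITORING_DISABLED
const monitoringDisabled = envFlag("MONITORING_DISABLED", true)
```

This mirrors the shell fix above: the default never clobbers an explicit value.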


# Detect platform for cross-platform compatibility
PLATFORM=$(uname -s)
@@ -343,7 +343,7 @@ check_system_requirements() {
echo "✅ PostgreSQL ${PG_PORT} is now available (stopped leftover container)"
fi
else
echo "✅ PostgreSQL ${$PG_PORT} is available"
echo "✅ PostgreSQL ${PG_PORT} is available"
fi
fi

src/config/loader.ts — 23 changes: 11 additions & 12 deletions
@@ -67,19 +67,18 @@ export function loadConfig(): Readonly<AppConfig> {
const d = DEFAULT_CONFIG

const serverPort = envInt(EnvKey.SERVER_PORT, d.server.serverPort)
const serverConfig = {
serverPort,
rpcPort: envInt(EnvKey.RPC_PORT, d.server.rpcPort),
rpcPgPort: envInt(EnvKey.RPC_PG_PORT, d.server.rpcPgPort),
signalingServerPort: envInt(EnvKey.SIGNALING_SERVER_PORT, d.server.signalingServerPort),
rpcSignalingPort: envInt(EnvKey.RPC_SIGNALING_PORT, d.server.rpcSignalingPort),
mcpServerPort: envInt(EnvKey.MCP_SERVER_PORT, d.server.mcpServerPort),
rpcMcpPort: envInt(EnvKey.RPC_MCP_PORT, d.server.rpcMcpPort),
}

const config: AppConfig = {
server: {
serverPort,
rpcPort: envInt(EnvKey.RPC_PORT, d.server.rpcPort),
rpcPgPort: envInt(EnvKey.RPC_PG_PORT, d.server.rpcPgPort),
signalingServerPort: envInt(EnvKey.SIGNALING_SERVER_PORT, d.server.signalingServerPort),
rpcSignalingPort: envInt(EnvKey.RPC_SIGNALING_PORT, d.server.rpcSignalingPort),
mcpServerPort: envInt(EnvKey.MCP_SERVER_PORT, d.server.mcpServerPort),
rpcMcpPort: envInt(EnvKey.RPC_MCP_PORT, d.server.rpcMcpPort),
omniPort: envInt(EnvKey.OMNI_PORT, d.server.omniPort) || serverPort + 1,
},

server: serverConfig,
database: {
host: envStr(EnvKey.PG_HOST, d.database.host),
port: envInt(EnvKey.PG_PORT, d.database.port),
@@ -126,7 +125,7 @@ export function loadConfig(): Readonly<AppConfig> {

omni: {
enabled: envBool(EnvKey.OMNI_ENABLED, d.omni.enabled),
port: envInt(EnvKey.OMNI_PORT, d.omni.port),
port: envInt(EnvKey.OMNI_PORT, d.omni.port) || serverConfig.rpcPort + 1,
fatal: envBool(EnvKey.OMNI_FATAL, d.omni.fatal),
mode: envStr(EnvKey.OMNI_MODE, d.omni.mode),
tls: {
src/config/types.ts — 1 change: 0 additions & 1 deletion
@@ -13,7 +13,6 @@ export interface ServerConfig {
rpcSignalingPort: number
mcpServerPort: number
rpcMcpPort: number
omniPort: number
}

// --- Database (PostgreSQL) ---
src/index.ts — 127 changes: 74 additions & 53 deletions
@@ -183,11 +183,11 @@
) {
CategorizedLogger.getInstance().setMinLevel(
level as
| "debug"
| "info"
| "warning"
| "error"
| "critical",
| "debug"
| "info"
| "warning"
| "error"
| "critical",
)
log.info(`[MAIN] Log level set to: ${level}`)
} else {
@@ -257,7 +257,7 @@
// Allow overriding pg port through RPC_PG_PORT
indexState.PG_PORT = cfg.database.port
// Allow overriding server port through RPC_PORT
indexState.SERVER_PORT = cfg.server.rpcPort || cfg.server.serverPort
indexState.SERVER_PORT = cfg.server.rpcPort

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check the default value for rpcPort in defaults.ts
rg -n 'rpcPort' src/config/defaults.ts

Repository: kynesyslabs/node

Length of output: 82


🏁 Script executed:

# Check the full defaults.ts to understand server config structure
head -50 src/config/defaults.ts

Repository: kynesyslabs/node

Length of output: 1281


🏁 Script executed:

# Check if serverPort still exists in defaults
rg -n 'serverPort' src/config/defaults.ts

Repository: kynesyslabs/node

Length of output: 217


🏁 Script executed:

# Look at the surrounding context in index.ts around line 260
sed -n '250,270p' src/index.ts

Repository: kynesyslabs/node

Length of output: 957


🏁 Script executed:

# Check for any env var defaults for rpcPort
rg -n 'RPC_PORT|rpcPort' src/config/ --type ts

Repository: kynesyslabs/node

Length of output: 460


🏁 Script executed:

# Check for any comments or git history about this change
rg -B5 -A5 'SERVER_PORT = cfg.server.rpcPort' src/index.ts

Repository: kynesyslabs/node

Length of output: 572


🏁 Script executed:

# Check if RPC_PORT is documented as required
rg -i 'RPC_PORT' README.md .env.example .env src/config/

Repository: kynesyslabs/node

Length of output: 250


🏁 Script executed:

# Look for any issue or comment about removing serverPort fallback
rg -i 'serverPort.*fallback|remove.*fallback|intentional' src/

Repository: kynesyslabs/node

Length of output: 602


The serverPort fallback for SERVER_PORT was removed, leaving inconsistent default behavior.

The assignment changed from cfg.server.rpcPort || cfg.server.serverPort to cfg.server.rpcPort without fallback. Since rpcPort defaults to 0 in src/config/defaults.ts, SERVER_PORT will default to 0, causing the RPC server to bind to a random port unless explicitly configured via the RPC_PORT environment variable.

This is inconsistent with the pattern used for SIGNALING_SERVER_PORT on line 263-265, which includes a fallback: cfg.server.rpcSignalingPort || cfg.server.signalingServerPort.

Clarify whether this behavioral change is intentional and ensure that RPC_PORT is documented as a required configuration if random port binding is not the intended behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/index.ts` at line 260, Restore the original fallback behavior for
SERVER_PORT so it uses cfg.server.rpcPort with cfg.server.serverPort as a
fallback (matching the SIGNALING_SERVER_PORT pattern), i.e., change the
assignment of indexState.SERVER_PORT to use cfg.server.rpcPort ||
cfg.server.serverPort (or, if intentional to require RPC_PORT, add a
comment/documentation stating that indexState.SERVER_PORT intentionally uses
only cfg.server.rpcPort and that RPC_PORT must be set); update the assignment
near indexState.SERVER_PORT and mirror the same fallback logic used by
SIGNALING_SERVER_PORT for consistency.
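The falsy-vs-nullish distinction behind this finding can be isolated in a few lines (the port numbers are illustrative, not the repo's actual defaults):

```typescript
// rpcPort defaults to 0 when RPC_PORT is unset, per the review above.
const rpcPort = 0
const serverPort = 3000 // illustrative fallback value

// The old code used `||`, which treats 0 as falsy and falls back:
const oldServerPort = rpcPort || serverPort // 3000

// The new code drops the fallback, so 0 survives; binding a TCP server
// to port 0 lets the OS choose a random free port:
const newServerPort = rpcPort // 0
```

Note that `rpcPort ?? serverPort` would not help either, since 0 is not nullish.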

// Allow overriding signaling server port through RPC_SIGNALING_PORT
indexState.SIGNALING_SERVER_PORT =
cfg.server.rpcSignalingPort || cfg.server.signalingServerPort
@@ -269,12 +269,13 @@
)

// MCP Server configuration
indexState.MCP_SERVER_PORT = cfg.server.rpcMcpPort || cfg.server.mcpServerPort
indexState.MCP_SERVER_PORT =
cfg.server.rpcMcpPort || cfg.server.mcpServerPort
indexState.MCP_ENABLED = cfg.core.mcpEnabled

// OmniProtocol TCP Server configuration
indexState.OMNI_ENABLED = cfg.omni.enabled
indexState.OMNI_PORT = cfg.server.omniPort
indexState.OMNI_PORT = await getNextAvailablePort(cfg.omni.port)

// Setting the server port to the shared state
getSharedState.serverPort = indexState.SERVER_PORT
@@ -305,6 +306,7 @@
// Digest the arguments
await digestArguments()
}

// ANCHOR Preparing the main loop
// ! Simplify this too
async function preMainLoop() {
@@ -356,16 +358,16 @@
// ANCHOR Looking for the genesis block
log.info("[BOOTSTRAP] Looking for the genesis block")
// INFO Now ensuring we have an initialized chain or initializing the genesis block
await peerBootstrap(indexState.PeerList)
await findGenesisBlock()
await loadGenesisIdentities()
log.info("[CHAIN] 🖥️ Found the genesis block")

log.info("[PEER] 🌐 Bootstrapping peers...")
log.debug(
"[PEER] Peer list: " +
JSON.stringify(indexState.PeerList.map(p => p.identity)),
JSON.stringify(indexState.PeerList.map(p => p.identity)),
)
await peerBootstrap(indexState.PeerList)

// Loading the peers
//PeerList.push(ourselves)
@@ -378,8 +380,8 @@

log.info(
"[PEER] 🌐 Peers loaded (" +
indexState.peerManager.getPeers().length +
")",
indexState.peerManager.getPeers().length +
")",
)
// INFO: Set initial last block data
const lastBlock = await Chain.getLastBlock()
@@ -453,29 +455,10 @@
// Start OmniProtocol TCP server (optional)
if (indexState.OMNI_ENABLED) {
try {
const omniServer = await startOmniProtocolServer({
enabled: true,
port: indexState.OMNI_PORT,
maxConnections: 1000,
authTimeout: 5000,
connectionTimeout: 600000, // 10 minutes
// TLS configuration
tls: {
enabled: Config.getInstance().omni.tls.enabled,
mode: (Config.getInstance().omni.tls.mode as "self-signed" | "ca") || "self-signed",
certPath: Config.getInstance().omni.tls.certPath,
keyPath: Config.getInstance().omni.tls.keyPath,
caPath: Config.getInstance().omni.tls.caPath,
minVersion: (Config.getInstance().omni.tls.minVersion as "TLSv1.2" | "TLSv1.3") || "TLSv1.3",
},
// Rate limiting configuration
rateLimit: {
enabled: Config.getInstance().omni.rateLimit.enabled,
maxConnectionsPerIP: Config.getInstance().omni.rateLimit.maxConnectionsPerIp,
maxRequestsPerSecondPerIP: Config.getInstance().omni.rateLimit.maxRequestsPerSecondPerIp || 100,
maxRequestsPerSecondPerIdentity: Config.getInstance().omni.rateLimit.maxRequestsPerSecondPerIdentity || 200,
},
})
getSharedState.omniConfig.port = indexState.OMNI_PORT
const omniServer = await startOmniProtocolServer(
getSharedState.omniConfig,
)
indexState.omniServer = omniServer
log.info(`[CORE] OmniProtocol server started on port ${indexState.OMNI_PORT}`)

@@ -492,6 +475,11 @@
handleError(error, "NETWORK", { source: ErrorSource.OMNI_STARTUP })
// Continue without OmniProtocol (failsafe - falls back to HTTP)
}

if (!getSharedState.omniAdapter) {
log.error("[CORE] Failed to start OmniProtocol server")
process.exit(1)
}
} else {
log.info("[CORE] OmniProtocol server disabled (set OMNI_ENABLED=true to enable)")
}
@@ -736,7 +724,7 @@
} else {
// Non-TUI mode: set up Enter key listener to skip the wait
// ONLY DO THIS IF STDIN IS TTY
let cleanupStdin = () => { }
let cleanupStdin = () => {}

if (process.stdin.isTTY) {
const wasRawMode = process.stdin.isRaw
@@ -749,7 +737,9 @@
const key = chunk.toString()
if (key === "\r" || key === "\n" || key === "\u0003") {
// Enter key or Ctrl+C
if (Waiter.isWaiting(Waiter.keys.STARTUP_HELLO_PEER)) {
if (
Waiter.isWaiting(Waiter.keys.STARTUP_HELLO_PEER)
) {
Waiter.abort(Waiter.keys.STARTUP_HELLO_PEER)
log.info(
"[MAIN] Wait skipped by user, starting sync loop",
@@ -794,7 +784,9 @@
// Start DTR relay retry service after background loop initialization
// The service will wait for syncStatus to be true before actually processing
if (getSharedState.PROD) {
log.info("[CORE] [DTR] Initializing relay retry service (will start after sync)")
log.info(
"[CORE] [DTR] Initializing relay retry service (will start after sync)",
)
// Service will check syncStatus internally before processing
DTRManager.getInstance().start()
}
@@ -803,26 +795,37 @@
try {
await ParallelNetworks.getInstance().loadAllL2PS()
} catch (error) {
handleError(error, "CORE", { source: ErrorSource.L2PS_NETWORK_LOADING })
handleError(error, "CORE", {
source: ErrorSource.L2PS_NETWORK_LOADING,
})
}

// Start L2PS hash generation service (for L2PS participating nodes)
// Note: l2psJoinedUids is populated during ParallelNetworks initialization
if (getSharedState.l2psJoinedUids && getSharedState.l2psJoinedUids.length > 0) {
if (
getSharedState.l2psJoinedUids &&
getSharedState.l2psJoinedUids.length > 0
) {
try {
const l2psHashService = L2PSHashService.getInstance()
await l2psHashService.start()
log.info(`[CORE] [L2PS] Hash generation service started for ${getSharedState.l2psJoinedUids.length} L2PS networks`)
log.info(
`[CORE] [L2PS] Hash generation service started for ${getSharedState.l2psJoinedUids.length} L2PS networks`,
)

// Start L2PS batch aggregator (batches transactions and submits to main mempool)
const l2psBatchAggregator = L2PSBatchAggregator.getInstance()
await l2psBatchAggregator.start()
log.info("[CORE] [L2PS] Batch aggregator service started")
} catch (error) {
handleError(error, "CORE", { source: ErrorSource.L2PS_SERVICES_STARTUP })
handleError(error, "CORE", {
source: ErrorSource.L2PS_SERVICES_STARTUP,
})
}
} else {
log.info("[CORE] [L2PS] No L2PS networks joined, L2PS services not started")
log.info(
"[CORE] [L2PS] No L2PS networks joined, L2PS services not started",
)
}
}
}
@@ -852,7 +855,7 @@
const forceExitTimeout = setTimeout(() => {
log.warning("[CORE] Shutdown timeout exceeded, forcing exit...")
process.exit(0)
}, 5_000)
}, 3_000)
Comment on lines 855 to +858

⚠️ Potential issue | 🟡 Minor

Force exit timeout (3s) may be too aggressive for cleanup.

The timeout was reduced to 3 seconds, but the shutdown sequence includes multiple services with their own timeouts. For example, L2PS services at lines 882-883 each have a 3-second timeout (stop(3000)), meaning just these two services could take up to 6 seconds if they need the full timeout.

Consider either:

  1. Increasing the force exit timeout to accommodate the sum of individual service timeouts, or
  2. Reducing individual service timeouts to fit within the global budget.
Suggested fix: Increase timeout to 10 seconds
     const forceExitTimeout = setTimeout(() => {
         log.warning("[CORE] Shutdown timeout exceeded, forcing exit...")
         process.exit(0)
-    }, 3_000)
+    }, 10_000)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
const forceExitTimeout = setTimeout(() => {
log.warning("[CORE] Shutdown timeout exceeded, forcing exit...")
process.exit(0)
}, 5_000)
}, 3_000)
const forceExitTimeout = setTimeout(() => {
log.warning("[CORE] Shutdown timeout exceeded, forcing exit...")
process.exit(0)
}, 10_000)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/index.ts` around lines 855 - 858, The global force exit timer set by
const forceExitTimeout = setTimeout(..., 3_000) is too short relative to
individual service shutdown timeouts (e.g., L2PS services calling stop(3000));
update the shutdown budget by increasing the force exit timeout to a safe value
(e.g., 10_000) or alternatively reduce individual service stop(...) timeouts so
their total worst-case duration fits under the global timer; adjust the constant
in the setTimeout call (forceExitTimeout) and/or the stop(3000) invocations to
keep the global timer larger than the sum of per-service timeouts and ensure all
services have time to clean up.
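As a standalone sketch of the watchdog pattern being discussed (the function name and 10-second budget are assumptions mirroring the suggested fix):

```typescript
// Force-exit watchdog: if cleanup overruns the budget, exit anyway.
// unref() ensures the timer alone cannot keep the process alive.
function installForceExit(budgetMs: number): ReturnType<typeof setTimeout> {
    const t = setTimeout(() => {
        console.warn("[CORE] Shutdown timeout exceeded, forcing exit...")
        process.exit(0)
    }, budgetMs)
    if (typeof t.unref === "function") t.unref()
    return t
}

const watchdog = installForceExit(10_000) // > sum of per-service stop(3000) budgets
// ... per-service cleanup would run here ...
clearTimeout(watchdog) // finished in time: cancel the watchdog
```

The key invariant is that the global budget stays larger than the worst-case sum of the per-service stop timeouts.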

// Don't let this timer itself keep the process alive
if (forceExitTimeout.unref) forceExitTimeout.unref()

@@ -861,7 +864,9 @@
if (indexState.tuiManager) {
try {
indexState.tuiManager.stop()
} catch (_) { /* ignore TUI errors during shutdown */ }
} catch (_) {
/* ignore TUI errors during shutdown */
}

Check warning on line 869 in src/index.ts


SonarQubeCloud / SonarCloud Code Analysis

Handle this exception or don't catch it at all.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZ0FQPoLgVd7omoSNWkj&open=AZ0FQPoLgVd7omoSNWkj&pullRequest=691
}

// Stop DTR manager if running (PROD only)
Expand All @@ -887,7 +892,9 @@
try {
await stopOmniProtocolServer()
} catch (error) {
handleError(error, "NETWORK", { source: ErrorSource.OMNI_SHUTDOWN })
handleError(error, "NETWORK", {
source: ErrorSource.OMNI_SHUTDOWN,
})
}
}

@@ -909,7 +916,9 @@
await import("./features/tlsnotary")
await shutdownTLSNotary()
} catch (error) {
handleError(error, "TLSN", { source: ErrorSource.TLSN_SHUTDOWN })
handleError(error, "TLSN", {
source: ErrorSource.TLSN_SHUTDOWN,
})
}
}

@@ -922,7 +931,9 @@
getMetricsCollector().stop()
indexState.metricsServer.stop()
} catch (error) {
handleError(error, "CORE", { source: ErrorSource.METRICS_SHUTDOWN })
handleError(error, "CORE", {
source: ErrorSource.METRICS_SHUTDOWN,
})
}
}

@@ -932,7 +943,9 @@
try {
indexState.rpcServer.stop()
} catch (error) {
handleError(error, "NETWORK", { source: ErrorSource.RPC_SHUTDOWN })
handleError(error, "NETWORK", {
source: ErrorSource.RPC_SHUTDOWN,
})
}
}

@@ -942,21 +955,29 @@
try {
indexState.signalingServer.disconnect()
} catch (error) {
handleError(error, "NETWORK", { source: ErrorSource.SIGNALING_SHUTDOWN })
handleError(error, "NETWORK", {
source: ErrorSource.SIGNALING_SHUTDOWN,
})
}
}

// Stop HTTP rate limiter cleanup interval
try {
const { RateLimiter: HttpRateLimiter } = await import("./libs/network/middleware/rateLimiter")
const { RateLimiter: HttpRateLimiter } =
await import("./libs/network/middleware/rateLimiter")
HttpRateLimiter.getInstance().destroy()
} catch (_) { /* may not be initialized */ }
} catch (_) {
/* may not be initialized */
}

Check warning on line 971 in src/index.ts


SonarQubeCloud / SonarCloud Code Analysis

Handle this exception or don't catch it at all.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZ0FQPoLgVd7omoSNWkk&open=AZ0FQPoLgVd7omoSNWkk&pullRequest=691

log.info("[CORE] Cleanup complete, exiting...")
clearTimeout(forceExitTimeout)
process.exit(0)
} catch (error) {
handleError(error, "CORE", { source: ErrorSource.GRACEFUL_SHUTDOWN, fatal: true })
handleError(error, "CORE", {
source: ErrorSource.GRACEFUL_SHUTDOWN,
fatal: true,
})
clearTimeout(forceExitTimeout)
process.exit(1)
}
src/libs/blockchain/gcr/handleGCR.ts — 5 changes: 4 additions & 1 deletion
@@ -371,7 +371,7 @@
* @returns Combined result of all edit applications
* @throws May throw if any edit application fails
*/
static async applyToTx(

Check failure on line 374 in src/libs/blockchain/gcr/handleGCR.ts


SonarQubeCloud / SonarCloud Code Analysis

Refactor this function to reduce its Cognitive Complexity from 16 to the 15 allowed.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZ0FX6qBQcX3ClQ7g4g7&open=AZ0FX6qBQcX3ClQ7g4g7&pullRequest=691
tx: Transaction,
isRollback = false,
simulate = false,
@@ -396,7 +396,10 @@
// REVIEW: Ensure txhash is set on each GCR edit from the transaction
// This is needed because client-side GCR edits don't have the txhash
// (it's cleared during validation for hash comparison)
edit.txhash = tx.hash
if (!simulate){
edit.txhash = tx.hash
}

log.debug("[applyToTx] Executing GCREdit: " + edit.type)
try {
const result = await HandleGCR.apply(
src/libs/blockchain/routines/Sync.ts — 4 changes: 2 additions & 2 deletions
@@ -262,8 +262,8 @@ async function verifyLastBlockIntegrity(

if (genesisBlock.hash !== ourGenesisHash) {
log.error("[fastSync] Genesis hash is not coherent")
log.info("[fastSync] Our hash: " + ourGenesisHash)
log.info("[fastSync] Peer hash: " + genesisBlock.hash)
log.error("[fastSync] Our hash: " + ourGenesisHash)
log.error("[fastSync] Peer hash: " + genesisBlock.hash)
process.exit(1)
}

src/libs/blockchain/transaction.ts — 2 changes: 1 addition & 1 deletion
@@ -55,9 +55,9 @@ export default class Transaction implements ITransaction {
// Initialize with defaults or provided data
Object.assign(this, {
content: {
from_ed25519_address: null,
type: null,
from: "",
from_ed25519_address: "",

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Find all usages of from_ed25519_address with null-coalescing or equality checks

echo "=== Null coalescing patterns (will break) ==="
rg -n 'from_ed25519_address\s*\?\?' --type=ts

echo ""
echo "=== Strict equality checks (may be affected) ==="
rg -n 'from_ed25519_address\s*(===|!==)' --type=ts

echo ""
echo "=== Falsy checks (should work with both null and empty string) ==="
rg -n '(!|if\s*\()\s*\S*from_ed25519_address' --type=ts

Repository: kynesyslabs/node

Length of output: 699


🏁 Script executed:

# 1. Read transaction.ts to see the type definition and default value
echo "=== transaction.ts around line 60 ==="
sed -n '50,70p' src/libs/blockchain/transaction.ts

# 2. Read subOperations.ts around line 60
echo ""
echo "=== subOperations.ts around line 60 ==="
sed -n '55,65p' src/libs/blockchain/routines/subOperations.ts

# 3. Read handleIdentityRequest.ts around line 63
echo ""
echo "=== handleIdentityRequest.ts around line 63 ==="
sed -n '55,75p' src/libs/network/routines/transactions/handleIdentityRequest.ts

# 4. Read consensus_rollback_smoke.ts around line 58
echo ""
echo "=== consensus_rollback_smoke.ts around line 58 ==="
sed -n '50,70p' better_testing/loadgen/src/features/consensus/consensus_rollback_smoke.ts

Repository: kynesyslabs/node

Length of output: 3209


Breaking change: the null → "" default affects downstream null-coalescing operators.

In subOperations.ts:60, the expression genesisTx.content.from_ed25519_address ?? "0x0" will now return "" instead of falling back to "0x0", because empty string is not nullish. Genesis transactions will receive "" instead of the intended "0x0" fallback for unset addresses.

Additionally, in handleIdentityRequest.ts:63, the strict equality check tx.content.from_ed25519_address !== sender now compares against "" instead of null, altering validation semantics.

If intentional, audit all usages of from_ed25519_address to confirm the empty string default is the desired behavior, especially in genesis transaction handling where the "0x0" fallback was previously expected.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/blockchain/transaction.ts` at line 60, The default for
from_ed25519_address was changed from null to "" which breaks null-coalescing
and equality checks; restore the original semantics by reverting the default in
the transaction model (from_ed25519_address) back to null OR update call-sites:
change uses of genesisTx.content.from_ed25519_address ?? "0x0" to a falsy check
(e.g., use || "0x0") or explicitly treat empty string as unset, and update the
strict comparison in handleIdentityRequest (tx.content.from_ed25519_address !==
sender) to account for "" (e.g., normalize both sides or treat "" as null) so
genesis transactions still fall back to "0x0" and identity validation behaves as
before.
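The operator semantics driving this finding, in isolation (values are illustrative):

```typescript
// `??` falls back only on null/undefined; `||` also falls back on "".
const oldDefault: string | null = null
const newDefault: string | null = ""

const a = oldDefault ?? "0x0" // "0x0": null is nullish, fallback applies
const b = newDefault ?? "0x0" // "":    "" is not nullish, fallback skipped
const c = newDefault || "0x0" // "0x0": "" is falsy, || still falls back
```

So any call site relying on `?? "0x0"` silently changes behavior under the new default, while `|| "0x0"` would not.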

to: "",
amount: null,
data: [null, null],