Releases: VirtualFlyBrain/VFBquery
v1.7.4
What's Changed
- Relax HA API FlyBase ID rewriting so `resolve_entity` and `resolve_combination` fall back to the canonical FlyBase ID when VFB term info cannot provide a preferred symbol or label.
- Keep the preferred VFB term name rewrite path when term info is available.
- Update HA API validation tests to cover the canonical-ID fallback behavior.
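Taken together with v1.7.3 below, the rewrite-and-fallback logic can be sketched in pure Python. The function and field names here are illustrative, not the actual VFBquery internals:

```python
import re

# FlyBase ID shapes handled by the HA API (per the release notes)
FLYBASE_ID = re.compile(r"^FB(gn|al|ti|tp|co|st)\d+$")

def normalize_query(query, term_info_lookup):
    """Rewrite a FlyBase ID to the preferred VFB symbol/label before it
    reaches Chado; as of v1.7.4, fall back to the canonical ID itself
    when term info yields no preferred name."""
    if not FLYBASE_ID.match(query):
        return query  # plain names/symbols pass through unchanged
    info = term_info_lookup(query) or {}
    preferred = info.get("symbol") or info.get("label")
    return preferred or query  # canonical-ID fallback (v1.7.4)
```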
v1.7.3
What's Changed
- Rewrite `resolve_entity` and `resolve_combination` HA API requests that receive FlyBase IDs to use the preferred VFB term name before querying Chado.
- Return `NOT_FOUND` instead of passing raw IDs to Chado when no preferred name/label can be derived from VFB term info.
- Add focused HA API validation tests covering query normalization, ID rewriting, and the no-fallback path.
v1.7.2
What's Changed
- Bump version to 1.7.2
v1.7.1
What's Changed
FlyBase queries wired into term_info
- FindStocks query now appears on FlyBase Feature terms (FBgn/FBal/FBti/FBtp/FBco/FBst) in term_info, allowing users to find available fly stocks directly from a term's query panel
- FindComboPublications query now appears on FBco (split system combination) terms, enabling publication lookup from term_info
- Both queries are available via `/run_query?id=<short_form>&query_type=FindStocks|FindComboPublications` and registered in the Docker server's `QUERY_TYPE_MAP`
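As a sketch, a client could assemble such a request URL like this (the base host is an assumption; substitute your deployment):

```python
from urllib.parse import urlencode

def run_query_url(base, short_form, query_type):
    """Build a /run_query URL for FindStocks or FindComboPublications."""
    query = urlencode({"id": short_form, "query_type": query_type})
    return f"{base}/run_query?{query}"
```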
v1.7.0
New Features
FlyBase Stock Finder (flybase_stocks)
- `resolve_entity(name_or_id)` — Resolve gene names, allele symbols, or FlyBase IDs (FBgn/FBal/FBti/FBst/FBco) via 3-level search (exact → synonym → broad ILIKE)
- `find_stocks(feature_id, collection_filter=None)` — Find available fly stocks with a 4-path gene UNION query, 3-path allele query, insertion, combination, and stock detail lookups
- Supports collection filtering (Bloomington, Kyoto, VDRC, etc.)
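The 3-level escalation can be illustrated with toy in-memory tables standing in for Chado; the real implementation runs SQL, including the broad `ILIKE` step:

```python
# Toy lookup tables; the real queries run against FlyBase's Chado database.
GENES = {"dpp": "FBgn0000490"}
SYNONYMS = {"decapentaplegic": "FBgn0000490"}

def resolve_entity_sketch(name):
    # Level 1: exact symbol match
    if name in GENES:
        return GENES[name]
    # Level 2: synonym match
    if name in SYNONYMS:
        return SYNONYMS[name]
    # Level 3: broad case-insensitive substring, mimicking ILIKE '%name%'
    for symbol, fbid in {**GENES, **SYNONYMS}.items():
        if name.lower() in symbol.lower():
            return fbid
    return None
```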
FlyBase Combination Publications (flybase_combo_pubs)
- `resolve_combination(name_or_id)` — Resolve split system combination names/synonyms (e.g. "MB002B") to FBco IDs
- `find_combo_publications(fbco_id)` — Find linked publications with DOI, PMID, and PMCID
VFB Neuron Connectivity (vfb_connectivity)
- `list_connectome_datasets()` — List available connectome datasets (Hemibrain, FlyWire, MANC, etc.)
- `query_connectivity(upstream_type, downstream_type, weight, group_by_class, exclude_dbs)` — Query synaptic connections between neuron types with per-neuron or class-aggregated results
- Uses VFBquery's own Neo4jConnect client (no vfb_connect dependency)
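The class-aggregated mode (`group_by_class=True`) amounts to collapsing per-neuron rows into per-class totals; a sketch with illustrative field names:

```python
from collections import defaultdict

def aggregate_by_class(rows):
    """Sum per-neuron synaptic weights into (upstream class, downstream
    class) totals, as a class-aggregated result would report them."""
    totals = defaultdict(int)
    for row in rows:
        totals[(row["up_class"], row["down_class"])] += row["weight"]
    return dict(totals)
```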
API Endpoints
All new functions are exposed via the HA API server with full caching, request coalescing, and backpressure support:
- `GET /resolve_entity?query=<name_or_id>`
- `GET /find_stocks?id=<feature_id>&collection=<filter>`
- `GET /resolve_combination?query=<name_or_id>`
- `GET /find_combo_publications?id=<FBco_ID>`
- `GET /list_connectome_datasets`
- `GET /query_connectivity?upstream_type=X&downstream_type=Y&weight=5&group_by_class=false&exclude_dbs=hb,fafb`
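For the connectivity endpoint, a request URL could be assembled from the documented parameters like this (the host and type names are placeholders):

```python
from urllib.parse import urlencode

def connectivity_url(base, upstream, downstream, weight=5,
                     group_by_class=False, exclude_dbs=()):
    """Build a /query_connectivity URL from the documented parameters."""
    params = {
        "upstream_type": upstream,
        "downstream_type": downstream,
        "weight": weight,
        "group_by_class": str(group_by_class).lower(),
    }
    if exclude_dbs:
        params["exclude_dbs"] = ",".join(exclude_dbs)
    return f"{base}/query_connectivity?{urlencode(params)}"
```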
Dependencies
- Added `psycopg[binary]>=3.0` for FlyBase Chado PostgreSQL access
Full Changelog: v1.6.13...v1.7.0
v1.6.13 - Increase Solr cache write timeout
What's Changed (v1.6.13)
Increased Solr cache write timeout
Solr cache writes are performed asynchronously after the query returns to the user.
We now default to a 30-second write timeout (configurable via `VFBQUERY_SOLR_WRITE_TIMEOUT`).
This helps prevent large or slow cache writes from flooding the logs with errors while still allowing the cache to work when Solr is responsive.
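A minimal sketch of enforcing such a timeout on a background cache write; the real code lives in VFBquery's Solr cache layer, and this only illustrates the timeout mechanics:

```python
import concurrent.futures
import os

# Default 30 s, overridable via the documented env var
WRITE_TIMEOUT = float(os.environ.get("VFBQUERY_SOLR_WRITE_TIMEOUT", "30"))

def write_cache(pool, write_fn, key, value, timeout=WRITE_TIMEOUT):
    """Run the cache write on a background pool; report failure instead
    of raising an error if it exceeds the timeout."""
    future = pool.submit(write_fn, key, value)
    try:
        future.result(timeout=timeout)
        return True
    except concurrent.futures.TimeoutError:
        return False
```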
v1.6.12 - Solr cache failover/backoff
What's Changed (v1.6.12)
Solr cache failover/backoff
When Solr becomes unreachable or times out, VFBquery now:
- disables caching temporarily (acts like `VFBQUERY_CACHE_ENABLED=false`)
- logs a single warning, not repeated timeout errors
- retries periodically (default backoff: 60 s) and re-enables caching once Solr responds
Configuration:
- `VFBQUERY_SOLR_BACKOFF_SECONDS` controls how long caching stays disabled after a failure
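This failover behavior is essentially a small circuit breaker; a sketch with illustrative names:

```python
import time

class SolrBreaker:
    """Disable caching for `backoff` seconds after a Solr failure, log a
    single warning, and re-enable once a retry succeeds (sketch)."""
    def __init__(self, backoff=60.0):
        self.backoff = backoff
        self._disabled_until = 0.0
        self._warned = False

    def record_failure(self):
        self._disabled_until = time.monotonic() + self.backoff
        if not self._warned:  # one warning, not repeated timeout errors
            print("warning: Solr unreachable; caching disabled")
            self._warned = True

    def record_success(self):
        self._disabled_until = 0.0
        self._warned = False

    def cache_enabled(self):
        return time.monotonic() >= self._disabled_until
```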
v1.6.11 - Fix security middleware deprecation warning
What's Changed (v1.6.11)
Fix: security middleware & aria deprecation warning
- `security_middleware` now blocks all non-API paths with an empty 404 (no stack traces).
- The scanner probe counter is initialized at startup so aiohttp no longer emits the "Changing state of started or joined application is deprecated" warning.
Release status
- v1.6.10 exists but does not include the final aiohttp deprecation fix.
- This release (v1.6.11) includes the final patch and is the current main branch tip.
v1.6.10 - Block vulnerability scanner probes
What's Changed
Path allowlist middleware — blocks vulnerability scanners
Production logs showed automated scanners probing for .env, .git/config, wp-config.php, .aws/credentials, phpinfo.php, .s3cfg, and similar sensitive paths.
A new security_middleware now rejects any request that isn't to one of the 4 known API endpoints with a bare 404 (empty body, no stack information leaked):
- `/get_term_info`
- `/run_query`
- `/health`
- `/status`
How it works
- Requests to unknown paths are rejected before reaching any handler or worker queue
- The first 10 blocked probes are logged individually, then every 100th (to avoid log flooding)
- The `/status` endpoint now includes a `scanner_probes_blocked` counter
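The allowlist check and log throttling reduce to a few lines; a pure-Python sketch (the real code runs as aiohttp middleware):

```python
ALLOWED_PATHS = {"/get_term_info", "/run_query", "/health", "/status"}

class ProbeFilter:
    """Reject unknown paths and throttle probe logging: the first 10
    blocked probes log individually, then every 100th."""
    def __init__(self):
        self.blocked = 0  # exposed as scanner_probes_blocked on /status

    def check(self, path):
        """Return (allowed, should_log)."""
        if path in ALLOWED_PATHS:
            return True, False
        self.blocked += 1
        return False, self.blocked <= 10 or self.blocked % 100 == 0
```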
No IP-based blocking
All traffic arrives via the Kubernetes ingress/caching proxy, so source IPs are internal cluster addresses — IP-based rate limiting would not be effective. The path allowlist approach works regardless of source IP.
v1.6.9 - Backpressure: request coalescing, result cache and queue depth limit
What's Changed
Backpressure features to prevent queue buildup
Production logs showed 389 requests waiting with only 20 active workers, taking 3-4 minutes to drain. Three new mechanisms address this:
Request Coalescing
- Identical in-flight queries now share a single worker execution via `RequestCoalescer`
- All concurrent callers receive the same result when the one executing request completes
- AllDatasets queries are normalized — the `id` parameter is ignored since the function returns the same data regardless, so all concurrent AllDatasets requests share one worker slot
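A minimal asyncio sketch of such coalescing; the class name matches the notes, but the internals here are illustrative:

```python
import asyncio

class RequestCoalescer:
    """Share one execution among identical in-flight queries; every
    concurrent caller awaits the same task and gets the same result."""
    def __init__(self):
        self._in_flight = {}

    async def run(self, key, coro_fn):
        task = self._in_flight.get(key)
        if task is None:
            task = asyncio.ensure_future(coro_fn())
            self._in_flight[key] = task
            try:
                return await task
            finally:
                self._in_flight.pop(key, None)
        # Join the execution already in flight (shield: a cancelled
        # waiter must not cancel the shared task)
        return await asyncio.shield(task)
```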
In-Memory Result Cache (L1)
- `ResultCache` stores recent results directly in the event-loop process (default TTL: 5 minutes)
- Cache hits bypass the worker queue entirely — zero worker slots consumed
- Sits in front of the existing Solr-based L2 cache (which runs inside worker processes)
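An in-process TTL cache of this shape can be sketched as follows; the real `ResultCache` lives in the HA API server, and the 300 s default matches `VFBQUERY_CACHE_TTL`:

```python
import time

class ResultCache:
    """Store results in the event-loop process with a TTL; expired
    entries read as misses and are dropped lazily."""
    def __init__(self, ttl=300.0):
        self.ttl = ttl
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazy eviction on read
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)
```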
Queue Depth Limit
- Returns 503 Service Unavailable with a `Retry-After: 5` header when waiting requests exceed the threshold (default: 200)
- Prevents unbounded queue growth that caused multi-minute response times
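The admission check itself is simple; a sketch of the decision the server makes per request:

```python
def admit(waiting, max_depth=200):
    """Return an HTTP status and headers for an incoming request given
    the current number of waiting requests (0 = unlimited)."""
    if max_depth and waiting >= max_depth:
        return 503, {"Retry-After": "5"}
    return 200, {}
```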
New configuration
| Env Var | CLI Flag | Default | Description |
|---|---|---|---|
| `VFBQUERY_MAX_QUEUE_DEPTH` | `--max-queue-depth` | 200 | Max waiting requests before 503 (0 = unlimited) |
| `VFBQUERY_CACHE_TTL` | `--cache-ttl` | 300 | Result cache TTL in seconds |
Enhanced /status endpoint
New fields: `cache_size`, `cache_hits`, `coalesced_total`, `coalesced_in_flight`, `max_queue_depth`