22 Mar, 2026

2 commits


21 Mar, 2026

2 commits


20 Mar, 2026

4 commits


19 Mar, 2026

7 commits

  • tangwang
     
  • tangwang
     
  • tangwang
     
  • tangwang
     
  • - Text and image embedding are now split into separate
      services/processes, while still keeping a single replica as requested
      (a config sketch follows the Validation notes below). The split lives in
    [embeddings/server.py](/data/saas-search/embeddings/server.py#L112),
    [config/services_config.py](/data/saas-search/config/services_config.py#L68),
    [providers/embedding.py](/data/saas-search/providers/embedding.py#L27),
    and the start scripts
    [scripts/start_embedding_service.sh](/data/saas-search/scripts/start_embedding_service.sh#L36),
    [scripts/start_embedding_text_service.sh](/data/saas-search/scripts/start_embedding_text_service.sh),
    [scripts/start_embedding_image_service.sh](/data/saas-search/scripts/start_embedding_image_service.sh).
    - Independent admission control is in place now: text and image have
      separate inflight limits, and image can be kept much stricter than
    text. The request handling, reject path, `/health`, and `/ready` are in
    [embeddings/server.py](/data/saas-search/embeddings/server.py#L613),
    [embeddings/server.py](/data/saas-search/embeddings/server.py#L786), and
    [embeddings/server.py](/data/saas-search/embeddings/server.py#L1028).
    - I checked the Redis embedding cache. It did exist, but there was a
      real flaw: cache keys did not distinguish `normalize=true` from
    `normalize=false`. I fixed that in
    [embeddings/cache_keys.py](/data/saas-search/embeddings/cache_keys.py#L6),
    and both text and image now use the same normalize-aware keying. I also
    added service-side BF16 cache hits that short-circuit before the model
    lane, so repeated requests no longer get throttled behind image
    inference.
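
    For illustration, a minimal sketch of normalize-aware keying, assuming
    the helper takes the model name, the request payload, and the normalize
    flag (the real helper in embeddings/cache_keys.py may differ):

    # Hypothetical helper; name and key layout are illustrative only.
    import hashlib

    def embedding_cache_key(model_name: str, payload: bytes, normalize: bool) -> str:
        """Build a Redis key that separates normalized and raw embeddings."""
        digest = hashlib.sha256(payload).hexdigest()
        # Folding the normalize flag into the key prevents a normalize=false
        # entry from being served to a normalize=true caller, and vice versa.
        return f"emb:{model_name}:norm={int(normalize)}:{digest}"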
    
    **What This Means**
    - Image pressure no longer blocks text, because they are on different
      ports/processes.
    - Repeated text/image requests now return from Redis without consuming
      model capacity.
    - Over-capacity requests are rejected quickly instead of sitting
      blocked.
    - I did not add a load balancer or multi-replica HA, per your GPU
      constraint. I also did not build Grafana/Prometheus dashboards in this
    pass, but `/health` now exposes the metrics needed to wire them.
    
    **Validation**
    - Tests passed: `.venv/bin/python -m pytest -q
      tests/test_embedding_pipeline.py
    tests/test_embedding_service_limits.py` -> `10 passed`
    - Stress test tool updates are in
      [scripts/perf_api_benchmark.py](/data/saas-search/scripts/perf_api_benchmark.py#L155)
    - Fresh benchmark on split text service `6105`: 535 requests / 3s, 100%
      success, `174.56 rps`, avg `88.48 ms`
    - Fresh benchmark on split image service `6108`: 1213 requests / 3s,
      100% success, `403.32 rps`, avg `9.64 ms`
    - Live health after the run showed cache hits and non-zero cache-hit
      latency accounting:
      - text `avg_latency_ms=4.251`
      - image `avg_latency_ms=1.462`
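
    For reference, a sketch of how the split services might be declared,
    assuming the text/image ports 6105 and 6108 from the benchmark above
    (keys and structure are illustrative, not the actual
    config/services_config.py layout):

    # Hypothetical layout; port numbers taken from the benchmark above.
    EMBEDDING_SERVICES = {
        "text": {
            "host": "127.0.0.1",
            "port": 6105,        # dedicated text embedding process
            "max_inflight": 64,  # looser limit for cheap text requests
        },
        "image": {
            "host": "127.0.0.1",
            "port": 6108,        # dedicated image embedding process
            "max_inflight": 8,   # stricter limit so image bursts reject early
        },
    }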
    tangwang
     
  • The instability is very likely real overload, but `lsof -i :6005 | wc -l
    = 75` alone does not prove it. What does matter is the live shape of the
    service: it is a single `uvicorn` worker on port `6005`, and the code
    had one shared process handling both text and image requests, with image
    work serialized behind a single lock. Under bursty image traffic,
    requests could pile up and sit blocked with almost no useful tracing,
    which matches the “only blocking observed” symptom.
    
    The embedding service now adds persistent log files, request IDs,
    per-request
    request/response/failure logs, text microbatch dispatch logs, health
    stats with active/rejected counts, and explicit overload admission
    control. New knobs are `TEXT_MAX_INFLIGHT`, `IMAGE_MAX_INFLIGHT`, and
    `EMBEDDING_OVERLOAD_STATUS_CODE`. Startup output now shows those limits
    and log paths in
    [scripts/start_embedding_service.sh](/data/saas-search/scripts/start_embedding_service.sh#L80).
    I also added focused tests in
    [tests/test_embedding_service_limits.py](/data/saas-search/tests/test_embedding_service_limits.py#L1).
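
    As a rough illustration of the admission-control shape described above
    (an asyncio sketch with illustrative function names and defaults; the
    actual handler in embeddings/server.py is more involved):

    # Minimal sketch: per-lane inflight limits with fast rejection.
    # Env var names follow the knobs above; default values are illustrative.
    import asyncio
    import os

    TEXT_MAX_INFLIGHT = int(os.environ.get("TEXT_MAX_INFLIGHT", "64"))
    IMAGE_MAX_INFLIGHT = int(os.environ.get("IMAGE_MAX_INFLIGHT", "4"))
    OVERLOAD_STATUS = int(os.environ.get("EMBEDDING_OVERLOAD_STATUS_CODE", "503"))

    _slots = {
        "text": asyncio.Semaphore(TEXT_MAX_INFLIGHT),
        "image": asyncio.Semaphore(IMAGE_MAX_INFLIGHT),
    }

    class Overloaded(Exception):
        def __init__(self, status_code: int):
            self.status_code = status_code

    async def admit(lane: str) -> asyncio.Semaphore:
        """Take an inflight slot, rejecting immediately if the lane is full."""
        slots = _slots[lane]
        if slots.locked():  # no free slot: reject rather than queue behind the model
            raise Overloaded(OVERLOAD_STATUS)
        await slots.acquire()
        return slots

    A caller would wrap the model call in try/finally and release the slot;
    the reject path maps Overloaded to the configured status code.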
    
    What this means operationally:
    - Text and image are still in one process, so this is not the final
      architecture.
    - But image spikes will now be rejected quickly once the image lane is
      full instead of sitting around and consuming the worker pool.
    - Logs will now show each request, each rejection, each microbatch
      dispatch, backend time, response time, and request ID.
    
    Verification:
    - Passed: `.venv/bin/python -m pytest -q
      tests/test_embedding_service_limits.py`
    - I also ran a wider test command, but 3 failures came from pre-existing
      drift in
    [tests/test_embedding_pipeline.py](/data/saas-search/tests/test_embedding_pipeline.py#L95),
    where the tests still monkeypatch `embeddings.text_encoder.redis.Redis`
    even though
    [embeddings/text_encoder.py](/data/saas-search/embeddings/text_encoder.py#L1)
    no longer imports `redis` that way.
    
    Switched the CLIP_AS_SERVICE default model to ViT-L-14 and consolidated
    this configuration into a single, changeable entry point. The default now
    lives in CLIP_AS_SERVICE_MODEL_NAME in embeddings/config.py (line 29),
    currently CN-CLIP/ViT-L-14; scripts/start_cnclip_service.sh (line 37)
    reads that config automatically instead of hard-coding the default model
    in the script, and still supports temporary overrides via
    CNCLIP_MODEL_NAME and --model-name. scripts/start_embedding_service.sh
    (line 29) and embeddings/server.py (line 425) also print model
    information now, which makes it easier to check which configuration is
    actually connected.
    
    The docs were updated as well, mainly docs/CNCLIP_SERVICE说明文档.md
    (line 62) and embeddings/README.md (line 58): they now describe a
    "config-driven, overridable" mechanism instead of a hard-coded model
    name, and the related summary docs and internal notes were switched to
    the same config-driven wording.
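
    A minimal sketch of the config-driven default with the env override,
    assuming the names described above (the actual structure of
    embeddings/config.py may differ):

    # Illustrative sketch of a config-driven default model name.
    import os

    # Single source of truth for the CLIP-as-service model; the start scripts
    # read this value instead of hard-coding a model name.
    CLIP_AS_SERVICE_MODEL_NAME = os.environ.get(
        "CNCLIP_MODEL_NAME",   # temporary override, as noted above
        "CN-CLIP/ViT-L-14",    # new default
    )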
    tangwang
     
  • Adopted the optimal T4 configuration: ct2_inter_threads=2,
    ct2_max_queued_batches=16, ct2_batch_type=examples. This gives NLLB
    significantly better online-style performance while keeping large-batch
    throughput roughly unchanged. I did not apply the same configuration to
    the two Marian models, because the focused report showed a more complex
    trade-off: opus-mt-zh-en is better balanced under the conservative
    defaults, while opus-mt-en-zh gained throughput but showed noticeably
    volatile tail latency at c=8.
    I also recorded the deployment/configuration lessons in
    /data/saas-search/translation/README.md and marked the optimization
    results in /data/saas-search/docs/TODO.txt. The key practical points are
    now documented: use CT2 + float16, keep a single worker, set NLLB's
    inter_threads to 2 and max_queued_batches to 16, avoid inter_threads=4 on
    T4 (it hurts high-batch throughput), and keep the Marian models on
    conservative defaults unless online and offline configurations are
    separated.
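
    For context, a sketch of those settings applied when constructing the
    NLLB translator directly with ctranslate2 (the service wiring in the repo
    differs; this only shows the library call):

    # Illustrative: T4 settings from above on a plain ctranslate2.Translator.
    import ctranslate2

    translator = ctranslate2.Translator(
        "models/translation/facebook/nllb-200-distilled-600M/ctranslate2-float16",
        device="cuda",
        compute_type="float16",
        inter_threads=2,        # ct2_inter_threads=2
        max_queued_batches=16,  # ct2_max_queued_batches=16
    )
    # batch_type="examples" counts batch size in sentences rather than tokens:
    # translator.translate_batch(token_batches, batch_type="examples")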
    tangwang
     

18 Mar, 2026

4 commits

  • Implemented CTranslate2 for the three local translation models and
    switched the existing local_nllb / local_marian factories over to it.
    The new runtime lives in local_ctranslate2.py, including HF->CT2
    auto-conversion, float16 compute type mapping, Marian direction
    handling, and NLLB target-prefix decoding. The service wiring is in
    service.py (line 113), and the three model configs now point at explicit
    ctranslate2-float16 dirs in config.yaml (line 133).
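
    As a sketch of the NLLB target-prefix decoding mentioned above, following
    the standard CTranslate2 + transformers pattern (the helper names inside
    local_ctranslate2.py will differ):

    # Illustrative NLLB decoding through CTranslate2 with a target-language prefix.
    import ctranslate2
    import transformers

    model_dir = "models/translation/facebook/nllb-200-distilled-600M/ctranslate2-float16"
    tokenizer = transformers.AutoTokenizer.from_pretrained(
        "facebook/nllb-200-distilled-600M", src_lang="zho_Hans"
    )
    translator = ctranslate2.Translator(model_dir, device="cuda", compute_type="float16")

    source = tokenizer.convert_ids_to_tokens(tokenizer.encode("纯棉T恤 短袖"))
    # NLLB decodes into the language named by the first target token.
    results = translator.translate_batch([source], target_prefix=[["eng_Latn"]])
    target = results[0].hypotheses[0][1:]  # drop the language prefix token
    print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))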
    
    I also updated the setup path so this is usable end-to-end:
    ctranslate2>=4.7.0 was added to requirements_translator_service.txt and
    requirements.txt, the download script now supports pre-conversion in
    download_translation_models.py (line 27), and the docs/config examples
    were refreshed in translation/README.md. I installed ctranslate2 into
    .venv-translator, pre-converted all three models, and the CT2 artifacts
    are now already on disk:
    
    - models/translation/facebook/nllb-200-distilled-600M/ctranslate2-float16
    - models/translation/Helsinki-NLP/opus-mt-zh-en/ctranslate2-float16
    - models/translation/Helsinki-NLP/opus-mt-en-zh/ctranslate2-float16

    Verification was solid. python3 -m compileall passed, direct
    TranslationService smoke tests ran successfully in .venv-translator, and
    the focused NLLB benchmark on the local GPU showed a clear win:

    - batch_size=16: HF 0.347s/batch, 46.1 items/s vs CT2 0.130s/batch,
      123.0 items/s
    - batch_size=1: HF 0.396s/request vs CT2 0.126s/request

    One caveat: translation quality on some very short phrases, especially
    opus-mt-en-zh, still looks a bit rough in smoke tests, so I’d run your
    real quality set before fully cutting over. If you want, I can take the
    next step and update the benchmark script/report so you have a fresh
    full CT2 performance report for all three models.
    tangwang
     
  • tangwang
     
  • tangwang
     
  • tangwang
     

17 Mar, 2026

4 commits

  • tangwang
     
  • tangwang
     
  • Refactored translation into "multiple independent translation
    capabilities". The business side no longer treats translation as a
    provider choice: QueryParser and the indexer now call through the
    translator service client on port 6006, and the actual capability
    selection, enable switches, and model + scene routing are all
    consolidated on the server side and in the new translation/ directory.
    
    The core changes are in config/services_config.py,
    providers/translation.py, api/translator_app.py, config/config.yaml, and
    the new translation/service.py. The configuration moved from the old
    services.translation.provider/providers to service_url + default_model +
    default_scene + capabilities, and each capability can be enabled
    independently. The server side gained unified backend management with
    lazy loading; the real implementations are now concentrated in
    translation/backends/qwen_mt.py, translation/backends/llm.py, and
    translation/backends/deepl.py, while the old query/qwen_mt_translate.py,
    query/llm_translate.py, and query/deepl_provider.py only keep
    compatibility re-exports. On the API side, /translate now supports scene
    as a standard field, context remains available as a compatibility alias,
    and the health check returns the default model, default scene, and
    enabled capabilities.
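
    For illustration, a request against the service as described above (the
    JSON field names here are assumptions based on the summary, not the exact
    schema):

    # Hypothetical call to the translator service on port 6006.
    import requests

    resp = requests.post(
        "http://127.0.0.1:6006/translate",
        json={
            "text": "纯棉T恤 短袖",
            "target_lang": "en",
            "scene": "query",  # "context" is still accepted as a legacy alias
        },
        timeout=10,
    )
    print(resp.status_code, resp.json())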
    tangwang
     
  • - Rename indexer/product_annotator.py to indexer/product_enrich.py and remove CSV-based CLI entrypoint, keeping only in-memory analyze_products API
    - Introduce dedicated product_enrich logging with separate verbose log file for full LLM requests/responses
    - Change indexer and /indexer/enrich-content API wiring to use indexer.product_enrich instead of indexer.product_annotator, updating tests and docs accordingly
    - Switch translate_prompts to share SUPPORTED_INDEX_LANGUAGES from tenant_config_loader and reuse that mapping for language code → display name
    - Remove hard SUPPORTED_LANGS constraint from LLM content-enrichment flow, driving languages directly from tenant/indexer configuration
    - Redesign LLM prompt generation to support multi-round, multi-language tables: first round in English, subsequent rounds translate the entire table (headers + cells) into target languages using English instructions
    tangwang
     

13 Mar, 2026

4 commits


12 Mar, 2026

5 commits


11 Mar, 2026

2 commits

  • tangwang
     
  • ./scripts/start_tei_service.sh
    START_TEI=0 ./scripts/service_ctl.sh restart embedding
    
    curl -sS -X POST "http://127.0.0.1:6005/embed/text" \
      -H "Content-Type: application/json" \
      -d '["芭比娃娃 儿童玩具", "纯棉T恤 短袖"]'
    tangwang
     

10 Mar, 2026

4 commits

  • tangwang
     
  • - Configuration now uses the "field base name + dynamic language suffix"
      scheme and no longer depends on the old `indexes`.
    [config.yaml](/data/saas-search/config/config.yaml#L17)
    - `search_fields` / `text_query_strategy` now go through strict
      validation and parsing.
    [config_loader.py](/data/saas-search/config/config_loader.py#L254)
    
    2. Query language plan and translation wait strategy
    - `QueryParser` now produces `query_text_by_lang`, `search_langs`, and
      `source_in_index_languages`.
    [query_parser.py](/data/saas-search/query/query_parser.py#L41)
    - Both translation paths you asked for are in place:
      - Source language not in the shop's `index_languages`:
        `translate_multi_async` + wait on the future
      - Source language in `index_languages`: `translate_multi(...,
        async_mode=True)`, preferring the cache
    [query_parser.py](/data/saas-search/query/query_parser.py#L284)
    
    3. Unified ES text strategy (no AST branch)
    - Primary recall dynamically assembles `field.{lang}` from
      `search_langs`, with translated languages as lower-weight `should`
      clauses (see the sketch after this list).
    [es_query_builder.py](/data/saas-search/search/es_query_builder.py#L454)
    - The boolean AST path has been removed; only the unified text strategy
      remains.
    [es_query_builder.py](/data/saas-search/search/es_query_builder.py#L185)
    
    4. LanguageDetector improvements
    - Upgraded from "Latin script defaults to English" to script-first
      detection plus Latin-language scoring (dictionary / diacritics /
      suffixes).
    [language_detector.py](/data/saas-search/query/language_detector.py#L68)
    
    5. Boolean capability cleanup (follow-up)
    - The deprecated module has been deleted:
    [boolean_parser.py](/data/saas-search/search/boolean_parser.py)
    - `search/__init__` no longer has the related exports.
    [search/__init__.py](/data/saas-search/search/__init__.py)
    
    6. Retiring the obsolete `indexes` (follow-up)
    - The compatibility helpers are now generated from dynamic fields and no
      longer depend on `config.indexes`.
    [utils.py](/data/saas-search/config/utils.py#L24)
    - The admin config endpoint now returns the dynamic field configuration
      and no longer exposes `num_indexes`.
    [admin.py](/data/saas-search/api/routes/admin.py#L52)
    
    7. suggest
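
    A minimal sketch of the unified text strategy in item 3, using
    illustrative field names and boosts (the real builder in
    search/es_query_builder.py is richer):

    # Illustrative only: dynamic field.{lang} recall, with translated
    # languages weighted as secondary "should" clauses.
    def build_text_query(query_text_by_lang: dict, search_langs: list,
                         source_lang: str) -> dict:
        should = []
        for lang in search_langs:
            text = query_text_by_lang.get(lang)
            if not text:
                continue
            # Source-language text keeps full weight; translations recall
            # with a lower boost so they never outrank native matches.
            boost = 1.0 if lang == source_lang else 0.4
            should.append({
                "multi_match": {
                    "query": text,
                    "fields": [f"title.{lang}", f"description.{lang}"],
                    "boost": boost,
                }
            })
        return {"bool": {"should": should, "minimum_should_match": 1}}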
    tangwang
     
  • tangwang
     
  • tangwang
     

09 Mar, 2026

2 commits

  • config/config.yaml:
    - qwen3_vllm: enable_prefix_caching true (enable prefix caching)
    - qwen3_vllm: enforce_eager false (allow CUDA graph acceleration)
    
    reranker/backends/qwen3_vllm.py:
    - TokensPrompt is now imported from vllm.inputs.data (the official path,
      better compatibility)
    - Missing tokens now use logprob=-10, matching the official
      implementation (previously 1e-10)
    - Batched apply_chat_template replaces per-item calls for better
      efficiency
    - logprobs access follows the official pattern: -10 when the token is not
      in the last position, otherwise last[token].logprob
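
    As a sketch of that logprob access pattern (the surrounding vLLM call and
    token lookup are omitted; names are illustrative):

    # Illustrative: score extraction at the last generated position, where
    # last_position_logprobs maps token_id -> Logprob.
    def score_from_logprobs(last_position_logprobs: dict, yes_token_id: int) -> float:
        """Return the 'yes' token logprob, or -10 when the token is absent."""
        entry = last_position_logprobs.get(yes_token_id)
        # -10 matches the official fallback; the previous 1e-10 is effectively
        # logprob 0 (probability 1) for a token that was never generated.
        return entry.logprob if entry is not None else -10.0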
    
    Other: documentation updates to docs, embeddings, README, etc.
    
    Made-with: Cursor
    tangwang
     
  • CNCLIP_DEVICE=cuda TEI_USE_GPU=1 ./scripts/service_ctl.sh start
    Search backend + indexer + test frontend + 4 microservices all running end to end
    tangwang