# requirements_reranker_qwen3_transformers_packed.txt (412 Bytes)
# Isolated dependencies for the qwen3_transformers_packed reranker backend.
#
# Keep this stack aligned with the validated CUDA runtime on our hosts.
# On this machine, torch 2.11.0 + cu130 fails CUDA init, while torch 2.10.0 + cu128 works.
# We also cap transformers <5 to stay on the same family as the working vLLM score env.
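#
# A minimal sanity check for the CUDA stack on a host (illustrative command, not
# part of the pinned dependencies; run it after installing this file):
#   python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"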
  
-r requirements_reranker_qwen3_transformers.txt
torch==2.10.0
transformers>=4.51.0,<5