 ScoreContentPartParam: TypeAlias = (
    ChatCompletionContentPartImageParam
    | ChatCompletionContentPartImageEmbedsParam
)
 
ScoreMultiModalParam

  Bases: TypedDict
A specialized parameter type for scoring multimodal content.

The reasons for not reusing CustomChatCompletionMessageParam directly:

1. Score tasks don't need the 'role' field (user/assistant/system) that's required in chat completions.
2. Including chat-specific fields would confuse users about their purpose in scoring.
3. This is a more focused interface that only exposes what's needed for scoring.
Source code in vllm/entrypoints/score_utils.py
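For orientation, a ScoreMultiModalParam-style payload might look like the sketch below. The "content" key and the image-part shape are assumptions based on the OpenAI chat-completions content-part types, not a confirmed schema:

    # Hypothetical payload sketch: the "content" key and the image_url part
    # shape are assumed from the chat-completions content-part types.
    multimodal_param = {
        "content": [
            {
                "type": "image_url",
                "image_url": {"url": "https://example.com/cat.png"},
            },
        ],
    }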
  
 _cosine_similarity(
    tokenizer: PreTrainedTokenizer
    | PreTrainedTokenizerFast,
    embed_1: list[PoolingRequestOutput],
    embed_2: list[PoolingRequestOutput],
) -> list[PoolingRequestOutput]
Source code in vllm/entrypoints/score_utils.py
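The helper's role, pairing the two embedding lists element-wise and scoring each pair by cosine similarity, can be sketched in plain PyTorch. This is a minimal illustration over raw tensors, not the actual implementation, which operates on PoolingRequestOutput objects:

    import torch
    import torch.nn.functional as F

    def cosine_similarity_sketch(
        embeds_1: list[torch.Tensor],
        embeds_2: list[torch.Tensor],
    ) -> list[float]:
        # score[i] compares embeds_1[i] against embeds_2[i] (element-wise pairing).
        return [
            F.cosine_similarity(e1, e2, dim=0).item()
            for e1, e2 in zip(embeds_1, embeds_2)
        ]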
  
 _parse_score_content(
    data: str | ScoreContentPartParam,
    mm_tracker: BaseMultiModalItemTracker,
) -> _ContentPart | None
Source code in vllm/entrypoints/score_utils.py
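A rough sketch of the dispatch this parser performs. The real version returns an internal _ContentPart and registers media with the BaseMultiModalItemTracker; the tracker interaction below is schematic:

    # Schematic only: strings pass through as text, dict-shaped parts
    # (e.g. {"type": "image_url", ...}) are collected as multimodal items.
    def parse_score_content_sketch(data, mm_items: list):
        if isinstance(data, str):
            return {"type": "text", "text": data}
        mm_items.append(data)  # stand-in for the tracker registration
        return {"type": "media", "part": data}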
  
 _validate_score_input_lens(
    data_1: list[str] | list[ScoreContentPartParam],
    data_2: list[str] | list[ScoreContentPartParam],
)
Source code in vllm/entrypoints/score_utils.py
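A plausible shape for this check, assuming the scoring API lets a single item on one side broadcast against many on the other and otherwise requires equal lengths. The exact rules and error messages are assumptions:

    # Assumed pairing rules: non-empty inputs that match 1:1, 1:N, or N:1,
    # otherwise equal lengths (N:N).
    def validate_score_input_lens_sketch(data_1: list, data_2: list) -> None:
        if not data_1 or not data_2:
            raise ValueError("both inputs must contain at least one item")
        if len(data_1) > 1 and len(data_2) > 1 and len(data_1) != len(data_2):
            raise ValueError("inputs must pair 1:1, 1:N, N:1, or N:N")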
  
 apply_score_template(
    model_config: ModelConfig, prompt_1: str, prompt_2: str
) -> str
Source code in vllm/entrypoints/score_utils.py
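Score templates are model-specific; the tags below are invented purely to illustrate that the template's job is to merge the two prompts into the single sequence a cross-encoder scores:

    # Hypothetical template: the <query>/<doc> tags are made up for
    # illustration; real templates come from the model's configuration.
    def apply_score_template_sketch(prompt_1: str, prompt_2: str) -> str:
        return f"<query>{prompt_1}</query>\n<doc>{prompt_2}</doc>"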
  
compress_token_type_ids(
    token_type_ids: list[int],
) -> int
Return the position of the first 1, or the length of the list if no 1 is found.
Source code in vllm/entrypoints/score_utils.py
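The docstring describes a simple scan; a minimal sketch follows. In a cross-encoder pair, token_type_ids switch from 0 to 1 where the second segment begins:

    def first_one_position_sketch(token_type_ids: list[int]) -> int:
        # Position of the first 1, or len(...) when no 1 occurs.
        for i, t in enumerate(token_type_ids):
            if t == 1:
                return i
        return len(token_type_ids)

    assert first_one_position_sketch([0, 0, 0, 1, 1]) == 3
    assert first_one_position_sketch([0, 0]) == 2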
  
 get_score_prompt(
    model_config: ModelConfig,
    tokenizer: AnyTokenizer,
    tokenization_kwargs: dict[str, Any],
    data_1: str | ScoreContentPartParam,
    data_2: str | ScoreContentPartParam,
) -> tuple[str, TokensPrompt]
Source code in vllm/entrypoints/score_utils.py
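A hedged usage sketch built only from the signature above; the model_config and tokenizer setup is elided, and the tokenization_kwargs contents are an assumption:

    # Usage sketch: model_config and tokenizer are assumed to be a scoring
    # model's ModelConfig and its matching tokenizer, built elsewhere.
    full_prompt, engine_prompt = get_score_prompt(
        model_config=model_config,
        tokenizer=tokenizer,
        tokenization_kwargs={},  # e.g. truncation settings (assumption)
        data_1="what is the capital of france?",
        data_2="paris is the capital of france.",
    )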
  
 parse_score_data(
    data_1: str | ScoreContentPartParam,
    data_2: str | ScoreContentPartParam,
    model_config: ModelConfig,
    tokenizer: AnyTokenizer,
) -> tuple[str, str, MultiModalDataDict | None]
Source code in vllm/entrypoints/score_utils.py
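Another hedged usage sketch from the signature alone; for plain-text inputs the multimodal slot is presumably None, an assumption based on the MultiModalDataDict | None return type:

    prompt_1, prompt_2, mm_data = parse_score_data(
        data_1="a query string",
        data_2="a candidate document",
        model_config=model_config,  # assumed to exist in scope
        tokenizer=tokenizer,        # assumed to exist in scope
    )
    # mm_data is presumably None here, since neither input carries media.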
  
 post_process_tokens(
    model_config: ModelConfig, prompt: TokensPrompt
) -> None
Perform architecture-specific manipulations on the input tokens.
Note
This is an in-place operation.
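Because the operation is in-place, callers use it as a statement. A hedged sketch, constructing TokensPrompt via its prompt_token_ids key (TokensPrompt is a TypedDict, so keyword construction works):

    prompt = TokensPrompt(prompt_token_ids=[101, 2054, 2003, 102])
    post_process_tokens(model_config, prompt)  # returns None; mutates prompt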