Photo7B is a 7-billion parameter multimodal model designed to bridge the gap between high-resolution visual perception and natural language reasoning. By leveraging a decoupled vision encoder and a robust language backbone, Photo7B achieves state-of-the-art performance on benchmarks requiring fine-grained image detail and complex instruction following.

1. Architecture Overview

- Vision Encoder: Utilizes a pre-trained CLIP-ViT-L/14 or similar high-resolution transformer to extract spatial features.
- Language Backbone: Built upon the LLaMA-2-7B or Mistral-7B architecture, providing a strong foundation for linguistic reasoning and zero-shot capabilities.
- Projector: A lightweight MLP (Multi-Layer Perceptron) or a C-Abstractor that maps visual tokens into the language model's embedding space (see the sketch after this list).
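To make the projector's role concrete, here is a minimal PyTorch sketch of the MLP variant. It is an illustration of how such a module is typically built, not Photo7B's released code; the 1024-dimensional input (CLIP-ViT-L/14 features) and 4096-dimensional output (LLaMA-2-7B hidden size) are assumptions based on the components named above.

```python
import torch
import torch.nn as nn

class MLPProjector(nn.Module):
    """Minimal two-layer MLP projector sketch (illustrative, not Photo7B's actual code).

    Maps vision-encoder patch features into the language model's embedding space,
    assuming CLIP-ViT-L/14 features (1024-dim) and a LLaMA-2-7B hidden size (4096-dim).
    """

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, visual_tokens: torch.Tensor) -> torch.Tensor:
        # visual_tokens: (batch, 576, vision_dim) -> (batch, 576, llm_dim)
        return self.proj(visual_tokens)


# Example: project the 576 visual tokens of one image into the LLM embedding space.
features = torch.randn(1, 576, 1024)   # dummy CLIP-ViT-L/14 patch features
projected = MLPProjector()(features)
print(projected.shape)                  # torch.Size([1, 576, 4096])
```

The C-Abstractor alternative would replace the plain MLP with a convolutional abstraction module that also reduces the number of visual tokens; the interface (visual features in, LLM-space embeddings out) stays the same.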
2. Training Methodology

The model is typically trained in two distinct stages. In the instruction-tuning stage, it is fine-tuned on high-quality, multimodal instruction-following datasets (such as LLaVA-Instruct); during this stage, both the projector and the LLM weights may be updated to handle conversational context, as sketched below.
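As an illustration of this stage, here is a minimal PyTorch-style sketch of selecting the trainable parameters. It assumes the model exposes `vision_encoder`, `projector`, and `llm` submodules (hypothetical names, not Photo7B's actual API) and assumes the vision encoder stays frozen, which the card does not state explicitly.

```python
import torch.nn as nn

def set_trainable_for_instruction_tuning(model: nn.Module) -> list:
    """Select trainable parameters for the instruction-tuning stage.

    `model` is assumed to expose `vision_encoder`, `projector`, and `llm`
    submodules -- illustrative names used only for this sketch.
    """
    for p in model.vision_encoder.parameters():
        p.requires_grad = False   # assumption: visual features stay fixed
    for p in model.projector.parameters():
        p.requires_grad = True    # projector is updated
    for p in model.llm.parameters():
        p.requires_grad = True    # LLM weights may also be updated
    # Return only the parameters the optimizer should see.
    return [p for p in model.parameters() if p.requires_grad]
```

An optimizer such as `torch.optim.AdamW` would then be constructed over the returned parameter list.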
3. Key Capabilities

- Visual Reasoning: Applying logic to unseen images based on textual prompts.
- Scene Understanding and OCR: Explaining complex scenes or reading text within images (see the usage sketch below).
- High-Resolution Support: Optimized to process images at high pixel resolutions to capture small details.
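If Photo7B were distributed as a LLaVA-style checkpoint compatible with Hugging Face `transformers` (an assumption; the repository id, prompt template, and image file below are hypothetical), an OCR-style query could look roughly like this:

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

# Hypothetical repo id and LLaVA-style prompt template -- adjust to the real checkpoint.
repo_id = "your-org/photo7b"
processor = AutoProcessor.from_pretrained(repo_id)
model = LlavaForConditionalGeneration.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("receipt.jpg")
prompt = "USER: <image>\nRead the total amount printed on this receipt. ASSISTANT:"

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```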
4. Technical Specifications

| Specification | Value |
| --- | --- |
| Context Window | 2048 - 4096 tokens |
| Visual Tokens | 576 tokens per image |
| Precision | FP16 / BF16 |
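The sketch below ties the table together, assuming (as in LLaVA-style models) that the per-image visual tokens count against the context window; the class and field names are illustrative, with values copied from the table.

```python
from dataclasses import dataclass

@dataclass
class Photo7BSpecs:
    """Specs from the table above; field names are illustrative."""
    context_window: int = 4096        # tokens (2048 in the smaller configuration)
    visual_tokens_per_image: int = 576
    precision: str = "bf16"           # FP16 / BF16 supported

    def text_budget(self, num_images: int = 1) -> int:
        """Tokens left for text after reserving the per-image visual tokens."""
        return self.context_window - num_images * self.visual_tokens_per_image


specs = Photo7BSpecs()
print(specs.text_budget())    # 3520 text tokens alongside one image
print(specs.text_budget(2))   # 2944 text tokens alongside two images
```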