On August 25, Alibaba Cloud launched Qwen-VL, an open-source Large Vision Language Model (LVLM). The model is built on Qwen-7B, Alibaba Cloud's 7-billion-parameter foundational language model. In addition to capabilities such as image-text recognition, description, and question answering, Qwen-VL introduces new features including visual location recognition and image-text comprehension, the company said in a statement. These functions enable the model to identify locations in pictures and to provide users with guidance based on information extracted from images, the firm added. The model can be applied in various scenarios, including image- and document-based question answering, image caption generation, and fine-grained visual recognition. Both Qwen-VL and its visual AI assistant, Qwen-VL-Chat, are currently available free of charge, including for commercial use, on ModelScope, Alibaba's "Model as a Service" platform. [Alibaba Cloud statement, in Chinese]