Luisa Crawford
Apr 03, 2026 21:53
Alibaba’s Wan 2.7 AI video model hits Together AI with text-to-video now live; image-to-video and editing tools are coming soon at competitive pricing.
Together AI has rolled out Alibaba’s Wan 2.7 video generation model on its cloud platform, pricing the text-to-video capability at $0.10 per second of generated footage. The deployment marks the first major cloud availability for the four-model suite that Alibaba launched in late March.
The text-to-video model, accessible via the endpoint Wan-AI/wan2.7-t2v, supports 720p and 1080p resolution with outputs ranging from 2 to 15 seconds. Audio input can drive generation, and multi-shot narrative control works directly through prompt language, a significant upgrade over basic prompt-to-video systems that force creators into fragmented workflows.
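To make the constraints above concrete, here is a minimal sketch of assembling a request body for the endpoint. The model ID and the documented limits (2–15 seconds, 720p/1080p, optional audio input) come from the article; the exact field names are assumptions, so check Together AI's docs before relying on them.

```python
from typing import Optional

WAN_T2V_MODEL = "Wan-AI/wan2.7-t2v"  # endpoint named in the article

def build_t2v_request(prompt: str, seconds: int = 5, resolution: str = "720p",
                      audio_url: Optional[str] = None) -> dict:
    """Assemble a text-to-video request body for Wan 2.7 on Together AI.

    Field names other than the model ID are assumptions; the documented
    limits (2-15 s clips, 720p or 1080p) are enforced here.
    """
    if resolution not in ("720p", "1080p"):
        raise ValueError("Wan 2.7 text-to-video supports 720p or 1080p only")
    body = {
        "model": WAN_T2V_MODEL,
        "prompt": prompt,
        "duration": max(2, min(seconds, 15)),  # clamp to the 2-15 s range
        "resolution": resolution,
    }
    if audio_url:
        body["audio_url"] = audio_url  # audio input can drive generation
    return body
```

The body would then be POSTed with any HTTP client or the Together SDK, using a standard bearer-token header.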
What’s Actually Shipping
Right now, only text-to-video is live. Together AI says image-to-video and reference-to-video capabilities are “coming soon,” with video editing tools to follow.
The image-to-video model will support first-frame, first-and-last-frame, and continuation generation, which is useful for storyboarding workflows. A 3×3 grid-to-video feature targets teams building structured content from static assets.
Reference-to-video gets more interesting for production work. It will accept both reference images and reference videos as inputs, handling multi-character interactions and complex scene composition at up to 1080p for 10-second clips.
The Editing Play
Video Edit, the fourth model in the suite, addresses what is arguably the biggest pain point in AI video: the inability to revise without starting from scratch. Together AI’s implementation will support instruction-based editing via text, reference-image-based modifications, style transfer, and temporal feature cloning: motion, camera work, and effects lifted from source media.
For creative teams, keeping these capabilities within one API surface eliminates the handoff chaos that currently plagues AI video production. Most workflows today involve generating in one tool, editing in another, and manually patching the results.
Competitive Positioning
The $0.10-per-second pricing puts Together AI within striking distance of competitors, though direct comparisons depend heavily on resolution and duration parameters. Wan 2.7 itself has drawn attention since its March launch; reviews have called it possibly the strongest AI video model of 2026, though some skepticism about the hype remains.
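Since billing is per second of generated footage, budgeting is simple arithmetic. A small sketch, using only the $0.10/second rate quoted in the article:

```python
PRICE_PER_SECOND = 0.10  # Together AI's published Wan 2.7 text-to-video rate (USD)

def clip_cost(seconds: float, clips: int = 1) -> float:
    """Estimated spend in USD for a batch of generated clips."""
    return round(seconds * clips * PRICE_PER_SECOND, 2)
```

At this rate, a single maximum-length 15-second clip costs $1.50, and a hundred 10-second clips cost $100, before any volume pricing for production workloads.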
Alibaba built Wan 2.7 within its Qwen ecosystem, and earlier versions (2.1 and 2.2) were open-sourced. Whether 2.7 follows that path hasn’t been confirmed, but the model is now accessible through multiple cloud providers, including Atlas Cloud and WaveSpeedAI, alongside Together AI.
Integration Details
For developers already on Together AI’s platform, adding video generation requires no new authentication or billing setup. The same SDKs work across text, image, and video inference. The company offers serverless endpoints for development, with volume pricing available for production workloads.
Teams evaluating the technology can test directly in Together AI’s playground before committing to API integration. Full documentation covers parameters including audio inputs, resolution control, and the polling loop required for asynchronous video generation jobs.
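Because generation is asynchronous, clients submit a job and then poll for its result. A minimal polling-loop sketch follows; the status values are assumptions rather than Together AI's actual schema, and the fetch callable is injected (e.g. a wrapper around an HTTP GET on the job URL) so the loop itself stays testable offline:

```python
import time

def poll_job(fetch, interval: float = 5.0, timeout: float = 600.0) -> dict:
    """Poll an asynchronous video-generation job until it reaches a terminal state.

    `fetch` is any callable returning the job's current JSON, e.g. a
    wrapper around requests.get(job_url). The "completed"/"failed" status
    names are illustrative assumptions; consult the real API docs.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch()
        if job.get("status") in ("completed", "failed"):
            return job  # terminal state: result URL or error details inside
        time.sleep(interval)  # back off before re-checking
    raise TimeoutError(f"video job did not finish within {timeout}s")
```

Injecting `fetch` rather than hard-coding an HTTP call keeps retry policy, auth, and transport concerns out of the loop.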
Image source: Shutterstock