Stay updated with the latest features, improvements, and bug fixes for Layr.
December 8, 2025
Minor documentation and changelog updates
Added Demo Video link to README.md for better user understanding
December 7, 2025
Configuration change listener now correctly watches layr.planSize and layr.planType settings
Removed references to non-existent configuration settings in change detection
Enhanced README with detailed instructions on how to find Layr settings in VS Code and other IDEs
Improved settings documentation with both UI and JSON configuration methods
Better user guidance for accessing customization options
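For the JSON configuration method mentioned above, the relevant block in settings.json might look like the fragment below. The setting keys appear elsewhere in this changelog; the chosen values are illustrative, not Layr's defaults.

```json
{
  "layr.planSize": "Normal",
  "layr.planType": "SaaS"
}
```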
December 7, 2025
Updated watermark format to "Generated by Layr" with day, date, and time information
Watermark now includes full day name (e.g., "Monday, January 15, 2024 at 2:30 PM")
Execute Plan dialog simplified to two buttons: "Execute" and "Copy"
Watermark is now always prepended to AI-generated plans, even if the AI provider omits it
Execute Plan dialog button layout improved for better user experience
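The documented watermark format can be reproduced with the standard Intl.DateTimeFormat API. This is a sketch assuming an en-US locale; `formatWatermark` is a hypothetical name, not the extension's actual function.

```typescript
// Build a watermark in the documented style: "Generated by Layr" plus
// full day name, date, and time (e.g. "Monday, January 15, 2024 at 2:30 PM").
function formatWatermark(date: Date): string {
  const formatted = new Intl.DateTimeFormat("en-US", {
    weekday: "long",   // full day name, per the changelog entry
    year: "numeric",
    month: "long",
    day: "numeric",
    hour: "numeric",
    minute: "2-digit",
    hour12: true,
  }).format(date);
  return `Generated by Layr on ${formatted}`;
}
```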
November 22, 2025
Multi-IDE support for Execute Plan feature
Automatic detection and integration with Cursor AI
Automatic detection and integration with Windsurf AI
Automatic detection and integration with Antigravity AI
Generic fallback for other IDE AI assistants
Execute Plan now works across VS Code, Cursor, Windsurf, Antigravity, and other compatible IDEs
Improved AI assistant detection with multiple fallback strategies
Execute Plan now functions properly in Cursor, Windsurf, and Antigravity
Better error handling when AI chat extensions are not available
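One way to realize "multiple fallback strategies" is an ordered probe list: try each known AI chat integration in turn, and fall back to the clipboard if none is available. The `Strategy` shape and names below are assumptions, not Layr's internals.

```typescript
// Ordered detection: first available AI assistant integration wins.
type Strategy = { name: string; available: () => boolean };

function pickAssistant(strategies: Strategy[]): string {
  for (const s of strategies) {
    if (s.available()) return s.name; // e.g. Copilot, Cursor, Windsurf...
  }
  return "clipboard"; // generic fallback: copy the plan for manual pasting
}
```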
November 22, 2025
Published to Open VSX Registry for broader IDE support
Now available on Cursor, Windsurf, Antigravity, VSCodium, Gitpod, Eclipse Theia, and other Open VSX compatible editors
Added npm scripts for easier publishing workflow
Updated README with Open VSX installation instructions
Enhanced documentation for multi-IDE compatibility
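The publishing scripts could be shaped like this package.json fragment, using the real `vsce` and `ovsx` CLIs. The script names and the `OVSX_PAT` environment variable are assumptions, not Layr's actual workflow.

```json
{
  "scripts": {
    "publish:vscode": "vsce publish",
    "publish:openvsx": "ovsx publish --pat $OVSX_PAT"
  }
}
```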
November 22, 2025
Customizable plan generation with size and type settings
Plan Size options: Concise (80-100 lines), Normal (180-240 lines), Descriptive (300+ lines)
Plan Type options: Hobby, SaaS, Production, Enterprise, Prototype, Open Source
Intelligent prompt engineering that adapts to user preferences
Timestamp in watermark showing generation date and time
Enhanced system prompts with explicit size constraints and line count targets
Improved AI differentiation between plan types with feature inclusion/exclusion lists
Plans now properly respect size settings with accurate line counts
Different plan types generate distinctly different outputs
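The size constraints above can be embedded into a system prompt with a simple lookup. The option names and line ranges come from this changelog; the function itself is an illustrative assumption, not Layr's prompt code.

```typescript
// Map the documented Plan Size options to explicit line-count targets
// that can be injected into the system prompt.
const sizeTargets: Record<string, string> = {
  Concise: "80-100 lines",
  Normal: "180-240 lines",
  Descriptive: "300+ lines",
};

function sizeConstraint(planSize: string): string {
  // Unknown values fall back to the Normal target.
  return `Keep the plan to ${sizeTargets[planSize] ?? sizeTargets.Normal}.`;
}
```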
November 22, 2025
Execute Plan feature: Send generated plans directly to AI coding assistants
Integration with GitHub Copilot Chat for automatic plan execution
Intelligent validation: Only executes Layr-generated plans with watermark verification
Fallback support: Copies plan to clipboard if AI assistant not available
Safety features: Confirmation dialogs and plan validation before execution
Enhanced Execute Plan command with full AI assistant integration
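The validate-then-execute-or-copy flow described above can be sketched as follows. The watermark check and clipboard fallback come from this changelog; the function shapes and names are assumptions.

```typescript
// Execute a plan only if it carries the Layr watermark; otherwise reject.
// If no AI assistant integration is available, fall back to the clipboard.
interface ExecuteDeps {
  sendToAssistant?: (plan: string) => void; // undefined if no AI chat found
  copyToClipboard: (plan: string) => void;
}

function executePlan(
  plan: string,
  deps: ExecuteDeps
): "executed" | "copied" | "rejected" {
  // Only Layr-generated plans carry the watermark; refuse anything else.
  if (!plan.includes("Generated by Layr")) return "rejected";
  if (deps.sendToAssistant) {
    deps.sendToAssistant(plan);
    return "executed";
  }
  deps.copyToClipboard(plan); // fallback when no assistant is detected
  return "copied";
}
```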
November 20, 2025
Configured secure Vercel API proxy endpoint for production deployment
Extension now fully functional with secure backend API
November 20, 2025
Pre-configured Groq Integration: Extension now comes with Groq AI built-in - no setup required!
Enhanced AI Planning: Upgraded to Llama 3.3 70B model with 8000 token limit for more detailed plans
Zero Configuration UX: Complete plug-and-play experience - install and start planning instantly
Removed All User Settings: Eliminated configuration complexity - no API keys or settings to manage
Enhanced Output: Plans now include Executive Summary, Phases, Dependencies, Risk Analysis, and more
Environment file loading now uses the extension directory instead of the workspace root
Token limit increased from 4000 to 8000 for more comprehensive plans
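A minimal sketch of the environment-file fix, assuming Node's `path` module and an `extensionPath` argument (in VS Code this would come from `ExtensionContext.extensionPath`); the function name is hypothetical.

```typescript
import * as path from "path";

// Resolve the bundled .env relative to the extension's own install
// directory, not the user's workspace root (the pre-fix behavior).
function envFilePath(extensionPath: string): string {
  return path.join(extensionPath, ".env");
}
```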
November 2, 2025
Support for Kimi (Moonshot AI) provider with multiple models
Support for DeepSeek provider with deepseek-v3.1, deepseek-chat, deepseek-coder
Support for Grok (xAI) provider with grok-4, grok-3, grok-2
Support for O3 (OpenAI reasoning model) with o3, o3-mini
Reorganized provider architecture with dedicated files for each model
Updated factory pattern to support all 7 AI providers
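A minimal sketch of a factory covering the seven providers named in this changelog; the `Provider` interface and `StubProvider` class are illustrative assumptions, not Layr's actual architecture.

```typescript
// Factory pattern: map a provider name to a Provider implementation.
interface Provider {
  generatePlan(prompt: string): string;
}

class StubProvider implements Provider {
  constructor(private readonly name: string) {}
  generatePlan(prompt: string): string {
    return `[${this.name}] plan for: ${prompt}`;
  }
}

const PROVIDERS = ["gemini", "openai", "claude", "kimi", "deepseek", "grok", "o3"];

function createProvider(name: string): Provider {
  if (!PROVIDERS.includes(name)) {
    throw new Error(`Unknown provider: ${name}`);
  }
  return new StubProvider(name);
}
```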
Earlier Release
Initial multi-provider support (Gemini, OpenAI, Claude)
Basic AI planning functionality
Template-based fallback system