v1.4.1
What's New in 1.4.1
- New Versioning: The extension now follows semantic versioning, jumping from 0.3 to 1.4 on a minor release, then to 1.4.1 with a final patch for a last-minute WASM build copy issue.
- New Name: The name change to "LLM Context Generator" makes it clear that the extension can be used with various LLMs, not just GPT models.
- Improved File Exclusions: Adds support for a .ignore configuration file that supplies additional exclusion patterns.
- Closes File exclusion settings #2
- Improved File Marking System: Adds a multi-file/folder marking system in the Explorer menu for easily selecting specific files for context generation. Automatic tracking keeps marked files in sync when they are moved or deleted.
- Enhanced File Extension Handling: Broadened support for more diverse file types and programming languages out of the box.
- Improved Token Count Estimation: Switched from `gpt-3-encoder` to `@dqbd/tiktoken` for more accurate and efficient token counting of the generated context, helping manage usage within LLM token limits. A configurable warning fires when the context exceeds a predefined token count (default: 8,000).
- Codebase Refactor: Improved code structure and maintainability using ESBuild, TypeScript, and a more modular design. Removed unused test files and outdated dependencies.
- Updated README: Revised documentation to reflect the new name, features, and usage instructions.
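As an illustration, an .ignore file for the new exclusion feature might look like the following. This is a sketch assuming .gitignore-style pattern syntax; the patterns shown are examples, not shipped defaults:

```
# Exclude build output and dependencies from generated context
node_modules/
dist/
*.log

# Exclude local secrets
.env
```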
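The token-limit warning described above can be sketched roughly as a simple threshold check. The function and constant names here are illustrative assumptions, not the extension's actual API:

```typescript
// Default threshold matching the release notes; in the extension this
// would be read from user settings.
const DEFAULT_TOKEN_WARNING_LIMIT = 8000;

// Returns true when the estimated token count of the generated context
// exceeds the configured limit and a warning should be shown.
function shouldWarnTokenLimit(
  tokenCount: number,
  limit: number = DEFAULT_TOKEN_WARNING_LIMIT
): boolean {
  // In the extension, tokenCount would come from a tokenizer such as
  // @dqbd/tiktoken, e.g. encoding_for_model("gpt-4").encode(context).length.
  return tokenCount > limit;
}
```

In practice the count itself is produced by the `@dqbd/tiktoken` encoder; only the comparison against the configurable limit is shown here.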
New Contributors
- @dependabot made their first contribution in #4
Full Changelog: v0.3...v1.4.1