
Bump torchaudio from 0.13.1+cpu to 2.1.1 #2331

Closed
wants to merge 1 commit

Conversation

dependabot[bot]
Contributor

@dependabot dependabot bot commented on behalf of github Nov 20, 2023

Bumps torchaudio from 0.13.1+cpu to 2.1.1.

Release notes

Sourced from torchaudio's releases.

v2.1.1

This is a minor release, which is compatible with PyTorch 2.1.1 and includes bug fixes, improvements and documentation updates.

Bug Fixes

  • Cherry-pick 2.1.1: Fix WavLM bundles (#3665)
  • Cherry-pick 2.1.1: Add back compression level in i/o dispatcher backend (#3666)

Torchaudio 2.1 Release Note

Highlights

TorchAudio v2.1 introduces the following new features and backward-incompatible changes:

  1. [BETA] A new API to apply filters, effects and codecs
    torchaudio.io.AudioEffector can apply filters, effects and encodings to waveforms in an online/offline fashion.
    You can use it as a form of augmentation.
    Please refer to https://pytorch.org/audio/2.1/tutorials/effector_tutorial.html for examples; a minimal sketch also follows this list.
  2. [BETA] Tools for forced alignment
    New functions and a pre-trained model for forced alignment were added.
    torchaudio.functional.forced_align computes alignment from an emission, and torchaudio.pipelines.MMS_FA provides access to the model trained for multilingual forced alignment in the MMS: Scaling Speech Technology to 1,000+ Languages project.
    Please refer to https://pytorch.org/audio/2.1/tutorials/ctc_forced_alignment_api_tutorial.html for the usage of the forced_align function, and to https://pytorch.org/audio/2.1/tutorials/forced_alignment_for_multilingual_data_tutorial.html for how to use MMS_FA to align transcripts in multiple languages. A minimal forced_align sketch follows this list.
  3. [BETA] TorchAudio-Squim: Models for reference-free speech assessment
    Model architectures and pre-trained models from the paper TorchAudio-Squim: Reference-less Speech Quality and Intelligibility measures in TorchAudio were added. You can use the torchaudio.pipelines.SQUIM_SUBJECTIVE and torchaudio.pipelines.SQUIM_OBJECTIVE models to estimate various speech quality and intelligibility metrics. This is helpful when evaluating the quality of speech generation models, such as TTS.
    Please refer to https://pytorch.org/audio/2.1/tutorials/squim_tutorial.html for details; a minimal sketch also follows this list.
  4. [BETA] CUDA-based CTC decoder
    torchaudio.models.decoder.CUCTCDecoder takes emissions stored in CUDA memory and performs CTC beam search on them on the CUDA device. The beam search is fast, and it eliminates the need to move data from the CUDA device to the CPU when performing automatic speech recognition. With PyTorch's CUDA support, it is now possible to perform the entire speech recognition pipeline on CUDA.
    Please refer to https://pytorch.org/audio/2.1/tutorials/asr_inference_with_cuda_ctc_decoder_tutorial.html for details, and see the sketch after this list.
  5. [Prototype] Utilities for AI music generation
    We are working to add utilities that are relevant to music AI. Since the last release, the following APIs were added to the prototype module.
    Please refer to the respective documentation for usage; a small ChromaSpectrogram sketch also follows this list.
    • torchaudio.prototype.chroma_filterbank
    • torchaudio.prototype.transforms.ChromaScale
    • torchaudio.prototype.transforms.ChromaSpectrogram
    • torchaudio.prototype.pipelines.VGGISH
  6. New recipes for training models
    Recipes for Audio-visual ASR, multi-channel DNN beamforming and TCPGen context-biasing were added.
    Please refer to the recipes.
  7. Update to FFmpeg support
    The version of supported FFmpeg libraries was updated.
    TorchAudio v2.1 works with FFmpeg 6, 5 and 4.4. Support for 4.3, 4.2 and 4.1 is dropped.
    Please refer to https://pytorch.org/audio/2.1/installation.html#optional-dependencies for details of the new FFmpeg integration mechanism.
  8. Update to libsox integration
    TorchAudio now depends on libsox installed separately from torchaudio. The sox I/O backend no longer supports file-like objects (these remain supported by the FFmpeg and soundfile backends).
    Please refer to https://pytorch.org/audio/2.1/installation.html#optional-dependencies for details. A sketch of per-call backend selection follows this list.
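
Below is a minimal sketch of the AudioEffector API from item 1, using a synthetic sine wave; the filter string and sample rate are arbitrary examples, not values taken from the release notes.

```python
import math
import torch
from torchaudio.io import AudioEffector

# Synthetic 1-second, 440 Hz tone; AudioEffector expects (time, channel).
sample_rate = 16000
t = torch.arange(sample_rate) / sample_rate
waveform = torch.sin(2 * math.pi * 440.0 * t).unsqueeze(1)

# Apply an FFmpeg filter as an offline augmentation (example filter string).
effector = AudioEffector(effect="lowpass=frequency=300")
augmented = effector.apply(waveform, sample_rate)
print(augmented.shape)
```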
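
A minimal sketch of torchaudio.functional.forced_align from item 2, using random numbers in place of a real CTC emission; the token IDs and shapes are made up for illustration.

```python
import torch
import torchaudio.functional as F

# Fake CTC emission: (batch=1, frames, num_tokens) log-probabilities.
num_frames, num_tokens = 100, 30
emission = torch.randn(1, num_frames, num_tokens).log_softmax(dim=-1)

# Hypothetical transcript as token IDs (blank is assumed to be index 0).
targets = torch.tensor([[5, 12, 7, 3]], dtype=torch.int32)

# Frame-level alignment and per-frame scores.
aligned_tokens, scores = F.forced_align(emission, targets, blank=0)
print(aligned_tokens.shape, scores.shape)
```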
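
A minimal sketch of the SQUIM objective pipeline from item 3; random noise stands in for a real 16 kHz speech recording.

```python
import torch
from torchaudio.pipelines import SQUIM_OBJECTIVE

model = SQUIM_OBJECTIVE.get_model()

# Three seconds of placeholder "speech" at the bundle's sample rate.
waveform = torch.rand(1, SQUIM_OBJECTIVE.sample_rate * 3)

with torch.inference_mode():
    stoi, pesq, si_sdr = model(waveform)  # reference-free quality estimates
print(f"STOI={stoi.item():.3f} PESQ={pesq.item():.3f} SI-SDR={si_sdr.item():.2f} dB")
```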
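
A minimal sketch of the CUDA CTC decoder from item 4, with a made-up token set and random emissions; a real setup would pass the tokens and log-probabilities of an acoustic model, and it requires a CUDA device.

```python
import torch
from torchaudio.models.decoder import cuda_ctc_decoder

# Hypothetical token set; the blank token is expected at index 0.
tokens = ["-", "|", "a", "b", "c"]
decoder = cuda_ctc_decoder(tokens, nbest=1, beam_size=10)

# Random emissions on the GPU: (batch, frames, num_tokens) log-probabilities.
batch, frames = 1, 50
log_probs = torch.randn(batch, frames, len(tokens), device="cuda").log_softmax(dim=-1)
lengths = torch.full((batch,), frames, dtype=torch.int32, device="cuda")

results = decoder(log_probs, lengths)  # beam search runs entirely on the GPU
best = results[0][0]                   # best hypothesis for the first utterance
print(best.tokens, best.score)
```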
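
A small sketch of one of the prototype music utilities from item 5 (ChromaSpectrogram); as a prototype API its signature may change, and the parameters below are illustrative.

```python
import torch
from torchaudio.prototype.transforms import ChromaSpectrogram

# Map a synthetic waveform to a 12-bin chromagram (parameters are examples).
sample_rate = 16000
waveform = torch.rand(1, sample_rate)  # 1 second of placeholder audio

transform = ChromaSpectrogram(sample_rate=sample_rate, n_fft=1024)
chroma = transform(waveform)           # (channel, n_chroma, time)
print(chroma.shape)
```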
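
Related to items 7 and 8, a minimal sketch of per-call backend selection through the I/O dispatcher; "audio.wav" is a placeholder path.

```python
import torchaudio

# The backends actually available depend on installed FFmpeg/soundfile/sox.
print(torchaudio.list_audio_backends())

# Pick a backend per call instead of setting a global backend.
waveform, sample_rate = torchaudio.load("audio.wav", backend="ffmpeg")

# File-like objects remain supported by the ffmpeg and soundfile backends,
# but not by the sox backend (see item 8 above).
with open("audio.wav", "rb") as f:
    waveform, sample_rate = torchaudio.load(f, format="wav", backend="soundfile")
```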

New Features

... (truncated)

Commits

Most Recent Ignore Conditions Applied to This Pull Request

  Dependency Name    Ignore Conditions
  torchaudio         [>= 0.9.a, < 0.10]
  torchaudio         [>= 2.0.a, < 2.1]

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bumps [torchaudio](https://github.com/pytorch/audio) from 0.13.1+cpu to 2.1.1.
- [Release notes](https://github.com/pytorch/audio/releases)
- [Commits](https://github.com/pytorch/audio/commits/v2.1.1)

---
updated-dependencies:
- dependency-name: torchaudio
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <[email protected]>
@dependabot dependabot bot added the dependencies (Pull requests that update a dependency file) and python (Pull requests that update Python code) labels Nov 20, 2023
@codecov-commenter

codecov-commenter commented Nov 20, 2023

Codecov Report

Merging #2331 (3e530cd) into main (0400813) will decrease coverage by 58.47%.
The diff coverage is n/a.

❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

Additional details and impacted files

Impacted file tree graph

@@             Coverage Diff             @@
##             main    #2331       +/-   ##
===========================================
- Coverage   85.59%   27.13%   -58.47%     
===========================================
  Files         324      324               
  Lines       29326    29325        -1     
  Branches     5407     5342       -65     
===========================================
- Hits        25101     7956    -17145     
- Misses       2842    21118    +18276     
+ Partials     1383      251     -1132     

see 257 files with indirect coverage changes

Contributor Author

dependabot bot commented on behalf of github Dec 13, 2023

Looks like torchaudio is up-to-date now, so this is no longer needed.

@dependabot dependabot bot closed this Dec 13, 2023
@dependabot dependabot bot deleted the dependabot/pip/torchaudio-2.1.1 branch December 13, 2023 10:15