Releases: tenstorrent/tt-metal

v0.55.0-rc1

15 Jan 02:07
76ce6c7
Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not on the main branch. There may be differences between the latest main and the previous release.

The changelog follows, showing the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/12779718295

📦 Uncategorized

  • Add noc read/write burst command support to CCL command kernel. Also add automated command lowering to these noc commands
  • MeshWorkload: Initial Implementation
  • [CCL] Fix padding issues
  • #15868: use a buffer's size when creating its CB in groupnorm
  • Fix trace region size
  • #0: Bump E2E perf threshold for host bound WH Resnet variants
  • Extract Device interface
  • Extend graph capture to include device information
  • Quick fix replacing Device* with IDevice in graph tracker
  • #0: Add unit_tests_ttnn_tensor to post-commit
  • Xuncai/ccl global sem
  • #16153: Add fused activations to input tensors
  • Remove ARCH_NAME specific includes from erisc_datamover_builder
  • remove unused function
  • [TT-Train] Updates related to the fixed matmul
  • [Llama3] Add max prefill chunk sizes for different model/device combinations
  • Add sharded sweeps: identity, neg, selu, abs
  • Handle padded shards in ttnn.convert_to_chw
  • #16492: Add new APIs for setting which sub_device_ids to stall on
  • #0: Track local_cb_size to ensure that remote cb config is correctly sent by FD
  • support keepdim for prod
  • #16225: Int32 support for abs
  • Sharded sweeps: prelu, softmax, sinh, softplus, relu_max and relu_min
  • Changing output channel size in the readme example
  • Fix double move in TTNN invoke_composite launch_op
  • Quick fix for device storage/access in the DevicePool
  • Add native N-dimensional tiled-interleaved permute support when the tiles are not broken.
  • fix multi-iter in reduce scatter and adopt runtime arg overrider infra
  • [tt-train] Add linear regression ddp example
  • Remove eth_l1_address_params.h from device.cpp
  • Sharded sweeps: exp, exp2, expm1, erfc, erfinv, round, log
  • Fix ttnn.concat golden function when groups > 1
  • #16171: Assert that NCRISC NOC is idle at kernel end.
  • Remove eth_l1_address_params.h from tt_cluster.cpp and watcher
  • Remove dev_mem_map.h usage from watcher_device_reader.cpp
  • #14616: Remove ARCH_* ifdefs from tt_cluster.cpp
  • Add support for DRAM Prefetcher op
  • Resolve reduce-scatter-async sharded tensor correctness bug & hang
  • disable flaky t3k test
  • Remove "noc_parameters.h" from device.cpp
  • Remove restriction of input_nsticks_per_core % w == 0
  • Add tt-forge sweep for conv2d.
  • Remove noc header file inclusion from watcher_device_reader.cpp
  • Fix ttnn.from_torch for 0D/1D tensors with tile layout (see the sketch after this list)
  • Short list failing conv2d for forge sweeps
  • Remove halo from shard spec
  • Address issues of var & std
  • #16492: Remove sub_device_ids apis from various read/write functions throughout the stack
  • #6344: Update RoBERTa QA demo
  • Remove noc_parameters.h inclusion from ttnn
  • Resubmit #16339: parameterize dispatch_constants
  • #11512: Refactor bitwise sweeps, add bitwise sharded sweeps, modify t…
  • Update CODEOWNERS
  • Enable multi-core and fix bfloat8 for untilize with unpadding
  • Set up targeting idle eth cores on BH - won't enable because of hang debug
  • Reorganize Print Pages Infrastructure
  • lower fabric erisc datamover eth context switching frequency when workload is running
  • Composite binary sweeps: gcd and lcm
  • Remove ARCH_NAME from host library code
  • [tt-train] Add nanogpt ddp mode
  • #16312: Fix full op to query physical shape for buffer volume
  • #16366: Changed default kernel_config_val for 32bit matmul
  • #16621: Add barriers at end of cq_dispatch_slave.cpp
  • Build wheels in models unit tests workflow
  • Mo/10234 eth dispatch profiling
  • Support subcoregrids in concat_heads
  • Build wheels in ttnn unit tests workflow because the tests need it and we forgot to put it in
  • #16590: profiler trace detection fix
  • #16503: Optimize CoreRangeSets for CBs and semaphores
  • Revert "#16621: Add barriers at end of cq_dispatch_slave.cpp"
  • Fix nightly stable diffusion tests
  • #0: Used github team for conv files
  • Sweeps: fixed abs, added acos and acosh sharded and non sharded
  • fix reduce scatter multi-link support bug
  • support input tensors of all dimensions/ranks for prod operation
  • Create Infrastructure to exactly calculate L1 Memory Usage for Conv2D #15088
  • #12253: Implement Batch norm operation for inference mode
  • Port all experimental ops to compute_output_specs
  • #16443: Add a programming example of vecadd_multi_core and gtest
  • Enable to/from torch tests for 0D/1D tensors
  • Port all data movements ops to compute_output_specs
  • #15246: Add sweep tests for addcdiv, addcmul, rdiv, rsub, ceil
  • Fix build break
  • Logical sharding for input tensor and halo output
  • #16495: reduce grid for falcon7b mlp matmul
  • Stress NOC mcast test
  • [skip ci] Update subdevice doc
  • Read from and write to partial buffer regions for interleaved buffers where offset and size of specified buffer region are divisible by buffer page size
  • Fix resnet large on GS
  • Fix Pre-allgather Layernorm bad PCC when using 1D reduction
  • #16353: skip no volume tensors
  • Create README.md
  • Update README.md
  • #16367: Added support to enable dram and l1 memory collection without saving to disk
  • Update .clang-format-ignore
  • Tweak BH csrrs init code
  • #0: Clean up confusing refs to Grayskull from ttnn.copy error messages.
  • Update perf and latest features for llm models (Jan 13)
  • Update README.md
  • #16657: Fix to_layout conversion into row major for 1D tensors
  • Tilize with val padding results in L1 cache OOM
  • #0: Fixes from commit ae61802
  • #0: Skip build-docker-image during post-commit code-analysis since the docker image is already built in a previous job
  • Generate test executables per architecture
  • #16587: Update UMD submodule commit for P150 compatibility
  • Replace some instances of Tensor::get_shape with get_logical_shape
  • Update METALIUM_GUIDE.md
  • #16621: Add barriers at end of cq_dispatch_slave.cpp on IERISC
  • Finish porting OPs to compute_output_specs
  • ScopedGraphCapture
  • #15756 Pull in BH LLK fix for maxpool hang
  • #15246: Add sweep tests for logical_and, logical_or, logical_xor
  • #0: (MINOR) Bump to v0.55.0
  • #11512: Add sweeps for eltwise sharded ops 3
  • Add sweeps for unary, unary_sharded and binary_sharded versions of ops: fmod, remainder, maximum, minimum.
  • Don't leak tt_cluster.hpp through kernel_types.hpp
  • #6983: Re-enable skipped TT-NN unit test
  • #15450: Remove default values from circular buffer parameters in LLK compute APIs
  • update build flag on programming examples docs
  • Fix for P100 board type
  • Sever TT-Train's dependency on TT-Metalium's tests
  • [TT-Train] Update generate of LLM
  • [TT-Train] Add bias=false in LinearLayer
  • TT-Fabric Bringup Initial Check-in
  • #0: Sanitize writes to mailbox on ethernet cores.
  • Add Llama11B-N300 and Llama70B-TG (TP=32) to LLM table in README.md
  • [skip ci] Update llms.md
  • Update test_slice.py
  • #16625: Refactor tracking of sub-device managers from Device to a new class
  • Update code-analysis.yaml
  • [skip ci] Update llms.md
  • remove references to LFS
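
Below is a minimal sketch of the 0D/1D tile-layout path touched by "Fix ttnn.from_torch for 0D/1D tensors with tile layout" and "#16657: Fix to_layout conversion into row major for 1D tensors" in the list above. It only assumes the public ttnn entry points (ttnn.open_device, ttnn.from_torch, ttnn.to_torch, ttnn.close_device); the device id and tensor shape are illustrative, not taken from the release notes.

```python
import torch
import ttnn

# Illustrative assumptions: device id 0 and a 32-element 1D tensor.
device = ttnn.open_device(device_id=0)

torch_input = torch.rand(32, dtype=torch.bfloat16)  # 1D tensor, smaller than a 32x32 tile

# Converting a 1D tensor to TILE_LAYOUT requires implicit padding up to tile
# boundaries; the round trip back to torch should preserve the logical shape.
tt_input = ttnn.from_torch(
    torch_input, dtype=ttnn.bfloat16, layout=ttnn.TILE_LAYOUT, device=device
)
torch_output = ttnn.to_torch(tt_input)
assert torch_output.shape == torch_input.shape

ttnn.close_device(device)
```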

v0.54.0-rc23

14 Jan 02:06
Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not on the main branch. There may be differences between the latest main and the previous release.

The changelog follows, showing the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/12759327887

📦 Uncategorized

  • Isolate tracy
  • #5605: Only force-stall ethernet programs on earlier ethernet programs
  • #14976/#15039: Add Support For ceil_mode=True
  • Add missing cache invalidates + loads before stores noc optimization for BH
  • Initial CCL Rewrite Push (Unblocks Parallelization of Efforts and Some TG Llama integration)
  • New FD Init Flow
  • Add support for output sharded embeddings
  • Revert "#5605: Only force-stall ethernet programs on earlier ethernet programs"
  • #0: Enforce tile layout when using bf4/bf8 data types
  • MeshDevice: Support Quanta Galaxy system file
  • Move Device members from public to private
  • Add unary sharded sweeps
  • #0: Added core_grid offset for sharded layernorm
  • fix abs path bug for sweeps tests code
  • #0: Publish TT-Distributed doc under tech_reports
  • #15061: Extended {to,from}_vector to support tilized layout, bf4/8 formats
  • #16265: Remove creation op
  • Fix unsigned arithmetic bugs in reshape ops
  • Fix compile issue for earlier c++ versions
  • #0: Typo fix in TT distributed tech report
  • [Llama3-text vLLM integration] Modify Llama3 text model (new and old codebase) forward apis for vLLM compatibility
  • LLM tech report sections 3.1, 3.4, 3.5
  • LLM Tech report section 4.4
  • Move some Device methods to private section
  • #0: [skip_ci] Update Distributed Tech Report with Discord Server link
  • #15857: Binary Forge Sweep Tests Set1
  • #0: Fix get_dispatch_core_config in conftest.py to not modify the device_params to not affect subsequent tests
  • #0: Remove hardcoded grid width in all_gather and skip test_sharded_matmul test when the device grid size is too small

v0.54.0

14 Jan 14:17
Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not on the main branch. There may be differences between the latest main and the previous release.

The changelog follows, showing the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/12768962484

📦 Uncategorized

  • Isolate tracy
  • #0: Enforce tile layout when using bf4/bf8 data types
  • MeshDevice: Support Quanta Galaxy system file
  • Move Device members from public to private
  • Add unary sharded sweeps
  • #0: Added core_grid offset for sharded layernorm
  • fix abs path bug for sweeps tests code
  • #0: Publish TT-Distributed doc under tech_reports
  • #15061: Extended {to,from}_vector to support tilized layout, bf4/8 formats
  • #16265: Remove creation op
  • Fix unsigned arithmetic bugs in reshape ops
  • Fix compile issue for earlier c++ versions
  • #0: Typo fix in TT distributed tech report
  • [Llama3-text vLLM integration] Modify Llama3 text model (new and old codebase) forward apis for vLLM compatibility
  • LLM tech report sections 3.1, 3.4, 3.5
  • LLM Tech report section 4.4
  • Move some Device methods to private section
  • #0: [skip_ci] Update Distributed Tech Report with Discord Server link
  • #15857: Binary Forge Sweep Tests Set1
  • #0: Fix get_dispatch_core_config in conftest.py to not modify the device_params to not affect subsequent tests
  • #0: Remove hardcoded grid width in all_gather and skip test_sharded_matmul test when the device grid size is too small
  • #16066: Add seed param to uniform and bernoulli ops
  • #0: Add StrongType to help creating non-clashing alias types
  • #0: Fix ccl workers not starting
  • #15642: Replace shapes in eltwise
  • Remove old fd init code path
  • Remove more namespace pollution caused by using namespace tt::tt_metal in header file
  • #0: make dependent configs dependent
  • #13643: Extend binary-ng math support to match all primitive binary ops
  • Fix wrong output tensor shape for prod
  • Update CODEOWNERS
  • Add subdevice support to multicore untilize
  • add multi-iteration support to reduce scatter async
  • #16356: Program Dispatch Modifications for MeshWorkload
  • Refactor conv files using clang-format
  • #15338: Fix watcher using the wrong cmd bufs for addr sanitization when using dynamic noc
  • Add cluster-axis API support to reduce scatter
  • split ttnn unit tests 8 ways
  • split ttnn tests into 10 groups
  • #0: Fixes for remote circular buffer synchronization
  • #0: Initial tech report for Sub-Device feature
  • Adapt to tt-system-tools hugepages configuration
  • Further removal of Shape/LegacyShape in order to allow 0D/1D tensors
  • #16134: add test cases for pre-allocated CreateBuffer / ttnn::event_query
  • setting multi-core for tilize with padding
  • reshape assert fix
  • #16165: Add binary SFPU divide init function
  • #15879: supported subcoregrid for createqkv heads
  • Reimplemented dropout as separate op.
  • #16356: Reland Program Dispatch Modifications for MeshWorkload
  • support all dim lengths for reduction
  • Check that writes don't go to below the ringbuffer
  • #16390: Move reduce_scatter_async into experimental namespace and enable cluster api tests
  • Typecast in ng
  • Speed up linking for incremental builds.
  • #0: Don't return shared ptrs of global sems/cbs, and directly return the object instead
  • Add support for act_block_h_override to Width Sharded Conv2d
  • #0: Fix CMakeLists
  • Update install_dependencies.sh to install hugepages using tt-system-tools hugepages service
  • delete stale/(now) invalid assert after recent update to use virtual …
  • Fix CB Overflow issue on certain transposes and permutes
  • Removing LegacyShape from Tensor::pad
  • Add experimental APIs to access Hal
  • Remove documentation references to "setup_hugepages.py"
  • #16175: Add DPRINT TileSlice support for int types
  • Fix remaining minor input/output issues with TG-Llama3 vLLM integration
  • #0: Reshuffle some logic in resize_remote_sender/receiver_cb_interface to fix perf degradation in some models
  • Move conv specific ops from tensor_utils to conv2d.
  • Support all ND shapes for tilize/untilize (see the sketch after this list)
  • Remove unused ARCH_NAME specific includes "eth_l1_address_map.h"
  • #0: Fix failing test case for width sharded non-32 multiple output width
  • #15605: Only force-stall ethernet programs on earlier ethernet programs
  • #16339: parameterize dispatch_constants
  • Ucheema/tt fabric arch md
  • Add ttnn.conv2d unit tests for UNet Shallow at groups=4,6,8
  • Pad greater than 4D
  • [tt-train] Memory efficient option to run GPT2
  • #15732: add matmul block h/w parameter processing
  • #0: Enable unity for sublibraries
  • Remove redundant function determine_parallel_config_non_tile_mul_width.
  • Add support for tiled indices via padding/alignment aware embedding kernel (tiled indices only)
  • Bw sharded sweeps: neg_bw, log_bw, relu_bw, relu6_bw, leaky_relu_bw, rsqrt_bw
  • Conv2dConfig reallocate_halo_output default to true
  • [Llama3] Change prefill padding in LlamaGenerator to nearest 2048 and optimize chunked prefill readback
  • Added check for global non-constexpr uint64_t value in kernel
  • Update CONTRIBUTING.md
  • Dedicated target for HostDevCommon
  • Fix bug when calling CreateDevice in a loop on TG
  • Fix cb allocation errors for halo and conv2d
  • The library is the authority on include dir locations, not the consumers
  • #0: fix corerange handling in ROPE
  • undo revert of #16247
  • #16495: update test pccs after matmul changes and skip test with ND PCC failure
  • Reserve vector in cluster function
  • Xuncai/flash decode bugfix
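
Below is a minimal sketch of the ND tilize/untilize path referenced by "Support all ND shapes for tilize/untilize" in the list above, written against the public ttnn layout-conversion API (ttnn.from_torch, ttnn.to_layout, ttnn.to_torch). The 5D shape and device id are illustrative assumptions, not taken from the release notes.

```python
import torch
import ttnn

# Illustrative assumptions: device id 0 and an arbitrary 5D, non-tile-aligned shape.
device = ttnn.open_device(device_id=0)

torch_input = torch.rand(2, 3, 4, 33, 65, dtype=torch.bfloat16)
tt_rm = ttnn.from_torch(
    torch_input, dtype=ttnn.bfloat16, layout=ttnn.ROW_MAJOR_LAYOUT, device=device
)

tt_tiled = ttnn.to_layout(tt_rm, ttnn.TILE_LAYOUT)         # tilize: pads up to 32x32 tiles
tt_back = ttnn.to_layout(tt_tiled, ttnn.ROW_MAJOR_LAYOUT)  # untilize: drops the tile padding

# The row-major round trip should reproduce the original values and shape.
assert torch.allclose(ttnn.to_torch(tt_back), torch_input)

ttnn.close_device(device)
```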

v0.54.0-rc22

13 Jan 02:08
1a7e545
Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not on the main branch. There may be differences between the latest main and the previous release.

The changelog follows, showing the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/12739066348

📦 Uncategorized

  • Add buffering to DPRINT
  • Isolate tracy
  • #16184: Try using ecr to avoid rate limits of docker.io
  • #15221: Post completion messages to dispatch_s
  • [TT-Train] Added softmax backward
  • Optimized FreeList allocator
  • Set the test data to be relative to the test binary
  • #0: Fix matmul doc string
  • #0: remove spammy warning from conftest
  • Update generating unicast go signal commands to ensure dispatch write linear respects alignment
  • LLM tech report sections 2.2, 2.5
  • [TT-Train] Fix tracy deps in the tt-train cmake
  • Updating Allocator docs to explain first fit usage
  • Adding asserts for hanging cases in ND tilize/untilize support
  • Fix ttnn.reallocate when unaligned RM tensors are used
  • #15891: improve full accuracy and fix full bugs
  • Revert "Fix ttnn.from_torch for 0D/1D tensors with tile layout (#15882)"
  • #15857: Skip abs forge for GS
  • #16213: Use our own forked Docker Run Action that points to ECR
  • Add max kernel size for each risc type in an op
  • Infer Conv2dTranspose parameters during model preprocessing
  • #12662: add keepdim fixes to reduce
  • Add chunked prefill to Llama family
  • #15342: Add mirror_kernels option to conv_transpose2d
  • Update CODEOWNERS
  • support reduction for 3d & 4d dims
  • #5605: Only force-stall ethernet programs on earlier ethernet programs
  • Add full support for creating tensors with logical sharding from python
  • update llama 3.1 70b v0 tt-metal and vllm commit refs in docs
  • #15857: Binary Forge Sweep Tests Set2
  • #14976/#15039: Add Support For ceil_mode=True
  • Add missing cache invalidates + loads before stores noc optimization for BH
  • Initial CCL Rewrite Push (Unblocks Parallelization of Efforts and Some TG Llama integration)
  • New FD Init Flow
  • Add support for output sharded embeddings
  • Revert "#5605: Only force-stall ethernet programs on earlier ethernet programs"
  • #0: Enforce tile layout when using bf4/bf8 data types
  • MeshDevice: Support Quanta Galaxy system file
  • Move Device members from public to private
  • Add unary sharded sweeps
  • #0: Added core_grid offset for sharded layernorm
  • fix abs path bug for sweeps tests code
  • #0: Publish TT-Distributed doc under tech_reports
  • #15061: Extended {to,from}_vector to support tilized layout, bf4/8 formats
  • #16265: Remove creation op
  • Fix unsigned arithmetic bugs in reshape ops
  • Fix compile issue for earlier c++ versions
  • #0: Typo fix in TT distributed tech report
  • [Llama3-text vLLM integration] Modify Llama3 text model (new and old codebase) forward apis for vLLM compatibility
  • LLM tech report sections 3.1, 3.4, 3.5
  • LLM Tech report section 4.4
  • Move some Device methods to private section
  • #0: [skip_ci] Update Distributed Tech Report with Discord Server link
  • #15857: Binary Forge Sweep Tests Set1
  • #0: Fix get_dispatch_core_config in conftest.py to not modify the device_params to not affect subsequent tests
  • #0: Remove hardcoded grid width in all_gather and skip test_sharded_matmul test when the device grid size is too small

v0.54.0-rc21

11 Jan 02:07
ca2c867
Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not on the main branch. There may be differences between the latest main and the previous release.

The changelog follows, showing the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/12719934369

📦 Uncategorized

  • Add buffering to DPRINT
  • Clean-up the usage of deallocate_activation
  • llm tech report multi device section
  • Add prefill v decode section to LLM tech report [section 3.2]
  • #0: Update eltwise binary to support sharding on arbitrary cores on an arbitrary sub-device grid
  • [LLM tech report] Add accuracy evaluation and debugging sections
  • #16165: Disabling test that depends on some machine state to pass
  • enable dps ops for matmul
  • Isolate tracy
  • [TT-Train] Added tests for sum and mean
  • #16184: Try using ecr to avoid rate limits of docker.io
  • #15221: Post completion messages to dispatch_s
  • [TT-Train] Added softmax backward
  • Optimized FreeList allocator
  • Set the test data to be relative to the test binary
  • #0: Fix matmul doc string
  • #0: remove spammy warning from conftest
  • Update generating unicast go signal commands to ensure dispatch write linear respects alignment
  • LLM tech report sections 2.2, 2.5
  • [TT-Train] Fix tracy deps in the tt-train cmake
  • Updating Allocator docs to explain first fit usage
  • Adding asserts for hanging cases in ND tilize/untilize support
  • Fix ttnn.reallocate when unaligned RM tensors are used
  • #15891: improve full accuracy and fix full bugs
  • Revert "Fix ttnn.from_torch for 0D/1D tensors with tile layout (#15882)"
  • #15857: Skip abs forge for GS
  • #16213: Use our own forked Docker Run Action that points to ECR
  • Add max kernel size for each risc type in an op
  • Infer Conv2dTranspose parameters during model preprocessing
  • #12662: add keepdim fixes to reduce
  • Add chunked prefill to Llama family
  • #15342: Add mirror_kernels option to conv_transpose2d
  • Update CODEOWNERS
  • support reduction for 3d & 4d dims
  • #5605: Only force-stall ethernet programs on earlier ethernet programs
  • Add full support for creating tensors with logical sharding from python
  • update llama 3.1 70b v0 tt-metal and vllm commit refs in docs
  • #15857: Binary Forge Sweep Tests Set2
  • #14976/#15039: Add Support For ceil_mode=True
  • Add missing cache invalidates + loads before stores noc optimization for BH
  • Initial CCL Rewrite Push (Unblocks Parallelization of Efforts and Some TG Llama integration)
  • New FD Init Flow
  • Add support for output sharded embeddings
  • Revert "#5605: Only force-stall ethernet programs on earlier ethernet programs"
  • #0: Enforce tile layout when using bf4/bf8 data types
  • MeshDevice: Support Quanta Galaxy system file
  • Move Device members from public to private
  • Add unary sharded sweeps
  • #0: Added core_grid offset for sharded layernorm
  • fix abs path bug for sweeps tests code
  • #0: Publish TT-Distributed doc under tech_reports
  • #15061: Extended {to,from}_vector to support tilized layout, bf4/8 formats
  • #16265: Remove creation op
  • Fix unsigned arithmetic bugs in reshape ops
  • Fix compile issue for earlier c++ versions
  • #0: Typo fix in TT distributed tech report
  • [Llama3-text vLLM integration] Modify Llama3 text model (new and old codebase) forward apis for vLLM compatibility
  • LLM tech report sections 3.1, 3.4, 3.5
  • LLM Tech report section 4.4
  • Move some Device methods to private section
  • #0: [skip_ci] Update Distributed Tech Report with Discord Server link
  • #15857: Binary Forge Sweep Tests Set1
  • #0: Fix get_dispatch_core_config in conftest.py to not modify the device_params to not affect subsequent tests
  • #0: Remove hardcoded grid width in all_gather and skip test_sharded_matmul test when the device grid size is too small

v0.54.0-rc20

10 Jan 02:07
14dac66
Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not on the main branch. There may be differences between the latest main and the previous release.

The changelog follows, showing the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/12701599118

📦 Uncategorized

  • Add buffering to DPRINT
  • Python -> Python3
  • #15713 Bad Eltwise Binary ZEROACC
  • #15565 Fix unit test to show sharding ttnn.from_torch problems
  • Fix paged SDPA decode CB sizing issue
  • Reland async dispatch with workaround for hang.
  • #16119: Add forge traces to matmul and reduce sweeps
  • #10034: Binary shift operators
  • #0: Remove incorrect memory span assert
  • Add forge sweeps for slice and transpose
  • #0: Move memory config serialization in the corresponding header away from types.hpp
  • #16114: Allow Binarized Programs to be Reused across WH Devices
  • #0: aligning conv2d transpose as conv
  • support missing cases for sweep tests
  • #0: added normalization details in the tech report
  • Fix ttnn.from_torch for 0D/1D tensors with tile layout
  • Port all Moreh OPs to compute_output_specs
  • Bump umd to fix grayskull cluster bug
  • Clean-up the usage of deallocate_activation
  • llm tech report multi device section
  • Add prefill v decode section to LLM tech report [section 3.2]
  • #0: Update eltwise binary to support sharding on arbitrary cores on an arbitrary sub-device grid
  • [LLM tech report] Add accuracy evaluation and debugging sections
  • #16165: Disabling test that depends on some machine state to pass
  • enable dps ops for matmul
  • Isolate tracy
  • [TT-Train] Added tests for sum and mean
  • #16184: Try using ecr to avoid rate limits of docker.io
  • #15221: Post completion messages to dispatch_s
  • [TT-Train] Added softmax backward
  • Optimized FreeList allocator
  • Set the test data to be relative to the test binary
  • #0: Fix matmul doc string
  • #0: remove spammy warning from conftest
  • Update generating unicast go signal commands to ensure dispatch write linear respects alignment
  • LLM tech report sections 2.2, 2.5
  • [TT-Train] Fix tracy deps in the tt-train cmake
  • Updating Allocator docs to explain first fit usage
  • Adding asserts for hanging cases in ND tilize/untilize support
  • Fix ttnn.reallocate when unaligned RM tensors are used
  • #15891: improve full accuracy and fix full bugs
  • Revert "Fix ttnn.from_torch for 0D/1D tensors with tile layout (#15882)"
  • #15857: Skip abs forge for GS
  • #16213: Use our own forked Docker Run Action that points to ECR
  • Add max kernel size for each risc type in an op
  • Infer Conv2dTranspose parameters during model preprocessing
  • #12662: add keepdim fixes to reduce
  • Add chunked prefill to Llama family
  • #15342: Add mirror_kernels option to conv_transpose2d
  • Update CODEOWNERS
  • support reduction for 3d & 4d dims
  • #5605: Only force-stall ethernet programs on earlier ethernet programs
  • Add full support for creating tensors with logical sharding from python
  • update llama 3.1 70b v0 tt-metal and vllm commit refs in docs
  • #15857: Binary Forge Sweep Tests Set2
  • #14976/#15039: Add Support For ceil_mode=True
  • Add missing cache invalidates + loads before stores noc optimization for BH
  • Initial CCL Rewrite Push (Unblocks Parallelization of Efforts and Some TG Llama integration)
  • New FD Init Flow
  • Add support for output sharded embeddings
  • Revert "#5605: Only force-stall ethernet programs on earlier ethernet programs"
  • #0: Enforce tile layout when using bf4/bf8 data types
  • MeshDevice: Support Quanta Galaxy system file
  • Move Device members from public to private
  • Add unary sharded sweeps
  • #0: Added core_grid offset for sharded layernorm
  • fix abs path bug for sweeps tests code
  • #0: Publish TT-Distributed doc under tech_reports
  • #15061: Extended {to,from}_vector to support tilized layout, bf4/8 formats
  • #16265: Remove creation op
  • Fix unsigned arithmetic bugs in reshape ops
  • Fix compile issue for earlier c++ versions
  • #0: Typo fix in TT distributed tech report
  • [Llama3-text vLLM integration] Modify Llama3 text model (new and old codebase) forward apis for vLLM compatibility
  • LLM tech report sections 3.1, 3.4, 3.5
  • LLM Tech report section 4.4
  • Move some Device methods to private section
  • #0: [skip_ci] Update Distributed Tech Report with Discord Server link
  • #15857: Binary Forge Sweep Tests Set1
  • #0: Fix get_dispatch_core_config in conftest.py to not modify the device_params to not affect subsequent tests
  • #0: Remove hardcoded grid width in all_gather and skip test_sharded_matmul test when the device grid size is too small

v0.54.0-rc19

08 Jan 02:06
Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not on the main branch. There may be differences between the latest main and the previous release.

The changelog follows, showing the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/12662398466

📦 Uncategorized

  • Add buffering to DPRINT
  • Python -> Python3
  • #0: separate validation of conv weight and bias.
  • #0: Minor refactor of pytensor and tensor implementation files
  • C++ files should not be part of the API of a library
  • #15857: Forge sweep test
  • #15857: Unary forge sweep tests
  • Fix some more namespace pollution caused by using namespace tt::tt_metal
  • #15713 Bad Eltwise Binary ZEROACC
  • #15565 Fix unit test to show sharding ttnn.from_torch problems
  • Fix paged SDPA decode CB sizing issue
  • Reland async dispatch with workaround for hang.
  • #16119: Add forge traces to matmul and reduce sweeps
  • #10034: Binary shift operators
  • #0: Remove incorrect memory span assert
  • Add forge sweeps for slice and transpose
  • #0: Move memory config serialization in the corresponding header away from types.hpp
  • #16114: Allow Binarized Programs to be Reused across WH Devices
  • #0: aligning conv2d transpose as conv
  • support missing cases for sweep tests
  • #0: added normalization details in the tech report
  • Fix ttnn.from_torch for 0D/1D tensors with tile layout
  • Port all Moreh OPs to compute_output_specs
  • Bump umd to fix grayskull cluster bug
  • Clean-up the usage of deallocate_activation
  • llm tech report multi device section
  • Add prefill v decode section to LLM tech report [section 3.2]
  • #0: Update eltwise binary to support sharding on arbitrary cores on an arbitrary sub-device grid
  • [LLM tech report] Add accuracy evaluation and debugging sections
  • #16165: Disabling test that depends on some machine state to pass
  • enable dps ops for matmul
  • Isolate tracy
  • [TT-Train] Added tests for sum and mean
  • #16184: Try using ecr to avoid rate limits of docker.io
  • #15221: Post completion messages to dispatch_s
  • [TT-Train] Added softmax backward
  • Optimized FreeList allocator
  • Set the test data to be relative to the test binary
  • #0: Fix matmul doc string
  • #0: remove spammy warning from conftest
  • Update generating unicast go signal commands to ensure dispatch write linear respects alignment
  • LLM tech report sections 2.2, 2.5
  • [TT-Train] Fix tracy deps in the tt-train cmake
  • Updating Allocator docs to explain first fit usage
  • Adding asserts for hanging cases in ND tilize/untilize support
  • Fix ttnn.reallocate when unaligned RM tensors are used
  • #15891: improve full accuracy and fix full bugs
  • Revert "Fix ttnn.from_torch for 0D/1D tensors with tile layout (#15882)"
  • #15857: Skip abs forge for GS
  • #16213: Use our own forked Docker Run Action that points to ECR
  • Add max kernel size for each risc type in an op
  • Infer Conv2dTranspose parameters during model preprocessing
  • #12662: add keepdim fixes to reduce
  • Add chunked prefill to Llama family
  • #15342: Add mirror_kernels option to conv_transpose2d
  • Update CODEOWNERS
  • support reduction for 3d & 4d dims
  • #5605: Only force-stall ethernet programs on earlier ethernet programs
  • Add full support for creating tensors with logical sharding from python
  • update llama 3.1 70b v0 tt-metal and vllm commit refs in docs
  • #15857: Binary Forge Sweep Tests Set2
  • #14976/#15039: Add Support For ceil_mode=True
  • Add missing cache invalidates + loads before stores noc optimization for BH
  • Initial CCL Rewrite Push (Unblocks Parallelization of Efforts and Some TG Llama integration)
  • New FD Init Flow
  • Add support for output sharded embeddings
  • Revert "#5605: Only force-stall ethernet programs on earlier ethernet programs"
  • #0: Enforce tile layout when using bf4/bf8 data types
  • MeshDevice: Support Quanta Galaxy system file
  • Move Device members from public to private
  • Add unary sharded sweeps
  • #0: Added core_grid offset for sharded layernorm
  • fix abs path bug for sweeps tests code
  • #0: Publish TT-Distributed doc under tech_reports
  • #15061: Extended {to,from}_vector to support tilized layout, bf4/8 formats
  • #16265: Remove creation op
  • Fix unsigned arithmetic bugs in reshape ops
  • Fix compile issue for earlier c++ versions
  • #0: Typo fix in TT distributed tech report
  • [Llama3-text vLLM integration] Modify Llama3 text model (new and old codebase) forward apis for vLLM compatibility
  • LLM tech report sections 3.1, 3.4, 3.5
  • LLM Tech report section 4.4
  • Move some Device methods to private section
  • #0: [skip_ci] Update Distributed Tech Report with Discord Server link
  • #15857: Binary Forge Sweep Tests Set1
  • #0: Fix get_dispatch_core_config in conftest.py to not modify the device_params to not affect subsequent tests
  • #0: Remove hardcoded grid width in all_gather and skip test_sharded_matmul test when the device grid size is too small

v0.54.0-rc18

07 Jan 02:28
bf94433
Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not on the main branch. There may be differences between the latest main and the previous release.

The changelog follows, showing the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/12643496109

📦 Uncategorized

  • Add buffering to DPRINT
  • #0: Remove some dead code
  • Updated installation script
  • Python -> Python3
  • Add transpose WH sharded, generalize row major permute when N > 4, and do a minor refactor of ttnn::permute
  • Adding ND support for tilize/untilize with padding
  • [Llama3.2-11b vLLM Integration] Add support for paged cross attention, fixes for continuous batching, simplified decode forward call
  • #0: Enable Local Sweeps and Use a Faster Interprocess Queue
  • #15601: Implement support for MeshDevice::reshape(..)
  • Remove setup_core_to_tlb_map
  • #0: Let sharded_to_interleaved handle interleaved input
  • #0: separate validation of conv weight and bias.
  • #0: Minor refactor of pytensor and tensor implementation files
  • C++ files should not be part of the API of a library
  • #15857: Forge sweep test
  • #15857: Unary forge sweep tests
  • Fix some more namespace pollution caused by using namespace tt::tt_metal
  • #15713 Bad Eltwise Binary ZEROACC
  • #15565 Fix unit test to show sharding ttnn.from_torch problems
  • Fix paged SDPA decode CB sizing issue
  • Reland async dispatch with workaround for hang.
  • #16119: Add forge traces to matmul and reduce sweeps
  • #10034: Binary shift operators
  • #0: Remove incorrect memory span assert
  • Add forge sweeps for slice and transpose
  • #0: Move memory config serialization in the corresponding header away from types.hpp
  • #16114: Allow Binarized Programs to be Reused across WH Devices
  • #0: aligning conv2d transpose as conv
  • support missing cases for sweep tests
  • #0: added normalization details in the tech report
  • Fix ttnn.from_torch for 0D/1D tensors with tile layout
  • Port all Moreh OPs to compute_output_specs
  • Bump umd to fix grayskull cluster bug
  • Clean-up the usage of deallocate_activation
  • llm tech report multi device section
  • Add prefill v decode section to LLM tech report [section 3.2]
  • #0: Update eltwise binary to support sharding on arbitrary cores on an arbitrary sub-device grid
  • [LLM tech report] Add accuracy evaluation and debugging sections
  • #16165: Disabling test that depends on some machine state to pass
  • enable dps ops for matmul
  • Isolate tracy
  • [TT-Train] Added tests for sum and mean
  • #16184: Try using ecr to avoid rate limits of docker.io
  • #15221: Post completion messages to dispatch_s
  • [TT-Train] Added softmax backward
  • Optimized FreeList allocator
  • Set the test data to be relative to the test binary
  • #0: Fix matmul doc string
  • #0: remove spammy warning from conftest
  • Update generating unicast go signal commands to ensure dispatch write linear respects alignment
  • LLM tech report sections 2.2, 2.5
  • [TT-Train] Fix tracy deps in the tt-train cmake
  • Updating Allocator docs to explain first fit usage
  • Adding asserts for hanging cases in ND tilize/untilize support
  • Fix ttnn.reallocate when unaligned RM tensors are used
  • #15891: improve full accuracy and fix full bugs
  • Revert "Fix ttnn.from_torch for 0D/1D tensors with tile layout (#15882)"
  • #15857: Skip abs forge for GS
  • #16213: Use our own forked Docker Run Action that points to ECR
  • Add max kernel size for each risc type in an op
  • Infer Conv2dTranspose parameters during model preprocessing
  • #12662: add keepdim fixes to reduce
  • Add chunked prefill to Llama family
  • #15342: Add mirror_kernels option to conv_transpose2d
  • Update CODEOWNERS
  • support reduction for 3d & 4d dims
  • #5605: Only force-stall ethernet programs on earlier ethernet programs
  • Add full support for creating tensors with logical sharding from python
  • update llama 3.1 70b v0 tt-metal and vllm commit refs in docs
  • #15857: Binary Forge Sweep Tests Set2
  • #14976/#15039: Add Support For ceil_mode=True
  • Add missing cache invalidates + loads before stores noc optimization for BH
  • Initial CCL Rewrite Push (Unblocks Parallelization of Efforts and Some TG Llama integration)
  • New FD Init Flow
  • Add support for output sharded embeddings
  • Revert "#5605: Only force-stall ethernet programs on earlier ethernet programs"
  • #0: Enforce tile layout when using bf4/bf8 data types
  • MeshDevice: Support Quanta Galaxy system file
  • Move Device members from public to private
  • Add unary sharded sweeps
  • #0: Added core_grid offset for sharded layernorm
  • fix abs path bug for sweeps tests code
  • #0: Publish TT-Distributed doc under tech_reports
  • #15061: Extended {to,from}_vector to support tilized layout, bf4/8 formats
  • #16265: Remove creation op
  • Fix unsigned arithmetic bugs in reshape ops
  • Fix compile issue for earlier c++ versions
  • #0: Typo fix in TT distributed tech report
  • [Llama3-text vLLM integration] Modify Llama3 text model (new and old codebase) forward apis for vLLM compatibility
  • LLM tech report sections 3.1, 3.4, 3.5
  • LLM Tech report section 4.4
  • Move some Device methods to private section
  • #0: [skip_ci] Update Distributed Tech Report with Discord Server link
  • #15857: Binary Forge Sweep Tests Set1
  • #0: Fix get_dispatch_core_config in conftest.py to not modify the device_params to not affect subsequent tests
  • #0: Remove hardcoded grid width in all_gather and skip test_sharded_matmul test when the device grid size is too small

v0.54.0-rc17

06 Jan 02:30
cb02e39
Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not on the main branch. There may be differences between the latest main and the previous release.

The changelog follows, showing the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/12624900279

📦 Uncategorized

  • Add buffering to DPRINT
  • #0: Remove some dead code
  • Updated installation script
  • Python -> Python3
  • Add transpose WH sharded, generalize row major permute when N > 4, and do a minor refactor of ttnn::permute
  • Adding ND support for tilize/untilize with padding
  • [Llama3.2-11b vLLM Integration] Add support for paged cross attention, fixes for continuous batching, simplified decode forward call
  • #0: Enable Local Sweeps and Use a Faster Interprocess Queue
  • #15601: Implement support for MeshDevice::reshape(..)
  • Remove setup_core_to_tlb_map
  • #0: Let sharded_to_interleaved handle interleaved input
  • #0: separate validation of conv weight and bias.
  • #0: Minor refactor of pytensor and tensor implementation files
  • C++ files should not be part of the API of a library
  • #15857: Forge sweep test
  • #15857: Unary forge sweep tests
  • Fix some more namespace pollution caused by using namespace tt::tt_metal
  • #15713 Bad Eltwise Binary ZEROACC
  • #15565 Fix unit test to show sharding ttnn.from_torch problems
  • Fix paged SDPA decode CB sizing issue
  • Reland async dispatch with workaround for hang.
  • #16119: Add forge traces to matmul and reduce sweeps
  • #10034: Binary shift operators
  • #0: Remove incorrect memory span assert
  • Add forge sweeps for slice and transpose
  • #0: Move memory config serialization in the corresponding header away from types.hpp
  • #16114: Allow Binarized Programs to be Reused across WH Devices
  • #0: aligning conv2d transpose as conv
  • support missing cases for sweep tests
  • #0: added normalization details in the tech report
  • Fix ttnn.from_torch for 0D/1D tensors with tile layout
  • Port all Moreh OPs to compute_output_specs
  • Bump umd to fix grayskull cluster bug
  • Clean-up the usage of deallocate_activation
  • llm tech report multi device section
  • Add prefill v decode section to LLM tech report [section 3.2]
  • #0: Update eltwise binary to support sharding on arbitrary cores on an arbitrary sub-device grid
  • [LLM tech report] Add accuracy evaluation and debugging sections
  • #16165: Disabling test that depends on some machine state to pass
  • enable dps ops for matmul
  • Isolate tracy
  • [TT-Train] Added tests for sum and mean
  • #16184: Try using ecr to avoid rate limits of docker.io
  • #15221: Post completion messages to dispatch_s
  • [TT-Train] Added softmax backward
  • Optimized FreeList allocator
  • Set the test data to be relative to the test binary
  • #0: Fix matmul doc string
  • #0: remove spammy warning from conftest
  • Update generating unicast go signal commands to ensure dispatch write linear respects alignment
  • LLM tech report sections 2.2, 2.5
  • [TT-Train] Fix tracy deps in the tt-train cmake
  • Updating Allocator docs to explain first fit usage
  • Adding asserts for hanging cases in ND tilize/untilize support
  • Fix ttnn.reallocate when unaligned RM tensors are used
  • #15891: improve full accuracy and fix full bugs
  • Revert "Fix ttnn.from_torch for 0D/1D tensors with tile layout (#15882)"
  • #15857: Skip abs forge for GS
  • #16213: Use our own forked Docker Run Action that points to ECR
  • Add max kernel size for each risc type in an op
  • Infer Conv2dTranspose parameters during model preprocessing
  • #12662: add keepdim fixes to reduce
  • Add chunked prefill to Llama family
  • #15342: Add mirror_kernels option to conv_transpose2d
  • Update CODEOWNERS
  • support reduction for 3d & 4d dims
  • #5605: Only force-stall ethernet programs on earlier ethernet programs
  • Add full support for creating tensors with logical sharding from python
  • update llama 3.1 70b v0 tt-metal and vllm commit refs in docs
  • #15857: Binary Forge Sweep Tests Set2
  • #14976/#15039: Add Support For ceil_mode=True
  • Add missing cache invalidates + loads before stores noc optimization for BH
  • Initial CCL Rewrite Push (Unblocks Parallelization of Efforts and Some TG Llama integration)
  • New FD Init Flow
  • Add support for output sharded embeddings
  • Revert "#5605: Only force-stall ethernet programs on earlier ethernet programs"
  • #0: Enforce tile layout when using bf4/bf8 data types
  • MeshDevice: Support Quanta Galaxy system file
  • Move Device members from public to private
  • Add unary sharded sweeps
  • #0: Added core_grid offset for sharded layernorm
  • fix abs path bug for sweeps tests code
  • #0: Publish TT-Distributed doc under tech_reports
  • #15061: Extended {to,from}_vector to support tilized layout, bf4/8 formats
  • #16265: Remove creation op
  • Fix unsigned arithmetic bugs in reshape ops
  • Fix compile issue for earlier c++ versions
  • #0: Typo fix in TT distributed tech report
  • [Llama3-text vLLM integration] Modify Llama3 text model (new and old codebase) forward apis for vLLM compatibility
  • LLM tech report sections 3.1, 3.4, 3.5
  • LLM Tech report section 4.4
  • Move some Device methods to private section
  • #0: [skip_ci] Update Distributed Tech Report with Discord Server link
  • #15857: Binary Forge Sweep Tests Set1
  • #0: Fix get_dispatch_core_config in conftest.py to not modify the device_params to not affect subsequent tests
  • #0: Remove hardcoded grid width in all_gather and skip test_sharded_matmul test when the device grid size is too small

v0.54.0-rc16

04 Jan 02:28
Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not on the main branch. There may be differences between the latest main and the previous release.

The changelog follows, showing the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/12606309953

📦 Uncategorized

  • Add buffering to DPRINT
  • Revert "#15565 Add unit test to show sharding ttnn.from_torch problems"
  • [UMD] Removed set_*_params calls and constants
  • #0: Remove some dead code
  • Updated installation script
  • Python -> Python3
  • Add transpose WH sharded, generalize row major permute when N > 4, and do a minor refactor of ttnn::permute
  • Adding ND support for tilize/untilize with padding
  • [Llama3.2-11b vLLM Integration] Add support for paged cross attention, fixes for continuous batching, simplified decode forward call
  • #0: Enable Local Sweeps and Use a Faster Interprocess Queue
  • #15601: Implement support for MeshDevice::reshape(..)
  • Remove setup_core_to_tlb_map
  • #0: Let sharded_to_interleaved handle interleaved input
  • #0: separate validation of conv weight and bias.
  • #0: Minor refactor of pytensor and tensor implementation files
  • C++ files should not be part of the API of a library
  • #15857: Forge sweep test
  • #15857: Unary forge sweep tests
  • Fix some more namespace pollution caused by using namespace tt::tt_metal
  • #15713 Bad Eltwise Binary ZEROACC
  • #15565 Fix unit test to show sharding ttnn.from_torch problems
  • Fix paged SDPA decode CB sizing issue
  • Reland async dispatch with workaround for hang.
  • #16119: Add forge traces to matmul and reduce sweeps
  • #10034: Binary shift operators
  • #0: Remove incorrect memory span assert
  • Add forge sweeps for slice and transpose
  • #0: Move memory config serialization in the corresponding header away from types.hpp
  • #16114: Allow Binarized Programs to be Reused across WH Devices
  • #0: aligning conv2d transpose as conv
  • support missing cases for sweep tests
  • #0: added normalization details in the tech report
  • Fix ttnn.from_torch for 0D/1D tensors with tile layout
  • Port all Moreh OPs to compute_output_specs
  • Bump umd to fix grayskull cluster bug
  • Clean-up the usage of deallocate_activation
  • llm tech report multi device section
  • Add prefill v decode section to LLM tech report [section 3.2]
  • #0: Update eltwise binary to support sharding on arbitrary cores on an arbitrary sub-device grid
  • [LLM tech report] Add accuracy evaluation and debugging sections
  • #16165: Disabling test that depends on some machine state to pass
  • enable dps ops for matmul
  • Isolate tracy
  • [TT-Train] Added tests for sum and mean
  • #16184: Try using ecr to avoid rate limits of docker.io
  • #15221: Post completion messages to dispatch_s
  • [TT-Train] Added softmax backward
  • Optimized FreeList allocator
  • Set the test data to be relative to the test binary
  • #0: Fix matmul doc string
  • #0: remove spammy warning from conftest
  • Update generating unicast go signal commands to ensure dispatch write linear respects alignment
  • LLM tech report sections 2.2, 2.5
  • [TT-Train] Fix tracy deps in the tt-train cmake
  • Updating Allocator docs to explain first fit usage
  • Adding asserts for hanging cases in ND tilize/untilize support
  • Fix ttnn.reallocate when unaligned RM tensors are used
  • #15891: improve full accuracy and fix full bugs
  • Revert "Fix ttnn.from_torch for 0D/1D tensors with tile layout (#15882)"
  • #15857: Skip abs forge for GS
  • #16213: Use our own forked Docker Run Action that points to ECR
  • Add max kernel size for each risc type in an op
  • Infer Conv2dTranspose parameters during model preprocessing
  • #12662: add keepdim fixes to reduce
  • Add chunked prefill to Llama family
  • #15342: Add mirror_kernels option to conv_transpose2d
  • Update CODEOWNERS
  • support reduction for 3d & 4d dims
  • #5605: Only force-stall ethernet programs on earlier ethernet programs
  • Add full support for creating tensors with logical sharding from python
  • update llama 3.1 70b v0 tt-metal and vllm commit refs in docs
  • #15857: Binary Forge Sweep Tests Set2
  • #14976/#15039: Add Support For ceil_mode=True
  • Add missing cache invalidates + loads before stores noc optimization for BH
  • Initial CCL Rewrite Push (Unblocks Parallelization of Efforts and Some TG Llama integration)
  • New FD Init Flow
  • Add support for output sharded embeddings
  • Revert "#5605: Only force-stall ethernet programs on earlier ethernet programs"
  • #0: Enforce tile layout when using bf4/bf8 data types
  • MeshDevice: Support Quanta Galaxy system file
  • Move Device members from public to private
  • Add unary sharded sweeps
  • #0: Added core_grid offset for sharded layernorm
  • fix abs path bug for sweeps tests code
  • #0: Publish TT-Distributed doc under tech_reports
  • #15061: Extended {to,from}_vector to support tilized layout, bf4/8 formats
  • #16265: Remove creation op
  • Fix unsigned arithmetic bugs in reshape ops
  • Fix compile issue for earlier c++ versions
  • #0: Typo fix in TT distributed tech report
  • [Llama3-text vLLM integration] Modify Llama3 text model (new and old codebase) forward apis for vLLM compatibility
  • LLM tech report sections 3.1, 3.4, 3.5
  • LLM Tech report section 4.4
  • Move some Device methods to private section
  • #0: [skip_ci] Update Distributed Tech Report with Discord Server link
  • #15857: Binary Forge Sweep Tests Set1
  • #0: Fix get_dispatch_core_config in conftest.py to not modify the device_params to not affect subsequent tests
  • #0: Remove hardcoded grid width in all_gather and skip test_sharded_matmul test when the device grid size is too small