Releases: tenstorrent/tt-metal

v0.44.0

27 Feb 15:57

📦 Uncategorized

  • Update CreateBuffer to return shared_ptr, and Enqueue R/W buffer to accept std::shared_ptr
  • #4794: Implement DownBlock2D using ttnn for stable_diffusion model
  • #4797: Implement BasicTransformerBlock sub-module using ttnn for stab…
  • #0: write cluster config for FD mode, non tunneling cores as well
  • Update bw test, change mulsi calls to use *
  • #3003: updated tt-lib documentation
  • #0: Update to v0.44.0
  • #4003: added ability to trace ttnn operations using torchtrail library
  • Support moreh logsoftmax
  • #4614: gitmodules: Use https URLs for submodules
  • #0: add reviewers to frequently touched ops docs file
  • backward ops - hypot and atan2
  • #4885: Move program device map to program
  • #4858: Add support for float to int typecast
  • Matmul_block on a smaller grid size
  • Revert "#0: Add support for typecast float to int"
  • Add dst ethernet router support and remote command processor to accept FD packets on remote chip
  • Falcon40B TT Implementation
  • #5198: Fix moreh softmax related bug
  • #0: skip MOREH Softmax tests from main
  • #3122: Use device grid size in falcon_attention to be generic...
  • #0: Add assertions for interleaved tensors for ops that don't support sharding
  • #5169: Add activation ops to ttnn
  • #3003: add duration to the ttnn operation nodes when TTNN_ENABLE_LOGGING=1 is used to compile the code
  • #5027: Optimize group attn matmul for Falcon40B decode
  • #0: add documentation about managing documentation
  • Adding docs for maxpool, avg pool and upsample
  • Revert "#0: skip MOREH Softmax tests from d5811b7
  • #5165: Add hyperbolic ops to ttnn
  • #4866: Add grayskull open source llk-library
  • #5002: simplified preprocessing of CNNs using preprocess_model
  • Create GroupNorm sharded in TTNN
  • #5097: Support for dedicated completion queue thread
  • upsample test calculate grid
  • fix for sharded allocator when num banks == num cores
  • MHA tutorial interactive notebook with diagrams
  • #4003: Adding a profile tutorial
  • #0: Added non-blocking read stress test
  • Revert "MHA tutorial interactive notebook with diagrams"
  • #0: Update all_gather to work for multi_link. Update falcon-40b to use 2 links for all gathers
  • #5142: Remove slow dispatch mode from working sweeps
  • #3003: fixed the input tensor documentation
  • #0: Temp slower resnet VM run
  • throw on fast dispatch for to_host_sharded as it's not supported
  • #5253: Fix kv_past_len being passed in to rotary embedding for falcon models
  • #5233: started adding ttnn_functional_resnet
  • #3003: updated ttnn documentation to explain what features it has over tt_lib. Added standalone examples of basic usage of ttnn
  • #0: Speedup incremental builds
  • #0: Change setup.py to be git worktree friendly
  • MHA tutorial interactive notebook with diagrams
  • #3003: disable tutorial 6 from running as the unit test
  • Agrebenisan/non blocking tensor reads
  • #5275: CODEOWNERS: update to include files relevant for ttnn team
  • Fix an intermittent launch message transfer error
  • Revert "MHA tutorial interactive notebook with diagrams"
  • #0: add parens in LLK doc
  • #3003: only unit test tutorials that work on pipelines
  • #5246: Add unary math ops to ttnn
  • Vignesh/stable diffusion ttnn basic transformer block fix
  • #4854: Implement attention and rms_norm sub-module using ttnn for mis…
  • #4795: Add upblock2d to functional stable diffusion model
  • #4796: Implement Transformer2DModel using ttnn for stable_diffusion m…
  • #0: Adding llk wormhole_b0 submodule
  • #4003: Adding pybind11 to ttnn
  • #5296: Fix broken link to host_api.hpp in README.md
  • #0: Fix bug with the way we were measuring bert inference time
  • #0: Change local tt_lib._C module install from symlink to copy
  • #5233: added ability to fold batch_norm2d into conv2d
  • #5222: replace hex8_to_hex32.py with cpp to shave off some compile time -temporary fix
  • Enable tests for WHB0
  • #5137: Cleanups for newer Linux distro / toolchains
  • #5233: implemented support for converting all Resnet-18 modules using preprocess_model function
  • #3003: fix model preprocessing bug
  • #4799: Implement CrossAttnDownBlock2D sub-module using ttnn for stabl…
  • #4800: Implement UNetMidBlock2DCrossAttn using ttnn for stable_diffus…
  • #4798: Add ttnn cross attn upblock2d in functional stable diffusion m…
  • #4801: Implement Unet 2D Condition model using ttnn for stable_diffus…
  • #4965: Rename Conv2D to Conv2d and MaxPool2D to MaxPool2d to match torch
  • #0: Remove departed team member from CODEOWNERS
  • #0: add to codeowners
  • #5314: Only stall on first scheduled read after commands with side effects
  • #4965: fix bad rebase
  • #0: Add more instructions for dispatching workflow actions and a note about skipping git hooks
  • Update optimized Bert to support WH grid sizes, add sharding support for RMSNorm
  • #4642: create gtest_smoke as a sanity test suite
  • #5341: context switch if eth txq is full
  • #5323: Convolutions of small size fail during parallelization calculations
  • Npetrovic/transformer softmax
  • Fix groupnorm for narrow channels
  • #4862: added more test for ttnn bloom. Update optimized ttnn bert to match the structure of non-optimized ttnn bert
  • #0: Add an envvar parser with value detection and default value setti…
  • #4732: Clean up compute kernel apis
  • #5318: Modify Falcon7B to use attn_matmul for wormhole
  • #0: make logLocationsRecord a static function
  • #5233: run convs with auto-format
  • #5377: Avoid segfault by checking buffer !null before getting device
  • Alex/metal/pack untilize b0
  • #4487: Support block sharding in upsample
  • #5359: update python package transformers + dependencies to include Falcon
  • #3708: Add support for LN having gamma/beta in bfp8
  • #4003: Skip sweep tests if not available
  • #4003: use faster TMs in optimized ttnn whisper
  • #4732: Clean up compute_kernel_api
  • More optimizations for group_attn_matmul
  • #5233: updated resnet18 to run residual connections
  • #3003: added more meaningful errors to ttnn. Updated getitem to run on device in the cases when it can
  • #5233: simplified the logic in tracer
  • #3003: include ttl operations and necessary types under ttnn.ttl
  • #0: Add note about no merge commits in main
  • #0: Add timeout in profiler regression workflow
  • codeowners update
  • #5365: Add device argument to determine grid size based on target
  • disable whisper until further investigation, see issue #5430
  • #3003: fixed ttnn convs
  • #3886: Fix build error for C++ tests in debug mode
  • #4954: Support depth 32 in maxpool writer
  • #0: Pass output cb to pack init functions
  • #0: skipping DeviceLoadBlankKernels on remote devices
  • #5359: transformers: update version and relax pcc asserts
  • #3003: guidelines for adding new op
  • Don't assume user has one entry in their $PYTHONPATH
  • FP32 tensor support for matmul
  • #3003: updated tutorial 001 to describe the tensor more comprehensively before showing the add
  • Onboard additional metal code owners
  • #5402: Add redesigned host-side sw command queue, it can be configured i…
  • #3003: fixed docs
  • Alex/metal/enable conv tests on b0
  • #5356: git bisect script to find broken commits
  • #0: Update data_format.cpp file
  • Add skip to full grid matmul whb0
  • #3003: simplified the logic in ttnn/operations/matmul.py. Added dataclasses instead of tuples for CoreGrid and ShardShape
  • #5204: adding moreh's test suite. removing an absolute assertion.
  • Npetrovic/lt gt ne fix
  • #0: Move device id attribute from tensor to DeviceStorage
  • #3003: fixed scheduled pipeline
  • Npetrovic/transformer concat sweeps ttnn
  • #3003: added support for running ttnn.matmul using 1D_systolic_array. Also, added support for passing in the program config directly

v0.43.0

08 Feb 18:02

📦 Uncategorized

  • #4668: Yolov5 GS Demo Benchmarking
  • #0: uplift umd; pick up fix for n150 cluster
  • #3178: Fix for wormhole b0 reduce w
  • #4489: fixed bugs in the program caching of eltwise unary and eltwise binary. Updated bloom to use L1 memory config
  • #4821: Add cumsum op to tt_dnn
  • Dispatch/Bandwidth tests
  • #4003: fixed test_eltwise_unary_op
  • Argmax and Argmin Support
  • #3212: softmax works after reduce fix of max, sum, etc. for WHB0
  • #0: (MINOR) Update version to v0.43.0
  • #4761: Add call to ttl repeat_interleave and also provide script for …
  • #4003: fixed the bug with printing the compile-time attributes
  • Support moreh arange
  • Remove skip_for_wormhole_b0 for test_moreh_softmax and test_moreh_softmin
  • #4541: remove unpad start at 0 limitation
  • Agrebenisan/restart cmd fix
  • Support moreh SGD
  • #0: Use fetch-depth: 0 instead of fetch-tags because otherwise git complains of commit SHA/tag conflict
  • #0: Add code owners for primary operations api binding
  • #4547: Add 2x2 window unit tests to ttnn maxpool
  • #4003: restructure ttnn
  • #4889: Change TileSlice printing to only print tile data
  • #4836: Add support for blocking conv activation in 2d systolic conv v…
  • #0: Update unicast cycles lower bound
  • #4904: Add support for 1d width sharded LN
  • #4941: Convert command header to struct for easier maintainability
  • #4823: enable sum_0 operation fails with low PCC [Wormhole,Grayskull]
  • Fix sharded buffers for one core in fast dispatch
  • #4906: global reduce sum, mean, max, min operations added
  • Revert "#4823: enable sum_0 operation fails with low PCC [Wormhole,GS]
  • #0: Change codeowners from specific op binding files/dirs to all tt_lib bindings
  • #4003: split unary sweep into per op sweeps
  • #4232: added support for converting from numpy arrays to ttnn tensors. Borrow data whenever possible when converting from numpy/torch
  • Uplift AttnMatmul to support GroupAttnMatmul
  • Add watcher-specific CI tests
  • #4916: Add avg pool to ttnn
  • #0: Add a lock on DPRINT server raise/wait structures
  • #4967: added validation for input tensors
  • #4971: update documentation by a new doc hierarchy;
  • #0: Leftover decorate_operation replacement for avg pool
  • #4899: fix the permute to operate on the intended shape
  • #4730: Add tt_lib.tensor.concat
  • Aliu/enqueue eth
  • #4003: Updating functional performance from changes in ttnn.permute w…
  • #4984: Remove dead OP_INFO and graph interpreter
  • #4878: initial commit to add Conv parameters to ttnn.preprocess_model_parameters
  • Update Program Hashes for Ops using Mem config
  • #4984: Remove unused dprint functionality
  • Aliu/ci fix
  • #4215: Add Argmax and Argmin Fallback
  • #4999: added input tensor validation to add, sub and mul operations.
  • Support for softmax rm major sharding and causal mask sharding
  • #0: provide API for where() to support scalar True/False branches
  • #5003: Update expected compile and runtimes for perf regression on VM
  • Revert "Update Program Hashes for Ops using Mem config"
  • #4931: add apis to get ethernet by socket ids
  • #4786: Add upsample_nearest2d functional stable diffusion
  • #4986: deploy docs only to main and enable devs to run docs build on different pages
  • Deploy ttnn sweeps results to docs
  • #4958: Move all python api unit tests to frequent in order to reduce SD pipeline length
  • #4999: Added input validation for ttnn.matmul and ttnn.linear. Add unit test for linear operation. Update input tensor validation in binary.py. Fix compute_output_shapes in bmm_op.cpp
  • #4620: Fix+improve bw test
  • #4852: Add unit tests for functional bloom
  • #5032: scalar argument versions for relops
  • #0: Add some README recommendations from MCW to clarify issue about access to internal workflows VM installation page
  • #4790: Implement GEGLU using ttnn for stable_diffusion model
  • #4999: Adding validation checks
  • #4791: Implement Feedforward sub-module using ttnn for stable_diffusi…
  • Npetrovic/bw ops sweeps
  • #4999: update documentation of ttnn operations to include the validation schema
  • #0: Remove model run from frequent_api_pipeline per @tt-rkim
  • Minor dprint/watcher cleanup
  • #4858: Add support for typecast
  • #0: Disable dprint tests because they're flaky at the moment
  • #4946: Add trig ops to ttnn
  • Nshanker/convs split by 2
  • #4946: Add inv trig ops to ttnn
  • #4003: fixed circular dependency in decorators
  • #5054: Removed asserts from conv op host code that are not required. …
  • #4003: fixed circular dependencies in ttnn
  • #4852: Fix CI pipeline by re-enabling functional bloom for causal LM
  • GroupNorm sharded support
  • #4972: is_sharded and memory_config is free from tensor
  • #0: eltwise ops/activate operator tracking for GS, and WHB0
  • Aliu/fd tunneling pr
  • #4642: Converted 14 old cpp tests to use gtest, with capabilities to switch btwn FD/SD when possible
  • #4852: Add tests for functional ttnn bloom implementation.
  • #4003: correctly convert all parameters of torch module to ttnn parameters
  • #5082: Pow gradient calculation method is different with pytorch
  • Argmax/Argmin support for channel, batch and all dim
  • #4420: switch to shared_ptr
  • #4420: return shared_future from taskflow async wrapper
  • Minor DPrint fixes
  • #0: Enable/disable clearing L1 from env var
  • #4003: started moving ttnn operation to C++
  • #4003: Add script to help with finding issues that we need approval for
  • #5044: Adding support for optional output tensors
  • #4003: Adding the open flag to show only open PRs
  • #5048: Add CreateDevices and CloseDevices api to detail
  • decouple ClearProgramCache from CommandQueue
  • Conv fixes for padding input channels. Shallow conv fixes. Conv input/output autoformatting. Cleanup
  • Asarje/mp unpack tilize fused
  • Update CreateBuffer to return shared_ptr, and Enqueue R/W buffer to accept std::shared_ptr
  • #5137: Cleanups for newer Linux distro / toolchains
  • Revert "#5137: Cleanups for newer Linux distro / toolchains"
  • Revert "Update CreateBuffer to return shared_ptr, and Enqueue R/W buffer to accept std::shared_ptr"
  • #4793: Implement ResnetBlock2D using ttnn for stable_diffusion model
  • #4788: Implement Downsample2D using ttnn for stable_diffusion model
  • #4792: Implement CrossAttention sub-module using ttnn for stable_diff…
  • #4747: Reduce amount of samples in bert sweeps
  • #4789: Add upsample2d to functional_stable_diffusion model
  • #0: Add fix for lamb optimizer
  • #5057: Add relational ops support to TTNN
  • skip eth test suite on GS
  • #4003: updated ttnn.Tensor to be derived form ttl.tensor.Tensor
  • Asarje/shwetank upsample
  • #5082: power gradient is erroneous when exponent is in range (0-1)

v0.42.0

26 Jan 14:59

📦 Uncategorized

  • Syrmia/new sweeps
  • Update test sweeps for the system memory input buffer
  • #4181: Add bfloat8_b dtype fix for tests that should support bfloat8_b
  • #4343: Add new op sweeps for GS and WH
  • #0: (MINOR) Update to v0.42.0
  • #4311: Automate determining and scheduling RC generation
  • Jedi main
  • #0: Remove path appends from test files
  • #4003: Adding padding for whisper
  • #4632: Add dprint server support for eth cores
  • #4003: added ttnn.group_norm
  • #4003: added ttnn.silu
  • #3999: move fallback_ops.silu -> tt_lib.tensor.silu
  • #4683: Support tracing
  • #0: Patch for bad state reached when enqueuing trace
  • Nshanker/remove pow of 2 req for channels size
  • #4003: added ttnn.pad
  • #4730: Adding ttnn.concat as fallback
  • #4003: added ttnn.split
  • Syrmia/ttnn sweeps
  • #4347: Move VGG tensors to L1
  • #4670: Add end to end demo for functional roberta model
  • #4431: mnist gs_demo benchmark
  • #4623: lenet gs demo benchmarking [Pending CI]
  • #4720: Improve folder structure of broken sweep tests
  • Adding interface to assign dispatch kernels to dispatch functionality and adding kernel to service remote command queue
  • #4003: Fixing whisper pcc in last layer
  • #4003: updated ttnn unit tests to assert using higher PCC thresholds
  • #4761: Adding fallback for repeat_interleave
  • #4003: simplified the logic in to_layout
  • #4003: added ttnn.log
  • #4003: updated ttnn.to_layout and ttnn.pad to do the right thing with padded shape
  • #0: Fix reference to Python integration test in README
  • #0: As a quick fix for now, source /etc/rc.local to re-insert number of hugepages back in after starting weka service in perf pipelines
  • #4003: updated model names
  • #4617: Matmul went to 0.9998887677925289 with float comparison to torch
  • #0: Fix bad access to memconfig/device when input tensors are on host
  • #4503: Demo for functional bloom
  • #4611: Add end to end test for ViT model with ImageNet data
  • #4506: SSD gs demo benchmarking
  • #4504: Add end to end demo for functional t5 model
  • #4557: Uplift swin model to resolve errors in tests & Add test_perf_accuracy...
  • #4556: Roberta gs demo benchmarking
  • #3974: nanogpt uplift and move weights to weka path
  • #4610: EfficientNet gs demo benchmark
  • #4003: added more sweeps
  • #4231: Fine-tune the unary ops for add, sub, div, mul binops with one scalar constant arg
  • #516: Sanity check tracy artifact generation
  • #4003: fixed crashing sweep tests
  • #0: Update get_semaphore to return 16B aligned semaphore addresses
  • #0: Add tracy dependencies to github actions runner workflows
  • #4730: Add sweep test for ttnn.concat
  • Update ops for sharding used in falcon 40b
  • #4833: Create initial ttnn sweeps with csv artifact upload
  • #4003: debugging whisper
  • #4003: Setting __all__ = [] to block wildcard imports
  • TTNN Sharded tensor support
  • #3662: Impl moreh_clip_grad_norm
  • #4609: Deit gs demo benchmarking
  • #4741: Add sum op to tt_dnn
  • #4622: Yolov3 GS demo Benchmarking
  • #0: Add weka mount + force hugepage mount with /etc/rc.local in frequent pipelines
  • #0: Reduce timeout of multi queue single device FD post commit
  • #4003: Make ttnn sweep tests available from pytest
  • Add MaxPool2d to ttnn
  • Ttnn 4761 add sweep for repeat interleave
  • #0: Remove checkout secret
  • #4847: Error out when there are insufficient num hugepages
  • simpler hugepage check
  • Revert "#4839: simpler hugepage check"
  • #4862: Disable test_moreh_clip_grad_norm_with_error_if_nonfinite
  • #4374: Benchmarking for bloom TT model
  • #4505: Add end to end demo for functional bert model
  • #4003: updated documentation
  • #4003: updated concat operation to raise an exception if the dimension is out of range
  • #0: Loosen models perf tolerance for GS
  • #0: Add more instructions on syseng assets installation + direct users to additional hugepages setup if needed for cloud VMs
  • #4815: New restart command which safely resets a command queue into a starting state
  • Revert "#4815: New restart command which safely resets a command queue into a starting state"

v0.41.0

13 Jan 21:15

Metal

API Changes

  • tt::tt_metal::detail::GLOBAL_CQ replaced with tt::tt_metal::detail::GetCommandQueue(Device *device)
  • New num_hw_cqs parameter to specify underlying number of HW CQs for a given Device: Device *CreateDevice(chip_id_t device_id, const uint8_t num_hw_cqs = 1, const std::vector<uint32_t>& l1_bank_remap = {});
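
A minimal host-side sketch of how these two changes fit together (the include path, the return type of GetCommandQueue, and the CloseDevice call are assumptions; only the two signatures above come from this release):

    #include "tt_metal/host_api.hpp"

    using namespace tt::tt_metal;

    int main() {
        // CreateDevice now takes the number of hardware command queues (default 1).
        Device *device = CreateDevice(/*device_id=*/0, /*num_hw_cqs=*/2);

        // GLOBAL_CQ is gone; ask the device for its command queue instead.
        auto &cq = detail::GetCommandQueue(device);
        (void)cq;  // ... EnqueueProgram / EnqueueReadBuffer calls would target cq ...

        CloseDevice(device);
        return 0;
    }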

Tools

Profiler

  • Integrated Tracy host-side CLI capture and csv report generation with metal’s profiler infrastructure
  • Added support for device profiling on ethernet cores for Wormhole systems.

ttNN

Infrastructure

  • Updated ttnn documentation with visualizations and examples
  • Added padded shape to ttnn
  • Renamed ttnn.nlp to ttnn.transformer
  • Updated ttnn.transformer.split_query_key_value_and_split_heads to handle most shapes, multi head query and cases when key_value_states are used to compute key and value
  • Added ttnn.rms_norm
  • Added ttnn.Shape and exposed support for padded shape. Simplified broadcasting and reduction operations
  • Moved ttnn.Tensor to C++
  • Added debug decorator for ttnn operations

Operations

  • Layer operators layernorm, conv, softmax were optimized for multi-core computation; model-specific operators for Falcon7B were also added.
  • The operator normalize_global was added to the tt_lib.tensor namespace; it transforms the tensor by normalizing elements using the mean and standard deviation of the entire tensor.
  • The operator lamb_optimizer was added to the tt_lib.tensor namespace to help with computing the back-propagation algorithm and weight update for DNNs in the training loop.

The following backward operators, for use in the back-propagation training loop, have been added to the tt_dnn library; they are accessible with the suffix _bw in the tt_lib.tensor namespace.

 1. abs
 2. add
 3. addalpha
 4. addcdiv
 5. addcmul
 6. binary_assign
 7. binary_le
 8. clamp
 9. clamp_max
10. clamp_min
11. div
12. exp
13. fill
14. fill_zero
15. gt
16. log
17. lt
18. max
19. min
20. mul
21. ne
22. neg
23. relu
24. rsqrt
25. rsub
26. sigmoid
27. sqrt
28. sub
29. tan
30. tanh
31. unary_add
32. unary_assign
33. unary_div
34. unary_mul
35. unary_pow
36. unary_sub
37. where

Models

  • Added ttnn implementation for Roberta, Whisper, T5-small, and flan-T5-small
  • Updated ttnn implementation of Bloom to work with L1 memory, and cleaned up ttnn implementation of BERT
  • Updated Mistral implementation to use tilized tensors and operations
  • Updated VGG model to load pre-tilized weight tensors and use tilized tensors
  • Added benchmarking demo for DistilBert and T5 using SQuAD dataset for question answering

v0.40.0

09 Jan 20:01

📦 Uncategorized

  • Opt LN_sharded and SMX_sharded
  • #1919: Turn existing allocator tests into gtests
  • Agrebenisan/fd perf opt
  • #3932: Rename unary op args which were input_a -> input, binary ops from input, other -> input_a, input_b
  • #3971: Fix TSLICE printing truncation when hitting MAX_COUNT
  • #0: Fix undefined variable error when running with watcher
  • #4141: Add GetPreferredNOCForDRAMRead, GetPreferredNOCForDRAMWrite and update all ops to use these apis
  • #3420: fix eth core init L1 bug
  • #0: Add ttnn founding engineers as CODEOWNERS of functional models
  • #0: Commonize logic between E2E and device perf functions/scripts. Enable assertions for device perf scripts/ci
  • Issue 4073: Fix for host-side hanging when an invalid DPRINT WAIT command is running on the device.
  • #0: Add tt-rkim as CODEOWNERS for setup_hugepages.py
  • #4003: implemented functional t5 model
  • #3003: commonized variable names across ttnn tests. Removed ttnn.experimental. Added ttnn.unary and commonized the import of ttl unary ops
  • #0: Delete extra text in first docs page about being added to repo
  • write watcher log to built/ folder rather than kernel subfolder
  • Add Batch>1 fix for matmul blocking API
  • #4231: improve unary add, sub, mul and div implementation in SFPU. Add complex polar operator
  • #3493: sharded tensor support
  • REVERT #4231: Fine-tune the unary ops to improve performance
  • #0: Move setup_hugepages.py to release assets
  • #0: (MINOR) Update VERSION to 0.40.0
  • #4301: Fix link to announcements in README
  • #4301: Replace some more instances of Metal w/ Metalium in docs
  • Llk refactor uplift
  • #0: Fix TT-Metalium docs link in get_performance.rst
  • #0: uplift in device code
  • #4176: uplift umd plus tt_metal changes
  • init fw once
  • Merge v2 of untilize_with_halo, maxpool, and conv ops for Resnet-50
  • Backward ops for Metalium - part-2
  • #4211: Assert that hugepages number is greater than or equal to required, rather than equal to
  • Update resnet readme
  • Add Run Instructions for BERT_large sharded in readme
  • Add batch 20 for resnet-50
  • #4376: Support mixed precision for eltwise binary with prescaling
  • Increase timeout of slow dispatch unit tests and switch to Y_M_D format for ops logs
  • #0: point umd to main, cosmetic change
  • New tilize and straightforward vec gen in matmul kernel examples
  • #4216: Enable DPrint slow dispatch testing
  • #4376: Call llk reconfig functions in compute kernel apis for WH
  • #4336: #4386: Fix interleaved_to_sharded writer waiting on incorrect amount of data for uneven shards
  • #1433: removed Device* and MemoryConfig from DeviceStorage
  • #0: Increase fast dispatch post commit timeout and shorten full regressions because we no longer need that much time
  • #4003: added ttnn.mean, ttnn.rsqrt and ttnn.pow and got rid of ttl use in ttnn_functional_t5. Updated ttnn.Tensor to store shape as ttnn.Shape
  • Aliu/load base erisc
  • #4399: add spell checker script for docs spellchecking
  • #2134: Uplift UMD
  • #0: fix memory leaks found in test_sfpu via valgrind
  • Revert "#4399: add spell checker script spellcheck.sh should be read…
  • #0: update llk.rst for minor ReST syntax
  • #2934: Make one CommandQueue and one HW CommandQueue (SysmemWriter) per device
  • #4003: convert ttl.tensor.Shape to tuple when using it in torch functions
  • #4211: Fix HP targeting issues in main from cq-per-device changes

v0.39.0

12 Dec 15:57

📦 Uncategorized

  • #0: Add extra sentence about use cases in somewhat vague terms
  • #3824: cache weight tensors for mistral
  • Npetrovic/power fp sweep
  • #3918: Fix falcon7b perf profiling & add support to load weights from HF when weka is not mounted
  • Rename KernelID -> KernelHandle and CircularBufferID -> CBHandle
  • Aliu/erisc cleanup
  • #3003: ttnn program logging
  • Watcher output/doc tweaks
  • #4014: added support for uint16 datatype
  • #4000: Add links to demo folders in note in first 5 things
  • #3751: Fix sfpu load/store of ints
  • enable watcher for stress test actions
  • #3058: Give first pass at flattening build by getting rid of tt-metal intermediate libs
  • Revert "#3058: Give first pass at flattening build by getting rid of …
  • #3219: Added host functions which tilize and untilize bfloat16 vectors
  • stress test machine config update
  • #0: update to use concat on device
  • #3895: ttnn functional optimized Bert
  • #4014: Fix bug with packing uint16 datatype
  • #3824: move mistral embedding weights to weka
  • #3978: Fix readme to instruct running pytest without warnings
  • Dma/3467 dprint cleanup
  • #0: identity operator for comparison of SFPU ops
  • #3058: Add tracy back into build and test with ENABLE_TRACY=1
  • #3979: Add support for ResNet for weka unmounted machines to download ImageNet
  • #3990: Remove DPRINT SETW sticky bit
  • #4041: Add moreh_layernorm op
  • #4044: Add moreh_softmax, moreh_softmin ops
  • #3103: profile the SFPU operators
  • #0: function typo fix
  • #3211: bug in WH B0 - sum along dim3
  • Implementation for Bert Sharded Batch 12
  • #4069: Avoid reading out of bounds in the hugepage
  • #4014: Add testing for uint16 and uint32 on device
  • #0: Disable TestPrintRaiseWait gtest until a fix for nondet issue is in
  • Move hugepages section and refer to public syseng instructions for accelerator-level dependencies
  • #4055: non-deterministic test_pow_fractional PCC error with watcher enabled
  • #0: update test_sfpu and profiling conflict
  • #4043: Add discord link to docs support page + README
  • Noc on erisc
  • #3894: backward ops for tt-metal
  • #3972: Update tracy and device-side profiler docs
  • #4085: update seed value and re-verify the reported bug
  • #2860: Init one UMD per MMIO device ID and the remote devices it controls
  • #4074: Add opened, reopened, synchronize pull_request triggers (default) for static checks pipeline
  • #0: Ignore /device, not device/ in .gitignore
  • #4074: Add wording to CONTRIBUTING.md to be open to future forks + to discourage clogging up pipelines with too many PRs
  • #4053: Upgrade driver from 1.23 to 1.26 in release assets from syseng
  • #4065: Update pinned python3.8-venv to 20.04.9 because 20.04.8 is gone
  • #4096: Fix issue with DPRINT server closing too early for some WAITs
  • #4053: Add chmod ugo+x step in ansible scripts for copying over script assets
  • #4109: ttnn examples.rst needs update
  • #4158: support full repeat interleave developed for Mistral
  • #4076: Add instructions for execution for programming_examples and fix one typo
  • #0: (MINOR) Bump minor to v0.39.0
  • #4053: Get rid of FW labels for silicon runner targets
  • #3752: update ttnn tutorials and make them more descriptive
  • #3994: Add bfloat16 dtype to sweep tests
  • #0: update ownership for SFPU ops profiler, and Backward ops code
  • #3420: move init erisc info to clear l1 call
  • #3918: Add falcon caching support
  • #4125: Refactor tests for backward ops
  • Perf bloom
  • #4121: Unset TT_METAL_SLOW_DISPATCH_MODE when empty string in yaml. R…
  • #4079: Remove dprints from op kernels
  • #4176: uplift umd to include create-eth-map fixes
  • #4017: Replace static device APIs to query num available devices and num available pcie devices with standalone host APIs
  • Fixup some error messages
  • Rework build system
  • #4228: Revert umd change to see if seg faults go away
  • #4003: use if-else instead of try-except in ttnn.reshape and ttnn.permute
  • #4003: updated ttnn.model_preprocessing to keep the structure of the model weights
  • #0: Changing name for major places from Metal to Metalium
  • #4186: Move all assets except for setup_hugepages.py to internal workflows
  • #4003: run test_performance_of_bloom_for_question_answering using L1 Config and assuming fused softmax
  • #3003: updated ttnn tests

v0.38.0

24 Nov 19:50

📦 Uncategorized

  • #3820: Trunc fallback op
  • #3703: Support power with non integer exponent: tt_lib.tensor.power_fp
  • #308: Add a new test for coverage of previous issue with dprinting float consts from ncrisc
  • #0: Update UMD submodule and add cluster wrapper for get_pcie_base_addr_from_device
  • ttnn - added Bert
  • Remove asserts and enable lto for release builds
  • #2220: Use new UMD apis to get PCIe address ranges
  • #3814: Use UMD fast write path to update the CQ write pointer, clean up the names of the write/read core APIs so they do not reference DRAM
  • #0: Fix the repeat interleave doc
  • #3003: use log_debug instead of log_info for logging operations
  • Revert "#2220: Use new UMD apis to get PCIe address ranges"
  • Update get_started.rst
  • #0: Remove kkwong from CODEOWNERS
  • #0: Fix scatter op
  • #3829: Add new void* enqueue apis
  • #2516: Remove datacopy into uint32_t vector now that we have void* apis
  • #3640: eltwise binary op perf optimization
  • #0: Fix microbenchmark csv artifact path
  • #3568: Move weights dtype from bfloat16 to bfp8 in mistral model
  • Fix SPDX headers to be machine readable
  • #3804: Split device perf job into separate workflow from E2E perf
  • #0: Update untilizewithunpad to support some cases of unpadding width in width sharding
  • #2498: Upload syseng assets as part of release
  • #0: (MINOR) Update to v0.38.0
  • #2498: Revert "#2498: REVERT ME - test out release pipeline without r…
  • Update llama-2 version
  • #3566: support mistral model for generic batch size
  • #3718: Link multicasts that use the same path to avoid multiple path reservations in a row
  • remove UpdateRuntimeArg
  • #3704: Increase size of trisc1 code hole for now
  • Doc update for EnqueueReadBuffer
  • Env variable cleanup
  • Documenting Compute Kernels API Sprint
  • #3647: Add fix for test for polyval coeffs generation
  • #0: mistral code refactor and reuse variables
  • Codeowners update
  • #3914: Apply scatter for mistral model
  • Rewrote ttnn_optimized_multi_head_attention using only ttnn operations
  • Update models' landing page
  • #3904: First docs changes for Project Grayskull
  • Adding compute kernel api docs for untilize, tilize, unpack, tile_move_copy and reg_api
  • document compute_kernel_api/matmul.h, compute_kernel_api/pack.h, and compute_kernel_api/bcast.h
  • #3887: repeat operator implementation
  • restrict my ownership to host API docs only
  • #0: update profiling for unary ops
  • #2220: Redo use new UMD apis to get PCIe address ranges
  • Merge latest resnet optimizations
  • Add support for eth kernels full stack
  • #0: Update docs on device side profiler
  • #3913: Update mem config for the mistral modules
  • #3003: updated links to steps 3 and 4 of getting started
  • #3830: Fix CB failures in perf pipelines
  • #0: enable test for wormhole, use eps from device
  • #3003: Adding ttnn_functional_bloom
  • #3926: refactored run_device_operation to commonize the logic of runn…
  • #0: add --tile-factor, --use-L1, --use-DRAM, or --help options
  • Moreh Matmul Op

v0.37.0

17 Nov 22:42

Metal

API Changes

  • Top-level API to create a Program:
    Program CreateProgram();

  • GetRuntimeArgs now returns a reference to underlying runtime args to allow for in-place updates. This results in noticeably better performance for host-bound workloads:
    std::vector<uint32_t>& GetRuntimeArgs(const Program &program, KernelID kernel_id, const CoreCoord &logical_core);

  • Two other variants of updating runtime arguments that results in better host-side performance in certain situations:

    • void UpdateRuntimeArg(const Program &program, KernelID kernel, const std::variant<CoreCoord, CoreRange, CoreRangeSet> &core_spec, size_t offset, uint32_t value);
    • void SetRuntimeArgs(const Program &program, KernelID kernel, const std::vector< CoreCoord > & core_spec, const std::vector< std::vector<uint32_t> > &runtime_args);

    (NOTE: UpdateRuntimeArg is getting removed by the next release, as its use has been superseded by the other functions)

  • GetCircularBufferConfig now returns a const reference: const CircularBufferConfig &GetCircularBufferConfig(Program &program, CircularBufferID cb_handle);

  • Updating circular buffer config parameters is done through 3 separate functions:

    • void UpdateCircularBufferTotalSize(Program &program, CircularBufferID cb_handle, uint32_t total_size);
    • void UpdateCircularBufferPageSize(Program &program, CircularBufferID cb_handle, uint8_t buffer_index, uint32_t page_size);
    • void UpdateDynamicCircularBufferAddress(Program &program, CircularBufferID cb_handle, const Buffer &buffer);
  • Moved slow/host dispatch APIs to detail namespace:

    • void LaunchProgram(Device *device, Program &program);
    • void ReadFromBuffer(const Buffer &buffer, std::vector<uint32_t> &host_buffer);
    • void WriteToBuffer(const Buffer &buffer, const std::vector<uint32_t> &host_buffer);
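
A hedged sketch of the updated host flow, assuming kernel_id, cb_handle, and device come from the usual CreateKernel / CreateCircularBuffer / CreateDevice calls; the include path, the circular buffer size, and the runtime argument layout are placeholders rather than part of this release's notes:

    #include <vector>

    #include "tt_metal/host_api.hpp"

    using namespace tt::tt_metal;

    void update_and_relaunch(Device *device, Program &program,
                             KernelID kernel_id, CircularBufferID cb_handle) {
        // GetRuntimeArgs now returns a mutable reference, so host-bound workloads
        // can tweak arguments in place instead of re-issuing SetRuntimeArgs.
        std::vector<uint32_t> &args = GetRuntimeArgs(program, kernel_id, CoreCoord{0, 0});
        args[0] += 1;  // e.g. advance a buffer address or loop count (hypothetical slot)

        // Circular buffer configs are read through a const reference and changed
        // only through the dedicated update functions.
        const CircularBufferConfig &cb_config = GetCircularBufferConfig(program, cb_handle);
        (void)cb_config;
        UpdateCircularBufferTotalSize(program, cb_handle, /*total_size=*/4 * 2048);

        // Slow-dispatch launch now lives under the detail namespace.
        detail::LaunchProgram(device, program);
    }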

Tools - Profiler

  • Updated the path for all profiler artifacts to be under the generated/profiler folder

ttNN

Infrastructure

  • Introduced ttnn.embedding to facilitate word embeddings
  • Added preprocess_parameters for generic conversion of torch parameters with caching
  • Added ttnn.experimental.gelu
  • Added ttnn.experimental.layer_norm
  • Updated program hash to be std::size_t and significantly sped up its computation

Operations

  • Splitting a tensor of shape [W, Z, Y, X] into two is now supported along the Y dimension, in addition to the existing X dimension.
  • The trunc function has fallback support equivalent to torch.trunc
  • The power function now supports non-integral exponents: tt_lib.tensor.power_fp()
  • Support for the reshape operator on host for ROW_MAJOR layout

Models

Notes not available.

v0.36.1

05 Nov 17:22

Metal

Wormhole Bringup

  • Added some APIs to query device ethernet connectivity.
  • Added first phase of ethernet data movement support, basic unit tests passing on N300.

API Changes

Notes not available.

Tools - Profiler

  • Device only and host only profiling options for profile_this.py script
  • Examples for fast dispatch device program profiling

Tools - Watcher

  • Added kernel names/paths to watcher log file

Extra features

Notes not available.

Eager/ttNN

Infrastructure

  • Added initial implementation of TTNN APIs
    • Added functions to interface with torch: from_torch, to_torch
    • Added functions to move tensor to/from device: to_device, from_device
    • Added functions to change the layout of the tensor: to_layout
    • Added matmul, add, sub, mul, reshape, permute and softmax operations
  • Implemented Multi-Head-Attention using TTNN APIs
  • Added 3 tutorials to showcase TTNN
  • Updated Documentation to describe TTNN and its APIs

Operations

Following on-device operators are added to tt_lib.tensor module:

  • repeat_interleave
  • triu
  • tril
  • rmsnorm
  • groupnorm
  • silu (updated to be a first-class unary operator)

Models

  • For BERT demo, added loading of cached pre-processed weights (stored as TT tensors) to avoid conversion from Torch to TT tensors.
  • Added demo for ResNet that executes on TT hardware. Demo takes images from ImageNet and processes them in batches of 8.

v0.35.0

27 Oct 23:36

Metal

Wormhole Bringup

  • Extended gtests to run on all available devices in Wormhole systems.
  • Single device tests passing on remote chips.

API Changes

  • These 2 functions:

    • uint32_t CreateSemaphore(Program &program, const CoreRange &core_range, uint32_t initial_value)
    • uint32_t CreateSemaphore(Program &program, const CoreRangeSet &core_range_set, uint32_t initial_value)

    have been replaced by

    • uint32_t CreateSemaphore(Program &program, const std::variant<CoreRange,CoreRangeSet> &core_spec, uint32_t initial_value).
  • These 3 functions:

    • void SetRuntimeArgs(const Program &program, KernelID kernel, const CoreCoord &logical_core, const std::vector<uint32_t> &runtime_args)
    • void SetRuntimeArgs(const Program &program, KernelID kernel, const CoreRange &core_range, const std::vector<uint32_t> &runtime_args)
    • void SetRuntimeArgs(const Program &program, KernelID kernel, const CoreRangeSet &core_range_set, const std::vector<uint32_t> &runtime_args)

    have been replaced by

    • void SetRuntimeArgs(const Program &program, KernelID kernel, const std::variant<CoreCoord, CoreRange, CoreRangeSet> &core_spec, const std::vector<uint32_t> &runtime_args)
  • These 2 functions:

    • KernelID CreateDataMovementKernel(Program &program, const std::string &file_name, const std::variant<CoreCoord, CoreRange, CoreRangeSet> &core_spec, const std::optional<DataMovementConfig> &config = {})
    • KernelID CreateComputeKernel(Program &program, const std::string &file_name, const std::variant<CoreCoord, CoreRange, CoreRangeSet> &core_spec, const std::optional<ComputeConfig> &config = {})

    have been replaced by:

    • KernelID CreateKernel(Program &program, const std::string &file_name, const std::variant<CoreCoord, CoreRange, CoreRangeSet> &core_spec, const std::variant<DataMovementConfig,ComputeConfig> & config)
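
A sketch of the consolidated, variant-based entry points above; the kernel paths, core range, default-built configs, and runtime argument values are placeholders, and Program is default-constructed since CreateProgram() only appears in a later release:

    #include "tt_metal/host_api.hpp"

    using namespace tt::tt_metal;

    void build_program() {
        Program program;

        CoreRange all_cores{{0, 0}, {3, 3}};

        // One CreateKernel replaces CreateDataMovementKernel / CreateComputeKernel;
        // the config variant selects the kernel type.
        KernelID reader = CreateKernel(program, "kernels/dataflow/reader.cpp", all_cores,
                                       DataMovementConfig{});
        KernelID compute = CreateKernel(program, "kernels/compute/eltwise.cpp", all_cores,
                                        ComputeConfig{});

        // CreateSemaphore and SetRuntimeArgs now take a single core_spec variant
        // instead of separate per-core-type overloads.
        uint32_t semaphore_addr = CreateSemaphore(program, all_cores, /*initial_value=*/0);
        SetRuntimeArgs(program, reader, CoreCoord{0, 0}, {semaphore_addr, 64u});
        (void)compute;
    }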

Tools - Profiler

  • Improved profile_this.py log management strategy to avoid conservative log folder checks from profiling

Extra features

  • Runtime Compute Args: Arguments can be sent to Compute Kernels at runtime in the same way as to DataMovement Kernels. The kernel uses the same get_arg_val<type>(<index>) call to retrieve them, and the host uses the same tt_metal::SetRuntimeArgs(Program program, KernelID kernel, const std::variant<CoreCoord, CoreRange, CoreRangeSet> &core_spec, const std::vector<uint32_t> &runtime_args) it already uses to communicate with DataMovement Kernels; a device-side sketch follows below.
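
A device-side sketch of a compute kernel reading such runtime arguments; the include path and the NAMESPACE/MAIN boilerplate are assumed from typical metal compute kernels, and the two argument slots are hypothetical:

    #include "compute_kernel_api/common.h"

    namespace NAMESPACE {
    void MAIN {
        // Values supplied from the host via tt_metal::SetRuntimeArgs(...).
        uint32_t per_core_tile_count = get_arg_val<uint32_t>(0);
        uint32_t scale_bits = get_arg_val<uint32_t>(1);
        (void)scale_bits;

        for (uint32_t i = 0; i < per_core_tile_count; ++i) {
            // ... per-tile compute using scale_bits would go here ...
        }
    }
    }  // namespace NAMESPACE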

Eager (Ops)

There have been no notable changes to communicate in this release.

Models

  • Moved code that implements and tests models from tests/models to top level models folder. In the models folder, models are separated into demos (working models with end2end demo code) and experimental (models that are under development).
  • Added implementation of Falcon7B for GS and PyTorch demos for nanoGPT and T5
  • Added BERT Large end2end demo on GS (set up for question answering)