Make all struct CamelCase (#1316)
antimora authored Feb 15, 2024
1 parent dfb739c commit 44266d5
Showing 39 changed files with 216 additions and 216 deletions.
@@ -194,7 +194,7 @@ impl<E: FloatElement> DynamicKernel for FusedMatmulAddRelu<E> {
```

Subsequently, we'll go into implementing our custom backend trait for the WGPU backend.
Note that we won't go into supporting the `fusion` feature flag in this tutorial, so
we implement the trait for the raw `WgpuBackend` type.

```rust, ignore
22 changes: 11 additions & 11 deletions burn-book/src/basic-workflow/data.md
@@ -16,15 +16,15 @@ at `examples/guide/` [directory](https://github.com/tracel-ai/burn/tree/main/exa

```rust , ignore
use burn::{
-data::{dataloader::batcher::Batcher, dataset::vision::MNISTItem},
+data::{dataloader::batcher::Batcher, dataset::vision::MnistItem},
tensor::{backend::Backend, Data, ElementConversion, Int, Tensor},
};

-pub struct MNISTBatcher<B: Backend> {
+pub struct MnistBatcher<B: Backend> {
device: B::Device,
}

-impl<B: Backend> MNISTBatcher<B> {
+impl<B: Backend> MnistBatcher<B> {
pub fn new(device: B::Device) -> Self {
Self { device }
}
@@ -42,13 +42,13 @@ Next, we need to actually implement the batching logic.

```rust , ignore
#[derive(Clone, Debug)]
-pub struct MNISTBatch<B: Backend> {
+pub struct MnistBatch<B: Backend> {
pub images: Tensor<B, 3>,
pub targets: Tensor<B, 1, Int>,
}

-impl<B: Backend> Batcher<MNISTItem, MNISTBatch<B>> for MNISTBatcher<B> {
-fn batch(&self, items: Vec<MNISTItem>) -> MNISTBatch<B> {
+impl<B: Backend> Batcher<MnistItem, MnistBatch<B>> for MnistBatcher<B> {
+fn batch(&self, items: Vec<MnistItem>) -> MnistBatch<B> {
let images = items
.iter()
.map(|item| Data::<f32, 2>::from(item.image))
@@ -71,7 +71,7 @@ impl<B: Backend> Batcher<MNISTItem, MNISTBatch<B>> for MNISTBatcher<B> {
let images = Tensor::cat(images, 0).to_device(&self.device);
let targets = Tensor::cat(targets, 0).to_device(&self.device);

-MNISTBatch { images, targets }
+MnistBatch { images, targets }
}
}
```
@@ -81,7 +81,7 @@ impl<B: Backend> Batcher<MNISTItem, MNISTBatch<B>> for MNISTBatcher<B> {

The iterator pattern allows you to perform some tasks on a sequence of items in turn.

-In this example, an iterator is created over the `MNISTItem`s in the vector `items` by calling the
+In this example, an iterator is created over the `MnistItem`s in the vector `items` by calling the
`iter` method.

_Iterator adaptors_ are methods defined on the `Iterator` trait that produce different iterators by
@@ -100,7 +100,7 @@ If we go back to the example, we can break down and comment the expression used
images.

```rust, ignore
-let images = items // take items Vec<MNISTItem>
+let images = items // take items Vec<MnistItem>
.iter() // create an iterator over it
.map(|item| Data::<f32, 2>::from(item.image)) // for each item, convert the image to float32 data struct
.map(|data| Tensor::<B, 2>::from_data(data.convert(), &self.device)) // for each data struct, create a tensor on the device
@@ -115,8 +115,8 @@ Book.

</details><br>

-In the previous example, we implement the `Batcher` trait with a list of `MNISTItem` as input and a
-single `MNISTBatch` as output. The batch contains the images in the form of a 3D tensor, along with
+In the previous example, we implement the `Batcher` trait with a list of `MnistItem` as input and a
+single `MnistBatch` as output. The batch contains the images in the form of a 3D tensor, along with
a targets tensor that contains the indexes of the correct digit class. The first step is to parse
the image array into a `Data` struct. Burn provides the `Data` struct to encapsulate tensor storage
information without being specific for a backend. When creating a tensor from data, we often need to
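
To make the batching pipeline from `data.md` above concrete, here is a small sketch of the same map-and-concatenate idea. The `FakeItem` type and the 2×2 image shape are invented for illustration; the guide itself uses `MnistItem` and 28×28 MNIST images with extra normalization.

```rust, ignore
use burn::tensor::{backend::Backend, Data, Tensor};

// Hypothetical stand-in for `MnistItem`, used only for this sketch.
struct FakeItem {
    image: [[f32; 2]; 2],
}

// Mirrors the `MnistBatcher` logic shown above (minus MNIST normalization):
// convert each item to a tensor, add a batch dimension, then concatenate.
fn batch_images<B: Backend>(items: Vec<FakeItem>, device: &B::Device) -> Tensor<B, 3> {
    let images = items
        .iter()
        .map(|item| Data::<f32, 2>::from(item.image))
        .map(|data| Tensor::<B, 2>::from_data(data.convert(), device))
        .map(|tensor| tensor.reshape([1, 2, 2])) // leading batch dimension
        .collect();

    Tensor::cat(images, 0).to_device(device)
}
```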
8 changes: 4 additions & 4 deletions burn-book/src/basic-workflow/inference.md
@@ -16,7 +16,7 @@ impl ModelConfig {
conv1: Conv2dConfig::new([1, 8], [3, 3]).init_with(record.conv1),
conv2: Conv2dConfig::new([8, 16], [3, 3]).init_with(record.conv2),
pool: AdaptiveAvgPool2dConfig::new([8, 8]).init(),
-activation: ReLU::new(),
+activation: Relu::new(),
linear1: LinearConfig::new(16 * 8 * 8, self.hidden_size).init_with(record.linear1),
linear2: LinearConfig::new(self.hidden_size, self.num_classes)
.init_with(record.linear2),
@@ -33,7 +33,7 @@ manually. Everything is validated when loading the model with the record.
Now let's create a simple `infer` method in a new file `src/inference.rs` which we will use to load our trained model.

```rust , ignore
-pub fn infer<B: Backend>(artifact_dir: &str, device: B::Device, item: MNISTItem) {
+pub fn infer<B: Backend>(artifact_dir: &str, device: B::Device, item: MnistItem) {
let config = TrainingConfig::load(format!("{artifact_dir}/config.json"))
.expect("Config should exist for the model");
let record = CompactRecorder::new()
@@ -43,7 +43,7 @@ pub fn infer<B: Backend>(artifact_dir: &str, device: B::Device, item: MNISTItem)
let model = config.model.init_with::<B>(record);

let label = item.label;
-let batcher = MNISTBatcher::new(device);
+let batcher = MnistBatcher::new(device);
let batch = batcher.batch(vec![item]);
let output = model.forward(batch.images);
let predicted = output.argmax(1).flatten::<1>(0, 1).into_scalar();
@@ -56,6 +56,6 @@ The first step is to load the configuration of the training to fetch the correct
configuration. Then we can fetch the record using the same recorder as we used during training.
Finally we can init the model with the configuration and the record before sending it to the wanted
device for inference. For simplicity we can use the same batcher used during the training to pass
-from a MNISTItem to a tensor.
+from a MnistItem to a tensor.

By running the infer function, you should see the predictions of your model!
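
For context, this is roughly how the renamed pieces fit together when calling `infer` from a binary. The backend choice, device, artifact directory, module path (`guide::inference`), and sample index are illustrative assumptions rather than part of this diff:

```rust, ignore
use burn::backend::Wgpu;
use burn::data::dataset::Dataset;

fn main() {
    // Illustrative setup only: pick the backend and device that match your project.
    type MyBackend = Wgpu;
    let device = burn::backend::wgpu::WgpuDevice::default();

    // Grab one MnistItem from the test split and run it through the trained model.
    let item = burn::data::dataset::vision::MnistDataset::test()
        .get(42)
        .unwrap();

    guide::inference::infer::<MyBackend>("/tmp/guide", device, item);
}
```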
8 changes: 4 additions & 4 deletions burn-book/src/basic-workflow/model.md
@@ -35,7 +35,7 @@ use burn::{
nn::{
conv::{Conv2d, Conv2dConfig},
pool::{AdaptiveAvgPool2d, AdaptiveAvgPool2dConfig},
-Dropout, DropoutConfig, Linear, LinearConfig, ReLU,
+Dropout, DropoutConfig, Linear, LinearConfig, Relu,
},
tensor::{backend::Backend, Tensor},
};
@@ -48,7 +48,7 @@ pub struct Model<B: Backend> {
dropout: Dropout,
linear1: Linear<B>,
linear2: Linear<B>,
-activation: ReLU,
+activation: Relu,
}
```

@@ -98,7 +98,7 @@ There are two major things going on in this code sample.
pub struct MyCustomModule<B: Backend> {
linear1: Linear<B>,
linear2: Linear<B>,
-activation: ReLU,
+activation: Relu,
}
```

@@ -178,7 +178,7 @@ impl ModelConfig {
conv1: Conv2dConfig::new([1, 8], [3, 3]).init(device),
conv2: Conv2dConfig::new([8, 16], [3, 3]).init(device),
pool: AdaptiveAvgPool2dConfig::new([8, 8]).init(),
-activation: ReLU::new(),
+activation: Relu::new(),
linear1: LinearConfig::new(16 * 8 * 8, self.hidden_size).init(device),
linear2: LinearConfig::new(self.hidden_size, self.num_classes).init(device),
dropout: DropoutConfig::new(self.dropout).init(),
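
As a usage note for the config pattern above: the model is normally built from the `ModelConfig` generated by `#[derive(Config)]`. A hedged sketch, where the class count, hidden size, and backend are illustrative:

```rust, ignore
use burn::backend::Wgpu;

fn build_model() {
    let device = burn::backend::wgpu::WgpuDevice::default();

    // `ModelConfig::new(num_classes, hidden_size)` is generated by `#[derive(Config)]`;
    // the concrete values here are placeholders.
    let model = ModelConfig::new(10, 512).init::<Wgpu>(&device);

    // The derived `Module` implementation lets you print the architecture.
    println!("{}", model);
}
```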
22 changes: 11 additions & 11 deletions burn-book/src/basic-workflow/training.md
@@ -43,23 +43,23 @@ Moving forward, we will proceed with the implementation of both the training and
for our model.

```rust , ignore
-impl<B: AutodiffBackend> TrainStep<MNISTBatch<B>, ClassificationOutput<B>> for Model<B> {
-fn step(&self, batch: MNISTBatch<B>) -> TrainOutput<ClassificationOutput<B>> {
+impl<B: AutodiffBackend> TrainStep<MnistBatch<B>, ClassificationOutput<B>> for Model<B> {
+fn step(&self, batch: MnistBatch<B>) -> TrainOutput<ClassificationOutput<B>> {
let item = self.forward_classification(batch.images, batch.targets);

TrainOutput::new(self, item.loss.backward(), item)
}
}

-impl<B: Backend> ValidStep<MNISTBatch<B>, ClassificationOutput<B>> for Model<B> {
-fn step(&self, batch: MNISTBatch<B>) -> ClassificationOutput<B> {
+impl<B: Backend> ValidStep<MnistBatch<B>, ClassificationOutput<B>> for Model<B> {
+fn step(&self, batch: MnistBatch<B>) -> ClassificationOutput<B> {
self.forward_classification(batch.images, batch.targets)
}
}
```

Here we define the input and output types as generic arguments in the `TrainStep` and `ValidStep`.
-We will call them `MNISTBatch` and `ClassificationOutput`. In the training step, the computation of
+We will call them `MnistBatch` and `ClassificationOutput`. In the training step, the computation of
gradients is straightforward, necessitating a simple invocation of `backward()` on the loss. Note
that contrary to PyTorch, gradients are not stored alongside each tensor parameter, but are rather
returned by the backward pass, as such: `let gradients = loss.backward();`. The gradient of a
@@ -81,8 +81,8 @@ which is generic over the `Backend` trait as has been covered before. These trai
`burn::train` and define a common `step` method that should be implemented for all structs. Since
the trait is generic over the input and output types, the trait implementation must specify the
concrete types used. This is where the additional type constraints appear
-`<MNISTBatch<B>, ClassificationOutput<B>>`. As we saw previously, the concrete input type for the
-batch is `MNISTBatch`, and the output of the forward pass is `ClassificationOutput`. The `step`
+`<MnistBatch<B>, ClassificationOutput<B>>`. As we saw previously, the concrete input type for the
+batch is `MnistBatch`, and the output of the forward pass is `ClassificationOutput`. The `step`
method signature matches the concrete input and output types.

For more details specific to constraints on generic types when defining methods, take a look at
@@ -118,20 +118,20 @@ pub fn train<B: AutodiffBackend>(artifact_dir: &str, config: TrainingConfig, dev

B::seed(config.seed);

-let batcher_train = MNISTBatcher::<B>::new(device.clone());
-let batcher_valid = MNISTBatcher::<B::InnerBackend>::new(device.clone());
+let batcher_train = MnistBatcher::<B>::new(device.clone());
+let batcher_valid = MnistBatcher::<B::InnerBackend>::new(device.clone());

let dataloader_train = DataLoaderBuilder::new(batcher_train)
.batch_size(config.batch_size)
.shuffle(config.seed)
.num_workers(config.num_workers)
-.build(MNISTDataset::train());
+.build(MnistDataset::train());

let dataloader_test = DataLoaderBuilder::new(batcher_valid)
.batch_size(config.batch_size)
.shuffle(config.seed)
.num_workers(config.num_workers)
-.build(MNISTDataset::test());
+.build(MnistDataset::test());

let learner = LearnerBuilder::new(artifact_dir)
.metric_train_numeric(AccuracyMetric::new())
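
Both `step` implementations above delegate to a `forward_classification` helper that is not part of this diff. For orientation, a hedged reconstruction based on the guide follows; the exact `CrossEntropyLoss` constructor may differ between Burn versions:

```rust, ignore
impl<B: Backend> Model<B> {
    pub fn forward_classification(
        &self,
        images: Tensor<B, 3>,
        targets: Tensor<B, 1, Int>,
    ) -> ClassificationOutput<B> {
        let output = self.forward(images);
        // Assumed loss construction; check the training chapter for the exact API.
        let loss =
            CrossEntropyLoss::new(None, &output.device()).forward(output.clone(), targets.clone());

        ClassificationOutput::new(loss, output, targets)
    }
}
```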
2 changes: 1 addition & 1 deletion burn-book/src/building-blocks/module.md
@@ -160,4 +160,4 @@ Burn comes with built-in modules that you can use to build your own modules.
| Burn API | PyTorch Equivalent |
| ------------------ | --------------------- |
| `CrossEntropyLoss` | `nn.CrossEntropyLoss` |
-| `MSELoss` | `nn.MSELoss` |
+| `MseLoss` | `nn.MSELoss` |
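
Since the table above pairs `MseLoss` with PyTorch's `nn.MSELoss`, here is a hedged usage sketch of the renamed loss, mirroring the test shown further down in `burn-core/src/nn/loss/mse.rs` (the values and reduction mode are illustrative):

```rust, ignore
use burn::nn::loss::{MseLoss, Reduction};
use burn::tensor::{backend::Backend, Data, Tensor};

fn mse_example<B: Backend>(device: &B::Device) -> Tensor<B, 1> {
    let predictions =
        Tensor::<B, 2>::from_data(Data::from([[1.0f32, 2.0], [3.0, 4.0]]).convert(), device);
    let targets =
        Tensor::<B, 2>::from_data(Data::from([[2.0f32, 1.0], [3.0, 2.0]]).convert(), device);

    // Mean-squared error with automatic reduction, analogous to nn.MSELoss's default.
    MseLoss::new().forward(predictions, targets, Reduction::Auto)
}
```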
14 changes: 7 additions & 7 deletions burn-book/src/custom-training-loop.md
@@ -40,21 +40,21 @@ pub fn run<B: AutodiffBackend>(device: &B::Device) {
let mut optim = config.optimizer.init();
// Create the batcher.
-let batcher_train = MNISTBatcher::<B>::new(device.clone());
-let batcher_valid = MNISTBatcher::<B::InnerBackend>::new(device.clone());
+let batcher_train = MnistBatcher::<B>::new(device.clone());
+let batcher_valid = MnistBatcher::<B::InnerBackend>::new(device.clone());
// Create the dataloaders.
let dataloader_train = DataLoaderBuilder::new(batcher_train)
.batch_size(config.batch_size)
.shuffle(config.seed)
.num_workers(config.num_workers)
-.build(MNISTDataset::train());
+.build(MnistDataset::train());
let dataloader_test = DataLoaderBuilder::new(batcher_valid)
.batch_size(config.batch_size)
.shuffle(config.seed)
.num_workers(config.num_workers)
-.build(MNISTDataset::test());
+.build(MnistDataset::test());
...
}
@@ -140,7 +140,7 @@ Note that after each epoch, we include a validation loop to assess our model's p
previously unseen data. To disable gradient tracking during this validation step, we can invoke
`model.valid()`, which provides a model on the inner backend without autodiff capabilities. It's
important to emphasize that we've declared our validation batcher to be on the inner backend,
-specifically `MNISTBatcher<B::InnerBackend>`; not using `model.valid()` will result in a compilation
+specifically `MnistBatcher<B::InnerBackend>`; not using `model.valid()` will result in a compilation
error.

You can find the code above available as an
@@ -195,7 +195,7 @@ where
M: AutodiffModule<B>,
O: Optimizer<M, B>,
{
-pub fn step(&mut self, _batch: MNISTBatch<B>) {
+pub fn step(&mut self, _batch: MnistBatch<B>) {
//
}
}
@@ -214,7 +214,7 @@ the backend and add your trait constraint within its definition:
```rust, ignore
#[allow(dead_code)]
impl<M, O> Learner2<M, O> {
-pub fn step<B: AutodiffBackend>(&mut self, _batch: MNISTBatch<B>)
+pub fn step<B: AutodiffBackend>(&mut self, _batch: MnistBatch<B>)
where
B: AutodiffBackend,
M: AutodiffModule<B>,
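
To illustrate the `model.valid()` point made above, a hedged sketch of the per-epoch validation pass; metric computation is elided and the variable names follow the book's custom-training-loop example:

```rust, ignore
// Obtain the model on the inner backend, i.e. without autodiff tracking.
let model_valid = model.valid();

for batch in dataloader_test.iter() {
    // `batch` is a `MnistBatch<B::InnerBackend>` because `batcher_valid`
    // was created on the inner backend.
    let output = model_valid.forward(batch.images);

    // Compute validation metrics (loss, accuracy, ...) from `output`
    // and `batch.targets` here.
}
```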
6 changes: 3 additions & 3 deletions burn-book/src/saving-and-loading.md
@@ -44,7 +44,7 @@ model definition as a simple example.
pub struct Model<B: Backend> {
linear_in: Linear<B>,
linear_out: Linear<B>,
-activation: ReLU,
+activation: Relu,
}
```

@@ -59,7 +59,7 @@ impl<B: Backend> Model<B> {
Model {
linear_in: LinearConfig::new(10, 64).init_with(record.linear_in),
linear_out: LinearConfig::new(64, 2).init_with(record.linear_out),
-activation: ReLU::new(),
+activation: Relu::new(),
}
}
@@ -70,7 +70,7 @@ impl<B: Backend> Model<B> {
Model {
linear_in: l1,
linear_out: l2,
-activation: ReLU::new(),
+activation: Relu::new(),
}
}
}
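
As a usage note for the `init_with(record)` pattern above, a hedged sketch of the save/load round trip. The recorder choice, file path, and constructor name are assumptions based on this chapter and may vary across Burn versions:

```rust, ignore
// Persist the trained model's record with a file recorder (assumed API sketch).
model
    .clone()
    .save_file("/tmp/my_model", &CompactRecorder::new())
    .expect("Model record should be saved");

// Later: load the record back and rebuild the model from it.
let record = CompactRecorder::new()
    .load("/tmp/my_model".into())
    .expect("Model record should be loaded");

let model: Model<MyBackend> = Model::init_with(record);
```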
8 changes: 4 additions & 4 deletions burn-core/src/nn/loss/mse.rs
@@ -5,17 +5,17 @@ use burn_tensor::{backend::Backend, Tensor};

/// Calculate the mean squared error loss from the input logits and the targets.
#[derive(Clone, Debug)]
-pub struct MSELoss<B: Backend> {
+pub struct MseLoss<B: Backend> {
backend: PhantomData<B>,
}

-impl<B: Backend> Default for MSELoss<B> {
+impl<B: Backend> Default for MseLoss<B> {
fn default() -> Self {
Self::new()
}
}

-impl<B: Backend> MSELoss<B> {
+impl<B: Backend> MseLoss<B> {
/// Create the criterion.
pub fn new() -> Self {
Self {
@@ -67,7 +67,7 @@ mod tests {
let targets =
Tensor::<TestBackend, 2>::from_data(Data::from([[2.0, 1.0], [3.0, 2.0]]), &device);

-let mse = MSELoss::new();
+let mse = MseLoss::new();
let loss_no_reduction = mse.forward_no_reduction(logits.clone(), targets.clone());
let loss = mse.forward(logits.clone(), targets.clone(), Reduction::Auto);
let loss_sum = mse.forward(logits, targets, Reduction::Sum);
4 changes: 2 additions & 2 deletions burn-core/src/nn/relu.rs
@@ -8,9 +8,9 @@ use crate::tensor::Tensor;
///
/// `y = max(0, x)`
#[derive(Module, Clone, Debug, Default)]
-pub struct ReLU {}
+pub struct Relu {}

-impl ReLU {
+impl Relu {
/// Create the module.
pub fn new() -> Self {
Self {}
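
Finally, to see the renamed `Relu` module in action, a small hedged sketch using the `NdArray` backend; the backend and input values are illustrative, and `forward` applies `max(0, x)` element-wise as documented above:

```rust, ignore
use burn::backend::NdArray;
use burn::nn::Relu;
use burn::tensor::{Data, Tensor};

fn main() {
    let device = Default::default();
    let input =
        Tensor::<NdArray, 2>::from_data(Data::from([[-1.0, 0.5], [2.0, -3.0]]), &device);

    // `Relu` is stateless, so constructing it is cheap; negatives are clamped to zero.
    let output = Relu::new().forward(input);
    println!("{}", output);
}
```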
