Commit 1dc22d5 (parent: 6e42ae0)
Showing 4 changed files with 151 additions and 1 deletion.
@@ -2,4 +2,5 @@
 .jupyter
 Manifest.toml
 _build
-notebooks/data/
+notebooks/data/
+.vscode
@@ -0,0 +1,147 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "julia"
    }
   },
   "source": [
    "# GPUs\n",
    "\n",
    "## Computing Devices\n",
    "\n",
    "We can query the number of available GPUs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "CUDA.DeviceIterator() for 1 devices:\n",
       "0. NVIDIA GeForce GTX 970"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "using CUDA, Flux\n",
    "\n",
    "CUDA.devices()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Arrays and GPUs\n",
    "\n",
    "### Storage on the GPU\n",
    "\n",
    "There are several ways to store an array on the GPU. For example, we can copy an array to the GPU as we create it with `cu`. Below, we create the array `X` on the first GPU. An array created on a GPU only consumes that GPU's memory. We can use the nvidia-smi command to view GPU memory usage. In general, we need to make sure that we do not create data that exceeds the GPU memory limit."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "3×2 CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}:\n",
       " 1.0  1.0\n",
       " 1.0  1.0\n",
       " 1.0  1.0"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "X = cu(ones(3, 2))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Neural Networks and GPUs\n",
    "\n",
    "Similarly, a neural network model can be placed on a device. The following code moves the model parameters to the GPU."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Chain(\n",
       "  Dense(3 => 1), \u001b[90m# 4 parameters\u001b[39m\n",
       ") "
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "model = Chain(Dense(3 => 1))\n",
    "model = model |> gpu"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example, when the input is an array stored on the GPU, the model computes the result on the same GPU."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "1×2 CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}:\n",
       " 0.0954295  0.0954295"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "model(X)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Julia 1.10.3",
   "language": "julia",
   "name": "julia-1.10"
  },
  "language_info": {
   "file_extension": ".jl",
   "mimetype": "application/julia",
   "name": "julia",
   "version": "1.10.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
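Taken together, the cells above form a short end-to-end workflow: list the available devices, copy data to the GPU, move a model's parameters to the GPU, and run a forward pass there. The following is a minimal standalone sketch of that workflow, assuming CUDA.jl and Flux.jl are installed and at least one NVIDIA GPU is available; the `CUDA.functional()` check and the final `cpu` call are illustrative additions and are not part of the committed notebook.

```julia
# Minimal sketch of the workflow from the notebook above (assumes CUDA.jl,
# Flux.jl, and at least one NVIDIA GPU; not part of the committed file).
using CUDA, Flux

@assert CUDA.functional()        # make sure a usable CUDA GPU is present

for dev in CUDA.devices()        # list the visible GPU devices
    println(dev)
end

X = cu(ones(3, 2))               # copy a 3×2 array of ones to the current GPU

model = Chain(Dense(3 => 1))     # a single dense layer with 4 parameters
model = model |> gpu             # move the parameters to the GPU

Y = model(X)                     # forward pass; Y is a 1×2 CuArray on the same GPU
Y_host = Y |> cpu                # copy the result back to host memory if needed
```

Both `cu` and Flux's `gpu` produce `CuArray`s on the currently selected device; `gpu` is the more convenient choice for whole models because it recurses over all of the model's parameters.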