Add initial codebase created as part of Google Summer of Code 2018
kevinstadler committed Aug 15, 2018
1 parent 92f24e2 commit f8e184d
Showing 81 changed files with 5,162 additions and 3 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -0,0 +1,2 @@
/bin/
*.jar
21 changes: 18 additions & 3 deletions README.md
@@ -1,4 +1,19 @@
# processing-sound
Audio library for Processing built with JSyn
## Processing Sound library

This library replaces the previous Processing Sound library (<https://github.com/processing/processing-sound-archive>). The API is 100% compatible, so all code written against the previous code base will continue to work with the new version of the library.
The new Sound library for Processing 3 provides a simple way to work with audio. It can play, analyze, and synthesize sound. The library comes with a collection of oscillators for basic wave forms, a variety of noise generators, and effects and filters to alter sound files and other generated sounds. The syntax is minimal to make it easy for beginners who want a straightforward way to add some sound to their Processing sketches!
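
For a quick taste of the syntax, here is a minimal sketch along the lines of the bundled examples (assuming the `SineOsc` oscillator class documented in the online reference) that plays a sine tone and maps its frequency to the mouse position:

```java
import processing.sound.*;

SineOsc sine;

void setup() {
  size(640, 360);
  // Create a sine oscillator and start playing it at 440 Hz
  sine = new SineOsc(this);
  sine.freq(440);
  sine.play();
}

void draw() {
  background(255);
  // Map the horizontal mouse position to a frequency between 80 and 1000 Hz
  sine.freq(map(mouseX, 0, width, 80, 1000));
}
```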

### How to use

The easiest way to install the Sound library is through Processing's Contribution Manager. The library comes with many example sketches; the full online reference can be found [here](https://www.processing.org/reference/libraries/sound/). Please report bugs [here](https://github.com/processing/processing-sound/issues).
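
Once the library is installed, playing back a sample only takes a couple of lines. The sketch below is a minimal example modelled on the bundled examples; the file name `beat.aiff` is just a placeholder for any sound file placed in the sketch's `data/` folder:

```java
import processing.sound.*;

SoundFile sample;

void setup() {
  size(640, 360);
  // Load a sound file from the sketch's data/ folder and loop it
  sample = new SoundFile(this, "beat.aiff");
  sample.loop();
}

void draw() {
  background(255);
}
```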

### How to build

1. `git clone git@github.com:processing/processing-sound.git`
2. Copy (or soft-link) your Processing installation's `core.jar` into the `library/` folder (and, optionally, also your Android SDK's `android.jar`, API level 26 or higher). Other dependencies (in particular Phil Burk's [JSyn](http://www.softsynth.com/jsyn/) engine, on which this library is based) are downloaded automatically.
3. `ant dist` (or, alternatively, run `build.xml` from within Eclipse)

The resulting `processing-sound.zip` can be extracted into your Processing installation's `libraries/` folder.

### License

LGPL v2.1
78 changes: 78 additions & 0 deletions build.xml
@@ -0,0 +1,78 @@
<?xml version="1.0"?>
<project name="Processing Sound Library" default="build">

<property file="./build.properties" />

<path id="classpath">
<fileset dir="..">
<!-- look for Processing's core.jar in ../processing -->
<!-- alternatively, this file can also be placed in -->
<!-- the local library folder -->
<include name="processing/core/library/core.jar" />
</fileset>
<fileset dir="library">
<include name="*.jar" />
</fileset>
</path>

<target name="clean" description="Clean the build directories">
<delete dir="bin" />
<delete file="library/sound.jar" />
</target>

<target name="checkandroid">
<available file="library/android.jar" property="hasandroid" />
</target>

<target name="android-deps" unless="hasandroid" description="Download an android.jar">
<!-- this part of the Android SDK is required to build JSynAndroidAudioDeviceManager -->
<!-- preferably you should soft-link or copy the android.jar of your locally
installed SDK into this project's library/ directory -->
<get src="https://github.com/marianadangelo/android-platforms/raw/master/android-26/android.jar" dest="library/" usetimestamp="true" />
</target>

<target name="deps" depends="checkandroid,android-deps" description="Get library dependencies">
<mkdir dir="library" />
<get src="http://www.softsynth.com/jsyn/developers/archives/jsyn-20171016.jar" dest="library/" usetimestamp="true" />
<get src="https://github.com/kevinstadler/JavaMP3/releases/download/v1.0.2/javamp3-1.0.2.jar" dest="library/" usetimestamp="true" />
</target>

<target name="compile" depends="deps" description="Compile sources">
<mkdir dir="bin" />
<javac source="1.8" target="1.8" srcdir="src" destdir="bin" encoding="UTF-8" includeAntRuntime="false" nowarn="true">
<classpath refid="classpath" />
</javac>
</target>

<target name="javadoc">
<javadoc bottom="Processing Sound" destdir="docs" verbose="false" doctitle="Javadocs: Processing Sound" public="true" windowtitle="Javadocs: Processing Sound" additionalparam="-notimestamp">
<fileset dir="src" defaultexcludes="yes">
<include name="**/*" />
</fileset>
<classpath refid="classpath" />
</javadoc>
</target>

<target name="build" depends="clean,compile" description="Build Sound library jar">
<jar destfile="library/sound.jar">
<fileset dir="bin" />
</jar>
</target>

<target name="dist" depends="build,javadoc">
<zip destfile="../processing-sound.zip">
<zipfileset dir="." prefix="sound">
<exclude name=".*" />
<exclude name="build.xml" />
<exclude name="bin/**" />
<exclude name="docs/**" />
<exclude name="examples/**/application.*/**" />
<exclude name="library/android.jar" />
<exclude name="library/core.jar" />
<exclude name="src/**" />
</zipfileset>
</zip>
<!--copy file="library.properties"
toFile="../processing-sound.txt" /-->
</target>
</project>
64 changes: 64 additions & 0 deletions examples/Analysis/FFTSpectrum/FFTSpectrum.pde
@@ -0,0 +1,64 @@
/**
* This sketch shows how to use the FFT class to analyze a stream
* of sound. Change the number of bands to get more spectral bands
* (at the expense of more coarse-grained time resolution of the spectrum).
*/

import processing.sound.*;

// Declare the sound source and FFT analyzer variables
SoundFile sample;
FFT fft;

// Define how many FFT bands to use (this needs to be a power of two)
int bands = 128;

// Define a smoothing factor which determines how much the spectra of consecutive
// points in time are combined to create a smoother visualisation of the spectrum.
// A smoothing factor of 1.0 means no smoothing (only the data from the newest
// analysis is rendered); decrease the factor towards 0.0 to have the visualisation
// update more slowly, which is easier on the eye.
float smoothingFactor = 0.2;

// Create an array to store the smoothed spectrum data in
float[] sum = new float[bands];

// Variables for drawing the spectrum:
// Declare a scaling factor for adjusting the height of the rectangles
int scale = 5;
// Declare a drawing variable for calculating the width of the rects
float barWidth;

public void setup() {
size(640, 360);
background(255);

// Calculate the width of the rects depending on how many bands we have
barWidth = width/float(bands);

// Load and play a soundfile and loop it.
sample = new SoundFile(this, "beat.aiff");
sample.loop();

// Create the FFT analyzer and connect the playing soundfile to it.
fft = new FFT(this, bands);
fft.input(sample);
}

public void draw() {
// Set background color, noStroke and fill color
background(125, 255, 125);
fill(255, 0, 150);
noStroke();

// Perform the analysis
fft.analyze();

for (int i = 0; i < bands; i++) {
// Smooth the FFT spectrum data by smoothing factor
sum[i] += (fft.spectrum[i] - sum[i]) * smoothingFactor;

// Draw the rectangles, adjust their height using the scale factor
rect(i*barWidth, height, barWidth, -sum[i]*height*scale);
}
}
Binary file added examples/Analysis/FFTSpectrum/data/beat.aiff
Binary file not shown.
49 changes: 49 additions & 0 deletions examples/Analysis/PeakAmplitude/PeakAmplitude.pde
@@ -0,0 +1,49 @@
/**
* This sketch shows how to use the Amplitude class to analyze the changing
* "loudness" of a stream of sound. In this case an audio sample is analyzed.
*/

import processing.sound.*;

// Declare the processing sound variables
SoundFile sample;
Amplitude rms;

// Declare a smoothing factor to smooth out sudden changes in amplitude.
// With a smoothing factor of 1, only the last measured amplitude is used for the
// visualisation, which can lead to very abrupt changes. As you decrease the
// smoothing factor towards 0, the measured amplitudes are averaged across frames,
// leading to more pleasant, gradual changes.
float smoothingFactor = 0.25;

// Used for storing the smoothed amplitude value
float sum;

public void setup() {
size(640, 360);

// Load and play a soundfile and loop it
sample = new SoundFile(this, "beat.aiff");
sample.loop();

// Create and patch the rms tracker
rms = new Amplitude(this);
rms.input(sample);
}

public void draw() {
// Set background color, noStroke and fill color
background(125, 255, 125);
noStroke();
fill(255, 0, 150);

// smooth the rms data by smoothing factor
sum += (rms.analyze() - sum) * smoothingFactor;

// rms.analyze() returns a value between 0 and 1. It is
// scaled to height/2 and then multiplied by a fixed scale factor.
float rms_scaled = sum * (height/2) * 5;

// We draw a circle whose size is coupled to the audio analysis
ellipse(width/2, height/2, rms_scaled, rms_scaled);
}
Binary file added examples/Analysis/PeakAmplitude/data/beat.aiff
Binary file not shown.
40 changes: 40 additions & 0 deletions examples/Effects/BandPassFilter/BandPassFilter.pde
@@ -0,0 +1,40 @@
/**
* In this example, a WhiteNoise generator (equal amount of noise at all frequencies) is
* passed through a BandPass filter. You can control both the central frequency
* (left/right) as well as the bandwidth of the filter (up/down) with the mouse. The
* position and size of the circle indicates how much of the noise's spectrum passes
* through the filter, and at what frequency range.
*/

import processing.sound.*;

WhiteNoise noise;
BandPass filter;

void setup() {
size(640, 360);

// Create the noise generator + Filter
noise = new WhiteNoise(this);
filter = new BandPass(this);

noise.play(0.5);
filter.process(noise);
}

void draw() {
// Map the left/right mouse position to a cutoff frequency between 20 and 10000 Hz
float frequency = map(mouseX, 0, width, 20, 10000);
// And the vertical mouse position to the width of the band to be passed through
float bandwidth = map(mouseY, 0, height, 1000, 100);

filter.freq(frequency);
filter.bw(bandwidth);

// Draw a circle indicating the position + width of the frequency window
// that is allowed to pass through
background(125, 255, 125);
noStroke();
fill(255, 0, 150);
ellipse(mouseX, height, 2*(height - mouseY), 2*(height - mouseY));
}
33 changes: 33 additions & 0 deletions examples/Effects/HighPassFilter/HighPassFilter.pde
@@ -0,0 +1,33 @@
/**
* This is a simple WhiteNoise generator, run through a HighPass filter which only lets
* the higher frequency components of the noise through. The cutoff frequency of the
* filter can be controlled through the left/right position of the mouse.
*/

import processing.sound.*;

WhiteNoise noise;
HighPass highPass;

void setup() {
size(640, 360);

// Create the noise generator + filter
noise = new WhiteNoise(this);
highPass = new HighPass(this);

noise.play(0.5);
highPass.process(noise);
}

void draw() {
// Map the left/right mouse position to a cutoff frequency between 10 and 15000 Hz
float cutoff = map(mouseX, 0, width, 10, 15000);
highPass.freq(cutoff);

// Draw a circle indicating the position + width of the frequencies passed through
background(125, 255, 125);
noStroke();
fill(255, 0, 150);
ellipse(width, height, 2*(width - mouseX), 2*(width - mouseX));
}
33 changes: 33 additions & 0 deletions examples/Effects/LowPassFilter/LowPassFilter.pde
@@ -0,0 +1,33 @@
/**
* This is a simple WhiteNoise generator, run through a LowPass filter which only lets
* the lower frequency components of the noise through. The cutoff frequency of the
* filter can be controlled through the left/right position of the mouse.
*/

import processing.sound.*;

WhiteNoise noise;
LowPass lowPass;

void setup() {
size(640, 360);

// Create the noise generator + filter
noise = new WhiteNoise(this);
lowPass = new LowPass(this);

noise.play(0.5);
lowPass.process(noise);
}

void draw() {
// Map the left/right mouse position to a cutoff frequency between 20 and 10000 Hz
float cutoff = map(mouseX, 0, width, 20, 10000);
lowPass.freq(cutoff);

// Draw a circle indicating the position + width of the frequencies passed through
background(125, 255, 125);
noStroke();
fill(255, 0, 150);
ellipse(0, height, 2*mouseX, 2*mouseX);
}
46 changes: 46 additions & 0 deletions examples/Effects/Reverberation/Reverberation.pde
@@ -0,0 +1,46 @@
/**
* Play a sound sample and apply a reverb filter to it, changing the effect
* parameters based on the mouse position.
*
* With the mouse pointer at the top of the sketch you'll only hear the "dry"
* (unprocessed) signal, move the mouse downwards to add more of the "wet"
* reverb signal to the mix. The left-right position of the mouse controls the
* "room size" and damping of the effect, with a smaller room (and more refraction)
* at the left, and a bigger (but more dampened) room towards the right.
*/

import processing.sound.*;

SoundFile soundfile;
Reverb reverb;

void setup() {
size(640, 360);
background(255);

// Load a soundfile
soundfile = new SoundFile(this, "vibraphon.aiff");

// Create the effect object
reverb = new Reverb(this);

// Play the file in a loop
soundfile.loop();

// Set soundfile as input to the reverb
reverb.process(soundfile);
}

void draw() {
// Change the roomsize of the reverb
float roomSize = map(mouseX, 0, width, 0, 1.0);
reverb.room(roomSize);

// Change the high-frequency damping parameter
float damping = map(mouseX, 0, width, 0, 1.0);
reverb.damp(damping);

// Change the wet/dry relation of the effect
float effectStrength = map(mouseY, 0, height, 0, 1.0);
reverb.wet(effectStrength);
}
Binary file added examples/Effects/Reverberation/data/vibraphon.aiff
Binary file not shown.