Chroma and Image Distortion

I wrote this shader for Processing a while back. It warps an image with domain-warped noise and splits the red and blue channels apart, which gives anything it is applied to a sort of drunken vibe.

Fragment Shader

precision highp float;

uniform sampler2D texture;
uniform float noiseOffset;
uniform float noiseMultiplier;
uniform float noiseAddMultiplier;
uniform float colorOffset;
uniform float colorOffsetMod;
uniform float positionMixAlpha;

varying vec4 vertColor;
varying vec4 vertTexCoord;
varying vec4 pos;

//https://gist.github.com/patriciogonzalezvivo/670c22f3966e662d2f83
float rand(vec2 n) { 
    return fract(sin(dot(n, vec2(12.9898, 4.1414))) * 43758.5453);
}

// 2D value noise with smoothstep-style interpolation
float noise(vec2 p){
    vec2 ip = floor(p);
    vec2 u = fract(p);
    u = u*u*(3.0-2.0*u);

    float res = mix(
        mix(rand(ip),rand(ip+vec2(1.0,0.0)),u.x),
        mix(rand(ip+vec2(0.0,1.0)),rand(ip+vec2(1.0,1.0)),u.x),u.y);
    return res*res;
}

//  https://www.shadertoy.com/view/MdX3Rr
//  by inigo quilez
//
const mat2 m2 = mat2(0.8,-0.6,0.6,0.8);
// four octaves of value noise, rotating and scaling the domain each octave
float fbm( in vec2 p ){
    float f = 0.0;
    f += 0.5000*noise( p ); p = m2*p*2.02;
    f += 0.2500*noise( p ); p = m2*p*2.03;
    f += 0.1250*noise( p ); p = m2*p*2.01;
    f += 0.0625*noise( p );

    return f/0.9375; // normalise by the sum of the octave weights
}


void main() {
    vec2 p = vertTexCoord.xy;
    // scale and offset the texture coordinates before sampling the noise
    vec2 modNoiseP = p * noiseMultiplier + noiseOffset;
    // domain warping: feed fbm back into itself for extra distortion
    float noiseF = fbm(modNoiseP) + fbm(modNoiseP + noiseAddMultiplier * fbm(modNoiseP));
    float cMod = noiseF * colorOffsetMod;
    // sample red and blue at horizontally offset positions to split the colour channels
    vec4 red   = texture2D(texture, mix(p, vec2(noiseF) - vec2(colorOffset * cMod, 0.), positionMixAlpha));
    vec4 blue  = texture2D(texture, mix(p, vec2(noiseF) + vec2(colorOffset * cMod, 0.), positionMixAlpha));
    vec4 green = texture2D(texture, mix(p, vec2(noiseF), positionMixAlpha));
    gl_FragColor = vec4(red.r, green.g, blue.b, 1.);
}

Vertex Shader

#define PROCESSING_TEXTURE_SHADER

uniform mat4 transform;
uniform mat4 texMatrix;

attribute vec4 vertex;
attribute vec4 color;
attribute vec2 texCoord;

varying vec4 vertColor;
varying vec4 vertTexCoord;
varying vec4 pos;

void main() {
    gl_Position = transform * vertex;
    pos = transform * vertex;
    vertColor = color;
    vertTexCoord = texMatrix * vec4(texCoord, 1.0, 1.0);
}
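
To use the pair in a Processing sketch, save them in the sketch's data folder (as, say, distort.frag and distort.vert; the file names here are just an assumption) and load them with loadShader(). A minimal sketch might look like this, with illustrative uniform values:

PShader distort;
PImage img;

void setup() {
  size(640, 480, P2D);
  img = loadImage("input.jpg"); // any image in the data folder
  distort = loadShader("distort.frag", "distort.vert");
}

void draw() {
  // drift the noise over time for an animated wobble
  distort.set("noiseOffset", frameCount * 0.01);
  distort.set("noiseMultiplier", 4.0);
  distort.set("noiseAddMultiplier", 0.5);
  distort.set("colorOffset", 0.02);
  distort.set("colorOffsetMod", 1.0);
  distort.set("positionMixAlpha", 0.15);
  shader(distort);
  image(img, 0, 0, width, height);
}

Pushing positionMixAlpha towards 1.0 replaces the image with pure noise, while values near 0.0 leave it almost untouched, so small values tend to work best.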

Image to WAV and back

After writing my last blog post about databending in Java I realized that Audacity has a batch editing tool called Chains, which lets you apply a set of effects to many audio files at once.

So I wrote a Java helper class for manipulating the pixel data in images, and for writing that pixel data to a WAV file and loading it back. With it you can batch convert all the frames of a video to WAV files, apply a Chain to them in Audacity, and then batch convert them all back to images. This is far better than manipulating the pixel data directly in Java, as it means you don't have to write your own code for the audio effects.

Audacity Chains Info: here

Code

package com.augustuspash.databend;

import java.awt.Point;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.awt.image.Raster;
import java.awt.image.SampleModel;
import java.io.File;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

import javax.imageio.ImageIO;


public class Databend {

	// 40-byte WAV header template: "RIFF" + chunk size (patched on write), "WAVE",
	// a 16-byte PCM fmt chunk (mono, 44100 Hz, 16-bit), then the "data" chunk id
	static byte[] headerWAV = new byte[] { 0x52, 0x49, 0x46, 0x46, 0x00, 0x00, 0x00, 0x00, 0x57, 0x41, 0x56, 
			0x45, 0x66, 0x6D, 0x74, 0x20, 0x10, 0x00, 0x00, 0x00, 0x01, 0x00, 0x01, 0x00, 0x44, (byte) 0xAC,
			0x00, 0x00, (byte) 0x88, 0x58, 0x01, 0x00, 0x02, 0x00, 0x10, 0x00, 0x64, 0x61, 0x74, 0x61 };
	
	public static void batchImageToWave(int start, int stop, String imagePath, String outputPath) throws IOException {
		for (int i = start; i < stop; i++) {
			imageToWave(String.format(imagePath, i), String.format(outputPath, i));
		}
	}
	
	public static void batchWaveToImage(int start, int stop, String imagePath, String wavePath, String outputPath, String type) throws IOException {
		for (int i = start; i < stop; i++) {
			waveToImage(String.format(imagePath, i), String.format(wavePath, i), String.format(outputPath, i), type);
		}
	}
	
	public static void imageToWave(String imagePath, String outputPath) throws IOException {
		BufferedImage img = loadImage(imagePath);
		byte[] origData = getPixelBytes(img);
		writeBytesToWave(outputPath, origData);
	}
	
	public static void waveToImage(String origImagePath, String wavePath, String outputPath, String type) throws IOException {
		BufferedImage img = loadImage(origImagePath);
		byte[] origData = getPixelBytes(img);
		byte[] reloadData = getWavBytes(wavePath);
		BufferedImage img2 = pixelBytesToImage(img.getWidth(), img.getHeight(), img.getType(), img.getSampleModel(), origData.length, reloadData);
		saveImage(img2, outputPath, type);
	}
	
	public static void writeBytesToWave(String templatePath, byte[] data) throws IOException {
		// WAV chunk sizes are 4-byte little-endian: the data chunk size follows the
		// 40-byte header template, and the RIFF chunk size at offset 4 is patched in
		ByteBuffer out = ByteBuffer.allocate(headerWAV.length + 4 + data.length).order(ByteOrder.LITTLE_ENDIAN);
		out.put(headerWAV).putInt(data.length).put(data);
		out.putInt(4, 36 + data.length);
		Files.write(Paths.get(templatePath), out.array());
	}
	
	public static byte[] getPixelBytes(String path) throws IOException {
		BufferedImage img = ImageIO.read(new File(path));
		return ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
	}
	
	public static byte[] getPixelBytes(BufferedImage image){
		return ((DataBufferByte)image.getRaster().getDataBuffer()).getData();
	}
	
	public static BufferedImage loadImage(String path) throws IOException {
		return ImageIO.read(new File(path));
	}
	
	public static BufferedImage pixelBytesToImage(int width, int height, int type, SampleModel sampleModel, int origLength, byte[] data) {
		BufferedImage resultImg = new BufferedImage(width, height, type);
		// pad or truncate the bent data so it exactly fills the original image's pixel buffer
		if (origLength != data.length) {
			byte[] result = new byte[origLength];
			System.arraycopy(data, 0, result, 0, Math.min(origLength, data.length));
			data = result;
		}
		resultImg.setData(Raster.createRaster(sampleModel, new DataBufferByte(data, data.length), new Point()));
		return resultImg;
	}
	
	public static void saveImage(BufferedImage image, String path, String type) throws IOException {
		ImageIO.write(image, type, new File(path));
	}
	
	public static byte[] getWavBytes(String path) throws IOException {
		byte[] fileRAW = Files.readAllBytes(Paths.get(path));
		// skip the standard 44-byte WAV header; everything after it is sample data
		byte[] out = new byte[fileRAW.length - 44];
		System.arraycopy(fileRAW, 44, out, 0, out.length);
		return out;
	}
	
	public static short bytesToShort(byte[] bytes, int index, ByteOrder byteOrder) {
		ByteBuffer bb = ByteBuffer.allocate(2);
		bb.order(byteOrder);
		bb.put(bytes[index]);
		bb.put(bytes[index+1]);
		return bb.getShort(0);
	}
	
	public static byte[] shortToBytes(short s) {
		return new byte[]{(byte)((s>>8)&0xFF),(byte)(s&0xFF)};
	}
	
	public static short[] bytesToShorts(byte[] bytes, ByteOrder byteOrder) {
		ByteBuffer bb = ByteBuffer.allocate(2);
		bb.order(byteOrder);
		short[] output = new short[bytes.length/2];
		for (int i = 0; i < bytes.length - 1; i+=2) {
			bb.put(bytes[i]);
			bb.put(bytes[i+1]);
			output[i/2] = bb.getShort(0);
			bb.clear();
		}
		return output;
	}
	
	public static byte[] shortsToBytes(short[] shorts) {
		byte[] output = new byte[shorts.length * 2];
		for (int i = 0; i < shorts.length; i++) {
			byte[] tmp = new byte[]{(byte)((shorts[i]>>8)&0xFF),(byte)(shorts[i]&0xFF)};
			output[i*2] = tmp[0];
			output[i*2 + 1] = tmp[1];
		}
		return output;
	}
}

Example

//convert all frames to wav files (this will load frames named output00000.bmp, output00001.bmp, etc.)
Databend.batchImageToWave(0, 100, "Z:\\Videos\\video-Frames\\output%05d.bmp", "Z:\\Videos\\video-Frames\\wavs\\%05d.wav");
//convert the wav files processed by the Audacity Chain back to images
Databend.batchWaveToImage(0, 100, "Z:\\Videos\\video-Frames\\output%05d.bmp", "Z:\\Videos\\video-Frames\\wavs\\cleaned\\%05d.wav", "Z:\\Videos\\video-Frames\\new\\output%05d.bmp", "bmp");

Databending in Java

The above image is the result of databending a snippet of a video. It was created by extracting each frame of the video, loading it into Audacity (an audio editor) as sound, applying a normalize filter, and exporting it back to an image file. The resulting images were then converted into a GIF.

To save time I decided to see whether I could do the same thing programmatically. First the image is loaded and its pixel data is extracted as a byte array. A sound effect is then applied to the byte array, treating each byte as an audio sample. The resulting byte array is written back into an image and saved to a file. You can then repeat this for each extracted frame of a video.

Code

Loading Pixel Data

BufferedImage img = null;
try {
    img = ImageIO.read(new File("Z:\\Pictures\\image.bmp"));
} catch (IOException e) {
    throw e;
}
byte[] pixels = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();

Convert Bytes to Number

// pair up the pixel bytes into 16-bit samples (big-endian)
short[] values = new short[pixels.length/2];
ByteBuffer bb = ByteBuffer.allocate(2);
for (int i = 0; i < pixels.length - 1; i += 2) {
    bb.put(pixels[i]);
    bb.put(pixels[i+1]);
    values[i/2] = bb.getShort(0);
    bb.clear();
}

Effects

The Pitch Shift effect uses the code from here; although written in C#, it was easily converted to Java. The Normalize effect finds the maximum value of the pixels and then multiplies each value by the target maximum divided by that maximum (values[i] = (short)(values[i] * (double)targetMax/(double)max)). The Echo effect adds each value, multiplied by a decay factor, to the value a delay's length further down the array (values[i+delay] += (short)((float)values[i] * decay)). The delay in samples is the delay in seconds multiplied by the sample rate.
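
As a rough sketch, here is how those two effects might look in Java, operating on the short[] samples from above; the method names and parameters are mine, not part of the original code:

// minimal sketches of the Normalize and Echo effects; names are illustrative
static void normalize(short[] values, int targetMax) {
    int max = 0;
    for (short v : values) {
        max = Math.max(max, Math.abs((int) v)); // find the loudest sample
    }
    if (max == 0) return; // avoid dividing by zero on an all-zero frame
    for (int i = 0; i < values.length; i++) {
        values[i] = (short) (values[i] * (double) targetMax / (double) max);
    }
}

static void echo(short[] values, int delaySamples, float decay) {
    // add a decayed copy of each sample to the sample delaySamples further on
    for (int i = 0; i + delaySamples < values.length; i++) {
        values[i + delaySamples] += (short) (values[i] * decay);
    }
}

You might call these as normalize(values, 32000) or echo(values, 4410, 0.5f), where 4410 samples is a 100 ms delay at the 44.1 kHz sample rate used in the WAV header.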

Applying to Image and Writing to File

for (int i = 0; i < values.length; i++) {
    // split each 16-bit sample back into two bytes (big-endian, matching the read)
    pixels[i*2] = (byte) ((values[i] >> 8) & 0xFF);
    pixels[i*2 + 1] = (byte) (values[i] & 0xFF);
}

BufferedImage resultImg = new BufferedImage(img.getWidth(), img.getHeight(), img.getType());
resultImg.setData(Raster.createRaster(img.getSampleModel(), new DataBufferByte(pixels, pixels.length), new Point()));
try {
    ImageIO.write(resultImg, "bmp", new File("Z:\\Pictures\\output.bmp"));
} catch (Exception e) {
    throw e;
}

Visual Studio Solidity Demo Project

After messing around with Ethereum, a decentralized application platform built on blockchain technology similar to Bitcoin's, I developed a simple project in Visual Studio to help me run and debug contracts (the applications/programs that run on Ethereum). It is a Node.js project and connects to testrpc to run the contracts.

The project still needs to be commented, but it should be quite straightforward.

Project Directory

The project directory contains three important files (app.js, config.json and info.txt) and two important folders (contracts and tests).

app.js is the main project file and contains the code that compiles and runs the contracts and then starts the testing. config.json is a JSON file with all of the information app.js needs to load the contracts: in it you define which contracts to run, what their constructor arguments are, and which test file to test each one with. info.txt contains the command to run testrpc, along with a list of accounts and private keys that testrpc will create on startup for you to use.

The contracts folder is for storing all of your Solidity source files and the tests folder is for storing all of your test files.
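
Putting that together, the layout looks roughly like this:

ProjectRoot/
    app.js          compiles, deploys, and tests the contracts
    config.json     contract and test configuration
    info.txt        testrpc command plus generated accounts/keys
    contracts/      Solidity source files
    tests/          test scripts
    node_modules/
        unitTest.js the test library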

Testing

When you run the project, it will compile and deploy the contracts outlined in the config file and then pass them to their corresponding test programs.

I have written a simple test library, similar to the Node.js assert module, but it prints its output to the console. First call 'assert.test("test case");' to log which function you are testing, then 'assert.that("part");' to log which part of that function's output you are testing, and then the assert function you are testing with. All of these functions can be chained together. For a full list of functionality, look at the source in /node_modules/unitTest.js.

Config

The default config:

{
  "server": {
    "host": "http://localhost",
    "port": 8545
  },
  "contracts": [
    {
      "file":"test.sol",
      "constructor": [ "0xcd6ccc877d642ce8fff4da904db0013f137848ef" ],
      "test": "test.js"
    }
  ]
}

Editing

The Solidity files do not get syntax highlighting or code completion in Visual Studio, so I suggest writing the contracts in either the browser IDE or Visual Studio Code (with the highlighting extension installed) and then testing them with this project.

Download: here

Ethereum: here
Solidity: here
TestRPC: here
Browser IDE: here

Visual Studio Node.js: here
Visual Studio Code Highlighting: here