Max Headroom

A Processing sketch based on the TV show Max Headroom.

float rotx = PI/4;
float roty = PI/4;

float boxSize = 800;
float lineNum = 30;
void setup()
{
  size(1020, 720, P3D);
  textureMode(NORMAL);
  shapeMode(CENTER);
}
 
void draw()
{
   
  background(0);
  translate(width/2.0, height/2.0, boxSize/2);
  rotateX(rotx);
  rotateY(roty);
  stroke(255);
  noFill();
  float z = boxSize/2;
  pushMatrix();
  int side = 0;
  for (float rotY = HALF_PI; rotY <= TWO_PI; rotY+= HALF_PI) {
    switch (side) {
      case(0):
      stroke(255,0,0);
      break;
      case(1):
      stroke(255,255,0);
      break;
      case(2):
      stroke(255,255,0);
      break;
      case(3):
      stroke(255,0,0);
      break;
    }
    side++;
    for (int i = 0; i <= lineNum; i++) {
      if (rotY == HALF_PI || rotY == TWO_PI) {
        drawLineFromX(-boxSize/2, boxSize/2, -boxSize/2 + i * boxSize/lineNum, z, 20);
      } else {
        drawLineFromY(-boxSize/2, boxSize/2, -boxSize/2 + i * boxSize/lineNum, z, 20);
      }
    }
    rotateY(rotY);
  }
  popMatrix();
  pushMatrix();
  rotateX(HALF_PI);
  stroke(0,0,255);
  for (int i = 0; i <= lineNum; i++) {
        drawLineFromY(-boxSize/2, boxSize/2, -boxSize/2 + i * boxSize/lineNum, z, 20);
  }
  popMatrix();
  pushMatrix();
  rotateX(-HALF_PI);
  stroke(0,255,0);
  for (int i = 0; i <= lineNum; i++) {
        drawLineFromY(-boxSize/2, boxSize/2, -boxSize/2 + i * boxSize/lineNum, z, 20);
  }
  popMatrix();
  lineNum = 20 + 10 * sin(frameCount / 100.0);
  boxSize = 700 + 100 * sin(frameCount / 100.0 + PI);
  if(!mousePressed){
    roty+=0.01;
    rotx+=0.02;
  }
}

float lineZOffset(int i, float z) {
  float size = 25 + 25 *sin(frameCount / 50.0 + PI /3);
  return z - size+ size * noise(frameCount / 100.0 + (float)i * 10000);
}

void drawLineFromX(float xStart, float xEnd, float y, float z, float points) {
  beginShape();
  float next = abs(xEnd - xStart) / points;
  curveVertex(xStart - next, y, z);
  curveVertex(xStart, y, z);
  int i = 1;
  for (float x = xStart + next; x < xEnd; x+= next) {
    curveVertex(x,  y, lineZOffset(2 + i, z));
    i++;
  }
  curveVertex(xEnd, y,  z);
  curveVertex(xEnd - next, y,  z);
  endShape();
}

void drawLineFromY(float yStart, float yEnd, float x, float z, float points) {
  beginShape();
  float next = abs(yEnd - yStart) / points;
  curveVertex(x,  yStart - next,z);
  curveVertex(x,  yStart, z);
  int i = 1;
  for (float y = yStart + next; y < yEnd; y+= next) {
    curveVertex(x, y, lineZOffset(2 + i, z));
    i++;
  }
  curveVertex(x,  yEnd, z);
  curveVertex(x,  yEnd + next, z);
  endShape();
}
 

A Dark and Stormy Night, Programming with Poems

The above image was made using ADASN (A Dark and Stormy Night), a simple stack-based programming language. I designed the language so its code reads as normal English. Every word of a sentence with more than 3 characters represents a number: 1 if it starts with a, 2 if it starts with b, and so on. All the words in the sentence are then added up; if the resulting number corresponds to a command, that command is executed, otherwise the number is pushed onto the stack. Numbers 55 through 86 represent commands. When the code is executed it generates a video, with the code running separately for each pixel of every frame.
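As a rough illustration, the word and sentence encoding described above can be sketched in Java. The class and method names here are my own, not part of ADASN:

```java
// Hypothetical sketch of the ADASN word rule: words longer than 3 characters
// map to a number taken from their first letter (a=1, b=2, ...), and a
// sentence's value is the sum of its qualifying words.
public class AdasnWords {
    static int wordValue(String word) {
        return Character.toLowerCase(word.charAt(0)) - 'a' + 1;
    }

    static int sentenceValue(String sentence) {
        int sum = 0;
        for (String w : sentence.split("[^A-Za-z]+")) {
            if (w.length() > 3) sum += wordValue(w);
        }
        return sum;
    }

    public static void main(String[] args) {
        // "Terrifically" -> t = 20, "gigantic" -> g = 7, total 27
        // 27 is not in the command range 55-86, so it would be pushed.
        System.out.println(sentenceValue("Terrifically gigantic,"));
    }
}
```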

The code tends to work best when written as a poem, as that allows for more flexible sentence structure. The trick is to write code that not only describes how to make an image but also what that image means.

Currently I have written a transpiler using JavaCC that converts the code into Java and GLSL, using Processing to interface with OpenGL. This produces Java source code that can then be compiled into a runnable jar file.

Commands

  • 55    time - pushes the current running time onto the stack
  • 56    dup - duplicates the top of the stack
  • 57    push - pushes the next number onto the stack even if it is a command
  • 58    decimal - pops, then pushes 1 / that number
  • 59    <= - pops the top 2 from the stack, compares them and pushes 1 or 0
  • 60    >= - pops the top 2 from the stack, compares them and pushes 1 or 0
  • 61    < - pops the top 2 from the stack, compares them and pushes 1 or 0
  • 62    > - pops the top 2 from the stack, compares them and pushes 1 or 0
  • 63    = - pops the top 2 from the stack, compares them and pushes 1 or 0
  • 64    != - pops the top 2 from the stack, compares them and pushes 1 or 0
  • 65    + - pops the top 2 from the stack, then pushes one plus the other
  • 66    - - pops the top 2 from the stack, then pushes one minus the other
  • 67    * - pops the top 2 from the stack, then pushes one times the other
  • 68    / - pops the top 2 from the stack, then pushes one divided by the other
  • 69    % - pops the top 2 from the stack, then pushes one modulo the other
  • 70    x - pushes the x position of the current pixel
  • 71    y - pushes the y position of the current pixel
  • 72    r - pushes the red color value of the current pixel
  • 73    g - pushes the green color value of the current pixel
  • 74    b - pushes the blue color value of the current pixel
  • 75    sR - pops, then sets the current red color value to that number
  • 76    sG - pops, then sets the current green color value to that number
  • 77    sB - pops, then sets the current blue color value to that number
  • 78    sin - pops, then pushes the sin of that number
  • 79    cos - pops, then pushes the cos of that number
  • 80    tan - pops, then pushes the tan of that number
  • 81    rand - pushes a random number
  • 82    width - pushes the width of the video
  • 83    height - pushes the height of the video
  • 84    pop - pops the stack
  • 85    length - pops the top 2 on the stack and pushes the length of the vector they form
  • 86    atan - pops the top 2 on the stack and pushes the atan of them

Example

Bolbus.

Terrifically gigantic,
floating a buv, 
up in the heavens,
floating with love.

Rising at morn,
falling at night,
quivering a buv.

Rising at morn,
falling at night,
soaring a buv.

Watching from far above,
shining down sun,
moving the fun.

Beautiful.

Terrifically gigantic,
floating above, 
up in the heavens,
floating with love.

Rising at morn,
falling at night,
quivering a buv.

Rising at morn,
falling at night,
soaring above.

Watching from far above,
shining down sun,
moving the fun.

Terrifically gigantic,
floating around all up a bove, 
up in the heavens,
floating with love.

Around, all bashly above.

Terrifically gigantic,
floating a buv, 
up in the heavens,
floating with love.

Rising at morn,
falling at night,
quivering a buv.

Watching from far above,
shining sun,
moving the fun.

Rising at morn,
falling at night,
xhuming the sun.

(As you can see, I did cheat a bit by changing the spelling of some words: xhuming instead of exhuming, and a buv/a bove instead of above.)

Parsing

This is what it gets parsed down to:

  1. 2 82 68 70 66 2 83 68 71 66 85 4 82 68 62 75
  2. 2 width / x - 2 height / y - length  4 width / > setR
  3. R = length(y - (height/2), x - (width/2)) > width/4
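For clarity, mapping the numeric tokens back to command names (using the table from the Commands section) could look something like this; the class and method names are illustrative:

```java
import java.util.Map;

// Hypothetical decoder for a parsed ADASN token stream: numbers 55-86 become
// command names, anything else is left as a literal.
public class AdasnDecode {
    static final Map<Integer, String> OPS = Map.ofEntries(
        Map.entry(55, "time"), Map.entry(56, "dup"), Map.entry(57, "push"),
        Map.entry(58, "decimal"), Map.entry(59, "<="), Map.entry(60, ">="),
        Map.entry(61, "<"), Map.entry(62, ">"), Map.entry(63, "="),
        Map.entry(64, "!="), Map.entry(65, "+"), Map.entry(66, "-"),
        Map.entry(67, "*"), Map.entry(68, "/"), Map.entry(69, "%"),
        Map.entry(70, "x"), Map.entry(71, "y"), Map.entry(72, "r"),
        Map.entry(73, "g"), Map.entry(74, "b"), Map.entry(75, "sR"),
        Map.entry(76, "sG"), Map.entry(77, "sB"), Map.entry(78, "sin"),
        Map.entry(79, "cos"), Map.entry(80, "tan"), Map.entry(81, "rand"),
        Map.entry(82, "width"), Map.entry(83, "height"), Map.entry(84, "pop"),
        Map.entry(85, "length"), Map.entry(86, "atan"));

    static String decode(String tokens) {
        StringBuilder sb = new StringBuilder();
        for (String t : tokens.trim().split("\\s+")) {
            int n = Integer.parseInt(t);
            sb.append(OPS.getOrDefault(n, t)).append(' ');
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(decode("2 82 68 70 66"));  // prints "2 width / x -"
    }
}
```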

Chroma and Image Distortion

Wrote this shader for Processing a while back. It adds a sort of drunken vibe when applied to images.

Fragment Shader

precision highp float;

uniform sampler2D texture;
uniform float noiseOffset;
uniform float noiseMultipler;
uniform float noiseAddMultipler;
uniform float colorOffset;
uniform float colorOffsetMod;
uniform float positionMixAlpha;

varying vec4 vertColor;
varying vec4 vertTexCoord;
varying vec4 pos;

//https://gist.github.com/patriciogonzalezvivo/670c22f3966e662d2f83
float rand(vec2 n) { 
    return fract(sin(dot(n, vec2(12.9898, 4.1414))) * 43758.5453);
}

float noise(vec2 p){
    vec2 ip = floor(p);
    vec2 u = fract(p);
    u = u*u*(3.0-2.0*u);

    float res = mix(
        mix(rand(ip),rand(ip+vec2(1.0,0.0)),u.x),
        mix(rand(ip+vec2(0.0,1.0)),rand(ip+vec2(1.0,1.0)),u.x),u.y);
    return res*res;
}

//  https://www.shadertoy.com/view/MdX3Rr
//  by inigo quilez
//
const mat2 m2 = mat2(0.8,-0.6,0.6,0.8);
float fbm( in vec2 p ){
    float f = 0.0;
    f += 0.5000*noise( p ); p = m2*p*2.02;
    f += 0.2500*noise( p ); p = m2*p*2.03;
    f += 0.1250*noise( p ); p = m2*p*2.01;
    f += 0.0625*noise( p );

    return f/0.9375;
}


void main() {
    vec2 p = vertTexCoord.xy;
    vec2 modNoiseP = p * noiseMultipler + noiseOffset;
    float noiseF = fbm(modNoiseP) + fbm(modNoiseP + noiseAddMultipler * fbm(modNoiseP));
    float cMod = noiseF * colorOffsetMod;
    vec4 red = texture2D(texture,  mix(p, vec2(noiseF) - vec2(colorOffset * cMod,0.), positionMixAlpha));
    vec4 blue = texture2D(texture, mix(p, vec2(noiseF) + vec2(colorOffset * cMod,0.), positionMixAlpha));
    vec4 green = texture2D(texture, mix(p, vec2(noiseF), positionMixAlpha));
    gl_FragColor = vec4(red.r, green.g, blue.b, 1.);
}

Vertex Shader

#define PROCESSING_TEXTURE_SHADER

uniform mat4 transform;
uniform mat4 texMatrix;

attribute vec4 vertex;
attribute vec4 color;
attribute vec2 texCoord;

varying vec4 vertColor;
varying vec4 vertTexCoord;
varying vec4 pos;

void main() {
    gl_Position = transform * vertex;
    pos = transform * vertex;
    vertColor = color;
    vertTexCoord = texMatrix * vec4(texCoord, 1.0, 1.0);
}
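A minimal Processing sketch applying the two shaders might look like the following; the filenames and uniform values are assumptions, not the settings used for the image above:

```java
// Sketch assuming the shaders are saved as chroma.frag and chroma.vert
// in the sketch's data folder, alongside an input.jpg.
PShader distort;
PImage img;

void setup() {
  size(1020, 720, P2D);
  img = loadImage("input.jpg");
  distort = loadShader("chroma.frag", "chroma.vert");
}

void draw() {
  // animate the noise field over time; all values here are illustrative
  distort.set("noiseOffset", frameCount / 100.0);
  distort.set("noiseMultipler", 2.0);
  distort.set("noiseAddMultipler", 1.5);
  distort.set("colorOffset", 0.02);
  distort.set("colorOffsetMod", 1.0);
  distort.set("positionMixAlpha", 0.3);
  shader(distort);
  image(img, 0, 0, width, height);
}
```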

Image to WAV and back

After writing my last blog about databending in Java I realized that Audacity has a batch editing tool called Chains, which lets you apply a saved sequence of effects to many audio files at once.

So I wrote a helper class in Java for manipulating the pixel data in images, which also lets you write the pixel data to a WAV file and load it back. Using this you can batch-convert all the frames of a video to WAV files, apply a chain to them in Audacity, and then batch-convert them all back to images. This is far better than just manipulating the pixel data in Java, as it means you don't have to write your own code for the audio effects.

Audacity Chains Info: here

Code

package com.augustuspash.databend;

import java.awt.Point;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.awt.image.Raster;
import java.awt.image.SampleModel;
import java.io.File;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

import javax.imageio.ImageIO;


public class Databend {

	static byte[] headerWAV = new byte[] { 0x52, 0x49, 0x46, 0x46, 0x00, 0x00, 0x00, 0x00, 0x57, 0x41, 0x56, 
			0x45, 0x66, 0x6D, 0x74, 0x20, 0x10, 0x00, 0x00, 0x00, 0x01, 0x00, 0x01, 0x00, 0x44, (byte) 0xAC,
			0x00, 0x00, (byte) 0x88, 0x58, 0x01, 0x00, 0x02, 0x00, 0x10, 0x00, 0x64, 0x61, 0x74, 0x61 };
	
	public static void batchImageToWave(int start, int stop, String imagePath, String outputPath) throws IOException {
		for (int i = start; i < stop; i++) {
			imageToWave(String.format(imagePath, i), String.format(outputPath, i));
		}
	}
	
	public static void batchWaveToImage(int start, int stop, String imagePath, String wavePath, String outputPath, String type) throws IOException {
		for (int i = start; i < stop; i++) {
			waveToImage(String.format(imagePath, i), String.format(wavePath, i), String.format(outputPath, i), type);
		}
	}
	
	public static void imageToWave(String imagePath, String outputPath) throws IOException {
		BufferedImage img = loadImage(imagePath);
		byte[] origData = getPixelBytes(img);
		writeBytesToWave(outputPath, origData);
	}
	
	public static void waveToImage(String origImagePath, String wavePath, String outputPath, String type) throws IOException {
		BufferedImage img = loadImage(origImagePath);
		byte[] origData = getPixelBytes(img);
		byte[] reloadData = getWavBytes(wavePath);
		BufferedImage img2 = pixelBytesToImage(img.getWidth(), img.getHeight(), img.getType(), img.getSampleModel(), origData.length, reloadData);
		saveImage(img2, outputPath, type);
	}
	
	public static void writeBytesToWave(String templatePath, byte[] data) throws IOException {
		Path file = Paths.get(templatePath);
		Files.write(file, headerWAV);
		// WAV chunk sizes are 32-bit little-endian, so write the data length as 4 bytes
		Files.write(file, new byte[]{(byte)(data.length&0xFF), (byte)((data.length>>8)&0xFF),
				(byte)((data.length>>16)&0xFF), (byte)((data.length>>24)&0xFF)},
				StandardOpenOption.WRITE, StandardOpenOption.APPEND);
		Files.write(file, data, StandardOpenOption.WRITE, StandardOpenOption.APPEND);
	}
	
	public static byte[] getPixelBytes(String path) throws IOException {
		BufferedImage img = null;
		img = ImageIO.read(new File(path));
		return ((DataBufferByte)img.getRaster().getDataBuffer()).getData();
	}
	
	public static byte[] getPixelBytes(BufferedImage image){
		return ((DataBufferByte)image.getRaster().getDataBuffer()).getData();
	}
	
	public static BufferedImage loadImage(String path) throws IOException {
		return ImageIO.read(new File(path));
	}
	
	public static BufferedImage pixelBytesToImage(int width, int height, int type, SampleModel sampleModel, int origLength, byte[] data) {
		BufferedImage resultImg = new BufferedImage(width, height, type);
		if (origLength < data.length) {
			byte[] result = new byte[origLength];
			System.arraycopy(data, 0, result, 0, origLength);
			data = result;
		} else if (origLength > data.length) {
			byte[] result = new byte[origLength];
			System.arraycopy(data, 0, result, 0, data.length);
			data = result;
		}
		resultImg.setData(Raster.createRaster(sampleModel, new DataBufferByte(data, data.length), new Point() ) );
		return resultImg;
	}
	
	public static void saveImage(BufferedImage image, String path, String type) throws IOException {
		ImageIO.write(image, type, new File(path));
	}
	
	public static byte[] getWavBytes(String path) throws IOException {
		Path file = Paths.get(path);
		byte[] fileRAW = Files.readAllBytes(file);
		// skip the standard 44-byte WAV header; the rest is sample data
		byte[] out = new byte[fileRAW.length - 44];
		System.arraycopy(fileRAW, 44, out, 0, out.length);
		return out;
	}
	
	public static short bytesToShort(byte[] bytes, int index, ByteOrder byteOrder) {
		ByteBuffer bb = ByteBuffer.allocate(2);
		bb.order(byteOrder);
		bb.put(bytes[index]);
		bb.put(bytes[index+1]);
		return bb.getShort(0);
	}
	
	public static byte[] shortToBytes(short s) {
		return new byte[]{(byte)((s>>8)&0xFF),(byte)(s&0xFF)};
	}
	
	public static short[] bytesToShorts(byte[] bytes, ByteOrder byteOrder) {
		ByteBuffer bb = ByteBuffer.allocate(2);
		bb.order(byteOrder);
		short[] output = new short[bytes.length/2];
		for (int i = 0; i < bytes.length - 1; i+=2) {
			bb.put(bytes[i]);
			bb.put(bytes[i+1]);
			output[i/2] = bb.getShort(0);
			bb.clear();
		}
		return output;
	}
	
	public static byte[] shortsToBytes(short[] shorts) {
		byte[] output = new byte[shorts.length * 2];
		for (int i = 0; i < shorts.length; i++) {
			byte[] tmp = new byte[]{(byte)((shorts[i]>>8)&0xFF),(byte)(shorts[i]&0xFF)};
			output[i*2] = tmp[0];
			output[i*2 + 1] = tmp[1];
		}
		return output;
	}
}
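One detail worth noting: the RIFF/WAV format stores chunk sizes as 32-bit little-endian integers. A small standalone sketch of encoding a length that way:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class WavSize {
    // RIFF/WAV chunk sizes are 32-bit little-endian: lowest byte first.
    static byte[] littleEndian32(int n) {
        return ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN).putInt(n).array();
    }

    public static void main(String[] args) {
        byte[] b = littleEndian32(0x01020304);
        System.out.printf("%d %d %d %d%n", b[0], b[1], b[2], b[3]);  // prints 4 3 2 1
    }
}
```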

Example

//convert all frames to wav files (this will load frames with the file names output00000.bmp, output00001.bmp, etc.)
batchImageToWave(0, 100, "Z:\\Videos\\video-Frames\\output%05d.bmp", "Z:\\Videos\\video-Frames\\wavs\\%05d.wav");
//this will convert the new wav files to images
batchWaveToImage(0, 100, "Z:\\Videos\\video-Frames\\output%05d.bmp", "Z:\\Videos\\video-Frames\\wavs\\cleaned\\%05d.wav", "Z:\\Videos\\video-Frames\\new\\output%05d.bmp", "bmp");

Databending in Java

The above image is the result of databending a snippet of a video. It was created by extracting each frame of the video, loading it into Audacity (an audio editor) as sound, applying a normalize filter, and then exporting it back to an image file. After that the resulting images were converted into a gif.

In order to save time I decided to see if I could do it programmatically. First the image is loaded and the pixel data is extracted as a byte array. The byte array then has a sound effect applied to it, treating each byte as a sample of sound. The resulting byte array is then applied back to an image and saved to a file. You can then do this for each extracted frame of a video.

Code

Loading Pixel Data

BufferedImage img = null;
try {
    img = ImageIO.read(new File("Z:\\Pictures\\image.bmp"));
} catch (IOException e) {
    throw e;
}
byte[] pixels = ((DataBufferByte)img.getRaster().getDataBuffer()).getData();

Convert Bytes to Number

short[] values = new short[pixels.length/2];
ByteBuffer bb = ByteBuffer.allocate(2);
for (int i = 0; i < pixels.length-1; i+=2) {
    bb.put(pixels[i]);
    bb.put(pixels[i+1]);
    values[i/2] = bb.getShort(0);
    bb.clear();
}

Effects

The Pitch Shift effect used the code from here. Although written in C#, it was easily converted to Java. The Normalize effect is created by finding the maximum value of the pixels and then multiplying each value by the target maximum divided by that maximum (values[i] = (short)(values[i] * (double)targetMax/(double)max)). The echo is created by adding the value of each pixel, multiplied by the decay, to the pixel the delay amount further down the array (values[i+delay] += (short)((float)values[i] * decay)). The delay amount can be calculated by multiplying the delay in seconds by the sample rate.
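As a sketch, the normalize and echo effects described above can be written as plain Java methods over the sample array; targetMax, delay, and decay are illustrative parameters:

```java
// Minimal versions of the two effects described above, applied in place
// to the short[] of samples extracted from the pixel data.
public class Effects {
    // Scale every sample so the loudest one reaches targetMax.
    static void normalize(short[] values, short targetMax) {
        short max = 1;
        for (short v : values) {
            if (Math.abs(v) > max) max = (short) Math.abs(v);
        }
        for (int i = 0; i < values.length; i++) {
            values[i] = (short) (values[i] * (double) targetMax / (double) max);
        }
    }

    // Add a decayed copy of each sample delay positions further down.
    static void echo(short[] values, int delay, float decay) {
        for (int i = 0; i + delay < values.length; i++) {
            values[i + delay] += (short) (values[i] * decay);
        }
    }

    public static void main(String[] args) {
        short[] s = {100, -50, 25, 0};
        normalize(s, (short) 200);
        // max was 100, so everything doubles: prints 200 -100 50
        System.out.println(s[0] + " " + s[1] + " " + s[2]);
    }
}
```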

Applying to Image and writing to file

for (int i = 0; i < values.length; i++) {
    byte[] arr = new byte[]{(byte)((values[i]>>8)&0xFF), (byte)(values[i]&0xFF)};
    pixels[i*2] = arr[0];
    pixels[i*2+1] = arr[1];
}

BufferedImage resultImg = new BufferedImage(img.getWidth(), img.getHeight(), img.getType());
resultImg.setData(Raster.createRaster(img.getSampleModel(), new DataBufferByte(pixels, pixels.length), new Point()));
try {
    ImageIO.write(resultImg, "bmp", new File("Z:\\Pictures\\output.bmp"));
} catch (Exception e) {
    throw e;
}