Cut this concept into pieces, this is my last dev blog

Building a “Life Cutter” for Papa Roach

Lee Martin
Bits and Pieces

--

A brown sticker stating Papa Roach Life Cutter 22nd Anniversary Edition. A roach sits on top.

“Last Resort” by Papa Roach, now celebrating its 22nd anniversary, is one of those all-time 2000s bangers that has found new life thanks to the larger web embracing its lyrics as a meme rather than in their original darker context. There are many parodies and examples of it being used in comical ways, but none looms quite as large as Papa Roach’s own witty response to one of Trump’s incoherent tweets. Clearly, Papa Roach aren’t afraid to lean into this reality, and that’s precisely why I was hired to come up with a digital concept to celebrate the track’s anniversary.

In an attempt to draw the shortest line between opportunity and concept, I pondered if we could, indeed, allow users to cut their lives into pieces. Two things came to mind. First, the Banksy shredder app I developed back in 2018 contained a nice cutting mechanic. Second, in 2020 I was experimenting with tweet-powered dynamic live videos. What if we developed a live dynamic video stream which cut any photos tweeted to @PapaRoach, or tagged with a series of hashtags, into pieces?

Well, that’s exactly what we did. On 3/7, the day of the anniversary, we went live on Papa Roach’s Twitter, YouTube, and Facebook using Restream and OBS Studio. The stream itself was actually just a local browser window running our “Life Cutter” app, which I built using Nuxt. In addition, I ran a little script that listened for new tweets (with images) that met one of the following criteria:

  • A reply to @paparoach
  • A @paparoach mention
  • #lastresort
  • #cutmylifeintopieces
  • #paparoach
  • #egotrip
  • #paparoach2022

(Retweets were skipped.)

Incoming tweets were ushered to the “Life Cutter” app using Pusher. From there, they fell in from the top, settled into the shredder, and were then passed through and cut.
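Here’s a minimal sketch of the client side of that handoff using pusher-js, with a channel name, event name, and app object chosen purely for illustration (the actual wiring isn’t covered in this post):

import Pusher from 'pusher-js'

// Connect to Pusher and subscribe to the channel our tweet script publishes to
let pusher = new Pusher('YOUR_APP_KEY', { cluster: 'us2' })
let channel = pusher.subscribe('life-cutter')

// When a qualifying tweet arrives, hand its photo off to the app
channel.bind('tweet', (data) => {
  app.addPhoto(data.mediaUrl)
})

For this dev blog, I’d like to break down the cutting mechanism itself, which was developed using HTML <canvas> and Greensock. If you’re interested in the Twitter setup, check out this dev blog for a tweet-powered activation I developed for Waterparks. Anyway, here’s the CodePen I’ll be breaking down. Let’s get cutting.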

Life Cutter

A clip from the actual “Life Cutter” live stream

Our “Life Cutter” app consists of a single HTML <canvas> and is mostly constructed in pure JavaScript, with the exception of Greensock, which is used for some of the tweening. Photos must be loaded, animated, and of course, cut into pieces. Let’s start by defining a method to load any incoming images so they’re ready to be used by canvas.

Loading Images

I’ve mentioned this before but I use a simple little Promise method to handle image loading. Here’s what that looks like.

loadImage(url) {
  return new Promise((resolve, reject) => {
    // Initialize image
    let img = new Image()
    // Resolve promise once image has loaded
    img.onload = () => {
      resolve(img)
    }
    // Reject promise if image fails to load
    img.onerror = reject
    // Update image src to url
    img.src = url
  })
}

Then, when we’re ready to load an image, we simply call the method with an image url. I won’t be covering it in this blog (or CodePen) but it was imperative that we queued incoming photos so Twitter didn’t overwhelm the app. I used the queue function of async to help with this.
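Here’s a minimal sketch of what that queueing might look like with the async library, assuming addPhoto (the method covered later in this post) resolves when a photo’s animation completes; the queue wiring itself isn’t in the CodePen:

import async from 'async'

// Process one photo at a time so a barrage of tweets can't overwhelm the app
let photoQueue = async.queue(async (url) => {
  // Wait for this photo to finish before taking the next one
  // (assumes addPhoto resolves when its animation completes)
  await app.addPhoto(url)
}, 1)

// Push each incoming image url onto the queue as it arrives
// (mediaUrl stands in for a tweet photo url)
photoQueue.push(mediaUrl)

Next, let’s write another helper method that will help generate the cut pieces of our photo.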

Generating Cuts

While the Banksy shredder used a strip shredding design, I thought it would be best to build a more brutal crosscut mechanic into our app. As I told the client, we’re trying to cut their lives into “pieces,” not “strips.” The drawImage method of canvas allows us to extract a section of an image and place it onto an existing or new canvas. All we needed to do was calculate the size and position of each cut. After a bit of experimentation, I landed on a grid of 16 columns and 10 rows, resulting in 160 pieces. Since images could come in at different sizes, we’ll want to calculate the cut size of both the source and destination images. By the way, I just decided that the output photo size would be 150 pixels because it seemed like a decent size on most devices. You could make this much more responsive, as sketched after the next block.

// Initialize cuts
let cuts = []

// Get source cut sizes
let sCutWidth = image.width / this.cutCols
let sCutHeight = image.height / this.cutRows

// Get destination cut sizes
let dCutWidth = this.photoSize / this.cutCols
let dCutHeight = this.photoSize / this.cutRows
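
If you did want that responsiveness, here’s one hedged approach (not part of the original build): derive photoSize from the viewport, clamped to a sane range, and recompute it in the resize listener shown later.

// Hypothetical responsive sizing instead of a fixed 150 pixels
this.photoSize = Math.max(100, Math.min(300, Math.floor(window.innerWidth / 5)))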

We could then loop through all of the rows and columns to calculate the specific position of each cut from the source image. All of these variables are then passed to a renderCut method which simply uses the drawImage method to draw the cut image piece onto a new offline canvas. This way we pre-render our cut pieces once rather than continually during the animation loop. Once the cut is rendered, we’ll push an object containing the newly generated cut canvas, a boolean for keeping track of the cut status, the x offset of where the cut should be positioned, and a threshold defining the point in the animation at which the cut should occur. We can then return the cuts.

// Loop through rows
for (let row = 0; row < this.cutRows; row += 1) {
  // Loop through cols
  for (let col = 0; col < this.cutCols; col += 1) {
    // Get source cut position
    let sCutX = col * sCutWidth
    let sCutY = row * sCutHeight

    // Render cut
    let canvas = this.renderCut(image, sCutX, sCutY, sCutWidth, sCutHeight, 0, 0, dCutWidth, dCutHeight)

    // Push cut (lower rows get lower thresholds, so they're cut first
    // as the photo passes down through the cutter)
    cuts.push({
      canvas: canvas,
      cut: false,
      cutOffset: col * dCutWidth,
      cutThreshold: 1 - (row / this.cutRows)
    })
  }
}

// Return cuts
return cuts

Here’s a look at that simple renderCut method as well. We create a new canvas on the fly and resize it to the destination cut size. Then we simply draw the image based on the cut parameters we calculated in the loop.

renderCut(image, sx, sy, sWidth, sHeight, dx, dy, dWidth, dHeight) {
  // Create offline canvas
  let canvas = document.createElement('canvas')

  // Resize canvas
  canvas.height = dHeight
  canvas.width = dWidth

  // Get context
  let context = canvas.getContext('2d')

  // Draw image
  context.drawImage(image, sx, sy, sWidth, sHeight, dx, dy, dWidth, dHeight)

  // Return canvas
  return canvas
}

Now, let’s initialize our canvas and handle how our photos and pieces will be drawn onto it.

Canvas Initialization

Unlike our renderCut canvas, which is offline, our visualization canvas actually exists on our page as a <canvas> element. We’ll want to first resize this based on our browser size and then make sure it resizes itself anytime the user resizes their browser. Here’s that setup.

// Base
// ----------

// Get canvas
let canvas = document.getElementById('life-cutter')

// Resize canvas
canvas.height = window.innerHeight
canvas.width = window.innerWidth

// Get context
let context = canvas.getContext('2d')

// Resize
// ----------

// Listen for window resizing
window.addEventListener('resize', () => {
  // Resize canvas
  canvas.height = window.innerHeight
  canvas.width = window.innerWidth
})

Next, we’ll define the render loop which clears and draws all photos and pieces on the “Life Cutter.”

Canvas Rendering

Our render loop handles how the photos and pieces are drawn onto our canvas. I decided to use Greensock to tween the photo animation (which I will discuss later) but all of the pieces’ animation logic occurs in this loop. Let’s begin by drawing our photo.

First, we’ll request an animation frame to keep our render loop running. Then, we’ll clear the canvas. Since we know a photo should only appear intact above the cut point (halfway down) of our “Life Cutter,” we’ll use the clip method of canvas to contain it visually within that top section. Here’s the drawImage method again, this time drawing the photo image, and I’ve also thrown in a border stroke for good measure. Again, we’ll handle tweening the animation of any photos separately.

// Request animation frame
requestAnimationFrame(animate)

// Clear rect
context.clearRect(0, 0, canvas.width, canvas.height)

// Clip top area
context.save()
context.beginPath()
context.rect(0, 0, canvas.width, canvas.height / 2)
context.clip()

// If a photo exists
if (this.photo) {
  // Get image and positions
  let { image, x, y } = this.photo

  // Draw image
  context.drawImage(image, 0, 0, image.width, image.height, x, y, this.photoSize, this.photoSize)

  // Draw stroke
  context.strokeRect(x, y, this.photoSize, this.photoSize)
}

// Restore clipping
context.restore()

Now we can draw any cut pieces which may exist. Again, we’ll use the drawImage method but we’ll also adjust the position, vertical velocity, and rotation of each piece. This gives each piece a nice dynamic and physical feel. If any pieces are below the height of the overall canvas, we’ll remove them from the pieces array so they are no longer drawn. Once again we’ll use canvas clip to contain pieces to the bottom area of our visualization.

// Clip bottom area
context.save()
context.beginPath()
context.rect(0, canvas.height / 2, canvas.width, canvas.height / 2)
context.clip()

// If pieces exist
if (this.pieces.length) {
  // Loop through pieces backwards so removing one doesn't skip the next
  for (let i = this.pieces.length - 1; i >= 0; i -= 1) {
    let piece = this.pieces[i]

    // Move piece down
    piece.y += piece.v_y

    // If piece is below canvas
    if (piece.y > canvas.height) {
      // Remove piece from array
      this.pieces.splice(i, 1)
    } else {
      // Adjust vertical velocity (simple gravity)
      piece.v_y = piece.v_y + 0.3
      // Adjust rotation
      piece.rotation += piece.rotate
      // Get center rotation point
      let centerX = piece.x + (piece.canvas.width / 2)
      let centerY = piece.y + (piece.canvas.height / 2)
      // Rotate around piece center
      context.translate(centerX, centerY)
      context.rotate(piece.rotation * Math.PI / 180)
      context.translate(-centerX, -centerY)
      // Draw piece
      context.drawImage(piece.canvas, piece.x, piece.y)
      // Draw stroke
      context.strokeRect(piece.x, piece.y, piece.canvas.width, piece.canvas.height)
      // Reset transform
      context.setTransform(1, 0, 0, 1, 0, 0)
    }
  }
}

// Restore clipping
context.restore()

With our canvas and render loop prepped, it’s time to add some photos to our “Life Cutter.” In my opinion, this is the fun part.

Adding Photos

While the actual app allowed for multiple photos, the CodePen example handles one photo at a time. First, we load the image of the photo using our handy loadImage method. Then, we update our photo object. The photo object receives all the rendered cuts by calling our generateCuts method. Finally, we position the photo just above the canvas at a random horizontal position.

// Load image
let image = await this.loadImage(url)

// Initialize photo
this.photo = {
  image: image,
  cuts: this.generateCuts(image),
  x: Math.floor(Math.random() * (window.innerWidth - this.photoSize)),
  y: -this.photoSize
}

Now we’re ready to use Greensock to animate our photo and subsequently cut it into pieces. To do this, we’ll use Greensock’s Timeline tool to manage a series of tweens and (more importantly) their callbacks. Let’s first initialize our timeline. Since the CodePen is on autoplay, I’m using the onComplete callback of our timeline to add another photo once the entire timeline of events has been completed.

// Initialize timeline
let tl = gsap.timeline({
  onComplete: () => {
    // Add next photo
    this.addPhoto()
  }
})

The first tween sends our photo from the top of the page to the cut point in the center. I’m using a power4.out ease to allow it to settle nicely at that position.

// Send photo to cutter
tl.to(this.photo, {
  duration: 2.5,
  ease: 'power4.out',
  y: (window.innerHeight / 2) - this.photoSize
})

The next tween is quite simple: we’re just passing the photo through the cutter using a linear ease. However, things get a bit more interesting in the onUpdate callback. As the photo tweens down through the cutter, we can check the tween’s progress() method to understand how far along the animation is. We can then use this value to figure out which pieces should be cut. Remember that cutThreshold property? Here’s where that comes in. By looking for cuts which have crossed the threshold and have not yet been cut, we create an array of ridiculously named uncuts. These uncuts are then cut and added as pieces to our pieces array. Each piece includes the rendered cut, position, random velocity, and random rotation.

// Capture references for the tween callback, where `this` is the tween itself
let photo = this.photo
let pieces = this.pieces

// Pass photo through cutter
tl.to(this.photo, {
  duration: 2,
  ease: 'linear',
  y: window.innerHeight / 2,
  onUpdate: function () {
    // Find uncut cuts ready for cutting
    // (>= so the top row, whose threshold is 1, still cuts at progress 1)
    let uncuts = photo.cuts.filter(cut => {
      return !cut.cut && this.progress() >= cut.cutThreshold
    })

    // Loop through uncut cuts
    uncuts.forEach(uncut => {
      // Set cut to true
      uncut.cut = true

      // Cut into pieces
      pieces.push({
        canvas: uncut.canvas,
        x: photo.x + uncut.cutOffset,
        y: window.innerHeight / 2,
        v_y: Math.random() * 10 - 5,
        rotation: 0,
        rotate: Math.random() * 4 - 2
      })
    })
  }
})

Finally, while the photo is passing through the cutter, I’m randomizing its x position slightly to give it a little vibration. This helps sell the cutting effect a bit. We can use the “<” position parameter to have this tween start at the same time as the pass-through.

// Vibrate photo
tl.to(this.photo, {
  duration: 0.1,
  repeat: 20,
  x: `random(${this.photo.x - 10}, ${this.photo.x + 10})`
}, "<")

And that’s pretty much it. If you want to expand on this, try adjusting the build to allow for multiple photos and using the async queue method I mentioned to handle a barrage of images. You might also want to adjust how many cuts get made, or perhaps mask each cut piece to include some frayed edges, as sketched below. Good luck and drop a comment if you have any questions.
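As a starting point for that last idea, here’s a hedged sketch of one way to fray a piece’s edges: clip the offline cut canvas with a slightly jagged path before the drawImage call inside renderCut. The helper name, step, and jitter values are all hypothetical, to be tuned by eye.

// Hypothetical helper: clip a cut's offline canvas with a jagged path so its
// edges look frayed; call before context.drawImage inside renderCut
function frayedClip(context, width, height, step = 5, jitter = 2) {
  context.beginPath()
  context.moveTo(0, 0)
  // Top edge: nudge each point down by a random amount
  for (let x = step; x <= width; x += step) {
    context.lineTo(x, Math.random() * jitter)
  }
  // Right edge: nudge each point inward
  for (let y = step; y <= height; y += step) {
    context.lineTo(width - Math.random() * jitter, y)
  }
  // Bottom edge: nudge each point upward
  for (let x = width - step; x >= 0; x -= step) {
    context.lineTo(x, height - Math.random() * jitter)
  }
  // Left edge: nudge each point inward
  for (let y = height - step; y >= 0; y -= step) {
    context.lineTo(Math.random() * jitter, y)
  }
  context.closePath()
  context.clip()
}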

Thanks

Papa Roach in the “Last Resort” music video: Jacoby stands in the center of a crowd of fans, shot with a fisheye camera

Shout out to Aylish O’Sullivan, Chad Horton, Ian Dietrich, and Mike Greene for conducting the inspiring call which led to this concept and helping turn it into a reality. We’re off to work on the next project for Papa Roach. 🤘🏻 Speaking of which, thanks to the band for understanding the opportunity and embracing the continued affection for “Last Resort.” Papa Roach’s new album Ego Trip is out April 8th.

--


Netmaker. Playing the Internet in your favorite band for two decades. Previously Silva Artist Management, SoundCloud, and Songkick.