We’ve had a Kinect hanging around at SuperHeroes for a while now, which we’ve used in collaboration with Magic Bullet on some projects. On one of those projects, Istvan Pataki (one of the developers at MB) and I talked about making a plugin that would bring the Kinect mocap skeleton into C4D. This was before Microsoft released the source code to the public, when Magic Bullet was one of the few companies to have hacked the Kinect for non-Xbox use. However, clients and paying projects got in the way, and we never got around to it.
Then yesterday I discovered NI-Mate, a free (at least at this beta stage) mocap tool for the Kinect, along with their free C4D plugin. It was a breeze to install – no tricks or anything, it worked cleanly. The instructional video is 4 minutes long and covers how to do the capture – kudos to the developers for making it simple, concise, and user-friendly.
I took a quick stab at it, put on my dancing shoes, and danced for the camera. Then I grabbed the mocap data, assigned a rig to the nulls NI-Mate generates, and applied that to an extremely basic mesh. The reason: I wanted to build a particle system based on the mocap data.
This is still very rough, but I know how I’ll be spending my weekend and any extra time I have.
I originally intended Cornucopious to be just a still, but it had so much dynamism to it that I decided to animate it for 10 seconds. It doesn’t hurt that the spheres in the center were already animated via Thinking Particles, so one step was already done.
The spheres and the ‘candy blocks’ were animated using Thinking Particles (the latter used two emitters following a spline). The rings were also dynamic and helped push the spheres and candy around for a nice effect. Both the array of rings (pink) and the ring in the back (orange) are animated using the Sound Effector in MoGraph; it was pretty hard to fine-tune it to react to just the bass beats, so I ended up making a sound file of just beeps on every beat. In a way it was like setting keyframes in a sound file.
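A beep-per-beat file like that is also easy to generate programmatically. Here’s a small Python sketch using only the standard library – the BPM, beep length, and pitch are illustrative, not the values from the project:

```python
import math
import struct
import wave

def write_beat_track(path, bpm=120, seconds=10, rate=44100, beep_ms=40, freq=880.0):
    """Write a mono 16-bit WAV with a short sine beep on every beat.

    Fed to a Sound Effector, a file like this gives one clean spike per
    beat instead of a muddy bass line to fine-tune against.
    """
    beep_len = int(rate * beep_ms / 1000)   # samples per beep
    beat_len = int(rate * 60 / bpm)         # samples per beat
    frames = bytearray()
    for i in range(rate * seconds):
        pos = i % beat_len                  # position within the current beat
        if pos < beep_len:
            # Short sine burst with a linear fade-out to avoid clicks.
            env = 1.0 - pos / beep_len
            sample = 0.8 * env * math.sin(2 * math.pi * freq * pos / rate)
        else:
            sample = 0.0
        frames += struct.pack('<h', int(sample * 32767))
    with wave.open(path, 'wb') as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(bytes(frames))

write_beat_track('beats.wav', bpm=120, seconds=10)
```

Since every beep lands at an exact sample offset, it really is like setting keyframes in a sound file.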
Here’s a hardware render of the project, I thought the bright green with the black looked nice.
Here’s a render that started out as messing around with some Subsurface Scattering, which led to experimenting with glassy/plastic materials with subtle indices of refraction.
The spheres have the most SSS; they were animated using Thinking Particles with MoDynamics so they repel each other. The little rectangle candies are done using a Cloner Object cloned along a spline.
Last week some of the crew from Bare Bones came over to Amsterdam for an event/show hosted by Sid Lee. We got in touch with Matt Lambert and he asked us to do some quick videos to add into the show. He gave us some slow-motion footage he had shot earlier that day and wanted us to give it the Bare Bones treatment. With a time limit of a couple of hours, we hashed out 4 videos.
The team consisted of myself, David Schagerström, Oscar Gränse, and Raymo Ventura. It was a mixed bag of techniques: hand-drawn animation, some light 3D, motion tracking, and all-around simplifying and grungefying.
It was a nice and refreshing exercise (plus cool to work with Bare Bones) to work under a very tight time constraint and not strive for technical perfection, instead focusing on the concept – in this case, Voodoo.
Here are the two clips I worked on (sound design by Raymo Ventura):
Here’s a link to some photos provided by Sid Lee from the event.
A few weeks ago I was watching Evangelion (yeah yeah, whatever), and there was a particular shot that drew my attention. I really liked the lighting – its saturation and directional quality, to be precise. It also evoked an idea of solitude that appealed to me, despite there being two characters in the shot. So I screencapped it, and when I found the time I quickly built and lit the scene. Architecture rendering isn’t really my thing; I wouldn’t quite call this an architecture render, but it’s closer to that than what I regularly post. I also had fun working with the color grading.
I didn’t really have time to build a bed, but I needed something in the room (it’s already pretty vacant as it is; without that extra element it would’ve just been a boring old box with light), so I hopped on Turbosquid and got a free mattress. I then rigged it with some bones (something I don’t usually do, but I found it quite fun and will probably go deeper into rigging and weighting) and gave it that bent/collapsed feel. The maquette-figure guy came in last, as I had a hankering for really shallow depth of field and needed a foreground object.
A still from the scene that inspired this.
And finally, a screencap of the bone setup which made posing the mattress easy.
Recently SuperHeroes got their own WeTransfer channel.
For those who don’t know what WeTransfer is, it’s a great way to send files (up to 2GB). It also has a nice, clean UI that goes really well with their slideshow feature. Usually it’s artwork that they select, but when you get your own channel you can do personal branding – great for a design company.
Tasked with doing a quick render, I decided to play a bit with some soft-body dynamics and cloth sims. Once a letter had a good feel, it was cached and ‘paused’, then arranged and made a collider for the next letter. Repeat until all the letters had that nice soft feeling.
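The loop above – simulate one letter against the frozen ones, cache it, promote it to a collider, move on – can be sketched in a few lines of Python. This is a toy stand-in (the function names and the 1-D “stacking” solver are mine, not anything from the actual C4D setup), just to show the sequential-bake structure:

```python
def bake_letters_sequentially(letters, simulate):
    """Simulate each letter against the already-frozen ones, cache its
    rest state, then treat that cache as a static collider for the next."""
    frozen = []
    for letter in letters:
        rest = simulate(letter, frozen)   # sim against everything baked so far
        frozen.append(rest)               # 'pause' it: cached state is now a collider
    return frozen

# Toy stand-in for the soft-body solver: each "letter" is just a height,
# and it comes to rest stacked on top of whatever is frozen below it.
def drop_onto_stack(height, frozen):
    floor = sum(h for _, h in frozen)
    return (floor, height)                # (rest position, height)

print(bake_letters_sequentially([2, 3, 1], drop_onto_stack))
# → [(0, 2), (2, 3), (5, 1)]
```

The point of baking sequentially is that each sim only ever sees static colliders, which keeps every letter’s squash stable no matter how many came before it.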
Not much to say about these renders except that I was initially playing around with some particles and the tracer object, then added some Hair to the generated splines and somehow they ended up looking like this.
While I wasn’t going for something that looked like a mix of sperm, soy beans, and candy corn, the results were nice, so I kept pushing further until these two renders came out. One of those happy accidents that makes C4D so much fun.
In a previous post I showed some stereoscopic animated gifs that were a series of photographs. Since then Sarah and I have been wondering where to take that technique next, and we thought we’d try it out on some of her illustrations.
These were made using a single illustration, and the nice thing about working with digital illustrations is that they can be delivered as Photoshop layers. On top of that, I had the added benefit of getting Adobe Illustrator paths for pretty much all of the shapes/characters, which made my life easier.
Instead of modeling the objects (some bits were box-modeled, but not many), I imported the vector paths and extruded them into geometry. From there I used some Cloth simulations to bulge up the geometry and give it a nice organic roundness. There was some tweaking to be done, but considering that modeling isn’t my forte, I think this was a nice solution and made for a quick turnaround.
After that, the illustrations were brought into C4D as Photoshop files and each layer was projected onto its corresponding geometry – that way you get more flexibility and less of the stretching that happens when you use Camera Projection.
The next step was to add some cameras and then animate a Stage Object cycling through the different camera setups to give it that wobble and push the depth.
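Under the hood, that cycling boils down to a ping-pong camera index per frame. A minimal sketch (the function name and parameters are mine – in the actual scene this was keyframed on a Stage Object rather than scripted):

```python
def wobble_camera_index(frame, n_cameras=3, hold=2):
    """Ping-pong through camera slots (0,1,2,1,0,1,...), holding each
    camera for `hold` frames, to fake the stereoscopic wiggle."""
    step = frame // hold                 # which hold-interval we're in
    period = 2 * (n_cameras - 1)         # full back-and-forth cycle length
    pos = step % period
    return pos if pos < n_cameras else period - pos

# With 3 cameras and a 1-frame hold, frames 0-7 map to:
# [0, 1, 2, 1, 0, 1, 2, 1]
```

The ping-pong (rather than a plain loop) is what sells the wobble: the viewpoint sweeps back and forth across the camera rig instead of snapping from the last camera back to the first.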
Here are some screencaps of the setups: