It’s been a while since I’ve done anything with attractor points in Grasshopper, so I thought I’d take another look at them. In the Grasshopper definition for this render I tried to target each cell’s extrusion towards a curve with several points of inflection, which creates the regions with darker shading. The definition takes two geometry inputs: a rectangular curve to be turned into a surface, and another curve which pulls the points of the grid.
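For anyone curious about the underlying logic, here’s a rough GhPython-style sketch of the attractor step. The input names (pts, crv, max_h) and the falloff value are my own placeholders, not the actual definition:

```python
# Rough GhPython sketch (hypothetical inputs): scale each grid cell's
# extrusion by its distance to the attractor curve.
falloff = 10.0   # assumed distance beyond which the curve has no pull
heights = []
for pt in pts:                         # pts: grid points on the surface
    ok, t = crv.ClosestPoint(pt)       # parameter of the nearest point
    d = pt.DistanceTo(crv.PointAt(t))  # distance to the attractor curve
    # cells near the curve extrude less, creating the darker regions
    heights.append(max_h * min(d / falloff, 1.0))
```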

gh definition

Chasing Shadows is an interactive display which builds on Equilibrium, the project my peers and I worked on in 2014. The outcome of that project mainly worked with a two-dimensional data stream, i.e. a video feed coming from a webcam; for this project, we wanted to use an Xbox Kinect to drive a different kind of interaction (see video below).

Using the Kinect had immediate benefits, allowing for human skeleton recognition and the ability to extract depth information from the data stream, so we put some of this to work. We tried using Processing to create an implementation of this, but ultimately we went back to Rhino + Grasshopper + Firefly, as it was the system we were most familiar with.

Our first implementation was a head tracking system which would move the projected camera accordingly. We placed the interaction inside a virtual box to show the user that they had some degree of control over the environment, a key part in the success of the system.
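In code terms the camera logic is simple. A loose sketch of the idea, reconstructed by me rather than taken from the actual Firefly patch:

```python
# Loose sketch of head-coupled camera movement: offset the virtual
# camera by the tracked head's displacement from a calibrated rest
# position. gain exaggerates or dampens the parallax effect.
def camera_offset(head, rest, gain=1.0):
    """head, rest: (x, y, z) positions in Kinect space."""
    return tuple(gain * (h - r) for h, r in zip(head, rest))

# e.g. camera_offset((0.2, 1.1, 2.0), (0.0, 1.0, 2.5), gain=1.5)
```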

The other key part of this system was the representation of the human figure. The Kinect can spit out a skeleton by default, but for us this was far too literal a representation. So we came up with the idea of the materialisation of the digital body. As the video shows, while the user has control over the vantage point of the box, the body slowly assimilates into the projected environment, followed by the different body parts as the user initialises them. We felt that this produced a much clearer and more interesting language about the rift between the digital and physical realms, and teased at our understanding of analogue and digital environments.
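The assimilation effect boils down to a per-part opacity ramp. A minimal sketch of the idea; the ramp shape and timing here are assumptions, since the real logic lived in the Grasshopper definition:

```python
# Each body part fades in once the user initialises it; before that
# it stays invisible. Assumes a simple linear ramp over ramp_time.
def part_opacity(t_now, t_init, ramp_time=5.0):
    if t_init is None:               # part not yet initialised
        return 0.0
    return min(max((t_now - t_init) / ramp_time, 0.0), 1.0)
```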

The human glitches into existence, taking the form of something distinctly intangible, a temporal being, slowly gaining clarity as the user explores their new digital realm. This installation ultimately becomes a dialogue between computer and human; it questions our notion of being, querying the validity of the human’s role in the interactive environment. We aim to discover how the digital interaction can begin to inform the human’s actions whilst simultaneously receiving influence from them. The perceived prioritisation of the digital is what Richard Coyne is sentimental towards. Coyne’s ‘technoromantic’ outlook on the relationship is intensely relevant when applied to installation design; his text explores the idea of subverting conventional design methodologies and, in doing so, provokes notions of identity, interpretation, and space/time. Typically, the human’s role ends in the sense that they are no longer able to influence an entity; the obsessive human need to control everything within its power is strange, at best. By giving more power to the unknown, we are digitally counselling the human: teaching them to trust an entity they have no reason to trust.

Using Grasshopper I created a growing animation of a spiral. The resulting curve is an Archimedean spiral, generated by the equations:

x(t) = R(t) cos(t), y(t) = R(t) sin(t), where R(t) = a + b·t (with a constant R these would trace a circle; the linearly growing radius is what makes it a spiral).
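A minimal Python sketch of the same construction; the original was a Grasshopper definition, and a, b, and the step counts here are arbitrary:

```python
import math

# Sample an Archimedean spiral: the radius grows linearly with the
# angle, R(t) = a + b*t.
def archimedean_spiral(a=0.0, b=1.0, turns=5, steps=500):
    pts = []
    for i in range(steps + 1):
        t = 2 * math.pi * turns * i / steps
        r = a + b * t
        pts.append((r * math.cos(t), r * math.sin(t)))
    return pts
```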

Some of the frames in the animation rendered out flickery, so I had to remove them. The spiral also starts off with a rather strange deformation, so I need to think about how to address that later.

Here’s the animation.

A few weeks ago, I worked out the underlying motion of the strandbeest in Grasshopper; this time I created all the geometries which generate the strandbeest form. Here’s the low-res mesh display of it in action. I also rewrote the Grasshopper script, as the old one wasn’t very efficient and had no customisability in terms of initial values. In principle, though, it is best to stick to Theo Jansen’s eleven magical numbers, which he developed using a genetic algorithm, to create the strandbeest motion.
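For reference, here are the link lengths as they’re usually listed; these are Jansen’s published proportions, though my labelling of l and m is an assumption:

```python
# Theo Jansen's "magical numbers": the eleven link lengths a-k, plus
# the frame offset l and crank radius m that usually accompany them.
JANSEN_LENGTHS = {
    "a": 38.0, "b": 41.5, "c": 39.3, "d": 40.1, "e": 55.8,
    "f": 39.4, "g": 36.7, "h": 65.7, "i": 49.0, "j": 50.0,
    "k": 61.9, "l": 7.8,  "m": 15.0,
}
```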

The other day I got to work on a capsule design in Grasshopper. The basic idea was to create a series of struts between two spheres of different sizes (the logic is shown in the diagram below).

To put this together, we started with two spheres and populated points across them. Then we used Exoskeleton (a Grasshopper plugin) to generate a network of connecting lines between the two spheres. I also worked in an algorithm which generates a solid ring around the centre, so that the two halves can be pulled apart without sitting as disparate elements. The network was then thickened to generate the geometry.
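A rough sketch of the point-population step; the original used Grasshopper’s populate components, and the counts and radii here are made up:

```python
import math
import random

# Uniformly sample points on a sphere by normalising Gaussian vectors.
def sphere_points(radius, n):
    pts = []
    for _ in range(n):
        v = [random.gauss(0.0, 1.0) for _ in range(3)]
        s = math.sqrt(sum(c * c for c in v)) or 1.0
        pts.append(tuple(radius * c / s for c in v))
    return pts

inner = sphere_points(4.0, 150)   # smaller sphere
outer = sphere_points(6.0, 150)   # larger sphere
```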

The third render was done with very thin struts; the second render shows the wireframe breakdown of the model; and the main render is the thickened mesh with a psychedelic-looking caustic pattern, because why not?

I used several programs to create this overall shape. First, the base geometry was created in Rhino’s Grasshopper, with the help of the Exoskeleton and Weaverbird plugins. I then took the model into 3ds Max, where I generated the housing for a magnet on each side of the model. From there, I took the model into ZBrush, because I’ve found that ZBrush handles mesh booleans far better than any other program I’ve come across, especially with its DynaMesh feature. Then it went back into Rhino to slice the mesh into two chunks and close the holes (unfortunately ZBrush wasn’t able to do this part properly), and then into Maya to fill some small holes in the mesh. From there, I brought it back into ZBrush to retopologise it with ZRemesher, and finally into 3ds Max for rendering.

One thing that’s bothered me for a while is the way mesh faces are sorted within the mesh geometry. For this set of renders I rerouted the order in which the individual mesh faces are joined together, then ran the faces through a predetermined culling pattern to generate the base geometries. While my initial plan was to create a very ordered patterning system, most of the interesting results (pictured below) were completely unpredictable.
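The culling step itself is simple. A minimal sketch; the mask here is just illustrative, not the actual pattern I used:

```python
# Keep or drop faces according to a repeating boolean mask applied
# over the (re-sorted) face list.
def cull_faces(faces, pattern=(True, True, False)):
    return [f for i, f in enumerate(faces) if pattern[i % len(pattern)]]
```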

As often happens with algorithmic modelling, while the designer can understand the process they’ve gone through to create something, the result is largely unpredictable, and this, for me, is the beauty in some of the things created as a result. (Grasshopper definition)

The trouble I’ve had up until now has been how reaction diffusion translates into architecture going forward. As I’ve said earlier, reaction diffusion has a very strong aesthetic associated with it, and I think it would be wrong to turn that into an immediate representation of architecture. But I suppose the main reason I was having this issue was that it was difficult to gain any sort of control over the system I was trying to implement. Enter Grasshopper. In the last few hours I came across a way to represent a similar form in Grasshopper. Whereas before I’d been using Processing, Ready, or Houdini, Grasshopper is the tool I know best, so I’m hoping this will open up my opportunities.
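For context, the classic formulation behind most of these tools is the Gray-Scott model. A compact NumPy sketch with standard textbook parameters, not necessarily what the Grasshopper version uses:

```python
import numpy as np

def laplacian(Z):
    # 5-point stencil with wrap-around (toroidal) boundaries
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4.0 * Z)

def gray_scott_step(U, V, Du=0.16, Dv=0.08, f=0.035, k=0.065, dt=1.0):
    # U is consumed by the reaction U + 2V -> 3V; f feeds U, k kills V
    UVV = U * V * V
    U += dt * (Du * laplacian(U) - UVV + f * (1.0 - U))
    V += dt * (Dv * laplacian(V) + UVV - (f + k) * V)
    return U, V
```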

Here is the result at the basic level; this is what I have to work with, and it could be the catalyst I’ve needed for some time.

Over the last week or so, we began the site analysis for our given site. Here’s the aerial view of the Chamberlain Golf Course, which I obtained from the Auckland GIS Viewer.

We typically tend to use contour lines to express the shape of the land, as they are remarkably easy to find and are a great visualisation tool, but this time I wanted to try something a little different: I created a heightmap in Grasshopper. In the image below, the darker regions denote the lowest areas of the site, while the white areas are the peaks/crests.

This image is based on contours we obtained for the site, but to me it works better as a visualisation tool for the shape of the land. The trouble I have with contours is that although they might adequately describe the shape of the land, without height values it’s difficult to tell whether the site slopes up or down (or both). When we overlay this on top of the site, we get (in my opinion) a clearer understanding of the shape.
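The mapping behind the image is straightforward. A minimal sketch, assuming a linear ramp from the lowest to the highest sampled elevation:

```python
# Map sampled elevations to greyscale values: lowest -> 0 (black),
# highest -> 255 (white).
def heightmap_shades(heights):
    lo, hi = min(heights), max(heights)
    span = (hi - lo) or 1.0
    return [int(round(255 * (h - lo) / span)) for h in heights]
```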

The trouble with this method, however, is that it is far more time-consuming than contours: I needed to render this image out at high resolution to be able to print it, whereas contour data, being vectors already, is quite easy to work with. I think this image, paired with the matching contours, would provide the best result though.

This was another by-product of the voxelisation series, discovered while tinkering with the parameters I’d set in my Grasshopper file. I really liked the way the cube decomposes through the animation; here are several of the keyframes I like best from this short. This model (unlike the prior one) seems to hold its overall shape a little better, but I think that might simply be due to the voxelised nature of the model.

This post ties in with my previous voxelisation series and the next post. This is a short animation I rendered out of the transformation the mesh goes through before it gets voxelised. What I find really strange but kind of interesting is the rapid change that happens just before the ten-second mark, the moment where we completely lose any description of a cube and the mesh undergoes a massive change in form. Some of the mesh characteristics at this moment are also rather intriguing: for a brief moment the mesh has a very fine/refined edge to it, a sort of crispness. This is typically an effect I’ve come across in something like 3ds Max when you add several swift loops right up against each other, but here it happened completely algorithmically, so I’m not entirely sure how this detail developed; something to look into in future.