Context

In its early days, the internet was a free and open platform, always on course to change the world and the way we share information. It started off chaotic and disorganised, and like many technologies before it, it took a centralising force to turn it into a more unified, manageable system. Newspapers, radio, television – they all progressed this way. And the internet today is undeniably the biggest communicative influence of them all.

Since its inception, a lot has changed about the way the internet functions.

  • In 1980 there was Usenet. Anyone could run their own server, connect to other computers and start sending messages; its organic design meant no one could control it.
  • In 1990 the World Wide Web appeared; users could host a website on personal servers and buy a DNS domain to direct traffic to it. Websites then linked to each other, forming a web of information.
  • Today websites are no longer managed by users; instead, information is filed under monopolised organisations. Content created by individuals is managed and stored by large commercial services.

However, when a network has only a few influencing parties, the monopoly on information distribution is incredibly powerful.

  • Governments like monopolies because they are very easy to influence. When one organisation is in charge of information, it is easy to foresee external threats and monitor communications.
  • Companies like monopolies because they are built to maintain power and wealth. They do just as much data collection as governments, devising new ways to generate additional revenue. Infrastructure companies manage how data is transported, and large providers extract fees from smaller networks that need their infrastructure for transport; this hierarchy means large sections of infrastructure sit unused because of access limitations. These administrations continue to grow daily, consuming their competitors and directing ever-larger revenue streams to a single destination.

Today we see the threat that companies pose to net neutrality. AT&T, Verizon, T-Mobile, Comcast – all of these ISPs pose a threat to the open internet, spending unfathomable amounts of money to lobby governments into bending over, so that they can reap more profits by charging premiums to deliver faster content.

Furthermore, we’ve seen the threat posed by ransomware when government agencies like the NSA try to install backdoors into software that is supposed to be secure, or withhold vulnerabilities they find so they can use them to widen the scope of their mass data collection – at the risk of everyday citizens having their hardware infected with malicious code.

The Project

This project visualises an imminent future in which the systems instigated to exert control over communications and the sharing of knowledge and content have failed. The administered centralisation has led to mass mistrust and a divide between governance and liberty, and this has driven the people to take action, re-harvesting the infrastructure to create a newer, more level, open-source network – free from monopolisation and privatisation.

The idea is loosely based on principles of natural decentralisation found in insect colonies – where control is distributed amongst the homogeneous biological agents who act upon local information – resulting in complex global behaviours which propagate through the entire system.

The network is established as a series of structures that interlink with no converging pathways or destinations to monitor. Each node is an assembly of pneumatically constructed rings, and each holds a series of interconnected servers running applications stored in localised databases.

The servers form a shared global network that can move data around while maintaining ownership of that data. By stripping out ISP hierarchies and keeping every server impartial, the network makes full use of otherwise unused computing power and avoids the need for data to travel to other cities to get from one network to another.

Our future will ultimately rely on our ability to take ownership of our own circumstances. The ‘internet’ will no longer be dominated by corporate greed, but led by the people – people who are inherently bound to perhaps our most influential driver of contemporary development. Vertebrae is thus the backbone of our society as we move forward: a system constructed by the collective, a free and open internet for all.

I’ve been on holiday for a while now and haven’t been able to update my tutorials for the last few months, but regardless, I still get plenty of questions about some of the content that I produce.

One of the most common questions I get is about data trees. They seem to be a very tough concept to grasp, and rightly so – so I thought I might have a go at explaining them in this post.

For starters, data trees are the way that information is passed around in grasshopper. They are a useful way to create and manipulate hierarchies, and everything you do in grasshopper relies on data trees – whether you are passing through a single data point, a list of data points, or lists of lists of data points.
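If you ever script in GhPython, these same structures appear as DataTree objects, and building one by hand is a good way to see the hierarchy. A minimal sketch (the numbers are just arbitrary sample data):

```python
# GhPython: build a small "list of lists" data tree by hand
from Grasshopper import DataTree
from Grasshopper.Kernel.Data import GH_Path

tree = DataTree[object]()
for i in range(3):                          # three branches...
    tree.AddRange(range(6), GH_Path(0, i))  # ...each a list of six items

a = tree  # wire 'a' to a panel or param viewer to inspect {0;0}...{0;2}
```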

Panels and Param Viewers

Two very useful tools that we have for understanding data trees are the panel (yellow) and the param viewer (grey; double-click to toggle between its two views). The panel tells us exactly what is in our data tree, and the param viewer gives us an idea of the structure of our data.

I’ve been racking my brain for the best way to explain data structures, and I think the best way to understand it is to think of it like a street address. We can find out exactly where an item is located (much like how we find out where people live) based on its index, which is broken down into two parts:

  • path index (think of these numbers as the country, ZIP code, suburb, street name, etc.)
  • item index (think of this as the exact street number)

The path index is the string of numbers inside the curly braces, denoted by {}. Each path holds its own list of values, so when we have a collection of lists, we call this a list of lists.

The item index is the number at the leftmost of every line in a panel, and it always starts at 0. This is a programming convention: 0 is always the 1st item in a list.

So if we wanted to find the first item in this data structure,
we would be looking for {0;0;0}(0).
The second item would be {0;0;0}(1), etc.
The seventh item in this collection, however, would be {0;0;1}(0) – the first item in the second list. This is because every list in this data structure has six items, so the seventh item is found inside the second list.
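The same addressing works in code. In a GhPython sketch (assuming a `tree` input with the paths pictured above), the {path}(item) notation maps directly onto Branch() plus a list index:

```python
from Grasshopper.Kernel.Data import GH_Path

# 'tree' is assumed to be a DataTree input with paths {0;0;0}, {0;0;1}, ...
branch = tree.Branch(GH_Path(0, 0, 0))      # the list at path {0;0;0}
first = branch[0]                           # {0;0;0}(0)
second = branch[1]                          # {0;0;0}(1)
seventh = tree.Branch(GH_Path(0, 0, 1))[0]  # {0;0;1}(0)
```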

Try using more panels and param viewers, and pay attention to the path indices; as you can see, they appear in both the panel and the param viewer, which makes it easier to debug your scripts.

Graft, Flatten, and Simplify

So what do these functions do? They are all simple ways to manipulate our data structure.

Flattening our data structure strips out the hierarchy. As you can see, the first collection of data is organised into eleven lists, each with six items in them, but when we flatten it, it becomes one list of 66 items. Note that our path index also changes: when you flatten a list, everything is moved into a new path, which is now {0}. This would be like moving every single house onto one main street.

Grafting, on the other hand, takes every item and puts it on its own unique list; this would be like putting every single house in a neighbourhood on its own street. So if we look at our param viewer now, we can see that there are 66 lists, and each of them has a single item. Also note what happens to the path indices: we keep the initial {0}, and a new index is added after a semi-colon,
so our first item is located at {0;0}(0)
and our second item is located at {0;1}(0), etc.
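To make the path arithmetic explicit, here is a GhPython sketch of both operations written out by hand (the DataTree class ships with its own flattening methods; this is purely illustrative):

```python
from Grasshopper import DataTree
from Grasshopper.Kernel.Data import GH_Path

def flatten(tree):
    # every item moves into the single path {0}, in branch order
    out = DataTree[object]()
    for path in tree.Paths:
        out.AddRange(tree.Branch(path), GH_Path(0))
    return out

def graft(tree):
    # every item gets its own branch: {p}(i) becomes {p;i}(0)
    out = DataTree[object]()
    for path in tree.Paths:
        for i, item in enumerate(tree.Branch(path)):
            out.Add(item, path.AppendElement(i))
    return out
```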

So when is it useful to flatten and graft?

When we flatten data, in essence, it becomes easier to access. It would be a lot easier for eleven separate posties to deliver their junk mail if everyone lived on the same street. Or in grasshopper: it becomes a lot easier to connect eleven grafted points to each of the 66 points in our hierarchy if that collection of points is flattened.

But if we wanted each of our posties to only deliver mail to their assigned street, we would graft our eleven posties and connect them to our original rectangular grid, and because that is already broken up into eleven lists, each with six items, we have corresponding data structures.

And what about simplify…

Simplify is probably a function you won’t use at first; it’s a way to tidy up your data structure. Sometimes when you’ve strung a lot of components together, they will add placeholder indices – this is something David Rutten talks about in a more detailed blog post if you are interested. Simplify basically eliminates all the placeholder values that have accumulated through your definition.

The second image shows when this might be useful. If you were trying to combine two data sets whose paths were different, they would not merge together into lists; but when we simplify the data coming into the merge component, we can see that it outputs eleven lists, each with the twelve items we wanted in them.
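As a rough illustration of what simplify does, here is a GhPython sketch that strips leading path elements shared by every branch (the real component is cleverer about which segments it removes, so treat this as an approximation of the idea):

```python
from Grasshopper import DataTree
from Grasshopper.Kernel.Data import GH_Path

def simplify(tree):
    # drop leading path elements that are identical across every
    # branch, keeping at least one element per path
    paths = [list(p.Indices) for p in tree.Paths]
    while paths and all(len(p) > 1 for p in paths) \
            and len(set(p[0] for p in paths)) == 1:
        for p in paths:
            p.pop(0)
    out = DataTree[object]()
    for old, new in zip(list(tree.Paths), paths):
        out.AddRange(tree.Branch(old), GH_Path(*new))
    return out
```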

Additional Resources

Modelab Grasshopper Primer info on data trees

David Rutten’s master class on data trees

One of my early tutorials on data structures

The why and how of data trees – a comprehensive explanation of why we use data trees in grasshopper, by David Rutten

Over the last little while, I’ve been looking at UV texture mapping via grasshopper. Now, grasshopper itself does not provide any UV tools, so in order to do any mapping, the geometry had to be baked, UV’d, and then exported for every single frame.

In order to automate this process, I relied heavily on being able to construct untrimmed NURBS surfaces, as one of the properties of a NURBS surface is that it fills the 0-1 UV space by default. From this step, it was very easy to transfer UV data onto a mesh via the ApplyCustomMapping command.
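In RhinoCommon terms, the transfer step can be sketched roughly like this – a simplified stand-in for what the ApplyCustomMapping step achieves, assuming the mesh has no texture coordinates yet:

```python
import Rhino.Geometry as rg

def transfer_surface_uvs(mesh, srf):
    # pull each vertex onto the surface and store the normalised
    # (u, v) parameters as that vertex's texture coordinate
    for v in mesh.Vertices:
        ok, u, t = srf.ClosestPoint(rg.Point3d(v))
        nu = srf.Domain(0).NormalizedParameterAt(u)
        nv = srf.Domain(1).NormalizedParameterAt(t)
        mesh.TextureCoordinates.Add(nu, nv)
    return mesh
```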

For the first animation, I tried blending a plane into a sphere (based on a technique I outlined in a previous tutorial), and then ran a wave-cycle simulation over the top using kangaroo, which gave the wobbly appearance. The texture was also animated using data from the meshes, such as displacement from the pre-wave simulation, to give lighter and darker spots across the object.

For the second animation, the same wave-cycle simulation setup was used, but applied slightly differently. The raw surface blend was exported with no displacement, and the displacement was calculated per frame and rendered out as an animated normal map. This can be seen in the second pass; the first pass is just a checkerboard pattern to visualise UV stretching. Not baking the displacement into the geometry has the benefit that any sort of animated displacement map can be applied further down the line, and it probably worked better than the first animation.

Furthermore, in the second animation I tried a different method for blending between forms. Instead of targeting each surface point at an end point on the target surface, I aligned several objects along their U direction in order to generate a series of tracks through their collections of UV points, and then set up an interpolation track to create the animated meshes, as visible in the last video.

In this tutorial we dive deeper into the world of polygon modelling techniques within a parametric workflow in order to build a robust and highly customisable definition for vase modelling. We look at driving our definition with a series of inter-related graph mappers, and at only partially modelling the form in order to take advantage of rotational symmetry.

Furthermore, we use low resolution mesh building techniques, so that the models we produce in the end are very cleanly constructed, optimised for subdivision, and easy to transfer into other predominantly mesh based modelling packages such as 3ds Max and Maya.
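The rotational-symmetry trick is essentially a polar array: model one segment, then rotate copies around the vase’s axis. A rough RhinoCommon sketch (the segment mesh and copy count are placeholders):

```python
import math
import Rhino.Geometry as rg

def polar_array(segment, count):
    # rotate copies of a single modelled mesh segment about the
    # Z axis and weld them into one continuous form
    result = rg.Mesh()
    for i in range(count):
        copy = segment.DuplicateMesh()
        angle = 2.0 * math.pi * i / count
        copy.Transform(rg.Transform.Rotation(angle, rg.Vector3d.ZAxis,
                                             rg.Point3d.Origin))
        result.Append(copy)
    result.Weld(math.pi)  # merge the coincident seam vertices
    return result
```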

Pictured below is just a sample of the possibilities of this method that I have implemented on my own accord, showing the true versatility of a well-built grasshopper definition.

One of my biggest qualms about geometry creation in rhino and grasshopper is that most users do not take the time to construct good meshes or NURBS surfaces. When learning a program such as 3ds Max or Maya, we try to construct with quads, to consider edge flow, and to avoid self-intersections. These are all good practices, but they don’t always carry over into rhino and grasshopper workflows.

If you’ve ever used the marching cubes algorithm for creating a mesh isosurface, or the mesh machine tool, or tried converting a trimmed surface or BRep into a mesh, more often than not you are going to end up with a pretty ghastly-looking mesh composed of irregular triangles. I’d ask you to put more thought into your creation: think about the construction of the geometry, avoid booleans, try to create objects out of untrimmed surfaces, and then compare the results.
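An untrimmed surface, for example, gives you a clean all-quad mesh almost for free, because its UV domain is a simple rectangle. A minimal RhinoCommon sketch:

```python
import Rhino.Geometry as rg

def quad_mesh_from_surface(srf, nu, nv):
    # sample the untrimmed surface on a regular UV grid and
    # stitch the samples into an all-quad mesh
    mesh = rg.Mesh()
    for j in range(nv + 1):
        for i in range(nu + 1):
            u = srf.Domain(0).ParameterAt(i / float(nu))
            v = srf.Domain(1).ParameterAt(j / float(nv))
            mesh.Vertices.Add(srf.PointAt(u, v))
    for j in range(nv):
        for i in range(nu):
            a = j * (nu + 1) + i
            mesh.Faces.AddFace(a, a + 1, a + nu + 2, a + nu + 1)
    mesh.Normals.ComputeNormals()
    mesh.Compact()
    return mesh
```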

Of course, that’s not to say you shouldn’t use these methods; I’ve found many good uses for them, and there are times when they are unavoidable. However, it may be a good idea to look at how you could retopologise these items later. I’ve had to do this many a time, and Maya provides excellent toolsets for redrawing meshes; I did so with my latest efforts from mesh machining:

Another reason (and the main reason I want to stress) why it is good to construct meshes out of quads is that it makes subdividing the geometry so much easier. When it came to render time, my triangular mesh needed 300,000 polygons to achieve a smoother look, while the quad mesh could achieve a much cleaner result with only 10,000 polygons, meaning my renders were much faster and I could spend more time on look development.

Furthermore, you can very quickly see the difference in the way the two meshes deal with the caustic patterns; the quad mesh creates a much cleaner rendering of the refracted light than its triangular counterpart.

I should also point out at this stage that there are two different algorithms commonly used for subdividing. The first, and much more widely used, is Catmull-Clark subdivision. Developed in 1978 by Edwin Catmull and Jim Clark, it is available in most modelling programs, as it works very well with quad meshes and gives very smooth results. On the other hand, if you are still hell-bent on using triangular meshes, some packages offer Loop subdivision, which works much better on triangles and on more topologically irregular geometries.

I’ve taken some time as of late to further explore Daniel Piker’s Mesh Machine component for grasshopper, and it really is quite a unique tool to have. I’d only ever used it to create a more uniform mesh topology distribution, but it also features mesh adaptability tools, mesh point distribution based on curvature, guide geometries to manipulate your meshes, and mesh relaxation. (more info on Mesh Machine)

So in this first image I produced a sequence of increasingly unrefined meshes by systematically scaling and orbiting curve guide geometries. It should be noted that producing obscure mesh results like this is likely not mesh machine’s intended purpose; rather, these images capture states just before the simulation breaks.

Following this, I had a look at retopologising a slightly different kind of geometry and at what information I could extract from the result. A preliminary mesh surface was built which tracked information about bending, shearing and stretching moments in localised areas of the mesh, producing a result of this kind.

And following on from that, I wanted to see how this information about the mesh state could be pipelined into the rendered image. I ended up using the colour-per-vertex information for reflection and refraction textures.
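The colour-per-vertex step itself is simple. A sketch of the idea, mapping one scalar per vertex (bending, shearing, whatever you tracked) to a greyscale vertex colour that render textures can sample:

```python
def colour_by_values(mesh, values):
    # map one scalar per vertex to a greyscale colour-per-vertex channel;
    # 'values' is assumed to hold one number per mesh vertex
    lo, hi = min(values), max(values)
    mesh.VertexColors.Clear()
    for v in values:
        t = (v - lo) / (hi - lo) if hi > lo else 0.0
        g = int(round(255 * t))
        mesh.VertexColors.Add(g, g, g)
    return mesh
```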

My latest tutorial looks at principles of mesh modelling in grasshopper. In my experience, grasshopper (and rhino, for that matter) is quite bad at mesh modelling. Of course, given that rhino is a predominantly NURBS-based modelling program, it is understandable that the mesh capabilities are somewhat lacking, but that doesn’t mean we should sacrifice mesh quality when we are actually trying to build something – in fact, quite the opposite.

I would highly recommend that any rhino/grasshopper user have a go at using 3ds Max, Maya, Cinema4D, or some other mesh-based modelling program, to at least gain an understanding of how to model objects using meshes; I have found it an absolutely invaluable way to further my use of grasshopper. Apart from a well-constructed mesh inherently being more pleasing to look at, it is also beneficial for debugging issues with your model, and very useful if you are planning on transferring it into other programs.

So in this tutorial, we take a look at some of the tools available inside grasshopper for mesh modelling: translating from surfaces to meshes, face-normal issues, how to construct with quads, the basics of subdivision, and other techniques.
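As a taste of the face-normal side, the usual clean-up after converting a surface or BRep to a mesh looks something like this in RhinoCommon (a generic sketch, not the tutorial’s exact definition):

```python
def clean_mesh(mesh):
    # typical face-normal clean-up after a surface-to-mesh conversion
    mesh.Faces.CullDegenerateFaces()  # drop zero-area faces
    mesh.UnifyNormals()               # make all faces point consistently
    mesh.Normals.ComputeNormals()     # rebuild vertex normals to match
    mesh.Compact()
    return mesh
```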

During our processing design paper last year, we looked into the material properties of wax, specifically how it reacted to being melted and then cooled to varying degrees. In the last few days, I have been working on a technique in grasshopper which to some extent reproduces those results, so I did a few iterations and thought I’d post up the digital wax models.

This technique was born out of my trying to intentionally ‘break’ kangaroo simulations, something I would encourage you to try if you are an avid grasshopper user; when you push the boundaries of a simulation, interesting things can happen. In essence, I rapidly changed the tension of a mesh in kangaroo. This, in conjunction with a bit of magic to avoid self-collisions in the engine, produced some much more raw results. Then it was just a matter of applying a little smoothing over the top of the mesh (note: iterative smoothing, not subdividing) to even out some of the larger creases that had formed, and voila! The following results were produced.
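For the smoothing step, the iterative approach can be sketched as a simple Laplacian smooth in GhPython (my actual definition differed; this just shows the principle of nudging each vertex towards the average of its neighbours):

```python
import Rhino.Geometry as rg

def laplacian_smooth(mesh, iterations=10, factor=0.5):
    # iterative smoothing (not subdivision): move each vertex part
    # of the way towards the centroid of its topological neighbours
    tv = mesh.TopologyVertices
    for _ in range(iterations):
        targets = []
        for ti in range(tv.Count):
            p = rg.Point3d(tv[ti])
            nbrs = tv.ConnectedTopologyVertices(ti)
            if not nbrs:
                targets.append(p)
                continue
            avg = rg.Point3d(0, 0, 0)
            for n in nbrs:
                avg += rg.Point3d(tv[n])
            avg /= len(nbrs)
            targets.append(p + factor * (avg - p))
        for ti in range(tv.Count):
            for vi in tv.MeshVertexIndices(ti):
                mesh.Vertices.SetVertex(vi, targets[ti])
    mesh.Normals.ComputeNormals()
    return mesh
```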

In this tutorial we make extensive use of the path mapper in grasshopper in order to create the following piece of geometry. The path mapper is a reasonably difficult tool to get used to, but it really helps us break down data trees and learn how to manipulate them to achieve what we want. We also take a look at the very under-utilised construct mesh tool in grasshopper and how it helps us achieve this technique. Also be sure to take a look at this handy path mapper cheat sheet.
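To demystify the notation a little: a cheat-sheet pattern like {A;B} → {B;A} only renames branch addresses, never their contents. The equivalent written out in a GhPython sketch (a hypothetical example, assuming every path has two elements):

```python
from Grasshopper import DataTree
from Grasshopper.Kernel.Data import GH_Path

def remap_swap(tree):
    # the path mapper pattern {A;B} -> {B;A}: branch contents stay
    # put, only the branch addresses are rewritten
    out = DataTree[object]()
    for path in tree.Paths:
        a, b = path.Indices[0], path.Indices[1]
        out.AddRange(tree.Branch(path), GH_Path(b, a))
    return out
```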

Data trees are among the tougher concepts of grasshopper to learn; if you want to learn more about them, check out this newer post, in which I give a basic explanation and cover some of the fundamental concepts.

In the first of my advanced tutorial series we take an in-depth look at how to create the parametric bench seen below. Among the creation parameters for this bench, we look at the overall shaping, profile blending in order to create different seating areas, and eventually how to convert that form into a series of contours that match the curvature of the bench itself.
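The contouring step at the end maps onto a single RhinoCommon call. A minimal sketch (the bench Brep and spacing are placeholders, and the slicing direction here is simply the bounding box’s X axis):

```python
import Rhino.Geometry as rg

def bench_contours(bench, spacing):
    # slice the bench form into parallel section curves along X
    bbox = bench.GetBoundingBox(True)
    start = rg.Point3d(bbox.Min.X, bbox.Min.Y, bbox.Min.Z)
    end = rg.Point3d(bbox.Max.X, bbox.Min.Y, bbox.Min.Z)
    return rg.Brep.CreateContourCurves(bench, start, end, spacing)
```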