Rotterdam, March 29, 2007
by Arie Altena
Thursday afternoon, March 29, 2007. The Dutch artist Marnix de Nijs is hard at work realizing the first public presentation of his project EI4, Exercise in Immersion 4. A demo version of this work will be on display in the exhibition at DEAF07, the Dutch Electronic Art Festival. EI4 is an art game, a technically complex project for which De Nijs is developing software in collaboration with V2_.
AA: What can we see during the first “user test” of EI4?
MdN: EI4 is a game you play in a big room. When you play you wear a crash suit and goggles that give you a stereoscopic video feed of the hall, which is recorded by a camera attached to your head. When you walk onto the playing field, you see “bions” flying around – they’re kind of intelligent blobs of mucus. You have to collect them by walking up to them; then they stick to you. The bions help you navigate in the higher levels of the game. There are concrete pillars in the hall, but you can’t always see them because of the goggles. The bions always fly around them, though, so they’re an extension of your self that helps you navigate. The game thread of EI4 is “assembling” an instrument you can use to survive in a changing world. The world changes in the game from a video feed of the actual space to a modified and adapted 3D copy of it. The gameplay focuses on the boundary between virtuality and reality, between real and unreal. I place myself at that boundary and play games with it, like taking away pillars in the game world that are there in reality, so you run into them. It’s a physical way of showing that boundary. It doesn’t get any more physical than that. You smash right into the pillar, and out of the artificial world. That’s why you’re wearing a crash suit. The dividing line between reality and unreality, immersion and non-immersion, also plays a part in my previous work, Run Motherfucker Run (RMR). In that piece, the treadmill hurries you along; you can gain control over it, but if you don’t, then you’re literally flung off the treadmill, and thus physically thrown out of the gameworld.
In your installations, you often take the person who’s experiencing the
work to a point where their senses, or at least their eyes and ears,
are receiving different information from their body. In this you seem
constantly to be seeking an extreme: crashing into a pillar, getting
really nauseated...
You can see them as tools you have to learn to use within the span of
time when you’re experiencing the installation. I always try to program
it so that within a reasonable span of time you can find a new balance
between all your sensory inputs: auditory, visual as well as that of
your organ of balance – those are the three basics in my work. They are
the ingredients between which I create a new balance, and you either
find it or you don’t...
You’re building a machine that creates an imbalance – and the job of
the person who enters the installation is to find a new balance.
Exactly.
In that sense it’s about the incorporation of technology. Much of
your work has a heavily physical presence: big machines, a big speaker
thrashing around on a six-foot arm, hovercrafts, a chair spinning
wildly inside a projection. At first glance, it makes it seem as if
your work is about machines, like the machine art of the 1980s. But
when you experience it, you find out it’s about the power of the human
being, who triumphs over the machines by adapting.
But a lot of people aren’t able to. It’s an investigation into that
theme. That’s how I program the balance of the installation. It would
be very easy to make the machine much meaner. I could set the treadmill
in RMR so that everyone would fly off. But I don’t. I try to find and
indicate the boundary between balance and imbalance. I’m not
pessimistic about technology; my work is not a critique along the lines
of “We’re weak little humans of flesh and blood, and we bleed, and the
violence of technology is killing us.” If you’re thrown off the
treadmill, you might think that. I’m looking for a balance – and
sometimes I lean more to one side, and sometimes the other – because I
don’t have a very comfortable relationship with technology. That might
sound strange coming from me, but I’m really not that happy about
spending so much time sitting in front of those computers (he points to
his desk) there on my work table. And it causes me physical problems if
I sit there too long.
But you are fond of technology...
I like tinkering with cars, and here is my invention for today: a thing
you can use to make a spirit level aim a laser beam straight up in the
air. I made it today, because for EI4 we had to hang sensors on the
ceiling in very precise positions. We used this to measure out a
diagram on the floor and then project it onto the ceiling. I make tools
for things like that. You could buy it in a store, but I like making
things myself. I also like fixing things. It’s a general interest of
mine, which led me to this kind of art. My first pieces were machines,
but they weren’t interactive. At a certain point I wanted to make them
interactive.
Why?
Because you can generate greater engagement. I don’t find interactivity
that interesting in itself, certainly not as a way to offer an infinite
number of variations for the viewer to choose from. My installations
tell the story I want to communicate. But I find engagement interesting.
The balance you seek is a very precise thing. How do you find that point? How do you set the balance?
With the RMR treadmill it was just trial and error. I do user tests.
Not in an academic way – I’m not going to invite a hundred people in
and ask them to fill out questionnaires. If you have a good variety of
people, then after ten tests you know exactly what’s going on. The
person who’s made the thing is the worst tester: you know how the
sensors react, how the motor is controlled, and, in the case of RMR,
how all the parameters of delay, REM curve and filtering have been set.
Even on a badly adjusted treadmill, I can anticipate what’s going to
happen. And it’s the same with the visuals. I originally had someone
write a script for the RMR film, the one you see in front of you as
you’re running on the treadmill. It was going to be an interactive film
with all sorts of things happening in it. I was sure it would work. But
in the user tests I found out that it wasn’t at all believable. The
testers were just seeing pictures, they were not experiencing it as a
world they were running through. You find out these things by making
demos and testing them. Since you can only shoot footage from one point
of view, the viewer sees every event from a long way off – after all,
on the treadmill you’re always moving forward. Imagine that a group of
people is smashing up a car in a pedestrian tunnel in the film. First
of all, in the real world you probably wouldn’t run toward them;
second, as soon as you get there in RMR you immediately pass them. You
lack the cinematic means of making that kind of scene interesting, such
as close-ups. Because of the treadmill, you don’t have montage or
rhythm. Tim Etchells' script ultimately wasn't used. All that's left of
it is some of the atmosphere.
With EI4, the opposite happened, didn’t it? More and more script came
in – I’m thinking of the different levels of the game. How did that
development proceed?
The original description of EI4 for the first grant application was
very general, although I knew exactly where I wanted to go. With RMR,
the grant providers and coproducers were disappointed that the script
wasn’t used. I kept the initial description of IE4 very general, so as
not to create overly high expectations. That’s why it didn’t say
anything about game levels. I’m working on the first user test now, and
there needs to be something visual, so I have to build the levels.
There are five right now, one of which might drop out if I’m unlucky.
Couldn’t you have decided to make just one level?
Each of the levels I’m using now contains a research element. There are
a number of things I wish to investigate, and I have distributed these
among the various levels. That way I can better determine what works
and what doesn’t. In level one, you collect bions and go from a regular
video feed of the space to a half-mix, to augmented reality. In level
two, you go from augmented reality to a totally 3D world. In level
three, the whole space starts to spin on its central point. Level four
is kind of a network of corridors. That’s an experiment. The question
is whether I can create a completely different world that has nothing
to do with the real room – will you be swept up in that world as you
walk through it? Will you have a filmic experience that’s relevant? In
the last level, you come back to the real room, which then slowly
becomes infinite, and objects come flying out of that infinity, and you
try to beat them off the playing field. They’re not really there in the
room; that would be too theatrical. At the very end, I want to end up
in a kind of desert, an empty open plain with a Daliesque feel.
What software are you using for the execution?
A V2_ intern is drawing the 3D spaces in 3D Studio MAX. I give him
general instructions, I make objects, and put in the light. Then he
renders it into files that can go into the game engine. I used 3D
Studio MAX for six years, but that was before they put in all the
features we’re using now. For the past few years I’ve only used Cinema
4D. I know what’s possible, how it works and how to make light, but
it’s not my field at all.
Boris Debackere is doing the sound. How is that collaboration going?
I’ve worked with Boris before, on RMR. I like to work with him because
he’s very good at using sound to evoke a cinematic atmosphere. He and
his brother Brecht are also working with spatial sound in their project
ROTOR. Spatialized sound is a great thing to use when you’re walking
virtually through a space. If the whole space is spinning, that’s
literally Boris' ROTOR, at least if you put his sound sources in.
That’s the reason he wanted to be involved. Artem Baguinski at V2_ is
building in an audio engine that makes this possible: it lets you hang
invisible objects in the game space and link sound to them. For Boris,
it’s an interesting way of composing that’s immediately very visual. It
could be useful for him in the future, too. His role in the project is
pretty autonomous. I indicate more or less what I want, and he fills in
the details. But if he makes something I don’t like, of course I tell
him so. Otherwise, the sound is his thing.
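The audio engine Baguinski is described as building – invisible objects hung in the game space with sound linked to them – implies computing a gain and a pan for each source from the player's position and heading. A minimal 2D sketch of that idea (the function name, the inverse-distance attenuation law and the pan convention are illustrative assumptions, not the actual V2_ engine):

```python
import math

def spatialize(listener, heading, source, ref_dist=1.0):
    """Return (gain, pan) for an invisible point source in the game space.

    listener: (x, y) position of the player.
    heading:  player's facing direction, radians, measured from the +x axis.
    source:   (x, y) position of the sound object.
    gain:     inverse-distance attenuation, clamped to 1.0 inside ref_dist.
    pan:      -1.0 .. +1.0, where +1.0 means the source is hard to the
              listener's left and -1.0 hard to the right.
    """
    dx, dy = source[0] - listener[0], source[1] - listener[1]
    dist = math.hypot(dx, dy)
    gain = ref_dist / max(dist, ref_dist)
    # Bearing of the source relative to where the player is facing.
    rel_angle = math.atan2(dy, dx) - heading
    pan = math.sin(rel_angle)
    return gain, pan
```

For example, a player at the origin facing along +x hears a source two meters to the left at half gain, panned fully left; a source three meters straight ahead arrives centered at one-third gain.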
How is the collaboration with V2_Lab going? Does it make it more
complicated that you’re using technology that’s still under development?
EI4 makes use of technology that isn’t yet available to consumers and
is a long way from being fully developed, even within the academic
research institutes. We’re working with prototypes, so we run into
problems on every front. We’re actually beta-beta testers. We’ll just
keep testing all the way up to the opening. I’m using 25 Hexamite
ultrasound receivers for EI4 right now; they're location sensors. The
system is theoretically more or less infinitely expandable. In the room
where you play EI4, there’s a grid of receivers hanging from the
ceiling that receive sound pulses. On the player’s head is a sensor
that sends out pulses, and you can use triangulation to determine the
person’s position in the space. It works pretty well. I track a
seven-by-twelve-meter space, but in principle I’d like to be able to
track a bigger one. I need a lot of sensors since ultrasound doesn’t
carry very far. GPS isn’t precise enough by a long shot, and neither is
the military band – I track to an accuracy of four centimeters. The
triangulation gives an xyz position in the room. Those data are
combined in a filter with data from gyroscope sensors and a compass
sensor that measure the turning of the player’s head. This is used to
steer the virtual camera in the game engine.
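The triangulation De Nijs describes – a head-mounted beacon pulsing at a grid of ceiling receivers – can be sketched as multilateration from measured distances. Because the receivers all share one ceiling height, subtracting pairs of sphere equations cancels both the quadratic terms and z, leaving a small linear system for (x, y); z then follows from any one distance. The receiver coordinates and the `locate` helper below are illustrative assumptions, not the Hexamite software (and the gyroscope/compass orientation fusion is a separate step):

```python
import math

# Hypothetical ceiling-mounted receiver positions in meters; the real
# grid layout over the 7 x 12 m playing field is not specified.
RECEIVERS = [(0.0, 0.0, 3.0), (7.0, 0.0, 3.0), (0.0, 12.0, 3.0)]
CEILING = 3.0

def locate(dists):
    """Recover the beacon's (x, y, z) from its distances to three receivers.

    Linearized equations, for i = 1, 2:
        2 * (pi - p0) . (x, y) = |pi|^2 - |p0|^2 - (di^2 - d0^2)
    """
    (x0, y0, _), (x1, y1, _), (x2, y2, _) = RECEIVERS
    d0, d1, d2 = dists
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = x1**2 + y1**2 - x0**2 - y0**2 - (d1**2 - d0**2)
    b2 = x2**2 + y2**2 - x0**2 - y0**2 - (d2**2 - d0**2)
    det = a11 * a22 - a12 * a21  # nonzero if receivers are not collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    # Two mirror solutions exist; the player is below the ceiling.
    z = CEILING - math.sqrt(max(d0**2 - (x - x0)**2 - (y - y0)**2, 0.0))
    return x, y, z
```

A beacon at (2.0, 5.0, 1.7), for instance, is recovered exactly from its three true distances; in practice the pulse times of flight are noisy, which is why the interview mentions filtering the position together with the head-orientation sensors.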
When you know what kind of system you need, first you do some research
to see whether one exists. V2_ started doing that a year ago. Not full
time, of course. A lot of research has been done on sensor systems –
everyone’s doing it, actually. You find systems with specifications
that look promising, but then you find out no one’s done anything with
them for two years. It finally turns out that no one has a system that
really works. There’s just one outdoor system, which we might have been
able to get hold of from one of the partners of MultimediaN (one of the
main sponsors), but it wasn’t transportable. It only works outdoors,
and also it would have been much too expensive, it wasn’t a realistic
option; they build this thing into sports stadiums. So we started
working with a number of alternatives that were not optimal. We tried
to improve some of them, but that didn’t lead to a satisfactory
solution, either. Finally, we decided to use the Hexamite ultrasound
sensor system with the software provided; it turned out not to be so
bad after all, we thought. In terms of hardware, the system is almost
infinitely expandable. You can put up as many sensors as you want.
Except that now, a week and a half before the first user test, it turns
out that the software hasn’t been adapted for that at all. It’s never
been tested for a setup of more than six sensors. A programmer at V2_
is writing the code full time right now.
Fortunately, we’ve stated clearly that the presentation at DEAF is
going to be a demo, but it’s in my interest to make sure there’s as
much as possible to see. I have to get support at DEAF07 to make the
follow-up version. That’s why I almost always do a complex project like
this in two steps. First there’s a demo phase. At the end of the demo
phase, I establish the terms under which I want to continue working. By
then I also have enough visual material to show the art people, and I
can ask the film foundations for money. At the same time, a demo is a
very speculative thing. Will I be able to deliver on it? This is a big,
somewhat pretentious project. It uses theatrical lighting; we’ve made
crash suits; we’re making visuals. But right now it’s all just loose
ends; there’s no integration yet. There does have to be a balance
between what you’re presenting and what you’ve announced. I have
possibly the biggest room at the DEAF festival; there’s fancy
theatrical lighting. If there are bugs at another level of the
production, I’ll fall on my face.
On V2_’s server I ran across a document about the software
development for RMR. It mentioned similar problems: Macs that weren’t
fast enough to read all the data in real time; an overview of the
various solutions that were developed; which ones worked and which ones
didn’t.
That’s right. With RMR, the problem was that the laser scanner we were
using – the same kind as the one on the Pathfinder (I had access to it
thanks to ZKM) – was sending the Mac way too much data – more than the
computer could handle. One thing we did was to make something in
MAX/MSP. Of course, MAX is a handy tool for building things, but if you
really want something good you’re better off writing it in C.
Because then you have fewer problems reading the data – less overhead?
Yes, with MAX/MSP you make a shell within a shell, and that causes delays and can make strange things happen.
Are you a programmer yourself?
No, but I can build things in MAX. I can’t read a scanner. I understand
what’s going on at the architectural level. I did talk to Stock, one of
the programmers at V2_, about how to read the laser scanner. But I
can’t do it myself. I need the V2_ Lab for those things. For me, the
most difficult thing about the EI4 collaboration is that in this phase
it’s solely about developing the technology, the hard- and software. I
could actually write down the specifications on a piece of paper right
now. They haven’t changed in the past year. But I really want to move
on now, and I need to. There has to be something to see. The art
foundations that are giving me grants won’t be happy if all I do is
develop technology, of course. And I wouldn’t want that either.
Sometimes I really feel a discordance in these collaborations. They’re
about developing technology – as an artist, what do I want with that?
While the technological development is going on, don’t all kinds of
adjustments get made under the influence of that process that have an
effect on the project’s artistic aspects – how the piece needs to
function and look?
I play a part in the technological development, of course. But in fact
I always want just a little bit more. All the time, I’m on the edge of
my chair, ready to start working on the visual details – and I can’t
yet.
That’s the delaying effect of technological development. One always wishes to be two steps further.
Yes – please, let me do a user test, so I can move ahead! There’s no
point in developing 25 levels for EI4 yet, even though I’d like to,
because I don’t know yet which game threads work well and which don’t.
I won’t know that until after the first test; but before that can
happen, the technology has to at least work. In practice, EI4’s
artistic and technological development are intertwined right now.
That’s not ideal. And you can see it in the way we work. The V2_ Lab,
of course, works everything out very precisely. A rapid prototype would
actually be more helpful to me right now, but there is no
rapid-prototyping tool for sensor systems. It’s no accident that all
the technical universities are tinkering with it, and there’s no good
solution yet. I do sometimes wonder: Should I bother with a long
developmental process like this?
Don’t things move faster in the commercial game world?
There, they work much more within established frameworks. When you
start from scratch, you get the chance to make something really new.
That never happens in the game world, or almost never. I like the story
of how EI4 started. I’ve been working with V2_ for quite a while, and I
know the people there. Now and then I email them with a technological
question. For instance, there’s a technique for interpreting
stereoscopic video footage and generating 3D material. My initial idea
was to have someone walk through a room wearing stereoscopic goggles; I
wanted to record the footage in 3D and manipulate it. They use this
technique for making moving images from Mars. But that way you can only
make one frame per second – or even just one frame every 25 seconds. I had a question about
this, and I exchanged some emails with the V2_ programmers; we were
thinking about solutions and soon we hit on another idea. How exactly
it reached management, I have no idea. I think Artem Baguinski told
Anne Nigten about it. It turned out that it fit perfectly within a
research project of MultimediaN’s. I came by to talk about it, and we
were able to start work immediately, and pay the programmers. There’s
normally a six-month period of getting funding first. EI4 is a project
that suddenly took off in a big way after some brainstorming and a
little sparring with one of the programmers. That’s why I’d rather not
continue the development right now; I’d rather see a user test first. I
want to go back and reflect a little bit before rushing on with the
technological development.
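The stereo technique mentioned above – generating 3D material from stereoscopic footage, as done for Mars imagery – rests on the classic depth-from-disparity relation: a feature's horizontal shift between the left and right images, together with the camera baseline and focal length, gives its distance. A minimal sketch (the function and the example camera parameters are illustrative assumptions, not the system discussed with V2_):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic pinhole stereo relation: depth Z = f * B / d.

    disparity_px: horizontal pixel shift of a matched feature between
                  the left and right images.
    focal_px:     focal length expressed in pixels.
    baseline_m:   separation of the two camera centers, in meters.
    """
    if disparity_px <= 0:
        raise ValueError("matched feature must have positive disparity")
    return focal_px * baseline_m / disparity_px
```

With an assumed 800-pixel focal length and a 6.5 cm baseline (roughly eye distance), a 10-pixel disparity places a feature about 5.2 meters away. The expensive part, and the reason for the low frame rates the interview mentions, is not this formula but reliably matching features between the two video streams.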
You usually show your work at media festivals. Are you also getting
opportunities to show your work in museums and in a visual-art context?
That’s slowly been happening more. The Shanghai Biennale, which I
participated in, was really a visual art show. People from the art
world go to see it; you meet art-museum curators there, and critics who
write for art magazines. But in general, there’s usually not a lot of
interest from that side. I did try once to interest the Museum Boijmans
Van Beuningen – and it wasn’t that they weren’t interested, but it
wasn’t a priority.
I ask partly because it strikes me that there seem to be different
views or interpretations of your work. A visual-art curator sees
something different than someone from the new-media world. For example,
in articles about the revolving speaker arm (Spatial Sounds, which you
made with Edwin van der Heide) I’ve come across references to
“surveillance” and “enclosure.” To my mind, the installation doesn’t
have much to do with that. Those ideas seem to be triggered by the
fence that surrounds the installation. I’m inclined to see the fence
merely as a necessity and not as part of the installation. But
visual-art people see it differently, perhaps rightly so. What do you
think?
In Spatial Sounds I was never interested in surveillance; that’s not
what it’s about. Surveillance is Big Brother spying on you from a
distance; this is a machine that explores the space and interacts with
the viewers. At a certain point it can go crazy, but it doesn’t have
anything to do with surveillance. But when they mention the theme of
demarcation of space, they do have a point. For my graduation project,
I made small, sculpturally designed architectural spaces that alluded
to the contrast between private and public. There were speakers in
there; one was full of kebab broilers – that kind of thing. That piece
was about defining place, and so is Spatial Sounds. So that
interpretation is on the right track. And EI4 is also about two ways of
experiencing space, although other layers also come into it. Whereas
before it was about physical boundaries – a kind of shower-stall
installation as a representation of your own private space – now it’s
about boundaries in a media-dominated society: the boundaries between
virtual and “real.”
Are there perhaps also differences in the way people experience these
installations? Whereas someone from the visual art world would mainly
be concerned with the installation’s visual presence and would look
closely at that, I sometimes think I’m practically blind to it. In
Spatial Sounds, I immediately notice the sound and how the speaker
reacts to me; I pay much less attention to how it looks.
That’s how I designed it. There’s nothing superfluous about it. I could
give it a design, but right now it looks like a machine that does what
it does. That’s precisely in order to solely emphasize the speed,
movement and sound – if it wasn’t for the fact that it’s already a very
sculptural object. And it’s no coincidence that I put the
interpretation of someone like Olga Majcen – she’s a Croatian curator
who emphasizes the visual, and to whose interpretation you’re referring
– on my website. It’s an interpretation that’s easily missed. I really
love the kind of exhibitions she organizes. In really hardcore
new-media exhibitions, a lot of the time everything still tends to look
the same. I always think, pay some attention to the visual – I’ve seen
interactivity before. That was one of the things I investigated in RMR,
too: seeing if it was possible to create a genuinely cinematographic
experience. I’m trying to do the same thing in EI4. I want to create a
filmic quality. That’s why I’m working with people from the film world.
Lack of visual quality is often a reason why art people aren’t that
interested in new media exhibitions. The new media world is sometimes
also difficult to pin down. Some exhibitions place science and art side
by side without distinguishing between them. One installation might
give you a physics-related experience, and the next might give you a
carefully thought-out artistic experience. Those are two different
things. And unfortunately, it sometimes happens that technologically
innovative works of art get exhibited that are not at all strong in
terms of artistic content.
What is the defining factor in artistic content? Is it a balance
between the subject that’s being investigated and the way it’s
presented visually?
To me, a work of art – regardless of what it’s made of – is successful
when there’s a harmony, an agreement, a balance, between the way you
present it and what you’re trying to say. It could be conceptual art,
where you hang a few sheets of paper on the wall, or it could be a
sculpture, and the artist starts stammering whenever someone asks him a
question about it. The thing the work of art is trying to accomplish
has to correspond with what it’s communicating. But that’s so general –
that rule leaves you a lot of leeway. It also applies to technical
gadgets that don’t make any claims. I’m definitely interested in social
and political connotations. I comment on the world around me in a very
basic way. I have my impressions, and I incorporate them. I started
working with computers because they were around; that has an effect on
me. You can look at RMR as a metaphor for how I feel in the city. I’m
definitely interested in image content. That’s why I remade the
Accelerator. In that piece, the images had to meet a number of
requirements, since in the installation, to avoid getting sick, the
viewer has to synchronize the images while he’s spinning. I could have
gone into the city and filmed in the middle of an intersection, but I
thought that would be too literal. A forest would have worked, too,
since they look the same everywhere. For the first version, I ended up
quickly shooting some footage at Hoek van Holland. Those images
accidentally turned out to be very usable. I was there on a pretty
windy day, after the end of the beach season; the umbrellas were still
out, and while I was filming dark clouds started to gather. It was all
by chance. In the first version of Accelerator every level got darker.
It worked. To be honest, I was much more concerned with interfaces and
machines in those days, and that’s partly why I thought the footage was
good enough. In Beijing Accelerator, the remake I was asked to do, the
images are much more meaningful, and the visual interface, which Brecht
Debackere designed, is much better. There are shots of Beijing
high-rises for which entire old neighborhoods were knocked down.
In this version I avoid the romantic shots, the kind where you're
standing on a corner in the beautiful Hutong district looking over at
the advancing skyscrapers. But that theme is in there. Because of the
mix of interest in technology and concern for the visual, programmers
and curators don’t always analyze media art deeply enough. I think
people should pay much more attention to this; it will lead to better
exhibitions. Then the discrepancy between visual art and media art will
stop being relevant.
Is it easier to make art about the modern world using current technology?
To me, yes. To me, it’s always about creating situations –
installations that are real. They aren’t representations; they do
something. An artist has to engage with contemporary discourse. For me,
life in the technological society of today is the only source of
inspiration. Sometimes when I travel, I don’t see a single reason to
make art. Away from the contemporary technological world, I don’t find
a reason to create work.
[translation: Laura Martz]