Hi, my name is Or Fleisher, residing in Tel Aviv, Israel. I am a director, creative developer and artist at Phenomena Labs.
How did you come to the digital area? What is your background?
Growing up, I found my way into the musical world through studies of classical piano and later drums, and developed a passion for music and audio in general. Down the road, while working in a music studio, I further developed this passion by recording and mixing music. Following that, I studied sound design at Future Rhythm (SF, CA) and cinema studies at Tel Aviv University.

All the aforementioned came into place once I started learning at the university. Between my passion for audio, my love of cinema and the ever-growing need to investigate and develop new practices for storytelling, I suddenly had a “eureka” moment and realised that the combination of visuals and audio using emerging technologies was what I had strived for all along (and still do). It didn’t take long before I started studying 3D animation, audio-reactive programming, realtime art techniques and JavaScript, and began experimenting with these, creating experiences, installations and tools at the intersection of code, art and storytelling. For the last couple of years I have been working under Phenomena Labs, the studio I co-founded with Ronen Tanchum and Guy Fleisher, which fuses the same disciplines I mentioned earlier. The studio is home to many projects, ranging from interactive installations to VFX and feature film work, and also serves as a base for projects developed in-house.
"Digital creativity is heading towards a brighter future."
We had a very creative decade. It seems that digital creativity is more and more focused on efficiency. Do you think there is a brighter future ahead?
I am an optimist, so I’ll have to say yes.
The projects we take part in are, to some extent, possible thanks to the work of many other creative professionals from around the globe. This open-source creative mentality, which is growing bigger by the day, and the shift in popular culture giving a stage to different types of creativity, are just a few of the changes driving digital creativity towards a brighter future. As for the pursuit of efficiency, personally, I think that as long as this ongoing challenge is counter-balanced by artists and creatives working with unorthodox approaches and tools, it is not a negative thing. Moreover, as most of the work we do at Phenomena Labs fuses code, technology and creative work, efficiency is often required even at the conceptual level.
"I do not come from a formal computer science background."
Take, for example, the process of concept art creation for mobile devices: this process raises the requirement for efficient concepts, in my experience often resulting in more focused and coherent art pieces. Another point to note is that the bar, these days, is being lowered, allowing an increasing number of artists and creatives to jump into the digital world without (or with a less) steep learning curve, thus creating faster and more creative work. This has definitely helped my case: since I do not come from a formal computer science background, nowadays I am able to learn and use concepts faster than before.

How came the idea of "LIVYATANIM: Myth"?
‘LIVYATANIM’ (which means whales in Hebrew) is a band composed of producers Aviv Meshulam and Tomer Baruch. As the album was made over a rather long period of time, I had the opportunity to see it evolve during my frequent coffee breaks in Aviv’s studio. The concept behind “Myth” was conceived by me and Aviv over a period of about a month, in which we discussed topics of interest to us (astronomy, audio-reactive art and VR, to name a few). These manifested into the film after I sketched its guidelines and started working on a more coherent look, concept art and narrative.
How did you proceed ? What were the different stages of creation and development?
The first part of creating this film was laying the technical foundation on which the whole thing would be built. Coming from a sound education and background, sound plays a major role in many of the projects I do. So it did on this one, as we had the idea of controlling animation and compositing effects in realtime via MIDI notation. Basically, we wanted to “animate with MIDI”, giving a certain twist to the more formal concept of visuals reacting to audio analysis. As both Yannis Gravezas, who was leading the development, and I share a special spot for music and audio in general, we tackled this first. Another component that had to be laid out early on was WebVR. This could have been done at a later stage, but it was crucial for reviewing and shaping the experience into its desired form.
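The “animate with MIDI” idea can be sketched roughly as follows. This is not the film’s actual code, and the note-to-parameter bindings are hypothetical; it only illustrates the core mapping of MIDI notes and velocities to animation parameters.

```javascript
// Minimal sketch of "animating with MIDI": MIDI note numbers are bound to
// scene parameters, and note velocity (0-127) drives each parameter's value.
// The bindings below are hypothetical examples, not from the film itself.
const bindings = {
  36: { param: 'cameraShake', min: 0, max: 1 },    // e.g. kick drum
  38: { param: 'glitchAmount', min: 0, max: 0.8 }  // e.g. snare
};

// Map a raw MIDI message [status, note, velocity] to a parameter update,
// or return null for messages we don't animate with (note-offs, unbound notes).
function midiToParam(message) {
  const [status, note, velocity] = message;
  const isNoteOn = (status & 0xf0) === 0x90 && velocity > 0;
  const binding = bindings[note];
  if (!isNoteOn || !binding) return null;
  const t = velocity / 127; // normalise velocity to 0..1
  return {
    param: binding.param,
    value: binding.min + t * (binding.max - binding.min)
  };
}
```

In a browser, the messages would come in through the Web MIDI API (`navigator.requestMIDIAccess()`), and the returned updates would be tweened onto the scene each frame.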
"A core value was to bring this film to as many devices as we could."
After these initial foundations were laid, we started working on each scene of the film, integrating animations, developing shaders and interconnecting MIDI notes to control specific elements in the film. To begin with, I imagined the experience as riding on a rail that is often manipulated by the actual music. The connection between the sonic and visual worlds puts the film in a state of full immersion in the sound.
Since a core value set early in the project was to bring this film to as many devices as we could, we had to work efficiently, saving as many resources as possible and optimising art concepts to match that guideline. A good example of this can be observed in the glitchy, grainy atmosphere of the film, which helps smooth out visual inconsistencies between different machines.
Did you run into any obstacles while working on the project ?
We definitely had some obstacles to overcome along the way. As mentioned before, designing concept art with respect to the platform was a challenge on a more conceptual level. A more technical challenge was developing, testing and debugging on so many devices. As an extension of that, since the project uses webvr-polyfill to render stereo VR for desktop- and mobile-based headsets, we had to work within the bounds of it being at an experimental stage (at times unstable and/or unpredictable). All in all, from a dev perspective these are normal phases in any project and should be regarded as an essential part of the process.
"I served as the film’s director, art director and developer."
What are the different skills that worked on it?
Firstly, Aviv Meshulam, who created this really beautiful track (and album) and also helped shape many of the film’s concepts and art. After the concept was conceived, we continued into assembling a team composed of me and two other individuals to bring this film to life. With these artists it was, what you might call, ‘love at first sight’: the moment I explained the concept and heard their feedback, I knew these were the people I wanted to work with on this project.

Leading the development and programming was Yannis Gravezas, a brilliant developer with a keen eye for aesthetics. Rigging, modelling and watching the countless YouTube reference videos I had sent were handled by 3D supervisor Tomer Rousso. I served as the film’s director, art director and developer, and Aviv also served as the executive producer of the film. We got help from a couple of other people as well, so I would also like to thank all who made this film possible.
"VR allows the user to fully commit to the world “Myth” has to offer, without borders or interferences."
What does VR bring in particular to the user experience?
The film’s art fuses natural and digital influences and concepts. Virtual reality enables the viewer to connect to the film’s world in a direct manner, not thinking about these distinctions but feeling the atmosphere. This immersion, which happens when the film is viewed in VR, allows the user to fully commit to the world “Myth” has to offer, without borders or interferences.
How would you define interactivity in VR?
Simply put, it is the power to control and manipulate events and occurrences in virtual reality space. Interactivity in VR (and even outside it) opens the door to many levels of control. An example of a base level of interactivity in VR would be the way head orientation controls the camera rotation. Nowadays, we see combinations of data sensors, such as mounting a Leap Motion controller onto an Oculus headset, allowing hand tracking during the VR experience itself. These combinations and tools are starting to pave the way to a very interesting future for VR and interactivity, both separately and combined.
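That base level of interactivity — head orientation driving the camera — reduces to a very small piece of math. As a simplified sketch (real WebVR poses are quaternions; this keeps only the yaw component, with angles in radians and yaw 0 looking down −Z):

```javascript
// Simplified core of head-orientation camera control: the headset reports a
// yaw angle, and the camera's look direction follows it directly each frame.
// Convention assumed here: yaw 0 looks down -Z, positive yaw turns left.
function lookDirectionFromYaw(yaw) {
  return {
    x: -Math.sin(yaw),
    z: -Math.cos(yaw)
  };
}
```

In a full engine this direction (or, more commonly, the full orientation quaternion) is written straight into the camera's transform every frame, which is what makes the coupling feel immediate.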
"VR storytelling and content are at a very experimental phase, which in return helps push the limits and knowledge forward."
Is there a particular process involved in creating experiences in VR?
There are so many approaches, tools and methods for VR content creation that it is hard to set rules of thumb that apply to all. I think for many coming from 3D and animation, some concepts of VR are trivial, while others can only be learned through the act of creation. VR storytelling and content are at a very experimental phase, which in return helps push the limits and knowledge forward. With the support of major players (Unity and Unreal) and the counterbalance of open-source projects (WebVR, OSVR), it seems VR content creation will reach a point where I can advise on specific routes of action. In the meantime, if you're willing to be adventurous and have a good concept for a VR world, I suggest you have a go at it.
Do you think WebVR will be a standard ?
I sure hope so! Personally, I find the concept of democratising VR content consumption inspiring. From a consumer’s point of view, with the number of app stores, downloads and compatibility issues other proprietary formats have to offer, it is becoming increasingly hard to just “see something”. From an artist’s point of view, being able to create experiences that render on multiple VR headsets with minor modifications, if any at all, is precious. WebVR content is also very promising in that it can be consumed with no strings attached, meaning no download has to be made; with a solid internet connection, a headset and a supporting browser, you’re good to go.
"Realtime 3D web content is getting easier to produce."
Do you think people want to see more 3D content on the web?
Judging by the number of 3D web projects we have been doing lately at Phenomena Labs, it seems very likely. As 3D web renderers progress, rendering even more complex imagery online will become possible, allowing experiences that are now considered offline-render-only to play in realtime. Another point to note is that since realtime 3D web content is getting easier to produce, we see an increasing number of works from different fields being published openly and freely. So will it continue growing? I can only hope so.
Is there a new technology that is particularly exciting for you?
What really excites me, more than the actual technology itself, is the creative use it can be put to. We have been researching the combination of different bio-feedback tools, such as brainwave-sensing technologies, to drive art projects. It is very interesting, and I am sure we will see more of that in the future.
What are you working on these days?
Well, the team has been busy working on a couple of VR experiences and feature films. To name one, we are currently working on a musical performance shot in cinematic VR, with a 360° sound engine that mixes the binaural audio in realtime based on head position. A truly immersive performance.
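The head-tracked mixing described here can be sketched with a simple equal-power panning law: the angle of a sound source relative to where the listener is facing sets the left/right gains. This is an illustrative stand-in, not the project's engine — a real binaural renderer (e.g. Web Audio's `PannerNode` with HRTF) does far more — but it shows the core idea of audio reacting to head movement.

```javascript
// Illustrative head-tracked stereo mix using the equal-power panning law.
// Angles are in radians, measured from straight ahead, positive to the right.
// A real 360 engine would use HRTF-based binaural rendering instead.
function headTrackedGains(sourceAzimuth, headYaw) {
  // Angle of the source relative to the listener's nose, clamped to +/-90 deg.
  let rel = sourceAzimuth - headYaw;
  rel = Math.max(-Math.PI / 2, Math.min(Math.PI / 2, rel));
  // Map -90..+90 deg to a pan position 0..1, then apply equal-power gains,
  // so left^2 + right^2 stays constant as the head turns.
  const pan = (rel + Math.PI / 2) / Math.PI;
  return {
    left: Math.cos(pan * Math.PI / 2),
    right: Math.sin(pan * Math.PI / 2)
  };
}
```

Feeding the headset's yaw into `headYaw` every frame is what makes the mix follow the listener's head.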
Discover the LIVYATANIM ‘Myth’ experiment.