Wednesday 26 March 2014

BRRRTZZL! Or: How Electrical Engineering Can Be Fun.

Well, it took me a couple of days to get this one up, because there is quite a bit to write and also because there's not much to write. I could write loads and loads about the teaching material, but I don't want to go too deep into topics that - as important as they are for my future trade - are mainly basics for how the equipment we're using works, and neither you nor I expect me to teach you in just a few paragraphs what we've learned in 4 days. Where it is important, I will try and explain as best I can. But I reckon that will be in the future, when we're learning about effects (filters, EQs) and microphones (condenser mics in particular). So, sit back and enjoy the ride.

Wednesday brought a change in our curriculum. We got a new lecturer and we started a new big topic. For 4 days we had the first block of electrical engineering (Elektrotechnik), presented by Patrick Newman, an audio engineer and electrical engineer with a great sense of humour and a very friendly vibe around him. Half of the time he's laughing and cracking jokes and it's definitely a lot of fun listening to him explaining electrical engineering. The somewhat barren subject is quite enjoyable when Patrick humorously remarks on the dangers of working with electricity and what we should keep in mind. Those kinds of lessons tend to stick. That's how physics and natural sciences in general should be taught: interestingly.

First Patrick taught us about basic stuff - units, definitions and a basic principle to be precise.
1. electrons and their purpose
2. electric voltage/electrical potential
3. electric current
4. electric resistance
5. Ohm's law
6. electric power
7. level of efficiency
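
To make those basics a bit more tangible, here's a tiny sketch of Ohm's law, electric power and efficiency in Python. The numbers are made-up example values, not anything from class:

```python
# A minimal sketch of Ohm's law, electric power and efficiency.
# All example values below are arbitrary illustrations.
def voltage(current_a, resistance_ohm):
    """Ohm's law: U = I * R."""
    return current_a * resistance_ohm

def power(voltage_v, current_a):
    """Electric power: P = U * I."""
    return voltage_v * current_a

def efficiency(p_out, p_in):
    """Level of efficiency: eta = P_out / P_in (always <= 1)."""
    return p_out / p_in

u = voltage(0.5, 8.0)   # 0.5 A through an 8-ohm load -> 4.0 V
p = power(u, 0.5)       # -> 2.0 W
print(u, p, efficiency(2.0, 2.5))  # 4.0 2.0 0.8
```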

But then he went deeper into the topic and we learned about direct current, alternating current, AC voltage and how AC voltage can be described. Which is pretty simple: through amplitude, period and phasing. But of course, it's not that simple. There are two ways to measure the amplitude: peak value (Spitzenwert) and peak-to-peak value (Spitze-Spitze-Wert). Well, and then there's the RMS value (Effektivwert), which for a sine wave is approx. 0.707 (1/√2) times the peak value. After that we went on to the protective devices in electrical engineering: ground/earthing, protective insulation and extra-low voltage.

(1) peak, (2) peak-to-peak, (3) RMS, (4) period
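
If you want to convince yourself of that 0.707 factor, here's a little numeric sketch (my own illustration, not course material) that samples one period of a sine and measures peak-to-peak and RMS:

```python
import math

# Sample one full period of a sine with peak value 1.0 and measure
# its peak-to-peak and RMS values numerically.
peak = 1.0
samples = [peak * math.sin(2 * math.pi * n / 1000) for n in range(1000)]

peak_to_peak = max(samples) - min(samples)
rms = math.sqrt(sum(s * s for s in samples) / len(samples))

print(round(peak_to_peak, 2))   # 2.0 -- twice the peak value
print(round(rms, 3))            # 0.707 -- peak / sqrt(2)
```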

The next big topic, which used up the remaining time of the electrical engineering class, was "components". Not just any components, but 2 very basic and important ones: resistor & capacitor (Widerstand & Kondensator), which are used in various ways in audio equipment. With two resistors connected in series you can build a "potential divider", which you might know as a "fader" or "potentiometer" (or poti). Of course there are also "current dividers", but they are rarely used in audio engineering, so I'll leave them out.
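
The divider principle is just one formula. Here's a quick sketch (with illustrative component values) of how a fader splits a signal:

```python
# A voltage divider built from two series resistors -- the principle
# behind faders and potentiometers. Values are illustrative.
def divider_out(v_in, r1, r2):
    """V_out tapped across r2: V_out = V_in * r2 / (r1 + r2)."""
    return v_in * r2 / (r1 + r2)

# A 10k potentiometer with the wiper at the halfway point splits the
# signal evenly:
print(divider_out(1.0, 5_000, 5_000))   # 0.5
# Wiper near the bottom -> most of the signal is dropped across r1:
print(divider_out(1.0, 9_000, 1_000))   # 0.1
```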

Capacitors can, like resistors, be parallel- or series-connected, but the formulae are inverted: formulae for parallel-connected resistors match series-connected capacitors and formulae for series-connected resistors match parallel-connected capacitors. So, it's not that bad to do the maths with those components.
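
The "inverted formulae" bit is easy to see in code. A sketch with made-up component values:

```python
# The series/parallel formulas and their "inversion" between
# resistors and capacitors. Example values are arbitrary.
def series_r(*rs):       # resistors in series simply add
    return sum(rs)

def parallel_r(*rs):     # resistors in parallel: reciprocal of the sum of reciprocals
    return 1 / sum(1 / r for r in rs)

def parallel_c(*cs):     # capacitors in parallel simply add...
    return sum(cs)

def series_c(*cs):       # ...and in series use the reciprocal formula
    return 1 / sum(1 / c for c in cs)

print(series_r(100, 100))        # 200 ohms
print(parallel_r(100, 100))      # 50.0 ohms
print(parallel_c(10e-6, 10e-6))  # 2e-05 F (20 uF)
print(series_c(10e-6, 10e-6))    # 5e-06 F (5 uF)
```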

There's one speciality all of us will need as audio engineers: frequency-dependent potential dividers (frequenzabhängiger Spannungsteiler). These interesting elements are built with a resistor and a capacitor, which acts as a capacitive AC resistance (kapazitiver Wechselstromwiderstand). First I have to explain what a capacitive AC resistance is, or rather does. Normally, a resistor has the same resistance value regardless of the frequency. But this bad boy is different. It works on the following principle: the higher the frequency, the lower the resistance it puts up.
You might be asking now, why this is important in our future line of work. Well, it's quite simple. Depending on the setup of those two components you either get a low-cut or a hi-cut filter and you can't argue that those are not important for an audio engineer.
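
For the curious, the standard formulas behind this (a sketch of my own, with arbitrary component values): the capacitor's reactance is X_C = 1/(2πfC), and a simple RC filter has its -3 dB corner at f_c = 1/(2πRC).

```python
import math

# Capacitive reactance falls as frequency rises: X_C = 1 / (2*pi*f*C).
def reactance(f_hz, c_farad):
    return 1 / (2 * math.pi * f_hz * c_farad)

# -3 dB corner frequency of a simple RC filter: f_c = 1 / (2*pi*R*C).
def cutoff(r_ohm, c_farad):
    return 1 / (2 * math.pi * r_ohm * c_farad)

C = 100e-9  # 100 nF, an arbitrary example value
print(round(reactance(100, C)))      # ~15915 ohms at 100 Hz
print(round(reactance(10_000, C)))   # ~159 ohms at 10 kHz -- much lower
print(round(cutoff(1_000, C)))       # ~1592 Hz corner with a 1k resistor
```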

But I won't go into further details. If you're interested, you can find most of those things in a good physics school-book... or all of it at the all-encompassing, omniscient entity called "the internet" and its messiah [enter search engine of choice]. But I have another suggestion: start studying at SAE or become an electrical engineer. *wink*

I hope I'll get to write a bit more, but those electrical engineering basics aren't really fun for you to read, and if you had electrical engineering in school or your professional education, you might know this stuff even better than me. So, if I got anything wrong, don't hesitate to shout at me what an idiot I am and how I could make such blatant mistakes. But keep in mind: I was studying law until January of this year and my knowledge of technical shizzle wizzle like this isn't what I wish it were. But it will change after reading up on this subject. Promise.

Well, that's all for today. Cheers!

GIVE ME FOOD! NOW!

Thursday 20 March 2014

Mix that Mic, DAWg!

Two weeks done, 50 more weeks to go!

As I wrote in the last regular entry, we started to learn about mixer consoles last week, and I will elaborate a bit on what we've learned so far about a mixer console and its components.

Let's start off with its purposes. What is a mixer for? Well, sounds like an easy question to begin with.
1. Summing up the single tracks to one stereo track
2. adjusting the ratio between the volumes of each track
3. frequency editing
4. effects
5. adjusting the balance/panorama of each track
Moving on to the mixer itself and its most important parts.
1. Gain (Eingangsverstärkung)
2. 48V Phantom Power (48V Phantomspeisung)
3. AUX Send
4. PFL (or "Pre", "Pre Fade"; = Pre-Fader Listening)
5. Insert Send/Return
6. EQ (= equalizer)
7. Subgroup Routing (Subgruppen-Routing)
8. Solo Bus (Solobus)
9. Pan(orama)
10. Faders
11. Faders for Subgroups (Subgruppenfader)
12. Control Room Pot(entiometer)
13. Phones Jack (Kopfhörerbuchse)
14. Master Fader
15. Talkback
16. Metering (VU-meter)
Of course, there are more components which differ from mixer to mixer, but at least those 16 components should be present.

We also had a short introduction to microphones, the last topic on Monday.

Firstly, the purpose of a microphone is to convert fluctuations in air pressure (which we perceive as acoustic noise if the fluctuations are between 20Hz and 20kHz) into electric energy, so that a mixer console has something it can work with. Secondly, there are so-called "directional characteristics". They define in which area (front, back, left, right) a mic records and which frequencies it records in which area.
We've just been talking about 6 directional characteristics, but there are some more. So, what are those 6 directional characteristics?
1. Omnidirectional/Undirected (Kugel/ungerichtet!) [pic]
2. Cardioid (Niere) [pic]
3. Hypercardioid (Hyperniere) [pic]
4. Supercardioid (Superniere) [pic]
5. Shotgun (Richtrohr/Keule) [pic]
6. Figure 8 (Achter) [pic]

All of those directional characteristics have their fields of operation. Some are better for interviews, others work better with certain instruments. It really depends on what you want to record and how the room you're recording in is designed and sounds.

An important distinction for mics is whether your mic is a "condenser" or a "dynamic" mic. You could say that this distinction isn't just important, but vital for the mic's lifespan.

You might ask, "But Hörlöwe, 'vital' sounds pretty exaggerated. If it really is vital, why is that?"

That's a very good question and I'll gladly answer it. Handling a mic incorrectly can destroy it. Simple as that. And if it's a very expensive mic, e.g. a Brauner, Neumann or DPA - other expensive mics are available - it will probably gall you to no end.

So, why exactly is this distinction important?

Condenser mics are much more susceptible to vibrations than dynamic mics. If you drop a dynamic microphone, nothing much will happen. Hell, if you drop a dynamic mic from a height of 2m, it would still work just like before. But if you drop a condenser mic or just give it too much of a physical shock, the diaphragm/membrane inside can go tits up. The microphone would be no more. It would have ceased to be. Expired and gone to meet its maker. Shuffled off its mortal coil, run down the curtain and joined the bleedin' choir invisible. It would be an ex-microphone. So, please, don't drop condenser microphones. Handle them and all other equipment with care.


Mass grave. RIP.

There are certain other differences between those two types of mics. While condenser mics need 48V phantom power, dynamic mics get along just fine without those 48V. Dynamic mics are more resistant to feedback and have built-in impact sound insulation. Condenser mics are more delicate. They can record smaller air pressure fluctuations (quieter acoustic noises), their frequency spectrum is more linear and the resolution is higher. But as I wrote earlier, they tend to break more easily if you handle them incorrectly.

This is just a short summary of what we've dealt with in class. Even so, you can see what the scope of those mics is: dynamic mics feel right at home on a stage, while condenser mics prefer the warmth and safety of a recording studio.

The last topic we touched on Monday was "interfering influences while recording with a microphone". This concerns ALL directional mics, though not omnidirectional ones. Oh, and dynamic mics compensate for those influences pretty well.
1. Proximity (Nahbesprechungseffekt)
The proximity effect comes into play if you get too close to a mic while speaking. The nearer you get, the more pronounced this effect will become. What's happening there? The bass or lower frequencies get boosted, because directional mics respond to the pressure difference between the front and back of the diaphragm, and at close range this difference grows disproportionately large for low frequencies.

2. Wind & Plosives (Wind & Explosivlaute wie P, T, K, ...)
Rushing of wind, plosives and other wind-related noises can disturb a perfectly good recording session. The most common way to prevent that is a pop filter or windjammer. Just place it in front of or over the mic and all's well that should be well.

After school we went to grab something to eat, and since I wanted to do the first exercise today my lunch break was quite short. I started with my first "mini mixdown", where I had to mix 8 tracks. Getting the volume of each track into a good relation with the other tracks was the first big task. After fiddling about with the gain pot and the faders I decided to send the drum set to a subgroup and all the string instruments (3x guitar and 1x bass) to a second subgroup, so I could simply raise the volume of each subgroup, since the relation among the tracks of each subgroup was good. Next up was the EQ. I took my time adjusting the EQ for each track, looking for frequencies to be brutally cut or gloriously boosted. Effects-wise I just used a vintage phaser for the bass guitar and two out of three guitars. I applied some final touches with some reverb and panning of the tracks. Voilà, the first mixdown was done... and now Supervisor Marco made his appearance and had to listen to what I'd mixed. Well, as expected, some errors were made, especially a routing error. I double-routed some tracks via main L/R and via a subgroup, which boosted the volume of those tracks. But I also got praise for my choice of effects and the rest. I hadn't done too badly for my first mixdown, if I may say so myself. But it was far from perfect and I still have a long way to go to become a good audio engineer.

Tuesday was the day of the "Recording Systems" or DAW (digital audio workstation). Michael explained in depth the two different recording systems that are in use today, and that DAW is a very vague term, which only means a converter with memory/storage and editing capability. The two main recording systems are:
1. Standalone systems and
2. Computer-based systems
Standalone systems have a very sturdy casing and are extremely reliable. The software running these systems is stripped down to a bare minimum to ensure maximum stability. Plus, they are easy to transport.

Then there are the computer-based systems, which are basically computers (Mac, PC, Linux) with perhaps some additional hardware. They can be divided into native systems and digital signal processing-based systems. Native systems run on the computer's CPU power alone, which makes them quite resource-hungry. DSP-based systems require additional hardware, either internal or external: dedicated hardware that supports the host computer and processes all the signals, while the computer itself only provides the GUI. Of the two, DSP-based systems are the more reliable.

Besides that he explained some terms that have to do with recording systems:
linear/non-linear and destructive/non-destructive recording, audio interface, sequencer software and latency and how to avoid it through direct monitoring & buffer size. But that's something for you to look up. If you want to know more, leave a comment. =)

And Tuesday was, of course, aural training day. I started CD #4 and guess what the exercises were about. No, not frequencies. Well, not really. Who said "effects" just now? You! Yes, you over there! You're right. Effects were on today's agenda again. I should mix in some frequency exercises. It would make for a nice change and I would do better in the drill sets that involve the effect "equalization". Either next time or next week. I'll see.

Well, that's all for today. Cheers!

Monday 17 March 2014

Wow. Such Weekend. Much Fun. Doge Approve. Wow.

Weekends tend to be a bit dull, if you're in a new city and don't know enough people there yet. So, on Friday I decided to do the next aural training unit and to have a looksies at our main DAW, ProTools.

The last batch of exercises on CD #3 were again about different effects and pinpointing them in A/B drills. Same exercise, different day. I won't bother you with more details about it. Unless you want me to.

After aural training I messed around with ProTools for a bit and tried editing a practice file with speech. What I saw and used was pretty intuitive, and except for using the wrong tools the result was okay. Not anywhere near great, but okay-ish, sort of. Well, learning by doing or trial & error, what did you expect? Failing until you succeed. But this day's supervisor Marco helped me out a bit and showed me what I'd done wrong. Mental note to myself: don't use the loop tool, not even by accident, if you want to trim a clip.


A pack of wild ProTools stations appears!

Saturday was pretty exciting, even though it started out without being anything special. Sleeping in, feeding the bearded dragons some fresh salad and just waking up in general. Around noon I wanted to have another go at ProTools, but this time I would use the right tools and think about what Marco had told me the day before. As you might expect, the result was different from Friday's mucking about with ProTools. I was already much faster with editing, since I had an inkling of what I had to do, and I took my time to do better crossfades, fade-ins and fade-outs. Working on the breaths and breaks was something I focused on a bit, to get them to sound natural and believable.

In the end, the outcome was a lot better than the last. Not something I would send a radio station, but for using ProTools for the second time I'm content. Well, no, not really. One should never be completely content with what one has been doing. Having a critical point of view of one's work and just trying to do better than the last time is the only way to become better. I apply the same principle when learning and practising an instrument, because the moment you're content with your own skills you stop being critical about said skills and as a result you stop becoming better. It might sound obvious and, let's be honest, it is, but most of the time the most obvious answers are the hardest to spot. Ever heard of Ockham's Razor or Lex Parsimoniae? It's a similar principle. Google it, if you've never heard of it.


Button goes in! Button comes out! Button is stuck.


While working with ProTools I heard some nice electronic music coming from another classroom and then from the lounge, but I wasn't done in the Edit Area. After finishing what I had set out to do, I was curious as to what was going on over there and I moseyed over. A couple of students and our supervisor Daniel had rigged up some synthesizers, a drum computer, turntables, a Traktor Kontrol, some Korg monotrons and a mixer and were having a small electro session, because they had completed the Electronic Music Producer course on this very day, presentation of the certificates included. Celebrations were in order.


Less light, more sound!

But I wasn't just there listening to the soundscapes and beats they created. Daniel asked me if I knew how to operate the mixer and since I used to operate our small on-stage mixer when I was playing concerts with my band »Spielleyt Ragnaroek« I dared to do it. Besides it being a lot of fun, I could try and apply what we've learned and practised (aural training) up to now and gather some experience in mixing in a live situation without embarrassing myself too much if I did anything wrong.

Time flew by and one after another graduate left because it was some hours past closing hour. So, what else was there to do? Righto! Getting a drink as a finale for this day. And thus, with the last three graduates, Daniel and myself grabbing a drink before heading home, ends this tale. And they all lived happily ever after. Or something like that...


DOGE APPROVE. WOW!

Sunday 16 March 2014

More Physics is Effectstastic!

On Wednesday we continued where we left off on Tuesday: sound propagation in air, especially in a diffuse sound field. Four very distinct possibilities exist here:
1. reflection (Reflexion)
For a surface to reflect sound it has to be acoustically hard (smooth and hard); the angle of incidence then equals the angle of reflection.

2. absorption (Absorption)
Absorption takes away sound energy, either by converting it to warmth (absorbers made from foam), by converting it to kinetic energy and warmth (bass traps/Membran- und Plattenschwinger) or through so-called "Helmholtz-Resonatoren".

3. diffraction (Beugung)
Sound waves can travel around obstacles, but it depends on the wave length and the size of the obstacle.

4. refraction (Brechung)
Sound waves change their direction of propagation if they transition into another medium, e.g. from air to metal.

Welcome to SAE Zurich! You are here: Lobby.

Next up was sound localisation, which can be differentiated into:
1. left/right localisation
This type of localisation is very accurate. You can hear deviations of approx. 3° from the centre. Because the sound waves don't hit both ears simultaneously, there are differences for both ears in level, phase and timbre.

2. up/down & front/back localisation
Levels and phases don't have any impact here, since the sound waves reach both ears at the same time. However, two factors are relevant: the form of the outer ear and experience/expectation.

3. perception of distance
Perceiving distance solely by sound is nigh impossible and is described as "very inaccurate". It works better in rooms (through reflection).

After that we took a quick look at what rooms a recording studio would ideally have and what is important in those rooms. Just to mention the rooms: control room, recording room (live 1), drum booth, vocal booth, control room just for editing, computer/machine room.

Wednesday is, of course, also "aural training day", and since I finished the first two CDs I was excited to start with the third one, which introduced "A/B drills". In an A/B drill you have to listen to the original example A and the changed example B and then you have to find out what the change is. The third CD deals with effects, and to train recognising different effects the effects got divided into 6 categories (amplitude, distortion, compression, equalisation, stereo and time delay/reverb) with 31 possible changes. Oh boy, the fun I'll have mastering this... =)

Be vewy, vewy quiet. We're hunting effects.

Rise and shine! Thursday was effects day. Our lecturer Michael introduced us to effects and outlined the 4 main categories and their most important effects.
1. time processing effects
As the name suggests these effects manipulate the time element. Examples of such effects would be delay, reverb, hall, chorus, flanger, phaser and stereo enhancer.

2. dynamics processing effects
These effects have a big impact on the dynamics (the difference between the loudest and the softest sound) and can either extend or reduce the dynamic range. Typical dynamics processing effects are compressor, gate, limiter, expander or de-esser.

3. frequency processing effects
Effects of this category bring changes in the frequencies, i.e. filter, EQ or audio crossover (Frequenzweiche).

4. special effects
Effects that can't be lumped together in any other category end up here. Only the biggest, the baddest and the most notorious effects are part of this group, like distortion, pitch shifter, frequency shifter or harmonizer.

Lastly we started learning about mixer consoles and what the console itself and all the knobs, buttons and faders are for. But I'll elaborate this at a later date.

Well, that's all for today. Cheers!

Artist Spin2Win featuring Carpet Floor!

Wednesday 12 March 2014

Week Two - Engage!

The second week of our audio engineering studies started as well as the last week ended. We continued speaking about hearing damage on Monday. The Incredible Tinnitus made his return once again, accompanied by his evil cousin Sudden Deafness (Hörsturz, Ohrinfarkt or Managerohr).

But a big new topic arose: oscillations, which can be described as "maximum displacement from equilibrium repeating itself in certain time intervals", which sounds awfully complicated at first, but believe me when I say it is not. Inherent to this chapter in audio basics are the parameters to describe oscillations, of which there are 5:
a) main oscillation type (sine, saw, triangle or square - hybrids are possible but not relevant as of now)
b) frequency
c) amplitude
d) cycle duration
e) polarity (phasing/"Phasenlage"). 

Lo and behold! The same edit area but from a different angle.

The next big topic we addressed and discussed was the different kinds of crosstalk between two frequencies. We started with something easy: the superposition of two identical waves (same frequency, amplitude and polarity/phasing). The result is quite unspectacular: the resulting wave is the same, but louder by 6dB. This is the maximum a wave can gain in amplitude by doubling it with an identical wave. So, take two identical waves of 100Hz at 10dB each, for example, and what you get is a wave of 100Hz at 16dB. Logarithmic scales are the secret when talking about absolute loudness. If we're talking about perceived loudness, a gain of +10dB is needed to double the loudness of a wave - you'd have to roughly triple its amplitude (3.16 times, to be exact) to get the desired result. I want to point out once again that this is for the PERCEIVED loudness, not the absolute or physical loudness.
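
Those two numbers - +6 dB for a doubled amplitude and ~3.16× amplitude for +10 dB - fall straight out of the standard dB formulas. A quick check (my own arithmetic, not course material):

```python
import math

# Standard dB arithmetic for amplitudes: dB = 20 * log10(ratio).
def db_gain(amplitude_ratio):
    return 20 * math.log10(amplitude_ratio)

def amplitude_ratio(db):
    return 10 ** (db / 20)

print(round(db_gain(2), 2))            # 6.02 -> the "+6 dB" rule for doubling
print(round(amplitude_ratio(10), 2))   # 3.16 -> "roughly triple" for +10 dB
```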
The next kind of crosstalk was as easy as the first one: wave cancellation. This happens if you have two identical waves but the polarity of one wave is inverted (shifted by 180°). As the name suggests, both waves cancel each other out. The wave has ceased to be. It has gone to meet its maker. It is an EX-wave. Nothing will remain of either wave if the polarity of one of them shifts by 180°.
The second-to-last kind of crosstalk was "phase shift" (any shift except 180° or 360°!), which in some cases results in massive changes to the sound. It would be too much to explain it all here. As cool as a blog about sound/acoustic physics would be, it's not what I set out to write.
The last kind of crosstalk is called a beat ("Schwebung"). This happens when you have two waves with slightly different frequencies, e.g. 100Hz + 105Hz. What you get is a constant alternation of constructive and destructive interference, meaning the sum is boosted and cut in turn, and this changes permanently as time goes on. What you will hear is a kind of wibbly-wobbly that accelerates if the difference between the frequencies increases. If any of you play an instrument with a drone (e.g. bagpipes, hurdy-gurdy, ...), you know this sound. If you don't, switch on your drones and try tuning them, for Cthulhu's sake!
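
Behind the wibbly-wobbly is a trigonometric identity: summing two close sines is the same as one sine at the average frequency, amplitude-modulated so that the audible wobble repeats |f1 - f2| times per second. A small numeric sketch of my own:

```python
import math

# Summing 100 Hz and 105 Hz: sin(A) + sin(B) equals a sine at the
# average frequency times a slow cosine envelope (half the difference),
# so the envelope "wobbles" at |f1 - f2| = 5 times per second.
f1, f2 = 100.0, 105.0

def direct_sum(t):
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

def modulated(t):
    avg, half_diff = (f1 + f2) / 2, (f1 - f2) / 2
    return 2 * math.sin(2 * math.pi * avg * t) * math.cos(2 * math.pi * half_diff * t)

t = 0.123  # an arbitrary instant; the two forms agree everywhere
print(abs(direct_sum(t) - modulated(t)) < 1e-9)  # True
print(abs(f1 - f2))                              # 5.0 beats per second
```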

After school I did the next aural training unit. And you can guess what it was about. Did I hear "frequencies" in the last row? Right. Frequencies again. Pinpointing one out of 10 possible frequencies and whether it is boosted or cut is something I need to practise some more. But as they say: no one masters anything without hard work. And that's what I'll do: work hard so my not-too-shabby hearing will become better.


This is where aural training takes place. A comfy couch.

Tuesday started out pretty relaxed. The first topic was "different auditory events". Since I don't know the correct English terms for the six events I'll just give you the German ones:

1. Ton
A pure sine sound with just one frequency. A "Ton" doesn't occur in nature at all.

2. Tongemisch
Sound comprised of two or more "Töne", but then again: pure sine. Nothing else. For example: 100Hz + 147Hz, 100Hz + 231Hz + 387Hz, ... . You get the idea.

3. Klang
"Klang" is a special type of Tongemisch and has a harmonic spectrum. Its ingredients are: 1st harmonic (or fundamental) and several other harmonics (or overtones), which are multiples of the 1st harmonic/fundamental, i.e. 100Hz (1st harmonic/fundamental) + 200Hz (2nd harmonic) + 300Hz (3rd harmonic) + 400Hz (4th harmonic) + 500Hz (5th harmonic). Instruments always produce "Klänge".

4. Klanggemisch
A Klanggemisch is, as you can probably imagine, produced by mixing two or more Klänge. They don't have to result in a chord. So even the combination of C + C# + D is a Klanggemisch.

5. Geräusch
This is a special kind of Tongemisch with a continuous spectrum, i.e. you have a sound with 500Hz as the lowest frequency and 12kHz as the highest frequency. A Geräusch contains ALL frequencies between 500Hz and 12kHz, usually with varying amplitude.

6. Rauschen (or "noise")
Again a special type of Tongemisch with a continuous spectrum, but with ALL audible frequencies (20Hz-20kHz). There are different kinds of noise, but the most important ones right now are "white noise" (all frequencies have the same amplitude) and "pink noise" (the level drops by 3dB per octave, i.e. per doubling of frequency).
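
A quick numeric aside on that "-3 dB per octave" (my own arithmetic, not course material): halving the power per octave is exactly a 3.01 dB drop, so across the ten-ish octaves from 20 Hz to 20 kHz the spectral density of pink noise falls by about 30 dB.

```python
import math

# Power halves per octave in pink noise: 10 * log10(1/2) dB per octave.
db_per_octave = 10 * math.log10(1 / 2)

print(round(db_per_octave, 2))       # -3.01 dB per octave
print(round(10 * db_per_octave, 1))  # -30.1 dB across ten octaves
```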
Following this block our teacher Michael introduced us to "waves", what waves are and how we could calculate the wavelength. Just to sum it all up: waves describe the propagation of oscillations in an elastic medium. The speed of sound is quite vital for this and we've learned that the density of a material, the temperature and even humidity and CO2 content have more or less influence on the speed of sound, which is important if you're working at a big open-air festival.
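
The wavelength calculation itself is a one-liner: wavelength = speed of sound / frequency. A sketch assuming c ≈ 343 m/s in air at about 20°C (the exact value shifts with temperature, humidity and CO2 content, as noted above):

```python
# Wavelength of a sound wave in air: lambda = c / f.
C_AIR = 343.0  # m/s, approximate speed of sound at ~20 degrees C

def wavelength_m(f_hz):
    return C_AIR / f_hz

print(round(wavelength_m(20), 2))     # 17.15 m -- a 20 Hz wave is huge
print(round(wavelength_m(20_000), 3)) # 0.017 m -- under 2 cm at 20 kHz
```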

Lastly we started learning about sound propagation in air. To start with there are two types of sound fields:
 - free sound field and
 - diffuse sound field.

Let's take a quick look at the free sound field. In a free sound field there are no obstacles for acoustic noise whatsoever. Sound can propagate freely. As with the "Ton", it doesn't occur in nature, because there are always some obstacles. The closest you can get in nature is a snow-covered area. If you've ever been to the mountains and stood in such a snow-covered area, you have surely noticed how hollow and strange everything sounds.

Next one. The diffuse sound field is what you usually encounter in your everyday life. As I wrote above, sound always encounters various obstacles in its path, e.g. people, lamps, desks, chairs, trees and so on. Because of those obstacles, the material they are made of, their size and various other factors, certain interesting things happen. To be precise, those things are: reflection, absorption, refraction, diffraction and interference (Reflexion, Absorption, Brechung, Beugung und Interferenz). But I will talk about those in my next entry.

Well, that's all for today. Cheers!
Yer poking fun at us audio engineers, laddie? =)

Sunday 9 March 2014

First Week - check

Thursday brought more audio basics. Well, not just any old basics but the real deal: how does our sense of hearing work on a mechanical level? How does a difference in sound pressure level become an impulse for the brain? What are all the parts responsible for this procedure? Besides the mysterious workings of the ear, we found out about its limits.

First there is the limit constituted by the frequency range, which reaches from as low as 20 Hz up to 20,000 Hz (or 20 kHz). The acoustic region below 20 Hz is called "subsonic noise" or "infrasound", and above 20 kHz "ultrasonic" has pitched its tent.
Fun fact: Did you know that the supermassive black hole in the Perseus cluster emits sound? It does, and it rumbles away at an astonishingly deep B-flat 57 octaves below middle C. This is the deepest note ever detected from an object in the Universe! Kudos to you, mate!
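
Out of curiosity I ran the numbers on that (my own back-of-envelope arithmetic, not class material): take middle C at roughly 261.63 Hz, step down to the B-flat just below it, then drop 57 octaves.

```python
# Rough check of the Perseus black hole's B-flat, 57 octaves below middle C.
middle_c = 261.63                            # Hz, approximate
b_flat_near_c = middle_c * 2 ** (-2 / 12)    # B-flat two semitones below
f = b_flat_near_c / 2 ** 57                  # 57 octaves lower

period_years = (1 / f) / (3600 * 24 * 365.25)
print(f < 1e-12)            # True -- absurdly far below the 20 Hz limit
print(period_years > 1e6)   # True -- one cycle takes millions of years
```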

The second limit is the sound pressure level (SPL for short), which is related to the volume of a sound or noise. Our sense of hearing is a very finely tuned apparatus and can only cope with sound pressures between 0.00002 Pa ("hearing/auditory threshold", HL) and 20 Pa ("threshold of discomfort", TD). Below the lower limit we just don't hear anything; above the upper one, hearing starts to hurt. Last but not least, we started learning about hearing damage, with age-related hearing loss leading the way and the famous "tinnitus" following in its wake. Still missing in action and expected to appear on Monday: auditory trauma.
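
Those two pressure limits map neatly onto the familiar dB SPL scale, with the hearing threshold as the 0 dB reference. A quick sketch:

```python
import math

# dB SPL referenced to the hearing threshold of 0.00002 Pa.
P_REF = 0.00002  # Pa

def db_spl(pressure_pa):
    return 20 * math.log10(pressure_pa / P_REF)

print(db_spl(P_REF))      # 0.0 dB -- the threshold of hearing
print(round(db_spl(20)))  # 120 dB -- the threshold of discomfort
```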

After school Tom, Tosi, Sina (a friend of Tosi) and I went to grab a drink at Turbinen Bräu, a local brewery with some excellent beer, if you can believe my classmates. Since I don't drink any alcohol, I got a "Gazosa", a grapefruit flavoured lemonade from Ticino. If the beer is as good as this lemonade is, it must be overwhelmingly good.

But Thursday was also the day I moved from Winterthur, where I was staying for a couple of days with two friends of mine, Tamara and her husband Christian, to Zurich. A friend of mine, Thiago, and his partner Gregor went on vacation to Brazil to Thiago's home town and in exchange for living in their flat for about a month I volunteered to look after their pets: 2 bearded dragons and a couple of snakes. Luckily, it's not scorpions and spiders. =)

Friday was the first day I could sleep in. And so I did, until 10am - with a short intermission at 7:30am to switch on the light for Thiago's bearded dragons. After some faffing around in the morning I went to SAE around noon to do some aural training again. After the disillusionment of the last units I didn't expect much. There are still rogue results going about, stabbing at my so-far average-to-good scores, but it was better than last time. Aural training is pretty taxing, even if it's just one hour. But it is concentrated effort you put in during the whole of this hour, and after a while the sense of hearing tires out. So, after completing the training unit I decided to grab a book from our school library and relax a bit in the lounge area. One of the deep, soft leather couches immediately invited me to sit and stay there for another hour and to have a drink. I like those kinds of invitations.

Something else happened on Friday, which makes me very happy. I found a room in a shared flat! And one I would really like to stay at, too! With nice flatmates, a big room, a roof-top terrace to have a barbecue on and lots of musicians in the whole house. Seems like things are finally working out.

Well, that's all for today. Cheers!

Wednesday 5 March 2014

First Class: Audio Basics 1

The third day came and went quicker than expected. Our instructor for "Audio Basics 1" is Michael Feller, a sound engineer running his own studio in Biel/Switzerland, who will be teaching us for the next month. You might think that listening to someone talking about frequencies, amplitudes, (system) dynamics, physics, the setup of a recording studio and how hearing works isn't interesting, but I have to disagree there. It might be "just basics", but those basics are key, and Michael's style of teaching and involving the students is very relaxed. I felt comfortable from the moment he started talking. No sign of nervousness or fear of giving a wrong answer at all. His insight into audio and his willingness to answer all manner of questions even outside class are remarkable, and he seems to take all of us seriously, even though we're just students. When I was still studying law in Innsbruck/Austria I sometimes had the feeling that some of our professors, especially the older ones who had been teaching for three or even four decades, didn't take us too seriously. But that could just be my imagination.

After classes Tosi and I took our next aural training unit. Listening to just 5 frequencies being cut or boosted went way better today than yesterday, but when ALL 10 main frequencies I listed in the last entry were introduced into the exercise, things went pear-shaped, and after one hour of intense listening my sense of hearing was getting tired. The result was another sobering turn of events.

Tomorrow I'll have to move house to a friend's place in Zurich. He's going on vacation to Brazil for a whole month, and since I still haven't found a place to live we agreed that in exchange for living in the flat I'll take care of his animals: two bearded dragons and a couple of corn snakes. Luckily none of the snakes are venomous. Well, another month to find a flat or a room in a shared flat. And if you're wondering where I'm staying at the moment, don't worry. I don't have to sleep under a bridge or in one of the recording studios of SAE Zurich. Right now I'm living at another friend's and her husband's place about 35km away from Zurich. Here's to hoping that I can find a nice flat-sharing community.

Well, that's all for today. Cheers and good night!

Tuesday 4 March 2014

An Insight into my Studies of Audio Engineering at SAE Zurich - or "How I finally found my vocation"

So, that's really it. I still can't believe it, but I'm officially a student at SAE Zurich in Switzerland now, and my second day there has been a lot of fun. Slowly but surely our class of 12 guys, all coming from different musical backgrounds, is getting to know each other better, and the first friendships are starting to form.

Even though yesterday and today were just "orientation" and "introduction", there's so much to talk about right now, as is the case with every new school and/or course of study, but I can't possibly fit everything I want to write into this blog entry.

Starting tomorrow we'll be hearing about audio basics, especially how sound works, how hearing works, what frequencies, amplitudes and phases are and all that funky jazz. But what I'm really looking forward to is the first practical work, even if it's just at one of the workstations in the "Edit Area" (EA for short). But I am fully aware of how important theory and basics are.

Apart from that I've started with aural training today. We have to do 10 hours of aural training in the diploma course, but I reckon that starting early and doing more aural training than necessary isn't the worst idea I've had. 

So, after a short lunch break my classmate Tosi, a DJ from Switzerland, and I checked out the aural training. The first exercises were about hearing boosts and cuts of 12dB at certain frequencies (31, 63, 125, 250, 500, 1000, 2000, 4000, 8000 and 16000 Hz) in pink noise and in music. I can tell you, it's hard. A couple of times I got it right 8 or 9 times out of 10, and one time I got it right only once. The average was about 5 out of 10 though. Coming from the performing side of music, I never had to worry about this. Considering that I have never done ANY aural training at all and have never consciously listened for certain frequencies, the result isn't that bad. On the other hand, there's still a lot of aural training ahead of me until I'm content with what I can hear and how good I am at it.
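If you're curious what an exercise like this boils down to technically, here's a minimal Python sketch (assuming NumPy is available; all function names are my own, nothing to do with SAE's actual training software): it generates pink noise, applies a +12dB peaking EQ boost at 1000 Hz - one of the ten frequencies above - and then measures how much the energy around that band actually rises.

```python
import numpy as np

def pink_noise(n, fs=44100, seed=0):
    """Pink noise via FFT shaping: white spectrum scaled by 1/sqrt(f)."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    freqs[0] = freqs[1]            # avoid division by zero at DC
    spectrum /= np.sqrt(freqs)     # -3 dB per octave -> pink
    noise = np.fft.irfft(spectrum, n)
    return noise / np.max(np.abs(noise))

def peaking_eq(x, fs, f0, gain_db, q=1.4):
    """Peaking-EQ biquad (RBJ cookbook formulas): boost/cut gain_db at f0."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    aa = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    b, aa = b / aa[0], aa / aa[0]
    # direct-form I filter loop
    y = np.zeros_like(x)
    x1 = x2 = y1 = y2 = 0.0
    for i, xn in enumerate(x):
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - aa[1] * y1 - aa[2] * y2
        x2, x1, y2, y1 = x1, xn, y1, yn
        y[i] = yn
    return y

def band_energy_db(x, fs, f0):
    """Energy within +/- a third of an octave around f0, in dB."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    mask = (freqs > f0 / 2 ** (1 / 3)) & (freqs < f0 * 2 ** (1 / 3))
    return 10 * np.log10(spec[mask].sum())

fs = 44100
noise = pink_noise(fs, fs)                 # one second of pink noise
boosted = peaking_eq(noise, fs, 1000, 12)  # +12 dB peak at 1 kHz
delta = band_energy_db(boosted, fs, 1000) - band_energy_db(noise, fs, 1000)
print(f"energy gain around 1 kHz: {delta:.1f} dB")
```

The boost sits exactly at 12dB only at the centre frequency and falls off to the sides, which is why the energy gain over the whole third-octave band comes out somewhat below 12dB - and that smearing is exactly what makes the listening exercise hard.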

Anyway, that's all for today. The blog might change in the next few days or even weeks, but bear with me.

Cheers and good night!