Category Archives: tech

In search of the perfect headphone – noise cancelling and wireless

I do a fair bit of travelling. I’d guess that about 80-100 days per year are spent in some mode of transportation. I don’t commute, though, since I do freelance work.

To allow a bit more concentration on journeys, I started looking for a wireless (read: Bluetooth) and noise-cancelling headphone.

The journey starts with AKG’s N60 NC Wireless.

I got them refurbished from Harman on eBay for 180€ instead of 280€, which was a big plus.

Straight out of the box, I liked the case, and the extremely compact size these could be folded to.

I then switched them on by pressing down the on-off switch.

Pairing with my iPhone 6s was a breeze; it just worked straight away.

Noise cancelling is pretty good. I tested them with some white noise and some sine sweeps. There seems to be a hole in the NC around 3-5 kHz. It still attenuates there, just not as much as at other frequencies. Probably on purpose, to allow warnings and shouts to pass.

I’d guess the NC achieves about 30 dB of reduction. It really works quite well without giving you a sucked-in, hollow feeling in your ears.
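If you want to reproduce this kind of test yourself, the sine-sweep part is easy to script. A minimal Python sketch (the sweep generator and the dB conversion are my own illustration, nothing AKG provides; the 30 dB figure above corresponds to roughly a 31.6× reduction in sound pressure):

```python
import math

def log_sweep(f_start, f_end, duration, sample_rate=44100):
    """Generate a logarithmic (exponential) sine sweep from f_start to f_end in Hz."""
    n = int(duration * sample_rate)
    k = math.log(f_end / f_start)
    samples = []
    for i in range(n):
        t = i / sample_rate
        # instantaneous phase of an exponential sweep
        phase = 2 * math.pi * f_start * duration / k * (math.exp(t / duration * k) - 1)
        samples.append(math.sin(phase))
    return samples

def db_to_ratio(db):
    """Convert a dB attenuation figure to a linear sound-pressure ratio."""
    return 10 ** (db / 20)

# 30 dB of noise cancelling is roughly a 31.6x pressure reduction
print(round(db_to_ratio(30), 1))  # → 31.6
```

Play the sweep back at a fixed level and listen for where the cancellation thins out; for me that was the 3-5 kHz region.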

The comfort of these headphones is good, but not great. It’s winter here, but I can still feel my ears sweating under the leather. They’re pretty light and don’t clasp your head, but I wouldn’t want to run with them on. I guess they’d fall off.

The mechanics seem okay too. Quite well built. Not on par with Beoplays or my trusted NADs, but good. I’d guess they’ll hold up for a few years.

Now, the sound: well, the sound is amazing. Even over Bluetooth, which is not the iPhone’s forte. Just very well rounded. Beautiful voices, no artifacts from the NC. Amazing, really. A joy to listen to.

Bass is just right. Good fundament, not overwhelming or inflated.

This really is a terrific headphone.

Good news: my room-mate ordered the Sony MDR1000XM2, the Sennheiser Momentum Over-Ear BT and the B&W PX… So soon we’ll have a much better idea of which one is the ultimate!


Bitcoin hard fork security – thoughts about

Bitcoin Gold (BTG) hard forked the Bitcoin blockchain at block #491407 to create an alternate chain, where miners can use GPUs instead of single-purpose ASICs for mining. BTG also introduced some improvements like bigger blocksize and shorter time between blocks for higher transaction bandwidth.

While the whole motivation behind BTG seems to be profit, especially for the developers, who have been widely and loudly criticized for that, I personally don’t think this was necessarily a bad move. In the end every holder of Bitcoin got the same amount of BTG. So, theoretically, a lot of value can be created here.

However, through the lens of security, the picture is very different.

For those not familiar with the matter, let’s recap what a hard fork means:

  1. every transaction up to block #491407 is copied, meaning:
  2. every Bitcoin address at that point is also a BTG address
  3. every private key is the same
  4. every address has the same value (numerically)

This means that if you want to spend your BTG, you can use the same password, or private key, that you use to spend your Bitcoins.
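To make that concrete, here is a deliberately simplified Python sketch. Real Bitcoin addresses are derived via ECDSA, SHA-256, RIPEMD-160 and Base58Check; I’m using a bare SHA-256 stand-in so the logic stays visible:

```python
import hashlib

def address_from_key(private_key: str) -> str:
    # Stand-in for the real pipeline: private key -> ECDSA public key
    # -> SHA-256 -> RIPEMD-160 -> Base58Check. The only point here is
    # that the mapping is deterministic and chain-independent.
    return hashlib.sha256(private_key.encode()).hexdigest()[:34]

key = "my-secret-private-key"  # hypothetical key, for illustration only

btc_address = address_from_key(key)  # address on the Bitcoin chain
btg_address = address_from_key(key)  # address on the Bitcoin Gold chain

# A hard fork copies the ledger, so the same key controls both chains:
assert btc_address == btg_address
```

Whoever learns the key on one chain automatically controls the funds on the other.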

You probably know what’s coming:

It means that if someone programmed a very comfortable wallet that phished your private key, they could instantly access your Bitcoins as well, since both private keys are necessarily the same (hard fork). That’s a huge security risk!

What can you do?

  1. Wait! Wait till a trusted BTG wallet emerges, then use it.
  2. If you cannot wait, but want to get into the possible fire sale of BTG now, transfer your Bitcoins to a new address with a new private key before accessing your BTG

I know that transferring your Bitcoins might be hard. They might be on a paper wallet, or in an encrypted wallet somewhere.

I’m also not saying that BTG’s wallet is a huge phishing scam. Actually, I believe they did all they could to make it secure.

Still there is an inherent security risk here. And, at that, one that affects every single Bitcoin address out there.


mixing basics, collection of

I’ve been mixing music, film and shows for 14-odd years now, and learned a lot during this time. Recently I’ve been thinking about the basics of mixing, the “how” of mixing, and I want to share what I found with you here on my blog.

The most important aspect of mixing is setting levels. For me, this is where a good mix begins and ends. Get your levels wrong? Your mix will suck, no matter how great the individual sounds are, no matter how much time you spent agonizing over the low-end content of your kick.

So how do you set levels? This was something that vexed me. Nobody seemed to be able to give an answer.

Some engineers pushed the faders up very slowly: 1 dB more, listen, 1 dB more, listen. Others threw the faders up, wiggled them a bit, and tore them back down if they didn’t like what they heard.

I think the slow approach is great for getting a feel for how the mix changes with a given signal (especially lead vocals). The second approach is very instinctual and makes for bold mixes. Good.
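For reference, those 1 dB steps translate to linear gain as 10^(dB/20), roughly a 12% level change per step. A quick sketch of the arithmetic:

```python
def db_to_gain(db: float) -> float:
    """Convert a fader move in dB to a linear gain factor."""
    return 10 ** (db / 20)

# one 1 dB step is a ~12% level increase
print(round(db_to_gain(1.0), 3))  # → 1.122

# six 1 dB steps compound to the same gain as one 6 dB move
print(round(db_to_gain(1.0) ** 6, 3) == round(db_to_gain(6.0), 3))  # → True
```

Which is why the slow approach works: each step is small enough to hear as a change, not a jump.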

But what exactly do you listen to while setting levels?

And here is where I found a gem: I used to listen to the signal on the fader I was pushing. “Where’s the bass at, can I hear it, does it sound good?” Then I switched, and that changed my mixes for the better.

When pushing up a fader, I listen to the rest of the mix, most importantly to a signal that gets directly affected by the one I’m pushing.

Let’s say I’m adjusting the bass level. Then I listen to the kick and the lead vocals. Is there a level where the kick sounds better because of the bass? Usually, yes! Is there a level where the whole mix sounds louder or softer? Yes, again!

Now, let’s say I bring up the guitars. At what point does the vocal suffer? I go slightly below that. Now those guitars might sound a bit tame, if it’s a rock song. So I reach for EQ and do the same:

I scoop out some mids, let’s say starting at 1.5 kHz. Then I move the frequency while listening to the lead vocal. Is there a frequency where the lead pops out better? Usually, yes, again! (A warning for plug-in users: don’t look at the screen; look away and just listen. You’ll be surprised!)
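For the curious, the mid scoop itself is just a peaking biquad. Here is a sketch using the standard Audio EQ Cookbook (RBJ) coefficients; the 1.5 kHz starting point and the -4 dB depth are illustrative values, not a recipe:

```python
import cmath
import math

def peaking_eq(f0, gain_db, q, fs=44100):
    """Biquad peaking EQ coefficients (RBJ Audio EQ Cookbook),
    normalized so the leading denominator coefficient is 1."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    num = [1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a]
    den = [1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a]
    return [x / den[0] for x in num], [x / den[0] for x in den]

def gain_at(f, num, den, fs=44100):
    """Magnitude response of the biquad at frequency f, in dB."""
    z = cmath.exp(-2j * math.pi * f / fs)
    h = (num[0] + num[1] * z + num[2] * z * z) / (den[0] + den[1] * z + den[2] * z * z)
    return 20 * math.log10(abs(h))

num, den = peaking_eq(1500, -4.0, 1.0)    # scoop 4 dB out at 1.5 kHz
print(round(gain_at(1500, num, den), 1))  # → -4.0
# far from the scoop, the response stays close to 0 dB:
print(round(gain_at(100, num, den), 1))
```

Sweeping the scoop means moving `f0` while listening to the vocal, exactly as described above.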

Great! I push the guitars up, now that I have more space.

Now, let’s say, I have a synth layer that I want to use to create space. At what level does the mix sound most spacious? Can I even hear the synth directly? Probably not!

This brings up another important truth about mixing (and life, hehe): things come in opposites. For some things to be loud, others have to be soft. A phat mix usually has just one or two sparse signals that carry bass; the mix is still pretty flat, frequency-wise. The human ear adapts very quickly and normalizes frequency content, so if every second signal has a lot of bass, the mix will actually sound flat, muddy, and not phat at all.

So, to sum things up: when mixing, listen to what you’re NOT doing at the moment. Nobody out there is ever going to hear anything in solo. And everything affects everything else. So account for that.

Comments more than welcome.

Is Google harming its search results with its online ad business model?

A few years ago, the internet was a lot simpler.

Information was to be found on websites, for the most part. Sure, there were some obscure FTP and telnet sites, some chat channels, and a lot of newsgroups.

Google just had to crawl usenet plus the http space, index all the information it found, and make sure users found what they wanted. Google did an amazing job here, and was rewarded by stellar growth, incredible user loyalty, and thanks to its superbly implemented way of auctioning off ad slots, a massive income stream.

Today, the internet is a lot more complicated. Information is often stored in Apps. Some of them are web apps, some aren’t. Some of the Apps’ content is searchable for Google, a lot isn’t.

And here is where it becomes difficult for Google: why would a company make its app searchable, just so that Google could display more relevant search results and in return sell ads better?

Content providers not only get no share of ad sales, but actually have to pay if they want an ad to link to their content.

Right now this paradigm is becoming questionable. Google’s search results are only as good as the amount of content they can access. Would you use a search engine that only had access to every second page?

Google will have to start sharing some revenue with app developers if it wants access to their content.

App developers do not rely on Google to promote their apps; app stores have become the main channel here. And this weakens Google’s biggest strength as a gateway to traffic and users.

This becomes increasingly pressing as the Internet transforms from a very homogeneous WWW into a polymorphic interconnection of apps, channels, websites and streams of data.

Maybe it’s time for a new competitor in the search field. A disruption, to further abuse this horribly strained word. A company that builds its paradigms on the present form of the internet. Maybe even a collective effort. The ’Net would be better off for it.

Limits of AI – inklings of

I adore Jeff Jonas’s work for IBM, and his take on Big Data. So from time to time I check his blog. A while ago I stumbled upon his update on the G2 sensemaking engine. As I reread it today, a thought struck me: one of the limits of AI stems not from the algorithms deployed, or their processing power, but from their access to input, to data. From their lack of senses, if you will.

A human infant is born with all 5 senses wide open and an infinite stream of information constantly available, or, more precisely, unmutable. Human senses seem custom-tailored to interface with reality. Much has been written about the ability of the unconscious to parallel-process megabits of information vs. the 7 or so bits the conscious mind can access simultaneously.

Computers on the other hand have to rely on humans to feed them information. Now we have two problems at hand here:

1) Translational loss: as information is digitized, a lot of context gets lost and left out, equaling a substantial bandwidth reduction.

2) Selection bias: in deciding what to feed an algorithm, we choose what’s important to us, not what would be optimal for AI performance. A nontrivial issue as algorithms scale in complexity.

This in turn severely limits an AI’s ability to truly learn and scale. Now, I don’t claim to be an expert on AI, but this clearly merits some consideration. If you have any input or information on how this is addressed, please share.

messaging – ramblings about


It’s been a while since the news of Facebook acquiring WhatsApp for $19bn astounded me. That worked out to a P/E of 950! Wow, just wow! Talk about value investing! (pun intended).

But it also made me think. Why would a messaging company be deemed so valuable? And as I walked in the Bavarian forest today, without a smartphone in my pocket, it dawned on me.

But let me tell you a little bit about my messaging preferences first. I used to do phone calls, in the times before mobile phones. I liked keeping them as short and concise as possible. When mobile phones arrived, I took to SMS immediately, aided in no small part by a short-lived but passionate SMS romance. What a thrill! 160-character love poems.

But the permanent availability takes its toll; and so now, I often find my phone on silent mode, calling back, or using messaging apps. WhatsApp, Facebook Messenger, and email or SMS mostly. With some Google Hangouts thrown in for my family.

Even after years of using them, I still don’t enjoy typing on virtual keyboards, and I never got as fast as on my BlackBerry. I started using voice messages on WhatsApp about 5 months ago, and I find them fascinating. They’re easier and faster than typing, and also a much richer experience. Whereas small inflections of irony were often completely lost in SMS or WhatsApp text, they now come across clearly.

So here comes my point: we are moving towards a richer messaging experience, as users slowly abandon the notion that asynchronous communication means written communication (stemming from letters) and synchronous communication means face-to-face or phone calls (or IRC chats and the like).

That means people will move more and more to asynchronous forms of communication, because these can be fitted into their schedules as they see fit. As they become richer and easier to use, they will replace phone calls and, to a certain extent, emails.

Because they are quicker to compose and offer a more honest and direct experience of the sender, people will, a lot of the time, gravitate towards messaging apps.

Messaging in the future will mean rich messaging. Voice, or probably video or 3D video, or who knows what comes next.

And this makes WhatsApp, which, in my opinion, has the messaging workflow and user experience completely nailed down, a very valuable prospect. Worth $19bn? Only time will tell.

Hearing – remarks about

The question “Does HD audio matter, and why would we need to reproduce frequencies above 20 kHz?” still nettles me.

In the woods

Today, while taking an exceptionally beautiful walk through the Bavarian forest near my house, I had some insights that I’m about to share with you.

1.) A lot of research points out that the human ear is incapable of hearing anything above 20 kHz, and hence the reproduction, recording and transport of frequencies above that are unnecessary.

2.) David Blackmer pointed out here that two-thirds of the hairs in the human cochlea are used to detect not the frequency, but the waveform of incoming audio.

Blackmer’s research has apparently been refuted, or could not be reproduced. His findings still make a lot of sense to me from a biological or evolutionary viewpoint. Now, I am very well aware of the dangers of biologisms and other half-baked cross-references; still, consider this:

Say you are in the woods, looking for food or an animal you might hunt, while at the same time staying clear of wolves, bears or sabre-toothed tigers. Hearing is very relevant to that task. Specifically, the transients of the incoming audio make all the difference. A sharp, short spike, when an animal breaks a twig or steps on dry grass, is very different from wind blowing through trees. This difference lives in the microsecond time domain and is vital to your survival.

So dedicating a lot of capacity to distinguishing waveforms, transients and rise times makes clear sense.

Why can this not be detected in tests? I believe these tests ask questions from a frequency-domain point of view, not from a time-domain point of view.

And here comes the hard part: some argue that since frequency requires time, there is no need to look at the time domain. This seems correct, but on closer inspection it puts the cart before the horse. Time is primary, frequency secondary. With time comes frequency, but frequency requires time!

So, yes, the human ear cannot hear frequencies above 20 kHz. But also: waveform detection, rise times and transient response are of the utmost importance to natural-sounding reproduction. I could detect that time and again, without knowing the rise times of the equipment being used.

I suggest that the human ear has different windows for the frequency and the time domain. Frequency is limited to approx. 20 Hz to 20 kHz. Time is very sensitive to small differences in waveforms and transients.
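One way to put rough numbers on the time window: a common engineering rule of thumb ties the 10-90% rise time of a band-limited system to its bandwidth as t_r ≈ 0.35 / BW. Strictly speaking that holds for a first-order system, so treat the results as orders of magnitude, not gospel:

```python
def rise_time(bandwidth_hz: float) -> float:
    """Approximate 10-90% rise time of a band-limited system, in seconds.
    Rule of thumb for a first-order low-pass: t_r ~= 0.35 / BW."""
    return 0.35 / bandwidth_hz

# a chain band-limited to 20 kHz cannot reproduce edges much
# faster than roughly 17.5 microseconds...
print(round(rise_time(20_000) * 1e6, 1))  # → 17.5

# ...while a 40 kHz bandwidth (e.g. 96 kHz sampling) gets down to
# ~8.75 us, closer to the microsecond-scale transients discussed above
print(round(rise_time(40_000) * 1e6, 2))  # → 8.75
```

So widening the bandwidth buys faster edges even if no one “hears” the extra frequencies as tones.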

I would be very interested in any research on this that does not stem from a frequency-domain point of view, and I would love to collaborate to develop adequate tests. So if you are thinking about hearing, working in the field of hearing research, or just want to team up, please do not hesitate to leave a comment below or contact me.

SysTech for the Wise Guys – Schwabenhalle Augsburg 30.11.13

I got a call about two weeks ago asking me if I’d be available on 30.11.13 and 1.12.13 to do system alignment and design for two shows by the Wise Guys, Germany’s premier a cappella group.

One show would be in the Schwabenhalle in Augsburg and one in the Liederhalle in Stuttgart. They had recently hired a new FOH engineer, Jan Karlsson, and since the engineer on the rest of the tour also handled system design, they now needed a systech as well, so as not to overload Jan.

I agreed to do the job. I like the group and the company that asked me – Ostalb PA.

The Schwabenhalle in Augsburg has a 4,000-seat capacity, the Liederhalle in Stuttgart 3,000. Seemed like a mid-sized job. Then I got the material sheet for Augsburg:

  • 16 Meyer Sound MILO
  • 20 Meyer Sound 700 HP
  • 38 Meyer Sound M’elodie
  • 4 Meyer Sound CQ-2
  • 4 Meyer Sound UPA 1-P
  • 2 Meyer Sound Galileo System Processors.

That blew me away; I had done festivals for 15,000 people with considerably less. So clearly the focus here was not on economics but on sound. I immediately liked that.

They kept referring to my task as “mapping”, a term I was unfamiliar with, and still am. However, I thought that if I delivered a good design and alignment, I could make them happy.

Communication with the venue proved difficult, with the person assigned to deal with me insisting he had long since given all the relevant information to the Wise Guys crew. After I assured him that this had in fact not happened, he asked me what the hell my job was, and why a systech was needed anyway. I managed to procure the desired information via the Wise Guys’ management.

Arriving at the venue at 9 am on the day of the show, I was pretty nervous. They wanted the PA ready by 2 pm. Quite a task, I thought, given the sheer number of loudspeakers.

But, thankfully, Eric Neubert and the wonderful crew of Ostalb PA had already been there for some time when I arrived at 9. They had handled all the rigging, and even had the left and right MILO and the flown 700-HP ready to go once I provided the angles and splays.

I had prepared a design and handed it to them. We decided to use a center cluster, because left to right was 28 m and mid coverage might otherwise have proven difficult.

The quantitative side looked like this:

  • Main LR: 8 MILO per side
  • Center: 8 M’elodie
  • Delays: 4x 7 M’elodie

We had 9 feeds coming from the console: Main (LR), Subs, Delays, Outfills (LR), Infills (LR), Center. We set the Galileos up so that one Galileo handled the main PA, the Subs and the Center, while the other did the Delays, Outfills and Infills. That way we had no patching from one Galileo to the other. The Wise Guys use a completely digital setup: wireless mics go via AES to MADI into the DigiCo SD8 console, and then via AES to the Galileos.

We managed to have the PA set up by 1 pm. I then spent 2.5 hours tuning the PA with Jan. Using Smaart v7, our ears and a lot of walking, we got a great-sounding system.

The main difficulty was getting the flown 700-HPs and the ground-stacked, end-fired ones to work as one. That took a lot of measuring, and a fair amount of trial and error.
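For context, the arithmetic behind a two-element end-fired sub array is simple: space the cabinets a quarter wavelength apart at the design frequency and delay the front cabinet by the acoustic travel time from the rear one, so the arrivals sum forward and cancel rearward. A sketch with illustrative values (not the ones we used that day):

```python
SPEED_OF_SOUND = 343.0  # m/s, at roughly 20 degrees C

def quarter_wavelength(freq_hz: float) -> float:
    """Classic end-fire spacing: a quarter wavelength at the design frequency (m)."""
    return SPEED_OF_SOUND / freq_hz / 4

def end_fire_delay(spacing_m: float) -> float:
    """Delay (ms) for the front cabinet of a two-element end-fired sub array:
    the time sound takes to travel from the rear cabinet to the front one."""
    return spacing_m / SPEED_OF_SOUND * 1000

spacing = quarter_wavelength(60)          # design for 60 Hz
print(round(spacing, 2))                  # → 1.43 (m)
print(round(end_fire_delay(spacing), 2))  # → 4.17 (ms)
```

Aligning the flown boxes to that combination on top is where the measuring and walking come in.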

The show was a success, with no complaints. The Wise Guys are all about intelligible lyrics, and we delivered that to every seat.

Thanks to the Wise Guys for allowing me to design without economic limits. Such a treat.

Some images below:

View from the Delay
View from the audience
MAPP Center


Installing Android HD 7.3 on an HTC Sensation with 1.27.100 HBOOT

So I bought a used phone, having sold my HTC Desire with CyanogenMod 7 installed and a2SD running. A great phone, and a fantastic ROM, especially the keyboard and its predictive text. But its battery only lasted 8 hours, and I got tired of Android 2.3.

I swore to myself that I would not root this device. It had Android ICS 4.0.3 with HTC Sense 3.6 installed and was basically running fine. But then the bug bit me again; it was kind of sluggish, and I could use a couple more features.

So, ever putting way too much trust in the ease of the rooting process, I decided to install Android HD 7.3. Sounded great: Beats Audio, a better radio, longer battery life, faster…

I unlocked my phone with HTCDev and flashed a CWM Touch recovery. Fine! Got SuperSU, put Android HD on the SD card and flashed the ROM. WiFi not working. Oops.

Of course, I had not backed up my old configuration, deeming that unnecessary for a power user like myself. So now I tried to flash the new firmware that the excellent XDA guide mentioned. It wouldn’t work.

The problem was that my phone was a Vodafone-branded version with HBOOT 1.27.100. I needed the phone to be S-OFF so I could set the CID in a way that would allow me to flash the firmware. The only way to really do that is with Juopunutbear’s amazing method, which involves using a paperclip to short-circuit the SD card in a very specific rhythm.

I watched the video and read the pages, but my first 20 or so tries with a paperclip yielded nothing. Compounding my problem was that I was running Ubuntu 12.04 in VM Player, which meant that every time I disconnected or rebooted the phone, I had to reconnect it to the VM via VM Player’s menu.

So after the paperclip, I started using a piece of plastic-coated wire of the kind used for closing nut or cookie bags. I cut off both ends of the plastic and now had an insulated middle with bare ends. This immediately showed more promising results, because my phone now rebooted every time I did the wire trick.

To get the rhythm right I practised with a stopwatch, but honestly, you need to try a lot of times. There is an element of pure luck involved here, and it took me about 30 tries (30 phone reboots, 30 reconnects to the VM) to finally get it.

Now I had S-OFF, installed the firmware, reinstalled Android HD 7.3, and everything is happily ever after. This is one fantastic ROM! The installation procedure is the most beautiful I have experienced so far, so a big, big shout-out to Mike1986 for the ROM!

the hive – thoughts about

I had some thoughts about the way humanity, or at least the part of it that I can observe, evolves.

I see 3 trends:

  • individual passivity – consumerism: individuals seek to maximize their short-term pleasure more than ever before. Surely, some of that is driven by the relentless onslaught of advertisement and the perfection of consumer-goods offerings and presentation, as well as the ongoing reflection of this process in movies, TV shows and so on.
  • wisdom of crowds, crowdsourcing – the whole is more than the sum of its parts: Wikipedia, GitHub, Linux… examples of spontaneous, grassroots crowd organization, led by a couple of highly motivated individuals, achieving outstanding feats, which they offer free of charge.
  • political passivity – looking within for change and fulfilment: driven by the widespread adoption of Buddhist principles, more and more people simply refuse to partake in politics, which they see as a mere distraction. Real change must come from within, when your perceptions of and attitudes towards the world of things change. A noble thought, but one that leaves the political playing field in the hands of power hawks and self-advancers.

The jury is still out on what the end effect of this will be. In my mind, a picture emerges:

Close to 7 billion humans populate our planet today. The Internet, cheap travel, and affordable voice and, increasingly, video communication give everybody a true sense of interconnectedness and help form a global consciousness.

So we arrive in “The Age of the Hive”. This fits perfectly with the trend of individual passivity. The individual does what he or she feels is most true to their vocation, trusting The Hive to use it in some meaningful way. Organization arises spontaneously. Planning, politics and long-range goals become less and less important or tangible. Lately we have seen the development of a crowd-sourced symphony piece; Kickstarter launches tens of projects each day. Individuals offer their output to The Hive, which, driven by feelings, attachments and sentiment, picks up and amplifies some of that output, and some not.

The individual who initiated a creation becomes a queen bee, for a short time, only to return to the fold of The Hive later and restart the process… Resistance is futile!