Category Archives: Music

Insights into SoundCloud culture

SoundCloud (cf. Wikipedia) is a young Berlin-based company “under the laws of England & Wales” with Swedish origins. Six years ago, in 2009, they secured their first big round of funding. Since then, they have experienced tremendous growth and have been able to regularly raise investment capital. To date, SoundCloud has built itself a unique position with a convincing product (which I use, and pay for, myself), but it can also be considered a competitor of big brands such as Spotify and Beats Music. In fact, according to the Wall Street Journal, SoundCloud can be expected to join the party of billion-dollar IT companies quite soon:

SoundCloud, a popular music and audio-sharing service, is in discussions to raise about $150 million in new financing at a valuation that is expected to top $1.2 billion, according to two people with knowledge of the negotiations.

With these facts in mind, it is impressive to hear that SoundCloud still employs only 300 people, in just a handful of offices around the world. Like me, you might be curious about details of this kind, and about the SoundCloud story itself. So, I was really eager to listen to Episode 17 of “Hipster & Hack”, featuring an interview with David Noël (Twitter profile, LinkedIn profile). David has accompanied SoundCloud for six years now and currently leads Internal Communications. He is clearly in a position to provide authoritative information about how SoundCloud’s vision was translated into reality over time, but also about how culture and communication within SoundCloud evolved. The latter is what he mainly talks about in the interview, providing insights into the structure and tools used for defining a culture, keeping it on track, and communicating it to employees from their very first moment until even after they have left the company. David defines culture as the living manifestation of core values and arrives at insightful statements such as

Living your values is your culture at any moment in time.

In the interview, we learn that one of SoundCloud’s core values, in the context of internal communications, is being open. Culture and communication really seem to have a high priority in the company, judging by practices like the “all-hands” meeting that David refers to in the interview. Personally, I cannot overstate how much I value this, coming from classical research, where such elements are often simply neglected.

So, if that piques your interest, I recommend listening to three quite likable guys here (minutes 4 to 29 suffice; the rest is enjoyable overhead ;-)):

Songkick events for Google’s Knowledge Graph

Google can display upcoming concert events in the Knowledge Graph of musical artists (as announced in March 2014). This is a great feature, and many people in the field of music marketing, especially record labels, probably aim to get this kind of data into the Knowledge Graph for their artists. However, Google does not magically find this data on its own. It needs to be informed via a special data structure (in the recently standardized JSON-LD format) embedded within the artist’s website.

While of great interest to record labels, finding a proper technical solution for creating and providing this data to Google might still be a challenge. I have prepared a web service that greatly simplifies the process of generating the required data structure. It pulls concert data from Songkick and translates it into the JSON-LD representation required by Google. In the next section, I explain the process by means of an example.

Web service usage example

The concert data of the band Milky Chance is published and maintained via Songkick, a service that many artists use. The following page shows, among other things, all upcoming events of Milky Chance: http://www.songkick.com/artists/6395144-milky-chance. My web service translates the data held by Songkick into the data structure that Google requires in order to make this concert data appear in the Knowledge Graph. This is the corresponding service URL that needs to be called to retrieve the data:

https://jsonld-events.appspot.com/api/songkick/artist?skid=6395144&name=Milky+Chance&weburl=http%3A%2F%2Fmilkychanceofficial.com

That URL is composed of the base URL of the web service, the Songkick ID of the artist (6395144 in this case), the artist name, and the artist website URL. Try accessing this service URL in your browser. It currently yields this:

[
  {
    "@context": "http://schema.org", 
    "@type": "MusicEvent", 
    "name": "Milky Chance", 
    "startDate": "2014-12-12", 
    "url": "http://www.songkick.com/concerts/21926613-milky-chance-at-max-nachttheater?utm_source=30793&utm_medium=partner", 
    "location": {
      "address": {
        "addressLocality": "Kiel", 
        "postalCode": "24116", 
        "streetAddress": "Eichhofstra\u00dfe 1", 
 
[ ... SNIP ~ 1000 lines of data ... ]
 
    "performer": {
      "sameAs": "http://milkychanceofficial.com", 
      "@type": "MusicGroup", 
      "name": "Milky Chance"
    }
  }
]

This piece of data needs to be included in the HTML source code of the artist website. Google then automatically finds this data and eventually displays the concert data in the Knowledge Graph (within a couple of days). That’s it — pretty simple, right? The good thing is that this method does not require layout changes to your website. This data can literally be included in any website, right now.
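For illustration, here is a minimal sketch (Python 3, standard library only; the output file name is an arbitrary choice of mine) of how the data returned by the service could be fetched and wrapped into a script element of type application/ld+json, which is the standard way of embedding JSON-LD in an HTML page:

# Minimal sketch: fetch the JSON-LD data from the web service and wrap it
# into a <script type="application/ld+json"> element that can be pasted
# into (or templated into) the artist's website.
import urllib.request

SERVICE_URL = (
    "https://jsonld-events.appspot.com/api/songkick/artist"
    "?skid=6395144&name=Milky+Chance"
    "&weburl=http%3A%2F%2Fmilkychanceofficial.com"
)

with urllib.request.urlopen(SERVICE_URL) as response:
    jsonld = response.read().decode("utf-8")

# The script element can be placed anywhere in the HTML document, e.g. in <head>.
snippet = '<script type="application/ld+json">\n%s\n</script>' % jsonld

# Write the snippet to a file for manual inclusion (file name is arbitrary).
with open("events-jsonld-snippet.html", "w", encoding="utf-8") as f:
    f.write(snippet)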

That is what happened in the case of Milky Chance: some time ago, the data created by the web service was fed into the Milky Chance website. Consequently, their concert data is displayed in their Knowledge Graph. See for yourself: access https://www.google.com/search?q=milky+chance and look out for upcoming events on the right-hand side. Screenshot:


Google Knowledge Graph generated for Milky Chance. Note the upcoming events section: for this to appear, Google needs to find the event data in a special markup within the artist’s website.

So, in summary, when would you want to use this web service?

  • You have an interest in presenting the concert data of an artist in Google’s Knowledge Graph (you are a record label or are otherwise interested in improved marketing and user experience).
  • You have access to the artist website or know someone who has access.
  • The artist’s concert data is already present on Songkick or will be in the future.

Then all you need is a specialized service URL, which you can generate with a small form I have prepared for you here: http://gehrcke.de/google-jsonld-events
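If you prefer to assemble the service URL yourself rather than via the form, the following minimal sketch (Python standard library; the parameter names skid, name, and weburl are taken from the example URL above) shows how the three inputs map onto the query string:

# Minimal sketch: build the service URL from the three input parameters.
from urllib.parse import urlencode

BASE_URL = "https://jsonld-events.appspot.com/api/songkick/artist"

params = {
    "skid": "6395144",                           # Songkick artist ID
    "name": "Milky Chance",                      # artist name
    "weburl": "http://milkychanceofficial.com",  # artist website URL
}

service_url = "%s?%s" % (BASE_URL, urlencode(params))
print(service_url)
# https://jsonld-events.appspot.com/api/songkick/artist?skid=6395144&name=Milky+Chance&weburl=http%3A%2F%2Fmilkychanceofficial.com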

Background: why Songkick?

Of course, the event data shown in the Knowledge Graph should be up to date and in sync with presentations of the same data in other places (bands usually display their concert data in many places: on Facebook, on their website, within third-party services, …). Fortunately, a lot of bands actually do manage this data in a central place (any other solution would be tedious). This central place/platform/service often is Songkick, because Songkick did a really nice job of providing people with what they need. Since my web service queries Songkick directly, it reflects recent changes made within Songkick.

Technical detail

The core of the web service is a piece of software that translates the data provided by Songkick into the JSON-LD data as required and specified by Google. The Songkick data is retrieved via Songkick’s JSON API (I applied for and got a Songkick API key). Large parts of this software deal with the unfortunate business of data format translation while handling certain edge cases.

The service is implemented in Python and hosted on Google App Engine. Its architecture is quite well thought through (for instance, it uses memcache and asynchronous urlfetch wherever possible; see the sketch after the list below). It is ready to scale, so to speak. Some technical highlights:

  • The web service enforces transport encryption (HTTPS).
  • Songkick back-end is queried via HTTPS only.
  • Songkick back-end is queried concurrently whenever possible.
  • Songkick responses are cached for several hours in order to reduce load on their service.
  • Responses of this web service are cached for several hours; cached responses are served within milliseconds.
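As an illustration of how such caching and concurrent fetching can be combined on App Engine, here is a minimal sketch against the classic App Engine Python APIs (the helper function, cache keys, and timeout values are hypothetical and not the service’s actual code):

# Sketch (hypothetical helper, not the actual service code): cache Songkick
# API responses in memcache and fetch uncached URLs concurrently via
# asynchronous urlfetch on the classic App Engine Python runtime.
import json

from google.appengine.api import memcache, urlfetch

CACHE_TTL_SECONDS = 6 * 3600  # keep Songkick responses for several hours


def fetch_songkick_urls(urls):
    """Return a dict mapping each URL to its parsed JSON response."""
    results = {}
    rpcs = {}

    for url in urls:
        cached = memcache.get(url)
        if cached is not None:
            results[url] = json.loads(cached)
        else:
            # Start all uncached requests concurrently.
            rpc = urlfetch.create_rpc(deadline=10)
            urlfetch.make_fetch_call(rpc, url)
            rpcs[url] = rpc

    for url, rpc in rpcs.items():
        response = rpc.get_result()  # blocks until this RPC has completed
        if response.status_code == 200:
            memcache.set(url, response.content, time=CACHE_TTL_SECONDS)
            results[url] = json.loads(response.content)

    return results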

This is an overview of the data flow:

  1. Incoming request, specifying Songkick artist ID, artist name, and artist website.
  2. Using the Songkick API (SKA), all upcoming events are queried for this artist (one or more SKA requests, depending on the number of events).
  3. For each event, the venue ID is extracted, if possible.
  4. All venues are queried for further details (this entails as many SKA requests as there are extracted venue IDs).
  5. A JSON-LD representation of each event is constructed (see the sketch after this list) from a combination of
    • event data
    • venue data
    • user-given data (artist name and artist website)
  6. All event representations are combined and returned.
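The construction in step 5 essentially boils down to assembling a schema.org MusicEvent dictionary matching the example output shown earlier. Here is a simplified sketch (a hypothetical helper working on already-extracted fields, not the actual service code, which handles many more edge cases):

# Sketch (hypothetical helper, not the actual service code): build one
# schema.org MusicEvent entry from extracted event/venue fields plus the
# user-given artist name and website.
import json


def build_music_event(artist_name, artist_url, start_date, event_url,
                      city=None, postal_code=None, street=None):
    address = {}
    # Degrade gracefully: only include the address parts that are available.
    if city:
        address["addressLocality"] = city
    if postal_code:
        address["postalCode"] = postal_code
    if street:
        address["streetAddress"] = street

    return {
        "@context": "http://schema.org",
        "@type": "MusicEvent",
        "name": artist_name,
        "startDate": start_date,
        "url": event_url,
        "location": {"address": address},
        "performer": {
            "@type": "MusicGroup",
            "name": artist_name,
            "sameAs": artist_url,
        },
    }


# Example, roughly corresponding to the first event in the output shown above:
print(json.dumps(build_music_event(
    "Milky Chance", "http://milkychanceofficial.com", "2014-12-12",
    "http://www.songkick.com/concerts/21926613-milky-chance-at-max-nachttheater",
    city="Kiel", postal_code="24116", street="Eichhofstra\u00dfe 1"), indent=2))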

Some notable points in this context:

  • A single request to this web service might entail many requests to the Songkick API. This is why SKA responses are aggressively cached:
    • An example artist with 54 upcoming events requires 2 upcoming-events API requests (two pages, which cannot be requested concurrently) and roughly 50 venue API requests (which can be requested concurrently). Since the two page requests run one after the other and the venue requests form a third, concurrent batch, my web service cannot respond in less than three SKA round-trip times.
    • If none of the SKA responses has been cached before, retrieving the roughly 2 + 50 SKA responses can easily take about 2 seconds.
    • This web service cannot be faster than Songkick delivers.
  • This web service applies graceful degradation when extracting data from Songkick (many special cases are handled, which is especially relevant for the venue address).

Generate your service URL

This blog post is just an introduction, and sheds some light on the implementation and decision-making. For general reference, I have prepared this document to get you started:

http://gehrcke.de/google-jsonld-events

It contains a web form where you can enter the (currently) three input parameters required for using the service, and it returns a service URL for you. This URL points to my application hosted on Google App Engine. When called, the service returns the JSON data that is to be included in an artist’s website. That’s all; it really is pretty simple.

So, please go ahead and use this tool. I’d love to receive some feedback. Look closely at the data it returns, and keep your eyes open for subtle bugs. If you see something weird, please report it. I am very open to suggestions, and also interested in your questions regarding future plans, the release cycle, etc. Also, if you need support for (dynamically) including this kind of data in your artist’s website, feel free to contact me.

Allen & Heath Xone:23C — hidden technical detail and quirks

The brand-new Allen & Heath Xone:23C has been presented in countless preview videos and smaller reviews, all mentioning the main features of this great device. I have just obtained mine. There are some important details to know about, which are mentioned neither in the documentation nor in the reviews I have found so far. I want to share these non-obvious technical details with you, particularly regarding the USB sound hardware built into the Xone:23C.

The Xone:23C is a true 3+3 channel mixer

The Xone:23 (without the C) is a 2+2 channel mixer with two main channels, where each main channel can be fed from two analogue sources with independent gain controls. Still, Chris Brackley from DJ TechTools found that the Xone:23 has too few input options with only two true stereo line inputs, because the other two stereo inputs are made for low-voltage (phono) input devices and cannot be switched to line level without hacking the device. The Xone:23C adds two stereo channels that can be fed via USB audio, effectively rendering the Xone:23C a 3+3 channel mixer. However, the Internet resources available so far, and especially the manual, do not explicitly explain how to toggle between USB and line/phono sound. The most obvious observation is that there is no switch to toggle.

The circuit diagram at the end of the manual explains the behavior. I have taken a screenshot of the relevant part and labeled a few components:

[Figure: excerpt of the Xone:23C circuit diagram, showing the summing stage]

The USB audio is processed and converted (to an analogue signal) by the USB sound card block shown in the diagram. This block has two stereo outputs (send 1+2 and send 3+4). I have marked these two stereo channels with blue arrows. This is where your USB audio starts its way into the mixer after being converted to an analogue signal.

The important things now are the summing amplifiers, which I have labeled with green circles (what looks like an M or W is actually a capital Greek sigma, a symbol commonly used for summation). The circuit diagram tells us that each main channel mixes the stereo signals from phono, from line, and from USB in equal parts. Phono and line each have their own hardware gain control on the mixer for controlling the mix ratio. Such a control is missing for the USB audio stream, but it is not needed: the volume of the USB audio can easily be controlled digitally, at the source (your computer).

One of the first things I tried when I got the mixer was to attach a line device and USB audio to the same main channel at the same time. Indeed, both audio sources are mixed together, and the loudness of the line signal and the USB signal can be set independently. Hence, the Xone:23C is a true 3+3 channel mixer. There is no need to toggle between line/phono and USB.

Keep the digital master output low enough!

Obviously, I chose to mix externally with the Xone:23C, using the ASIO drivers for transporting the audio signal from within Traktor to the USB sound card in the mixer. For tracks that are mastered quite loudly, the default master output volume of Traktor is too high, already clipping the signal and going into the red on the mixer's VU meter. Add some EQ effects or an HPF/LPF with resonance, and your signal becomes horribly distorted. I found that with the Traktor master output volume set to somewhere between -5 dB and -10 dB, the Xone:23C meters stay around 0 dB most of the time for normal parts of most tracks I listened to, while the signal rises to at most +6 dB for especially loud parts of a song, or when some effects are added.

If you are using a regular music player for playing audio on the mixer, not through ASIO but through the normal audio driver of your operating system, I found that a master volume of about 60 % to 70 % is low enough to avoid clipping the signal. If it is set to 100 %, as it usually is, you are already in the red. Bad.

USB audio from the mixer to the computer

The USB sound card in the Xone:23C provides two output stereo channels (from the computer into the mixer) and two input stereo channels (from the mixer into the computer). The usage of the output channels is obvious: get sound into the mixer. Each of the two input channels plays a special role, and this information is rather hidden in the manual. The mixer has an analogue stereo RCA record output for capturing the main mix with an analogue recording device. USB input channels 1 and 2 carry the same signal, just digitally. Hence, you can easily use your computer to record the master output of the Xone:23C, with no additional hardware and through the same USB cable that connects the mixer to your computer anyway. This is great.

The mixer also has an analogue stereo output for an external effects unit. USB input channels 3 and 4 carry the same signal, just digitally. Hence, you can use software to capture this input (e.g. in Ableton), generate a corresponding effect output, and feed it back into the FX input of the Xone:23C. The latter, however, requires additional hardware (another sound device that generates an analogue signal), because there is no digital FX input into the mixer.

Recording only works through ASIO so far

There seems to be one caveat with the USB recording function, at least on Windows. The Xone:23C presents a Line-In WDM recording device for recording the master mix. However, I was not able to access this device while another piece of software was simultaneously playing back through ASIO. Playback and recording only seem to work simultaneously through the ASIO interface.

Audacity (and many other popular open-source tools) does not support ASIO (ASIO is a proprietary interface, and GPL-licensed software legally must not be distributed in binary form with ASIO support built in). In principle, Audacity could record through the Xone:23C Line-In WDM device. However, as stated above, this device cannot be accessed if, for example, Traktor is feeding the Xone:23C with audio data through ASIO at the same time. In other words, Audacity cannot be used to record the master mix through the Xone:23C WDM Line-In device while Traktor plays back through the Xone:23C ASIO interface. Opening the WDM device in this scenario results in an error saying that the device cannot be accessed. What does work is recording via the Xone:23C ASIO driver through e.g. Traktor or other commercial software.

Recording the master mix from within Traktor, however, is not totally straightforward. One needs to define an external input source for a normal track deck (e.g. deck A). This input source must be channels 1+2 of the Xone:23C ASIO input. As long as you do not switch deck A to be of type “live deck”, this input is effectively a no-op (it does not end up in the output again). Now you can switch to external recording mode and choose deck A as the input source. Don’t worry, deck A still behaves as a normal track deck; it is just repurposed for this workaround.

Issues with playback on one of my platforms

I have tested the Xone:23C’s internal ASIO sound hardware with two laptops. Both have Windows 7 Professional installed; one has a 64 bit architecture and operating system, the other a 32 bit architecture and operating system. I have installed the ASIO drivers from here, specifically the 32 bit version for the 32 bit OS/laptop and the 64 bit version for the 64 bit OS/laptop. On the 64 bit system, the audio chain (playback software -> ASIO driver -> USB audio interface) behaves as expected. On the 32 bit system, I have observed infrequent crackling sounds in the output.

The 32 bit system is a fresh and clean Windows installation, and the driver is the “Xone:23C Windows 32bit Driver V2.9.65”. I tried different setups, all without success. Notable examples of what I tried:

  • Foobar audio player to Xone:23C audio WDM device with small and large buffer sizes
  • Traktor 2.6.8 output to Xone:23C ASIO driver, with small and large buffer sizes
  • Traktor 2.6.8 output to ASIO4ALL driver, with small and large buffer sizes

In all cases, the crackling appears and seems to be independent of the buffer size. The crackling is not very prominent: it appears roughly every 10 seconds and is rather quiet. I tried different USB ports, re-installing the driver, and a couple of other things, but could not get rid of the crackles. The same Xone:23C attached to the 64 bit machine works perfectly. My 32 bit laptop has an Intel P8800 CPU, i.e. it is definitely not too weak, and playback from Foobar straight to the WDM device does not require much CPU power at all. It could be a problem with the 32 bit driver (I have submitted a support ticket to A&H), but it could also be a quirk of this specific platform, where one of the drivers (e.g. ACPI or USB) leads to high latencies. I have to investigate further. It would be great if you could report whether you got the Xone:23C USB audio working properly on a 32 bit Windows system.