Streaming Media East

For once you’ll be getting more regular updates to the blog here on Geekteq in the coming week. That’s mainly because I’m heading to New York City next week to visit Streaming Media East. I’ll be posting, hopefully on a daily basis, with news and reviews of what I’ve heard during the day.

My plan right now is to join these two pre-conference sessions – the two I find most interesting of the four I can choose from:

Encoding for Multiscreen Delivery

Learn how to create a set of video files that will play on all devices, from smartphones to computers and OTT devices. The class starts by exploring key concepts like protocol and container format and technologies like HTTP Live Streaming, DASH, and Dynamic Streaming. Then it moves to a technical overview of the H.264 specification to identify those configuration parameters that impact quality and those that don’t, and how they affect playback compatibility. Then we’ll review the technical requirements for single and multiple file delivery to Flash, HTML5, iOS, Android, Windows Phones, Windows 8, and the Apple TV, Boxee, Roku, and other OTT devices. Along the way, you’ll learn the current encoding and delivery practices used by high-profile broadcast and corporate sites to help refine your technology decisions. You’ll walk away knowing the technical requirements for delivering to all key platforms and an understanding of how to do so.
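To give a flavour of what that session covers, here’s a minimal sketch of a multiscreen rendition ladder driven through ffmpeg/x264. The ladder values (heights, bitrates, profile/level pairs) are my own illustration, not the session’s recommendations – the point is that profile and level are the knobs that decide which devices can play a file at all:

```python
# Sketch: build ffmpeg commands for a multiscreen H.264 rendition ladder.
# The ladder values are illustrative assumptions, not official recommendations.
# Profile/level matter for compatibility: older phones often require Baseline,
# while desktops and OTT boxes handle High.
RENDITIONS = [
    {"name": "mobile_low",  "height": 360, "bitrate_k": 400,  "profile": "baseline", "level": "3.0"},
    {"name": "mobile_high", "height": 480, "bitrate_k": 800,  "profile": "main",     "level": "3.1"},
    {"name": "desktop_hd",  "height": 720, "bitrate_k": 2000, "profile": "high",     "level": "4.0"},
]

def build_ffmpeg_cmd(src, rendition):
    """Return an ffmpeg argument list for one rendition."""
    r = rendition
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",
        "-profile:v", r["profile"],        # the playback-compatibility knob
        "-level", r["level"],
        "-b:v", f"{r['bitrate_k']}k",
        "-vf", f"scale=-2:{r['height']}",  # keep aspect ratio, force even width
        "-c:a", "aac", "-b:a", "128k",
        f"{r['name']}.mp4",
    ]

for r in RENDITIONS:
    print(" ".join(build_ffmpeg_cmd("master.mov", r)))
```

Running the printed commands against one mezzanine file gives you the per-device set the session blurb describes.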

and then in the afternoon it’s time for this (I’m using Wowza intensively in my projects, so it fits rather well, wouldn’t you say?)

Wowza Media Server and End-to-End Workflows

There is a confusing array of products and solutions for streaming video to end users, but few that simply deliver your content to any device. If you’re serious about video, you’ve been seeing and hearing more references to Wowza Media Server. In this session, you will learn about the end-to-end deployment workflows that you can build using this unified streaming media server software. You’ll see live demos and find out about the newest Wowza functionality. Discover how Wowza works with cloud delivery and stacks up against other media delivery options. Finally, gain insights about emerging technologies (such as MPEG-DASH, H.265, and HTML5) and how to future-proof your streaming media deployment.

For the actual conference sessions during Tuesday and Wednesday I haven’t set anything in stone – some of them sort of clash – I’d like to see both but can only choose one… *bummer*

But.. in short – I think it will be these:


Encoding Video for iDevices

This session starts by detailing the playback specs for all iDevices, old and new. Then you’ll learn the strategies used by prominent iTunes publishers to serve the complete range of installed iDevices. Next, the seminar switches to cellular wireless delivery, with a technical description of Apple’s HTTP Live Streaming (HLS), including recommendations for the number of streams and Apple’s encoding parameters. You’ll walk away knowing how to encode for both iTunes and mobile delivery to iOS and compatible devices.
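Those multiple HLS streams all hang off a master playlist that lists the variants, as defined in the HLS specification. A minimal sketch of generating one – the bandwidth and resolution values are my own assumptions, not Apple’s recommendations:

```python
# Sketch: generate an HLS master (variant) playlist per the HLS spec.
# The variant values below are illustrative, not Apple's official ladder.
def master_playlist(variants):
    """variants: list of (uri, bandwidth_bps, 'WxH') tuples."""
    lines = ["#EXTM3U"]
    for uri, bandwidth, resolution in variants:
        # BANDWIDTH is the peak bitrate the client uses to pick a variant.
        lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={resolution}")
        lines.append(uri)
    return "\n".join(lines) + "\n"

print(master_playlist([
    ("low/index.m3u8", 400_000,  "640x360"),
    ("mid/index.m3u8", 800_000,  "854x480"),
    ("hi/index.m3u8", 2_000_000, "1280x720"),
]))
```

The client fetches this once, then switches between the listed media playlists based on measured throughput.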

It’s not as if I don’t already know how to do this, but I might have missed something. This is one of the sessions I might change because it clashes with another one 🙂 The three sessions below are a must for me…

Content Preparation And Transcoding For Multiscreen Delivery

With the introduction of adaptive streaming formats, a growing number of IP-enabled streaming boxes, and the proliferation of handheld devices, content owners face increasing challenges for multi-screen content preparation. A key part of that content preparation is the encoding of content for each device, the algorithm choices/trade-offs, codec settings, and the particular requirements of various distribution platforms. This session will analyze the key components of file-based transcoding and will talk practically about converting content for multi-screen delivery.

Understanding the Significance of HEVC/H.265

The most recent video compression standard, HEVC / H.265, was placed into final draft for ratification earlier this year and is expected to become the video standard of choice over the next decade. As with each generation of video compression technology before it, H.265 promises to reduce the overall cost of delivering and storing video assets while maintaining or increasing the quality of experience delivered to the viewer. This session will address what H.265 is, how it differs from previous generations of compression technology including H.264, key barriers to widespread adoption, and thoughts on when H.265 is likely to be implemented.

MPEG-DASH: The Next Steps Towards Broad Adoption

Members of the DASH Industry Forum will discuss what concrete steps have been taken in order to foster fast adoption of the new industry standard for adaptive streaming over HTTP. The session will discuss the recently published DASH264 Implementation Guidelines that cover both live and on-demand services, MPEG-DASH profiles, audio and video codecs, closed-caption formatting and common encryption constants. The panel will consist of representatives from all relevant parts of the ecosystem and will address the practical matters and relevance of MPEG-DASH based service roll-outs for streaming and hybrid broadcast applications.
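As background on what a DASH manifest actually looks like, here’s a minimal sketch of a static MPD built with Python’s standard library. The attribute values are illustrative; a real manifest also needs segment addressing (SegmentTemplate or SegmentList) and proper durations:

```python
# Sketch: a minimal static MPEG-DASH MPD, built with the stdlib.
# Values are illustrative; a real MPD needs segment addressing as well.
import xml.etree.ElementTree as ET

def minimal_mpd(representations):
    """representations: list of (id, bandwidth_bps) tuples."""
    mpd = ET.Element("MPD", {
        "xmlns": "urn:mpeg:dash:schema:mpd:2011",
        "type": "static",
        "mediaPresentationDuration": "PT60S",
        "profiles": "urn:mpeg:dash:profile:isoff-on-demand:2011",
    })
    period = ET.SubElement(mpd, "Period")
    aset = ET.SubElement(period, "AdaptationSet", mimeType="video/mp4")
    for rep_id, bandwidth in representations:
        # One Representation per bitrate; the client switches between them.
        ET.SubElement(aset, "Representation", {
            "id": rep_id, "bandwidth": str(bandwidth), "codecs": "avc1.42E01E",
        })
    return ET.tostring(mpd, encoding="unicode")

print(minimal_mpd([("360p", 400000), ("720p", 2000000)]))
```

Structurally it plays the same role as the HLS master playlist: a single entry point listing the available bitrates.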

Then we end the day with a battle again. It will either be…

Evaluating the Effectiveness Of Your H.264 Encoder

Not all video encoders are created equal. In this session, the real-world video outputs of top commercial H.264 encoders are compared, including those from Telestream, Harmonic, Sorenson, and Adobe, as well as open-source options such as FFmpeg and x264. Learn what features you should have available in an encoding tool before you invest your organization’s budget in a solution.


Battle Of The $99 Streaming Boxes

With so many streaming devices in the market, trying to determine what each one offers in the way of streaming quality and content inventory can be quite confusing. In this special session, Dan Rayburn will present hands-on demos showcasing the leading streaming devices, including those from Apple, Roku, Boxee, Western Digital, Sony, Vizio and Netgear. Attendees will see these devices in action, learn which content platforms they run, and have a chance to ask questions.

I’m sort of leaning towards the latter, since I can mostly decide already which H.264 encoders work better or worse – but I might learn something new. When it comes to streaming boxes I’m a bit blind (there are too many) and it could be good to see some of them.


The first session is a no-brainer for me…

Building a DASH264 Client

With all the device fragmentation in the market, it is getting increasingly difficult to provide content to all of them equally. The MPEG-DASH specification promises to unify the field and provide a ubiquitous format that can be used by most devices. This technical session explores how to build a DASH264 player. We will explore a few different players, including one built in JavaScript using the MediaSource APIs to run natively in some browsers, and another using OSMF and ActionScript that can run in any browser with a Flash player.

This one is sort of a best-of-the-rest at this point – I’ve done so many livestreaming cases that I could probably host it myself (kidding), but it’s always good to get new input from others as well.

Best Practices For Live Streaming Delivery

This session provides best practices, lessons learned, and a general overview of the technical set-up for a professional live streaming production. Learn about transmission methods (IP, cellular, fiber, satellite), encoding on site or off, picking the proper encoder for the job (software vs. hardware), maximizing encoder & CDN efficiency, and delivering adaptive HD streaming to the desktop, mobile, and OTT boxes. Come learn how to improve your next live event.

Following that, I’m thinking this could be interesting to hear, since I’ve got partners who were involved in that particular case…

How The BBC Ensured Live Streaming Resilience For The Olympics

Live video streams were key to the ambitious online user proposition for the London 2012 Olympics, and that coverage had to mirror the very high traditional broadcast standards of resilience and quality. Hear the challenges the BBC faced when designing a resilient HTTP streaming infrastructure that was designed to cope with huge volumes. Learn about the solution the BBC used during the games and hear what changes to their methodology were required to build resilience into a cloud-based infrastructure.

The last part of the conference seems to be winding-down sessions, and I can’t for the life of me select which of them to go to at this moment. None of them interests me more than the others. We’ll see where I end up.

Stay tuned – more to come as the conference progresses.

DRM is pretty much useless

I know, I’m kicking the hornet’s nest here. There’s no doubt about it. Before we continue I want to make it clear that I’m basically only talking about streaming in this post; it does not relate to downloadable content, because I don’t work with that – yet.

The basis of my position is that no DRM will ever prevent me from copying content if I really want to copy it. Therefore, it’s useless.

Yes, I’m not a fan of DRM, mainly because it adds load to servers and makes it more or less impossible to make content available regardless of platform. Today, unless I use very expensive DRM systems, I’d have to run about 3–5 different servers to serve content to the different platforms, and at least the same number of DRM systems. At least if I’m not going to use some proprietary client that costs an arm and a leg to make and distribute.

Usually, when a client comes to me and says ‘what about DRM? I want my content to be safe!’, I describe the problem and solution like this:

‘If you want DRM you can have it. But….’

And then I explain the cost to the client, and continue to tell him that if I get access to the content legally, I can copy the movie without any big problems – and there’s nothing he or I can do about it. As long as someone legally has access to the content – such as a viewer who’s paid his subscription or a one-time fee to watch a clip – he or she can copy it. As long as I can see the content on the screen and hear it through my soundcard, I can copy it. In my case I can copy it in HD, since my computer can play that without much hassle. So I get quite good quality, and I can do it semi-legally as long as I’ve paid to see it once (or get it for free). Of course, the copying itself is illegal, but nothing can stop it from being done.

There are of course up-and-coming techniques such as UltraViolet, but even that has gotten a lot of grief from users about quality, and if I had anywhere to test it I could probably still get around the ‘protection’ and dump it to disk. The costs are, for me, as of yet either non-existent (as in: I don’t know them), or – for the sites I’ve seen that seem to provide some kind of middleware – we’re talking loads of money going away from me and heading for the middleware makers… some of them have hilarious pricing tables. Or what would you say about $200,000/year (and oh, you have to sign up for 5 years)… good luck with that…

Cheaper and simpler? Watermarking. Either the old version – burning in a logo, which of course can be covered up in almost any video editing software – or, still untested by me, embedding it digitally in the background so you can see who spread the content. The latter seems most interesting to me, since I could see who the culprit is, but it’s still a hassle.
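To illustrate the first flavour – the burned-in mark – here’s a sketch using ffmpeg’s drawtext filter to stamp a per-viewer ID into the picture, so a leaked copy points back at the account it came from. The font path and viewer-ID scheme are my own assumptions:

```python
# Sketch: burn a per-viewer ID into the video with ffmpeg's drawtext filter.
# Requires an ffmpeg build with libfreetype; the font path is an assumption.
def watermark_cmd(src, dst, viewer_id, fontfile="/usr/share/fonts/DejaVuSans.ttf"):
    drawtext = (
        f"drawtext=fontfile={fontfile}:"
        f"text='{viewer_id}':"
        "fontsize=18:fontcolor=white@0.4:"  # semi-transparent, less intrusive
        "x=w-tw-10:y=h-th-10"               # bottom-right corner
    )
    return ["ffmpeg", "-i", src, "-vf", drawtext, "-c:a", "copy", dst]

print(" ".join(watermark_cmd("in.mp4", "out.mp4", "viewer-1234")))
```

As the text says, a visible mark like this can be cropped or covered – it deters rather than prevents.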

So what do I suggest instead? Nothing! Huh? The short answer is: price your content in a way that makes the hassle of ripping and distributing it illegally less interesting. That’s my three cents after tax. I’d like to hear what you think. And yes, you can call me an idiot if it makes you feel better – but it won’t make me change my opinion.

Wowza Media Server 3.0 (preview)

So. I’ve spent a few hours installing a Wowza Media Server 3.0 preview release. Wowza themselves call it “Any Screen Done Right™”, which sounds very promising for me given the type of work I’ve done for the past few years. I haven’t tested it much yet, but on paper it looks pretty darn good. The current project I’m involved in is to port (live-transcode) TV channels to mobile devices (phones, PDAs and such). This is where one of the new features of the coming Wowza Media Server 3 comes in.

I’m talking about the Wowza Transcoder Addon.

According to the release notes this supports adaptive bitrate (ABR) streaming for Flash, Silverlight, and Apple HLS. It should be able to ingest from live encoders, IP cameras, TV headends, and more…

It can handle the following inputs: MPEG-2, MPEG-4 Part 2, and H.264. The output will then be H.264 over any protocol supported by Wowza Media Server. They will also be providing pre-built profiles for web, mobile, and IPTV streaming. The second part is what interests me at this point.

Currently the system supports GPU-acceleration hardware using Intel’s QuickSync or Nvidia’s CUDA, on Windows only, but they’re aiming to offer at least Nvidia CUDA support on Linux as well. My current configuration runs on a Linux server, so at this point I won’t have any GPU acceleration and will have to rely on the server’s normal processors. But then again, the server is running 12 cores and 24 GB of RAM… so we will be stress-testing the transcoder quite harshly, to see how many transcoders we can run on one machine and still get good results.

As you can see I’m writing “we will” – that’s because we’re still waiting for the sources to tell us how they will be providing us with the material (hopefully something that doesn’t need too much configuring on our end).

Apart from the Wowza Transcoder AddOn, the other main new features are:

Wowza Network DVR (nDVR) AddOn

This will be an interesting thing to try, although I can’t see a use for it on my end (yet): a single nDVR cache, with support for pausing, resuming, and rewinding a live stream. It also integrates with the Transcoder for time-shifted streaming and comes with an API for customization. For the final release they’re also planning multiple-bitrate support and multiple-server support. Could be interesting to try – but it’ll have to wait a while, at least until my transcoding tests are done.

When you’re working with IP holders, they have a tendency to obsess over security in general and DRM especially. I have a mantra that I always tell clients when they bring up the question “what about people copying the content – they can’t do that, can they?”, and that mantra is ‘if you can view or listen to the content on a computer, you can record it’ – even if it comes down to placing a camera in front of the screen and plugging a recording device into the soundcard. It’s always possible if you really, really want to do it. That being said, you don’t have to make it easier for them. Thus we end up with the third and, for some, most important addon in the new Wowza Media Server 3.

Wowza DRM AddOn

This addon offers integration with 3rd-party DRM key management systems, including:

  • Verimatrix VCAS™ for playback on HLS devices with ViewRight™ clients
  • Microsoft® PlayReady® for Silverlight Smooth Streaming client playback (not in the preview, but planned for the final release)

According to Wowza they will be announcing other platforms in the future. Suffice it to say, the above-mentioned are good enough, but the more the merrier. As I’ve said, I’m no big fan of DRM since it only adds more load to servers, but if clients demand it – the more support, the better. This addon supports on-the-fly encryption for live and VOD. Specifically, for live it’s encryption per stream with the ability to rotate keys, and for VOD it’s instead per asset or per session with the same abilities.

The only “issue” I’ve found so far is that neither the GPU support nor the Transcoder AddOn will work on Mac OS X – and being an iPerson, I don’t really like that. Hopefully something will come to that platform as well… Another thing, not an issue: Wowza Media Server 3 will be a free upgrade for those on the Wowza Media Server 2 platform, which of course is good. However, the addons mentioned above will be separately priced, and the prices will be made public at a later date.

Now, Wowza Media Server is, in my opinion, one of the friendliest in terms of cost for software of its caliber, but these addons could turn that around. I hope they don’t, but we will have to wait and see.

I tried to get the Wowza people to give me a few hints about what’s coming post-3.0, but only got a “no, we don’t discuss that” – which, I have to say, isn’t that surprising. But I had to ask!

Amazon Live Streaming and CloudFront

The weekend that passed marked a première: using Amazon Web Services as the streaming solution for a live broadcast. No, don’t get me wrong, I’ve done it using EC2 before – several times – but in this case I had to use CloudFormation, and because of that it was a première.

The setup is pretty straightforward.

So here we go, using Amazon’s pre-configured CloudFormation template. In about 10–15 minutes it automates the setup of a CloudFront distribution as well as the EC2 instance needed to serve it. Since this client was estimating 1,000+ concurrent viewers and they were running a 1.2 Mbit/s video stream – and Amazon has a 1 Gbit/s limit – I had to contact Amazon and ask them to raise the limit to 2 Gbit/s, a request they happily obliged.
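The capacity arithmetic behind that request is simple enough to spell out:

```python
# Back-of-the-envelope: aggregate egress for the expected audience.
def aggregate_gbps(viewers, stream_mbps):
    """Total egress in Gbit/s if every viewer pulls the full-rate stream."""
    return viewers * stream_mbps / 1000.0

# 1000 viewers at 1.2 Mbit/s already exceeds a 1 Gbit/s cap,
# hence asking Amazon for 2 Gbit/s of headroom.
need = aggregate_gbps(1000, 1.2)
print(f"{need:.1f} Gbit/s")
```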

The basic setup of how it works is included here. Since the only way of doing live streaming via CloudFront, as far as I know, is to use HTTP Live Streaming instead of standard RTMP streaming, this is of course what we used. Amazon has an excellent article describing exactly how to get it set up, so it’s not what I would consider a case of “rocket science”.
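For anyone unfamiliar with HLS: the client polls a short media playlist whose segment list slides forward during the broadcast, and an `#EXT-X-ENDLIST` tag only appears once the stream has ended. A small parsing sketch (my own illustration, not Amazon’s tooling) that we could use to sanity-check delivery:

```python
# Sketch: parse a live HLS media playlist to sanity-check segment delivery.
# In a live playlist the segment list slides forward and there is no
# #EXT-X-ENDLIST tag; seeing one means the encoder has stopped.
def parse_media_playlist(text):
    target_duration = None
    segments = []
    ended = False
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("#EXT-X-TARGETDURATION:"):
            target_duration = int(line.split(":", 1)[1])
        elif line == "#EXT-X-ENDLIST":
            ended = True
        elif line and not line.startswith("#"):
            segments.append(line)  # segment URI
    return {"target_duration": target_duration, "segments": segments, "ended": ended}
```

Fetching the playlist twice, `target_duration` seconds apart, and comparing the segment lists tells you whether the origin is actually advancing.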

So, how did it go? Well – we found a few minor bugs. These are the issues we encountered during our two broadcasts this weekend:

  1. Minor buffering and drops – after checking, this seemed to be a caching issue on CloudFront (we’ve reported it to Amazon). When a segment (part of the live stream) was missing from the CloudFront cache, the time it took to retrieve it from the origin server was long.
  2. The default cache TTL is way too long. Amazon caches objects on CloudFront for 24 hours after last access. This gave us weird problems, since live streaming is live: when we stopped encoding and then restarted, CloudFront started serving us old content. We resolved it by killing our CloudFormation stack and starting a new instance, thus getting a new CloudFront hostname and an empty cache. (We’ve reported this to Amazon as well.)
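One mitigation worth trying next time (my assumption, not something Amazon prescribed for this template): CloudFront honors `Cache-Control` headers from the origin, so the playlists – which change every few seconds – could carry a tiny TTL, while the segments, which never change once written, keep a long one. Sketched as a simple policy function:

```python
# Sketch: pick a Cache-Control max-age by file type for an HLS origin.
# Playlists change every few seconds; segments, once written, are immutable.
# CDNs like CloudFront generally honor these origin headers.
def cache_control_for(path):
    if path.endswith(".m3u8"):
        return "max-age=2"        # playlist: keep it near-real-time
    if path.endswith(".ts"):
        return "max-age=86400"    # segment: immutable, cache for a day
    return "no-cache"

print(cache_control_for("live/index.m3u8"))
```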

One minor, economic, issue is that the FMS license has either a 1,000-concurrent-viewer limit or, at the next tier, 10,000. Since we were expecting 1,000+ we had to go with the 10,000 version. But – and I haven’t gotten a response to this yet – it would seem that the server only had ONE connection during the broadcast (CloudFront, presumably).

That said, it’s something I’ll have to discuss before our next broadcast, along with getting the small buffering and drops out of the system if possible.

Conclusion: using Amazon for live streaming works quite well. It isn’t free, but you can get “global coverage” with only one EC2 instance by using CloudFront for distribution. So, in short, I can’t complain: it was easily set up, and the people at Amazon that I’ve been in contact with have been more than helpful. (Thanks, Brad.)