
I recently purchased my first dedicated headphone amplifier: a FiiO E10 USB DAC. I found that FiiO had a vendor forum on Head-Fi.org, where FiiO representatives were gathering feedback about a new guitar headphone amp they were developing. When they offered to ship review units to guitarists, I took them up on the offer. Full disclosure before we get any further: FiiO provided this review unit to me at no cost. I’d like it even if it weren’t free, and I would buy one to keep.

The FiiO G01 Guitar Headphone Amplifier on my small mixing board.

The FiiO G01 is a small, metal box (70 x 47 x 23 mm) with 3.5mm headphone and “aux in” jacks, a USB Micro-B connector for charging (a USB-A-to-USB-Micro-B cable is included), a power switch, and three dials for Drive, Tone, and Volume. It features a pop-out 6.35mm (1/4″) instrument jack that, when retracted, fits snugly in a recessed area. It has an LED that indicates power and/or charging state. Black silicone bands around the body act as bumpers, and a second set of red bands is included in the package.

The G01 is powered by an internal battery that’s charged over a powered USB connection. I plugged the included USB cable into an Apple AC adapter to charge the G01. I’ve used the unit for about 1.5 hours and haven’t had to recharge it yet.

I’m primarily a bass player, so I first tried the G01 with my Fender Geddy Lee Jazz Bass. I usually play through a SansAmp Bass Driver DI with a little overdrive; I found the Drive control on the G01 to be a bit powerful for bass, so I only used a little of it. The Tone control acts on the higher frequencies; I dialed it to about halfway. Unfortunately, there are no markings on the dials; they would be useful for anyone who uses the G01 with different guitars or wants to recall settings for different styles of music. As it stands, adjustments must be made by ear alone. The sound I got out of a pair of connected Sennheiser HD203 headphones was pleasing, and I practiced for a while, forgetting that I wasn’t playing through an amp or DI.

I connected the Aux In to two audio sources for testing: a Google Cr-48 Chromebook playing songs from Google Music, and an iPod Nano. There are no controls on the G01 for that input, so level-setting must be done on the source. It was easy to dial in a nice blend of bass guitar with the music for practicing. I practiced along with songs for about 30 minutes at a time during the review, and never felt uncomfortable with the sound.

My father-in-law brought over a guitar he built with neck and bridge Carvin “Allan Holdsworth” pickups, and I used the G01 with it. The guitar had all the pickup mode selections available, so I was able to play in many different modes (single-coil, double-coil, neck only, etc.). I thought the clean sound was impeccable. I’m not an electric guitarist (when I play guitar, it’s acoustic), but I cranked the Drive setting to the max to see what kind of sound I could get out of the G01. It produced a pretty pleasing overdrive with a bit of distortion that sounded neat as I cranked out some of the metal riffs I know. I can see electric guitarists liking the G01’s sound.

In summary: I’ve never been in the guitar headphone amp market, but now I see the utility of these devices. I recommend you check out the FiiO G01 before you buy anything else. I’ll be incorporating headphones into my practice regimen now, which should allow me to practice at odd times when it’s not practical to play through my amp. I can also see it being useful for the traveling musician during downtime.

Check out my set of G01 pics on Flickr.

I cross-posted this at Head-Fi.

Both Dell and HP released their first wave of servers based on the new dual-socket Intel Xeon E5 series last week.  Dell is calling their new servers “12th generation”, and HP is calling theirs “Gen8” (generation eight).  After reviewing both vendors’ offerings, my initial take is that Dell is raising the bar in terms of storage density and features, and HP is matching some of Dell’s recent gains in system manageability.

With their previous generation of servers, Dell lagged behind HP in internal hot-plug disk capability.  The Dell PowerEdge R710 2U server could only hold up to 8 2.5″ (small form factor, or “SFF”) drives, but HP had an option to insert up to 16 in their ProLiant DL380 G7.  The Dell PowerEdge T710 server could only hold up to 16 SFF drives, but HP’s ProLiant DL370 G6 could hold up to 24.  My team at work ended up choosing the DL370 G6 in many cases for applications like file servers, SQL servers, and backup servers because of the large number of local disks.  External disk enclosures from either vendor are very expensive.  With their 12th-generation servers, Dell has increased the internal storage capabilities of their servers significantly.  The PowerEdge R620 can hold up to 10 SFF drives.  The PowerEdge R720 can now hold 8 or 16 SFF drives (with a two-controller 8 + 8 split-backplane option), and Dell has added a 2U PowerEdge R720xd model that can hold up to 14 3.5″ (large form factor, or “LFF”) drives or 26 SFF drives.  The PowerEdge T620 (replacing the T610 and T710 models) can hold up to 32 SFF drives.  On some server models, Dell has also added the capability to address up to 4 of the hot-plug bays directly with PCI Express (PCIe) buses, which work with PCIe SSD (solid-state disk) hot-plug disk options for high-performance workloads.  Dell also offers a feature on their PERC RAID controllers called CacheCade, which allows the controller to use SAS SSDs as additional read/write cache.

HP’s new servers break little new ground in storage compared to the models they replace.  The ProLiant DL360p Gen8 (1U rackmount server) maxes out at 8 SFF drives (same as the G7), the DL380p Gen8 (2U rackmount) can hold up to 16 SFF drives (same as the G7), and the ML350p Gen8 (a 5U tower/rack server similar to the Dell T620) can hold up to 24 SFF drives (same as the DL370 G6).  HP doesn’t have an answer to the PCIe SSD or CacheCade features that Dell offers.  Both vendors’ new-generation RAID controllers (Dell PERC and HP Smart Array) have flash-backed write cache (FBWC) as a standard feature: HP had this when their previous-generation controllers launched; Dell added it after launch.  HP has 1GB and 2GB FBWC options on their new Smart Array P421 controller; Dell shows only 512MB and 1GB options on the new PERC H710.

Both vendors’ servers have similar processor and memory options at launch.  All the servers I’ve referenced here can hold up to two 8-core CPUs and have 24 DIMM slots accepting modules from 2 to 32GB (a maximum of 768GB of RAM).  Other servers that use the Xeon E5 CPUs in a 4-socket configuration are due out later this year and will have even greater memory capacity.  Memory now runs at up to 1600MHz, and the lowest-speed configuration (when many DIMM sockets are populated) is 1066MHz (up from 800MHz in Xeon 5600-based configurations).

With their previous generation of servers, Dell added a new Lifecycle Controller that provided a bootable GUI environment for maintaining servers independently of the installed OS.  HP appears to be trying to match this feature with their new “iLO Management Engine”.  I’m looking forward to seeing how this works.  Both vendors have evolved their lifecycle management over the last few years to be much more OS-agnostic, relying on Linux-based boot environments to provide easy-to-use system management and update tools.

Both vendors are driving up their feature sets, but I think Dell is poised to be a leader in direct-attached storage applications.  I will have to review the systems management offerings from both vendors to see if they’re truly getting more similar.

For the past few years I’ve usually bought CDs that are “special edition” or have bonus material. I have also bought DVD-Audio discs to get 5.1 mixes (mainly Steven Wilson/Porcupine Tree stuff). These formats get archived as tagged FLAC files on my home server. FLAC is my lossless codec of choice because it is an open specification, supports a wide range of resolutions and channels, has good metadata support, and is supported by a wide range of software. I don’t currently own any FLAC playback hardware, but that’s not critical to me. I play back FLACs on the computer using Foobar2000. It’s also my main ripper/converter and librarian. The only gap it has for me is that it doesn’t support writing album art metadata natively. I use either Album Art Downloader, MP3tag, or iTunes for that. Speaking of metadata, in addition to the essential metadata, I make use of album artist, disc number, and ReplayGain tags.
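Foobar2000 and MP3tag handle the tagging day to day, but batch fixes are easy to script as well. Here’s a minimal sketch using the mutagen library (an assumption on my part; it isn’t part of my usual toolchain) to write those extra fields to a FLAC file:

```python
# A sketch using the mutagen library (pip install mutagen) to write the
# extra tags I rely on: album artist, disc number, and ReplayGain.
from mutagen.flac import FLAC

def tag_flac(path, album_artist, disc_number, album_gain_db):
    audio = FLAC(path)
    # FLAC metadata is stored as Vorbis comments; these field names are
    # the common conventions that Foobar2000 and most players recognize.
    audio["ALBUMARTIST"] = album_artist
    audio["DISCNUMBER"] = str(disc_number)
    # Normally a ReplayGain scanner computes this value; it's hand-set here.
    audio["REPLAYGAIN_ALBUM_GAIN"] = f"{album_gain_db:.2f} dB"
    audio.save()

tag_flac("01 - Example.flac", "Porcupine Tree", 1, -7.45)  # hypothetical file
```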

When I buy music online, I prefer to buy FLACs. I also prefer to buy music that has moderate dynamic range. I’ve found that I enjoy music with a ReplayGain value no lower than -8dB the most. When I see ReplayGain values of -10dB or below, I usually have a bias against the mastering, which probably used at least some brickwall limiting to produce such a high volume. Unfortunately, ReplayGain values aren’t published before purchase, so I usually have to buy on reputation. I probably shouldn’t display ReplayGain values in my primary view in Foobar2000, but I do. It’s mainly to make sure the files are tagged with it, but like I said, it does color my anticipation of the music.
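Because everything in my archive carries ReplayGain tags, it’s also easy to script an audit that flags suspiciously loud masters. A sketch, again assuming mutagen; the -10dB cutoff mirrors my rule of thumb above, and the library path is hypothetical:

```python
# Walk a music folder and flag FLACs whose album gain suggests a loud master.
# Assumes ReplayGain tags were already written by a scanner (e.g., Foobar2000).
import os
from mutagen.flac import FLAC

def loud_masters(root, threshold_db=-10.0):
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.lower().endswith(".flac"):
                path = os.path.join(dirpath, name)
                gain = FLAC(path).get("replaygain_album_gain")  # e.g. ["-10.57 dB"]
                if gain and float(gain[0].split()[0]) <= threshold_db:
                    yield path, gain[0]

for path, gain in loud_masters(r"D:\Music\FLAC"):  # hypothetical library path
    print(f"{gain:>10}  {path}")
```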

My FLACs get transcoded to LAME MP3 V2 (~190kbps) for normal playback, which could be on the computer, an iPod or iPhone, or an AirPort Express via AirPlay. If I’m at a computer that has access to my FLAC archives and Foobar2000, I’ll listen to those.
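Foobar2000 does my conversions through its converter UI, but the equivalent command-line pipeline is simple. A sketch wrapping the flac and lame tools with Python’s subprocess, assuming both are on the PATH; tag copying is omitted for brevity:

```python
# Decode a FLAC to WAV on stdout and re-encode with the LAME V2 preset
# (~190kbps VBR). Assumes the flac and lame command-line tools are installed.
import subprocess

def flac_to_mp3_v2(flac_path, mp3_path):
    decoder = subprocess.Popen(
        ["flac", "--decode", "--stdout", flac_path],
        stdout=subprocess.PIPE,
    )
    subprocess.run(["lame", "-V", "2", "-", mp3_path],
                   stdin=decoder.stdout, check=True)
    decoder.stdout.close()
    decoder.wait()

flac_to_mp3_v2("01 - Example.flac", "01 - Example.mp3")  # hypothetical files
```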

I rarely purchase iTunes Plus (256kbps AAC) or MP3 files (190kbps or better), but when I do, it’s with the expectation that I will not transcode them into another lossy codec or to the same codec at a different bitrate. I will burn them to CDs with a note like “from AAC” or “from MP3” to let myself know not to rip the CD later expecting a lossless copy of the original. Sometimes, if I’m using a playback device that’s storage-constrained, I will transcode down to AAC or MP3 at ~128kbps. These files aren’t anything I would archive – they’re for playback only.

Speaking of burning CDs, I do burn discs to get a little more quality in the car than just plugging my iPod or iPhone into the aux jack. Like many, I have quite a few CD playback systems around the house and at work. One little quirk I have is that I like to write CD-TEXT on my burns; my car stereo and a few of the players I use support displaying that info. When I burn CDs, I don’t apply ReplayGain or any volume processing. I burn CDs with Nero, CD Architect (rarely, if I want to do some limited mastering, crossfades, or track adjustments), or Foobar2000 (if I don’t want to write CD-TEXT).

One of the add-ins I had on my old Windows Home Server was a beta of Jungledisk that backed up my Photos folder to Amazon S3. To fill that role on my new WHS 2011 server, I’m trying Cloudberry Backup for Windows Home Server 2011.

Cloudberry offers a free trial, so I downloaded the Add-in package and installed it. WHS warned me that the package was unverified, but I installed it anyway. After installation, the app appeared in the navigation bar as “Cloudberry Online Backup” on the WHS Dashboard and in the list of installed Add-ins. Cloudberry’s user interface has five tabs that are easy to understand. Before I could do anything with it, I needed to give it the credentials for my S3 account in the Settings under “Set Storage Account”. It prompted me for a Bucket name, which I assumed it would create if necessary. It didn’t, and complained that the named Bucket didn’t exist, so I went into the AWS console and created it. After that, the account was added.
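For anyone who’d rather script the bucket creation than click through the AWS console, here’s a minimal sketch using today’s boto3 library (an assumption; Cloudberry itself only needs the bucket to exist), with a hypothetical bucket name:

```python
# Create the S3 bucket that the Cloudberry add-in expects but won't create.
# Assumes boto3 is installed and AWS credentials are configured locally.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
s3.create_bucket(Bucket="my-whs-photo-backup")  # hypothetical bucket name
```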

Because I’m not currently using this to back up critical system data, I went into Options, tuned the upload speed to 75KB/sec (about a quarter of my current upload bandwidth), and turned the process priority down to Low. At this point, I was ready to set up a Backup Plan, so I launched that wizard. I selected my S3 account as the target and was given a choice between the Advanced (default) and Simple Backup Modes. Because these are just photos that I’m not keeping multiple versions of and don’t need encryption for extra privacy, I selected Simple. I’ll probably use Advanced mode for documents or other sensitive data I might back up to S3 in the future.

I was pleased to see that the Backup Plan wizard defaulted to showing backup sources at the share level rather than at the physical drive level (there was an option to add the physical drives to the view). I quickly selected just my “Pictures” share. I used the Advanced Filter settings to back up only files with .JPG, .PNG, .MP4, and .RAW extensions, mainly to avoid backing up system files like thumbs.db and desktop.ini that Windows throws into picture folders. However, it looks like the default selection of “Do not backup system and hidden files” would help me there. I opted for no compression, to use S3’s Reduced Redundancy Storage (RRS), and to purge deleted files after 30 days. I chose a weekly schedule with a backup at 1AM Saturdays to reduce the load on the server. The wizard then created a service to run with Administrator privileges. I liked the email notification options but chose not to use them at this time. Then the backup plan was created, and I set it to run immediately.

I was concerned as I watched the initial phase of the backup. Out of roughly 10,000 picture files totaling more than 24GB, the job status initially listed just 100 files and 1.8GB. However, the network utilization was on target, and the performance impact on the server (an HP ProLiant MicroServer) was negligible. As I watched the initial backup, I paged through some of the other tabs and found them straightforward and easy to understand. As the backup progressed, its number of target files and capacity increased, indicating that it wasn’t going to cap out the backup job early.

I was very pleased to see that, unlike Jungledisk, Cloudberry transferred the folder structure and files without putting them in containers. I was able to navigate the folders and see the pictures from the S3 web console. Very cool.

As I finish this entry, Cloudberry is plugging away at the initial backup and all indications are that it will work even better than my old solution! Recommended.

We were wrapping up watching some stuff on our DVR when I saw NBC’s live special report around 11pm EDT yesterday. The president came on at about 11:30, I think. While I’m glad Osama bin Laden was found, I hope the authorities have enough physical evidence to convince his followers that he’s been eliminated. Supposedly there is indisputable photographic and DNA evidence, but his body was reportedly “buried at sea”. Unfortunately, I think his campaign had much of its intended effect. Post-9/11, the government has spent more on travel-related security in the last 10 years than in the entire prior history of our country, our Department of Defense has ironically been playing offense halfway around the world, and our military spending has heavily contributed to the debt that threatens to wreck our already-struggling economy. I think about how much productivity our economy has lost due to security-related fears, and I get depressed.

I pray that OBL’s death leaves a leadership void that destabilizes global terrorism enough to reduce global violence, decrease the perceived need for war, increase the peace, allow governments to unwind the security theater that’s choking our freedom and productivity, and help us to get back on the right track. Make love, not war, indeed. Let freedom ring.

As a corollary to my last post, here’s an analogy for you:

The difference between 16-bit (CD-quality) and 24-bit (“HD”) audio is 8 bits, and 2**8 = 256. That means 24-bit audio can provide up to 256 times the amplitude “depth” of 16-bit: 256 times as many discrete levels to describe the signal.
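The arithmetic, for anyone who wants to check it: each bit doubles the number of quantization levels and adds roughly 6dB of theoretical dynamic range. A quick Python check:

```python
# Quantization levels and theoretical dynamic range for 16- vs. 24-bit PCM.
# The ~6.02 dB-per-bit figure is the standard 20*log10(2) approximation.
import math

for bits in (16, 24):
    levels = 2 ** bits
    range_db = 20 * math.log10(2) * bits
    print(f"{bits}-bit: {levels:>10,} levels, ~{range_db:.0f} dB")

print(f"Level ratio: {2**24 // 2**16}x")
# 16-bit:     65,536 levels, ~96 dB
# 24-bit: 16,777,216 levels, ~144 dB
# Level ratio: 256x
```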

Let’s compare audio bit depth with water depth. The surface of the water is analogous to unity gain, 0 dB (digital full scale). Unity gain is the loudest representation of sound a digital audio signal can provide. When multiple successive samples hit unity gain, the signal isn’t describing anything during that time; it’s just being loud. Severely brickwall-limited audio can have long runs of samples sitting at 0dB. Think of this audio as a swimmer who stays on the water’s surface, always swimming on top, regardless of the depth.

Let’s say 16-bit audio is like a part of the ocean with a 16-foot-deep coral reef. If a swimmer holds their breath, they can swim down below the surface, explore a bit, and find lots of neat stuff. There’s probably more interesting stuff under the surface of the water than at the top. When audio waveforms steer clear of unity gain (i.e., by not being clipped), more of their original resolution is present.  They can be heard more accurately, just as diving below the surface helps the diver see underwater objects more clearly.

Now, let’s say 24-bit audio is water that’s 256 times deeper than that 16-foot section: about 4100 feet deep; almost a mile. Again, the swimmer could stay on the surface, but they’ll be able to find much more life underneath. Putting on some scuba gear or deep-sea-diving equipment, they’ll be able to go even deeper than when they hold their breath, and potentially find lots more stuff. When audio waveforms don’t hit unity gain and can use at least some of the extra resolution that 24 bits provide vs. 16, they can be played back with even more accuracy. However, if the swimmer (in the case of recorded music, the mastering engineer) stays near the water’s surface, they won’t find much.

Mastering engineers need to learn to swim below the water’s surface before the music industry moves to 24-bit formats by default.

With the CD medium nearing its 30th birthday, I’ve been thinking about what formats and tactics much of the recording industry is using to pull us away from it.

When the CD was originally introduced, I don’t think the industry expected that less than 15 years later, it would be possible for users to digitally extract the audio data from the media and compress, tag, and file it on computers. Because there’s no copy protection in the format, the CD is a great compromise of potential audio quality, durability, and accessibility. The only things that hold it back are its resolution (which isn’t as limiting as some in the industry make it seem) and the fact that it’s a stereo-only format. Since multi-channel, higher-resolution, copy-protected formats like SACD and DVD-Audio haven’t taken off in the market, I think it’s fair to say most users are happy with what the CD delivers. I think if either of those newer formats didn’t include copy protection, more users would embrace them, especially SACD, which can include a CD-compatible layer.

But the industry shows contempt for the CD and its users. Originally introduced at a higher price point than LPs and cassettes, presumably due to higher quality and to subsidize investment in new manufacturing capabilities, CDs have never decreased in average price. Many of the CDs of the ’80s were mastered with the ideal of preserving the dynamic range of the original recordings, but over the years, the loudness war has made the audio quality of most popular music a joke.

They’ve tried for years to kill the jewel case with the Digipak and its ilk. The only way cases with less plastic can be sold as a sustainability improvement is if CD cases are assumed to be throwaway items. I don’t know anyone who ever bought a CD with the intention of throwing it or its case in the garbage.

Listening to the excellent Home Theater Geeks podcast the other day, guest Steve Guttenberg (no, not the actor) made the point that the industry is building up a catalog of high-resolution masters of up to 24-bit, 192kHz quality. I think they’re hedging their bets on possible successors to the CD. Think about it: in many cases, they’re not even trying to use the full resolution of the CD format. Brickwall limiting shaves significant chunks of available dynamic range off the content. Many users have migrated from home, speaker-based listening to mobile car and headphone listening in noisy environments, which makes the dynamic range issue less noticeable. Instead of leaving the dynamic range intact on the recording and letting playback devices handle dynamic range compression (DRC), as DVD players, TVs, and set-top boxes already do, the industry chooses to dumb down the product for the broadest audience.

I think this is all a play to upsell the next disc-based and digital-file formats. Higher quality will be available and may actually be realized, but at a higher price point. Blu-ray discs with high-res stereo and multi-channel audio along with videos are going to be sold at a premium, probably at least 25% more expensive than the comparable CD. This will help drive up the cost of digital-file versions of the higher-definition content, which will most likely use some form of DRM. Even though the industry might have convinced consumers that it caved in when it allowed the sale of unprotected MP3s, it didn’t: MP3s aren’t of equal audio quality to CDs, but they’re priced similarly. The industry also has the opportunity to make a lot more money selling single digital tracks, which offer less profit in physical form. Consider the case of a multi-channel lossless file of the future, with DRM, purchased from a service that restricts the applications or hardware it can be played back on. It may be cheaper than its physical counterpart, but the lack of playback flexibility will cripple the benefit of not having to worry about the media.

Via Scott: Slashdot: Apple in Talks to Improve Sound Quality of Music Downloads:

Apple and music labels are reportedly in discussions to raise the audio quality of the songs they sell to 24-bit. The move could see digital downloads that surpass CD quality, which is recorded at 16 bits at a sample rate of 44.1kHz. It would also provide Apple and the music labels with an opportunity to ‘upgrade’ people’s music collections, raising extra revenue in the process. The big question is whether anyone would even notice the difference between 16-bit and 24-bit files on a portable player, especially with the low-quality earbuds supplied by Apple and other manufacturers.

The recording industry can’t even use 16 bits effectively, and they want to make consumers think that 8 more bits is automatically better in quality. I can just see the ads now: “New 24-bit iTunes HD; it’s 8 louder!”

Louder is not better. Demand better quality CDs. Take a seat and listen on speakers or nice headphones every once in a while. There’s still music out there that sounds great; purchase it and perhaps the industry will realize there are still people who demand a quality product, not just one with higher resolution. A brickwall-limited recording won’t sound any better with 8 more bits.


Update (2012-Mar-10):  Chris Montgomery (Monty @ Xiph.org) posted a fine essay about this that blows mine away.  Go read his take!  Monty is a personal hero of mine, having developed the Ogg Vorbis codec.

As noted here several times since its release (and most famously by Rip Rowan on ProRec), Rush’s 2002 comeback album, Vapor Trails, was poorly mixed and mastered compared to their earlier albums.  In certain circles it became the poster child for the downside of the loudness war; in fact, it’s referenced as an example on that Wikipedia page I just linked to.  Many Rush fans have begged for a re-release with more dynamic range that’ll allow the music to “breathe”, as opposed to being just a wall of sound.

The 2009 release of Retrospective III featured remixes of “One Little Victory” and “Earthshine” from Vapor Trails that showed the benefits of re-treating the album.  To my ears, much of the mud from the mix has been drained out.  I think the individual instruments can be heard better and there’s a bit more ebb and flow in the different movements of each song.  They are more of a joy to listen to.

Here are some short samples of the original and remixed songs that might help the reader to compare:

One Little Victory (30 second sample): Original MP3 | FLAC; Remix MP3 | FLAC
Earthshine (30 second sample): Original MP3 | FLAC; Remix MP3 | FLAC

Now that there’s concrete news of a remixed Vapor Trails from Rush guitarist Alex Lifeson, the promise of the 13-song epic being cleaned up for good is palpable.  I will gladly buy the new CD and keep the old one as a collector’s item.  However, based on the two remixed songs from R3, I’m concerned that a reissue with Richard Chycki’s mixing and Andy VanDette’s mastering might not turn out as well as it could.  A few humble suggestions:

1. Don’t master the reissue with any digital clipping whatsoever.  Audible or not, clipped waveforms are a symptom of audio that’s had its amplitude increased beyond what should be the maximum (a quick way to check for clipping is sketched after this list).  The ReplayGain values of the already-remixed tracks are less than 1dB different from the originals, which indicates that their overall loudness is similar.  I’d rather the loudness and dynamic range resemble the Rush albums of the ’90s, the Rush MFSL releases, or Porcupine Tree CDs.  Heck, use Steven Wilson as a sounding board.  Which gets me to my next point:

2. This CD is for the fans.  It’s to right a wrong.  It shouldn’t be meant to compete with other CDs in a changer or other MP3s on an iPod, in terms of either loudness or price.   Release it cheaply, without a bunch of marketing.  Rush has one of the largest audiences on the internet; believe me, word of the re-release will get around without much advertising.  And the tens (maybe hundreds) of thousands of us who want to buy it will think twice if it’s priced like new material.  $10 seems good.  One suggestion to keep the price low: offer packaging that’s just a cardboard sleeve, meant for owners of the original CD to swap the replacement disc into the original case.  Also, consider an online release option that uses losslessly-compressed FLAC files.

3. Please use this as an opportunity to reassess what “quality sound” means when associated with Rush music, whether it’s on CD or DVD.  If the “Caravan” single is a harbinger of what we can expect from the next Rush album, Clockwork Angels, it’s due to be just as loud as the original VT was.  We have volume knobs and aren’t afraid to use them.  Radio stations and many playback devices have dynamic range compression.  Geddy, Alex, and Neil, please use your influence to improve the audio quality of your band’s product going forward.  I’m looking forward to the next decade of Rush music.
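As promised in point 1, here’s one way to check a track for hard clipping: look for runs of consecutive samples pinned at digital full scale. A sketch assuming the soundfile and numpy libraries; the thresholds are arbitrary starting points, not a standard:

```python
# Count runs of consecutive samples at (or within a hair of) full scale,
# a telltale sign of digital clipping. Assumes soundfile + numpy installed.
import numpy as np
import soundfile as sf

def clipped_runs(path, threshold=0.9999, min_run=3):
    data, _ = sf.read(path)  # floats in [-1.0, 1.0]
    peaks = np.abs(data).max(axis=1) if data.ndim > 1 else np.abs(data)
    runs = current = 0
    for at_ceiling in peaks >= threshold:
        current = current + 1 if at_ceiling else 0
        if current == min_run:  # count each run once, when it reaches min_run
            runs += 1
    return runs

print(clipped_runs("one_little_victory.flac"))  # hypothetical file
```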

Update (2011-Feb-07): Rich Chycki comments on the issue [via Blabbermouth.net]:

A while back, I remixed ‘One Little Victory’ and ‘Earthshine’ from RUSH’s 2002 release ‘Vapor Trails’, to be included on their 2009 release, ‘Retrospective 3’. The remixes were very well-received so the band has decided to let me move ahead and remix the remainder of the CD.

To put rumors to rest, there was no re-recording or performance correction done on the first two tracks and that will continue for the remainder of the CD. The reason ‘new’ details may seem to have appeared in those songs is due to the fact that I listened to the multitracks and interpreted the mix structure without first analyzing the original [deliberately] — so tracks either muted or turned down in the original mixes may shine through differently (the acoustic guitars in ‘Earthshine’ might be a good example of this). As well, the same technical tweaks will continue for the remaining songs and I still will not have a buss limiter on the mix set to stun.

In any case, both the band and I are really excited to re-visit ‘Vapor Trails’ and hope you’ll all enjoy the remix.

The “buss limiter … set to stun” quote gives me some hope, but the mastering engineer needs to play along as well to get better quality than the mixes that were on R3.

…right now, not so much.  It’s shaping up to be the choice I made between MP3 and Ogg Vorbis: MP3 makes the most sense to use for compressed storage and playback on devices, and Vorbis is preferred for streaming.  In this case, H.264 is like MP3 and WebM is like Vorbis (appropriately, since WebM includes Vorbis audio).  Right now it’s not as easy for me to create WebM videos as it was to create Vorbis files back in the day.  I remember using the “spinning fish” applet that Xiph published before there was more embedded support for Vorbis.  Miro Video Converter has a WebM output mode, but it doesn’t appear to be tunable.  Spelunking through the ffmpeg or vpxenc parameters to create WebM videos doesn’t appeal to me.  It’s one thing to get into the LAME and OggEnc parameters when you’re dealing with a single audio stream.  Add video, with its more complex set of parameters, and it’s scary.
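For reference, this is roughly the incantation I’d rather not have to hand-tune. A sketch wrapping ffmpeg with Python’s subprocess, assuming an ffmpeg build with libvpx and libvorbis support; the bitrate and quality settings are placeholders, not recommendations:

```python
# One-shot WebM (VP8 video + Vorbis audio) encode via ffmpeg. Assumes an
# ffmpeg build with libvpx and libvorbis on the PATH; settings are placeholders.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.mp4",       # hypothetical source file
    "-c:v", "libvpx", "-b:v", "1M",    # VP8 video at ~1 Mbps
    "-c:a", "libvorbis", "-q:a", "4",  # Vorbis audio, quality 4 (~128 kbps)
    "output.webm",
], check=True)
```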

I really like being able to crunch out H.264 videos of decent quality from Handbrake that I can use on my iDevices and computers.  While I would like it if the Handbrake developers would provide similar support for WebM, I really don’t have a reason to use WebM videos right now other than for computer playback in certain scenarios.

Google’s decision to remove native H.264 support from Chrome (and hence, Chrome OS) is going to be great for the web because the trickle-down effect of this will be to:

  • Force MPEG LA to choose whether or not to sue Google for patent infringement over the technologies in WebM and finally get some resolution to the same argument that has always prevented companies like Apple from supporting Ogg Vorbis: the lurking possibility that patented techniques are embedded in the open-source media solution.  I don’t think this will happen since it appears that some of the On2 patents have been infringed by MPEG LA’s solutions.
  • Incent hardware makers to add support for WebM because websites, led by YouTube, will make it their native format.  There were (are?) several makers that supported Vorbis decoding in hardware, and I’m not aware that any of them were sued.
  • Make H.264 a completely free implementation for all uses because if it isn’t available for free, software and hardware makers will favor the lower-cost WebM technology.

As far as VP8 video not performing as well as H.264 at similar resolutions and bitrates: it took quite a while for MP3 encoding to catch up to, and in some cases surpass, Vorbis.  There’s no reason to think that with more development, VP8 won’t catch up.  I look forward to using WebM when I have an easy way to encode to the format and can use it in as many places as I can H.264/MPEG-4.

I see this decision more like HD DVD vs. Blu-ray: different logical formats that could be equally supported by hardware and software.  In fact, until Toshiba killed HD DVD, I thought that playback of both formats was the solution that was going to win out.  There’s no reason other than these licensing issues that support for H.264/MPEG-4 and WebM couldn’t co-exist.

How about this: Google will continue to ship H.264 support in Chrome if Microsoft and Apple agree to support WebM in their browsers.

Writing at NBC’s Pro Football Talk, Mike Florio (partially referencing points made by Tony Dungy) makes a great point:

On this topic, NBC’s Tony Dungy made an excellent point during Wednesday’s PFT Live.  The 11-5 Jets must play at the 10-6 Colts, and the 12-4 Ravens will be playing at the 10-6 Chiefs.  No one has complained about the inequity of those situations.

And given that the Packers and Eagles have the same record and that the Packers beat the Eagles in Week One, all four wild-card games feature a division winner hosting a team that, technically, had a better overall season.

Talking about playoff reseeding based solely on record is a slippery slope.  It weakens divisions, which I think are aligned about as well as can be expected given the NFL’s current makeup.
