
Both Dell and HP released their first wave of servers based on the new dual-socket Intel Xeon E5 series last week. Dell is calling their new servers "12th generation", and HP is calling theirs "Gen8" (generation eight). After reviewing both vendors' offerings, my initial take is that Dell is raising the bar in terms of storage density and features, and HP is matching some of Dell's recent gains in system manageability.

With their previous generation of servers, Dell lagged behind HP's internal hot-plug disk capabilities. The Dell PowerEdge R710 2U server could hold only up to 8 2.5" (small form factor, or "SFF") drives, while HP offered up to 16 in the ProLiant DL380 G7. The Dell PowerEdge T710 could hold only up to 16 SFF drives, while HP's ProLiant DL370 G6 could hold up to 24. My team at work often ended up choosing the DL370 G6 for applications like file servers, SQL servers, and backup servers because of the large number of local disks; external disk enclosures from either vendor are very expensive. With their 12th-generation servers, Dell has increased internal storage capacity significantly. The PowerEdge R620 can hold up to 10 SFF drives. The PowerEdge R720 can hold 8 or 16 SFF drives (with a two-controller 8 + 8 split-backplane option), and Dell has added a 2U PowerEdge R720xd model that can hold up to 14 3.5" (large form factor, or "LFF") drives or 26 SFF drives. The PowerEdge T620 (replacing the T610 and T710 models) can hold up to 32 SFF drives. On some server models, Dell has also added the ability to address up to 4 of the hot-plug bays directly over PCI Express (PCIe), which works with hot-plug PCIe SSD (solid state disk) options for high-performance workloads. Dell's PERC RAID controllers also offer a feature called CacheCade, which lets the controller use SAS SSDs as additional read/write cache.

HP's new servers break little new ground in storage compared to the models the Gen8 line replaces. The ProLiant DL360p Gen8 (1U rackmount) maxes out at 8 SFF drives (same as the G7), the DL380p Gen8 (2U rackmount) can hold up to 16 SFF drives (same as the G7), and the ML350p Gen8 (a 5U tower/rack server similar to the Dell T620) can hold up to 24 SFF drives (same as the DL370 G6). HP doesn't have an answer for the PCIe SSD bays or the CacheCade feature that Dell offers. Both vendors' new-generation RAID controllers (Dell PERC and HP Smart Array) include flash-backed write cache (FBWC) as a standard feature: HP had this when its previous-generation controllers launched, while Dell added it after launch. HP offers 1GB and 2GB FBWC options on the new Smart Array P421 controller; Dell lists only 512MB and 1GB options on the new PERC H710.

Both vendors' servers have similar processor and memory options at launch. All the servers I've referenced here can hold up to two 8-core CPUs and have 24 DIMM slots accepting 2GB to 32GB modules, for a maximum of 768GB of RAM. Other servers that use Xeon E5 CPUs in 4-socket configurations are due out later this year and will have even greater memory capacity. Memory now runs at up to 1600MHz, and the lowest-speed configuration (when many DIMM sockets are populated) is 1066MHz, up from 800MHz in Xeon 5600-based configurations.

With their previous generation of servers, Dell added a Lifecycle Controller that provides a bootable GUI environment for maintaining servers independently of the installed OS. HP appears to be trying to match this feature with its new "iLO Management Engine", and I'm looking forward to seeing how it works. Both vendors have evolved their lifecycle management over the last few years to be much more OS-agnostic, relying on Linux-based boot environments to provide easy-to-use system management and update tools.

Both vendors are driving up their feature sets, but I think Dell is poised to be a leader in direct-attached storage applications.  I will have to review the systems management offerings from both vendors to see if they’re truly getting more similar.

For the past few years I’ve usually bought CDs that are “special edition” or have bonus material. I have also bought DVD-Audio discs to get 5.1 mixes (mainly Steven Wilson/Porcupine Tree stuff). These formats get archived as tagged FLAC files on my home server. FLAC is my lossless codec of choice because it is an open specification, supports a wide range of resolutions and channels, has good metadata support, and is supported by a wide range of software. I don’t currently own any FLAC playback hardware, but that’s not critical to me. I play back FLACs on the computer using Foobar2000. It’s also my main ripper/converter and librarian. The only gap it has for me is that it doesn’t support writing album art metadata natively. I use either Album Art Downloader, MP3tag, or iTunes for that. Speaking of metadata, in addition to the essential metadata, I make use of album artist, disc number, and ReplayGain tags.
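
Since Foobar2000 won't write album art tags for me, that step is easy to script. Here's a minimal sketch using Python and the mutagen library to embed a front-cover image into every FLAC in an album folder; the folder path and cover filename are just placeholders for illustration:

    # embed_cover.py - attach front-cover art to FLAC files (sketch; assumes mutagen is installed)
    from pathlib import Path
    from mutagen.flac import FLAC, Picture

    album_dir = Path("/music/Some Album")   # hypothetical album folder
    cover_file = album_dir / "cover.jpg"    # hypothetical cover image alongside the FLACs

    pic = Picture()
    pic.type = 3                            # 3 = front cover in the FLAC picture spec
    pic.mime = "image/jpeg"
    pic.data = cover_file.read_bytes()

    for flac_path in album_dir.glob("*.flac"):
        audio = FLAC(str(flac_path))
        audio.clear_pictures()              # drop any previously embedded art
        audio.add_picture(pic)
        audio.save()

In practice I still lean on Album Art Downloader or MP3tag to find the artwork in the first place; a script like this only handles the embedding.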

When I buy music online, I prefer to buy FLACs. I also prefer to buy music that has moderate dynamic range. I've found that I most enjoy music whose album ReplayGain value is around -8 dB or closer to zero. When I see ReplayGain values of -10 dB or more negative (meaning 10 dB or more of attenuation is needed to hit the reference level), I develop a bias against the mastering, which probably used at least some brickwall limiting to produce such a high volume. Unfortunately, ReplayGain values aren't something you can check before purchasing music, so I usually have to buy on reputation. I probably shouldn't display ReplayGain values in my primary view in Foobar2000, but I do. It's mainly to make sure the files are tagged with it, but like I said, it does color my anticipation of the music.
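
For the curious, here's roughly what that "check the ReplayGain" habit looks like as a script rather than a Foobar2000 column. It's a sketch using Python and mutagen to read the REPLAYGAIN_ALBUM_GAIN tags; the library path and the -10 dB cutoff just reflect my own rule of thumb:

    # rg_scan.py - flag FLACs whose ReplayGain suggests a very loud, heavily limited master (sketch)
    from pathlib import Path
    from mutagen.flac import FLAC

    LIBRARY = Path("/music")    # hypothetical library root
    CUTOFF = -10.0              # album gain at or below this implies 10+ dB of attenuation was needed

    for flac_path in LIBRARY.rglob("*.flac"):
        audio = FLAC(str(flac_path))
        gain = audio.get("replaygain_album_gain", [None])[0]   # e.g. "-9.54 dB"
        if gain is None:
            continue                                           # file hasn't been ReplayGain-scanned yet
        if float(gain.split()[0]) <= CUTOFF:
            print(f"{flac_path}: album gain {gain} - likely brickwalled")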

My FLACs get transcoded to MP3 with LAME's V2 VBR preset (roughly 190kbps) for normal playback, which could be on the computer, on an iPod or iPhone, or through an AirPort Express via AirPlay. If I'm at a computer that has access to my FLAC archives and Foobar2000, I'll listen to those instead.
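
The transcode itself is nothing fancy; under the hood it's just the flac and lame command-line tools chained together. A minimal Python sketch of that pipeline might look like the following (it assumes both binaries are on the PATH and skips tag copying, which Foobar2000 handles for me):

    # flac_to_v2.py - decode a FLAC and encode it with LAME's V2 VBR preset (sketch)
    import subprocess
    from pathlib import Path

    src = Path("01 Track.flac")             # hypothetical input file
    dst = src.with_suffix(".mp3")

    decode = subprocess.Popen(
        ["flac", "-d", "-c", str(src)],     # -d = decode, -c = write WAV to stdout
        stdout=subprocess.PIPE,
    )
    subprocess.run(
        ["lame", "-V2", "-", str(dst)],     # -V2 = VBR quality 2, roughly 190 kbps on average
        stdin=decode.stdout,
        check=True,
    )
    decode.stdout.close()
    decode.wait()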

I rarely purchase iTunes Plus (256kbps AAC) or MP3 files (190kbps or better), but when I do it’s with the expectation that I will not transcode them into another lossy codec or to the same codec at a different bitrate. I will burn them to CDs with a note like “from AAC” or “from MP3” to let myself know not to rip the CD later expecting a lossless copy of the original. Sometimes if I’m using a playback device that’s storage-constrained, I will transcode down to AAC or MP3 at ~128kbps. These files aren’t anything I would archive – they’re for playback only.

Speaking of burning CDs, I do burn discs to listen to stuff in the car with a little more quality than just plugging my iPod or iPhone into the aux jack. Like many, I have quite a few CD playback systems around the house and at work. One little quirk I have is that I like to write CD-TEXT on my burns; my car stereo and a few of the players I use support displaying that info. When I burn CDs, I don’t apply ReplayGain or any volume processing. I burn CDs with Nero, CD Architect (rarely, if I want to do some limited mastering, crossfades, or track adjustments), or Foobar2000 (if I don’t want to write CD-TEXT).

One of the add-ins I had on my old Windows Home Server was a beta of Jungledisk that backed up my Photos folder to Amazon S3. To fill that role on my new WHS 2011 server, I’m trying Cloudberry Backup for Windows Home Server 2011.

Cloudberry offers a free trial, so I downloaded the Add-in package and installed it. WHS warned me that the package was unverified, but I installed it anyway. After installation, the app appeared in the navigation bar as "Cloudberry Online Backup" on the WHS Dashboard and in the list of installed Add-ins. Cloudberry's user interface has five tabs that are easy to understand. Before I could do anything with it, I needed to give it the credentials for my S3 account in Settings under "Set Storage Account". It prompted me for a Bucket name, which I assumed it would create if necessary. It didn't; it complained that the named Bucket didn't exist, so I went into the AWS console and created it. After that, the account was added.
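
Creating the bucket ahead of time is simple enough from the AWS console, but it can also be scripted. Here's a quick sketch with Python and boto3; the bucket name and region are placeholders, and your AWS credentials need to be configured separately:

    # make_bucket.py - create the S3 bucket that Cloudberry will back up into (sketch)
    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")    # hypothetical region
    s3.create_bucket(Bucket="my-whs-photo-backup")      # hypothetical bucket name; must be globally unique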

Because I'm not currently using this to back up critical system data, I went into Options, tuned the upload speed to 75KB/sec (about a quarter of my current upload bandwidth), and turned the process priority down to Low. At that point I was ready to set up a Backup Plan, so I launched that wizard. I selected my S3 account as the target and was given a choice between the Advanced (default) and Simple Backup Modes. Because these are just photos that I'm not keeping multiple versions of and don't need to encrypt for extra privacy, I selected Simple. I'll probably use Advanced mode for documents or other sensitive data I might back up to S3 in the future.

I was pleased to see that the Backup Plan wizard defaulted to showing backup sources at the share level rather than at the physical drive level (there was an option to add the physical drives to the view). I quickly selected just my "Pictures" share. I used the Advanced Filter settings to back up only files with .JPG, .PNG, .MP4, and .RAW extensions, mainly to avoid backing up system files like thumbs.db and desktop.ini that Windows throws into picture folders; the default selection of "Do not backup system and hidden files" would probably have covered that anyway. I opted for no compression, chose S3's Reduced Redundancy Storage (RRS), and set deleted files to purge after 30 days. I chose a weekly schedule with a backup at 1AM Saturdays to reduce the load on the server. The wizard then created a service that runs with Administrator privileges. I liked the email notification options but chose not to use them for now. The backup plan was created, and I set it to run immediately.
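
To show what those choices amount to at the S3 level, here's a rough Python/boto3 sketch that walks a Pictures folder, keeps only the extensions I filtered on, and uploads each file with the Reduced Redundancy storage class. The share path and bucket name are placeholders; Cloudberry does all of this for me, so this is only for illustration:

    # upload_pictures_rrs.py - mirror a Pictures share to S3 using Reduced Redundancy Storage (sketch)
    from pathlib import Path
    import boto3

    SHARE = Path(r"D:\ServerFolders\Pictures")          # hypothetical WHS share path
    BUCKET = "my-whs-photo-backup"                      # hypothetical bucket
    EXTENSIONS = {".jpg", ".png", ".mp4", ".raw"}

    s3 = boto3.client("s3")
    for path in SHARE.rglob("*"):
        if not path.is_file() or path.suffix.lower() not in EXTENSIONS:
            continue                                    # skips thumbs.db, desktop.ini, and the like
        key = path.relative_to(SHARE).as_posix()
        s3.upload_file(
            str(path), BUCKET, key,
            ExtraArgs={"StorageClass": "REDUCED_REDUNDANCY"},
        )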

I was concerned as I watched the initial phase of the backup. Out of more than 24GB across 10,000 picture files, the job status initially listed just 100 files and 1.8GB. However, the network utilization was on target and the performance impact on the server (an HP ProLiant MicroServer) was negligible. While the initial backup ran, I paged through some of the other tabs and found them straightforward and easy to understand. As the backup progressed, the listed file count and total size kept climbing, so it clearly wasn't going to cap the job at that initial estimate.

I was very pleased to see that unlike Jungledisk, Cloudberry transferred the folder structure and files without putting them in containers. I was able to navigate the folders and see the pictures from the S3 web console. Very cool.

As I finish this entry, Cloudberry is plugging away at the initial backup and all indications are that it will work even better than my old solution! Recommended.

I've been running Windows Home Server v1 on a repurposed Dell PowerEdge 400SC for about two years now, and it's been a great filer and backup server. When I found out that WHS 2011 was going to be 64-bit only, I tried out the beta on virtual machines to get a feel for whether I would like it. Would it really be worth setting up a new server to run this new version? My answer was yes.

Even though I was comfortable on WHS v1, the 400SC was maxed out internally with three hard disks and two optical drives. Being a mini-tower, it was using more power than I needed to just do WHS functions. I had originally considered building a mini-ITX-based pedestal server, but when the base components added up to over $300 I checked on the HP ProLiant MicroServer, which I’d been following since its release. For about $320 I could get the server with the first GB of RAM and a 250GB HDD included. I decided to go that route, adding a 2GB DIMM, HP DVD/RW, and 1TB HDD. With the other two HDD bays I would install the two 1TB drives already deployed in my original WHS. The hardware arrived late this week and as of this writing I have decommissioned the old WHS and moved all the data and the HDDs to the new WHS 2011 server.

I decided not to try to use the MicroServer’s limited RAID. I’ve only got about 500GB of data (not including client computer backups) on the server right now, so I’ve spread that out between the first three disks, and I’m using the extra 1TB drive for server backups. Eventually I’ll add a USB or eSATA external disk for that function. I also plan on setting up Cloudberry Backup for WHS2011 to export my photos to Amazon S3. I’d been using a beta JungleDisk plugin for that on my old WHS. A review of Cloudberry Backup is coming.

…right now, not so much. It's shaping up like the choice I made between MP3 and Ogg Vorbis: MP3 makes the most sense for compressed storage and playback on devices, and Vorbis is preferred for streaming. In this case, H.264 is like MP3 and WebM is like Vorbis (appropriately, since WebM includes Vorbis audio). Right now it's not as easy for me to create WebM videos as it was to create Vorbis files back in the day. I remember using the "spinning fish" applet that Xiph published before there was more embedded support for Vorbis. Miro Video Converter has a WebM output mode, but it doesn't appear to be tunable. Spelunking through ffmpeg or vpxenc parameters to create WebM videos doesn't appeal to me. It's one thing to dig into LAME and OggEnc parameters when you're dealing with a single audio stream; add video, with its far more complex set of parameters, and it gets scary.
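
To be fair, the bare-bones ffmpeg invocation isn't impossibly deep; it's the tuning beyond this that puts me off. Here's a sketch of the sort of one-shot conversion I mean, wrapped in Python (it assumes an ffmpeg build with libvpx and libvorbis enabled, and the bitrate is exactly the kind of guess I'd rather not have to make):

    # to_webm.py - one-shot WebM (VP8 video + Vorbis audio) encode via ffmpeg (sketch)
    import subprocess
    from pathlib import Path

    src = Path("input.mp4")                  # hypothetical source clip
    dst = src.with_suffix(".webm")

    subprocess.run([
        "ffmpeg", "-i", str(src),
        "-c:v", "libvpx", "-b:v", "1M",      # VP8 video at ~1 Mbit/s, the knob I'd have to tune
        "-c:a", "libvorbis", "-q:a", "4",    # Vorbis audio, quality 4 (~128 kbps)
        str(dst),
    ], check=True)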

I really like being able to crunch out H.264 videos of decent quality from Handbrake that I can use on my iDevices and computers.  While I would like it if the Handbrake developers would provide similar support for WebM, I really don’t have a reason to use WebM videos right now other than for computer playback in certain scenarios.

Google’s decision to remove native H.264 support from Chrome (and hence, Chrome OS) is going to be great for the web because the trickle-down effect of this will be to:

  • Force MPEG LA to choose whether or not to sue Google for patent infringement over the technologies in WebM, finally resolving the argument that has always prevented companies like Apple from supporting Ogg Vorbis: the lurking possibility that patented techniques are embedded in the open-source media solution. I don't think such a suit will happen, since it appears that MPEG LA's own solutions may infringe some of the On2 patents.
  • Incent hardware makers to add support for WebM because websites, led by YouTube, will make it their native format. There were (are?) several makers that supported Vorbis decoding in hardware, and I'm not aware that any of them were sued.
  • Make H.264 completely free to implement for all uses, because if it isn't available for free, software and hardware makers will favor the lower-cost WebM technology.

As far as VP8 video not performing as well as H.264 at similar resolutions and bitrates: it took quite a while for MP3 encoding to catch up to, and in some cases surpass, Vorbis. There's no reason to think that with more development, VP8 won't catch up. I look forward to using WebM when I have an easy way to encode to the format and can use it in as many places as I can H.264/MPEG-4.

I see this decision as more like HD DVD vs. Blu-ray: different logical formats that could be equally supported by hardware and software. In fact, until Toshiba killed HD DVD, I thought dual-format playback was the solution that was going to win out. There's no reason other than these licensing issues that support for H.264/MPEG-4 and WebM couldn't coexist.

How about this?:  Google will continue to ship H.264 support in Chrome if Microsoft and Apple agree to support WebM in their browsers.

Here's what I plan to do with a CR-48 Chrome OS notebook if I get my hands on one:

  1. Validate that my existing Chrome bookmarks and apps are indeed available on the new computer once I sign in.
  2. Browse the web! (Duh.)
  3. Plug in my Blue Snowball and Eyeball 2.0 and attempt to use them with GTalk.
  4. Browse to the Squeezebox Server instance on my Windows Home Server to listen to music.
  5. Plug in my digital camera and iPhone to upload pictures and video to Flickr.
  6. See what, if any, parts of my podcast production workflow can be done using Chrome OS.
  7. Carry it in a side pocket of my main work laptop case and use it while on the go.

Here’s hoping I’ll get to play with one of these over the Christmas holiday.

Update: I received a CR-48 today! Will be unboxing later tonight.

I just shared this with my extended family on our private forums and thought it would be useful to cross-post here:

You may have heard that Facebook’s default privacy settings are under scrutiny. It’s been discussed to death in the tech media the last few weeks. Among other things, Facebook is trying to make using your Facebook account through “unauthorized” third-party tools a criminal offense, and they’re having third parties insert their code onto non-Facebook websites to track users’ online habits. My takeaway was that I didn’t get enough value from Facebook to justify having a presence there. I’ve deactivated my account and will probably delete it soon; the only thing that’s holding me back is losing the http://www.facebook.com/alex.harden/ URL.

While I’m not recommending that any of you deactivate or delete your accounts, I would recommend that you check your privacy settings and “dial back” those settings that might be making your postings/photos/etc. available to people outside of those who you’re comfortable sharing with. Facebook has changed their default privacy settings several times since the days when “private” was default, and if one hadn’t changed from the defaults, their settings were gradually made more public. The best illustration I’ve seen of this is here: http://mattmckeon.com/facebook-privacy/.

If I were going to continue using Facebook, I would log in only when I wanted to use the Facebook site and then log out. This would prevent Facebook from collecting data from non-Facebook sites I visit without my express consent.

Sorry to get preachy, but I know many of you use Facebook and I wouldn't want you to be caught off guard if your online activities are being observed more than you'd like!

Reading this NYT article in the Sunday Patriot-News, I couldn't help but think that the officials who are up in arms about Google's "inadvertent" Wi-Fi data collection are ignorant of the security already available when browsing the web:

“Google is in the process of frittering away its last shred of credibility,” Mr. [Till] Steffen [the justice senator for the city-state of Hamburg] said. “The company must immediately disclose to what degree it has secretly eavesdropped as we’ve sent e-mails to friends in Germany and the rest of Europe or as we’ve done our banking in the Internet.”

This prompts a question: are there still banks that don’t use HTTPS when dealing with customers’ sensitive data over the internet?  Even if someone is using open, unencrypted Wi-Fi, their HTTPS session data is protected with encryption.  That would also be the case for any other protocols that encrypt their payload end-to-end (POP3S, SFTP, SSH, etc.).  For example, I use HTTPS sessions by default with Gmail and Google Reader.
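
If you want to see that protection for yourself, a few lines of Python against the standard ssl module will show the cipher an HTTPS server negotiates before any sensitive data crosses the open Wi-Fi. This is only an illustrative sketch, and the hostname is just an example:

    # tls_check.py - print the protocol and cipher negotiated with an HTTPS site (sketch)
    import socket
    import ssl

    host = "mail.google.com"                    # example host; any HTTPS site works
    context = ssl.create_default_context()

    with socket.create_connection((host, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("protocol:", tls.version())   # e.g. TLSv1.2 or TLSv1.3
            print("cipher:  ", tls.cipher())    # (cipher name, protocol, secret bits)

Anything a Wi-Fi sniffer captures from that session is ciphertext; only the hostname and traffic patterns are visible.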

The cited German privacy laws, as applied to electronic communications, seem to be a way to compensate for the ignorance of those who implement and use this technology in insecure ways. I'm not a fan of Google's collection of that data, but I don't think Google is on the wrong side of this issue. Wi-Fi is a broadcast technology using public airwaves, and if you're not securing your broadcast, you're open to being spied upon.

I think the bigger issue here is whether the benefits of technologies like Street View and Wi-Fi-based geolocation outweigh the personal liberty of people whose image or data might be caught by a machine. Would it make a difference if the Street View vehicles had a bunch of photographers in the back instead of automated cameras? If they had nerds wardriving with laptops instead of automated Wi-Fi sniffers/collectors? I can only recommend that you protect your communications and wear a mask in public if you're worried about this kind of stuff. Or, for now, move to Germany. 🙂

Yes, I’m a fan of Google in general and Street View in particular.  It’s nice to be able to view pictures of an unfamiliar location before having to navigate it for the first time.

Monday’s Gillmor Gang finally convinced me that FriendFeed is the way to go to aggregate my online activities.  I’m “aharden” on FriendFeed, just like I am here and on my preferred services moving forward.

Which services do I prefer?  They’re the ones I’ll link to FriendFeed.  I may still use a service like ping.fm to update my status/microblog in multiple locations, but only one of those places (right now it’s identi.ca) will be in my FriendFeed.

A few changes as a result of this:

  • No more daily link posting here.  My del.icio.us links are in my FriendFeed.
  • No more Yahoo! Pipes on my homepage (which is still being rebuilt — slowly) – I’ll embed FriendFeed.
  • I’m going to start using my last.fm account more to see what kind of data will flow out of it.
  • This may be the tool that gets me out of Bloglines and into Google Reader.  We’ll see.  I think some of the data flows I watch in Bloglines will be obviated by what I’ll end up watching in FriendFeed.

I think FriendFeed is probably doing the best job of both aggregating content and stimulating conversation around it.  I haven’t used it much, but I’ve heard a lot about it.

The only portion of my social graph that I mined when joining FriendFeed was my Gmail contacts.  I plan to ease myself into the FriendFeed pool by being careful about who I choose to follow.

From VROOM!:

A VMmark Review Panel comprised of several of our partners has recently been formed in order to provide additional rigor and transparency to the benchmark reviews. The founding members of the review panel are AMD, Dell, and HP, in addition to VMware. (We hope to add more partners soon.)

[…]

This broader and more open review process will produce an even greater level of confidence in the accuracy and compliance of published results.

I agree.  I’ve blogged about the veracity of VMmark results before and it generated some good discussion.
