
Both Dell and HP released their first wave of servers based on the new dual-socket Intel Xeon E5 series last week. Dell is calling their new servers “12th generation”, and HP is calling theirs “Gen8” (generation eight). After reviewing both vendors’ offerings, my initial take is that Dell is raising the bar in terms of storage density and features, and HP is matching some of Dell’s recent gains in system manageability.

With their previous generation of servers, Dell lagged behind HP’s internal hot-plug disk capabilities. The Dell PowerEdge R710 2U server could hold only up to 8 2.5″ (small form factor, or “SFF”) drives, while HP offered an option for up to 16 in their ProLiant DL380 G7. The Dell PowerEdge T710 could hold only up to 16 SFF drives, while HP’s ProLiant DL370 G6 could hold up to 24. My team at work ended up choosing the DL370 G6 in many cases for applications like file servers, SQL servers, and backup servers because of the large number of local disks; external disk enclosures from either vendor are very expensive.

With their 12th-generation servers, Dell has increased internal storage capacity significantly. The PowerEdge R620 can hold up to 10 SFF drives. The PowerEdge R720 can hold 8 or 16 SFF drives (with a two-controller 8 + 8 split-backplane option), and Dell has added a 2U PowerEdge R720xd model that can hold up to 14 3.5″ (large form factor, or “LFF”) drives or 26 SFF drives. The PowerEdge T620 (replacing the T610 and T710 models) can hold up to 32 SFF drives. On some server models, Dell has also added the ability to address up to 4 of the hot-plug bays directly over PCI Express (PCIe), enabling hot-plug PCIe SSD (solid-state disk) options for high-performance workloads. Dell’s PERC RAID controllers also offer a feature called CacheCade, which allows the controller to use SAS SSDs as additional read/write cache.

HP’s new servers break little new ground in storage compared to the models the Gen8 line replaces. The ProLiant DL360p Gen8 (1U rackmount) maxes out at 8 SFF drives (same as the G7), the DL380p Gen8 (2U rackmount) can hold up to 16 SFF drives (same as the G7), and the ML350p Gen8 (a 5U tower/rack server similar to the Dell T620) can hold up to 24 SFF drives (same as the DL370 G6). HP doesn’t have an answer for the PCIe SSDs or the CacheCade feature that Dell offers. The new-generation Dell PERC and HP Smart Array RAID controllers both have flash-backed write cache (FBWC) as a standard feature: HP had this when their previous-generation controllers launched; Dell added it after launch. HP offers 1GB and 2GB FBWC options on their new Smart Array P421 controller; Dell shows only 512MB and 1GB options on the new PERC H710.

Both vendors’ servers have similar processor/memory options at launch. All the servers I’ve referenced here can hold up to two 8-core CPUs and have 24 DIMM slots for RAM in sizes from 2GB to 32GB (for a maximum of 768GB). Other servers using Xeon E5 CPUs in 4-socket configurations are due out later this year and will have even greater memory capacity. Memory now runs at up to 1600MHz, and the lowest-speed configuration (when many DIMM sockets are populated) is 1066MHz, up from 800MHz in Xeon 5600-based configurations.

With their previous generation of servers, Dell added a new Lifecycle Controller that provided a bootable GUI environment for maintaining servers independently of the installed OS. HP appears to be matching this feature with their new “iLO Management Engine”; I’m looking forward to seeing how it works. Both vendors have evolved their lifecycle management over the last few years to be much more OS-agnostic, relying on Linux-based boot environments to provide easy-to-use system management and update tools.

Both vendors are driving up their feature sets, but I think Dell is poised to be a leader in direct-attached storage applications.  I will have to review the systems management offerings from both vendors to see if they’re truly getting more similar.

For the past few years I’ve usually bought CDs that are “special edition” or have bonus material. I’ve also bought DVD-Audio discs to get 5.1 mixes (mainly Steven Wilson/Porcupine Tree stuff). These formats get archived as tagged FLAC files on my home server. FLAC is my lossless codec of choice because it’s an open specification, supports a wide range of resolutions and channels, has good metadata support, and is supported by a wide range of software. I don’t currently own any FLAC playback hardware, but that’s not critical to me. I play back FLACs on the computer using Foobar2000, which is also my main ripper/converter and librarian. The only gap it has for me is that it doesn’t support writing album art metadata natively; I use Album Art Downloader, MP3tag, or iTunes for that. Speaking of metadata, in addition to the essentials I make use of album artist, disc number, and ReplayGain tags.
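If you ever wanted to script that kind of tagging, here’s a minimal sketch using Python’s mutagen library (not part of my actual workflow; the filename and tag values are just placeholders):

```python
# Illustrative only: reading/writing FLAC tags with mutagen.
from mutagen.flac import FLAC

audio = FLAC("song.flac")  # placeholder filename

# The extras I mentioned above, stored as Vorbis comments
audio["albumartist"] = "Porcupine Tree"
audio["discnumber"] = "1"

# ReplayGain tags are plain Vorbis comments too
print(audio.get("replaygain_album_gain"))  # e.g. ['-6.48 dB']

audio.save()
```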

When I buy music online, I prefer to buy FLACs. I also prefer music with moderate dynamic range: I’ve found that I most enjoy albums whose ReplayGain value is -8dB or closer to zero. When I see ReplayGain values at or beyond -10dB, I’m usually biased against the mastering, which probably used at least some brickwall limiting to produce such a high volume. Unfortunately, ReplayGain values aren’t something you can check before purchasing music, so I usually have to buy on reputation. I probably shouldn’t display ReplayGain values in my primary view in Foobar2000, but I do. It’s mainly to make sure the files are tagged with it, but like I said, it does color my anticipation of the music.
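If I wanted to audit the archive for hot masters, a small script along these lines would flag anything at or beyond that -10dB threshold (again just a sketch with mutagen; the library path is hypothetical):

```python
# Flag albums whose ReplayGain suggests a loud (possibly brickwalled) master.
from pathlib import Path
from mutagen.flac import FLAC

for flac_path in Path("archive").rglob("*.flac"):  # placeholder library root
    gain = FLAC(str(flac_path)).get("replaygain_album_gain")
    if not gain:
        continue  # not ReplayGain-scanned yet
    value = float(gain[0].split()[0])  # e.g. "-10.20 dB" -> -10.2
    if value <= -10.0:
        print(f"{flac_path}: {gain[0]} (likely heavily limited)")
```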

My FLACs get transcoded to LAME MP3 -V2 (~190kbps) for normal playback, whether on the computer, on an iPod or iPhone, or streamed to an AirPort Express via AirPlay. If I’m at a computer that has access to my FLAC archives and Foobar2000, I’ll listen to those instead.
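Foobar2000 does the conversion for me, but the pipeline is simple enough to sketch. Assuming the flac and lame command-line tools are installed (the folder names are placeholders, and tag copying is omitted for brevity), something like this mirrors the archive as V2 MP3s:

```python
# Mirror a FLAC archive as LAME -V 2 (~190kbps VBR) MP3s.
import subprocess
from pathlib import Path

SRC = Path("archive")   # lossless masters (placeholder path)
DST = Path("portable")  # MP3 copies for portable players (placeholder path)

for flac_file in SRC.rglob("*.flac"):
    mp3_file = (DST / flac_file.relative_to(SRC)).with_suffix(".mp3")
    mp3_file.parent.mkdir(parents=True, exist_ok=True)
    # flac -d decodes, -c writes WAV to stdout, -s silences progress;
    # lame -V 2 is the ~190kbps VBR preset, and "-" reads WAV from stdin.
    decoder = subprocess.Popen(["flac", "-d", "-c", "-s", str(flac_file)],
                               stdout=subprocess.PIPE)
    subprocess.run(["lame", "--silent", "-V", "2", "-", str(mp3_file)],
                   stdin=decoder.stdout, check=True)
    decoder.stdout.close()
    decoder.wait()
```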

I rarely purchase iTunes Plus (256kbps AAC) or MP3 files (190kbps or better), but when I do, it’s with the expectation that I won’t transcode them into another lossy codec or to the same codec at a different bitrate. I will burn them to CDs with a note like “from AAC” or “from MP3” to remind myself not to rip the CD later expecting a lossless copy of the original. Sometimes, if I’m using a playback device that’s storage-constrained, I will transcode down to AAC or MP3 at ~128kbps. Those files aren’t anything I would archive; they’re for playback only.

Speaking of burning CDs, I do burn discs to listen to stuff in the car with a little more quality than just plugging my iPod or iPhone into the aux jack. Like many, I have quite a few CD playback systems around the house and at work. One little quirk I have is that I like to write CD-TEXT on my burns; my car stereo and a few of the players I use support displaying that info. When I burn CDs, I don’t apply ReplayGain or any volume processing. I burn CDs with Nero, CD Architect (rarely, if I want to do some limited mastering, crossfades, or track adjustments), or Foobar2000 (if I don’t want to write CD-TEXT).

  1. Validate that my existing Chrome bookmarks and apps are indeed available on the new computer once I sign in. << DONE
  2. Browse the web! (Duh.) << DONE
  3. Plug in my Blue Snowball and Eyeball 2.0 and attempt to use them with GTalk.
  4. Browse to the Squeezebox Server instance on my Windows Home Server to listen to music. << Squeezebox didn’t work. I got this going with Firefly Media Server and Fireplay; however, I couldn’t save the Fireplay web page as a bookmark or app. Filed a bug report.
  5. Plug in my digital camera and iPhone to upload pictures and video to Flickr. << Plugged in my iPhone and couldn’t browse its file system while trying to use Flickr’s web uploader. Filed a bug report.
  6. See what, if any, of my podcast production workflow might be done using ChromeOS.
  7. Carry it in a side pocket of my main work laptop case and use it while on the go. << DONE.  When you’ve got six geeks ogling your new tech toy, you know it’s popular. :)
  8. (New) Edited my WordPress blog!


I received a Google CR-48 netbook from their Pilot Program yesterday. One of the first things I wanted to do with it was get it playing music from my home library, which is hosted on my Windows Home Server. I’ve run both Firefly Media Server and Squeezebox Server on it for a few years. Firefly serves out iTunes-compatible DAAP, and Squeezebox Server serves Squeezebox-compatible clients like MainSqueeze on the Roku. Since I knew Squeezebox had an HTTP interface, I thought it would be the way to integrate with ChromeOS, but I’d forgotten that it’s only a control interface: playback happens on a device, not in the web page itself.

That reminded me of the Fireplay add-on for Firefly, which I’d read about but never had a need to use. While there is a packaged add-on available for WHS, installing it didn’t put the necessary files in the Firefly web interface directory; manually copying the files into the directory and restarting the Firefly service did the trick. Fireplay is a Flash-based player that communicates directly with the Firefly Media Server.

Fireplay on ChromeOS

Brief instructions:

  1. Obtain and install Firefly Media Server.  I have mine configured to use port 9999 for its web service.  It has an admin password, but not a music (streaming) password.
  2. Obtain Fireplay from here.
  3. Unzip the Fireplay files into your Firefly instance’s admin-root folder; mine’s at “C:\Program Files\Firefly Media Server\admin-root”.  Detailed directions are here.
  4. Restart the Firefly Media Server service.
  5. Browse to the Firefly server using a URL like “http://<servername>:<port>/FirePlay.html”; mine is http://ghostrider:9999/FirePlay.html
  6. Log in with a blank username and your admin (not music) password.
  7. Enjoy!
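Once the service restarts, a quick sanity check that the Fireplay page is being served can be done from any machine with Python (the hostname and port here match my example above; substitute your own):

```python
# Confirm the Firefly web service is serving the Fireplay page.
# An HTTPError 401 here would just mean the admin login is required.
import urllib.request

url = "http://ghostrider:9999/FirePlay.html"  # hostname/port from my setup
with urllib.request.urlopen(url) as response:
    print(response.status, response.reason)  # expect "200 OK"
```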
  1. Validate that my existing Chrome bookmarks and apps are indeed available on the new computer once I sign in.
  2. Browse the web! (Duh.)
  3. Plug in my Blue Snowball and Eyeball 2.0 and attempt to use them with GTalk.
  4. Browse to the Squeezebox Server instance on my Windows Home Server to listen to music.
  5. Plug in my digital camera and iPhone to upload pictures and video to Flickr.
  6. See what, if any, of my podcast production workflow might be done using ChromeOS.
  7. Carry it in a side pocket of my main work laptop case and use it while on the go.

Here’s hoping I’ll get to play with one of these over the Christmas holiday.

Update: I received a CR-48 today! Will be unboxing later tonight.

I just shared this with my extended family on our private forums and thought it would be useful to cross-post here:

You may have heard that Facebook’s default privacy settings are under scrutiny. It’s been discussed to death in the tech media the last few weeks. Among other things, Facebook is trying to make using your Facebook account through “unauthorized” third-party tools a criminal offense, and they’re having third parties insert their code onto non-Facebook websites to track users’ online habits. My takeaway was that I didn’t get enough value from Facebook to justify having a presence there. I’ve deactivated my account and will probably delete it soon; the only thing that’s holding me back is losing the http://www.facebook.com/alex.harden/ URL.

While I’m not recommending that any of you deactivate or delete your accounts, I would recommend that you check your privacy settings and “dial back” anything that might be making your postings, photos, etc. available to people beyond those you’re comfortable sharing with. Facebook has changed their default privacy settings several times since the days when “private” was the default, and if you never changed the defaults, your settings gradually became more public. The best illustration I’ve seen of this is here: http://mattmckeon.com/facebook-privacy/.

If I were going to continue using Facebook, I would log in only when I wanted to use the Facebook site and would log out afterwards. That would prevent Facebook from collecting data about my visits to non-Facebook sites without my express consent.

Sorry to get preachy, but I know many of you use Facebook and I wouldn’t want you to be caught off guard if your online activities are being observed more than you’d like!

Reading this NYT article in the Sunday Patriot-News, I couldn’t help but think that the officials who are up in arms about Google’s “inadvertent” Wi-Fi data collection are ignorant of the security already available when browsing the web:

“Google is in the process of frittering away its last shred of credibility,” Mr. [Till] Steffen [the justice senator for the city-state of Hamburg] said. “The company must immediately disclose to what degree it has secretly eavesdropped as we’ve sent e-mails to friends in Germany and the rest of Europe or as we’ve done our banking in the Internet.”

This prompts a question: are there still banks that don’t use HTTPS when dealing with customers’ sensitive data over the internet?  Even if someone is using open, unencrypted Wi-Fi, their HTTPS session data is protected with encryption.  That would also be the case for any other protocols that encrypt their payload end-to-end (POP3S, SFTP, SSH, etc.).  For example, I use HTTPS sessions by default with Gmail and Google Reader.
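For the skeptical, it’s easy to verify that such a session is encrypted end-to-end; a few lines of Python will show the negotiated TLS protocol version and the server’s certificate identity (the hostname is just an example):

```python
# Show that a connection to an HTTPS endpoint is encrypted:
# print the negotiated protocol version and the certificate subject.
import socket
import ssl

hostname = "mail.google.com"  # example hostname
context = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())                 # e.g. "TLSv1.3"
        print(tls.getpeercert()["subject"])  # certificate identity
```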

The cited German privacy laws, as they apply to electronic communications, seem to be a way to compensate for the ignorance of those who implement and use this technology in insecure ways. I’m not a fan of Google’s collection of that data, but I don’t think they’re on the wrong side of this issue. Wi-Fi is a broadcast technology using public airwaves, and if you’re not securing your broadcast, you’re open to being spied upon.

I think the bigger issue here is whether the benefits of technologies like Street View and Wi-Fi-based geolocation outweigh the personal liberty of people whose image or data might be caught by a machine. Would it make a difference if the Street View vehicles had a bunch of photographers in the back instead of automated cameras? If they had nerds wardriving with laptops instead of automated Wi-Fi sniffers/collectors? I can only recommend that you protect your communications and wear a mask in public if you’re worried about this kind of stuff. Or, for now, move to Germany. :)

Yes, I’m a fan of Google in general and Street View in particular.  It’s nice to be able to view pictures of an unfamiliar location before having to navigate it for the first time.

From VROOM!:

A VMmark Review Panel comprised of several of our partners has recently been formed in order to provide additional rigor and transparency to the benchmark reviews. The founding members of the review panel are AMD, Dell, and HP, in addition to VMware. (We hope to add more partners soon.)

[...]

This broader and more open review process will produce an even greater level of confidence in the accuracy and compliance of published results.

I agree.  I’ve blogged about the veracity of VMmark results before and it generated some good discussion.

VMmark is a virtualization throughput benchmark developed by VMware to test its products’ performance on compatible hardware configurations. Its job is to stress the CPU/memory subsystem of a server hosting virtual machines and index its performance at its maximum acceptable workload. Vendors document VMmark tests with VMware products (normally ESX) on a given hardware/software configuration and submit the results to VMware, who publishes them on a web site.

VMmark came out of beta with version 1.0 in July 2007. To date, Dell, HP, IBM, and Sun have submitted results that have been published by VMware. The results cover the AMD Opteron and Intel Xeon server platforms, which all four server vendors now provide to varying degrees. It’s been a useful resource for me, since the competition between AMD and Intel over the last few years has resulted in each vendor taking turns leading in virtualization performance in the 2-socket and 4-socket x86 server spaces. Regardless of which vendor submits a VMmark result for a particular processor/memory/chipset combination, the result can usually be inferred to be similar to what would be obtained on another vendor’s implementation of that combination. Based on a recent conversation I had with HP, they expect that customers will make that inference. I had approached them twice about HP’s lack of up-to-date VMmark results for their flagship virtualization platforms, and was told that they hadn’t submitted recent benchmarks due to their reluctance to publish results with non-production VMware ESX builds and/or hardware that wasn’t yet available to customers. Because other vendors were publishing results on current or upcoming platforms sooner, HP apparently didn’t see much return on going through the trouble and cost of performing and documenting VMmarks on their implementation of similar platforms.

Note that when I described VMmark, I mentioned compatible, not supported, hardware configurations; that’s because VMware has published results from vendors that used pre-release, unsupported software and/or hardware. I think this is the most likely reason Dell was the first to release a quad-core-Opteron-based VMmark. If you look at the disclosure for that submission, you’ll see that it was run on a PowerEdge R905 with 2.5GHz quad-core Opterons (model 8360 SE), a processor model that isn’t available for purchase in that server today; the fastest available R905 today has model 8356 (2.3GHz) processors. Dell’s submitted results for their PowerEdge R900 with Xeon 7350 processors used a beta version of VMware ESX Server v3.5 (build 62773) and were obtained on November 16, 2007: a few weeks before the production release of ESX 3.5 (build 64607) on December 10th. In fact, of the 16 total VMmark results published as of today, the only vendor to have submitted results using hardware or software unavailable at the time of publishing is Dell.

To better reflect the version and status of hardware and software used to obtain the published results, I think VMware should:

  • refuse to publish results that use pre-release hardware and/or software
  • clearly state the availability and/or versions of the tested hardware and software in the system descriptions on the results page

That would allow customers like me to better determine the veracity of a published score without having to be a detective. As VMmark evolves and future SPEC-sanctioned virtualization benchmarks come to market, it would be nice to be able to see more, relevant benchmarks from more vendors rather than gamed, dubious benchmarks from a few.

I’d been intrigued by the Linksys NSLU2 (and the associated NSLU2-Linux project) a while back. [see linkblog] When some $30 refurb units came up on Slickdeals yesterday, I couldn’t say no; I should have one in a couple of days. Any advice as to which firmware to run on it would be helpful. I plan on trying out the different MP3 server packages on it, as well as doing general CIFS serving and some Python hacking. I’m leaning towards SlugOS/BE.

Update (3/31):  I spoke too soon.  CompUSA (I mean TigerDirect) first put a hold on my order to confirm my PayPal credit, then they put it on backorder, now they’ve cancelled it.  And I had to keep going to their website to find out the status; they didn’t have the balls to email me to let me know.  Guess I didn’t get in on the Slickdeal or run in the hamster wheel quickly enough.  After this disappointment I can’t recommend doing business with CompUSA.com or TigerDirect when there are awesome mail-order places like NewEgg.
