Linux Journal/2008.11


Linux Journal Contents #175, November 2008

Features

  • The Roadrunner Supercomputer: a Petaflop's No Problem by James Gray
IBM and Los Alamos National Lab teamed up to build the world's fastest supercomputer.
  • Massively Parallel Linux Laptops, Workstations and Clusters with CUDA by Robert Farber
Unleash the GPU within!
  • Increase Performance, Reliability and Capacity with Software RAID by Will Reese
Put those extra hard drives to work.
  • Overcoming the Challenges of Developing Applications for the Cell Processor by Chris Gottbrath
Introducing techniques for troubleshooting programs written for the Cell processor.

Letters - Bashing Arithmetic

I was surprised to see Dave Taylor imply that the magic bash variable $RANDOM is a feature of $(( )) arithmetic syntax [see “Movie Trivia and Fun with Random Numbers”, LJ, August 2008]. It is just a magic variable that can be used anywhere. Likewise, there is no need to use both $(expr) and $(( )); one or the other is sufficient. In particular, lines like:

pickline="$(expr $(( $RANDOM % 250 )) + 1 )"

could have been simplified to:

pickline=$(( $RANDOM % 250 + 1 ))

I might also mention in passing that double quoting is redundant in variable assignment, even if the expression would normally be unsafe without quoting.

— Peter Samuelson

Dave Taylor replies: Thanks for your note. I didn't mean to imply that $RANDOM was part of the $(( )) notation, but I will say that in my experience it's far more useful in that context than elsewhere I use it. Finally, although the double quotes are occasionally unneeded, I find that a consistent style (for example, always quote variable assignments) helps with debugging.
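For readers who want to try these points at the prompt, here is a minimal sketch (the variable name is arbitrary):

$ echo "A random value: $RANDOM"       # $RANDOM works anywhere, not only inside $(( ))
$ pickline=$(( $RANDOM % 250 + 1 ))    # arithmetic expansion alone is enough; no expr needed
$ pickline="$(( $RANDOM % 250 + 1 ))"  # quoting the assignment is optional but harmless
$ echo "Picked line $pickline"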

Letters - Linux Hardware Support

It's a Vendor Thing

I am not a computer specialist and neither do I have any interest in computer code. But, I use a computer most of the day, every day. Having been stuck with Windows (which I don't like because of the way everything I do is controlled by Microsoft), I recently bought a small laptop with Linux as the operating system. It is an absolute disaster area. For a start, it is incompatible with 3 mobile broadband (I have read a number of blogs and even the experts agree on that). I have had no success in loading Java, which is essential for the work I do. And, I can't even load a 56K modem for emergency use. In short, it is totally useless to me, and I am going to have to load up Windows XP instead—much against my wishes. I had hoped that Linux was a serious competitor to Microsoft, but in reality, it is light-years away, strictly for computer specialists. Of course, I could spend days and days reading up on how to make it work, but why should I? I only want to use the computer, not re-invent it. Kernels, shells, command prompts—these things are of no interest to me whatsoever. It's back to the dark days of MSDOS all over again.


Shawn Powers replies: I feel your pain. It is so frustrating to buy a computer, especially one preloaded with Linux, only to have it fail during normal, everyday tasks.

It's frustrating as a Linux evangelist when vendors sell pre-installed computers that don't work quite right. I assure you, it's not a Linux thing, but rather a vendor thing. If a vendor shipped a Windows notebook without the drivers, I'd venture to guess it would be even less useful than your Linux laptop.

UpFront - kmemcheck

The kmemcheck code now has an official maintainer—or actually two official maintainers. Vegard Nossum and Pekka Enberg will share maintainership. The kmemcheck patch is a cool little debugging tool that logs whenever memory is read that had not been written previously. Clearly, there's a bug if we're trying to read data from a location to which we never wrote.

Cooking with Linux - Warp-Speed Blogging - OpenID

microblogging servers (and services). Microblogging (MB) is an interesting mix of blogging and instant messaging. Posts are generally updates sent either publicly or to a list of people who “follow” your published updates. The updates are limited to 140 characters, the traditional length of SMS phone text messages. The 140-character limit forces you to be brief, but it also makes it possible to follow the updates of a great many people. It doesn't take long to write 140 characters, and it doesn't take long to read either.

The best known MB service on the Internet is probably Twitter, followed distantly by Jaiku. Although those two may be the best known, they aren't necessarily the best in terms of functionality or features. Today, I show you three MB services built entirely on free software. In all cases, I assume you have a working Apache server and MySQL installation. Depending on where you want to locate each service, you may need to update your Apache configuration as well.

The first MB service I have to show you is arguably the one that has received the most attention of late. Launched in July 2008 by Evan Prodromou, Identi.ca is an MB service built on the free and open-source Laconica software. Laconica supports a Twitter-like API, can be updated via SMS or Jabber/XMPP, and it allows you to register and log in using OpenID.

Work the Shell - Pushing Your Message Out to Twitter

Work the Shell - Pushing Your Message Out to Twitter

Use the shell and curl to push status updates out to Twitter.

The URL is twitter.com/statuses/update.json, but the way you pass data is a bit tricky. You need to send it as a name=value pair in the connection stream, not as a GET value or other URL-based technique. The variable it seeks is status, and you can test it by doing this:

$ curl --data-ascii status=testing \
http://twitter.com/statuses/update.json

The problem is immediately obvious in the return message:

Could not authenticate you.

Ah, well, yes, we haven't specified our user ID. Those credentials aren't sent via the URL or --data-ascii, but instead through a basic HTTP auth. So, in fact, you need to do this:

$ curl --user "$user:$pass" --data-ascii \
status=testing "http://twitter.com/statuses/update.json" 

Now, of course, you need a Twitter account to work with. For this project, I'm going to continue with FilmBuzz, a Twitter account that I also set up to disseminate interesting film-industry news.

For the purposes of this article, let's assume the password is DeMille, though, of course, that's not really what it is. The user and pass variables can be set, and then we invoke Curl to see what happens:

$ curl --user "$user:$pass" --data-ascii status=testing \
    "http://twitter.com/statuses/update.json"
{"truncated":false,"in_reply_to_status_id":null,"text":"testing",
 "favorited":null,"in_reply_to_user_id":null,"source":"web",
 "id":880576363,"user":{"name":"FilmBuzz","followers_count":214,"url":null,
 "profile_image_url":"http:\/\/s3.amazonaws.com\/twitter_production\/profile_images\/55368346\/FilmReelCloseUp_normal.JPG",
 "description":"Film trivia game, coming soon!",
 "location":"Hollywood, of course","screen_name":"FilmBuzz",
 "id":15097176,"protected":false},
 "created_at":"Thu Aug 07 16:51:49 +0000 2008"}

The return data gives you a nice sneak peek into the configuration of the FilmBuzz twitter account and tells you that what we sent wasn't too long (truncated:false), but otherwise, it's pretty forgettable stuff, isn't it?

How do you ignore output with a command in the shell? Reroute the output to /dev/null, of course:

$ curl --user "$user:$pass" --data-ascii status=testing \
    "http://twitter.com/statuses/update.json" > /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   505  100   505    0     0   1127      0 --:--:-- --:--:-- --:--:--     0

Nope, that doesn't work, because then Curl wants to give us some transactional stats. How do we mask that? Use the --silent flag to Curl. Now, we're just about there:

$ curl --silent --user "$user:$pass" --data-ascii status=testing \
    "http://twitter.com/statuses/update.json" > /dev/null
$
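Putting the pieces together, a minimal sketch of a reusable posting function built from the curl invocation above might look like this (FilmBuzz and DeMille are the example credentials from the article; substitute your own, and note that messages containing special characters may need URL encoding):

user="FilmBuzz"
pass="DeMille"

tweet()
{
    # Post $1 as the account's new status and discard the JSON reply
    curl --silent --user "$user:$pass" --data-ascii "status=$1" \
        "http://twitter.com/statuses/update.json" > /dev/null
}

tweet "testing"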

TechTip - Finding Which RPM Package Contains a File

To search a list of RPM files for a particular file:

ls [RPMS-TO-SEARCH] | \
  xargs rpm --query --filesbypkg --package | \
  grep [FILE-TO-BE-SEARCHED-FOR]
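For example, to find which package in a directory of RPMs provides a hypothetical file such as /usr/bin/rsync, the pipeline might look like this:

ls *.rpm | \
  xargs rpm --query --filesbypkg --package | \
  grep /usr/bin/rsync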

Paranoid Penguin - Samba Security, Part I

file serving, specifically, allowing users to mount persistent “network volumes” that let them use networked server disk space as though it were a local disk. This has all sorts of productivity- and operations-related benefits: it's (usually) easy for end users to use, and it makes data easier to access from multiple systems and locations and easier to back up and archive.

As it happens, there's a rich toolkit available to Linux users for building, securing and using file servers, mainly in the form of Jeremy Allison and Andrew Tridgell's Samba suite of dæmons and commands, plus various graphical tools that supplement them. For the next few columns, I'm going to show you how to build a secure Samba file server using both command-line and GUI tools.

Does that sound like a good Paranoid Penguin project? Good enough, I hope, to forgive me for ignoring file servers for so long. (So many things to secure, so little time!)


let's agree on some requirements of my choosing (hopefully, some or all of these coincide with yours). It seems reasonable to focus on the following: security, convenience and cross-platform compatibility.

  • Security
  • Convenience
  • Cross-Platform Compatibility

The trio of goals I listed above (confidentiality, integrity and availability) is part of classic information security dogma. In just about any information security scenario you can think of, C, I and A are important one way or another.


NFS TCP support...

more current versions of NFS (versions 3 and 4) allow the server/dæmon to use a single TCP port for all connections by concurrent users. However, much of the world seems to be stuck on NFS v2. Worse still for our purposes here, there never has been good support for NFS outside the world of UNIX and UNIX-like platforms.


Samba Glossary:

  • SMB: the Server Message Block protocol, the heart of Samba. SMB is the set of messages that structure and use file and print shares.
  • CIFS: short for the Common Internet File System, which in practical terms is synonymous with SMB.
  • NetBIOS: the API used to pass SMB messages to lower-level network protocols, such as TCP/IP.
  • NBT: the specification for using NetBIOS over TCP/IP.
  • WINS: Microsoft's protocol for resolving NBT hostnames to IP addresses; it's the MS world's answer to DNS.
  • Workgroup: a peer-to-peer group of related systems offering SMB shares. User accounts are decentralized—that is, maintained on all member systems rather than on a single central server.
  • NT domain: a type of group consisting of computers, user accounts and other groups (but not other domains). It is more complex than a workgroup, but because all domain information is maintained on one or more domain controllers rather than being distributed across all domain members, domains scale much better than workgroups.
  • Active Directory: Microsoft's next-generation domain system. Samba can serve as an Active Directory client via Kerberos, but you can't control an Active Directory tree with a Samba server as you presently can do with NT Domains. Active Directory server support will be introduced in Samba v4.
  • User-mode security: when a Samba server's shares are authenticated by local workgroup user names and passwords.
  • Share-mode security: when each share on a Samba server is authenticated with a share-specific password that isn't explicitly associated with a user name.
  • Guest access: when a Samba server allows anonymous connections to a given share via a shared guest account with no password.


the protocols. SMB, aka CIFS, is the protocol that defines the network filesystem—its structure and its use. NetBIOS provides an API through which SMB messages may be transmitted over networks, and which may be used by servers to “advertise” services and by clients to “browse” those services. NetBIOS can use any of a number of lower-level network protocols as its transport, but the most important of these is TCP/IP; NetBIOS over TCP/IP is called NBT. WINS provides centralized name services (mappings of hostnames to IP addresses), where needed.


Samba Tools...

On your Samba server, you're going to need your distribution's packages for Samba's libraries; the Samba dæmons smbd, nmbd and winbindd; the Samba client commands smbclient, smbmount and so forth (which are useful even on servers for testing Samba configurations); and also the Web-based configuration tool SWAT.
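On a Debian- or Ubuntu-style server, installing those pieces and running a couple of quick sanity checks might look like the following sketch (package names vary by distribution, and someuser is a placeholder account):

apt-get install samba smbclient swat   # dæmons, client tools and SWAT
smbd --version                         # confirm the server dæmon is present
testparm                               # syntax-check /etc/samba/smb.conf
smbclient -L localhost -U someuser     # list the shares the server offers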


SWAT - Web-based configuration tool

Using SWAT is the best way to get up and running quickly—not because it does very much work for you, but because its excellent help system makes it super-convenient to summon the pertinent parts of Samba's various man pages.


Resources:

Christopher R. Hertel's On-Line Book Implementing CIFS, a Comprehensive Source of Information on All Things CIFS/SMB-Related: www.ubiqx.org/cifs

“The Official Samba 3.2.x HOWTO and Reference Guide”: us1.samba.org/samba/docs/man/Samba-HOWTO-Collection

Hack and / - Memories of the Way Windows Were

I'm a half-organized person. On one hand, if something of mine has a place, I can be pretty anal about making sure I put it back every time I use it. On the other hand, if something doesn't have a place, it inevitably ends up in a pile or a junk drawer. I've learned that if I want to be organized, I must give everything a home.

The same rule applies to my desktop environment. Back when I used to use Windows, I didn't have much of a choice—everything ended up stacking up on the same desktop, either maximized or at some arbitrary size. Once I started using Linux though, I discovered this interesting multiple desktop model. With Linux, I could assign windows into certain groups and then arrange each group on a particular desktop. The main downside to this much organization was that every time I opened a window, I usually needed to resize it and move it to a particular desktop. That's a lot of manual work on my part, and it wasn't long before I discovered that certain window managers supported window memory. With window memory, every window I use on a regular basis can be assigned a location, a size and a desktop.


CompizConfig Settings Manager

By default, at least in Ubuntu, there are only so many settings you can tweak in Compiz. Compiz provides a tool, however, called CompizConfig Settings Manager (or ccsm) that gives you very detailed control over many aspects of Compiz, from eye-candy effects to a lot of the important settings for the window manager itself. The major downside to ccsm, however, is that there are almost too many options—if you don't know exactly what you are looking for, expect to spend some time digging through different categories. Even window memory settings are split between two different categories.


Compiz Window Identifiers

Compiz can match windows based on a number of different attributes documented at wiki.compiz-fusion.org/WindowMatching, such as window type, role, class, title, xid and state—all of which you can find out about with the xprop command-line utility. Simply run xprop, and then click on the particular window for which you want information. Even though there are lots of possible attributes to match, probably the easiest attribute to use is the window title. To figure out a window's title, either view its title bar, or alternatively, run:

xprop WM_NAME | cut -d\" -f2

and then click on the window of interest. Compiz doesn't necessarily need the full title of the window, just some identifying information. So for instance, if you want Mozilla Firefox to be sticky, you could add title=Mozilla Firefox to the Sticky option, or you simply could add title=Firefox.

You also can add multiple window attributes to each of these fields and separate them with a | for “or” or & for “and”. So if I wanted both xterms and aterms to be sticky, I would add the following to the sticky field:

title=xterm | title=aterm
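If a title is too ambiguous, the window class is another handy attribute. A rough sketch of extracting it (the cut field assumes xprop's usual quoted output, as in the WM_NAME example above):

xprop WM_CLASS | cut -d\" -f4

Click the window of interest; for an xterm this typically prints XTerm, which you could then match with something like class=XTerm.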

Memopal's On-Line Backup Utility

Memopal's On-Line Backup Utility - $50/year

European companies often get the jump on their North American counterparts regarding the addition of Linux compatibility. A fine example is Italy's Memopal, which now offers a Linux version of its on-line backup utility. Memopal offers automatic and continuous backup to a remote server via a secure Internet connection, a service that has been lacking in the Linux space. The company claims that its Memopal Global File System archiving technology provides a distributed filesystem that supports up to 100 million terabytes of storage, transparent read-write compression, hot-add scalability and more. In beta at the time of this writing, Memopal for Linux supports Ubuntu 8.04 and Debian Etch.

www.memopal.com

VDrift

VDrift—Open-Source Drift Racing Simulator (vdrift.net)

To quote the Web site:

VDrift is a cross-platform, open-source driving simulation made with drift racing in mind. It's powered by the excellent Vamos physics engine. It is released under the GNU General Public License (GPL) v2. It is currently available for Linux, FreeBSD, Mac OS X and Windows (Cygwin).

CharTr—Mind Mapping Tool

CharTr—Mind Mapping Tool (code.google.com/p/chartr)

CharTr is an artistic piece of software made for fun to give mind mappers good usability. For those unfamiliar with mind mapping, Wikipedia says the following:

A mind map is a diagram used to represent words, ideas, tasks or other items linked to and arranged radially around a central key word or idea. It is used to generate, visualize, structure and classify ideas, and as an aid in study, organization, problem solving, decision making and writing.

Tracking Your Business Finances with NolaPro

Tracking Your Business Finances with NolaPro

NolaPro is an easy-to-use, Web-based accounting program.


About six months ago, I started looking for an accounting program in order to track my side work. I needed something that was easy to use, ran on Linux and preferably was Web-based. It had to be easy to use, because I'm not an accountant or a financial analyst; I'm a nerd. The program had to run on Linux, or I wouldn't have anywhere to run it. I wanted it to be Web-based, because I didn't want to have to install the software on all my machines, and I wanted to be able to log time and charges from remote locations. The Holy Grail would have been if it also would integrate with eGroupWare.



NolaPro from Noguska LLC (www.nolapro.com/index-5.html)


What is NolaPro

NolaPro is a Web-based accounting program that runs on Linux using Apache, MySQL and PHP. NolaPro can track customers, orders, accounts payable, inventory, employees and, everyone's favorite, accounts receivable. NolaPro also has a point-of-sale system, a B2B module, an e-commerce module and a general ledger—all of this in one very mature, polished software package.

NolaPro originally was an in-house program used to track Noguska's printing operations. Eventually, Noguska decided to release the program to the public as a free download. As the company doesn't charge money for the program, it doesn't have an advertising budget for it, which explains why I'd never heard of it when I was looking for an accounting program. Even so, the NolaPro user base is growing, and the company actively supports the program. One of the company's motivations for releasing the product is to help introduce its customers to Linux as a flexible and low-cost system, thus reducing its costs as well as its customers' costs.


Robin Hood Business

Noguska offers fee-based custom development and integration services to NolaPro users. The results of this paid development find their way into the next release of NolaPro. So, the entire user community benefits when a company pays to have a particular feature added to NolaPro. Donovon Lee, Noguska's CEO, described it to me as a Robin Hood business model where the richer, larger companies pay for features they need, and the smaller, poorer companies benefit from the results. The results in this case are that NolaPro has the features companies want, not the features the software companies think they want.


Features

Customer. The customer module lets me store just about anything I could care to know about my customers. The interface isn't fancy, but it is intuitive.

Billing. Billing is where the money's at, right? Clicking on Billing and then New Invoice was all it took to start a new customer invoice.

PDF Invoice. The resulting invoice is a professional PDF file that I can send to my clients and that I can track to ensure timely payment.

Payable. The Payables module is almost as intuitive as the Billing module, although it's not as fun to input where all your money is going as it is to input where it's coming from.

The Orders module. Being a computer nerd, I tend to think in terms of “projects”, not “orders”. But, once I realized that a programming project was nothing but a service order, I found that NolaPro's Orders module would allow me to track all my billable hours against a given order, and when the work was done, I could convert the order into a ready-made invoice. I had been tracking my billable time in an OpenOffice.org spreadsheet, which was less than ideal.

Orders. NolaPro also allows me to enter an estimated cost for each order, which lets me make sure all my projects come in under budget. For orders that entail shipping a product from inventory, NolaPro allows you to create a fulfillment order, so you can estimate shipping costs and modify inventory counts.

Inventory. NolaPro's Inventory module lets me track inventory items that may be stored at multiple locations. It also allows me to set the minimum and maximum number of each inventory item to keep in stock, as well as minimum ordering quantity. I get a substantial discount when I order phone adapters in quantity, for example, and NolaPro allows me to take that fact into consideration when I re-order.

Ledger. NolaPro's Ledger features go well beyond anything I need as a small-time entrepreneur. Here's where you find a general ledger, budgeting and bank statement reconciliation. Most of the GL functions are tied to a set of standardized general accounts, such as cash on hand, sales income and so forth. The Budgeting module then allows you to set financial targets for each GL account on a per-month basis. The Ledger functions obviously are geared toward companies that are large enough to have an accountant or bookkeeper.

Point of Sale. The Point of Sale module allows a user to sell items from inventory and accept cash, check or credit card as payment, without having to create an invoice. The B2B module allows business partners to log in and view outstanding orders, invoices and payment history. The B2B module also allows partners to pay bills via credit card. I've been told that Noguska will be releasing a Web Services API that will allow third-party developers to integrate with NolaPro.

Forum. NolaPro's on-line forums are organized by feature and seem to have a high signal-to-noise ratio.

Training. Then there's the video training library. Yes, free video training. If you want to learn how to add a new service order, you can click on a few hyperlinks from the NolaPro home page. From there, you can watch and listen to a video of someone actually performing that task. The video describes the process, and you clearly can see what menu items to use. It's more than just a slideshow with a brief outline. Some of the videos are from older versions of the program, which means the menus have a slightly dated layout, but even so, the video training is well done and quite usable.


Flaws

NolaPro is able to handle a fairly broad range of business types, but it doesn't do everything. Some of my consulting customers require regular incremental invoicing on projects, even before the project is finished. I discovered that NolaPro won't let me split an order up and invoice part of it. I also discovered that the invoicing module tracks inventory items by count, whereas I need to track them individually by serial number. I was a bit disappointed by the fact that when I asked the program to e-mail an invoice to a customer, it sent a very generic e-mail with the invoice attached as an oddly named .pdf file, and none of this was configurable. I've resolved this by simply exporting the .pdf files and attaching them to my own e-mail messages. Finally, I don't know any two people who do their budgets the same way. NolaPro's budgeting capability seemed to be too closely tied to the GL accounts, which I found cumbersome to wrap my head around. With these weaknesses in mind, NolaPro is a very powerful program, and I'm sure that people in different lines of business would find other things they felt needed to be improved. No software program can do everything to everyone's satisfaction.

The Popcorn Hour A-100

The Popcorn Hour A-100

Watch out! Here comes the Networked Media Tank.

Website: http://www.popcornhour.com

Prices:
A-110 $215
B-110 $199
A-100 $179

Video Review: The Real HT Info Podcast's review of the Popcorn Hour A-100 ($179)

The Popcorn Hour is a reference hardware implementation for a new Linux-based middleware layer from Syabas Technology called the Networked Media Tank. According to its Web site: “The Networked Media Tank (NMT) is a state-of-the-art integrated digital entertainment system that allows you to watch, store and share digital content on your home network.”

The Popcorn Hour is not designed to compete with your TiVo or MythTV box; it can connect to on-line video streams and podcasts, but it doesn't do live over-the-air (or cable) television. The closest device it competes with, at least partially, is the Neuros OSD. The Neuros can encode video though, which is something the Popcorn Hour can't do—it's strictly a playback device.

For me, the encoding capabilities of the Neuros OSD are not needed. I've already digitized most of my DVD library. I have a mix of media in both MPEG-4.2 and H.264 formats, and the Popcorn Hour's support of both formats was one of the things that attracted me to it over the MPEG-4.2-only Neuros OSD. For more on the differences between MPEG-4.2 and H.264, see the When MPEG-4 Isn't MPEG-4 sidebar.

Another thing that caught my eye on the Popcorn Hour was its range of outputs, which includes composite, component, S-Video and HDMI. I currently have a standard-definition television, but we plan on replacing it with an LCD-HDTV before the end of the year, and the Popcorn Hour will work on both. It can output NTSC, 480p, 720p, 1080i and 1080p, among others.


Testing the Popcorn Hour

For video tests, I used Big Buck Bunny from the Open Movie Project. This Creative Commons-licensed animated short is available at bigbuckbunny.org in several video formats and sizes. The sizes range from 1920x1080 pixels (1080P) at the high end down to 320x180 pixels at the low end. For each size, there is a selection of container formats, including AVI, Ogg, M4V and MOV. Each container format has a different video format inside it, including MPEG-4.2, H.264, MS MP4 and Theora. The audio is in MP3, AAC, AC3 and Vorbis formats. These various versions provide a pretty good test suite for the capabilities of any video player.

The only container format it doesn't have is Matroska (.mkv). This, however, was not an insurmountable problem, because it is simple to create one, thanks to mkvmerge. For more on this and the Matroska container format, see the Matroska sidebar.


Death to Media

Many people feel that the era of physical formats, like DVDs and Blu-ray discs, is coming to an end. In fact, many of them feel that Blu-ray is the last physical format. The only problem with this line of thinking is that if the world moves away from physical formats, whatever replaces them should be able to do everything (or almost everything) they can do. This includes multiple languages, alternate video streams, subtitles in various languages and other features. The LGPL-licensed Matroska is trying to become that format.


MPEG-2 - The most dominant video format of the past ten years (or more) has been MPEG-2, the format DVDs use.


DRM-infected Content

One thing I should note is that the Popcorn Hour does not play any sort of DRM-infected content. So, if you've been purchasing things from iTunes and/or similar digital stores that cripple their content, the Popcorn Hour probably is not a good purchase.


DVD Playback - NO CSS encrypted DVDs

The Popcorn Hour can play DVDs either from a USB DVD-ROM drive plugged in to one of the USB ports or from a DVD ISO file. There is one huge caveat to this ability, however; it can't play encrypted DVDs, which basically covers nearly all commercial DVDs and ISO files of those DVDs. The only DVDs I found in my collection that can be played directly by the Popcorn Hour are the ones I purchased at the dollar store and Big Buck Bunny from the Open Movie Project.

Assuming you have some unencrypted DVDs or ISO files, playing them is similar to using any off-the-shelf DVD player. One note though: playing an ISO of a DVD off of a network share takes a lot of bandwidth, so you'll have your best luck with NFS and a wired connection as opposed to Samba and/or a wireless connection.

Solution: decrypted ISOs


Conclusion

The Popcorn Hour is a very capable little box. It plays a wide variety of music and video formats—provided they aren't encumbered with DRM.

Right now, there are several unfinished pieces, but thankfully, firmware updates are coming regularly, and each one unlocks more functionality. Despite the rough spots, I have to admit I am perfectly happy with the core functionality, and I recommend it (as long as you don't have a lot of DRM-infected content). It is well worth the modest purchase price of $179.


Resources

Popcorn Hour Web Site: popcornhour.com

Networked Media Tank and Popcorn Hour Forum: networkedmediatank.com

The NMT Wiki: networkedmediatank.com/wiki

The NMT Quick-Start Guide: support.popcornhour.com/UserFiles/Popcorn_Hour/file/NMT_Quick_Start_Guide_Rev1_0.pdf

Syabas Technology: syabas.com

Details of Sigma Designs SMP8635 Chip: www.sigmadesigns.com/public/Products/SMP8630/SMP8630_series.html

Daniel Bartholomew lives with his wife and children in North Carolina. He can be found on-line at daniel-bartholomew.com.


When MPEG-4 Isn't MPEG-4

Many people are familiar with what is generically known as MPEG-4 or MP4 video. Popular implementations of this are DivX and Xvid, both of which have found wide use on file-sharing sites. Technically though, DivX and Xvid implement MPEG-4 Part 2. Much like MPEG-2 before it, the MPEG-4 standard encompasses several different audio, video and file format standards. There isn't space to go into too much detail on MPEG-4 here, so see en.wikipedia.org/wiki/MPEG-4 for more information (a lot more). The main point I want to make about MPEG-4 Part 2 is that even though its first implementations were hailed as an excellent video codec far better than MPEG-2 video, MPEG-4 Part 2's two main modes are only the Simple and Advanced Simple Profiles. In other words, they're the children of the MPEG-4 world. The “all-grown-up” video codec of MPEG-4 is Part 10, which is also known as MPEG-4 AVC (Advanced Video Coding). The International Telecommunication Union calls it H.264.

To avoid confusion, when I refer to MPEG-4 Part 2 in this article, I call it MPEG-4.2 instead of Xvid or DivX or the generic MP4. And, when I'm talking about MPEG-4 Part 10, I refer to it by the ITU name, H.264.

Much as MPEG-2 is the format used on DVDs, H.264 video is the preferred video format of Blu-ray discs. It also is becoming the preferred video format for small devices, such as cell phones. This is because H.264 was designed to provide video quality equivalent to MPEG-4.2 at half the bandwidth. This efficiency comes at an increased processing cost both to encode and decode, but since the standard was formalized a few years ago, several chipsets have been developed to do the decoding in hardware. Thus, even extremely small and low-power devices, such as cell phones, can play back H.264-encoded video easily.

This is exactly what the Popcorn Hour does. It utilizes a Sigma Designs SMP8635 chip, which, according to the manufacturer, provides MPEG-4.2, H.264, VC-1, WMV9 and MPEG-2 decoding at up to 1080p resolution.

Matroska

According to the Matroska home page, “Matroska aims to become the standard of multimedia container formats.” A lofty goal to be sure, but it's making progress due to the tremendous flexibility the container format has.

The main trick that Matroska has over other container formats is that it can support multiple audio and video streams inside a single file. This enables you to, for example, have multiple selectable languages to go along with the video portion of the file. If you think that sounds like a DVD, you're on the right track: if physical formats really are on the way out, whatever replaces them should be able to do everything (or almost everything) they can do, and the LGPL-licensed Matroska is trying to become that format.

Matroska containers can contain nearly any audio and video format, and one of the ways for putting those formats into a .mkv file on Linux is with mkvmerge. The mkvmerge tool can be downloaded as part of the mkvtoolnix package from www.bunkus.org/videotools/mkvtoolnix/downloads.html. Follow the directions for your distribution to install it.

To change the container of any video file from whatever it is to Matroska, simply launch the mkvmerge GUI, and click the Add button to open the file you want to convert. By default, it will save the output .mkv file to the same directory. If you want to change that, click on the Browse button in the lower-right corner of the window, and choose where you want to save it. Finally, click the Start muxing button, and mkvmerge will begin the process of extracting the audio and video from the existing container and putting it all into a Matroska container. Because the tool is not converting the audio or video to a different format, the process is lossless and does not take long.

The mkvmerge program is very easy to use.

If you want to do the muxing from the command line, the GUI tool offers a Copy to Clipboard button that gives you the command, with all of its options, that it will run when you press the Start muxing button. The general command is this:

mkvmerge -o "destination-file.mkv" -a 1 -d 0 -S \
    "original-file.avi" --track-order 0:0,0:1

At the end of the process, you will have a Matroska container with whatever audio and video you copied out of the other container inside of it.
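If you want to verify the result, mkvtoolnix also includes an inspection tool; a quick check on the example file from the command above might be:

mkvinfo "destination-file.mkv"   # list the tracks now inside the Matroska container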

For more on Matroska, see matroska.org.

The Roadrunner Supercomputer: a Petaflop's No Problem

IBM and Los Alamos National Lab built Roadrunner, the world's fastest supercomputer. It not only reached a petaflop, it beat that by more than 10%. This is the story behind Roadrunner.


History

In 1995, the French threw the world into an uproar. Their testing of a nuclear device on Mururoa Atoll in the South Pacific unleashed protests, diplomatic friction and a boycott of French restaurants worldwide. Thanks to many developments—among them Linux, hardware and software advances and many smart people—physical testing has become obsolete, and French food is back on the menu. These developments are manifested in Roadrunner, currently the world's fastest supercomputer. Created by IBM and the Los Alamos National Laboratory (LANL), Roadrunner models precise nuclear explosions and other aspects of our country's aging nuclear arsenal.

Although modeling nuclear explosions is necessary and interesting to some, the truly juicy characteristic of the aptly named Roadrunner is its speed. In May 2008, Roadrunner accomplished the almost unbelievable—it operated at a petaflop. I'll save you the Wikipedia look-up on this one: a petaflop is one quadrillion (that's one thousand trillion) floating-point operations per second. That's more than double the speed of the long-reigning performance champion, IBM's 478.2-teraflop Blue Gene/L system at Lawrence Livermore National Lab.

Besides the petaflop achievement, the story behind Roadrunner is equally incredible in many ways. Elements such as Roadrunner's hybrid Cell-Opteron architecture, its applications, its Linux and open-source foundation, its efficiency, as well as the logistics of unifying these parts into one speedy unit, make for a great story. This being Linux Journal's High-Performance Computing issue, it seems only fitting to tell the story behind the Roadrunner supercomputer here.


Petaflop accomplishment

The petaflop accomplishment occurred at “IBM's place”, where the machine was constructed, tested and benchmarked. In reality, Roadrunner achieved 1.026 petaflops—merely 26 trillion additional floating-point calculations per second beyond the petaflop mark. Roadrunner's computing power is equivalent to 100,000 of today's fastest laptops.


Speed = Hybrid Architecture + Software

You may be surprised to learn that Roadrunner was built 100% from commercial parts. The secret formula to its screaming performance involves two key ingredients, namely a new hybrid Cell-Opteron processor architecture and innovative software design. Grice emphasized that Roadrunner “was a large-scale thing, but fundamentally it was about the software”.

Despite that claim, the hardware characteristics remain mind-boggling. Roadrunner is essentially a cluster of clusters of Linux Opteron nodes connected with MPI and a parallel filesystem. It sports 6,562 AMD dual-core Opteron 2210 1.8GHz processors and 12,240 IBM PowerXCell 8i 3.2GHz processors. The Opteron's job is to manage standard processing, such as filesystem I/O; the Cell processors handle mathematically and CPU-intensive tasks. For instance, the Cell's eight vector-engine cores can accelerate algorithms while running much cooler, faster and cheaper than general-purpose cores. “Most people think [that the Cell processor] is a little bit hard to use and that it's just a game thing”, joked Grice. But, the Cell clearly isn't only for gaming anymore. The Cell processors make each computing node 30 times faster than using Opterons alone.

LANL's White further emphasized the uniqueness of Roadrunner's hybrid architecture, calling it a “hybrid hybrid”, because the Cell processor itself is a hybrid: it has a PPU (PowerPC) core plus eight SPUs. Because the PPU is “of modest performance”, as the folks at LANL politely say, they needed another core to run the code that wouldn't run well on the SPUs and to keep performance up. Thus, the Cells are connected to the Opterons.

The system also carries 98 terabytes of memory, as well as 10,000 InfiniBand and Gigabit Ethernet connections that require 55 miles of fiber optic cabling. 10GbE is used to connect to the 2 petabytes of external storage. The 278 IBM BladeCenter racks take up 5,200 square feet of space.

The machine is composed of a unique tri-blade configuration consisting of one two-socket dual-core Opteron LS21 blade and two dual-socket IBM QS22 Cell blade servers. Although the Opteron cores each are connected to a Cell chip via a dedicated PCIe link, the node-to-node communication is via InfiniBand. Each of the 3,456 tri-blades can perform at 400 Gigaflops (400 billion operations per second).

The hybrid Opteron-Cell architecture is manifested in a tri-blade setup. The tri-blade allows the Opteron to perform standard processing while the Cell performs mathematically and CPU-intensive tasks.

The hybrid, tri-blade architecture has allowed for a quantum leap in the performance while utilizing the same amount of space as previous generations of supercomputers. Roadrunner takes up the same space and costs the same to operate as its two predecessors, the ASC Purple and ASC White machines before it. This is because performance continues to grow predictably at a rate of 1,000% every 10–11 years. Grice noted how just three of Roadrunner's tri-blades have the same power as the fastest computer from 1998. Put another way, a calculation that would take a week on Roadrunner today would be only half finished on an old 1 teraflop machine that was started in 1998.

Such quantum leaps in performance help boggle the minds of many scientists, who see their careers changing right before their eyes. If they have calculations that take too long today, they can be quite sure that in two years, the calculation will take one-tenth of the time.

Neither IBM's Grice nor LANL's White could emphasize enough the importance and complexity of the software that allows for exploitation of Roadrunner's hardware prowess. Because clock frequency and chip power have plateaued, Moore's Law will continue to hold through other means, such as with Roadrunner's hybrid architecture.


Roadrunner Runs

Clearly a petaflop isn't the limit. Not only was the original petaflop achievement actually 1.026 petaflops, since then, Roadrunner has done better. In June 2008, LANL and IBM ran a project called PetaVision Synthetic Cognition, a model of the brain's visual cortex that mimicked more than one billion brain cells and trillions of synapses. It reached the 1.144 petaflop mark. Calculations like these are the petaflop-level tasks for which Roadrunner is ideal.


Keeping the Bird Cool

In general, “power and cooling are second only to the software complexity”, emphasized Grice. Power is the real problem for driving HPC forward. Roadrunner solves these issues through the efficiency of its design. Especially due to the efficiency of the Cell processors, Roadrunner needs only 2.3MW of power at full load running Linpack, delivering a world-leading 437 million calculations per Watt. This result was much better than IBM's official rating of 3.9MW at full load. Such efficiency has placed Roadrunner in third place on the Green 500 list of most efficient supercomputers.

Otherwise, Roadrunner is air-cooled, utilizing large copper heat sinks and variable-speed fans.


Resources

IBM Fact Sheet on Roadrunner: www-03.ibm.com/press/us/en/pressrelease/24405.wss

Roadrunner Home Page at Los Alamos National Lab: www.lanl.gov/orgs/hpc/roadrunner/index.shtml

The Green 500 List: www.green500.org

James Gray is Linux Journal Products Editor and a graduate student in environmental sciences and management at Michigan State University. A Linux enthusiast since the mid-1990s, he currently resides in Lansing, Michigan, with his wife and cats.


Massively Parallel Linux Laptops, Workstations and Clusters with CUDA

Use an NVIDIA GPU or a cluster of them to realize massive performance increases.

NVIDIA's CUDA (Compute Unified Device Architecture) makes programming and using thousands of simultaneous threads straightforward. CUDA turns workstations, clusters—and even laptops—into massively parallel-computing devices. With CUDA, Linux programmers can address real-time programming and computational tasks previously possible only with dedicated devices or supercomputers.


RAID

One example is the RAID software developed by researchers at the University of Alabama and Sandia National Laboratory that transforms CUDA-enabled GPUs into high-performance RAID accelerators that calculate Reed-Solomon codes in real time for high-throughput disk subsystems (according to “Accelerating Reed-Solomon Coding in RAID Systems with GPUs” by Matthew Curry, Lee Ward, Tony Skjellum and Ron Brightwell, IPDPS 2008). From their abstract, “Performance results show that the GPU can outperform a modern CPU on this problem by an order of magnitude and also confirm that a GPU can be used to support a system with at least three parity disks with no performance penalty.” I'll bet the new NVIDIA hardware will perform even better. My guess is we will see a CUDA-enhanced Linux md (multiple device or software RAID) driver in the near future. [See Will Reese's article “Increase Performance, Reliability and Capacity with Software RAID” on page 68 in this issue.]

Imagine the freedom of not being locked in to a proprietary RAID controller. If something breaks, simply connect your RAID array to another Linux box to access the data. If that computer does not have an NVIDIA GPU, just use the standard Linux software md driver to access the data. Sure, the performance will be lower, but you still will have immediate access to your data.


CUDA

CUDA-enabled GPUs can run multiple applications at the same time by time sharing—just as Linux does. CUDA devices have a very efficient hardware scheduler. It's fun to “wow” people by running a floating-point-intensive program while simultaneously watching one or more graphics-intensive applications render at a high frame rate on the screen.

CUDA is free, so it's easy to see what I mean. If you already have a CUDA-enabled GPU in your system (see www.nvidia.com/object/cuda_learn_products.html for compatible models), simply download CUDA from the NVIDIA Web site (www.nvidia.com/cuda), and install it. NVIDIA provides the source code for a number of working examples. These examples are built by simply typing make.


Emulator

Don't have a CUDA-enabled GPU? No problem, because CUDA comes with an emulator. Of course, the emulator will not provide the same level of performance as a CUDA-enabled GPU, but you still can build and run both the examples and your own applications. Building software for the emulator is as simple as typing make emu=1.
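As a rough sketch of that workflow (the SDK path and the deviceQuery example reflect the CUDA SDK of that era and may differ on your system):

cd ~/NVIDIA_CUDA_SDK              # default location offered by the SDK installer
make                              # build the bundled examples for the GPU
./bin/linux/release/deviceQuery   # run one example to confirm the GPU is visible
make emu=1                        # rebuild the examples for the CPU emulator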


massive parallelism

So, how can NVIDIA offer hundreds of thread processors while the rest of the industry can deliver only dual- and quad-core processors?

The answer is that NVIDIA designed its processors from the start for massive parallelism. Thread scalability was designed in from the very beginning. Essentially, the NVIDIA designers used a common architectural building block, called a multiprocessor, that can be replicated as many times as required to provide a large number of processing cores (or thread processors) on a GPU board for a given price point. This is, of course, ideal for graphics applications, because more thread processors translate into increased graphics performance (and a more heart-pounding customer experience). Low price-point GPUs can be designed with fewer multiprocessors (and, hence, fewer thread processors) than the higher-priced, high-end models.


History

CUDA was born after a key realization was made: the GPU thread processors can provide tremendous computing power if the problem of programming tens to hundreds of them (and potentially thousands) can be solved easily.

A few years ago, pioneering programmers discovered that GPUs could be harnessed for tasks other than graphics—and they got great performance! However, their improvised programming model was clumsy, and the programmable pixel shaders on the chips (the precursors to the thread processors) weren't the ideal engines for general-purpose computing. At that time, writing software for a GPU meant programming in the language of the GPU. A friend once described this as a process similar to pulling data out of your elbow in order to get it to where you could look at it with your eyes.

NVIDIA seized upon this opportunity to create a better programming model and to improve the hardware shaders. As a result, NVIDIA ended up creating the Compute Unified Device Architecture. CUDA and hardware thread processors were born. Now, developers are able to work with familiar C and C++ programming concepts while developing software for GPUs—and happily, it works very well in practice. One friend, with more than 20 years' experience programming HPC and massively parallel computers, remarked that he got more work done in one day with CUDA than he did in a year of programming the Cell broadband engine (BE) processor. CUDA also avoids the performance overhead of graphics-layer APIs by compiling your software directly to the hardware (for example, GPU assembly language), which results in excellent performance.


Similar to C

Other than the addition of an execution configuration, a CUDA kernel call looks syntactically just like any other C subroutine call, including the passing of parameters and structures. There is no return value from a kernel call, however, because kernel invocations are asynchronous. More information can be found in the tutorials or in “The CUDA Programming Guide” in the documentation.


SIMD Taxonomy

Is CUDA appropriate for all problems? No, but it is a valuable tool that will allow you to do things on your computer that you could not do before.

One issue to be aware of is that threads on each multiprocessor execute according to an SIMD (Single Instruction Multiple Data) model. The SIMD model is efficient and cost-effective from a hardware standpoint, but from a software standpoint, it unfortunately serializes conditional operations (for example, both branches of each conditional must be evaluated one after the other). Be aware that conditional operations can have profound effects on the runtime of your kernels. With care, this is generally a manageable problem, but it can be problematic for some issues. Advanced programmers can exploit the MIMD (Multiple Instruction Multiple Data) capability of multiple-thread processors. For more detail on Flynn's taxonomy of computer architectures, I recommend the Wikipedia article en.wikipedia.org/wiki/Flynn%27s_taxonomy.


Increase Performance, Reliability and Capacity with Software RAID

Linux software RAID provides a flexible software alternative to hardware RAID with excellent performance.


In the late 1980s, processing power and memory performance were increasing by more than 40% each year. However, due to mechanical limitations, hard drive performance was not able to keep up. To prepare for a “pending I/O crisis”, some researchers at Berkeley proposed a solution called “Redundant Arrays of Inexpensive Disks”. The basic idea was to combine several drives so they appear as one larger, faster and/or more-reliable drive. RAID was, and still is, an effective way for working around the limitations of individual disk drives. Although RAID is typically implemented using hardware, Linux software RAID has become an excellent alternative. It does not require any expensive hardware or special drivers, and its performance is on par with high-end RAID controllers. Software RAID will work with any block device and supports nearly all levels of RAID, including its own unique RAID level (see the RAID Levels sidebar).

Getting Started

Most Linux distributions have built-in support for software RAID. This article uses the server edition of Ubuntu 8.04 (Hardy). Run the following commands as root to install the software RAID management tool (mdadm) and load the RAID kernel module:

  1. apt-get install mdadm
  2. cat /proc/mdstat

Once you create an array, /proc/mdstat will show you many details about your software RAID configuration. Right now, you just want to make sure it exists to confirm that everything is working.

Creating an Array

Many people like to add a couple drives to their computer for extra file storage, and mirroring (RAID 1) is an excellent way to protect that data. Here, you are going to create a RAID 1 array using two additional disks, /dev/sda and /dev/sdb.

Before you can create your first RAID array, you need to partition your disks. Use fdisk to create one partition on /dev/sda, and set its type to “Linux RAID autodetect”. If you are just testing RAID, you can create a smaller partition, so the creation process does not take as long:

  1. fdisk /dev/sda

> n
> p
> 1
> <RETURN>
> <RETURN>
> t
> fd
> w


Now, you need to create an identical partition on /dev/sdb. You could create the partition manually using fdisk, but it's easier to copy it using sfdisk. This is especially true if you are creating an array using more than two disks. Use sfdisk to copy the partition table from /dev/sda to /dev/sdb, then verify that the partition tables are identical:

  1. sfdisk -d /dev/sda | sfdisk /dev/sdb
  2. fdisk -l

Now, you can use your newly created partitions (/dev/sda1 and /dev/sdb1) to create a RAID 1 array called /dev/md0. The md stands for multiple disks, and /dev/mdX is the standard naming convention for software RAID devices. Use the following command to create the /dev/md0 array from /dev/sda1 and /dev/sdb1:

  1. mdadm -C /dev/md0 -l1 -n2 /dev/sda1 /dev/sdb1

When an array is first created, it automatically will begin initializing (also known as rebuilding). In your case, that means making sure the two drives are in sync. This process takes a long time, and it varies based on the size and type of array you created. The /proc/mdstat file will show you the progress and provide an estimated time of completion. Use the following command to verify that the array was created and to monitor its progress:

  1. watch cat /proc/mdstat # ctrl-c to exit

It is safe to use the array while it's rebuilding, so go ahead and create the filesystem and mount the drive. Don't forget to add /dev/md0 to your /etc/fstab file, so the array will be mounted automatically when the system boots:

  1. mkfs.ext3 /dev/md0
  2. mkdir /mnt/md0
  3. mount /dev/md0 /mnt/md0
  4. echo "/dev/md0 /mnt/md0 ext3 defaults 0 2" >> /etc/fstab
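A quick way to confirm that the filesystem really is sitting on the new array (output will vary with your disk sizes):

  1. df -h /mnt/md0
  2. mount | grep md0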

Once the array is finished rebuilding, you need to add it to the mdadm configuration file. This will make it easier to manage the array in the future. Each time you create or modify an array, update the mdadm configuration file using the following command:

  1. mdadm --detail --scan >> /etc/mdadm/mdadm.conf
  2. cat /etc/mdadm/mdadm.conf
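mdadm also can report the state of an array directly, which is a handy complement to /proc/mdstat; the output includes the array's state, UUID and member devices:

  1. mdadm --detail /dev/md0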

That's it. You successfully created a two-disk RAID 1 array using software RAID.

Replacing a Failed Disk

The entire point of a RAID 1 array is to protect against a drive failure, so you are going to simulate a drive failure for /dev/sdb and rebuild the array. To do this, mark the drive as failed, and then remove it from the array. If the drive actually failed, the kernel automatically would mark the drive as failed. However, it is up to you to remove the disk from the array before replacing it. Run the following commands to fail and remove the drive:

  1. mdadm /dev/md0 -f /dev/sdb1
  2. cat /proc/mdstat
  3. mdadm /dev/md0 -r /dev/sdb1
  4. cat /proc/mdstat

Notice how /dev/sdb is no longer part of the array, yet the array is functional and all your data is still there. It is safe to continue using the array as long as /dev/sda does not fail. You now are free to shut down the system and replace /dev/sdb when it's convenient. In this case, pretend you did just that. Now that your new drive is in the system, partition it and add it to the array:

  1. sfdisk -d /dev/sda | sfdisk /dev/sdb
  2. mdadm /dev/md0 -a /dev/sdb1
  3. watch cat /proc/mdstat

The array automatically will begin rebuilding itself, and /proc/mdstat should indicate how long that process will take.

Managing Arrays

In addition to creating and rebuilding arrays, you need to be familiar with a few other tasks. It is important to understand how to start and stop arrays. Run the following commands to unmount and stop the RAID 1 array you created earlier:

  1. umount /dev/md0
  2. mdadm -S /dev/md0
  3. cat /proc/mdstat

As you can see, the array no longer is listed in /proc/mdstat. In order to start your array again, you need to assemble it (there isn't a start command). Run the following commands to assemble and remount your array:

  1. mdadm -As /dev/md0
  2. mount /dev/md0
  3. cat /proc/mdstat

Sometimes it's useful to place an array in read-only mode. Before you do this, you need to unmount the filesystem (you can't just remount as read-only). If you try to place an array in read-only mode while it is mounted, mdadm will complain that the device is busy:

  1. umount /dev/md0
  2. mdadm -o /dev/md0
  3. cat /proc/mdstat
  4. mount /dev/md0

Notice how /dev/md0 is now read-only, and the filesystem was mounted as read-only automatically. Run the following commands to change the array and filesystem back to read-write mode:

  1. mdadm -w /dev/md0
  2. mount -o remount,rw /dev/md0

mdadm can be configured to send e-mail notifications regarding the status of your RAID arrays. Ubuntu automatically starts mdadm in monitoring mode for you; however, it currently is configured to send e-mail to the root user on the local system. To change this, edit /etc/mdadm/mdadm.conf and set MAILADDR to your e-mail address, then restart the mdadm dæmon:

  1. vim /etc/mdadm/mdadm.conf

Set MAILADDR to <your e-mail address>, and then do:

  1. /etc/init.d/mdadm restart
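
For reference, the notification setting in /etc/mdadm/mdadm.conf is a single MAILADDR line; the address below is only a placeholder:

  # replace the placeholder with your own address
  MAILADDR admin@example.com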

Run the following command to test your e-mail notification setup:

  1. mdadm --monitor --scan -t -1

Converting a Server to RAID 1

If you are building a new server, you can use the Ubuntu Alternate install CD to set up your system on a software RAID array. If you don't have the luxury of performing a clean install, you can use the following process to convert your entire server to a RAID 1 array remotely. This requires your server to have an additional drive that is the same size or larger than the first disk. These instructions also assume you are using the server edition of Ubuntu 8.04 (Hardy), but the process is similar in other Linux distributions. You always should test a procedure like this before performing it on a production server.

Install mdadm and verify that the software RAID kernel module was loaded properly:

  1. apt-get install mdadm
  2. cat /proc/mdstat

Copy the partition table from your first drive to your second drive, and set the partition types to “Linux RAID autodetect”:

  1. sfdisk -d /dev/sda | sfdisk /dev/sdb
  2. fdisk /dev/sdb

> t
> 1
> fd
> t
> 2
> fd
> w

Create the RAID 1 arrays for the root and swap partitions, and update the mdadm configuration file. This time, specify that the first drive is “missing”, which will delay the rebuild until you add the first drive to the array. You don't want to mess with the first drive until you are sure the RAID configuration is working correctly:

  1. mdadm -C /dev/md0 -n2 -l1 missing /dev/sdb1 # root
  2. mdadm -C /dev/md1 -n2 -l1 missing /dev/sdb2 # swap
  3. cat /proc/mdstat
  4. mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Modify /boot/grub/menu.lst so your server boots from the array:

  1. vim /boot/grub/menu.lst

Then make the following changes (a hypothetical sketch of the resulting entries follows the list):

   * Add fallback 1 to a new line after default 0.
   * Change the kopt line to # kopt=root=/dev/md0 ro.
   * Copy the first kernel entry and change (hd0,0) to (hd1,0).
   * Change root=xxx to root=/dev/md0 in the new kernel entry.
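
The following is only a sketch of the relevant pieces of menu.lst after these edits; the kernel version, file paths and any extra boot options will differ on your system:

  # kernel version below is only an example
  default         0
  fallback        1

  # kopt=root=/dev/md0 ro

  title           Ubuntu 8.04, kernel 2.6.24-19-server (RAID)
  root            (hd1,0)
  kernel          /boot/vmlinuz-2.6.24-19-server root=/dev/md0 ro
  initrd          /boot/initrd.img-2.6.24-19-server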

When your server is booting up, it needs to be able to load the RAID kernel modules and start your array. Use the following command to update your initrd file:

  1. update-initramfs -u

At this point, you can create and mount the filesystem, then copy your data to the additional drive. Make sure all of your applications are shut down and the server is idle; otherwise, you run the risk of losing any data modified after you run the rsync command:

  1. mkswap /dev/md1
  2. mkfs.ext3 /dev/md0
  3. mkdir /mnt/md0
  4. mount /dev/md0 /mnt/md0
  5. rsync -avx / /mnt/md0

To mount the RAID arrays automatically when your server reboots, modify /mnt/md0/etc/fstab and replace /dev/sda1 with /dev/md0, and replace /dev/sda2 with /dev/md1. You do this only on the second drive, in case you need to fall back to the old setup if something goes wrong:

  1. vim /mnt/md0/etc/fstab

Then make the following replacements (a sketch of the resulting fstab lines follows the list):

   * Replace /dev/sda1 with /dev/md0.
   * Replace /dev/sda2 with /dev/md1.
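
Assuming the original entries used /dev/sda1 for the root filesystem and /dev/sda2 for swap, the edited lines would end up looking roughly like this (the mount options shown are typical, not prescriptive):

  /dev/md0   /      ext3   defaults,errors=remount-ro   0   1
  /dev/md1   none   swap   sw                           0   0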

Make sure GRUB is installed properly on both disks, and reboot the server:

  1. grub

> device (hd0) /dev/sda
> root (hd0,0)
> setup (hd0)
> device (hd0) /dev/sdb
> root (hd0,0)
> setup (hd0)
> quit

  1. reboot

When your server comes back on-line, it will be running on a RAID 1 array with only one drive in the array. To complete the process, you need to repartition the first drive, add it to the array, and make a few changes to GRUB. Make sure your server is functioning normally and all your data is intact before proceeding. If not, you run the risk of losing data when you repartition your disk or rebuild the array.

Use sfdisk to repartition the first drive to match the second drive. The --no-reread option is needed; otherwise, sfdisk will complain about not being able to reload the partition table and fail to run:

  1. sfdisk -d /dev/sdb | sfdisk /dev/sda --no-reread

Now that your first drive has the correct partition types, add it to both arrays. The arrays will start the rebuild process automatically, which you can monitor with /proc/mdstat:

  1. mdadm /dev/md0 -a /dev/sda1
  2. mdadm /dev/md1 -a /dev/sda2
  3. watch cat /proc/mdstat

Once the arrays have completed rebuilding, you safely can reconfigure GRUB to boot from the first drive. Although it is not required, you can reboot to make sure your server still will boot from the first drive:

  1. vim /boot/grub/menu.lst

Next, copy the first kernel entry and change (hd1,0) to (hd0,0). Then:

  1. reboot

That completes the process. Your server should be running on a RAID 1 array protecting your data from a drive failure.

Conclusion

As you can see, Linux software RAID is very flexible and easy to use. It can protect your data, increase server performance and provide additional capacity for storing data. Software RAID is a high-performance, low-cost alternative to hardware RAID and is a viable option for both desktops and servers.


Resources

Original RAID Paper: www.eecs.berkeley.edu/Pubs/TechRpts/1987/CSD-87-391.pdf

Linux RAID: linux-RAID.osdl.org/index.php/Linux_Raid

Why Software RAID?: linux.yyz.us/why-software-RAID.html

MD RAID 10: cgi.cse.unsw.edu.au/~neilb/01093607424

Will Reese has worked with Linux for the past ten years, primarily scaling Web applications running on Apache, Python and PostgreSQL. He currently is a developer at HushLabs working on www.natuba.com.

RAID Levels

RAID is extremely versatile and uses a variety of techniques to increase capacity, performance and reliability as well as reduce cost. Unfortunately, you can't quite have all those at once, which is why there are many different implementations of RAID. Each implementation, or level, is designed for different needs.

Mirroring (RAID 1):

Mirroring creates an identical copy of the data on each disk in the array. The array has the capacity of only a single disk, but the array will remain functional as long as at least one drive is still good. Read and write performance is the same or slightly slower than a single drive. However, read performance will increase if there are multiple read requests at the same time, because each drive in the array can handle a read request (parallel reads). Mirroring offers the highest level of reliability.

Striping (RAID 0):

Striping spreads the data evenly over all disks. This makes reads and writes very fast, because it uses all the disks at the same time, and the capacity of the array is equal to the total capacity of all the drives in the array. However, striping does not offer any redundancy, and the array will fail if it loses a single drive. Striping provides the best performance and capacity.
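
For comparison with the RAID 1 commands used elsewhere in this article, a two-disk striped array could be created in much the same way, only with the level set to 0 (the device names here are just placeholders):

  1. mdadm -C /dev/md0 -n2 -l0 /dev/sdb1 /dev/sdc1
  2. cat /proc/mdstat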

Striping with Parity (RAID 4, 5, 6):

In addition to striping, these levels add information that is used to restore any missing data after a drive failure. The additional information is called parity, and it is computed from the actual data. RAID 4 stores all the parity on a single disk; RAID 5 stripes the parity across all the disks, and RAID 6 stripes two different types of parity. Each copy of parity uses one disk's worth of capacity in the array, so RAID 4 and 5 have the capacity of N-1 drives, and RAID 6 has the capacity of N-2 drives. RAID 4 and 5 require at least three drives and can survive the loss of any single disk in the array, whereas RAID 6 requires at least four drives and can withstand losing up to two disks. Write performance is relatively slow because of the parity calculations, but read performance is usually very fast, as the data is striped across all of the drives. These RAID levels provide a nice blend of capacity, reliability and performance.
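
As a hedged example, a three-disk RAID 5 array could be created with a command along these lines (device names are placeholders):

  1. mdadm -C /dev/md0 -n3 -l5 /dev/sdb1 /dev/sdc1 /dev/sdd1
  2. watch cat /proc/mdstat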

Nesting RAID Levels (RAID 10):

It is possible to create an array where the “drives” actually are other arrays. One common example is RAID 10, which is a RAID 0 array created from multiple RAID 1 arrays. Like RAID 5, it offers a nice blend of capacity, reliability and performance. However, RAID 10 gives up some capacity for increased performance and reliability, and it requires a minimum of four drives. The capacity of RAID 10 is half the total capacity of all the drives, because it uses a mirror for each “drive” in the stripe. It offers more reliability, because you can lose up to half the disks as long as you don't lose both drives in one of the mirrors. Read performance is excellent, because it uses striping and parallel reads. Write performance also is very good, because it uses striping and does not have to calculate parity like RAID 5. This RAID level typically is used for database servers because of its performance and reliability.
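
A nested RAID 10 array can be built by hand as a stripe of mirrors. The following is only a sketch with placeholder device names, assuming four drives:

  1. mdadm -C /dev/md0 -n2 -l1 /dev/sdb1 /dev/sdc1   # first mirror
  2. mdadm -C /dev/md1 -n2 -l1 /dev/sdd1 /dev/sde1   # second mirror
  3. mdadm -C /dev/md2 -n2 -l0 /dev/md0 /dev/md1     # stripe across the mirrors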

MD RAID 10:

This is a special RAID implementation that is unique to Linux software RAID. It combines striping and mirroring and is very similar to RAID 10 with regard to capacity, performance and reliability. However, it will work with two or more drives (including odd numbers of drives) and is managed as a single array. In addition, it has a mode (raid10,f2) that offers RAID 0 performance for reads and very fast random writes.
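
A two-disk md RAID 10 array in the far-copies layout mentioned above could be created like this (a sketch only; device names are placeholders):

  1. mdadm -C /dev/md0 -n2 -l10 -p f2 /dev/sdb1 /dev/sdc1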

Spanning (Linear or JBOD):

Although spanning is not technically RAID, it is available on nearly every RAID card and software RAID implementation. Spanning appends disks together, so the data fills up one disk then moves on to the next. It does not increase reliability or performance, but it does increase capacity, and it typically is used to create a larger drive out of a bunch of smaller drives of varying sizes.
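
mdadm handles spanning as well; as a sketch, two disks of different sizes could be appended into one linear array like this (placeholder device names):

  1. mdadm -C /dev/md0 -n2 -l linear /dev/sdb1 /dev/sdc1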

Tech Tip - Copying a Filesystem between Computers


If you need to transfer an entire filesystem from one machine to another, for example, when you get a new computer, follow these steps.

1) Boot both PCs with any Linux live CD (for example, Knoppix), and make sure they can access each other via the network.

2) On the source machine, mount the partition containing the filesystem to be copied, and start the transfer using netcat and tar:

cd /mnt/sda1
tar -czpsf - . | pv -b | nc -l 3333

3) On the destination machine, mount the partition to receive the filesystem, and start the process:

cd /mnt/sda1
nc 192.168.10.101 3333 | pv -b | tar -xzpsf -

The nc (netcat) command is used for any kind of TCP connections between two hosts. The pv (progress viewer) command is used to display the progress of the transfer. tar is used to archive the files on the source machine and un-archive them on the destination.

 -Dashamir Hoxha

Overcoming the Challenges of Developing Programs for the Cell Processor

What has to be done to provide developers with a debugging environment that correctly represents what is happening as a program runs on a Cell processor?


Roadrunner

In June 2008, Los Alamos National Lab announced the achievement of a numerical goal to which computational scientists have aspired for years—its newest Linux-powered supercomputer, named Roadrunner, had reached a measured performance of just over one petaflop. In doing so, it doubled the performance achieved by the world's second fastest supercomputer, the Blue Gene/L at Lawrence Livermore National Lab, which also runs Linux. [See James Gray's “The Roadrunner Supercomputer: a Petaflop's No Problem” on page 56 of this issue.]


Top 500

The competition for prestigious spots on the list of the world's fastest supercomputers (www.top500.org) is serious business for those involved and an intriguing showcase for new technologies that may show up in your data center, on your desk or even in your living room.


Cells

One of the key technologies enabling Roadrunner's remarkable performance is the Cell processor, collaboratively designed and produced by IBM, Sony and Toshiba. Like the more-familiar multicore processors from Intel and others, the Cell incorporates multiple cores so that multiple streams of computation can proceed simultaneously within the processor. But unlike multicore, general-purpose CPUs, which typically include a set of four, eight or more cores that behave essentially like previous-generation processors in the same family, the Cell has two different kinds of processing cores. One is a general-purpose processor, referred to as the Power Processing Element (PPE), and the remaining (eight in the current configurations) are highly optimized for performing intensive single-precision and double-precision floating-point calculations. These eight cores, called Synergistic Processing Elements (SPEs), are capable of performing about 100 billion double-precision floating-point operations per second (100 Gflops).


Cryptix

"I encrypt my hard drive, and I also just figured out how to use Cryptix with SD cards as well. "


Cory Doctorow—Linux Guru? - Big Brother

DS: So, tell us a bit about Little Brother. What's it about, why the title, and how does it tie in to your other advocacy?

CD: Little Brother is a novel about hacker kids in the Bay Area who, after a terrorist attack that blows up the Bay Bridge, decide that there are worse things than terrorist attacks, which, after all, end. Those things include the authoritarian responses to terrorists, which have no end, which only expand and expand. When you're fighting a threat as big and nebulous as terrorism, there's virtually no security measure that can't be justified. And so they find themselves caught inside an ever-tightening noose of control and surveillance, and they decide that they're going to fight back. They do so by doing three things: they use technology to take control of their technology, so they jailbreak all of their tools and use them to build free, encrypted wireless networks that they can communicate in secrecy with. The second thing they do is get better at understanding the statistics of rare occurrences so that they can control the debate. So they start to investigate how, when you try to stop a very rare occurrence with a security measure, the majority of things you end up stopping won't be the rare occurrence because the rare occurrence happens so rarely. So they start to show how automated surveillance and automated systems of suspicion and control disproportionately punish innocent people and rarely if ever catch guilty people.

DS: Yeah, you're actually having this problem in London now, aren't you?

CD: Oh, well, absolutely. We've got massive surveillance networks here, but it's in the US as well. You've got the hundreds of pages of no-fly-list names. People who are so dangerous that they can't be allowed to get on an airplane but so innocent that we can't think of anything to charge them with....And then, finally, they get involved in electoral politics, because no change endures unless it can be cemented into place and shellacked over with law. You might be able to convert this year's government to the cause, but...in order to make it endure, you have to make it into a law that every government that comes afterward has to abide by. And so for these three measures, they end up changing society and changing the whole world.

The novel is very explicitly didactic. Every chapter has instructions and information necessary to build technology that can help you fight the war on the war on terror. So, from setting up your own TOR node, to building a pinhole camera detector, to disabling an RFID tag, it's in the book. We did a series of “instructables”—little how-tos for building this stuff with kids that can be used as science-fair projects or home projects, and people have taken some of this stuff to heart. There's a notional Linux distro in the book called Paranoid Linux that's kind of an amalgam of all the different security-conscious Linux distros out there, and there are people trying to build a Linux distro based on Paranoid Linux, which is pretty exciting.

DS: Thank you very much for the interview Cory.


Little Brother » Download for Free


TOR Tunnel

...TOR was incredibly useful to me when I was in China last year. It just seemed to me like, over and over again, the sites I wanted to visit were being blocked by the firewall, so I was able to get to them that way. And, I use TOR in other ways too—I mean, there are plenty of times when I try to get on-line, and I just find myself not able to access one site or another, and TOR just fixes it, which is great. I have FoxyProxy on Firefox, which allows me to turn on or turn off TOR automatically when I need it, and my friend Seth Schoen helped me with a little script to tunnel my mail over TOR....so I'm sending SMTP over SSH over TOR, which is great.

The Script:

alias tortunnel='ssh -o ProxyCommand="/usr/bin/connect -S localhost:9050 %h %p" -f -N -C -l username -L5002:255.255.255.255:25 -L5003:255.255.255.255:110 -L5555:localhost:5555 255.255.255.255'

Tech Tip - List Open Files


If you try to unmount a partition and get a message like this:

# umount /media/usbdisk/
umount: /media/usbdisk: device is busy

use the lsof command to find out what programs are using what files:

# lsof /media/usbdisk/

This shows which programs are using the device. For an even clearer picture, use the device rather than the mountpoint:

# lsof /dev/sdb1

You either can wait until those processes exit or terminate them manually.


Use Python for Scientific Computing

scipy

scipy is a further extension built on top of numpy. This package simplifies many of the more-common tasks that need to be handled, providing tools for finding the roots of polynomials, performing Fourier transforms, doing numerical integration and handling enhanced I/O. With these functions, a user can develop very sophisticated scientific applications in relatively short order.
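
As a small illustration of the kinds of calls involved (the particular functions chosen here are mine, not the article's):

import numpy as np
from scipy import integrate, optimize

# find a root of x^3 - 2x - 5 between 2 and 3
root = optimize.brentq(lambda x: x**3 - 2*x - 5, 2, 3)

# numerically integrate sin(x) from 0 to pi (exact answer is 2)
area, err = integrate.quad(np.sin, 0, np.pi)

print(root)
print(area)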


The Python code is much shorter and cleaner, and the intent of the code is much clearer. This kind of clarity in the code means that the programmer can focus much more on the algorithm rather than the gritty details of the implementation. There are C libraries, such as LAPACK, which help simplify this work in C. But, even these libraries can't match the simplicity of scipy.
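
The listings being compared are not reproduced in this excerpt, but the numpy version of the matrix multiplication amounts to something like the following sketch (the array sizes are arbitrary):

import numpy

a = numpy.random.rand(500, 500)
b = numpy.random.rand(500, 500)

# the entire matrix multiplication is a single library call
c = numpy.dot(a, b)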

Table 1. Average Runtimes

Language    Average Time (seconds)
C           1.620
C (-O3)     0.010
Python      0.250

“But what about efficiency?”, I hear you ask. Well, let's take a look at it with some timed runs. Taking our above example, we can put some calls around the actual matrix multiplication part and see how long each one takes.
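
A minimal sketch of such a timed run, wrapping the multiplication with Python's time module (the matrices are assumed to be the ones created in the earlier sketch):

import time

start = time.time()
c = numpy.dot(a, b)
print(time.time() - start)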


MPI

Going Parallel

So far, we have looked at relatively small data sets and relatively straightforward computations. But, what if we have really large amounts of data, or we have a much more complex analysis we would like to run? We can take advantage of parallelism and run our code on a high-performance computing cluster.

The good people at the SciPy site have written another module called mpi4py. This module provides a Python implementation of the MPI standard. With it, we can write message-passing programs. It does require some work to install, however.

The first step is to install an MPI implementation for your machine (such as MPICH, OpenMPI or LAM). Most distributions have packages for MPI, so that's the easiest way to install it. Then, you can build and install mpi4py the usual way with the following:

python setup.py build
python setup.py install

To test it, execute:

mpirun -np 5 python tests/helloworld.py

which launches five Python processes, each running the test script.

Now, we can write our program to divide our large data set among the available processors if that is our bottleneck. Or, if we want to do a large simulation, we can divide the simulation space among all the available processors. Unfortunately, a useful discussion of MPI programming would be another article or two on its own. But, I encourage you to get a good textbook on MPI and do some experimenting yourself.
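
As a hedged sketch of the data-division idea (the array contents and script name are made up for illustration), each process can work on its own slice of the data, and the partial results can be combined with a reduction:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# every process builds (or loads) the same large data set,
# then works only on its own round-robin slice of it
data = np.arange(1000000)
local_sum = data[rank::size].sum()

# combine the partial sums on process 0
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(total)

Run it the same way as the hello-world test, for example, mpirun -np 4 python partial_sums.py (the script name is hypothetical).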


Types of Parallel Programming

Parallel programs can, in general, be broken down into two broad categories: shared memory and message passing. In shared-memory parallel programming, the code runs on one physical machine and uses multiple processors. Examples of this type of parallel programming include POSIX threads and OpenMP. This type of parallel code is restricted to the size of the machine that you can build.

To bypass this restriction, you can use message-passing parallel code. In this form, independent execution units communicate by passing messages back and forth. This means they can be on separate machines, as long as they have some means of communication. Examples of this type of parallel programming include MPICH and OpenMPI. Most scientific applications use message passing to achieve parallelism.
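
To make the message-passing idea concrete, here is a minimal mpi4py sketch in which one process sends a Python object to another; it needs at least two processes to run:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # process 0 passes a message...
    comm.send("hello from rank 0", dest=1, tag=0)
elif rank == 1:
    # ...and process 1 receives it
    msg = comm.recv(source=0, tag=0)
    print(msg)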


Resources

Python Programming Language—Official Web Site: www.python.org

SciPy: www.scipy.org

ScientificPython—Theoretical Biophysics, Molecular Simulation, and Numerically Intensive Computation: dirac.cnrs-orleans.fr/plone/software/scientificpython