Linux Journal Issue #185, September 2009
In a world full of standards, creating cross-platform applications ought to be simple, right? Well, the important word there is full: you can't walk down the street these days without tripping over somebody's standard. As always, it's Open Source to the rescue. This month we highlight a few of the tools available for doing cross-platform development: Lazarus, Qt and Titanium. We also have an interview with the developers of Google Chrome, the newest cross-platform browser. Along with our features, we have our usual spate of articles on Linux and Open Source: Shoulda (a favorite tool of Hillary Clinton), AppArmor, ImageMagick, Openfire, SocNetV, Linux-MiniDisc, Open Source Compliance and, in the slow but never-ending evolution of our own Kyle Rankin, he gets one step closer to being a fan of Twitter by using tircd.
Features
- Google Chrome: the Making of a Cross-Platform Browser by James Gray
- What does it take to make a cross-platform browser work well on three platforms?
- Rich Cross-Platform Desktop Applications Using Open-Source Titanium by Mark Obcena
- Web developer, meet the desktop.
- Lazarus for Cross-Platform Development by Mattias Gaertner
- Pascal. Native code. Linux, Windows and Mac, oh my!
- How to Be Cute on All Desktops with Qt by Johan Thelin
- It's not called Qt for nuttin.
LSI drivers
One thing that doesn't happen often is a hardware vendor asking for advice from the Linux community about how to code its drivers. But, Atul Mukker from LSI Corporation recently did exactly that. He said LSI wanted to take a whole new approach to driver writing, in which it had operating-system-independent code at the core, with a thin layer of support for Linux, Windows and so on. And, he just wanted to know if anyone had any advice. It turns out several folks did, one of the main ones being Jeff Garzik. Jeff recommended Intel's networking drivers as excellent examples of good practice. He suggested modularizing the code so that each piece of hardware would have its own codebase, which also could be kept free of any operating-system-specific code. He also recommended keeping general-purpose code out of the driver itself, placing it instead where other drivers could use it more easily. The Application Binary Interface (ABI), Jeff said, also should remain consistent with other drivers already in the kernel. Any feature similar to something found elsewhere should imitate that other interface. Any features that were unique, on the other hand, could create whatever interface seemed best.
Mac OS X, It's Not Linux, but It's Close
Apple provides a package called Xcode on its developer site. Xcode has the necessary tools for compiling programs on the Mac, and it includes a nice graphical IDE and lots of examples for developing applications for OS X. Xcode is based on the GNU toolset, providing tools like gcc, libtool, make and so on. That means, with Xcode, most command-line applications can be compiled and run on the Mac. So, a simple little hello world program:
#include <stdio.h>
#include <stdlib.h>

int main (int argc, char **argv)
{
    printf("Hello World\n");
}
compiles fine with gcc, giving you an executable that prints out “Hello World” on the command line. Basically, anything that is POSIX-compliant should compile and run with no issues.
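For example, assuming the snippet above is saved as hello.c (a filename chosen purely for illustration), the familiar invocation works the same way it does on Linux:

$ gcc -o hello hello.c
$ ./hello
Hello World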
Why Buy a $350 Thin Client?
LTSP (Linux Terminal Server Project, www.ltsp.org) crew and their commercial company (www.disklessworkstations.com)
One of the issues network administrators need to sort out is whether a decent thin client, which costs around $350, is worth the money when full-blown desktops can be purchased for a similar investment. As with most good questions, there isn't just one answer. Thankfully, LTSP is very flexible with the clients it supports, so whatever avenue is chosen, it usually works well. Some of the advantages of actual thin-client devices are:
1. Setup time is almost zero. The thin clients are designed to be unboxed and turned on.
2. Because modern thin clients have no moving parts, they very seldom break down and tend to use much less electricity compared to desktop machines.
3. Top-of-the-line thin clients have sufficient specs to support locally running applications, which takes load off the server without sacrificing ease of installation.
4. They look great.
They Said It
We're done with the first 80%, and well into the second 80%.
—Larry Wall, referring to Perl 6
Doing linear scans over an associative array is like trying to club someone to death with a loaded Uzi.
—Larry Wall
Getting information off the Internet is like taking a drink from a fire hydrant.
—Mitchell Kapor
Globalization, as defined by rich people like us, is a very nice thing...you are talking about the Internet, you are talking about cell phones, you are talking about computers. This doesn't affect two-thirds of the people of the world.
—Jimmy Carter
I don't have to write about the future. For most people, the present is enough like the future to be pretty scary.
—William Gibson
In Cyberspace, the First Amendment is a local ordinance.
—John Perry Barlow
Cooking with Linux - Cross at Your Platform?
XMPP - Openfire by Jive
The topic this month is instant messaging, or IM for short, built on XMPP (the Extensible Messaging and Presence Protocol). XMPP is more commonly known as Jabber, and it's used by many companies and organizations. (Jabber/XMPP is the protocol used by Google Talk.)
One of my favorite Jabber servers comes from a company called Jive Software. It's called Openfire, and it's completely open and released under the GPL.
Getting an Openfire Jabber server up and running starts with a visit to Jive Software's Ignite Realtime community site at www.igniterealtime.org. Click on Products, then select the Openfire Jabber collaboration server link (at the time of this writing, the version number is 3.6.4). Jive and Ignite Realtime have many products listed on the site, all of them meant to enable collaboration and communication, but I concentrate only on Openfire here. The package comes in RPM format as well as DEB. There's also a tarred and gzipped bundle that should work in environments where RPM or DEB might be an issue. Installing either version of the package is easy. To install the RPM package, type the following:
sudo rpm -i openfire_3.6.4-1.i386.rpm
If you choose to use the Debian package, you can install it with:
sudo dpkg -i openfire_3.6.4_all.deb
If you need to use the tarred bundle, extract it in the /opt directory (this is the installation folder for the RPM package as well); Openfire files and programs wind up under /opt/Openfire. One plus of the RPM package is that it comes with the Java Runtime Environment (JRE). If you choose (or need) to use the tarred bundle, you also need a version 1.6 JRE loaded on your system. Java is, of course, available from java.sun.com. Debian (or Ubuntu) users also need an installed JRE. In addition, that whole thing about everything in /opt doesn't apply to Debian users.
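For the tarred bundle, a minimal sketch looks something like the following (the archive name is a guess based on the version mentioned above; adjust it to match the file you actually download):

$ sudo tar -xzf openfire_3_6_4.tar.gz -C /opt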
The Openfire server administrative interface runs on port 9090, so point your browser to the following address: http://localhost:9090.
The 9090 port is the default, along with port 9091 for secure connections to the server. Unless you have a good reason, it probably makes sense to accept those defaults.
What seems so simple to your instant-messaging users actually is a fairly complex and exceedingly powerful collaboration server. The administrator has extensive control over Openfire's operation: server-to-server communications, message audit policies, the treatment of messages sent to off-line users (stored, by default), private data storage, file transfers, security settings (including encrypted communications) and a lot more. Openfire also is extensible, with added functionality provided through a system of plugins.
Chat rooms: permanent chat rooms can be created where users can gather for general meetings or predefined functions.
Resources
Kopete: kopete.kde.org
Openfire Server at Ignite Realtime: www.igniterealtime.org/projects/openfire/index.jsp
Pidgin: www.pidgin.im
Work the Shell - Messing around with ImageMagick
ImageMagick - www.imagemagick.org
$ identify iphone-id.png
iphone-id.png PNG 470x118 470x118+0+0 8-bit DirectClass 12.2kb

$ file iphone-id.png
iphone-id.png: PNG image data, 470 x 118, 8-bit/color RGB, non-interlaced
ImageMagick supplies the mogrify command, which resizes (and otherwise transforms) images in place, and the convert command, which writes its results to a new file. Both accept the same resize options.
Digging through them, here's the flag I want to use:
-resize geometry resize the image
That sounds like what we need is to resize the images, though “geometry” is still a bit of an unknown. Now it's time to pop over to the ImageMagick Web site, where we find a ton of options for geometry, including:
- scale%: height and width both scaled by specified percentage.
- scale-x%xscale-y%: height and width individually scaled by specified percentages.
- width: width given, height automatically selected to preserve aspect ratio.
- xheight: height given, width automatically selected to preserve aspect ratio.
- widthxheight: maximum values of height and width given, aspect ratio preserved.
$ convert -resize 1024 DSC_7466.JPG smaller-DSC_7466.JPG
for filename in *.png
do
    convert -resize "50%" $filename smaller/$filename
done
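A slightly more defensive variant of the same loop (just a sketch, assuming you want the output directory created automatically and that some filenames may contain spaces):

mkdir -p smaller
for filename in *.png
do
    convert -resize "50%" "$filename" "smaller/$filename"
done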
Paranoid Penguin - AppArmor in Ubuntu 9
AppArmor in Ubuntu
Three years ago, I devoted a couple columns (in the April and August 2006 issues of LJ) to Novell AppArmor, a partial implementation of Mandatory Access Controls (MACs) that Novell had integrated into SUSE Linux as part of its acquisition of Immunix. Novell also had released AppArmor's source code under the GPL. I expressed hope that other distributions soon would offer AppArmor as an easier-to-configure alternative to SELinux.
The good news is, during the three years since I wrote those articles, both Ubuntu and Mandriva have incorporated AppArmor into their respective distributions. Although until recently Ubuntu hasn't provided very much documentation on its AppArmor port—one might even characterize Ubuntu's AppArmor adoption as stealthy—AppArmor actually has been in Ubuntu since Ubuntu 7.10 (Gutsy Gibbon). In fact, I mentioned this inclusion in these very pages in the April 2008 issue, in my article “Security Features in Ubuntu Server”.
At the time, I commented that due to its lack of AppArmor GUI tools or documentation, AppArmor in Ubuntu 7.10 appeared to be targeted at expert users. With Ubuntu 9.04, I'm happy to report that although AppArmor in Ubuntu still is configured strictly via the command line, it's now amply documented and comes with a useful set of default profiles.
The bad news is, in late 2007, Novell laid off all four of its full-time AppArmor developers, raising serious questions about the future of AppArmor.
With AppArmor in Ubuntu, for many applications, you don't need to do anything to enable or configure AppArmor protection. For others, AppArmor essentially can train itself to protect them.
AppArmor Review
In case you missed my earlier articles on this topic, AppArmor is based on the Linux Security Modules (LSMs), as is SELinux. AppArmor, however, provides only a subset of the controls SELinux provides. Whereas SELinux has methods for Type Enforcement (TE), Role-Based Access Controls (RBACs), and Multi-Level Security (MLS), AppArmor provides only a form of Type Enforcement.
Type Enforcement involves confining a given application to a specific set of actions, such as writing to Internet network sockets, reading a specific file and so forth. RBAC involves restricting user activity based on the defined role, and MLS involves limiting access to a given resource based on its data classification (or label).
By focusing on Type Enforcement, AppArmor provides protection against, arguably, the most common Linux attack scenario—the possibility of an attacker exploiting vulnerabilities in a given application that allows the attacker to perform activities not intended by the application's developer or administrator. By creating a baseline of expected application behavior and blocking all activity that falls outside that baseline, AppArmor (potentially) can mitigate even zero-day (unpatched) software vulnerabilities.
What AppArmor cannot do, however, is prevent abuse of an application's intended functionality. For example, the Secure Shell dæmon, SSHD, is designed to grant shell access to remote users. If an attacker figures out how to break SSHD's authentication by, for example, entering just the right sort of gibberish in the user name field of an SSH login session, resulting in SSHD giving the attacker a remote shell as some authorized user, AppArmor may very well allow the attack to proceed, as the attack's outcome is perfectly consistent with what SSHD would be expected to do after successful login.
If, on the other hand, an attacker figured out how to make the CUPS print services dæmon add a line to /etc/passwd that effectively creates a new user account, AppArmor could prevent that attack from succeeding, because CUPS has no reason to be able to write to the file /etc/passwd.
The future of AppArmor
But, you should know that AppArmor's future is uncertain. In late 2007, Novell laid off its full-time AppArmor developers, including project founder Crispin Cowan (who subsequently joined Microsoft).
Other Ubuntu AppArmor profiles are installed, but set to run in complain mode, in which AppArmor only logs unexpected application behavior to /var/log/messages rather than both blocking and logging it. You either can leave them that way, if you're satisfied with just using AppArmor as a watchdog for those applications (in which case, you should keep an eye on /var/log/messages), or you can switch them to enforce mode yourself, although, of course, you should test thoroughly first.
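If the apparmor-utils package is installed, switching a profile between modes is a one-liner; the profile name below is hypothetical, so substitute one from your own /etc/apparmor.d:

$ sudo aa-enforce /etc/apparmor.d/usr.sbin.example    # hypothetical profile; switch to enforce mode
$ sudo aa-complain /etc/apparmor.d/usr.sbin.example   # switch back to complain (log-only) mode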
Active AppArmor profiles reside in /etc/apparmor.d. The files at the root of this directory are parsed and loaded at boot time automatically. The apparmor-profiles package installs some of its profiles there, but puts experimental profiles in /usr/share/doc/apparmor-profiles/extras.
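To try one of those experimental profiles, the general pattern (a sketch; the profile filename here is hypothetical) is to copy it into the active directory and have AppArmor re-read its profiles:

$ sudo cp /usr/share/doc/apparmor-profiles/extras/usr.sbin.example /etc/apparmor.d/
$ sudo /etc/init.d/apparmor reload    # reload all profiles so the new one takes effect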
Creating AppArmor Profiles
At a high level, creating a new AppArmor profile involves creating a deny all policy and then running that profile in complain (log-only) mode; running your application in as typical a fashion as possible; using the resulting log messages to loosen up the profile enough (but only enough) for the application to work properly; and setting the finished, tuned profile to enforce mode.
AppArmor, through its genprof and logprof commands, walks you through this entire process interactively. I'm not going to cover the process for tweaking existing AppArmor profiles with logprof. logprof sessions are very similar to genprof sessions, so if you're comfortable creating new profiles, it's easy to tweak existing ones. (See Resources for more information on the latter.)
So, let's walk through the process of creating a new AppArmor profile. For this example scenario, let's start with a simple shell script, spaztacle.sh, that could use some protection. Listing 1 shows the script itself.
Listing 1. A Shell Script Needing AppArmor Protection
#!/bin/sh
#
# spaztacle.sh : archives /var/spaetzle to specified tar-file

tar -cf $1 /var/spaetzle
As you can see, this script allows users to create a backup archive of the directory /var/spaetzle, using the archive filename specified in the command line (for example, spaztacle.sh mybackup.tar). To create an AppArmor profile for it, run the following command:
bash-$ sudo genprof spaztacle.sh
What follows is an interactive question-and-answer session in which:
1. genprof creates a new AppArmor profile for spaztacle.sh, containing a simple “deny all access” policy.
2. genprof loads the new policy in complain mode and prompts you to start the application in a separate window (this is your first opportunity to demonstrate normal application activity to genprof).
3. After you've demonstrated the application sufficiently, genprof analyzes the messages the new profile generated in /var/log/messages.
4. For each log message, genprof asks what sort of rule to add to your new AppArmor profile to account for the behavior that was logged.
5. After all log messages have been analyzed, genprof allows you to repeat the test/analyze cycle, which may or may not result in additional rules for the profile.
6. When you're done with the testing/log-analyzing cycle, genprof saves the profile and loads it in enforce mode. You're done!
This implies that users might be able to write files to other users' home directories, but AppArmor controls augment normal Linux filesystem permissions; they don't replace them. In our example, therefore, users will be able to write to other users' directories only if those directories' permissions are set accordingly.
# Last Modified: Mon Jun 15 21:29:38 2009
#include <tunables/global>

/usr/bin/spaztacle.sh {
  #include <abstractions/base>
  #include <abstractions/nameservice>

  /bin/dash rix,
  /bin/tar rix,
  owner /home/** a,
  /usr/bin/spaztacle.sh r,
  /var/spaetzle/ r,
  /var/spaetzle/** r,
}
Happily, if I run spaztacle.sh again, it still works. But, is AppArmor doing anything? I can make sure the new profile is loaded with this command:
bash-$ sudo aa-status
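Beyond aa-status, a quick informal test (a sketch; the log location depends on your syslog configuration) is to ask the script to write an archive somewhere its profile does not allow and then look for the denial in the log:

$ spaztacle.sh /tmp/should-fail.tar          # /tmp is normally writable, but the profile only allows /home
$ sudo grep -i apparmor /var/log/messages | tail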
Resources
bodhi.zazen's “Introduction to AppArmor” for Ubuntu: ubuntuforums.org/showthread.php?t=1008906
Official Ubuntu AppArmor User Guide: https://help.ubuntu.com/9.04/serverguide/C/apparmor.html
Official Ubuntu AppArmor Overview: www.ubuntu.com/products/whatisubuntu/serveredition/features/apparmor
Ubuntu Community AppArmor Documentation: https://help.ubuntu.com/community/AppArmor
“AppArmor Is Dead” (Blog Post by Russell Coker): etbe.coker.com.au/2008/08/23/apparmor-is-dead
“Go Ahead, Make My Day” (Response to Coker by Crispin Cowan): blogs.msdn.com/crispincowan/archive/2008/09/02/go-ahead-make-my-day.aspx
Novell AppArmor Developer Roadmap: developer.novell.com/wiki/index.php/Apparmor_dev
Miscellaneous, Interesting AppArmor Notes on the Ubuntu Wiki: https://wiki.ubuntu.com/AppArmor
The OpenSUSE Project's AppArmor Page: en.opensuse.org/Apparmor
“Security Features in SUSE 10.0” by Mick Bauer, LJ April 2006: www.linuxjournal.com/article/8783
“An Introduction to Novell AppArmor” by Mick Bauer, LJ August 2006: www.linuxjournal.com/article/9036
“Security Features in Ubuntu Server” by Mick Bauer, LJ April 2008: www.linuxjournal.com/article/10012
Mick Bauer (darth.elmo@wiremonkeys.org) is Network Security Architect for one of the US's largest banks. He is the author of the O'Reilly book Linux Server Security, 2nd edition (formerly called Building Secure Servers With Linux), an occasional presenter at information security conferences and composer of the “Network Engineering Polka”.
Hack and / - What Really IRCs Me: Twitter
In my never-ending search to do all communications through the same IRC client, this month I present tircd—a great way to connect to Twitter over IRC.
In a previous column, I discussed how I used Bitlbee to access all sorts of IM services from my IRC client, and I promised that in the follow-up column, I would talk about how to do something similar for Twitter.
tircd to the Rescue
tircd is a simple Perl script that works much like Bitlbee. When you start the program, it creates a new IRC server on your local machine that you can connect to with an IRC client. The only difference is that it interfaces with your Twitter account, so people you follow show up as users in the channel, and their tweets show up as normal chat messages. Once you are in the channel, everything you type becomes a new Twitter message as well, so it behaves much like any other IRC channel.
To install tircd, first go to the main project page at code.google.com/p/tircd, and download the latest version. As with many Perl scripts, tircd makes use of some CPAN modules you might not have on your system, so dust off your Perl programmer hat, and type the following command as root to install the CPAN modules:
# cpan -i POE POE::Filter::IRCD Net::Twitter
tircd includes a sample configuration file that is heavily commented, so you can see what each option does. The default settings should work in most situations, unless you already run a local IRC server (such as Bitlbee in my case). If you do run another IRC server, change the port setting in the file from port 6667 to port 6668 so it won't conflict.
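With the CPAN modules installed and the config file adjusted, starting the server is just a matter of running the script from the directory where you unpacked it (a sketch; I'm assuming the script is named tircd.pl and reads tircd.cfg from the current directory, which may differ between versions):

$ cd tircd
$ perl tircd.pl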
Once tircd is running, connect to it with your IRC client, supplying your Twitter credentials. For instance, on most command-line IRC clients, you would type:
/server localhost 6667 twitter_password twitter_username
In my case, as I already had Bitlbee running on port 6667, I connected to port 6668:
/server localhost 6668 twitter_password twitter_username
Once you are connected to the tircd server, join the #twitter channel. tircd automatically imports everyone you are following, so they show up as users in the channel, and you also will see their recent posts. Any users that follow you back are voiced (+v).
New Products
Ksplice Uptrack Service
Across the pond at Germany's big LinuxTag event, Ksplice unveiled Ksplice Uptrack, a new service that installs security and bug fixes on a running kernel without rebooting. Ksplice, whose technology was developed at MIT, claims to be the only solution that allows this application of updates without rebooting. Currently available for Ubuntu 9.04, Uptrack supports generic, virtual and server kernels. It also works in VMware, Xen, Virtuozzo or other virtualized environments. Although the initial release is a consumer-oriented solution, an enterprise solution is expected in Q3 2009. [See the August 2009 issue for a feature article on Ksplice.]
www.ksplice.com
Joe Hutsko's Green Gadgets for Dummies (Wiley)
Greening your gadgets and lifestyle can be not only fun but money-saving as well. Such is the motto of Joe Hutsko's new book Green Gadgets for Dummies from Wiley, a title billed as a friendly reference for exploring the environmental and financial benefits of green gadgets. Green gadgets encompass everything from iPods to energy-efficient home entertainment devices to solar laptop chargers and crank-powered gizmos. The book explains how to research green gadgets, calculate energy consumption, make a smart purchasing decision, use products you already own in a more environmentally friendly way, and bid farewell to electronics that zap both energy and money. Finally, the book covers product labels and how to avoid “greenwashing”, that is, the overselling of environmental benefits.
www.wiley.com
CoroWare's Explorer
CoroWare Technologies announced the Explorer, an all-terrain robot designed and optimized for conducting R&D into new robotic applications that operate in unstructured, outdoor environments. Built on a ruggedized chassis, the Explorer functions well outside the lab, navigating rough terrain and resisting environmental elements. The Explorer's camera, wheel encoders and GPS enable the robot to examine the environment while the fully articulated four-wheel drive ensures the Explorer can navigate curbs, steps and inclines. By including a 2.0GHz PC-class processor, 80GB disk storage space and Ubuntu Linux with support for Player Project pre-installed, Explorer is ready to support any software the developer desires. Explorer comes standard with four-wheel drive, 802.11n Wi-Fi, GPS and 1600x1200 color camera. Expansion capabilities exist via extra USB, RS-232, digital I/O and analog inputs. Options include wheel encoders, a pan/tilt/zoom camera and a 64-bit dual-core motherboard.
www.coroware.com/explorer
Google Chrome: the Making of a Cross-Platform Browser
Google's Evan Martin and Mads Ager discuss the challenges behind making a browser work well on Linux, Mac OS and Windows.
Google's Strategy with Chrome
In some of my earliest conversations with Google, we talked about the company's motivations for building Chrome. After developing a range of rich and complex Web apps, the company saw that it was time to build a browser from scratch that could better handle “today's Web”. From the beginning, the team focused on a browser that innovated in four key areas: speed, simplicity, security and stability. Early on, the Google Chrome team realized that the linchpin for innovating in these key areas, as well as for handling the new Web apps, would be much more efficient handling of JavaScript. Thus, the V8 JavaScript engine, explained further below, was conceived and became central to the Google Chrome Project.
LJ: What is the Google Chromium Project?
EM: After we wrote the code for Google Chrome, we open sourced it under the name Chromium. Much like Firefox is a trademark of Mozilla, Google Chrome is a trademark of Google; the name Chromium is not, so distributions are free to use it to refer to the same project. We hope that developers and browser vendors take a look at the Chromium source code and that it will be useful for new projects built by the Open Source community in the future.
JG: Can you tell us more about V8, its history, your rationale for developing it and who the key people were behind it?
MA: The V8 Project started in late 2006. At that time, existing JavaScript engines did not perform very well. The goal of the V8 Project was to push the performance of JavaScript engines by building a new JavaScript engine on which large object-oriented programs run fast. The V8 Project was pioneered by the dynamic duo of serial virtual machine builders Lars Bak and Kasper Lund in a farmhouse outside Aarhus, Denmark.
JG: What innovations and new approaches does V8 bring to the browser?
MA: V8 uses the concept of hidden classes and hidden class transitions combined with native code generation and a technique called inline caching to make property accesses and function calls fast. V8 uses precise generational garbage collection to make the engine scale to large object-oriented programs that use a lot of objects. In addition, V8 contains a JavaScript regular expression engine that was developed from scratch, is automata-based and generates native code for regular expressions.
JG: Did you name it after the internal combustion engine or the vegetable drink?
MA: The internal combustion engine. It was developed in the context of Google Chrome, and we thought that there should be a powerful V8 engine under the “chrome”.
Open-Source Compliance
A discussion of open-source compliance, the challenges faced when establishing a compliance program, an overview of best practices and recommendations on how to deal with compliance inquiries.
Establishing Compliance Best Practices:
1. Scanning Code
The first step in the compliance process is usually scanning the source code, also sometimes called auditing the source code.
2. Identification and Resolution of Flagged Issues
After scanning the source code, the scanning tool generates a report that includes a bill of materials: an inventory of all the files in the source code package and their discovered licenses, in addition to flagging any possible licensing issues found and pinpointing the offending code.
- Identify whether your engineers made any code modifications. Ideally, you shouldn't rely on engineers to remember if they made code changes. You should rely on your build tools to be able to identify code changes, who made them and when.
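For example, if the package lives in a version-control system such as Git (an assumption; any system that records per-commit authorship works), the history itself answers who changed what and when. The path below is a placeholder:

$ git log --stat -- third-party/example-package/    # each commit touching this path, with author, date and files changed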
3. Architecture Review
4. Linkage Analysis
Linkage analysis examines how proprietary code links against open-source components, because static linking and dynamic linking can carry different license obligations [1].
[Image: static linking vs. dynamic linking]
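As a quick first pass during linkage analysis, the standard ldd utility lists the shared libraries a binary is dynamically linked against (statically linked code will not appear in this output, so it has to be caught at build time). The binary name below is a placeholder:

$ ldd ./example-binary    # shared libraries this binary will load at runtime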
5. Legal Review
6. Final Review
Source Code Scanning Tools
There are commercial and open-source tools that offer the capabilities of scanning source code for potential open-source issues. Commercial tools include Protex from Black Duck Software, Inc. (www.blackducksoftware.com/protex) and Palamida Compliance Edition from Palamida (www.palamida.com/products/complianceedition). A popular open-source tool is FOSSology (www.fossology.org).
Open-Source Compliance Insurance
In the past few years, some insurance companies have started offering insurance against the legal risks that can result from using open-source software; the policy often is called open-source compliance insurance. Depending on the issuing company, it offers coverage for monetary damages, including profit losses related to noncompliance with open-source software licenses and the cost of updating the offending code.
SFLC's Practical Guide to GPL Compliance
On August 26, 2008, the Software Freedom Law Center (SFLC) published a guide on how to be compliant with the GNU General Public License (GPL) and related licenses. The guide focuses on avoiding compliance actions and minimizing the negative impact when enforcement actions occur. The guide is available at www.softwarefreedom.org/resources.
Resources
Free Software Foundation: www.fsf.org
Software Freedom Law Center: www.softwarefreedom.org
GNU Project: www.gnu.org/licenses/gpl-violation.html
As Yogi Berra famously said, “When you get to a fork in the road, take it.”