Tuesday, December 2, 2008

Complete FTP Session - BASH

A Complete Example FTP Session

Let us now look at an example FTP session where many of the commands above are used in practice. We do the following:
  • connect to the year1 server -- open,
  • list the files -- dir,
  • change directory -- cd,
  • list directories contents -- dir,
  • set binary transfer mode: to download the gif file correctly -- (bin)ary,
  • download a single file -- get,
  • turn prompting on: to allow an interactive multiple get -- prompt,
  • perform a multiple get: note the prompt we get and MUST acknowledge -- mget, and
  • finally close the connection -- close.

The FTP Session looks like this:

ftp> open ftp.cs.cf.ac.uk

Connected to thrall.cs.cf.ac.uk.
220-************************************************************************
220- Cardiff Computer Science campus ftp access. Access is available
220- here as anonymous, by ftp group or by username/password.
220-
220- The programs and data held on this system are the property of the
220- Department of Computer Science in the University of Wales, Cardiff.
220- They are lawfully available to authorised Departmental users only.
220- Access to any data or program must be authorised by the Department
220- of Computer Science.
220-
220- It is a criminal offence to secure unauthorised access to any programs
220- or data on this computer system or to make any unauthorised
220- modification to its contents.
220-
220- Offenders are liable to criminal prosecution. If you are not an
220- authorised user do not log in.
220-************************************************************************
220-
220-Cardiff University. Department of Computer Science.
220-This is the WUSL ftp daemon. Please report problems to
220-Robert.Evans@cs.cf.ac.uk.
220-
220 thrall.cs.cf.ac.uk FTP server (Version wu-2.6.1(1) Mon Sep 18 12:45:30 BST 2000) ready.
Name (ftp.cs.cf.ac.uk:dave): year1
331 Password required for year1.
Password:
230-
230-Welcome to the guest ftp server for Year 1 Internet Computing
230-in the Department of Computer Science at the University of Wales, Cardiff.
230-
230-Please note that all commands and transfers from this ftp account
230-are logged and kept in an audit file.
230-
230-
230 User year1 logged in. Access restrictions apply.

ftp> dir

200 PORT command successful.
150 Opening ASCII mode data connection for /bin/ls.
total 32
drwxrwxrwx 2 y1ftp 2048 Nov 8 1999 ex_gif
drwxrwxrwx 2 y1ftp 2048 Nov 8 1999 ex_hqx
drwxrwxrwx 2 y1ftp 2048 Nov 8 1999 ex_text
drwxrwxrwx 2 y1ftp 2048 Nov 8 1999 ex_uu
drwxrwxrwx 2 y1ftp 2048 Nov 8 1999 ex_zip
drwxr-xr-x 2 y1ftp 512 Oct 18 1999 exercise
drwxrwxr-x 2 gueftp 2048 Nov 5 1999 incoming
drwx--x--x 2 staff 1024 Nov 11 1999 marker
drwxrwxr-x 2 gueftp 2048 Nov 10 1999 test
226 Transfer complete.
489 bytes received in 0.0032 seconds (148.17 Kbytes/s)

ftp> cd exercise

250 CWD command successful.
ftp> dir
200 PORT command successful.
150 Opening ASCII mode data connection for /bin/ls.
total 156
-rw-rw-r-- 1 staff 25943 Dec 8 1997 ex.gif
-rw-rw-r-- 1 staff 53104 Oct 18 1999 ex.txt
226 Transfer complete.
117 bytes received in 0.0066 seconds (17.22 Kbytes/s)

ftp> bin

200 Type set to I.
ftp> get ex.gif
200 PORT command successful.
150 Opening BINARY mode data connection for ex.gif (25943 bytes).
226 Transfer complete.
local: ex.gif remote: ex.gif
25943 bytes received in 0.072 seconds (350.60 Kbytes/s)
ftp> prompt
Interactive mode on.

ftp> mget *.*

mget ex.gif? y
200 PORT command successful.
150 Opening BINARY mode data connection for ex.gif (25943 bytes).
226 Transfer complete.
local: ex.gif remote: ex.gif
25943 bytes received in 0.067 seconds (378.46 Kbytes/s)
mget ex.txt? y
200 PORT command successful.
150 Opening BINARY mode data connection for ex.txt (53104 bytes).
226 Transfer complete.
local: ex.txt remote: ex.txt
53104 bytes received in 0.13 seconds (387.51 Kbytes/s)

ftp> close

221-You have transferred 184037 bytes in 5 files.
221-Total traffic for this session was 186865 bytes in 9 transfers.
221-Thank you for using the FTP service on thrall.cs.cf.ac.uk.
221 Goodbye.
ftp>

Make sure that you can pick out the different ftp commands and responses in this output. Notice that the ftp responses are only displayed when the verbose feature is turned on.
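As an aside, once you know the command sequence, the same transfer can be scripted so that no interactive typing is needed. The sketch below assumes the same host, user and directory as above; the password is a placeholder, and the -i flag disables the per-file mget prompting, so no acknowledgement is required:

#!/bin/bash
# Non-interactive version of the session above.
HOST=ftp.cs.cf.ac.uk
USER=year1
PASS=your_password_here   # placeholder -- use the real account password

ftp -inv "$HOST" <<END_FTP
user $USER $PASS
cd exercise
binary
mget *.*
bye
END_FTP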


Taken From: http://www.cs.cf.ac.uk/Dave/Internet/node118.html


Thursday, November 13, 2008

HTC Diamond as an RNDIS Modem (GPRS/EDGE/HSDPA)

Using the HTC Diamond as an RNDIS modem

Hi, after a small amount of investigation I have finally got the HTC Diamond working as an RNDIS modem over the USB port.

What this means is that you can use the Internet Connection Sharing function of the Diamond to get a computer onto the Internet over the phone's data connection (HSDPA/GPRS/EDGE).

Please note that the first steps of this guide will work for any ActiveSync-connected HTC phone that has Internet Connection Sharing (it's in the Connection Manager on other HTC phones).

Obviously, using data on your mobile costs money, so be aware of this and make sure you have a package with reasonable charges.

Finally, I have only tested this on an Orange HTC Diamond in the UK.

The reason you have to modify the source is that, without the change, the rndis_host driver fails with an error like the following (seen in /var/log/syslog):

Code:

[355.215268] rndis_host 5-3:1.0: dev can't take 1558 byte packets (max 1536)


You need a working Internet connection to set this up.

1. Install Pre-requisites
2. Get the Source
3. Modify the source (Diamond only)
4. Compile and make and install
5. Start the Internet Connection Sharing
6. Plug in the Phone (USB)


Once you have done steps 1-4 you will only ever need to do steps 5 & 6 to get re-connected.



Step 1 - Install Pre-requisites

open a terminal (use same terminal in next steps)

Code:

$ sudo apt-get install subversion

Step 2 - Get the Source

Code:

$ svn co http://synce.svn.sourceforge.net/svnroot/synce/trunk/usb-rndis-lite
$ cd usb-rndis-lite/

Step 3 - Modify the source (Diamond only)

Code:

$ gedit rndis_host.c

On line 524, find this block:

Code:

if (tmp < dev->hard_mtu) {
        dev_err(&intf->dev,
                "dev can't take %u byte packets (max %u)\n",
                dev->hard_mtu, tmp);
        goto fail;
}

change it to this

Code:

if (tmp < dev->hard_mtu) {
        dev_err(&intf->dev,
                "dev can't take %u byte packets (max %u)\n",
                dev->hard_mtu, tmp);
        retval = -EINVAL;
        /* goto fail; */
}

save the file

Step 4 - Compile and make and install

Code:

$ make
$ sudo ./clean.sh
$ sudo make install

Step 5 - Start the Internet Connection Sharing

On older TyTN IIs, open Comm Manager on your phone and tap Internet Sharing. Now make sure USB is selected and choose Connect.

On Diamonds it's a separate program called Internet Connection Sharing.

Step 6 - Plug in the Phone (USB)

Plug the phone in; once the phone has a data connection, Internet Connection Sharing will say Connecting, then Connected.

If this takes a while, the DHCP request may time out and you will have to run the following command:

Code:

$ sudo dhclient

You should then see that you have an IP address on the RNDIS device:

Code:

$ ifconfig
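
If the interface does not appear at all, it is worth confirming that the patched rndis_host module actually loaded; these are just the usual generic checks, nothing specific to this guide.

Code:

$ lsmod | grep rndis
$ dmesg | grep -i rndis | tail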

Taken From: http://ubuntuforums.org/archive/index.php/t-935203.html

Monday, October 27, 2008

Change Your Mac Address in Linux

# In order to change your MAC address just type the following:

$ sudo ifconfig eth0 down hw ether 0A:0B:0C:0D:AA:BB


Note: "0A:0B:0C:0D:AA:BB" is just an example, you should put there the disired MAC address

Note: bringing down the interface (sudo ifconfig eth0 down) and then changing the MAC address (sudo ifconfig eth0 hw ether 0A:0B:0C:0D:AA:BB) did not work for me; instead, I did it all in one line as shown above.



# Now bring the interface back up, and you are ready to go.

$ sudo ifconfig eth0 up
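
# To confirm the change took effect, inspect the interface; either command works, depending on whether your system has net-tools, iproute2 or both:

$ ifconfig eth0 | grep -i hwaddr
$ ip link show eth0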

Wednesday, October 15, 2008

Secured Remote Desktop/Application Sessions

Run graphical applications from afar, securely.

There are many different ways to control a Linux system over a network, and many reasons you might want to do so. When covering remote control in past columns, I've tended to focus on server-oriented usage scenarios and tools, which is to say, on administering servers via text-based applications, such as OpenSSH. But, what about GUI-based applications and remote desktops?

Remote desktop sessions can be very useful for technical support, surveillance and remote control of desktop applications. But, it isn't always necessary to access an entire desktop; you may need to run only one or two specific graphical applications.

In this month's column, I describe how to use VNC (Virtual Network Computing) for remote desktop sessions and OpenSSH with X forwarding for remote access to specific applications. Our focus here, of course, is on using these tools securely, and I include a healthy dose of opinion as to the relative merits of each.

Remote Desktops vs. Remote Applications

So, which approach should you use, remote desktops or remote applications? If you've come to Linux from the Microsoft world, you may be tempted to assume that because Terminal Services in Windows is so useful, you have to have some sort of remote desktop access in Linux too. But, that may not be the case.

Linux and most other UNIX-based operating systems use the X Window System as the basis for their various graphical environments. And, the X Window System was designed to be run over networks. In fact, it treats your local system as a self-contained network over which different parts of the X Window System communicate.

Accordingly, it's not only possible but easy to run individual X Window System applications over TCP/IP networks—that is, to display the output (window) of a remotely executed graphical application on your local system. Because the X Window System's use of networks isn't terribly secure (the X Window System has no native support whatsoever for any kind of encryption), nowadays we usually tunnel X Window System application windows over the Secure Shell (SSH), especially OpenSSH.

The advantage of tunneling individual application windows is that it's faster and generally more secure than running the entire desktop remotely. The disadvantages are that OpenSSH has a history of security vulnerabilities, and for many Linux newcomers, forwarding graphical applications via commands entered in a shell session is counterintuitive. And besides, as I mentioned earlier, remote desktop control (or even just viewing) can be very useful for technical support and for security surveillance.

Using OpenSSH with X Window System Forwarding

Having said all that, tunneling X Window System applications over OpenSSH may be a lot easier than you imagine. All you need is a client system running an X server (for example, a Linux desktop system or a Windows system running the Cygwin X server) and a destination system running the OpenSSH dæmon (sshd).

Note that I didn't say “a destination system running sshd and an X server”. This is because X servers, oddly enough, don't actually run or control X Window System applications; they merely display their output. Therefore, if you're running an X Window System application on a remote system, you need to run an X server on your local system, not on the remote system. The application will execute on the remote system and send its output to your local X server's display.

Suppose you've got two systems, mylaptop and remotebox, and you want to monitor system resources on remotebox with the GNOME System Monitor. Suppose further you're running the X Window System on mylaptop and sshd on remotebox.

First, from a terminal window or xterm on mylaptop, you'd open an SSH session like this:

mick@mylaptop:~$ ssh -X admin-slave@remotebox
admin-slave@remotebox's password: **********
Last login: Wed Jun 11 21:50:19 2008 from dtcla00b674986d
admin-slave@remotebox:~$

Note the -X flag in my ssh command. This enables X Window System forwarding for the SSH session. In order for that to work, sshd on the remote system must be configured with X11Forwarding set to yes in its /etc/ssh/sshd_config file. On many distributions, yes is the default setting, but check yours to be sure.
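
A quick way to check the setting (and the line to add if it is missing) looks like this; the reload command is only a sketch, because the init-script name varies by distribution (ssh on Debian-derived systems, sshd on Red Hat-style ones):

$ grep -i x11forwarding /etc/ssh/sshd_config
X11Forwarding yes

# if it was set to "no", edit the file as root, then reload the dæmon, e.g.:
$ sudo /etc/init.d/sshd reload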

Next, to run the GNOME System Monitor on remotebox, such that its output (window) is displayed on mylaptop, simply execute it from within the same SSH session:

admin-slave@remotebox:~$ gnome-system-monitor &

The trailing ampersand (&) causes this command to run in the background, so you can initiate other commands from the same shell session. Without this, the cursor won't reappear in your shell window until you kill the command you just started.

At this point, the GNOME System Monitor window should appear on mylaptop's screen, displaying system performance information for remotebox. And, that really is all there is to it.

This technique works for practically any X Window System application installed on the remote system. The only catch is that you need to know the name of anything you want to run in this way—that is, the actual name of the executable file.

If you're accustomed to starting your X Window System applications from a menu on your desktop, you may not know the names of their corresponding executables. One quick way to find out is to open your desktop manager's menu editor, and then view the properties screen for the application in question.

For example, on a GNOME desktop, you would right-click on the Applications menu button, select Edit Menus, scroll down to System/Administration, right-click on System Monitor and select Properties. This pops up a window whose Command field shows the name gnome-system-monitor.

Figure 1 shows the Launcher Properties, not for the GNOME System Monitor, but instead for the GNOME File Browser, which is a better example, because its launcher entry includes some command-line options. Obviously, all such options also can be used when starting X applications over SSH.

Figure 1. Launcher Properties for the GNOME File Browser (Nautilus)

If this sounds like too much trouble, or if you're worried about accidentally messing up your desktop menus, you simply can run the application in question, issue the command ps auxw in a terminal window, and find the entry that corresponds to your application. The last field in each row of the output from ps is the executable's name plus the command-line options with which it was invoked.
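
For example, to pick out the System Monitor process discussed above without reading the whole listing, something like the following works (the bracketed first letter is just a common trick that keeps the grep command itself out of the results):

$ ps auxw | grep -i '[s]ystem-monitor'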

Once you've finished running your remote X Window System application, you can close it the usual way (selecting Exit from its File menu, clicking the x button in the upper right-hand corner of its window and so forth). Don't forget to close your SSH session too, by issuing the command exit in the terminal window where you're running SSH.

Virtual Network Computing (VNC)

Now that I've shown you the preferred way to run remote X Window System applications, let's discuss how to control an entire remote desktop. In the Linux/UNIX world, the most popular tool for this is Virtual Network Computing, or VNC.

Originally a project of the Olivetti Research Laboratory (which was subsequently acquired by Oracle and then AT&T before being shut down), VNC uses a protocol called Remote Frame Buffer (RFB). The original creators of VNC now maintain the application suite RealVNC, which is available in free and commercial versions, but TightVNC, UltraVNC and GNOME's vino VNC server and vinagre VNC client also are popular.

VNC's strengths are its simplicity, ubiquity and portability—it runs on many different operating systems. Because it runs over a single TCP port (usually TCP 5900), it's also firewall-friendly and easy to tunnel.

Its security, however, is somewhat problematic. VNC authentication uses a DES-based transaction that, if eavesdropped-on, is vulnerable to optimized brute-force (password-guessing) attacks. This vulnerability is exacerbated by the fact that many versions of VNC support only eight-character passwords.

Furthermore, VNC session data usually is transmitted unencrypted. Only a couple flavors of VNC support TLS encryption of RFB streams, and it isn't clear how securely TLS has been implemented even in those. Thus, an attacker using a trivially hacked VNC client may be able to reconstruct and view eavesdropped VNC streams.

Luckily, as it operates over a single TCP port, VNC is easy to tunnel through SSH, through Virtual Private Network (VPN) sessions and through TLS/SSL wrappers, such as stunnel. Let's look at a simple VNC-over-SSH example.

Tunneling VNC over SSH

To tunnel VNC over SSH, your remote system must be running an SSH dæmon (sshd) and a VNC server application. Your local system must have an SSH client (ssh) and a VNC client application.

Our example remote system, remotebox, already is running sshd. Suppose it also has vino, which is also known as the GNOME Remote Desktop Preferences applet (on Ubuntu systems, it's located in the System menu's Preferences section). First, presumably from remotebox's local console, you need to open that applet and enable remote desktop sharing. Figure 2 shows vino's General settings.

Figure 2. General Settings in GNOME Remote Desktop Preferences (vino)

If you want to view only this system's remote desktop without controlling it, uncheck Allow other users to control your desktop. If you don't want to have to confirm remote connections explicitly (for example, because you want to connect to this system when it's unattended), you can uncheck the Ask you for confirmation box. Any time you leave yourself logged in to an unattended system, be sure to use a password-protected screensaver!

vino is limited in one respect: because it is loaded only after you log in, you can use it only to connect to a fully logged-in GNOME session in progress—not, for example, to a gdm (GNOME login) prompt. Unlike vino, other versions of VNC can be invoked as needed from xinetd or inetd. That technique is outside the scope of this article, but see Resources for a link to a how-to for doing so in Ubuntu, or simply Google the string “vnc xinetd”.

Continuing with our vino example, don't close that Remote Desktop Preferences applet yet. Be sure to check the Require the user to enter this password box and to select a difficult-to-guess password. Remember, vino runs in an already-logged-in desktop session, so unless you set a password here, you'll run the risk of allowing completely unauthenticated sessions (depending on whether a password-protected screensaver is running).

If your remote system will be run logged in but unattended, you probably will want to uncheck Ask you for confirmation. Again, don't forget that locked screensaver.

We're not done setting up vino on remotebox yet. Figure 3 shows the Advanced Settings tab.

Figure 3. Advanced Settings in GNOME Remote Desktop Preferences (vino)

Several advanced settings are particularly noteworthy. First, you should check Only allow local connections, after which remote connections still will be possible, but only via a port-forwarder like SSH or stunnel. Second, you may want to set a custom TCP port for vino to listen on via the Use an alternative port option. In this example, I've chosen 20226. This is an instance of useful security-through-obscurity; if our other (more meaningful) security controls fail, at least we won't be running VNC on the obvious default port.

Also, you should check the box next to Lock screen on disconnect, but you probably should not check Require encryption, as SSH will provide our session encryption, and adding redundant (and not-necessarily-known-good) encryption will only increase vino's already-significant latency. If you think there's only a moderate level of risk of eavesdropping in the environment in which you want to use vino—for example, on a small, private, local-area network inaccessible from the Internet—vino's TLS implementation may be good enough for you. In that case, you may opt to check the Require encryption box and skip the SSH tunneling.

(If any of you know more about TLS in vino than I was able to divine from the Internet, please write in. During my research for this article, I found no documentation or on-line discussions of vino's TLS design details whatsoever—beyond people commenting that the soundness of TLS in vino is unknown.)

This month, I seem to be offering you more “opt-out” opportunities in my examples than usual. Perhaps I'm becoming less dogmatic with age. Regardless, let's assume you've followed my advice to forego vino's encryption in favor of SSH.

At this point, you're done with the server-side settings. You won't have to change those again. If you restart your GNOME session on remotebox, vino will restart as well, with the options you just set. The following steps, however, are necessary each time you want to initiate a VNC/SSH session.

On mylaptop (the system from which you want to connect to remotebox), open a terminal window, and type this command:

mick@mylaptop:~$ ssh -L 20226:remotebox:20226 admin-slave@remotebox

OpenSSH's -L option sets up a local port-forwarder. In this example, connections to mylaptop's TCP port 20226 will be forwarded to the same port on remotebox. The syntax for this option is “-L [localport]:[remote IP or hostname]:[remoteport]”. You can use any available local TCP port you like (higher than 1024, unless you're running SSH as root), but the remote port must correspond to the alternative port you set vino to listen on (20226 in our example), or if you didn't set an alternative port, it should be VNC's default of 5900.
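
For instance, if you had left vino (or any other VNC server) listening on the default port, the equivalent command would simply be:

mick@mylaptop:~$ ssh -L 5900:remotebox:5900 admin-slave@remotebox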

That's it for the SSH session. You'll be prompted for a password and then given a bash prompt on remotebox. But, you won't need it except to enter exit when your VNC session is finished. It's time to fire up your VNC client.

vino's companion client, vinagre (also known as the GNOME Remote Desktop Viewer) is good enough for our purposes here. On Ubuntu systems, it's located in the Applications menu in the Internet section. In Figure 4, I've opened the Remote Desktop Viewer and clicked the Connect button. As you can see, rather than remotebox, I've entered localhost as the hostname. I've also entered port number 20226.

Figure 4. Using vinagre to Connect to an SSH-Forwarded Local Port

When I click the Connect button, vinagre connects to mylaptop's local TCP port 20226, which actually is being listened on by my local SSH process. SSH forwards this connection attempt through its encrypted connection to TCP 20226 on remotebox, which is being listened on by remotebox's vino process.

At this point, I'm prompted for remotebox's vino password (shown in Figure 2). On successful authentication, I'll have full access to my active desktop session on remotebox.

To end the session, I close the Remote Desktop Viewer's session, and then enter exit in my SSH session to remotebox—that's all there is to it.

This technique is easy to adapt to other versions of VNC servers and clients, and probably also to other versions of SSH—there are commercial alternatives to OpenSSH, including Windows versions. I mentioned that VNC client applications are available for a wide variety of platforms; on any such platform, you can use this technique, so long as you also have an SSH client that supports port forwarding.
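
As a concrete illustration, with a command-line client such as RealVNC's or TightVNC's vncviewer, connecting through the same SSH tunnel set up earlier would look roughly like this (the double colon selects a raw TCP port rather than a display number; check your client's documentation, as the syntax differs slightly between VNC flavors):

$ vncviewer localhost::20226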

Conclusion

Thus ends my crash course on how to control graphical applications over networks. I hope my examples of both techniques, SSH with X forwarding and VNC over SSH, help you get started with whatever particular distribution and software packages you prefer. Be safe!

Mick Bauer (darth.elmo@wiremonkeys.org) is Network Security Architect for one of the US's largest banks. He is the author of the O'Reilly book Linux Server Security, 2nd edition (formerly called Building Secure Servers With Linux), an occasional presenter at information security conferences and composer of the “Network Engineering Polka”.

Taken From: Linux Journal Contents #173, September 2008

Friday, October 10, 2008

Adding a User to a Group - Linux

Q. How can I add a user to a group under Linux operating system?

A. You can use the useradd or usermod commands to add a user to a group. The useradd command creates a new user or updates default new-user information. The usermod command modifies an existing user account; it is the one to use for adding a user to an existing group. There are two kinds of group membership: the primary group and secondary (supplementary) groups. All user account information is stored in the /etc/passwd, /etc/shadow and /etc/group files.

useradd example - Add a new user to secondary group

Use the useradd command to add a new user to an existing group (if the group does not exist, create it first). Syntax:
useradd -G {group-name} username
Create a new user called vivek and add it to the group called developers. First, log in as root and make sure the group developers exists:
# grep developers /etc/group
Output:

developers:x:1124:

If you do not see any output then you need to add group developers using groupadd command:
# groupadd developers
Next, add a user called vivek to group developers:
# useradd -G developers vivek
Set up a password for user vivek:
# passwd vivek
Ensure that the user was added properly to the group developers:
# id vivek
Output:

uid=1122(vivek) gid=1125(vivek) groups=1125(vivek),1124(developers)

Please note that the capital G (-G) option adds the user to a list of supplementary groups. Each group is separated from the next by a comma, with no intervening whitespace. For example, to add user jerry to the groups admins, ftp, www and developers, enter:
# useradd -G admins,ftp,www,developers jerry

useradd example - Add a new user to primary group

To add a user tony with developers as the primary group, use the following command:
# useradd -g developers tony
# id tony

uid=1123(tony) gid=1124(developers) groups=1124(developers)
Please note that the lowercase -g option sets the user's initial login group (primary group). The group name must exist; a group number must refer to an already existing group.

usermod example - Add an existing user to an existing group

Add the existing user tony to the ftp supplementary/secondary group with the usermod command, using the -a option (append the user to the supplemental group(s); use it only together with the -G option):


# usermod -a -G ftp tony

To change existing user tony's primary group to www:
# usermod -g www tony
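
After either change, you can verify the result; both commands below show primary and supplementary group membership:

# id tony
# groups tony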

Taken From: http://www.cyberciti.biz/faq/howto-linux-add-user-to-group/

Wednesday, October 1, 2008

Storing Object Oriented Data on a Database

Relational databases are really great for storing and retrieving data, but sometimes, they aren't quite up to the task. Joe Celko, whose SQL for Smarties books are among my favorites, dedicated an entire volume to the issue of trees and hierarchies. These data structures might be common and useful in most programming languages, but they can be difficult to model as tables, particularly if you care about efficient use of the database. Things become even trickier if you're dealing with a number of related, but distinct, types of entities, such as different types of employees or different types of vehicles.

One way to solve this problem is not to use relational databases. Objects can be quite good at handling trees and arrays, as well as inheritance hierarchies. Furthermore, object databases do exist, and the Python-based Zope application framework has demonstrated that it's even possible to have object databases in production. Gemstone's demonstration of Ruby running on top of its Smalltalk VM, with its accompanying object database, means that Ruby programmers soon might have access to similar technology.

But, object databases still are far from the mainstream. Most Web developers have access to a relational database, and not much else. Is there anything that we can do for these people?

This month, we take a look at two different ways we can handle data that doesn't quite fit into a relational database. These techniques are quite different from one another, and they don't even come close to the full range of possibilities you can get with a relational database. But, they both work and are used in production environments—and if your data doesn't seem to fit into standard database paradigms, you might want to consider one of them.

PostgreSQL's Table Inheritance
Some data-modeling issues are typically even harder to deal with. For example, a classic introduction to the world of object-oriented programming describes a human resources department. The HR department tracks employees, all of whom have some common characteristics. But, some employees are programmers, some are secretaries, and some are managers—and each of the employee types has specific data that needs to be associated with them.

In an object-oriented world, it's easy to model this. You create an employee class, and then create multiple subclasses of programmer, secretary and manager. Subclassing creates an “is-a” relationship, such that a programmer is an employee. This means that programmers have all the attributes of an employee, but also have some additional characteristics that distinguish them from an ordinary employee. With these subclasses in place, we then can create an array (or any other data structure) of people in our company, knowing that although some are programmers and others are secretaries, they're all employees and can be treated as such.

Translating this idea to the world of relational databases can be a bit tricky. One solution is to use inheritance in your database tables. PostgreSQL has done this for years; thus, it's called an object-relational database by many users. You can do the following in PostgreSQL, for example:


CREATE TABLE Employees (
id SERIAL,
first_name TEXT NOT NULL,
last_name TEXT NOT NULL,
email_address TEXT NOT NULL,

PRIMARY KEY(id),
UNIQUE(email_address)
);

CREATE TABLE Programmers (
main_language TEXT NOT NULL
) INHERITS(Employees);

CREATE TABLE Secretaries (
words_per_minute INTEGER NOT NULL
) INHERITS(Employees);


INSERT INTO Employees (first_name, last_name, email_address)
VALUES ('George', 'Washington', 'georgie@whitehouse.gov');

INSERT INTO Programmers (first_name, last_name,
email_address, main_language)
VALUES ('Linus', 'Torvalds', 'torvalds@osdl.org', 'C');

INSERT INTO Secretaries (first_name, last_name,
email_address, words_per_minute)
VALUES ('Condoleezza', 'Rice', 'rice@state.gov', 10);


If we ask for all employees in the system, we'll get all three of the people we have entered:


atf=# select * from employees;
 id | first_name  | last_name  |     email_address
----+-------------+------------+------------------------
  1 | George      | Washington | georgie@whitehouse.gov
  2 | Linus       | Torvalds   | torvalds@osdl.org
  3 | Condoleezza | Rice       | rice@state.gov
(3 rows)


Of course, this query shows only the columns of the Employees table, which are common to that table and to those that inherit from it. If we want to find out how many words per minute someone types, we must address that query specifically to the Secretaries table:


atf=# select * from secretaries;
 id | first_name  | last_name | email_address  | words_per_minute
----+-------------+-----------+----------------+------------------
  3 | Condoleezza | Rice      | rice@state.gov |               10
(1 row)


Notice that the id column for all three tables, which was defined as SERIAL (that is, a nonrepeating incrementing integer), is unique across all three tables.
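
A related detail worth knowing: PostgreSQL's ONLY keyword restricts a query to the parent table itself, excluding rows that physically live in the child tables. Run from the shell against the same atf database used above (just a sketch), the first query below returns only George Washington's row, while the second returns all three:

$ psql atf -c "SELECT * FROM ONLY Employees;"
$ psql atf -c "SELECT * FROM Employees;"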


Polymorphic Associations

The way that PostgreSQL has integrated this type of object hierarchy into its relational system is impressive, flexible and useful. And yet, because it is unique to PostgreSQL, it means that no higher-level, database-agnostic application framework can support it. This especially is true in Ruby on Rails, which tries to treat all databases as similar or identical, going so far as to encourage programmers to use a Ruby-based domain-specific language (migrations) to create and modify database definitions. Using PostgreSQL's inheritance features might work, but it will take a fair amount of twisting to make it compatible with Rails.

Besides, Rails already has a feature, known as polymorphic associations, that lets us work with distinct types of items as if they were part of a single class. This isn't the same as an object hierarchy—we can't say that secretaries and programmers are both types of employees. But, we can say that secretaries and programmers are both employable and treat them as similar via that description.

To begin, you might remember that Rails has something known as associations, which allow us to connect one model to another. For example, let's say that each company has one or more employees. Thus, we can create some simple models. We can generate migrations with:

./script/generate model company name:string
./script/generate model employee first_name:string
last_name:string email_address:string company_id:integer

Then, we can turn the automatically generated migration files into actual database tables with the following:

rake db:migrate

Now, we can indicate that each company can have one or more employees by modifying the model files. For example, we add the following to employee.rb:


class Employee < ActiveRecord::Base
  belongs_to :company
end

and the following to company.rb:

class Company < ActiveRecord::Base
  has_many :employees
end

With the models in place, we can create a company and its first employee:

xyz = Company.create(:name => 'XYZ Corporation')

george = Employee.create(:first_name => 'George',
                         :last_name => 'Washington',
                         :email_address => 'georgie@whitehouse.gov',
                         :company_id => xyz.id)

Now, we can say:

p xyz.employees.first

and we get back our george user. Similarly, we can say:

p george.company

and get back our xyz company. This is all standard stuff for Rails programmers, and it is part of the ActiveRecord feature known as associations. You can create all sorts of associations, giving them arbitrary names. For example, we could say:


class Company < ActiveRecord::Base
  has_many :employees

  has_many :employees_with_a,
           :class_name => 'Employee',
           :conditions => "first_name ilike '%a%'"
end


With this in place, and after restarting the console (or typing reload!), we now can say:

xyz = Company.find_by_name('XYZ Corporation')

xyz.employees_with_a

This prints the empty list—not surprising, given that we have defined only a single employee so far, and his name didn't contain an a. But, now we can create a second employee:


jane = Employee.create(:first_name => 'Jane',
                       :last_name => 'Austin',
                       :email_address => 'jane@bookauthor.com',
                       :company_id => xyz.id)


If we run our association again:

xyz.employees_with_a

now we get our jane employee.

This is all well and good, but what happens if we want to represent different types of employees, each of whom is employed by a company, but with different associated data? This is where polymorphic associations become useful. In order for this to work, we need to change the definitions of our models, as well as the relationships among them (if you're playing along at home, blow away the existing Employee and Company models before continuing):

./script/generate model company name:string
./script/generate model contract employable_id:integer
employable_type:string company_id:integer
./script/generate model programmer main_language:string
first_name:string last_name:string email_address:string
./script/generate model secretary words_per_minute:integer
first_name:string last_name:string email_address:string

The above invocations of script/generate create four different models: one for a company, another for a programmer, another for a secretary and a fourth for a contract. Our PostgreSQL model allowed us to have a single Employee table and to have programmers and secretaries inherit from that table. Rails doesn't let us specify that one model inherits from another. Rather, we use Rails to describe the relationships among the models. Companies are connected to programmers and secretaries via employment contracts.

Because we are looking at the relationships among standalone models, rather than an inheritance hierarchy, there's no obviously good place in which to stick attributes that are common to programmers and secretaries. In the end, I decided to put the attributes in the programmer and secretary models, respectively, despite the repetition.

Now, let's define the associations:


class Company < ActiveRecord::Base
  has_many :contracts
end

class Contract < ActiveRecord::Base
  belongs_to :company
  belongs_to :employable, :polymorphic => true
end

class Programmer < ActiveRecord::Base
  has_many :contracts, :as => :employable
  has_many :companies, :through => :contracts
end

class Secretary < ActiveRecord::Base
  has_many :contracts, :as => :employable
  has_many :companies, :through => :contracts
end


In other words, each company has many contracts. Each contract joins together a company and someone who is employable. Who is employable? Right now, only programmers and secretaries fit the bill, connecting to the employable interface with contracts, and then to a company via a contract.

Behind the scenes, Rails is pulling a nasty trick, one that should make any good database programmer feel sick. The contract model includes two fields (employable_id and employable_type), which point to a single row in a particular table. In some ways, this is sort of a poor man's foreign key. But the difference is that the foreign key can point to any of several tables. And, of course, there is no error checking; only the application can stop me from entering a random text string in the employable_type column.

So, now we can create some relationships:


xyz = Company.create(:name => 'XYZ Corporation')

p1 = Programmer.create(:first_name => 'Linus',
                       :last_name => 'Torvalds',
                       :email_address => 'torvalds@osdl.org',
                       :main_language => 'C')

Contract.create(:employable => p1, :company => xyz)

s1 = Secretary.create(:first_name => 'Condoleezza',
                      :last_name => 'Rice',
                      :email_address => 'rice@state.gov',
                      :words_per_minute => 90)

Contract.create(:employable => s1, :company => xyz)


That's already pretty remarkable. Because both programmers and secretaries are employable (as they both expose the employable interface to the contracts model, using has_many :as), we can join each of them to an instance of the contract model.

But, it gets better, if we add a few more associations:


class Contract < ActiveRecord::Base
  belongs_to :company
  belongs_to :employable, :polymorphic => true

  belongs_to :programmer,
             :class_name => 'Programmer', :foreign_key => 'employable_id'
  belongs_to :secretary,
             :class_name => 'Secretary', :foreign_key => 'employable_id'
end

class Company < ActiveRecord::Base
  has_many :contracts

  has_many :programmers, :through => :contracts,
           :source => :programmer,
           :conditions => "contracts.employable_type = 'Programmer' "

  has_many :secretaries, :through => :contracts,
           :source => :secretary,
           :conditions => "contracts.employable_type = 'Secretary' "
end


With this in place, we now have a complete bidirectional association between programmers and secretaries on one side and companies on the other. Thus, we can say:


>> xyz.programmers
=> [#<Programmer ...>]

>> xyz.secretaries
=> [#<Secretary ...>]


But, we also can say:


>> Programmer.find(1).companies
=> [#<Company ...>]


Moreover, we can iterate over xyz.contracts, bringing together the secretaries and programmers models into one package:

>> xyz.contracts.each {|c| puts c.employable.first_name}
Linus
Condoleezza

Although Rails does not provide inheritance within the models, polymorphic associations make it possible to come close to such functionality. You also get a bunch of convenience functions that make it more natural to work with these additional attributes.

Conclusion
Not all data fits cleanly into two-dimensional tables. When this occurs, you can try to shoehorn your data into an inappropriate container. Or, you can try to use the help that is built in to one or more levels of your software stack. If you use PostgreSQL, inheritance can be really useful. If you use Rails, you can take advantage of polymorphic associations, allowing you to treat two or more models with a common API as similar. This isn't the sort of thing you'll do each day, but it's a useful skill to have on hand for cases when you need to take unusual data.

Resources

To learn how PostgreSQL allows for inheritance, read the on-line manual at www.postgresql.org/docs/8.3/static/ddl-inherit.html.

Rails Cookbook, by Rob Orsini and published by O'Reilly, has some good information about polymorphic associations.

The Rails Wiki has some good examples and descriptions of polymorphic associations at wiki.rubyonrails.org/rails/pages/UnderstandingPolymorphicAssociations.

Reuven M. Lerner, a longtime Web/database developer and consultant, is a PhD candidate in learning sciences at Northwestern University, studying on-line learning communities. He recently returned (with his wife and three children) to their home in Modi'in, Israel, after four years in the Chicago area.

__________________________


Taken From: Linux Journal Issue #173, September 2008

Monday, September 29, 2008

Extract Cab Files From EXE - Pocket PC

Hi there,

I recently bought an HTC Diamond, which unfortunately is not Linux-based.

I started to download some applications and noticed that many of them were .exe files for Windows XP or Vista, so I had to use Windows XP or Vista via ActiveSync to install them. Some others were .cab files, which I could copy to my PDA from Windows or Linux if, when I plugged in the USB cable, I selected the disk option instead of ActiveSync, which made my PDA appear as a USB pen drive.

Later on I noticed that the Windows XP or Vista .exe only copied a .cab file to the PDA and executed it. So I tried to find where the .cab was, and found that it was stored temporarily in:

Windows\AppMgr\Install

So I thought I could grab the .cab right before installing it, and I succeeded.


Basically, what I did was execute the .exe in Windows XP; the PDA then asked if I wanted to install the .cab that the .exe on Windows XP had transferred. I said yes, and that's when you reach the screen below.




Now I hit the home button on the PDA and selected the file explorer in order to copy the .cab, which is temporarily at Windows\AppMgr\Install, to another location on the PDA. From there you can store it on your PC, so that when you want to install it again you can do it from Linux.

So I selected the explorer as shown below.





Then I went to Windows\AppMgr\Install, where the .cab was stored temporarily, copied it to another location on my PDA, and cancelled the install process shown in the first picture (although, if you want, you can let the installation finish).

I had tried to copy the .cab before reaching the screen in the first picture (where it asks whether you want to install), but it kept disappearing and the installation kept being cancelled.

And that's it! Happy cab hunting!

Monday, August 18, 2008

Cfengine - Configuration Management for Multiple Machines

Cfengine for Enterprise Configuration Management

April 1st, 2008 by Scott Lackey

Cfengine makes it easier to manage configuration files across large numbers of machines.

Cfengine is known by many system administrators to be an excellent tool to automate manual tasks on UNIX and Linux-based machines. It also is the most comprehensive framework to execute administrative shell scripts across many servers running disparate operating systems. Although cfengine is certainly good for these purposes, it also is widely considered the best open-source tool available for configuration management. Using cfengine, sysadmins with a large installation of, say, 800 machines, can have information about their environment quickly that otherwise would take months to gather, as well as the ability to change the environment in an instant. For an initial example, if you have a set of Linux machines that need to have a different /etc/nsswitch.conf, and then have some processes restarted, there's no need to connect to each machine and perform these steps or even to write a script and run it on the machines once they are identified. You simply can tell cfengine that all the Linux machines running Fedora/Debian/CentOS with XGB of RAM or more need to use a particular /etc/nsswitch.conf until a newer one is designated. Cfengine can do all that in a one-line statement.

Cfengine's configuration management capabilities can work in several different ways. In this article, I focus on a make-it-so-and-keep-it-so approach. Let's consider a small hosting company configuration, with three administrators and two data centers (Figure 1).

Figure 1. How the Few Control the Many

Each administrator can use a Subversion/CVS sandbox to hold repositories for each data center. The cfengine client will run on each client machine, either through a cron job or a cfengine execution dæmon, and pull the cfengine configuration files appropriate for each machine from the server. If there is work to be done for that particular machine, it will be carried out and reported to the server. If there are configuration files to copy, the ones active on the client host will be replaced by the copies on the cfengine server. (Cfengine will not replace a file if the copy process is partial or incomplete.)

A cfengine implementation has three major components:

  • Version control: this usually consists of a versioning system, such as CVS or Subversion.

  • Cfengine internal components: cfservd, cfagent, cfexecd, cfenvd, cfagent.conf and update.conf.

  • Cfengine commands: processes, files, shellcommands, groups, editfiles, copy and so forth.

The cfservd is the master dæmon, configured with /etc/cfservd.conf, and it listens on port 5803 for connections to the cfengine server. This dæmon controls security and directory access for all client machines connecting to it. cfagent is the client program for running cfengine on hosts. It will run either from cron, manually or from the execution dæmon for cfengine, cfexecd. A common method for running the cfagent is to execute it from cron using the cfexecd in non-dæmon mode. The primary reason for using both is to engage cfengine's logging system. This is accomplished using the following:

*/10 * * * * /var/cfengine/sbin/cfexecd -F

as a cron entry on Linux (unless Solaris starts to understand */10). Note that this is fairly frequent and good only for a low number of servers. We don't want 800 servers updating within the same ten minutes.

The cfenvd is the “environment dæmon” that runs on the client side of the cfengine implementation. It gathers information about the host machine, such as hostname, OS and IP address. The cfenvd detects these factors about a host and uses them to determine to which groups the machine belongs. This, in effect, creates a profile for each machine that cfengine uses to determine what work to perform on each host.

The master configuration file for each host is cfagent.conf. This file can contain all the configuration information and cfengine code for the host, a subset of hosts or all hosts in the cfengine network. This file is often just a starting point where all configurations are stored in other files and “imported” into cfagent.conf, in a very similar fashion to Nagios configuration files. The update.conf file is the fundamental configuration file for the client. It primarily just identifies the cfengine server and gets a copy of the cfagent.conf.

Figure 2. Automated Distribution of Cfengine Files

The update.conf file tells the cfengine server to deploy a new cfagent.conf file (and perhaps other files as well) if the current copy on the host machine is different. This adds some protection for a scenario where a corrupt cfagent.conf is sent out or in case there never was one. Although you could use cfengine to distribute update.conf, it should be copied manually to each host.

Cfengine “commands” are not entered on the command line. They make up the syntax of the cfengine configuration language. Because cfengine is a framework, the system administrator must write the necessary commands in cfengine configuration files in order to move and manipulate data. As an example, let's take a look at the files command as it would appear in the cfagent.conf file:

files:

/etc/passwd mode=644
owner=root action=fixall
/etc/shadow mode=600
owner=root action=fixall

This would set all machines' /etc/passwd and /etc/shadow files to the permissions listed in the file (644 and 600). It also would change the owner of the file to root and fix all of these settings if they are found to be different, each time cfengine runs. It's important to keep in mind that there are no group limitations to this particular files command. If cfengine does not have a group listed for the command, it assumes you mean any host. This also could be written as:

files:
any::
/etc/passwd mode=644
owner=root action=fixall
/etc/shadow mode=600
owner=root action=fixall

This brings us to an important topic in building a cfengine implementation: groups. There is a groups command that can be used to assign hosts to groups based on various criteria. Custom groups that are created in this way are called soft groups. The groups that are filled by the cfenvd dæmon automatically are referred to as hard groups. To use the groups feature of cfengine and assign some soft groups, simply create a groups.cf file, and tell the cfagent.conf to import it somewhere in the beginning of the file:

import:
any::
groups.cf

Cfengine will look for the groups.cf file in the default input directory, /var/cfengine/inputs. Now you can create arbitrary groups based on any criteria. It is important to remember that the terms groups and classes are completely interchangeable in cfengine:

groups:
development = ( nfs01 nfs02 10.0.0.17 )
production = ( app01 app02 !development )

You also can combine hard groups that have been discovered by cfenvd with soft groups:

groups:
legacy = ( irix compiled_on_cygwin sco )

Let's get our testing setup in order. First, install cfengine on a server and a client or workstation. Cfengine has been compiled on almost everything, so there should be a package for your OS/distribution. Because the source is usually the latest version, and many versions are bug fixes, I recommend compiling it yourself. Installing cfengine gives you both the server and client binaries and utilities on every machine, so be careful not to run the server dæmon (cfservd) on a client machine unless you specifically intend to do that. After the install, you should have a /var/cfengine/ directory and the binaries mentioned previously.

Before any host can actually communicate with the cfengine server, keys must be exchanged between the two. Cfengine keys are similar to SSH keys, except they are one-way. That is to say, both the server and the client must have each other's public key in order to communicate. Years of sysadmin paranoia cause me to recommend manually copying all keys and trusting nothing. Copy /var/cfengine/ppkeys/localhost.pub from the server to all the clients and from the clients to the server in the same directory, renaming them /var/cfengine/ppkeys/root-10.11.0.1.pub, where the IP is 10.11.0.1.
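
By way of illustration, the manual exchange for a single client might look like this (a sketch only; 10.11.0.1 is the client IP from the naming example above, and <server-ip> is a placeholder for your cfengine server's own address):

# on the cfengine server: push the server's public key to the client...
scp /var/cfengine/ppkeys/localhost.pub \
    root@10.11.0.1:/var/cfengine/ppkeys/root-<server-ip>.pub

# ...and pull the client's public key back, named after the client's IP
scp root@10.11.0.1:/var/cfengine/ppkeys/localhost.pub \
    /var/cfengine/ppkeys/root-10.11.0.1.pub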

On the server side, cfservd.conf must be configured to allow clients to access particular directories. To do this, create an AllowConnectionsFrom and an admit section:

#cfservd.conf

control:
AllowConnectionsFrom = ( 192.168.0.0/24 )
admit:
/configs/datacenter1 *.example1.com
/configs/datacenter2 *.example2.com

To test your example client to see whether it is connecting to the cfengine server, make sure port 5803 is clear between them, and run the server with:

cfservd -v -d2

And, on the client run:

cfagent -v --no-splay

This will give you a lot of debugging information on the server side to see what's working and what isn't.

Now, let's take a look at distributing a configuration file. Although cfengine has a full-featured file editor in the editfiles command, using this method for distributing configurations is not advised. The copy command will move a file from the server to the client machine with .cfnew appended to the filename. Then, once the file has been copied completely, it renames the file and saves the old copy as .cfsaved in the specified directory. Here's the copy command syntax:

copy:

  class::

    <source file>
        dest=target-file
        server=server
        mode=mode
        owner=owner
        group=group
        backup=true/false
        repository=backup dir
        recurse=number/inf/0
        define=classlist

Only the dest= is required, along with the filename to save at the destination. These can be different. Here's another example:

copy:
  linux::
    ${copydir}/linux/resolv.conf
        dest=/etc/resolv.conf
        server=cfengine.example1.com
        mode=644
        owner=root
        group=root
        backup=true
        repository=/var/cfengine/cfbackup
        recurse=0
        define=copiedresolvdotconf

The last line in this copy statement assigns this host to a group called copiedresolvdotconf. Although we don't have to do anything after copying this particular file, we may want to do some action on all hosts that just had this file successfully sent to them, such as sending an e-mail or restarting a process. As another example, if you update a configuration file that is attached to a dæmon, you may want to send a SIGHUP to the process to cause it to reread the configuration file. This is common with Apache's httpd.conf or inetd.conf. If the copy is not successful, this server won't be added to the copiedresolvdotconf class. You can query all servers in the network to see whether they are members and, if not, find out what went wrong.

A great way to version control your config files is to use a cfengine variable for the filename being copied to control which version gets distributed. Such a line may look something like this:

copy:
linux::
${copydir}/linux/${resolv_conf}

Or, better yet, you can use cfengine's class-specific variables, whose scope is limited to the class with which they are associated. This makes copy statements much more elegant and can simplify changes as your cfengine files scale:

control:

# ${resolve_conf} value depends on context,
# is this a linux machine or hpux?
linux:: resolve_conf = ( "${copydir}"/linux/resolv.conf )
hpux:: resolve_conf = ( "${copydir}"/hpux/resolv.conf )
copy:
linux::

${resolve_conf}

Here is a full cfagent.conf file that makes use of everything I've covered thus far. It also adds some practical examples of how to do sysadmin work with cfengine:

# cfagent.conf

control:
actionsequence = ( files editfiles processes )
AddInstallable = ( cron_restart )
solaris:: crontab = ( /var/spool/cron/crontabs/root )
linux:: crontab = ( /var/spool/cron/root )

files:
solaris::
${crontab}
action=touch
linux::
${crontab}
action=touch

editfiles:
solaris::
{ ${crontab}
AppendIfNoSuchLine "0,10,20,30,40,50 * * * * /var/cfengine/sbin/cfexecd -F"
DefineClasses "cron_restart"
}
linux::
{ ${crontab}
AppendIfNoSuchLine "0,10,20,30,40,50 * * * * /var/cfengine/sbin/cfexecd -F"
#linux doesn't need a cron restart.
}

shellcommands:
solaris.cron_restart::
"/etc/init.d/cron stop"
"/etc/init.d/cron start"

import:
any::
groups.cf
copy.cf

The above is a full cfagent configuration that adds cfengine execution from cron to each client (if it's Linux or Solaris). So effectively, once you run cfengine manually for the first time with this cfagent.conf file, cfengine will continue to run every five minutes from that host, but you won't need to edit or restart cron. The control section of the cfagent.conf is where you can define some variables that will control how cfengine handles the configuration file. actionsequence tells cfengine what order to execute each command, and AddInstallable is a variable that holds soft groups that get defined later in the file in a “define” statement, such as after the editfiles command where the line is DefineClasses "cron_restart". The reason for using AddInstallable is sometimes cfengine skips over groups that are defined after command execution, and defining that group in the control section ensures that the command will be recognized throughout the configuration.

Being able to check configuration files out from a versioning system and distribute them to a set of servers is a powerful system administration tool. A number of independent tools will do a subset of cfengine's work (such as rsync, ssh and make), but nothing else allows a small group of system administrators to manage such a large group of servers. Centralizing configuration management has the dual benefit of information and control, and cfengine provides these benefits in a free, open-source tool for your infrastructure and application environments.

Scott Lackey is an independent technology consultant who has developed and deployed configuration management solutions across industry from NASA to Wall Street. Contact him at slackey@violetconsulting.net, www.violetconsulting.net.

Zenoss Enterprise Network Monitoring

Zenoss and the Art of Network Monitoring

August 1st, 2008 by Jeramiah Bowling

If a server goes down, do you want to hear it?

If a tree falls in the woods and no one is there to hear it, does it make a sound? This is the classic query designed to place your mind into the Zen-like state known as the silent mind. Whether or not you want to hear a tree fall, if you run a network, you probably want to hear a server when it goes down. Many organizations utilize the long-established Simple Network Management Protocol (SNMP) as a way to monitor their networks proactively and listen for things going down.

At a rudimentary level, SNMP requires only two items to work: a management server and a managed device (or devices). The management server pulls status and health information at regular intervals from the managed devices and stores the information in a table. Managed devices use local SNMP agents to notify the management server when defined behavior occurs (such as errors or “traps”), which are stored in the same table on the server. The result is an accurate, real-time reporting mechanism for outages. However, SNMP as a protocol does not stipulate how the data in these tables is to be presented and managed for the end user. That's where a promising new open-source network-monitoring software called Zenoss (pronounced Zeen-ohss) comes in.

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment. The Core version of Zenoss used in this article is freely available under the GPLv2. An Enterprise version also is available with additional features and support. In this article, we install Zenoss on a CentOS 5.1 system to observe its usefulness in a network-monitoring role. From there, we create a simulated multisystem server network using the following systems: a Fedora-based Postfix e-mail server, an Ubuntu server running Apache and a Windows server running File and Print services. To conserve space, only the CentOS installation is discussed in detail here. For the managed systems, only SNMP installation and configuration are covered.


Building the Zenoss Server

Begin by selecting your hardware. Zenoss lacks specific hardware requirements, but it relies heavily on MySQL, so you can use MySQL requirements as a rough guideline. I recommend using the fastest processor available, 1GB of memory, hard disks fast enough to provide acceptable MySQL performance and Gigabit Ethernet for the network. I ran several test configurations, and this one seemed adequate for a medium-size network (100+ nodes/devices). To keep configuration simple, all firewalls and SELinux instances were disabled in the test environment. If you use firewalls in your environment, open ports 161 (SNMP), 8080 (the Zenoss management page) and 514 (if you integrate syslog with Zenoss).
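
If you do keep a firewall in place, iptables rules along these lines would open those ports. This is only a sketch against the default INPUT chain (port 161 applies to the managed devices; 8080 and 514 to the Zenoss server), so adapt it to your own firewall policy:

# On managed devices: allow SNMP queries from the management server
iptables -A INPUT -p udp --dport 161 -j ACCEPT

# On the Zenoss server: allow access to the management page
iptables -A INPUT -p tcp --dport 8080 -j ACCEPT

# On the Zenoss server: allow remote syslog, if you integrate syslog with Zenoss
iptables -A INPUT -p udp --dport 514 -j ACCEPT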

Install CentOS 5.1 on the server using your own preferences. I used a bare install with no X Window System or desktop manager. Assign a static IP address and any other pertinent network information (DNS servers and so forth). After the OS install is complete, install the following packages using the yum command below:

yum install mysql mysql-server net-snmp net-snmp-utils gmp httpd

If the mysqld or httpd services have not started after yum installs them, start them and set them to run for your configured runlevel. Next, download the latest Zenoss Core .rpm from Sourceforge.net (2.1.3 at the time of this writing), and install it using rpm from the command line. To start all the Zenoss-related dæmons after the .rpm has been installed, type the following at a command prompt:

service zenoss start
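
For reference, the service and package steps just described might look like the following on CentOS 5; the Zenoss .rpm filename is a hypothetical example, so substitute the file you actually downloaded:

# Start MySQL and Apache, and enable them for the configured runlevels
service mysqld start
service httpd start
chkconfig mysqld on
chkconfig httpd on

# Install the downloaded Zenoss Core package (example filename)
rpm -ivh zenoss-2.1.3-0.el5.i386.rpm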

Launch a Web browser from any machine, and type the IP address of the Zenoss server using port 8080 (for example, http://192.168.142.6:8080). Log in to the site using the default account admin with a password of zenoss. This brings up the main dashboard. The dashboard is a compartmentalized view of the state of your managed devices. If you don't like the default display, you can arrange your dashboard any way you want using the various drop-down lists on the portlets (windows). I recommend setting the Production States portlet to display Production, so we can see our test systems after they are added.

Almost everything related to managed devices in Zenoss revolves around classes. With classes, you can create an infinite number of systems, processes or service classifications to monitor. To begin adding devices, we need to set our SNMP community strings at the top-level /Devices class. SNMP community strings are like passphrases used to authenticate traffic between devices. If one device wants to communicate with another, they must have matching community names/strings. In many deployments, administrators use the default community name of public (and/or private), which creates a security risk. I recommend changing these strings and making them into a short phrase. You can add numbers and characters to make the community name more complex to guess/crack, but I find phrases easier to remember.

Click on the Devices link on the navigation menu on the left, so that /Devices is listed near the top of the page. Click on the zProperties tab and scroll down. Enter an SNMP community string in the zSNMPCommunity field. For our test environment, I used the string whatsourclearanceclarence. You can use different strings with different subclasses of systems or individual systems, but by setting it at the /Devices class, it will be used for any subclasses unless it is overridden. You also could list multiple strings in the zSNMPCommunities field under the /Devices class, which allows you to define multiple strings for the discovery process discussed later. Make sure your community string (zSNMPCommunity) is in this list.

Installing Net-SNMP on Linux Clients

Now, let's set up our Linux systems so they can talk to the Zenoss server. After installing and configuring the operating systems on our other Linux servers, install the Net-SNMP package on each using the following command on the Ubuntu server:

sudo apt-get install snmpd

And, on the Fedora server use:

yum install net-snmp

Once the Net-SNMP packages are installed, edit out any other lines in the Access Control sections at the beginning of the /etc/snmp/snmpd.conf file, and add the following lines:

##      sec.name  source      community
com2sec local localhost whatsourclearanceclarence
com2sec mynetwork 192.168.142.0/24 whatsourclearanceclarence

## group.name sec.model sec.name
group MyROGroup v1 local
group MyROGroup v1 mynetwork
group MyROGroup v2c local
group MyROGroup v2c mynetwork

## incl/excl subtree mask
view all included .1 80

## context sec.model sec.level prefix read write notif
access MyROGroup "" any noauth exact all none none

Do not edit out any lines beneath the last Access Control section. Please note that the above is only a mildly restrictive configuration. Consult the snmpd.conf file or the Net-SNMP documentation if you want to tighten access. On the Ubuntu server, you also may have to change the following line in the /etc/default/snmp file to allow SNMP to bind to anything other than the local loopback address:


SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'
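
After saving those changes, restart the SNMP agent and confirm that it answers with the new community string. A quick check from the Zenoss server might look like this (the client IP address is an example):

# Restart the agent (Ubuntu)
sudo /etc/init.d/snmpd restart

# Restart the agent (Fedora)
service snmpd restart

# From the Zenoss server, confirm the client responds to the new community string
snmpwalk -v2c -c whatsourclearanceclarence 192.168.142.10 system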

Installing SNMP on Windows

On the Windows server, access the Add/Remove Programs utility from the Control Panel. Click on the Add/Remove Windows Components button on the left. Scroll down the list of Components, check off Management and Monitoring Tools, and click on the Details button. Check Simple Network Management Protocol in the list, and click OK to install. Close the Add/Remove window, and go into the Services console from Administrative Tools in the Control Panel. Find the SNMP service in the list, right-click on it, and click on Properties to bring up the service properties tabs. Click on the Traps tab, and type in the community name. In the list of Trap Destinations, add the IP address of the Zenoss server. Now, click on the Security tab, and check off the Send authentication trap box, enter the community name, and give it READ-ONLY rights. Click OK, and restart the service.

Return to the Zenoss management Web page. Click the Devices link to go into the subclass of /Devices/Servers/Windows, and on the zProperties tab, enter the name of a domain admin account and password in the zWinUser and zWinPassword fields. This account gives Zenoss access to the Windows Management Instrumentation (WMI) on your Windows systems. Make sure to click Save at the bottom of the page before navigating away.


Adding Devices into Zenoss

Now that our systems have SNMP, we can add them into Zenoss. Devices can be added individually or by scanning the network. Let's do both. To add our Ubuntu server into Zenoss, click on the Add Device link under the Management navigation section. Enter the IP address of the server and the community name. Under Device Class Path, set the selection to /Server/Linux. You could add a variety of other hardware, software and Zenoss information on this page before adding a system, but at a minimum, an IP address and community name are required (Figure 1). Click the Add Device button, and the discovery process runs. When the results are displayed, click on the link to the new device to access it.

Figure 1. Adding a Device into Zenoss

To scan the network for devices, click the Networks link under the Browse By section of the navigation menu. If your network is not in the list, add it using CIDR notation. Once added, check the box next to your network and use the drop-down arrow to select the Discover Devices option. You will see a results page similar to the one from before. When the scan is complete, click on the links at the bottom of the results page to access the new devices. Any device found will be placed in the /Discovered class. Because the Fedora server and the Windows server should have been discovered by the scan, move them to the /Devices/Servers/Linux and /Devices/Servers/Windows classes, respectively. This can be done from each server's Status tab by using the main drop-down list and selecting Manage→Change Class.

If all has gone well, we now have a functional SNMP monitoring system that is able to monitor heartbeat/availability (Figure 2) and performance information (Figure 3) on our systems. You can customize various other Status and Performance Monitors to meet your needs, but here we will use the default localhost monitors.

Figure 2. The Zenoss Dashboard

Figure 3. Performance data is collected almost immediately after discovery.

Creating Users and Setting E-Mail Alerts

At this point, we can use the dashboard to monitor the managed devices, but we will be notified only if we visit the site. It would be much more helpful if we could receive alerts via e-mail. To set up e-mail alerting, we need to create a separate user account, as alerts do not work under the admin account. Click on the Settings link under the Management navigation section. Using the drop-down arrow on the menu, select Add User. Enter a user name and e-mail address when prompted. Click on the new user in the list to edit its properties. Enter a password for the new account, and assign a role of Manager. Click Save at the bottom of the page. Log out of Zenoss, and log back in with the new account. Bring the settings page back up, and enter your SMTP server information. After setting up SMTP, we need to create an Alerting Rule for our new user. Click on the Users tab, and click on the account just created in the list. From the resulting page, click on the Edit tab and enter the e-mail address to which you want alerts sent. Now, go to the Alerting Rules tab and create a new rule using the drop-down arrow. On the Edit tab of the new Alerting Rule, change the Action to email, Enabled to True, and change the Severity formula to >= Warning (Figure 4). Click Save.

Figure 4. Creating an Alert Rule

The above rule sends alerts when any Production server experiences an event rated Warning or higher (Figure 5). Using a filter, you can create any number of rules and have them apply only to specific devices or groups of devices. If you want to limit your alerts by time to working hours, for example, use the Schedule tab on the Alerting Rule to define a window. If no schedule is specified (the default), the rule runs all the time. In our rule, only one user will be notified. You also can create groups of users from the Settings page, so that multiple people are alerted, or you could use a group e-mail address in your user properties.

Figure 5. Zenoss alerts are sent fresh to your mailbox.


Services and Processes

We can expand our view of the test systems by adding a process and a service for Zenoss to monitor. When we refer to a process in Zenoss, we mean an active program, usually a dæmon, running on a managed device. Zenoss uses regular expressions to monitor processes.

To monitor Postfix on the mail server, first let's define it as a process. Navigate to the Processes page under the Classes section of the navigation menu. Use the drop-down arrow next to OS Processes, and click Add Process. Enter Postfix as the process ID. When you return to the previous page, click on the link to the new process. On the edit tab of the process, enter master in the Regex field. Click Save before navigating away. Go to the zProperties tab of the process, and make sure the zMonitor field is set to True. Click Save again. Navigate back to the mail server from the dashboard, and on the OS tab, use the topmost menu's drop-down arrow to select Add→Add OSProcess. After the process has been added, we will be alerted if the Postfix process degrades or fails. While still on the OS tab of the server, place a check mark next to the new Postfix process, and from the OS Processes drop-down menu, select Lock OSProcess. On the next set of options, select Lock from deletion. This protects the process from being overwritten if Zenoss remodels the server.

Services in Zenoss are defined by active network ports instead of running dæmons. There are a plethora of services built in to the software, and you can define your own if you want to. The built-in services are broken down into two categories: IPServices and WinServices. IPServices use any port from 1 to 65535 and include common network apps/protocols, such as SMTP (port 25), DNS (53) and HTTP (80). WinServices are intended for specific use with Windows servers (Figure 6).

Figure 6. Zenoss comes with a plethora of predefined Windows services to monitor.

Adding a service is much simpler than adding a process, because there are so many predefined in Zenoss. To monitor the HTTP service on our Web server, navigate to the server from the dashboard. Use the drop-down arrow on the server's OS tab, and select Add→Add IPService. Type HTTP in the Service Class field. Notice that the field begins to prefill with matches as you type the letters. Select TCP as the protocol, and click OK. Click Save on the resulting page. As with the OSProcess procedure, return to the OS tab of the server and lock the new IPService. Zenoss is now monitoring HTTP availability on the server (Figure 7).

Figure 7. Monitoring HTTP as an IPService


Only the Beginning

There are a multitude of other features in Zenoss that space prevents covering here, including Network Maps (Figure 8), a Google Maps API for multilocation monitoring (Figure 9) and Zenpacks that provide additional monitoring and performance-capturing capabilities for common applications.

Figure 8. Zenoss automatically maps your network for you.

Figure 9. Multiple sites can be monitored geographically with the Google Maps API.

In the span of this article, we have deployed an enterprise-grade monitoring solution with relative ease. Although it's surprisingly easy to deploy, Zenoss also possesses a deep feature set. It easily rivals, if not surpasses, commercial competitors in the same product space. It is easy to manage, highly customizable and supported by a vibrant community.

Although you may not achieve the silent mind as long as you work with networks, with Zenoss, at least you will be able to sleep at night knowing you will hear things when they go down. Hopefully, they won't be trees.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.


Taken From: Linux Journal Issue #172/August 2008 - Zenoss and the Art of Network Monitoring by Jeramiah Bowling


WiiMote on Linux - Install and Config

Hack and / - Wiimote Control

August 1st, 2008 by Kyle Rankin

Why let your Wii have all the fun? Find out how to connect your Wiimote to your computer and use it as a mouse or an input device for any number of popular gaming emulators.

If you think about it, there are almost as many ways to interface with your computer as there are Debian-based distributions—and that's a lot. Besides the trusty keyboard and optical mouse, there are trackpoint mice, touchpads, touchscreens, twiddlers, joysticks, presentation remotes and even devices that measure your brain waves. Although I mostly stick with my tried-and-true keyboard and trackpoint mouse (fingers on home row, thank you), when I started hearing about all the interesting things people were doing with the Wiimote (the main controller from the Nintendo Wii), I knew I had to give it a try.

Now traditionally, connecting a brand-new device to a Linux machine was an investment in Internet research, kernel module hacking, prayer and obscure programming skills I haven't used since college. I figured the mere fact that this was a Bluetooth device meant I was going to have to spend some quality time with hcidump. To my surprise, all the hard work already had been done for me, and I could connect and use a Wiimote on my laptop with only a few basic steps.

Configure udev

First, your kernel needs the uinput module available and loaded. This module is available in modern kernels, and my Ubuntu Gutsy install already had it. If you want to be able to connect to the Wiimote as a regular user, however, you need to add a new udev rule to extend permissions to the uinput device. I created a file called /etc/udev/rules.d/95-uinput.rules that contained the following:

KERNEL=="uinput", GROUP="plugdev"

Then, I made sure my user was a member of the plugdev group. If your system doesn't have a plugdev group, you could choose or create another group to use for this device. Next, run /etc/init.d/udev reload to make sure your changes are seen. Finally, I ran modprobe uinput to make sure the module was loaded, and I also added uinput to /etc/modules to make sure it was loaded at boot.
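
Put together, those steps might look like this on an Ubuntu-style system; treat it as a sketch, since group names and init scripts vary between distributions, and substitute your own user name for greenfly:

# Add your user to the plugdev group (log out and back in for it to apply)
sudo adduser greenfly plugdev

# Reload udev so the new rule takes effect
sudo /etc/init.d/udev reload

# Load uinput now and on every boot
sudo modprobe uinput
echo uinput | sudo tee -a /etc/modules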

Install wminput

The next step is to install the wminput software. For me, this was simple, as wminput is packaged for my distribution; otherwise, you can download the source from the official site (www.cwiid.org). Then, make sure the Bluetooth device in your computer is enabled. For my laptop, I had to flip a switch on the side, but if you have an external USB Bluetooth adapter, for instance, now is a good time to plug it in. Finally, run wminput in a console and follow the directions:

greenfly@minimus:~$ wminput
Put Wiimote in discoverable mode now (press 1+2)...
Ready.

When you press buttons 1 and 2 on your Wiimote, it goes into discoverable mode, and the blue LEDs along the bottom start blinking. Sometimes you might not start discoverable mode fast enough, or wminput won't detect it, but as long as the LEDs on the Wiimote are blinking, it is still in that mode. So if wminput times out, just run the program again.

If you continually can't connect, you probably should double-check that your Bluetooth device is working. To do this, press buttons 1 and 2 on the Wiimote and then use hcitool to scan for the Wiimote. A successful scan will look like the following:

greenfly@minimus:~$ hcitool scan
Scanning ...
00:1B:7A:3E:8C:54 Nintendo RVL-CNT-01

After wminput connects, you also can look in /var/log/dmesg for confirmation that the Wiimote is connected:

[ 1226.247203] usb 3-2: new full speed USB device using
↪uhci_hcd and address 13
[ 1226.288768] usb 3-2: configuration #1 chosen
↪from 1 choice
[ 1227.922403] input: Nintendo Wiimote as
↪/devices/virtual/input/input21

Use the Wiimote as a Mouse

Once the Wiimote is connected, the default bindings use it as a mouse. The accelerometers in the Wiimote are used to move the mouse pointer, so if you point the Wiimote down or up, the mouse will move down or up, respectively, and if you roll the Wiimote to the left or right, the mouse will move left or right, respectively. If you look at /etc/cwiid/wminput/buttons, you can see the default mappings:

Wiimote.A = BTN_LEFT
Wiimote.B = BTN_RIGHT
Wiimote.Up = KEY_UP
Wiimote.Down = KEY_DOWN
Wiimote.Left = KEY_LEFT
Wiimote.Right = KEY_RIGHT
Wiimote.Minus = KEY_BACK
Wiimote.Plus = KEY_FORWARD
Wiimote.Home = KEY_HOME
Wiimote.1 = KEY_PROG1
Wiimote.2 = KEY_PROG2
...

By default, wminput reads the configuration listed in /etc/cwiid/wminput/default to get its mappings. In this file you will see:

#acc_ptr

include buttons

Plugin.acc.X = REL_X
Plugin.acc.Y = REL_Y

Essentially, this file includes the buttons file for keybindings, and it also enables the use of the accelerometers for X and Y movements. The great thing about wminput is that all these mappings are completely configurable. If you look in /etc/cwiid/wminput, you should see a number of other example mappings you can use as inspiration. You also can store custom mappings in your home directory under ~/.cwiid/wminput. The button mappings use standard names for keys and mouse buttons that can be found in /usr/include/linux/input.h, but most of the names are pretty straightforward.
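
If you're unsure whether a particular name is valid, you can check it against that header directly; for example:

# List every key and button event name available for mappings
grep -E '^#define (KEY|BTN)_' /usr/include/linux/input.h | less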

Wiimotes for NES Emulation

One of the first things I wanted to do with my Wiimote after it was connected was to use it as a controller for my various game system emulators. But, before I go any further, if you do use a game system emulator like nestra, fceu, snes9x or MAME, be sure you have full rights to use any ROMs you might have. Make an appointment with your lawyer for details, but essentially, to play a commercial ROM, you should own the corresponding game.

With the legal disclaimers aside, the Wiimote makes a great wireless NES (Nintendo Entertainment System) controller. All the basic buttons are there, and all that's left to do is rearrange the button mappings to work with either nestra or fceu NES emulators. Both programs use slightly different mappings, so I created files called buttons-fceu and buttons-nestra and placed them in ~/.cwiid/wminput. First, buttons-nestra:

Wiimote.A = KEY_0
Wiimote.B = KEY_1
Wiimote.Up = KEY_LEFT
Wiimote.Down = KEY_RIGHT
Wiimote.Left = KEY_DOWN
Wiimote.Right = KEY_UP
Wiimote.Minus = KEY_TAB
Wiimote.Plus = KEY_ENTER
Wiimote.Home = KEY_PAUSE
Wiimote.1 = KEY_Z
Wiimote.2 = KEY_SPACE

After I set the regular NES buttons, I had a few extra to bind, so I bound the A button to pause the emulator, the B button to set nestra to normal speed and the home button to reset the game.

The fceu emulator had completely different keybindings, so here is my buttons-fceu file:

Wiimote.A = KEY_F7
Wiimote.B = KEY_F5
Wiimote.Up = KEY_A
Wiimote.Down = KEY_S
Wiimote.Left = KEY_Z
Wiimote.Right = KEY_W
Wiimote.Minus = KEY_TAB
Wiimote.Plus = KEY_ENTER
Wiimote.Home = KEY_F10
Wiimote.1 = KEY_KP2
Wiimote.2 = KEY_KP3

In addition to the standard buttons, I bound the B button to save a game state, A to restore and home to reset the game.

Now, to use either of these configuration files, all I need to do is tell wminput to load these files instead:

greenfly@minimus:~/$ wminput -c ~/.cwiid/wminput/buttons-nestra
Put Wiimote in discoverable mode now (press 1+2)...
Ready.

Because wminput sends regular keyboard events, I don't have to do anything special to nestra or fceu.
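
So a typical session is just a matter of loading the right mapping and starting the emulator. A sketch, with a placeholder ROM filename:

# Start wminput in the background with the nestra mappings
wminput -c ~/.cwiid/wminput/buttons-nestra &

# Pair the Wiimote (press 1+2 when prompted), then launch the emulator
nestra mario.nes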

Wiimotes for SNES Emulation

The Wiimote worked great for NES games, but how about SNES (Super Nintendo) emulation? I actually purchased a few different SNES games for the Wii virtual console, and I also bought a Classic Controller so I would have all the standard SNES buttons. It turns out that wminput also can bind keys to the Nunchuck and Classic Controller attachments, so all I had to do for it to work with snes9x was create a new configuration file that mapped all the keys. Here is my buttons-snes9x file:

Wiimote.A = KEY_X
Wiimote.B = KEY_S
Wiimote.Up = KEY_LEFT
Wiimote.Down = KEY_RIGHT
Wiimote.Left = KEY_DOWN
Wiimote.Right = KEY_UP
Wiimote.Minus = KEY_TAB
Wiimote.Plus = KEY_ENTER
Wiimote.Home = KEY_ESC
Wiimote.1 = KEY_C
Wiimote.2 = KEY_D

Nunchuk.C = BTN_LEFT
Nunchuk.Z = BTN_RIGHT

Classic.Up = KEY_UP
Classic.Down = KEY_DOWN
Classic.Left = KEY_LEFT
Classic.Right = KEY_RIGHT
Classic.Minus = KEY_SPACE
Classic.Plus = KEY_ENTER
Classic.Home = KEY_ESC
Classic.A = KEY_D
Classic.B = KEY_C
Classic.X = KEY_S
Classic.Y = KEY_X
#Classic.ZL =
#Classic.ZR =
Classic.L = KEY_A
Classic.R = KEY_Z

Even though I planned to use the Classic Controller, I tried to map as many of the regular Wiimote keys to buttons that made sense, so you could potentially play at least some games with the regular Wiimote as well. If you notice, I also left bindings for the special ZL and ZR keys blank, so you could bind them to extra keys.

Wiimote Control for MAME

One of the best game system emulators out there is MAME. MAME emulates classic arcade games, and there are many guides out there (including in Linux Journal itself) on how to use MAME to create your own arcade cabinet. Well, I haven't cleared away the time for that project yet, but I did want to use my Wiimote and Classic Controller attachment for MAME games. MAME has a large number of bindings (press Tab in MAME to see a list), so it was difficult to choose which to bind to the extra keys. Here is a sample buttons-xmame file I created:

Wiimote.A = KEY_P
Wiimote.B = KEY_5
Wiimote.Up = KEY_LEFT
Wiimote.Down = KEY_RIGHT
Wiimote.Left = KEY_DOWN
Wiimote.Right = KEY_UP
Wiimote.Minus = KEY_2
Wiimote.Plus = KEY_1
Wiimote.Home = KEY_F3
Wiimote.1 = KEY_LEFTCTRL
Wiimote.2 = KEY_LEFTALT

Nunchuk.C = BTN_LEFT
Nunchuk.Z = BTN_RIGHT

Classic.Up = KEY_UP
Classic.Down = KEY_DOWN
Classic.Left = KEY_LEFT
Classic.Right = KEY_RIGHT
Classic.Minus = KEY_2
Classic.Plus = KEY_1
Classic.Home = KEY_F3
Classic.A = KEY_LEFTCTRL
Classic.B = KEY_LEFTALT
Classic.X = KEY_SPACE
Classic.Y = KEY_LEFTSHIFT
Classic.ZL = KEY_5
Classic.ZR = KEY_P
Classic.L = KEY_Z
Classic.R = KEY_X

In addition to the standard bindings you might expect, the home key resets MAME; the plus key selects single player; minus selects two players; ZL on the Classic Controller and B on the Wiimote insert a coin; and ZR on the Classic Controller and A on the Wiimote pause. These are by no means perfect bindings, so I recommend you experiment with different keys that work better for you.

The possibilities with wminput go much further than what I've presented here. There also are configuration files that use the analog joystick inputs on the Classic Controller, the IR sensors on the Wiimote and the accelerometers on the Nunchuck. Wminput isn't just a handy way to play video games on your laptop or desktop. The fact that the connection to the computer is wireless makes the Wiimote a great gaming input for a MythTV client or other computer connected to your TV. As for me, I think I'll be spending a few more days trying to beat this impossible Super Mario Brothers hack that has been floating around the Internet.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.


Taken From: Linux Journal Issue #172/August 2008 - Wiimote Control by Kyle Rankin