Debian GNU / Linux on Samsung ATIV Book 9 Plus

Samsung recently released a new piece of kit, the ATIV Book 9 Plus. It's their top-of-the-line Ultrabook. Being in the market for a new laptop, when I heard the specs I was hooked. Sure, it doesn't have the best CPU available in a laptop or even an amazing amount of RAM; in that regard it's kind of run-of-the-mill. But that was enough for me. The really amazing thing is the screen, with a 3200x1800 resolution at 275 DPI. If you were to get a standalone monitor with a similar resolution you'd be forking over anywhere from 50-200% of the value of the ATIV Book 9 Plus. Anyway, this is not a marketing pitch. As a GNU / Linux user, buying bleeding-edge hardware can be a bit intimidating. The problem is that it's not clear whether the hardware will work without too much fuss. I couldn't find any reports of folks running GNU / Linux on it, but decided to order one anyway.

My distro of choice is Debian GNU / Linux, so when the machine arrived the first thing I did was try Debian Live. It did take some tinkering in the BIOS (press F2 on boot to enter config) to get it to boot, mostly because the BIOS UI is horrendous. In the end, disabling secure boot was pretty much all it took. Out of the box most things worked, the exceptions being Wi-Fi and brightness control. At this point I was more or less convinced that getting GNU / Linux running on it would not be too hard.

I proceeded to install Debian from the stable net-boot CD. At first I tried with UEFI enabled but secure boot disabled; installation went over fine, but when it came time to boot the machine it simply would not work. It looked like the boot loader wasn't starting properly. I didn't care much about UEFI, so I disabled it completely and re-installed Debian. This time things worked and Debian Stable booted up. I tweaked /etc/apt/sources.list, switching from Stable to Testing, rebooted the machine, and noticed that on boot the screen went black. It was rather obvious that the problem was with KMS. Likely the root of the problem was the new kernel (linux-image-3.10-3-amd64) which got pulled in during the upgrade to Testing. The short-term workaround is simple: disable KMS (add nomodeset to the kernel boot line in GRUB).
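To make the workaround stick across reboots, the flag can go into GRUB's defaults file rather than being typed at the boot menu every time; a minimal sketch (the quiet flag is just Debian's usual default):

```
# /etc/default/grub -- append nomodeset to the kernel command line,
# then run update-grub to regenerate the boot menu
GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset"
```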

So now I had a booting base system, but there was still the problem of Wi-Fi and KMS. I installed the latest firmware-iwlwifi, which has the required firmware for the Intel Corporation Wireless 7260. However, Wi-Fi still did not work. Fortunately I came across a post on the Arch Linux wiki which states that this Wi-Fi card is only supported in Linux kernel >=3.11.

After an hour or so of tinkering with kernel configs I got the latest kernel (3.11.3) to boot with working KMS and Wi-Fi. Long story short: until Debian moves to kernel >=3.11 you'll need to compile your own or install my custom compiled package. With the latest kernel pretty much everything works on this machine, including the things that are often tricky, like suspend, backlight control, touchscreen, and obviously Wi-Fi. The only remaining things to figure out are the volume and keyboard backlight control keys. But for now I'm making do with a software sound mixer, and the keyboard backlight can be adjusted with (values: 0-4):

echo "4" > /sys/class/leds/samsung::kbd_backlight/brightness

So if you are looking to get a Samsung ATIV Book 9 Plus and are wondering whether it'll play nice with GNU / Linux, the answer is yes.

Debian  Hardware  LILUG  Software  linux  2013-10-05T16:11:05
Cross Compile with make-kpkg

I got myself one of those fancy shmancy netbooks. Due to habit and some hardware issues I needed to compile a kernel. The problem is that it takes forever to build a kernel on one of these things. No sweat, I'll just build it on my desktop, it'll only take 5-10 minutes. But of course there is a catch: my desktop is 64bit and this new machine has an Atom CPU which only does 32bit.

The process for compiling a 32bit kernel on a 64bit machine is probably a lot easier if you don't compile it the Debian way. But that is not something I want to do; I like installing kernels through the package manager, and doing this type of cross compile using make-kpkg is not trivial. There are plenty of forum and email threads recommending a chroot or a virtual machine for this task, but those are such a chore to set up. So here is my recipe for cross compiling a 32bit kernel on a 64bit host without a chroot / VM, the-debian-way.

  1. Install 32bit tools (ia32-libs, lib32gcc1, lib32ncurses5, libc6-i386, util-linux, maybe some other ones)
  2. Download & unpack your kernel sources
  3. run "linux32 make menuconfig" and configure your kernel for your new machine
  4. clean your build dirs "make-kpkg clean --cross-compile - --arch=i386" (only needed on consecutive compiles)
  5. compile your kernel "nice -n 19 fakeroot linux32 make-kpkg --cross-compile - --arch=i386 --revision=05test kernel_image" (nice values max out at 19); for faster compilation on multi-CPU machines run "export CONCURRENCY_LEVEL=$(($(grep -c '^processor' /proc/cpuinfo) * 2))" first
  6. At this point you have a 32bit kernel inside a package labeled for the 64bit arch. We need to fix this: run "fakeroot deb-reversion -k bash ../linux-image-2.6.35.3_05test_amd64.deb", open the file DEBIAN/control with vim/emacs, change "Architecture: amd64" to "Architecture: i386", and exit the bash process with ctrl+d
  7. That's it, now just transfer the re-generated deb to destination machine and install it.
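The CONCURRENCY_LEVEL one-liner in step 5 is easy to mistype, so here it is unrolled into two steps; a minimal sketch with the same effect:

```shell
# Count the CPUs the kernel sees and let make-kpkg run twice as many
# parallel jobs, per the heuristic in step 5.
cores=$(grep -c '^processor' /proc/cpuinfo)
export CONCURRENCY_LEVEL=$((cores * 2))
echo "$CONCURRENCY_LEVEL"
```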

Many if not all of the ideas for this process come from reading email threads; the comments made by Goswin von Brederlow were particularly helpful, thanks.

Debian  LILUG  linux  software  2010-08-25T22:09:15
Versionless Distro

Every six months the internet lights up with stories that Canonical & Co. have done the unthinkable: they have increased the number following the word Ubuntu. In other words, they have released a new version. This is a well-understood concept for differentiating releases of software: as the version increases it is expected that new features are introduced and old bugs are removed (hopefully more are removed than added).

Versioning distributions and releasing those versions separately is a common practice, employed by most if not all distributions out there. Ubuntu has adopted the policy of releasing regularly and quite often. But there is a different approach; it revolves around a concept I call "Versionless", where you do not have a hard release but instead let the changes trickle down. In the application world such releases are often called nightly builds. With distributions it is a little bit different.

First of all it's worth noting that distributions are not like applications. Distributions are collections made up of applications and a kernel; the applications included are usually stable releases, so the biggest unpredictability comes from their combination and configuration. As a result, one of the important roles for distro developers is to ensure that the combination of the many applications does not lead to adverse side effects. This is done in several ways; the general method is to mix all the applications in a pot, the so-called pre-release, and then test the combination. The testing is done by the whole community, as users often install these pre-releases to see if they notice any quirks through their regular use. When the pre-release becomes stable enough it is pushed out the door as a public release.

In an ideal world, after this whole process all the major bugs and issues would have been resolved, and users would go on to re-install/update their installations to the new release -- without any issues. The problem is that even if the tests passed with flying colors, that does not mean users will not experience problems. The more complicated a user's configuration, the higher the chance they will notice bugs as part of an upgrade. This is particularly evident where there are multiple interacting systems. When doing a full upgrade of a distribution, it is not always obvious which change in the update caused a given problem.

Versionless distributions are nothing new; they have been a staple of Debian for a while. In fact this is the process for testing package compatibility between releases, but it is also a lot more. There are two Debian releases that follow this paradigm, Debian Testing and Debian Unstable. As applications are packaged they are added to Debian Unstable, and after they fulfill certain criteria -- i.e. they have spent some time in Unstable and have not had any critical bugs filed against them -- they are passed along to Debian Testing. Users are able to balance their needs between new features and stability by selecting the corresponding repository. As soon as packages are added to the repositories they become immediately available to users for install/upgrade.
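Opting into this trickle-down model in Debian is just a matter of which repository your sources point at; a minimal sketch of a sources.list tracking Testing (the mirror URL is only an example):

```
# /etc/apt/sources.list -- follow Debian Testing continuously
# instead of a fixed numbered release
deb http://ftp.debian.org/debian testing main
deb-src http://ftp.debian.org/debian testing main
```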

What it really comes down to is that testing outside your environment is useful, but it cannot be relied upon exclusively. And when upgrades are performed it is important to know what has changed and how to undo it. Keeping track of changes for thousands of updates is nearly impossible. So update small and update often; use Debian. Good package managers are your best friend, but only second to great package developers!

Debian  LILUG  linux  software  2010-05-14T19:03:54
Monitor hot plugging. Linux & KDE

Apparently Linux does not have any monitor hotplugging support, which is quite a pain. Every time you want to attach a monitor to a laptop you have to reconfigure the display layout. This is a tad frustrating if you have to do it several times a day. And it doesn't help that the KDE subsystems are a bit flaky when it comes to changing display configuration; I've had plasma crash on me about one in three times while performing this operation.

Long story short, I got fed up with all of this and wrote the following short script to automate the process and partially alleviate this headache:

#!/bin/bash
xrandr --output LVDS1 --auto --output VGA1 --auto
sleep 1
kquitapp plasma-desktop &> /dev/null
sleep 1
kwin --replace &> /dev/null &
sleep 1
kstart plasma-desktop &> /dev/null

You will probably need to adjust the xrandr line to make it behave like you want, but auto-everything works quite well for me. Check the xrandr man page for details.

For further reading on monitor hot plugging I encourage you to read launchpad bug #306735. Fortunately there are solutions for this problem; however, they are on the other side of the pond.

Update: Added the kwin replace line to fix a sporadic malfunction of kwin (disappearance of window decorations) during this whole operation.

LILUG  code  debian  kde  linux  software  2010-04-10T16:58:58
dnsmasq -- buy 1 get 2 free!

I mentioned earlier that we netboot (PXE) our cluster. Before NFS-root begins, some things have to take place: the kernel needs to be served, IPs assigned, DNS look-ups made to figure out where servers are, and so on. Primarily 3 protocols are in the mix at this point: TFTP, DHCP, and DNS. We used to run 3 individual applications to handle all of this; they're all quite fine applications in their own right: atftpd, BIND9, and DHCP (from ISC). But it just becomes too much to look after: you have a config file for each of the daemons as well as databases with node information. Our configuration used MySQL and PHP to generate all the databases for these daemons, so that you would only have to maintain one central configuration -- which means you need to look after yet another daemon to make it all work. Add all of this together and it becomes one major headache.

Several months ago I installed OpenWrt onto a router at home. While configuring OpenWrt I came across something called dnsmasq. By default, on OpenWrt, dnsmasq handles DNS and DHCP. I thought it was spiffy to merge the 2 services... after all, they are so often run together (on internal networks). The name stuck in my head as something to pay a bit more attention to. Somewhere along the line I got some more experience with dnsmasq, and discovered it also had TFTP support. Could it be possible that what took us 4 daemons could be accomplished with just one?

So when the opportunity arose I dumped all the node address information out of the MySQL database into a simple awk-parsable flat file. I wrote a short parsing script which takes the central database and spits out a file dnsmasq.hosts (with name/IP pairs) and another dnsmasq.nodes (with MAC-address/name pairs). Finally I configured the master (static) dnsmasq.conf file to start all the services I needed (DNS, DHCP, TFTP) and include the dnsmasq.hosts and dnsmasq.nodes files. Since dnsmasq.nodes includes a category flag, it is trivial to tell which group of nodes should use which TFTP images and what kind of DHCP leases they should be served.

Dnsmasq couldn't offer a simpler or more intuitive configuration; with half a day's work I was able to greatly improve upon the old system and make it a lot more manageable. There is only one gripe I have with dnsmasq: I wish it were possible to have just one configuration line per node, that is, the name, IP, and MAC address all on one line. If that were the case I wouldn't even need an awk script to generate the config files (although the script turned out to be handy, because I also use the same file to generate a node list for Torque). But it's understandable, since there are instances where you only want to run a DHCP server or just a DNS server, and so having DHCP and DNS information on one line wouldn't make much sense.

Scalability for dnsmasq is something to consider. Their website claims that it has been tested with installations of up to 1000 nodes, which might or might not be a problem depending on what type of configuration you're building. I kind of wonder what happens at the 1000s-of-machines level: how will its performance degrade, and how does that compare to, say, the other TFTP/DHCP/DNS servers (BIND9 is known to work quite well with a lot of data)?

Here are some configuration examples:

Master Flat file node database

#NODES file it needs to be processed by nodesFileGen
#nodeType nodeIndex nic# MACAddr

nfsServer 01 1
nfsServer 02 1

headNode 00 1 00:00:00:00:00:00

#Servers based on the supermicro p2400 hardware (white 1u supermicro box)
server_sm2400 miscServ 1 00:00:00:00:00:00
server_sm2400 miscServ 2 00:00:00:00:00:00

#dual 2.4ghz supermicro nodes
node2ghz 01 1 00:00:00:00:00:00
node2ghz 02 1 00:00:00:00:00:00
node2ghz 03 1 00:00:00:00:00:00
...[snip]...

#dual 3.4ghz dell nodes
node3ghz 01 1 00:00:00:00:00:00
node3ghz 02 1 00:00:00:00:00:00
node3ghz 03 1 00:00:00:00:00:00
...[snip]...

Flat File DB Parser script

#!/bin/bash

#input sample
#type number nic# mac addr
#nodeName 07 1 00:00:00:00:00:00

#output sample
#ip hostname
#10.0.103.10 nodeName10
awk '
  /^headNode.*/      {printf("10.0.0.3 %s\n", $1)}
  /^server_sm2400.*/ {printf("10.0.3.%d %s\n", $3, $2)}
  /^nfsServer.*/     {printf("10.0.1.%d %s%02d\n", $2, $1, $2)}
  /^node2ghz.*/      {printf("10.0.100.%d %s%02d\n", $2, $1, $2)}
  /^node3ghz.*/      {printf("10.0.101.%d %s%02d\n", $2, $1, $2)}
' ~/data/nodes.db > /etc/dnsmasq.hosts

#output sample
#mac,netType,hostname,hostname
#00:00:00:00:00:00,net:nodeName,nodeName10,nodeName10
awk '
  /^headNode.*/      {printf("%s,net:%s,%s,%s\n", $4, $1, $1, $1)}
  /^server_sm2400.*/ {printf("%s,net:%s,%s,%s\n", $4, $1, $2, $2)}
  /^node2ghz.*/      {printf("%s,net:%s,%s%02d,%s%02d\n", $4, $1, $1, $2, $1, $2)}
  /^node3ghz.*/      {printf("%s,net:%s,%s%02d,%s%02d\n", $4, $1, $1, $2, $1, $2)}
' ~/data/nodes.db > /etc/dnsmasq.nodes

#output sample
#hostname np=$CPUS type
#nodeName10 np=2 nodeName
awk '
  /^node2ghz.*/ {printf("%s%02d np=2 node2ghz\n", $1, $2)}
  /^node3ghz.*/ {printf("%s%02d np=2 node3ghz\n", $1, $2)}
' ~/data/nodes.db > /var/spool/torque/server_priv/nodes

#Lets reload dnsmasq now
killall -HUP dnsmasq

dnsmasq.conf

interface=eth0
dhcp-lease-max=500
domain=myCluster
enable-tftp
tftp-root=/srv/tftp
dhcp-option=3,10.0.0.1
addn-hosts=/etc/dnsmasq.hosts
dhcp-hostsfile=/etc/dnsmasq.nodes

dhcp-boot=net:misc,misc/pxelinux.0,nodeServer,10.0.0.2
dhcp-range=net:misc,10.0.200.0,10.0.200.255,12h

dhcp-boot=net:headNode,headNode/pxelinux.0,nodeServer,10.0.0.2
dhcp-range=net:headNode,10.0.0.3,10.0.0.3,12h

dhcp-boot=net:server_sm2400,server_sm2400/pxelinux.0,nodeServer,10.0.0.2
dhcp-range=net:server_sm2400,10.0.0.3,10.0.0.3,12h

dhcp-boot=net:node2ghz,node2ghz.cfg,nodeServer,10.0.0.2
dhcp-range=net:node2ghz,10.0.100.0,10.0.100.255,12h

dhcp-boot=net:node3ghz,node3ghz.cfg,nodeServer,10.0.0.2
dhcp-range=net:node3ghz,10.0.101.0,10.0.101.255,12h
Debian  LILUG  News  Software  Super Computers  2008-03-13T00:30:40
MOTD

You all probably know that the most important thing on any multi-user system is a pretty MOTD. Between some other things in the past couple of weeks, I decided to refresh the MOTDs for the Galaxy and Seawulf clusters. I discovered 2 awesome applications while compiling the MOTDs.

First is jp2a: it takes a JPG and converts it to ASCII, and it even supports color. I used it to render the Milky Way as part of the Galaxy MOTD. While this tool is handy it needs some assistance; you should clean up and simplify the JPGs before you try to convert them.

The second tool is a must for any form of ASCII-art editing. It's called aewan (ace editor without a name). It makes editing a lot easier: it supports coloring, multiple layers, cut/paste/move, and more. Unfortunately it uses a weird format and does not have an import feature, so it's a PITA to import an already existing ASCII snippet -- cut and paste does work, but it loses some information, like color.

Aewan comes with a sister tool called aecat which 'cats' the native aewan format into either text (ANSI ASCII) or HTML. Below is some of my handiwork. Because getting browsers to render text is a PITA, I decided to post the artwork as images.
Galaxy MOTD:
galaxy motd
Seawulf MOTD:
seawulf motd
I also wrote a short cronjob which changes the MOTD every 5 minutes to reflect how many nodes are queued/free/down.
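I won't reproduce the cronjob itself, but its shape is roughly the following (the paths and the exact pbsnodes invocation here are illustrative, not the real script):

```
# /etc/cron.d/motd sketch: every 5 minutes rebuild the MOTD from the
# static ASCII art plus the current list of down/offline nodes
# (pbsnodes -l is Torque's "list problem nodes" mode)
*/5 * * * * root { cat /etc/motd.static; pbsnodes -l; } > /etc/motd
```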

One more resource I forgot to mention is the ascii generator: you give it a text string and it returns a fancy-looking logo.

Finally, when making any MOTD try to stick to a max width of 80 and height of 24. This way your artwork won't be chopped off even on ridiculously small terminals.

Debian  LILUG  News  Software  2008-03-02T23:41:22
NFS-root

I haven't posted many clustering articles here, but I've been doing a lot of work on clusters recently: building a cluster for the SC07 Cluster Challenge as well as rebuilding 2 clusters (Seawulf & Galaxy) from the ground up at Stony Brook University. I'll try to post some more info about this experience as time goes on.

We have about 235 nodes in Seawulf and 150 in Galaxy. To boot all the nodes we use PXE (netboot); this allows for great flexibility and ease of administration -- really, it's the only sane way to bootstrap a cluster. Our bootstrapping system used to have a configuration where the machine would do a plain PXE boot and then, using a linuxrc script, the kernel would download a compressed system image over TFTP, decompress it to a ram-disk, and do a pivot root. This system works quite well, but it does have some deficiencies. It relies on many custom scripts to keep the boot images in working order, and many of these scripts are quite sloppily written, so if anything doesn't work as expected you have to spend some time trying to coax it back up. Anything but the most trivial system upgrade requires a reboot of the whole cluster (which purges the job queue and annoys users). On almost every upgrade something would go wrong and I'd have to spend a long day figuring it out. Finally, using this configuration you always have to be conscious not to install anything that would bloat the system image -- after all, it's all kept in RAM; a larger image means more wasted RAM.

During a recent migration from a mixed 32/64bit cluster to a pure 64bit system, I decided to re-architect the whole configuration to use NFS-root instead of linuxrc/pivot-root. I had experience with this style of configuration from a machine we built for the SC07 cluster challenge; however, that was a small cluster (13 nodes, 100 cores), so I was worried whether NFS-root would be feasible in a cluster 20 times larger. After some pondering over the topic I decided to go for it. I figured that Linux does a good job of caching disk IO in RAM, so any applications used regularly on each node would be cached on the nodes themselves (and also on the NFS server); furthermore, if the NFS server got overloaded, other techniques could be applied to reduce the load (staggered boot, NFS tuning, server distribution, local caching for network file systems). And so I put together the whole system on a test cluster and installed the most important software: MPI, PBS (Torque+Maui+Gold), and all the bizarre configurations.

Finally, one particularly interesting day, this whole configuration got put to the test. I installed the server machines, migrated over all my configurations and scripts, and halted all nodes. Then I started everything back up -- while monitoring the stress the NFS-root server was enduring as 235 nodes started asking it for hundreds of files each. The NFS-root server behaved quite well: using only 8 NFS-server threads, the system never went over 75% CPU utilization, although the cluster took a little longer to boot. I assume that with just 8 NFS threads, most of the time the nodes were just standing in line waiting for their files to be served. Starting more NFS threads (64-128) should alleviate this issue, but it might put more stress on the NFS server, and since the same machine does a lot of other things I'm not sure it's a good idea. Really a non-issue, since the cluster rarely gets rebooted, especially now that most of the system can be upgraded live without a reboot.

There are a couple of things to consider if you want to NFS-root a whole cluster. You most likely want to export your NFS share as read-only to all machines but one; you don't want all the machines hammering each other's files. This does require some trickery. You have to address the following paths:

  • /var
    You cannot mount this to a local partition, as most package management systems make changes to /var and you'll have to go far out of your way to keep them in sync. We use an init script which takes /varImage and copies it to a tmpfs /var (RAM file system) on boot.

  • /etc/mtab
    This is a pain in the ass; I don't know whose great idea this file was. It maintains a list of all currently mounted file systems (information not unlike that of /proc/mounts). In fact the mount man page says that "It is possible to replace /etc/mtab by a symbolic link to /proc/mounts, and especially when you have very large numbers of mounts things will be much faster with that symlink, but some information is lost that way, and in particular working with the loop device will be less convenient, and using the 'user' option will fail." And that is exactly what we do. NOTE: autofs does not support the symlink hack; I have filed a bug in Debian.

  • /etc/network/run (this might be a debianism)
    We use a tmpfs for this also

  • /tmp
    We mount this to a local disk partition

All in all the NFS-root system works quite well. I bet that with some tweaking and a slightly more powerful NFS-root server (we're using a dual-socket 3.4GHz Xeon with 2MB cache and 2GB of RAM), the NFS-root way of bootstrapping a cluster can be pushed to serve over 1000 nodes; more than that would probably require some distribution of the servers. By changing the exports on the NFS server, any one node can become a read-write node, and software can be installed/upgraded on it like on any regular machine; changes will propagate to all other nodes (minus daemon restarts). Later the node can be changed back to read-only -- all without a reboot.
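The read-only/read-write toggle described above lives entirely in /etc/exports; a minimal sketch, with a made-up root path, subnet, and maintenance-node name:

```
# /etc/exports -- the whole cluster mounts the root read-only; one
# maintenance node is read-write so software can be upgraded live.
# Run exportfs -ra after flipping a node between the two.
/srv/nfsroot  10.0.0.0/255.255.0.0(ro,no_root_squash,async)  adminnode(rw,no_root_squash,async)
```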

Debian  LILUG  News  Software  Super Computers  2008-03-02T13:25:11
LIRC

LIRC is a software package for Linux which allows you to interface with remote control / remote-controlled devices. LIRC is pretty much a must for any half-decent MythTV configuration.

For my Myth setup I use LIRC both to change the channels on the cable set-top box and as a way to control the MythTV interface from the couch. Although this is a quite common configuration, it's annoying to get working.

The first thing you have to decide when setting up LIRC is what hardware you want to use. You can build your own receivers/transmitters, but the simple plans make for quite crappy and unreliable devices; for something more sophisticated, the cost of parts adds up to exceed the cost of kits/ready-to-use devices.

I had an (X10-based) RF serial receiver and remote (that I got a while back with my Nvidia PC cinema card). It worked better with LIRC than it ever did under Windows. To control the set-top box I first got an Iguanaworks USB transceiver, but it would not work since it only transmits at 36kHz (it can be flashed to transmit at 58kHz with a non-existent utility) and all the devices I needed to control only worked at 58kHz. Money down the drain. So I decided to try again; this time I got the serial Iguanaworks transceiver. This one interfaces with LIRC more like the home-made transceivers, except it has greater range (thanks to a .3F capacitor (think battery) which stores energy for transmissions).

Alright, so I'm thinking: I have the hardware, configuring should be a breeze. I already had the controlling software installed; all I needed was to compile the drivers. I downloaded the Debian driver source package. It looked all very nice and neat: it allowed me to select the drivers I wanted and even attempted to compile them automagically... except it failed. The sources it provides are too old and no longer compatible with my kernel. No big deal, I'll compile the vanilla drivers from LIRC -- wrong.

LIRC can't be compiled with just any combination of drivers you want; the configuration scripts compile either any ONE driver or all of them. No big deal, I thought, I'll compile all and install only the ones I need... except all the drivers don't compile. Compilation broke on some driver that I didn't need. So I decided to hack the config scripts a bit. I downloaded the CVS version of LIRC, opened the configure.in file, and around line 1207:

if test "$lirc_driver" = "all"; then
    lirc_driver="lirc_dev \
                 lirc_atiusb \
                 lirc_serial"

I trimmed down the list of drivers to only the ones that I needed. I then ran autoconf to generate all the needed Makefiles, ran "./configure --with-driver=all --with-port=0x3f8 --with-irq=4 --with-timer=65536 --with-x --with-transmitter && make && make install", and things built correctly with only the drivers I wanted.

From then on configuring LIRC was a breeze. I modified the Debian /etc/init.d/lirc script to start 2 lircd daemons, one for each driver, and configured them to talk to each other.

Finally I made my lircd.conf and lircmd.conf using irrecord and configured MythTV, xorg, and the channel-changing script. YAY, working mythbox.

Brief overview of all the programs and devices that make up my mythbox
A/V Hardware: Nvidia MX440 (VGA/S-Video out), Hauppauge 150 (RCA audio/S-Video in), CHAINTECH AV-710 (optical audio out), RCA DVD/audio system
Remote controlled devices: RCA TV, Scientific Atlanta Explorer 4200 (cable box), Nvidia branded X10 RF remote

The last problem I had was the cable box being off while MythTV was trying to record; it's a nasty one. But it turns out the cable box has this nice feature where it will turn on when any numerical key is pressed on the remote (it can be enabled in the settings menu). So when MythTV changes channels, the cable box is either already on or is turned on auto-magically.

More of my config files.

Debian  LILUG  MythTV  Software  2007-06-29T00:04:25
Chimei 22" Nvidia

I thought the days of modelines in xorg (and Linux in general) were over, but I guess I was wrong. The last 2 monitors I configured gave me a really difficult time. One needed just a modeline, but the other needed nasty config hacks. The first configuration was a Dell 21" monitor with an i945 graphics card, and the other a 22" Chimei CMV-221D/A with an Nvidia GeForce FX 5200 card.

The Chimei monitor autodetected just fine over VGA but was fuzzy and wavy, and over DVI the Nvidia card did not want to drive it above 800x600 (instead of the native 1680x1050). So I had to get down and dirty with the X configs.

Anyway, here are the appropriate sections from my xorg.conf file for the Chimei (I'll post the Dell ones later):

Section "Monitor"

   Identifier      "Generic Monitor"
   HorizSync       30-83
   VertRefresh     60
   Option          "DPMS"
   UseModes        "16:10"

EndSection

Section "Device"

   Identifier      "nVidia Corporation NV34 [GeForce FX 5200]"
   Driver          "nvidia"
   Option          "NoLogo"    "true"
   #NOTE: this is probably dangerous; only use this line with an appropriate Modeline
   Option          "UseEdidFreqs"  "false"
   Option          "ModeValidation"    "NoMaxPClkCheck,AllowNon60HzDFPModes,NoVesaModes,NoXServerModes,NoPredefinedModes"

EndSection

Section "Screen"

   Identifier      "Default Screen"
   Device          "nVidia Corporation NV34 [GeForce FX 5200]"
   Monitor         "Generic Monitor"
   DefaultDepth    24
   SubSection      "Display"
       Depth       16
       Modes       "1680x1050" "1024x768" "800x600" "640x480"
   EndSubSection
   SubSection      "Display"
       Depth       24
       Modes       "1680x1050" "1024x768" "800x600" "640x480"
   EndSubSection

EndSection

Section "Modes"

   Identifier  "16:10"
   Modeline    "1680x1050 (GTF)" 154.20 1680 1712 2296 2328 1050 1071 1081 1103

EndSection

Enjoy

Debian  LILUG  Software  2007-06-27T23:18:46
Who Wrote This Shit

Portmap by default listens to all IP addresses. However, if you are not providing network RPC services to remote clients (you are if you are setting up a NFS or NIS server) you can safely bind it to the loopback IP address (127.0.0.1)
<Yes> OR <No>

Maybe I'm slow or something, but I really hate this prompt in Debian, which accompanies the installation of portmap. It seems like you need a degree in English logic to figure out what you need to select. If you run NFS or NIS and are confused as hell by this prompt, just select NO.

UPDATE: Just because you select NO doesn't mean that Debian will actually leave portmap listening on all addresses. You might want to run dpkg-reconfigure portmap again and make sure it did the right thing. I got a nasty surprise the day after, when 2 of the NFS servers stopped mounting. Filed a bug report.

Debian  LILUG  Software  WWTS  2007-05-25T21:21:52
