Monday, December 21, 2009

Both INCLUDEPATH and DEPENDPATH are usually needed in qmake .pro project files.

qmake, the build tool provided with the Qt toolkit, converts project files written in its own mini-language to platform-specific Makefiles.

This process includes adding necessary dependencies to the Makefile, so that changes in source files trigger rebuilding of the outputs that depend on said sources.

If your project is spread across directories, you'll likely add an INCLUDEPATH line to your .pro file so that the #include directives look sane -- say #include "library/foo.h" instead of #include "../library/foo.h". This can be done by adding INCLUDEPATH += "../" to the .pro project file.

This, by itself, doesn't cause the files in include directories to be treated as dependencies. This is a sane default, since you likely don't want to rebuild your whole project if a system library changes -- assuming, of course, that the library is meant to stay binary compatible between releases!

Thus, if any of your source files references a header from somewhere within INCLUDEPATH, that by itself won't make a dependency on the header appear in the Makefile. You have to add the path to DEPENDPATH, too.
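
For example, sticking with the library/foo.h layout above, the .pro file ends up carrying both lines -- a minimal sketch:

# headers one level up: findable by the compiler, and scanned for dependencies
INCLUDEPATH += "../"
DEPENDPATH += "../"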

Thursday, December 03, 2009

Zultys WIP2 WiFi Phone with Asterisk

I have recently brought up an Asterisk extension on a Zultys WIP2 WiFi phone. The process had a few minor glitches, but I was able to resolve them all.



The system consists of:

  1. Zultys WIP2 wireless phone,
  2. HP Procurve AP530 wireless access point,
  3. HP Procurve 2625PWR managed Ethernet switch,
  4. a server running CentOS 5 with asterisk-1.6.1.9-90 from atrpms.
First of all, Zultys only provides manuals and firmware for their products in their knowledgebase, and you have to sign up to use it. Even so, the WIP2 was missing from the list. I have ~20 ZIP 4x4 phones also running off the 2625PWR switch, and their firmware and manuals are available in the KB. I emailed Zultys support, and they quickly responded with the manual and the 1.0.12 firmware. The phone had 1.0.3 installed.

I added a separate VLAN on a separate IP network just for the WIP2 phone and configured everything accordingly. Since WIP2 only supports WEP, it's not really secure, so I treat that VLAN as untrusted and only open up the SIP port on the server, as well as allow DHCP and TFTP read-only traffic. The phone only supports 802.11b.

I had the AP530 set up with one 802.11a radio and one 802.11g radio, so that had to be changed to 802.11a and 802.11b. Since I don't have external antennas on the AP, 802.11b and 802.11g cannot coexist -- that configuration requires hooking up external antennas. The phone-specific SSID was configured to use WEP, run on the locked-down VLAN, and was enabled only on the 802.11b radio.

The phone, as received with the 1.0.3 firmware, was first manually configured with the WiFi settings (SSID and WEP key). It then rebooted, automatically found the DHCP and TFTP servers, and loaded the new configuration. Since the ZIP 4x4 phones were already configured to boot via DHCP and load their firmware updates and configuration from the TFTP server, it was easy to copy ZIP4x4_common.cfg to WIP2_common.cfg and modify it per the manual.

The lcd_contrast had to be set to 10 to make the display legible; the ZIP4x4s need it set to 8.

Alas, no matter what I tried, the phone would not load the updated firmware automatically. Zultys phones let you enter the desired firmware version in the configuration file they load via TFTP, and they are then supposed to download that firmware, also via TFTP, should the desired and running versions differ. The only thing that worked was a manual update via the built-in bootloader; this is described in the manual, and it worked. The original 1.0.3 firmware's most visible bug was ignoring the power-off (red handset) button -- powering off required disconnecting the battery.

After updating the firmware, the phone started behaving weirdly upon booting up -- it would lose its configuration settings as soon as the TFTP download finished. Only the antenna bars and the default "Zultys WIP2" greeting appeared; the left/right button icons were missing (no menu button!). It was possible to make calls by pressing the green handset button, but receiving calls didn't work. A subsequent reboot would not even get on the network, as by that point the WEP key was already gone from the configuration (although it remained set in the radio until the reboot). Perhaps the internally stored configuration format is incompatible between the 1.0.12 and 1.0.3 firmware?

Somehow the configuration download would simply wipe out the current configuration, in spite of the Information -> Config File menu showing "Config File OK". It looked like a partial failure, though -- as if an exception were thrown (and later caught) while merging the existing and downloaded configurations, since some parameters did get through.

To cure the problem, I had to remove the default greeting_message from WIP2_common.cfg. It had a slash within the string, and that seemed problematic. I also had to go to User Settings and restore the phone to factory settings. The phone was then rebooted, and I re-entered the WiFi settings. After a reboot, it did correctly load the configuration. One reboot after that, and it behaves as expected. It works quite well, although admittedly it's a very simple wireless setup with a single access point. I have not checked if wireless roaming works.

Tuesday, October 06, 2009

VMware Fusion 2.0.6 + Alibre Design 12 = Good News

For the early adopters who ran Alibre Design on Windows in VMware Fusion virtual machines, VMware 2.0.6 brings some very good news: the dreaded stuck-red-highlight bug is gone.

When you move the mouse cursor, Alibre's 3D views highlight the face or edge closest to the cursor. In VMware, this red highlight would be stuck -- or rather, it would only turn ON, but never OFF.

The issue was present in every VMware Fusion 2 release up to and including 2.0.5, and in every version of Alibre Design starting with at least version 10. I only tried it on a MacBook Pro with NVIDIA graphics, so it might have been an NVIDIA-only issue.

I was very nicely surprised after updating VMware to 2.0.6 today. Not only did networked Windows startup take perhaps 25% less time (somehow), but Alibre's workspace starts up much faster (feels like 50%) and the workspace display is bug-free, as far as I can tell.

This, coupled with significant speed improvements in Alibre 12, means that a very solid parametric CAD is available for OS X on Intel machines for ~$300 extra over the per-seat price on Windows. That $300 factors in the eBay cost of a Windows XP license and the cost of a VMware Fusion license. Never mind that you could get a basic Alibre seat for $99 in a promotion that expired just a few days ago. Sweet!

Update: things still work just fine with Fusion 3.1.1 and Alibre Design 12.1 - as shown in the picture above.

Sunday, October 04, 2009

PDF to PS to PDF on OS X: How to Fix Those That Cause Errors

Apple's OS X generates anti-distillation blurbs in the PostScript files generated from "encrypted" PDFs. Remember Prohibition, anyone?

The "encrypted", or locked down, rather, PDFs happen to be mostly everything these days. Forms that are meant to be fillable, bank account statements where you want to mark things up to reconcile accounts, etc. My most recent run-in with this stupidity was Anthem's and CompanionLife's insurance forms. I actually wish we didn't have to fill out, um, modify those, right? And surely it's every insurance companies' dream to get the forms back with my dreadful handwriting on them...

So, the PDFs are marked as protected from modification. OS X's otherwise excellent Preview doesn't ignore such marks when you print to a PostScript file, and the resulting PostScript throws an error when you try to distill it back into PDF, say using ps2pdf14.

Upon inspection of the postscript files, you can see the eexec blurb, which can be decoded using ghostscript's decode.ps. The only useful part of the blurb is cg_md begin.


Thus, if you want to clean up PostScript files printed from "protected" PDFs, you need to replace everything between mark currentfile eexec and cleartomark with cg_md begin. This can be done using this handy-dandy utility:

#! /usr/bin/env python3
# Copy a PostScript file from stdin to stdout, removing Apple's
# ps-to-pdf "protection": everything from the "mark currentfile eexec"
# line through the "cleartomark" line is replaced with "cg_md begin".
import sys

inside = False
for line in sys.stdin:
    if not inside:
        if line.startswith("mark currentfile eexec"):
            inside = True
        else:
            print(line, file=sys.stdout, end="")
    else:
        if line.startswith("cleartomark"):
            print("cg_md begin", file=sys.stdout)
            inside = False
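
For example, assuming the script above is saved as strip-eexec.py (a made-up name) and the PostScript file was printed from Preview, the round trip back to PDF might look like this:

$ ./strip-eexec.py < protected.ps > cleaned.ps
$ ps2pdf14 cleaned.ps unprotected.pdf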


Wednesday, September 23, 2009

Google Search Defaults To Wrong Country TLD in Opera

Opera 10.0 on OS X remembers a non-default, country-specific Google TLD the first time Google redirects it to one. Having recently visited Australia, my Google searches from the toolbar were stuck on google.com.au; clearing cookies and private data didn't help.

The culprit is the following line in ~/Library/Preferences/Opera Preferences:

$ cd ~/Library/Preferences/Opera\ Preferences/
$ grep -r 'TLD Default' *
[...]
operaprefs.ini:Keyboard Configuration={Resources}defaults/standard_keyboard.ini
operaprefs.ini:Mouse Configuration={Resources}defaults/standard_mouse.ini
operaprefs.ini:Show Default Browser Dialog=0
operaprefs.ini:Google TLD Default=.google.com.au


All it takes to restore the default Google TLD (the US one, in my case) is to quit Opera and remove the Google TLD Default line from operaprefs.ini. Given the browser's cross-platform nature, the same approach will likely work on Windows and Linux. Hopefully this will save some traveller a bit of head scratching.
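
If you prefer doing it from the Terminal, something along these lines should work (an untested sketch; run it from the same directory as above, with Opera closed):

$ sed -i '' '/^Google TLD Default=/d' operaprefs.ini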

The ~/ refers to your home folder; it is the Unix equivalent of Windows's %USERPROFILE% environment variable.

Saturday, September 05, 2009

Pascal-Style Local Functions in C (for Z8 Encore!)

When I was a kid, I used Turbo Pascal. It had one feature that I sorely miss in every embedded C compiler: local functions -- local, as in local to a block. They become a necessity when you want to maintain decent code performance and avoid the penalty of passing duplicate data via parameters. They also help keep the code readable and decently factored.

A local function, were C to have it, would look like this:

void fun1(void) {
 int a;
 void fun2(void) {
   a = 0;
 }
}

I hope you see that fun2() is local to the main block of fun1(), and that the variable a is in scope within fun2().

A somewhat less clean, but equally well performing way of accomplishing this would be:

extern int a;
void fun2(void) {
  a = 0;
}
void fun1(void) {
...
  fun2();
}

Now the question remains: where do we actually define the variable a, which is supposed to be local to fun1?

It becomes easy if your compiler supports static frames. Static frames are, as their name implies, allocated statically by the linker, and are overlaid according to the call tree. With the usual dynamic frames, automatic variables end up on the stack. It should not be any harder to do with dynamic frames, but I haven't checked that yet.

ZDS II, the C IDE for Zilog's Z8 Encore! and ZNEO products, supports static frames at the assembler and linker level. A frame ends up defining a near and far segment; for a function called fun1 those segments are called ?_n_fun1 and ?_f_fun1, respectively.

Assuming our code is in a file named file.c, and we're compiling for large model, we get

// file.c
extern int a;

#pragma asm segment ?_f_fun1
#pragma asm _a ds 2
#pragma asm segment file_TEXT

void fun2(void) {
  a = 0;
}

void fun1(void) {
...
  fun2();
}

Here, we manually allocate storage for a in fun1's far frame (the model being large). This method can be used to bring back a nifty (albeit buggy) feature of ZDS II 4.9.x, which was abandoned and is no longer present in ZDS II 4.11.0: arbitrary far/near automatic variables.

The syntax used to look like this:

void fun(void) {
  near int a;
  far int b;
  ...
}

The near/far storage specification was ignored when using dynamic frames, but for static frames it let you select whether a given automatic variable would be stored in the near or far memory space. Accesses to far variables take an extra clock cycle per byte, and can incur an extra load penalty in some cases, where the data has to be transferred from far memory to a register using LDX before being usable by the target opcode.

We implement this functionality as follows:

// file.c

#pragma asm segment ?_n_fun
#pragma asm _a ds 2
#pragma asm segment ?_f_fun
#pragma asm _b ds 2
#pragma asm segment file_TEXT

void fun(void) {
  extern near int a;
  extern far int b;
  ...
}

The main drawback of this method, besides it being prone to typos and relegating part of the compiler's job to the wrong side of the keyboard, is that the assembly-level symbols _a and _b really have file scope, because that is how the assembler sees them. Thus, the canonical way of using this trick is to keep the complete tree of nested functions in one C source file, and to give unique names to all the automatic variables that have to be accessed by the local functions.

We kill two birds with one stone here: we regain the foreign-model automatic variables of ZDS II 4.9.x vintage, and we get Pascal-style local functions, which can access the automatic variables in scope at the caller's call site. Naturally, this is a hack which should only be used where performance or storage limitations demand it. It could be automated with an extra C preprocessing pass.

Thursday, September 03, 2009

Asterisk 1.4 Update Woes

I admin a CentOS 5 server with Asterisk 1.4 on it, hooked up to an XO T1 link. The repo I use for this is atrpms, which has been pretty much problem-free.

After the most recent upgrade, asterisk would fail to load the dahdi channel driver, with the following in /var/log/asterisk/messages:

WARNING[pid] loader.c: Error loading module 'chan_dahdi.so': /usr/lib/asterisk/modules/chan_dahdi.so: undefined symbol: ast_smdi_interface_unref

The solution, as hinted by Chris Maciejewski, is to load the res_smdi.so module. For whatever reason, my /etc/asterisk/modules.conf had the following line:

noload => res_smdi.so

Changing it to "load => ..." fixed the problem.
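
In other words, the module loading entry in /etc/asterisk/modules.conf ends up reading:

; was: noload => res_smdi.so
load => res_smdi.so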

Tuesday, September 01, 2009

Zilog's Encore!: Changes for Change's Sake, and When Simple Won't Do

I have been using Zilog's Z8 Encore! chips since the first marketed silicon release of the Z8F4801, Rev AA. The chip is a nice 8-bit MCU. As nice and useful as it is, its history can't help but highlight a concerning lack of vision and direction in Zilog's marketing and development efforts.

Let's start with a rather simple thing: the tool used to interface with the chip's one-wire DBG pin. It's nothing more than an autobauding, half-duplex asynchronous port at CMOS voltage levels. Such simplicity was reflected in Zilog's first debug "tool", sold with the early Z8F6401 development kits. It was a small board (approx. 1 sq in) with a TTL-to-RS232 converter chip (a MAX232, IIRC), a diode standing in for an open-drain driver, the DBG line pull-up, some decoupling and charge-pump capacitors, and two connectors -- one for the 6-pin target header, another for the serial cable.

Soon thereafter it became obvious that RS232 serial ports were being phased out of newly made PCs. In keeping with the times, Zilog switched to a USB-based "smart cable". This came in a plastic enclosure and had a Zilog MCU, a USB interface chip, and assorted other circuitry. Naturally, it was undocumented, and to use it you had two choices: use the driver and DLLs that Zilog furnished, or reverse-engineer the protocol. The latter was not very attractive, since the replacement was almost a no-brainer: a USB-to-serial converter plus the simple serial-to-DBG interface.

Some of you will now think: well, wait a moment, didn't the "smart" cable provide some extra functionality compared to the crude serial-port-based interface? Oh yes, it did: it could also drive the target's RESET# line. That's about it. Never mind that the DTR line could have been used for the same purpose -- the original "dumb" cable simply lacked the DTR-to-RESET# connection. I can almost visualize the development tools division manager in a meeting with the Encore! line strategist and upper management: we will make a new "smart" cable that provides the RESET# signal to the newfangled 8-pin XP series MCUs, and besides, it will make things much faster.

In less than a decade, Zilog has managed to put out at least four versions of the DBG-to-PC interface: the "dumb" one using the MAX232, two USB smart cables, and an Ethernet smart cable. Timing full-chip program loads using the DLLs and drivers provided with the most recent version of ZDS II, the "dumb" interface wins hands down, providing a 25-50% speedup on programming and snappier behavior during debugging sessions. So, a lot of engineering effort for naught. Note that the "dumb" solution can be trivially ported to USB, and even isolated, all at rather minimal cost, even if one would like the option of powering the target from the USB port.

In that same decade, Zilog managed to change the logo stamped on their Encore! chips no less than three times. When I get back from my travel, I have to snap some pictures, as it is somewhat entertaining. I have a cache of single pieces from various Encore! lots.

In the same decade, they managed to keep shipping a free, but admittedly rather botched, C compiler/IDE combo called Zilog Developer Studio II. During this period, some rather obscene code generation bugs were left alone, one quite useful feature was dropped, other useful features were added, and development generally progressed at a glacial pace compared with, say, gcc. The ZDS II woes almost warrant another post, but I'd much rather let sleeping dogs lie.

Some straight priorities of Zilog, if you ask me.

I will finish with a rather technical look at the real (vs. Zilog-imagined) needs of a "smart" cable for Zilog's DBG pin protocol.

As it turns out, whatever "smarts" the smart cable had were wholly unnecessary. You see, Zilog's chip design folks very thoughtfully made the half-duplex DBG pin protocol inherently streaming-friendly. Save for the oddball whole-ROM CRC calculation, every command is executed in real time and requires no pacing or waiting. The reply bytes closely follow the command bytes -- by "closely" I mean a delay of a couple of system clock cycles. As expected, a full erase-upload-verify cycle on a Z8F4821, done via an FT232R USB-to-serial interface, takes a whopping couple of seconds when done at ~150 kbaud (that's 15k bytes per second using the 8-N-1 format).

One thinks, of course: is it possible to really speed it up any, and would placing a CPU between the USB-to-serial chip and the DBG line really help? Yes, somewhat. Let's assume we're using FTDI's interface chips, like the FT232R. Those have a small, 384-byte buffer. Since the OS can sometimes starve USB devices of read transfers, it could help to have an extra layer of buffering between the stream reflected from the DBG pin and the FT232R, obviously with RTS/CTS handshaking enabled.

The USB transfers are paced by the 1 ms USB frame period. This means that a turnaround from the PC to the target and back is no shorter than ~1 ms; in practice it is 3 ms, since the FTDI chip, faced with no further activity on its input, purges the receive buffer only after an extra 2 ms delay. At 3 ms per round trip, that caps you at only a few hundred command-reply exchanges per second, regardless of baud rate.

First the PC sends a read command to the target, the half-duplex interface reflects the command back, and the target appends results of the read. Before subsequent commands can be sent, the PC must receive the results.

This can be worked around by providing a method of pacing the transmission so that the reply has a place to "fit in". At "slow" baud rates, such pacing should require no extra effort: we send 0xFF (all data bits set), and the target pulls some of the bits low with its open-drain driver. I haven't checked whether the target's contention-detection circuit gets tripped by that, though. At "fast" baud rates, where we could reasonably expect the skew in the target's reply bitstream to be significant enough to corrupt it, this of course becomes problematic.

This can be "fixed" by inserting an MCU between the USB-to-serial interface, and the target. The MCU would perform only two functions:

  1. buffering the data when the host keeps RTS# deasserted, to prevent data loss due to overruns,
  2. inserting a "sense" delay between bytes coming from the host being equal to 0xFF, and other bytes.
The second function would ensure that if the host sends 0xFF, the target is given perhaps a bit period or two to start replying on its own -- if it does, the 0xFF byte from the host is discarded. If the target doesn't reply, it's a fair assumption that the 0xFF byte was part of the command stream destined for the target, and it must be forwarded. Thus the host PC has an easy way of pacing its data stream to accommodate replies from the target, all without losing streaming.

Moreover, any host software can easily accommodate the absence of this functionality, either by enforcing a roundtrip latency -- following each FT_Write() with an FT_Read() -- or by selecting a lower baud rate (I'll have to check that!).
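
For illustration, here is a minimal host-side sketch of that write-then-read pacing, using pyserial rather than FTDI's D2XX FT_Write()/FT_Read(). The port name, baud rate, and command bytes are placeholders, not actual Z8 Encore! OCD commands:

#! /usr/bin/env python3
# Sketch: talk to a half-duplex, echo-back debug interface by following
# every write with a read of the echoed command plus the expected reply.
import serial  # pyserial

def transact(port, command, reply_len):
    # The DBG line reflects every byte we send, so the first len(command)
    # bytes read back are our own command echo; the target's reply follows.
    port.write(command)
    echo = port.read(len(command))
    if echo != command:
        raise IOError("echo mismatch -- wrong baud rate or bus contention?")
    return port.read(reply_len)

if __name__ == "__main__":
    dbg = serial.Serial("/dev/ttyUSB0", 115200, timeout=0.1)  # placeholder port and baud
    reply = transact(dbg, b"\x80\x00", 2)  # placeholder command bytes, 2-byte reply
    print(reply.hex())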

Monday, August 31, 2009

XSS Vulnerability in the Real Life: Passports

I'm traveling again. This time to Australia. I travel a few times a year, always departing from the U.S.

Each time I travel internationally, the check-in airline agent dutifully inspects my passport. The latter has acquired a sizable collection of visa labels, admittedly much flashier than the dull black-on-white photo page of the Polish passport. The visas usually occupy the whole page, look nice, and are formatted to somewhat resemble a passport page -- they have a picture, same personal data, and the two machine readable lines on the bottom.

Each and every time, the airline personnel ignore the photo page and look at the first official-enough-looking visa. Somehow it always happens to me in the U.S., and never abroad. Heck, they get angry at me for pointing out that they really should not be looking at the flashy visa labels, but at the picture page. Sigh.

Call me paranoid, but if this isn't a huge honking screaming-in-yer-face security vulnerability in the passport inspection process, then I don't know what is. It's just like a cross-site scripting vulnerability: you have a trusted webpage (document), and a third party can inject arbitrary data and have it trusted just the same. IANAL, but last time I checked, anyone can stick anything into the visa pages of a passport.

Next thing we know, someone will find that a system that dutifully reads and trusts the machine-readable pages has some sort of null-terminated-string vulnerability -- perhaps one that can lead to executable code injection. Thus we come full circle: from code on punched cards to code on optically machine-readable paper. Not that some card readers didn't use optical readouts, mind you :)