WyBlog, the best thing about New Jersey since the invention of the 24 hour diner.
Technorati is indexing me again! They had to make a code change to fix the problem with my blog getting stuck in their queue. Kudos to Eric M. and the guys at GetSatisfaction.com where they have "community powered support for Technorati".
Well, they're "sorta, kinda" indexing me anyway. It's on a 24 hour tape delay or something. So I never get picked up by Memeorandum because they pull from Technorati and Technorati has stuff I posted yesterday listed as my latest blog entry. And that's old news to Memeorandum.
"This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. It is being made available in an effort to advance the understanding of environmental, political, human rights, economic, democracy, scientific, social issues, etc. It is believed that this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit for research and educational purposes."
#VRWC Twitter feed:
Good news VMS fans! Reports of Itanium's death were, uh, exaggerated. Yesterday a judge ruled that Oracle breached their contract with HP when they dropped support for HP's Itanium servers.
A California judge has ruled that Oracle breached its contract when it decided to drop support for HP's Itanium servers for future versions of Oracle's database software. The judge told Oracle it must continue to make database products for Itanium "until such time as HP discontinues the sales of its Itanium-based servers."
According to the latest HP Product Roadmap, new VMS on Itanium development is guaranteed through the end of 2015, and probably beyond. HP support contracts also provide for a minimum of 24 months of extended maintenance support, thereby ensuring that VMS on Itanium will continue to be viable for at least the next 5 years.
And since this humble blog is hosted on VMS, not to mention that it's
also my bread-and-butter at
$DayJob, you won't be seeing me fading
into the sunset any time soon either.
Tammy asked me yesterday "how come you haven't updated your blog in over a week"? "Huh?", was my reply, "sure I have". Then she showed me her Yahoo! home page, and sure enough the RSS feed of my blog was over a week out of date. A quick check of Feedburner found the problem, the RSS feed was timing out. There seems to be an unwritten rule that RSS feeds must be generated and returned within 10 seconds or else the feed aggregators assume your blog is down or dead. Since Blosxom is a Perl script running in a DECnet server task it sometimes takes longer than 10 seconds to parse the entries and generate the RSS or HTML code.
A Perl whiz I am not. I tried the obvious stuff like turning off the features that don't apply to RSS feeds (such as the archive listing) but it still took too long. At the same time I was working on getting an RSS 2.0 feed set up for the folks at 9rules. To kill two birds with one stone I decided to use the Blosxom static rendering feature to generate the RSS 2.0 feed. This would be stored in a text file which the web server could ship out without needing to call the Perl script.
And, what do you know, it works. I now have to do one more manual step each time I post (regenerate the static index page) but Feedburner is happy again. Hopefully any readers who dropped off (I was wondering when you were going to notice that big fat 0 in the Feedburner box! -Ed) will re-subscribe.
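For the record, static rendering is kicked off by running the Blosxom script by hand with the static password from its config file. Something along these lines (the password is obviously made up, and your mileage on the exact flags may vary):

```
$ perl blosxom.pl "-password=my_static_password" "-all=1" "-quiet=1"
```

That's the sort of thing that can live in a DCL command file, so the extra manual step stays a one-liner.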
I find it interesting that the whole Web 2.0 paradigm of dynamic content is
no match for overly vigorous enforcement of obscure rules. Instead of being
able to generate my RSS feed on-the-fly I'm now reduced to serving up a static
text file just to be sure that my content is considered to be "timely". The
irony of the whole thing is obvious.
My venerable VaxStation 4000 is still going strong, except now I can't actually log in to it because the monitor died. The beauty of keeping it around is that it is a perfect X Windows server when you're programming on OpenVMS. It's got the LK401 keyboard with 20 function keys and the magical "compose character" key too. OpenVMS utilities are designed to take advantage of the LK401 keyboard. It's darn near impossible to use TPU or LSE or the debugger without the keypad and function keys.
But, replacing the monitor is probably not a viable option. The VaxStation video card isn't VGA (or any variant thereof like DVI). Its output is 3 BNC connectors, and it uses "sync on green". The various used Vax distributors will sell me a refurbished monitor, for $600, with a generous 30 day warranty. Since most of these monitors are 20+ years old, that's probably not a great deal. Plus, they weigh something like 100 pounds, so shipping one to New Jersey won't be cheap either.
So, sitting next to the VaxStation is my PC. It's got a nice flat screen monitor, plenty of hard disk space, and a fast cpu. It's got PowerTerm 525 installed, and that does a pretty good job of mapping the PC keyboard onto what OpenVMS needs to see. In particular, it lets OpenVMS dictate how the keypad works instead of using NUM LOCK to toggle its functions. OpenVMS expects the key where NUM LOCK is located to be the PF1 key, and when it comes to using OpenVMS utilities, PF1 is the most important key there is. It's used to modify the behavior of other keys, and without it, navigating the utilities is impossible. I can't just use terminal emulation though, I'm too used to how the Language Sensitive Editor and Debugger work in DECwindows (the OpenVMS implementation of X11R6). So, I decided to investigate using a PC based X Server. And thus my nightmarish descent into keyboard hell began.
I started by downloading
Cygwin/X since it's free (licensed under the GPL). It integrates very
nicely with MS Windows XP; the X clients display just like windows apps.
Unfortunately though, the keyboard works just like a windows keyboard. The
NUM LOCK key most certainly does not act like OpenVMS expects the PF1 key to act.
I spent most of yesterday with Google trying to find out how to remap the
keyboard with Cygwin/X. There's lots of information out there
that probably makes sense to somebody, but it's all Greek to me. It talks
about keysyms and keycodes, but what I think I need it to do is send escape
sequences to the host.
The Cygwin/X website says the maintainer quit in 2005 so the chances of
anyone adding LK401 keyboard mapping to it seem small (that's one of the
problems with open source software, most of it is not actively maintained
by anyone). Jack, who knows his way around Linux pretty well, couldn't
get xmodmap to set up the keypad right either.
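For anybody who wants to try where we failed, this is the flavor of xmodmap incantation we were fiddling with. I'm not claiming it actually works, since we never got it to:

```
! tell X to hand over the Num Lock key as the PF1 keysym (X11 calls it KP_F1)
keysym Num_Lock = KP_F1
```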
HP includes an X Server with their Pathworks distribution. It's called eXcursion, and it hasn't been updated since the Reagan administration. I tried it anyway. And, the keyboard works! It's also slow as molasses in winter, and it doesn't exactly play nicely with Windows XP. The only display option is full screen, completely occluding the Windows desktop. To switch back to XP I can hit CTRL-Escape, but then mouse clicks on the XP task bar are also delivered to the X Windows server. The CDE control panel lives at the bottom of the screen, so whenever I click an XP icon, I get an X client starting up (whichever one is displayed under that part of the task bar). Any XP app window always occludes the X Windows apps, so I have to minimize all of them in order to use OpenVMS.
HP's website had a link to Attachmate and their Reflection X Server. I downloaded an evaluation copy (good for 60 days) and installed it. This product integrates nicely with MS Windows XP. The CDE control panel sits right above the XP task bar (no conflicts!) and it can make the X Windows desktop transparent so the Windows desktop is displayed instead. Inactive X clients can be minimized to icons, and active clients (like the DECterm I'm typing this blog post into) coexist with other XP apps. Response time is lightning fast.
The keyboard started out like the one from Cygwin/X. Fortunately there is the Reflection X Manager utility which has a "keyboard" settings tab. Under it there is a "Dec" radio button. Selecting that and restarting the X Server enabled me to work almost normally. The PC keyboard only has 12 function keys, and LSE makes use of LK401 keys F17 - F20. I haven't found those yet, but they've got to be mapped somewhere. eXcursion mapped them to the Print Screen, Scroll Lock, and Pause keys but Reflection X passes those keys on to Windows XP, with sometimes interesting results.
Reflection X is kind of pricey, too. They don't sell directly to end-users; you have to go through one of their authorized distributors. The distributors are all set up to handle large corporate or government accounts; trying to buy just one copy of Reflection X seems pretty hard to do. CDW is the only one with a 1-user license listed on their web site (for $382.99, ouch!) but with no indication of how to get the media (download?) or documentation. Ideally I'd like to purchase 2 copies so I can install it on my home PC too.
I've got 59 more days to put Reflection X through its paces before I have
to make a final decision. I'll also keep looking for other (hopefully less
expensive) alternatives.
Now that I have my own domain name (wyblog.us) naturally I want the web server on Ceres to serve the blog pages from that address, while still serving all my old pages (pictures, static pages, various pdf docs, etc) from their old URLs. It turns out that doing this wasn't intuitively obvious, and the online docs for the OSU web server didn't shed much light.
So, I embarked on some experimentation. The first config change was easy, adding a 2nd ip address to Ceres. Tcpip$config.com makes this trivial, and the servermaint script allowed me to assign ceres.datalife.com to the old ip address and wyblog.us to the new one.
Then I encountered my first "feature". Any URL that worked on ceres.datalife.com also worked on wyblog.us. It took Googlebot about 5 minutes to start indexing all of my static pages using the new URL. My first reaction was, "boy those guys are fast", quickly followed by "how do I make it stop?".
The magic configuration commands for ip-based multi-homing are in www_root:[system]http_paths.conf. The servermaint script had put in the 2 ip addresses this way:
# Configure multi-homing (ip based and host/cname-based) root pages and
# log files.
#
.ITERATE localaddress cname $hname ;\
   AccessLog $cn_logfile $cn_extflags ;\
   map / $cn_root
.ITERATE localaddress $addr $name ;\
   AccessLog $mh_logfile $mh_extflags ;\
   map / $mh_root
.next 188.8.131.52 ceres.datalife.com access-ceres.log /
.next 184.108.40.206 wyblog.us access-wyblog.log 1 /blog
localaddress   # terminate localaddress blocks.
map /blog* /htbin/blosxom.pl*
map /wyblog* /htbin/blosxom.pl*
I thought this would map anything for wyblog.us into the Blosxom.pl script. Hah! It mapped the root page, but anything else fell through into the rest of my (gargantuan) path remapping maze. This is how Googlebot was able to start grabbing my static pages via the new URL.
I tried a whole bunch of different combinations of "map" trying to get things to go to the right place when I finally had an "aha!" moment. That "map /" in the localaddress directive, what if it was "map /*"? I was on to something for sure! A few tweaks later here's how I'm configured now:
.ITERATE localaddress $addr $name ;\
   AccessLog $mh_logfile $mh_extflags ;\
   map /* $mh_root
.next 220.127.116.11 ceres.datalife.com access-ceres.log /ceres/*
.next 18.104.22.168 wyblog.us access-wyblog.log 1 /blog/*
#
# now map the ceres and blog paths back
# this will prevent wyblog.us from serving anything but the blog and its images
map /blog/images/* /images/*
map /blog/blog* /blog*
map /ceres/* /*
map /blog* /htbin/blosxom.pl*
map /wyblog* /htbin/blosxom.pl*
This works like a charm. The trick is to take anything coming in for ceres.datalife.com and prepend "/ceres/" on to it. Then take anything coming in for wyblog.us and prepend "/blog/" on that. The only things other than blog posts that should be served by wyblog.us are the image files used by the blog templates. So I map all the blog stuff into the script rule for /blog and then map all the ceres stuff back to the root.
Now when wyblog.us gets a request for a page that ought to be served
from Ceres, it feeds the request into the Blosxom.pl script, which doesn't
know how to serve those requests, so the requestor gets a blank page.
I've been chasing a nuisance problem for the past two days. I have an application that accesses files in a directory depending on their modification date. Sometimes I need to update one of these files, but I don't want the application to reprocess it. For many years (starting with Vax/VMS) I've used the "FILE" utility (submitted to Decus by Joe Meadows) to reset this date.
I recently ported part of this application to OpenVMS 8.3 on an Alpha DS20.
This Alpha is part of a cluster that shares disks with my Vaxstation. I
moved the files over, and reset the modification dates on the Vax. But,
when I logged in to the Alpha, the files showed today as the modification
date. $DIR/DATE=MODIFIED on both systems produced different results.
I started pulling my hair out! I checked file caching parameters, I checked
the firmware rev on the MSA1000 SAN, everything seemed OK. The HP ITRC web
site wasn't very helpful either. Finally a Google search brought up the
VMS 8.2 New Features Manual. In it, under a section describing changes made
for Posix compliance, HP mentions that the modification date displayed by
the $DIRECTORY command is actually generated on the fly
based on a new set of dates stored in an extended file header. In previous
versions of VMS, the modification date was a part of the file's directory
entry. Now there are 4 dates (revision date, accessed date, attributes date,
and data modification date). The date displayed in a directory listing is the
latest of these 4 dates.
But wait, it gets better. You can only see and modify these new dates if
you also set the disk volume parameter
/VOLUME_CHARACTERISTICS=ACCESS_DATES and this parameter is not
set by default. If this parameter is not set, a directory listing shows
the 4 new dates as blank! I modified the volume attributes, and saw that
$FILE/REVISION_DATE was setting the data modification date, but
then OpenVMS set the attributes date to today (which makes sense since the
utility had indeed modified the file attributes).
Fortunately, there is a DCL command to override and set these dates (only on
OpenVMS Alpha though, it's not implemented on the Vax). The command is
$SET FILE/ATTRIBUTES=(ATTDATE=date,MODDATE=date). Using it
I was able to see the same results for $DIR/DATE=MODIFIED
on both the Vax and Alpha systems. Hooray!
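To recap, the whole dance looks like this (the disk and file names here are stand-ins for my real ones):

```
$ SET VOLUME/VOLUME_CHARACTERISTICS=ACCESS_DATES DKA100:
$ SET FILE/ATTRIBUTES=(MODDATE=25-OCT-2007,ATTDATE=25-OCT-2007) MYFILE.DAT
$ DIRECTORY/DATE=MODIFIED MYFILE.DAT
```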
Thirty years ago today, on October 25, 1977, Digital Equipment Corporation released VAX/VMS V1.0 for the VAX-11/780. The Vax was designed to be the replacement for the PDP-11, a 16-bit minicomputer that was a real workhorse in its own right. With a 32-bit address space, virtual memory, and twice the number of registers, the Vax opened a whole new world of mid-range computing. The original Vaxen were actually dual-architecture; they had the full PDP-11 instruction set and could run PDP-11 binaries in Compatibility Mode. Bootstrap (we didn't call it "startup" back then) always began in Compatibility Mode. The procedure entailed loading a floppy disk that executed a subset of RSX (a PDP-11 operating system) with just enough smarts to locate the VMS system image, load it into memory, and switch the CPU to Native Mode to begin execution of the VAX/VMS operating system.
I remember seeing a VAX-11/780 at RPI in the basement of the Amos Eaton building, probably around 1979. I vaguely recall meeting a guy who was involved with working on it, but frankly I didn't pay the Vax much attention. I was heavily invested in IBM System/360 assembly language programming and since RPI had just upgraded to a 3083 I was a lot more excited about that. Besides, the Vax didn't use punch cards so how could it be a Real Computer? Hey, what did I know? I was just a geeky math major.
While I was in college I also worked here at Data Life during the summers. We had an IBM 360/20 back then, and a PDP-11/34 running RSTS. When I graduated from RPI I joined DL full-time. The lease on the 360 was coming up for renewal and IBM was pushing us to upgrade to a 4341. I thought this was a swell idea; after all I was true-blue-through-and-through. Simon hooked up with a DEC sales guy who showed him a new Vax model, the VAX-11/750. The Vax was way less expensive than the IBM 4341, and DEC promised us help with migrating our IBM Cobol applications, so early in 1981 one of the first VAX-11/750's to be installed in New Jersey landed here in Verona.
It had 256 kilobytes of memory, 2 RK07 removable disk drives with a capacity of 71 megabytes each, and a pair of MT11 9-track tape drives. (This was mostly a step up from what we had on the 360 - 128 K of memory, a bank of 5 and 10 megabyte disk packs, and 4 high-speed vacuum column tape drives.) Most of our data was on magtape, our processing consisted of reading data from one tape, massaging it, and writing it to another tape. Each "job" had from 10 to 15 "steps" in it, and each "step" involved reading one tape and writing another. We had a full-time tape librarian, his whole job consisted of cataloguing, cleaning, and tracking the thousands of tape reels we used each month.
The guys from DEC helped a lot with the conversion. We became experts in
the use of the
LIB$TRA_EBC_ASC library routine. (IBM systems
stored data in EBCDIC format but DEC systems use Ascii. Microsoft chose
Ascii for the PC so I guess Ascii "won".) The VAX-11/750 ran VAX/VMS V2.0,
and yes, ours had a card reader. The thing that was revolutionary to me
was the Vax could run our programs interactively; we didn't have to feed
them in via punch cards or execute them in "background" (IBM's OS/360 term for
batch processing). We could also load data in from tape and write it to
disk. 71 megabytes seemed like an ocean of space since we were used to only
having 5 or 10 megabytes of "scratch" space per job.
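As an aside for the youngsters: the EBCDIC-to-Ascii translation itself is no big mystery. Here's the same idea in modern Python, which ships the common US EBCDIC code page as the cp037 codec; whether cp037 matches LIB$TRA_EBC_ASC byte-for-byte I won't swear to:

```python
# EBCDIC -> Ascii, the modern way: Python ships the cp037 (US EBCDIC)
# codec, so the "translation" is a one-liner each direction.
ebcdic_bytes = b"\xc8\x85\x93\x93\x96"   # "Hello" in EBCDIC

text = ebcdic_bytes.decode("cp037")      # EBCDIC bytes -> text
back = text.encode("cp037")              # round-trips to the same bytes
```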
In the IBM world, programming consisted of modifying decks of punch cards,
feeding them into the card reader (along with the deck of punch cards that
contained the compiler), and parsing the line printer output for
error messages. In VAX/VMS the programs lived in disk files, we could
modify them using a text editor, the compiler was always there via the
$ COBOL command, and error messages were immediately displayed
on our terminals. We could even run the programs right there and look at
the output before it was printed. We were in heaven!
Then, much like now, the life insurance industry's lifeblood was data. We took in reams of data each month from each of our clients and shipped back boxes and boxes of printed reports. We had a staff of keypunch operators who spent their entire day translating mountains of coded forms into boxes of punched cards. Once we got most of our applications converted to the Vax Mr-big-shot here thought it would be much more efficient if the keypunch girls could enter the data directly into the Vax. We could replace their 029 keypunch desks with a VT100 terminal!
Me and my big mouth. Change comes slowly in the life insurance world, and changing coding forms, especially for a 3rd party data processor, comes not at all. If a client had to go through the bother of learning a new set of coding forms he very well may decide to switch to a new DP vendor too. So, we could not change the data entry formats. There was also the small matter of retraining the keypunch operators. Today everybody is familiar with the ubiquitous PC keyboard with its numeric keypad, and the DEC VT100 terminals had the same format. That is, the numbers were laid out like this:
7 8 9
4 5 6
1 2 3
  0
The problem was, despite trying really hard to adapt to this keypad, the keypunch girls were totally used to the 029 keypad, which like telephones and adding machines, had the top and bottom rows switched:
1 2 3
4 5 6
7 8 9
  0
If we were going to phase out the keypunches, we had to find a way to modify the numeric keypad. The Keypunch Emulator was born. The VT100 terminal has the ability to switch the keypad into "application mode" whereby it transmits escape sequences instead of numbers to the host. No problem, I said, we can just intercept these escape sequences, translate them into the numbers the data entry folks expect, and display them.
It turned out to be just a tad bit harder than that, but with help from one of the other fellows here, we finally got it to work. Simon had custom keycaps made up for a bunch of terminals (due to the sloping nature of the keyboard, simply exchanging the 1,2,3 keys with 7,8,9 resulted in bruised fingers because the 7,8,9 keys were thicker than the 1,2,3 keys). The keypunch emulator eventually came to include just about every feature found on an IBM 029 keypunch. We supported virtual drum cards, and the tab key inserted the appropriate number of spaces to jump to the next stop. They used the left arrow key to go back, and a "rubout" key (literally a key that punched every hole in the column) to correct mistakes, so we had to emulate that too. And, of course, we had to emulate "verify" mode (after one keypunch operator had entered the data, another operator would rekey the same data in verify mode, which checked that the data on the cards matched the data being rekeyed).
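The heart of the emulator boils down to a lookup table. Here's a sketch of the idea in Python (the real program was a compiled VMS application, certainly not Python, and this ignores drum cards, verify mode, and all the rest):

```python
# In VT100 "application mode" each keypad key sends an escape sequence
# (ESC O p through ESC O y for the digits 0-9) instead of the digit
# itself.  We catch the sequence and emit the digit an IBM 029 layout
# has in that physical position -- top and bottom rows swapped.

ESC = "\x1b"

# application-mode keypad sequence -> the digit on a stock VT100
VT100_APP_KEYPAD = {ESC + "O" + chr(ord("p") + d): str(d) for d in range(10)}

# VT100 digit -> digit under the 029-style keycaps (1<->7, 2<->8, 3<->9)
SWAP_029 = {"1": "7", "2": "8", "3": "9", "7": "1", "8": "2", "9": "3"}

def translate(seq):
    """Turn one application-mode keypad sequence into the 029-style digit."""
    digit = VT100_APP_KEYPAD.get(seq)
    if digit is None:
        return seq  # not a keypad sequence; pass it through untouched
    return SWAP_029.get(digit, digit)
```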
We don't actually use the keypunch emulator anymore. I think the last time it was needed was back around 1990. But, here's why VMS is so great. I just pulled it up. We've switched Vaxen (we upgraded the 750 to a 3400, and then to a 4100, and migrated to Alpha (AS2100, ES40, and DS20) and Itanium (rx2600)). The keypunch emulator still works, even on my X-windows display that's pretending to be a VT100. The original compiled version (developed on VAX/VMS V2.4) runs unmodified on OpenVMS Vax V7.3. I ran it through the Alpha binary translator (VEST) and it ran fine on OpenVMS Alpha V8.3. I then took the translated Alpha program, ran it through the AEST Itanium binary translator. It translated fine, but didn't run correctly. Hmmm, this is supposed to work. A little time with the debugger exposed an error handling bug that had lain dormant for 25 years. So, as the saying goes, I "used the source, Luke", recompiled, and now it works on Itanium too.
Yes, time and technology have marched on, yet VMS (now OpenVMS) is still going strong. I chalk that up to its robust feature set, and the foresight of its developers. VMS has adapted to the 21st century while still preserving the legacy of the original Vax. The same Alpha system that ran my old keypunch emulator is also running the web server hosting this blog, using software written in Perl (a language that didn't exist 30 years ago). It's connected to a SAN array with 2.5 terabytes of storage (a 3½ million percent increase over the disk space on our first Vax). We still do all of our life insurance data processing on VMS, only now the applications are web-enabled, the magtapes have been replaced by XML files, and the output is produced in PDF format and emailed worldwide.
HP is still deeply committed to OpenVMS development; they have announced plans
for updates and enhancements through 2012. If they stay true to form, some time
next year, we'll see the release of an update schedule for 2013. Taped to my
office wall is a bumper sticker that DEC put out for the 20th
anniversary of VAX/VMS. The tag line is "Nothing Stops It". They weren't kidding.
The client who wanted us to take in an XML file and use it in our application has now asked if we can format one of our outputs in XML. It sounds simple enough; I just have to take the RMS record, interpret it using the CDD record definition, and output the field values surrounded by XML tags. Which of course means formatting each field as a text string, and deciding how many decimal places they'll need, and making sure I consistently round the values.
The original file is from an old Cobol application. The records use the
REDEFINES and OCCURS DEPENDING ON clauses. Ugh.
I guess I'll just output all instances of redefined fields, and for the
array elements, I'll have to add a tag with the array index.
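To make the plan concrete, here's its shape in Python; the field names and layouts are invented, since the real ones come out of the CDD:

```python
# Format each field as text (applying the implied decimal places and
# rounding consistently) and wrap the value in an XML tag derived
# from the field name.
from decimal import Decimal, ROUND_HALF_UP
from xml.sax.saxutils import escape

# (field name, raw stored value, implied decimal places) -- invented
FIELDS = [("POLICY-NO", "00123456", 0),
          ("PREMIUM",   "1234567",  2)]   # 1234567 with 2 implied places

def to_xml(fields):
    parts = []
    for name, raw, places in fields:
        if places:
            quantum = Decimal(1).scaleb(-places)   # e.g. 0.01 for 2 places
            value = str(Decimal(raw).scaleb(-places)
                        .quantize(quantum, ROUND_HALF_UP))
        else:
            value = raw
        tag = name.lower().replace("-", "_")       # XML-friendly tag name
        parts.append("<%s>%s</%s>" % (tag, escape(value), tag))
    return "\n".join(parts)
```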
It's too bad Datatrieve doesn't know how to output XML from the Report
Writer. That would make my life a lot easier!
One of our clients proposed sending some data to us on a nightly basis using XML. All I really wanted was a flat file, with comma separated fields, but they insisted they could only send me XML.
I've heard about XML, and most of our web-enabled applications know how to use it, but our Life Insurance Administration system runs on VMS, and we've never had to read an XML file there.
OK, then, let's go find out how to parse XML on VMS. HP has a free download of XML toolkits for OpenVMS Alpha (and IA64) based on Java and C++. We've got C++, so I downloaded that kit and installed it on OpenVMS 8.3. And, then the real fun began.
The documentation is very Unix-centric. Aside from the installation instructions, there is virtually no VMS-specific info to be found on HP's web site. HP spends a lot of web site space telling me all about how they can support "Xalan" and "Xerces" (but not what "Xalan" and "Xerces" actually are). Some time with Google and I finally got the concept that Xalan and Xerces are programming APIs, and that there is an Xalan utility which can translate XML.
So, I search the XML installation directory tree, and find XalanTransform.exe. Hmmm. That sounds like the ticket, so I define a foreign command for it and ask for help in the typical Unix fashion:
$ xalan :== $xalan-c$root:[c.bin]XalanTransform
$ xalan -h
Usage: XalanTransform XMLFileName XSLFileName [OutFileName]
What is XSLFileName?, I hear you cry. Ah, XSL is the markup language used to describe XML! Hey, you ask a silly question, you get a silly answer.
Back to Google. XSL (eXtensible Stylesheet Language) is another type of XML document that describes how to process the data in the XML document. I read this tutorial on XSLT (XSLT = XSL Transformations).
But, when I tried to practice with their examples using XalanTransform on VMS, it didn't work. All it did was output my XSL file. The XML toolkit came with some samples too, and they worked. What was different about my XSL and XML files?
VMS file attributes, that's what. In Unix-ville, every line in a file ends with a "newline" character (what we VMS folks call a "linefeed"). Sure enough, the samples from the XML distribution all had record attributes of "stream_lf". Which is not the default on VMS. I had created my files with EDT, where "Variable" is the default, and with DCL, where the default is "VFC, 2 byte header". Thirty seconds with the FDL editor, and I had converted the "variable" format records into "stream_lf", and the examples from the tutorial were working!
I cobbled together an XSL file for the XML data being sent by our client, and successfully output a flat file, with comma separated data. I wrote a quick Basic test program to verify I could read this flat file, and all was well.
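For anyone facing the same job without a Vax handy, the flatten-to-CSV part is a few lines in any modern scripting language. A Python sketch, with made-up tag names (the real record layout is, of course, rather bigger):

```python
# Flatten a simple XML document into comma separated records using
# nothing but the Python standard library.
import csv
import io
import xml.etree.ElementTree as ET

XML = """<policies>
  <policy><number>A100</number><premium>12.50</premium></policy>
  <policy><number>A101</number><premium>9.75</premium></policy>
</policies>"""

def xml_to_csv(text):
    out = io.StringIO()
    writer = csv.writer(out)
    for policy in ET.fromstring(text).iter("policy"):
        writer.writerow([policy.findtext("number"),
                         policy.findtext("premium")])
    return out.getvalue()
```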
Tomorrow I have to figure out how to fetch the XML file from our client using
something called SOAP.
OK, I'll admit it. For purely sentimental reasons, I wanted to keep all of this stuff running on Wyvax. I've had this workstation since 1991. I like it. So, I upgraded to VMS 7.3. I installed the latest good version of Perl for Vax (5.6.1). I tried to get the "optimized" Perl engine for the OSU web server to work too.
The performance of the blog script was not impressive. Wyvax was a good machine in its day. But, its day has passed.
I installed the OSU web server (V3.10A) on Ceres, an AlphaServer DS20, and I got the latest supported PCSI install kit for Perl from HP. Then, I copied the Blosxom script over there too. It runs like gangbusters. You probably noticed how quickly the blog came up. And, that annoying Content-type: HASH line is no longer there either when viewed in MSIE.
So, I reconfigured the web server on Wyvax to redirect to Ceres. For you, my faithful web site reader, the change will be transparent. You'll ask Wyvax for my web pages, and he'll tell your browser to go ask Ceres instead. Ceres will send you all that you have requested, and it will do it a whole lot faster than Wyvax ever could.
Wyvax will still live on, mostly as an X terminal for me.
If you're viewing this with Microsoft Internet Explorer, you're probably wondering what that "Content-type: HASH(0x3d743c)" line is doing up there.
Beats the crap out of me. I can't figure out where it's coming from. Neither can Jack. But at least the blog postings display correctly now.
As for how I got it to display at all, I hacked the Perl script engine to insert a CR LF combo in front of the script output if the browser is MSIE.
UPDATE 31 May 2007 11:21:
Geeky stuff for folks who care about how I got Blosxom correctly configured on VMS.
The head.html file needed a <!DOCTYPE> tag, and a meta tag with "content-type" set to "text/html". This is what fixed the MSIE display problems.
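For anyone else who hits this, the top of my head.html now looks roughly like this (just the opening fragment; Blosxom supplies the rest of the page from the other flavour files, and the charset is whatever your pages use):

```
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=iso-8859-1">
```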
The other problem I had was getting plugins to work. Blosxom has an extensive
library of useful Perl macros that enhance its functionality. The basic idea
is you put the plugin code into a directory, and name the file "plugin" (no
file extension). Blosxom reads this directory, and uses Unix "grep" magic to
grab the file names. The problem is, VMS file names don't look like Unix
file names, and VMS always puts a "." after the file
name. So the Unix file /lib/blosxom/plugins/update winds up as
update. on VMS. The Perl code which strips out the plugin
name (update) was choking on the "." character. Ugh. So, I got rid of the
grep code, thereby grabbing all the files in the /plugin directory (yes, I know
it's a cheesy solution - it's my plugin directory and I can make sure that
only plugins go in it). Then I put in some Perl code to strip the trailing
"." from the file name.
Voila! The plugins work now. This update is brought to you by the "update" plugin.
This blog displays fine with Firefox. Which is what I always use. I realize that lots of other folks use MS Internet Explorer. So, I tried viewing it with MS IE Version 7. No joy.
It thinks the blog is some kind of file that it has to save to disk. The content type header is "text/html". I think it has something to do with IE wanting to interpret pages using its own internal table of content types.
I'm going to have to ask Jack for help. Sigh.