PDF Printing with Headless Chrome
I’ve got docs for a bunch of products in Jekyll, written mostly in Markdown, but also some HTML output from DITA and Flare. It works great, but doc reviews are a nightmare. We typically use Acrobat shared reviews for doc review, which mostly works, but the question became how to make PDFs of this stuff, especially the Markdown.
There are a lot of obvious and not-so-obvious half-answers here. Like, why not save as PDF from your Markdown editor? (You don’t get any of the templating from Jekyll, and none of the variables show up. You also have to do it from each file, by hand.) I spent years chasing a good solution for this, and tried Pandoc, Prince, PrawnPDF, wkhtmltopdf, dita-ot, and gotenberg, hitting some roadblock with each one.
My short-term solution was starting Jekyll locally, going to each page in a browser, doing a Ctrl-P, and printing to a PDF. For every page. Every single page. For every review. That was obviously not a great way to handle it.
This is when I found something genius: Headless Chrome.
Basically, headless Chrome runs the Chrome browser in the background, and then you can throw commands at it and get the output. If you ever used the venerable DZBatcher back in the day for non-interactive FrameMaker, it’s a similar concept. If you’re a Node programmer, there’s a bunch of crazy stuff you can do with it. But there’s also a command-line interface where you can do some basic stuff from a shell script.
If you have a version of Chrome newer than mid-2017, you’re set. You can do something like this:
chrome --headless --disable-gpu --print-to-pdf https://www.jonkonrath.com/
Another fun command is `--screenshot`, which will dump the page to a PNG.

There are a bunch of caveats, and Chrome throws almost no logging or error messages, so it's maddening trying to debug this. So keep the following in mind:
- Headless Chrome uses the location of your Chrome binary as the working directory, and tries to save files there. If you’re using Windows, chances are you can’t write files in `C:\Program Files (x86)`, but it won’t tell you that if it fails. And if it does work, it will pollute that directory with output as you wonder where it’s putting the PDFs. You can specify an output file with `--print-to-pdf="/full/path/file.pdf"` and give it a full absolute path there.
- There is an `--enable-logging` switch, which is vaguely helpful.
- The URL you specify has to have a protocol in front of it (http:// or https://) or it will silently fail.
- If you do something wrong, Chrome will go sideways, but keep running in the background without complaint. You’ll need to stop every Chrome process to get it to work again.
I can’t publish the Windows CMD script I wrote for two reasons: one, it belongs to my employer because they pay me, and two, it’s the worst piece of garbage you’ve ever seen. The five lines of code I had to write to generate a timestamped directory look like one of my cats slept on my keyboard for 45 minutes. Anyway, what I basically did:
- Pass in a .txt file with a list of URLs in the order I eventually want the output.
- Create a new timestamped directory.
- Loop through the text file, and run the print-to-pdf command once per line, dumping them into the timestamped directory.
When I named the output files, I stripped the protocol and hostname from the URL, numbered the files, and replaced the slashes in the path with dashes. So if I printed http://example.com/this/is/the/index.html, it would create 1-this-is-the-index.html. Keeping the path in the name is so duplicate files don’t overwrite each other.
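I can’t share the real script, but here’s a rough Python sketch of the same idea. It’s illustrative only, not the CMD script I actually wrote; the `chrome` binary name, the output directory naming, and the file-naming details are all assumptions you’d adjust for your own setup.

```python
import subprocess
import sys
from datetime import datetime
from pathlib import Path
from urllib.parse import urlparse

CHROME = "chrome"  # assumption: the Chrome binary name/path on your machine


def main(url_list_file):
    # Create a new timestamped directory for this round of review PDFs.
    outdir = Path(f"review-{datetime.now():%Y%m%d-%H%M%S}")
    outdir.mkdir()

    urls = [line.strip() for line in Path(url_list_file).read_text().splitlines()
            if line.strip()]

    for i, url in enumerate(urls, start=1):
        # Strip the protocol and hostname, replace slashes with dashes, and
        # number the file so duplicates don't collide and review order is kept.
        path = urlparse(url).path.strip("/") or "index"
        target = (outdir / f"{i}-{path.replace('/', '-')}.pdf").resolve()

        # Chrome silently fails without a protocol on the URL, and it wants
        # an absolute path for the output file, so pass both explicitly.
        subprocess.run(
            [CHROME, "--headless", "--disable-gpu",
             f"--print-to-pdf={target}", url],
            check=True,
        )


if __name__ == "__main__":
    main(sys.argv[1])
```

You’d run it as something like `python make-review-pdfs.py file-list.txt` with Jekyll already serving the site locally.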
Then when I have that directory of PDFs, I Ctrl-A, right-click, and combine the files into a PDF, then run a shared review on that. The beauty is that when I do round two of the review, I simply run the script again, and the files are in the same order. (Otherwise, the files are going to be in alphabetical order, which probably isn’t what your reviewers want.)
A handy way to get that file started is with a `dir /b > file-list.txt` from the command line in your top-level source directory. Don’t forget to edit out the images and directory names from your list.

This isn’t a great way to do production-ready PDFs, and you’ll have no good control over headers, footers, page layout, or anything else. Also, all of the links in the PDF won’t work right; they’ll go to localhost if you’re running Jekyll locally. But it’s close enough for review purposes.
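If you’d rather script that list-building step too, here’s a rough Python sketch. The `localhost:4000` address and the `.md`-to-`.html` mapping are assumptions that depend on your Jekyll config and permalink settings, so treat it as a starting point:

```python
from pathlib import Path

SOURCE = Path.cwd()                 # run from your top-level Jekyll source dir
BASE = "http://localhost:4000"      # Jekyll's default local server address

with open("file-list.txt", "w") as out:
    for page in sorted(SOURCE.rglob("*")):
        rel = page.relative_to(SOURCE)
        # Skip Jekyll's underscore directories (_site, _layouts, ...) and
        # anything hidden, which also weeds out images and build output.
        if any(part.startswith(("_", ".")) for part in rel.parts):
            continue
        if page.suffix.lower() in (".md", ".markdown", ".html"):
            # Assumes the default source-to-output mapping; adjust for
            # your permalink settings.
            out.write(f"{BASE}/{rel.with_suffix('.html').as_posix()}\n")
```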
More info on Headless Chrome: https://developers.google.com/web/updates/2017/04/headless-chrome
xml.etree.ElementTree and OS randomness
OK, I’ll start by doing something you aren’t supposed to do: parse HTML using a bunch of regular expressions in Python. Don’t do this. Life is too short. Use something like BeautifulSoup. I couldn’t, because I had to use standard libraries only, so here I am.
Anyway, my program manipulated a tree of XHTML using `xml.etree.ElementTree`. Typical stuff like any tutorial on it:

```python
import xml.etree.ElementTree as ET
...
root = ET.parse(htmlfname).getroot()
bod = root.find('body')
...
```
Then after a bunch of scraping, I used a regex like this to get the address in the href of the link:
anchor_addr = re.search("<a href=\"([^\"]*)\"",anchor_base).group(1)
(I’m converting HTML to Markdown. Don’t ask why.)
Anyway, this worked fine on Windows. I pushed the code, a Unix machine in the CI/CD pipeline built it, and it failed on this line. The match would fail, so there was no `.group(1)` to call and `anchor_addr` never got set, because `re.search()` was returning a `NoneType`.

At first, I thought it was the typical problem with linefeeds, like my code was splitting strings into arrays with `\n` and on Windows it had `\r\n`. After messing around with that, I found it wasn’t the case.
Here’s the problem: `xml.etree.ElementTree` uses a dictionary to store the attributes of an element it parses, and Python dictionaries are unordered, or at least they can be; whether they keep order is an implementation detail (insertion order wasn’t guaranteed until Python 3.7). It looks like the version of Python I was using on Windows was keeping the attributes in document order, but the ones I was using on my home Mac and on this Unix build machine were returning them alphabetically. So `<a href="foo" alt="hi">` was becoming `<a alt="hi" href="foo">` and breaking my regexp.

My code didn’t really need to find the entire element and pull the value of the attribute, because it was already inside the element. So I was able to change that regexp to `"href=\"([^\"]*)\""`, and that worked, provided it was never `HREF` or `HREF='foo'`.

Long story short, don’t use regular expressions to parse HTML.
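For what it’s worth, if you still have the anchor in hand as an ElementTree element, you can skip the regex entirely and read the attribute directly, which sidesteps the attribute-ordering problem. A minimal sketch, with hypothetical variable names rather than anything from my actual script:

```python
# Minimal, hypothetical sketch -- not the conversion script itself. Reading
# the attribute off the element is immune to whatever order the attributes
# come back in.
import xml.etree.ElementTree as ET

snippet = '<p>See <a alt="hi" href="foo.html">this page</a> for details.</p>'
para = ET.fromstring(snippet)          # parse a well-formed XHTML fragment
anchor = para.find('a')                # grab the <a> element
anchor_addr = anchor.get('href')       # 'foo.html', regardless of order
print(anchor_addr)
```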
A few quick updates
Five years, five updates. Sounds about right.
A few quick updates, in the interest of posting more than once a year:
- I’ve been managing other writers for almost four years now, so I should probably change my byline to reflect that. I’m still writing and working on tool/architecture stuff. But at some point, I should write down some snappy articles with management tips and tricks. Maybe some stuff on synergy and paradigm shifts.
- My byline also mentioned being a FrameMaker nerd. I’ve barely used FrameMaker in the last five years, except to get stuff out of it. Aside from a few things still lingering in `*.fm` files, most of my company uses DITA, and some of us use Markdown and Jekyll. But the eventual plan is to move everything to MadCap Flare. So maybe I’ll write about that more later. For now, strike FrameMaker from the byline.
- I’ve been doing some programming lately, mostly in the quest to get a bunch of DITA converted over to Flare. Whenever I’ve run into a conversion or production problem that required custom code, I’ve always said “I’m not a programmer” and then spent months trying to get developer time to fix it. I think it’s time I change that to “I’m not a very good programmer” and work on improving that. As of late, that’s involved a lot of Python, so you may see some posts on that.
Of course, I’m saying all of this, and I may not post again until 2024 because I’m so busy. But, stay tuned.
Git choose-your-own-adventure
I use git on a daily basis. Not to age myself, but I went from rcs to cvs to svn to git, with some Perforce and SourceSafe sprinkled in there, so I have a long background in source control. But git is just different enough to throw me sometimes. And for whatever reason, a lot of git documentation gets a bit, well, academic. It’s hard to find a quick answer sometimes, and the answers are often buried in very theoretical arguments about whether or not git submodules are essential or the worst thing since Agent Orange.
Anyway, here’s my go-to for answers when I break something: http://sethrobertson.github.io/GitFixUm/fixup.html
There are two things I like about this page. First, it gives you answers without a lot of guff. Less is more, etc.
The second thing is that it’s presented as a choose-your-own-adventure. Are you trying to do this? Did you do this? Do you want to do this? It’s a simple troubleshooting flowchart, something I’ve written in docs before, and something you’ll run into every time you call tech support. (There it gets a bad rap because the first steps are always “did you try restarting?” and that seems insulting. It’s also the problem 90% of the time. That’s another topic, though.)
But as a kid who grew up devouring Bantam books from the book fair at my grade school, and memorizing _The Cave of Time_ way back then, there’s something very intuitive and appealing about this form of docs. I’m not going to say this makes a git branching disaster (where you think you lost everything) fun, but it makes it easier to drill through the problem and find a solution. I don’t really have any docs at work where I could pull something like this, but as docs in general become more informal and user-oriented, this would be a great format to use.
Contributing to Open Source
One of the questions I’m always asked is how to get experience in tech writing. It’s a chicken/egg problem: you need a job to get experience, but you need experience to get a job. You can attend a code camp or complete a certificate program to get some real-world experience to add to your resume, but what I always tell people is to get started with open-source software.
There is a lot of open-source software out there these days. I got started in computing before this was the case, but was an early adopter of the Linux OS in the early 90s, which changed everything. Now, entire communities have sprung up around the development of operating systems, servers, programming languages, and desktop software. Tools like GitHub have centralized participation, putting source code in one place and providing tools for quick communication about projects, instead of crufty old mailing lists. It’s made it very easy to explore projects and contribute to them.
Here’s a great article about how to get started on this:
This example is specifically about a developer contributing to Node.js. But there are a lot of opportunities for tech writers, because documentation for projects is not always that great. The only catch is that there aren’t always that many documentation bugs listed in a project’s issue tracker. It requires hunting down a project you like (check out the “explore” section of GitHub to browse through things) and deciding what needs improvement.
Overall, it’s a great way to gain experience, and also improve tools that everyone will use. And, maybe you’ll someday get a job out of it.