
Paul Lindner


1 min read

My wife Julie passed away from breast cancer last month. Today I published her obituary in the Star Tribune, San Francisco Chronicle, the Rochester Post Bulletin and at

We knew right away that we completed each other and fell deeper in love for 27 years. She meant everything to me.

Thank you for bearing witness to Julie’s life, for the condolences and for honoring Julie's memory.


Paul Lindner

Game Services and Digital Preservation

2 min read

I think it's time for a Legal Deposit scheme for Games.

- Game publishers would put their games in Escrow when they publish.  
- Game Services could publish a spec on how to interpret the game contents.
- 'Orphan' games would actually be preserved.
- Users that purchased the Game would then be entitled to a copy of the escrowed item, plus the design on how to run them.

This, combined with an export of user-generated data would allow for usability after Stadia or the Game Publisher sunsets the service/game.

And to be honest I'd love to see this extended to all Online "Stores" that don't let you export usable contents.

Barring that, Game Services could enter a Ulysses Pact with users if they are serious about the long haul.

For each purchase a user makes, put 10x in a locked escrow fund.  When the service cancels, that money can be used to migrate the games to a new provider or to pay the user back.

- If a Game Service gets few users, it's not a lot of money to exit, and it would actually increase satisfaction.
- If a Game Service does get popular then there's an explicit feedback loop that reinforces the durability of the system and alignment of interests.
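The mechanics above can be sketched in a few lines (a toy model: the 10x multiple comes from the proposal itself, while the class and method names are invented for illustration):

```python
class EscrowFund:
    """Toy model of the escrow pact: for every purchase, the service
    locks a multiple of the price in a fund that is only released to
    users (or to a migration effort) if the service shuts down."""

    def __init__(self, multiple=10):
        self.multiple = multiple
        self.balances = {}  # user -> escrowed amount

    def record_purchase(self, user, price):
        # Lock 10x the purchase price, per the pact above.
        self.balances[user] = self.balances.get(user, 0.0) + price * self.multiple

    def wind_down(self):
        # On shutdown the fund pays out (or funds a migration); it never
        # flows back to the service while it operates.
        payouts, self.balances = self.balances, {}
        return payouts
```

The asymmetry is the point: while the service runs, the fund only grows, which is exactly the "explicit feedback loop" described above.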

Evernote announced something like this but never really followed through.  A small company called Forever actually does have a purpose-driven preservation fund.

Paul Lindner


4 min read

Sit back a spell and let me tell you a story about the Association of Concerned Employees (ACEs).

Back in 1991 the academic computing teams at the University of Minnesota were set to be laid off and we could "apply" for a job with the new quasi-private sector outfit the "Minnesota Supercomputer Center" (MSC).
None of us liked that.  Over 300 of us set up an effort to stop it.  The VAX/Unix/MVS/PC/CDC units stopped fighting for crumbs and joined forces.  Letters to the editor were written, politicians were contacted, petitions were circulated.

Mailing lists, and even a BBS were put into use to coordinate.

Even, ahem, a listening device was placed in the Board of Regents office.

We were not in a union but AFSCME supported our demands and upped the pressure.

The efforts worked.  The privatization was called off.  There were job losses, but we had a voice that we used to make the best of the situation.  We had input into how we could reorganize the units and better support the students and faculty.

Oh and we finally got the audit of the corrupt MSC a few years later:

`U' backs off of plan to privatize computer services
Published: October 23, 1991
By Jim Dawson; Staff Writer

Intense pressure from many of the 330 civil service employees who operate the University of Minnesota's computer systems apparently has forced school officials to back away from a plan to privatize computer services and place them under the Supercomputer Center.

Ettore Infante, vice president for academic affairs, who announced the privatization plan last week, met with computer workers. He told them that because of concerns regarding his original plan, a reorganization of computer services would occur "without the involvement of the Minnesota Supercomputer Center or a subsidiary of it."

Instead, Infante said, an outside consultant will be hired to determine the best way to consolidate and reorganize the university's several computer service centers.

About half of the 330 computer specialists would have been laid off at the end of the year under Infante's privatization plan. There will probably be layoffs under any new plan, but how many and when hasn't been determined.

Infante's privatization announcement caught the computer specialists by surprise last week, but they quickly used a computerized electronic mail network to organize their opposition. Their main objection focused on the involvement of the Supercomputer Center.

The center, a quasiprivate corporation partially owned by the university, is not subject to public accounting. Gov. Arne Carlson recently cut $8 million in state funding from the center's budget, and many of the computer specialists believed that Infante's move was simply a way to funnel new funds into the center.

Infante denied that charge and cited the inefficient, outdated computer systems and networks throughout the university as his reason for consolidation.

His move yesterday was welcomed by most employees, but many remained skeptical of his motives. "I was encouraged that they seem to be backing down," said Cheryl Vollhaber, a specialist with academic computing services.

The employees demanded to be involved in the planning for consolidating the computer systems, something most agree is badly needed. However, Infante was noncommittal about employee participation.

The computer specialists said that they have been calling for a consolidation and reorganization of computer services for a long time, but that the administration has ignored them. They are frustrated, several said, because although they are the computer experts, they are not being consulted.

"Our focus will be having an employee representative on the planning board," said Stephen Collins, of the university's micro-computer center.


Paul Lindner

When Pong played Humans

3 min read

It was a blistering July day in Las Vegas, with temps hitting 109.  Inside the SIGGRAPH 91 convention hall Yello's Rubberbandman looped on the speakers. On each chair: a red/green paddle.

I was a student volunteer, stamping the finest hands in Computer Graphics.  Those hands (and my own) each controlled those paddles.  Then 5000 people looked up and saw a Pong Game appear on the screen.

And then... the machine started playing us.

In response to visual stimuli we changed the color of our paddles.  The ball moved left, then right.  The crowd shouted "red red red" and "green!", cheering as the game played on.

The rules of the game and the feedback loops directed our actions.  It was a complex adaptive system with emergent behavior.
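That loop can be sketched as a toy simulation (not Carpenter's actual rig; the crowd size, error rate, and vote rule are invented): each tick, everyone votes with their paddle color, and the majority steers the on-screen paddle toward the ball.

```python
import random

def crowd_vote(ball_y, paddle_y, n_people=5000, error_rate=0.1):
    """One tick of the loop: each person flips their paddle to vote the
    on-screen paddle up or down; most vote toward the ball, a few don't."""
    toward = 1 if ball_y > paddle_y else -1
    tally = 0
    for _ in range(n_people):
        tally += toward if random.random() >= error_rate else -toward
    return 1 if tally > 0 else (-1 if tally < 0 else 0)

def play(steps=30, ball_y=0, error_rate=0.1):
    """The emergent behavior: the paddle converges on the ball even
    though no single individual controls it."""
    paddle_y = 20
    for _ in range(steps):
        paddle_y += crowd_vote(ball_y, paddle_y, error_rate=error_rate)
    return paddle_y
```

Even with ten percent of the crowd voting the "wrong" way, the aggregate tracks the ball reliably, which is why the demo felt like a single organism.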

And luckily there is some footage of this moment.  Watch this excerpt from "Machines of Loving Grace" that talks about this moment in history:

Loren Carpenter Experiment at SIGGRAPH '91 from Zachary Murray on Vimeo.

Loren Carpenter cofounded Pixar.  Check out the TurboGopher appearance at the 5:00 minute mark.

Today the simple Pong game has become the multilayered technological environment we interact with daily. Instead of red/green paddles carrying one bit of data, we carry phones that generate a wealth more.  These devices also provide the aural, visual and haptic stimuli.  With that, our collective actions power all kinds of "games" today:

  • Aggregated location data and movement speed generates traffic data in maps.
  • Aggregated search queries and click data deliver better search results.
  • Aggregated likes, views and interactions with content power trending data and even news and politics.

As technologists we need to remember that by controlling the game, we are indirectly controlling the players.  The choices we allow (and forbid) define the behavior.  The game "plays" the player.  And often the only way to be free is to not play at all.

Except, that is, if maybe, just maybe, the people start playing a different game than the one we designed.  In the giddy demonstration it was assumed that people wanted to win at Pong.  But we didn't play long enough for abuse or scheming to emerge.  It would have only taken a few people crossing over to sabotage the other side, or a few trolls, to change the outcome.

Finally, this level of power and control demands great responsibility.  The only thing worse than control used for malicious purposes is control wielded thoughtlessly, without considering the consequences.  So the next time you're designing a product, think about the whole system and all its inputs, and ask "who's really in control?".

h/t to the General Intellect Unit podcast and their Machines of Loving Grace episode for reminding me of this unsung moment in history.

Paul Lindner

Investing in a better Internet: Resonate, a music coop

4 min read

Do you want a better internet?  One that balances the needs of creators and consumers?  A more democratic internet?  I do.  That's why I'm investing in a music coop: Resonate.

Stream to Own

I've been a member-owner of Resonate for a while, and listen every day.  It provides an eclectic mix similar to a high quality college radio station.  At first glance Resonate is a streaming service like Soundcloud or Spotify.  But dig deeper and you'll find major differences:

  • You only pay for what you listen to.
  • Each listen debits your balance a small amount.
  • On the 9th listen you own the track. 

This tiered pricing model incentivizes discovery.  Owning actual tracks helps fans develop deeper ties to the music they love.
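As a rough sketch of how such a tiered schedule can work (the doubling curve and starting price here are illustrative assumptions, not Resonate's published rates):

```python
def play_cost(n, first_play=0.002):
    """Cost of the nth listen (1-based) under an illustrative
    doubling-per-play schedule; after the 9th play you own the track."""
    if n > 9:
        return 0.0  # owned: further listens are free
    return first_play * 2 ** (n - 1)

def total_paid(plays, first_play=0.002):
    """Cumulative amount paid after a given number of listens."""
    return sum(play_cost(n, first_play) for n in range(1, plays + 1))
```

The shape is what matters: early listens are nearly free, so sampling new music costs almost nothing, while repeated listens converge on a purchase.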

Stream to Own Model and Graph 

And I own more than just tracks.  My member share means that I own a portion of Resonate, I can vote on how the business is operated and at the end of the year I can share in the profits.

Over the past year Resonate has added more content, more features, and most importantly a sustainable organization where fans, musicians, employees and labels can work together towards common goals.   This is the kind of “cooperative internet” that I always imagined would emerge back in the pre-web era.


“Purpose above Profits”


"Purpose above Profits" was the slogan at REI as I shopped for the holidays.  It’s a reminder that REI is a Member Cooperative.  With my $20 lifetime membership I get dividends based on my purchases while supporting outdoor and environmental causes.  In 2016 REI gave back 70% of profits.

This is but one example of how Coops can offer sustainable services for the communities they serve.  Growing up I had electric power from a Coop.  When I lived in Switzerland, I shopped at a huge retail chain literally named “Coop”.  I currently use and support my Credit Union.

Overall Coop businesses are more sustainable, and are oriented to the long term interests of their member-owners.

But the growth of the Internet and the Web bypassed the cooperative model.  This despite the fact that open source and much of the shared internet infrastructure are structured like coops.  It wasn't until 2014 that the concept of Platform Cooperative was coined.   The rise of pseudo-"sharing" platforms like Uber and AirBnB and the rise of decentralized technologies like blockchains were two key reasons that many now embrace the concept.


Early Stage Capital

But a problem emerges: how do you bootstrap a Cooperative where there are significant barriers to entry?  That’s where Supporter Shares come in.  Anyone can invest in these shares.  Each year the co-op sets aside 10% of profits and issues dividends to Supporter Share owners.

Resonate Voting Diagram

But remember that Supporter Shares don't get you extra voting power.  A cooperative is still one-person, one-vote.  The upside is that there are no leveraged buyouts, no dual share structures or non-voting shares.


The Future Internet

The Internet I want is a democratic one where creators, consumers, supporters and employees can work together towards common, sustainable goals.  By using and investing in Resonate I hope to advance those goals.  Liz Pelly captured the sentiment in "Protest Platforms" that "Resonate is particularly interesting for the way it advocates for broad decentralization of data, power, and money in music".

The Resonate Project Map details where the project is going and the plan to achieve it.  I’ll admit that the content catalog is small (but growing!) and the technology is very beta (but improving!).  I still use and enjoy it every day.

I hope that you'll consider joining the coop as a member owner and see for yourself.  If you want to accelerate this type of work consider purchasing Supporter Shares.

And finally, I hope that you'll consider supporting a new generation of online platforms that include the same kind of values that Resonate promotes.  All while listening to and supporting the artists we love.

Paul Lindner

Moving your Google +1s to Pinboard

2 min read

So the +1 button on the web is riding off into the sunset.  But you can still make good use of the data that you've collected over the years via Google Takeout!  I like to keep my bookmarks in Pinboard, so here's how I did it and you can too.


1. Visit Google Takeout in your browser.  You'll see something like this:

2. Click Select None, then click on the checkmark next to +1s.

3. Scroll to the bottom and click Next

4. The next screen has some choices for file format.  Change if you want, but the defaults should be fine and will email you a link to a zip file you can download.

5. You'll receive an email with a link to the zip file.  Expand the file and you'll find something like this:


Import to Pinboard

Now that you have the +1s.html file you can import it to Pinboard.  (Or other sites that support the Netscape Bookmark file format)

1. Pinboard 'tags' imports with the name of the file.  I wanted to use the tag 'plusone' so I renamed my file from +1s.html to plusones.html

2. Next visit the Pinboard settings page, then click import.  You'll see something like this:

3. Click on the Choose File button, select your html file (in my case plusones.html) and click upload.

4. After a little bit of time Pinboard will have your imported bookmarks!  You can then view all of them based on the tag (plusones).  Click on the tag and you can browse/clean them up. Woohoo!


Other Places

Once you have the exported bookmark html file you can also import to other products.
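If your target tool wants a plain list of URLs instead, the export can be read with nothing but the standard library. A minimal sketch (the Netscape bookmark format stores each link in an `<A HREF="...">` tag; the sample markup below is made up):

```python
from html.parser import HTMLParser

class BookmarkParser(HTMLParser):
    """Minimal reader for the Netscape bookmark format that Takeout
    exports and Pinboard imports: collects (url, title) pairs."""

    def __init__(self):
        super().__init__()
        self.links = []       # list of (url, title) pairs
        self._current = None  # href of the <a> tag we're inside

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self._current = href

    def handle_data(self, data):
        # First text chunk after an <a href> is the bookmark title.
        if self._current:
            self.links.append((self._current, data.strip()))
            self._current = None

parser = BookmarkParser()
parser.feed('<DL><DT><A HREF="https://example.com/">Example</A>'
            '<DT><A HREF="https://example.org/">Org</A></DL>')
```

From there it's a short step to writing CSV, JSON, or whatever another bookmarking service accepts.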

Contact me if you have more.  I'll add them here!


Paul Lindner

The Mail Must Go Through - Decentralized Customer Service

1 min read

Some kudos to the US Postal Service.  I sent Express Mail to a PO Box for Saturday delivery.  Saturday comes and  I realize that the post office is only open from 8 to 10:30, but delivery is only guaranteed by 3pm.  Oops.

So I look up the Post Office and notice that a local number is available.  With skepticism I call it.  Three rings later I'm talking to a small-town Postmaster.  She knows the recipient, takes the tracking number and promises to call back.  15 minutes later she has found out where it is and promises to receive it after hours and deliver it.

Shocked, I ask what I can do to thank her.  Her response is simple: "The mail must go through!"

Paul Lindner

“Digital objects last forever—or five years, whichever comes first."

1 min read

“Digital objects last forever—or five years, whichever comes first."

You owe it to yourself to read "Through A Glass, Darkly: Technical, Policy, and Financial Actions to Avert the Coming Digital Dark Ages"  Saving the bits isn't enough.

Paul Lindner

Did someone say DNS DDoS Attack? Remembering PharmaMaster vs Blue Security, 2006

1 min read

Blue Security Graph

Yeah, I was there... Back in May of 2006 Typepad, LiveJournal and TuCows got taken down by a massive (at the time) DDoS.  I recall it was 2-4 Gbps of reflected DNS traffic.  Scott Berinato covered it pretty well in the Wired article Attack of the Bots.

For the record we were able to get back up using Akamai DNS Hosting, MCI/UUNet DDoS mitigations, and a cleverly placed GRE tunnel.  Oh and a bunch of great Ops work from Lisa Phillips, Matt Peterson, Peter Wohlers and others.  I think I still have the commemorative t-shirt we did with TuCows.

And here we are 10 years later.  Same stuff, yet in many ways worse.

It's high time we get to fixing the underlying protocols and infrastructure to make these types of attacks a thing of the past.  It's time to Redecentralize.

 [Fancy graph from: Netcraft, Blue Security Shuts Down, Citing DDoS Attacks]


Paul Lindner

The Whiz Kids - Tech Role Models of the 80s

2 min read

Reading this passage from Ready Player One[1] I was reminded of a major influence that I had all but forgotten:

It was a Friday night, and I was spending another solitary evening doing research, working my way through every episode of Whiz Kids, an early-’80s TV show about a teenage hacker who uses his computer skills to solve mysteries.  (Ready Player One, Ernest Cline, Chapter 18)

So I was prepared when I was recently asked "What brought you here?" (in relation to technology). My answer? ... The Whiz Kids. I can directly trace my interest in online services to that white-hat hacking, war dialing, speech synthesizing, BASIC programming gang of kids[2].  I can only hope that today's teens have something as good or better.

Trying to find the video also made me realize that YouTube is providing a vital preservation service.  You see, the Whiz Kids episodes were never released, not on DVD, not even on VHS. You won't find them in any library. Anywhere. But there they are, in 10-minute chunks[3], captured and uploaded off a grainy, noisy videotape recording.

Cultural Artifacts, preserved... for now.

  1. RP1, soon to be a major motion picture from Steven Spielberg.
  2. It was also probably the first time I ever heard about the NSA ("No one knows if they even exist")
  3. Here's the full playlist.
Image from IMDB

Paul Lindner

Slack no more. Why you should use and

3 min read

There's been a trend where open source projects start a Slack for team communication.  I understand why.  The Slack UI is refined, you get searchable, synced conversations on all devices and even emails when you're away.  Nice!  Except the price you pay is vendor lock-in and a closed source code base.  Plus aren't you fed up with creating dozens of Slack accounts, one for each project?  I know I am.

What if I told you there was an open alternative?  One that even included access to your favorite IRC channels? Well there is.  For the past month I've replaced Slack usage with (aka and and I am very, very happy with the results.  

Let's start with the UI.  Here's my Web UI right now:



On the left: rooms/channels. I've customized mine into high/low priority with full control over notification settings.

In the middle: the  IRC channel on Freenode.  Read/unread state is maintained on the server so I can easily switch to the Android or iOS app and participate there.

On the right: the member roster.  You can hide it, or use it to initiate direct messages.

And look, here's the same UI, on Android showing the Matrix HQ Room:

As you can see Riot supports video/audio calls using WebRTC and file upload too.  Works really well!

Did I mention that these super high quality clients are all open source?

So what about the underlying service?  Well, we're in luck.  The service is also well designed, fast, interoperable and open.  So what exactly is it?  From their FAQ:

Matrix’s initial goal is to fix the problem of fragmented IP communications: letting users message and call each other without having to care what app the other user is on - making it as easy as sending an email.

The longer term goal is for Matrix to act as a generic HTTP messaging and data synchronisation system for the whole web - allowing people, services and devices to easily communicate with each other, empowering users to own and control their data and select the services and vendors they want to use.

Bold and ambitious, and the FAQ has answers to some common questions like why not XMPP and more.

What all this means in practice is that anyone can run the Matrix protocol on their own servers.   Want your own private internal system?  Run your own server disconnected from the network.  Want your chats to stay on your own server?  Run your own, with the benefit of interoperating and communicating with other servers in the mesh.  Want to bridge to another chat system, like IRC?  Yes, you can.
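As a taste of that openness, here's a hedged sketch of posting a message straight to a homeserver over plain HTTP (the endpoint shape follows the r0 client-server spec; the homeserver, room id, access token, and transaction id below are placeholders, not real credentials):

```python
import json
import urllib.request

def build_send_request(homeserver, room_id, txn_id, text):
    """PUT .../send/m.room.message/{txnId} is the r0 client-server
    endpoint for sending a message event into a room."""
    url = (f"{homeserver}/_matrix/client/r0/rooms/{room_id}"
           f"/send/m.room.message/{txn_id}")
    body = {"msgtype": "m.text", "body": text}
    return url, body

def send_message(homeserver, token, room_id, txn_id, text):
    # Placeholder values; a real call needs a valid access token.
    url, body = build_send_request(homeserver, room_id, txn_id, text)
    req = urllib.request.Request(
        url, data=json.dumps(body).encode(), method="PUT",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"})
    return urllib.request.urlopen(req)  # response carries the event id
```

No SDK, no proprietary gateway: it's JSON over HTTP, which is exactly why anyone can implement a client or bridge.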

And the IRC integration is very, very good.  As you saw above, identity and channel state are carried through, and direct messages are supported. Offline for a while?  Scroll back to your unread indicator.  Or just check your email:

A Matrix notification shown in an email browser window

So there you have it.  An open system that enables chat.  A highly polished front end.  Full support for one to one and one-to-many conversations. Yes, it's beta, so there are some rough edges.

Give it a try.  You can find me at or just drop into some IRC channels, my nick is plindner.

Paul Lindner

1500 Word MTU has a POSSE: Week 2 Update

3 min read

I'm still pretty happy with my indieweb publishing experiment.

Content is flowing in all the right ways.  Posts end up as Posts.  Photos are uploaded native with backlinks. POSSE via just works.  You can see that polls Google+, and then saves what it finds back to the original post by sending Webmentions.  The result is a full archive of activity around this content.

Oh and cross posting to SoundCloud worked perfectly.  And so do embeds..


After a fix from the Known Team WebHooks are working.  I get a POST whenever content changes.  To test this out I send the URL to the Internet Archive Save Page.  Voila!  Instant archiving of my content.  [Next up, backups in IPFS]
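A minimal version of that hook might look like this (a sketch: the listening port and the assumption that the POST body is the bare permalink are mine, not Known's documented payload format):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

def save_page_url(page_url):
    """The Wayback Machine snapshots any URL appended to /save/."""
    return "https://web.archive.org/save/" + page_url

class ArchiveHook(BaseHTTPRequestHandler):
    """Tiny webhook receiver: when Known POSTs a changed permalink,
    ask the Internet Archive to snapshot it."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        permalink = self.rfile.read(length).decode().strip()
        urllib.request.urlopen(save_page_url(permalink))
        self.send_response(204)
        self.end_headers()

# To run: HTTPServer(("", 8080), ArchiveHook).serve_forever()
```

Adjust the payload parsing to whatever the hook actually sends; the archiving side is a single GET.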

I was able to set up the Known open source software on my own server.  Next step is to pull a backup from the hosted version I'm using so I can experiment further and contribute back to the project.

Mobile Posting via Chrome on Android is working well.  You can access the Camera and a rudimentary file picker.  HTML editing is workable, but not great.  I installed the Url Forward app so I can also have native sharing intents.



Of course there are some issues encountered...

Spelling errors mean you Publish Once, Edit Everywhere.  Or, if you messed up the URL, Publish Once, Delete Everywhere.

I tried using a native web mention to reply to another post, but it didn’t appear on the target site.  There wasn't any visible UX feedback.

I found that there’s no UI support for backdating posts.  Okay, I’ll try Micropub to post.  Nope, very rough implementations, but Quill seems nice.  Eventually I wrote a stub post in Wordpress, exported, imported and edited.  Phew!

But.. it appears that doesn’t syndicate to old posts like this.  Even when I went back and pointed links at each other.  I’ll have to followup on that.

Also, I lost the first version of this post due to a CSRF error since I left it sitting too long in the browser.  Oops.

TinyMCE is still a pain: it loves using &nbsp;, and CMD-9 is bound to <address>.  I might have to use Markdown instead.

I miss @ mentioning people, and wish there was a UI for that.

Native Google+ support in needs an API.


But still overall quite happy with the way this is going.  I hope you're enjoying the journey with me.


Paul Lindner

1500 Word MTU Experiment: Day #1

2 min read

End of day #1 with Known.  I'm quite pleased with the results.

Good Stuff

  • is awesome.  Having +1's, likes and comments consolidated is so nice.
  • Webhooks!  I'm thinking of writing one to automatically archive pages to
  • PuSH appears to be fully working.  Again, could extend things there..
  • Google+ renders images well.
  • The editor saves drafts.
  • Lightweight page editor should be useful.
  • AMP support is there (add ?_t=amp to any page).  Some validation issues, but it works.
  • Real anchor tags and hyperlinks.  No more writing [1] [2] in posts with multiple links (like lynx)

Rough Edges

  • The built-in Photo type doesn't send the permalink to Twitter, so now I have a weird post without context.  Flickr, Facebook working perfectly, might try another setting.
  • I need to get to writing a Google+ outbound connector.  I'm doing those by hand now.
  • TinyMCE sucks.  It has always sucked!  If only Medium would open source their editor.  At least markdown is an option.
  • Looks like syndicated Google+ links are using instead of
  • Some profile pics cloned from G+ are coming back with size 0.  This shows as broken images.
  • Long status posts have extra long permalink URLs.
  • Built-in analytics are weak.  Would rather avoid using GA for that.
  • Limited import options.  Will need to convert Typepad export file to Wordpress format.
  • Bulleted lists line-height is tight, tight, tight.

Overall I'm pretty happy and excited about getting more content in place.

And who knew that a post on SSL/TLS certs would be soooo exciting?


Screenshot of a Known Post



Paul Lindner

Welcome to 1500 Word MTU

2 min read

This is an experiment.  Can I take control of my online life and move it to a place where I have more control?  Can I pull my content out of multiple silos?  And can I import existing content from other platforms and keep it (somewhat) synced over time so I have a full record of my public online life?

We're going to find out...

The trigger for me was an article about my early days working with the Internet Gopher Community.  I had saved most of the email from back then and it was quite easy to reconstruct and remember what happened.  I don't think I'll have the luxury for much of what's happening recently.  The digital ephemera is spread out too far and wide to reconstruct and reflect.

To get there I'm experimenting with the hosted version of Known, a publishing platform that supports the things that matter to me.  I like that it's open source, interoperable and respectful of human effort -- it also supports a number of Indieweb technologies out of the box like WebMention, and to pull back content from the Silos.

So.. you're going to see more content in more places as I'll be syndicating out to Facebook, Twitter, LinkedIn and Google+.  And I'll be sharing more as I document this process.



Silos by Doc Searls / CC BY 2.0


Paul Lindner

Gopher 25 years on. Long, fun read

1 min read

Twenty-five years ago, a small band of programmers from the University of Minnesota ruled the internet. And then they didn’t.

 Gopher Team 

Read more at The rise and fall of the Gopher protocol via MinnPost


On: Google+, Facebook, LinkedIn

Paul Lindner

Social Search Part 1 - Connect All the Accounts

3 min read

Do you create content on the web?  Do you want to make that content eligible for inclusion in Google's new social search?  Of course you do! 

Read on for the first part in my series of tips and tricks on how to make social search work better for your content.

1: Connect All the Accounts.

Social search uses your Google identity plus your extended social graph to help you find personalized content.  The extended social graph is found via links everyone adds to their Google+ profile.  More links means more personalized data.

Connect and Verify the accounts you use across the web on the Connected Accounts settings page.  Then add these and other profile links on your Google+ profile.  Remember to add links to accounts across the web, places where you actually create content: your postings, comments, photos, videos and so on.

The best results come from two-way links so consider adding links back to your Google+ profile.  For best results paste in your Google+ profile and remove the /u/# and suffixes.  Your profile link should look like this:

I recently added links to my Google+ profile on these sites. I've included the direct link so you can too.  I'd love to know about more, just leave the site name and link in the comments!

And for those of you self-hosting your own blog or site, you can manually add a link back to your Google+ profile by editing your HTML markup.  Here's a simple example:

   <a rel="me" href="">
     My Google+ Profile
   </a>


The important part is the rel="me".  That tells Google that the linked page is your profile.
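To double-check your own markup, here's a small sketch that extracts every rel="me" link from a page using only the standard library (the sample profile URL below is made up):

```python
from html.parser import HTMLParser

class RelMeFinder(HTMLParser):
    """Collects the href of every <a> or <link> tag carrying rel="me"."""

    def __init__(self):
        super().__init__()
        self.profiles = []

    def handle_starttag(self, tag, attrs):
        if tag in ("a", "link"):
            a = dict(attrs)
            rels = (a.get("rel") or "").split()  # rel can hold several values
            if "me" in rels and a.get("href"):
                self.profiles.append(a["href"])

finder = RelMeFinder()
finder.feed('<a rel="me" href="https://plus.google.com/+Example">Me</a>')
```

Feed it your rendered page and confirm your profile URL shows up in `profiles`.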

That wraps it up for Part 1 -- stay tuned for Part 2 where I go over how to mark up authorship for your content!  Thanks for plussing!

Paul Lindner

Making the Internet Better - Google Edition

2 min read

I've been very fortunate in my career.  I've had many opportunities and been successful in making the Internet a better place for end-users and developers.  From the early days of Gopher to the mainstreaming of open-source at Red Hat to the rise of blogging at Six Apart and on to forming the social web with Opensocial -- I've been a part of many game-changing technologies first hand.   It's one of the most satisfying parts of my work.

That's why I'm happy to announce that I'm joining Google today.  My gut tells me that this is the right company, the right team, and the right time to contribute to and help define another major change that betters the internet and the entire world.

The decision to work for Google did not come easy.  My time at LinkedIn has been truly amazing. The people are smart, the technology is stellar and the opportunities to learn and contribute are limitless.   In the past year and a half the company doubled in size while the Platform team launched dozens of great new products and enhancements. I'm especially proud of the small parts that I played in helping launch LinkedIn's open developer program and am equally excited about a number of future projects that will launch in the near future.  I cherish the friendships and knowledge gained and will miss everyone there greatly.

I look forward to the exciting things that I'll be able to accomplish soon.  Here's to the next evolution and revolution!


Paul Lindner lives here now...

1 min read

I just completed exporting my Vox to Typepad. Quite a trip down memory lane; back to the golden age of blogging. I'm thinking kind thoughts for Six Apart right now -- I know this can't be an easy transition they're going through.

Paul Lindner

Fedora 12, Dracut, dmraid, mdadm, oh my!

3 min read

It appears that Fedora 12 moved to a new boot init system called dracut.  Sadly, due to a number of odd circumstances, this has caused me much pain.  Here's my basic config:

  • /boot and /  on /dev/sda
  • /var and /home on a partitioned software raid on /dev/sd{cd}

After a yum-based upgrade to Fedora 12 I rebooted.  We got to the point where the software raid initializes and boom: failure.  I'd seen this before; partitioned raid has always had some trouble in Fedora.  Previously I had to modify the rc.sysinit script to reset the raid partitions, so I tried that again, moving that init later in the boot sequence.  Reboot and yes, it works.

However, then I noticed some odd things.  I was only getting a single drive in my mirrored RAID.  Further investigation revealed that I had a device dm-1 instead of sdc or sdd listed in /proc/mdstat...  Uh oh.

Looking more closely, it appears that my drives were getting set up by dmraid as a fake-raid mirror:

# dmraid -r 
/dev/sdd: sil, "sil_aiabafajfgba", mirror, ok, 488395120 sectors, data@ 0
/dev/sdc: sil, "sil_aiabafajfgba", mirror, ok, 488395120 sectors, data@ 0

I tried adding the nodmraid option to grub.conf, but then the new dracut system started an infinite spew of messages generated by this mdadm error message string (lifted from Assemble.c):

fprintf(stderr, Name ": WARNING %s and %s appear"
" to have very similar superblocks.\n"
" If they are really different, "
"please --zero the superblock on one\n"
" If they are the same or overlap,"
" please remove one from %s.\n",
devices[best[i]].devname, devname,
inargv ? "the list" :
"the\n DEVICE list in mdadm.conf");

Drats! The mirrored fake raid had already mangled my second drive by duplicating the superblock!  Plus, since all this was going on in dracut, I couldn't fix it.  So I removed the nodmraid option in grub during boot and dug a little deeper. I found that I could keep dracut from doing all this nonsense by adding the following kernel options:

rd_NO_MD rd_NO_DM nodmraid
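
These go on the kernel line in grub.conf.  As a sketch, the stanza ends up looking something like this (the kernel version and root device here are illustrative, not necessarily mine):

```
title Fedora (2.6.31.5-127.fc12.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.31.5-127.fc12.x86_64 ro root=/dev/sda2 rd_NO_MD rd_NO_DM nodmraid
        initrd /initramfs-2.6.31.5-127.fc12.x86_64.img
```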

This allows a minimal boot without dmraid or mdadm.  After that I was dropped into single-user mode with the duplicate-superblock message.  Fixing this required zeroing the superblock on /dev/sdd1:

mdadm --zero-superblock /dev/sdd1

And then rebooting (again!).

Once past this, things started working somewhat normally.  To get my RAID mirrored again I did the normal thing:

# mdadm --manage /dev/md_d0 --add /dev/sdd1

To get rid of the false-positive fake-raid setup, I found that the dmraid tool itself can erase the metadata:

[root@mirth ~]# dmraid -E -r /dev/sdd

Do you really want to erase "sil" ondisk metadata on /dev/sdd ? [y/n] :y

[root@mirth ~]# dmraid -E -r /dev/sdc

Do you really want to erase "sil" ondisk metadata on /dev/sdc ? [y/n] :y

The really odd thing about this whole incident is that I never had these drives in a fake-raid setup before.
In any case, I hope this helps the few other people who might hit this same problem.

Paul Lindner

Gopher on MTV

1 min read

I dug this little gem out of the archives.  Enjoy!

Gopher World Tour T-Shirt on MTV

Paul Lindner

Email Clients Full Circle

2 min read

In the beginning I used elm to read my mail.  This was somewhat radical, especially as I worked with the team that created POPMail for the Mac and Minuet for the PC, and everyone else moved to Pine.  Then came Mutt -- happy days -- I could slice and dice email with amazing speed.

A couple of years ago I converted over to -- mostly because of the contacts and calendar integration, and the fact that I could merge personal and corporate email accounts.  In the intervening time I had moved to Comcast, which meant running my own IMAP server proved more difficult than it was worth, so I moved to Google Apps for Your Domain.  All of a sudden my personal domain was running Gmail, and I discovered it has key bindings.
It's Mutt déjà vu all over again.  Navigation with vi j/k keys?  Yes.  Single-window view (inbox/message)?  Yes again.  Tagging messages?  Yes.  Blazingly fast?  You bet.  The only thing I miss is keystroke filtering of messages.
That's one reason I see things like Google Wave working out so well.  I might be late to the Gmail party, but plenty of folks have been using this as their primary mode of communication for a long, long time.

Paul Lindner

Tomcat and SSL Accelerators

3 min read

Using an SSL accelerator like a Netscaler is really useful: you can offload a lot of work to a device that does SSL in hardware and use SSL session affinity to send requests to the same backend.  In the simplest setup, the SSL accelerator accepts the request and proxies it to your internal set of hosts running on port 80.

However, code that generates redirects and URLs works poorly, because servletRequest.getScheme(), isSecure() and getServerPort() will return http/false/80 for both SSL and non-SSL connections.
One way to solve this is to listen on multiple ports: create a Connector on 80 and another on 443, but do not run SSL on either.  Then configure the 443 Connector with secure="true" and scheme="https".  This is suboptimal, though: you have to manage yet another server pool in your load balancer, and you end up sending twice the health checks.  Not so good.
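That two-port setup would look something like this in server.xml (a sketch; ports as described, attributes per the standard HTTP Connector):

```xml
<!-- Plain HTTP traffic proxied from the accelerator -->
<Connector port="80" protocol="HTTP/1.1" />

<!-- SSL-terminated traffic: no SSL here either -- we just tell Tomcat
     to report scheme=https and secure=true for requests on this port -->
<Connector port="443" protocol="HTTP/1.1" scheme="https" secure="true" />
```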
You might try to solve this with a ServletFilter, using an HttpServletRequestWrapper instance to change the scheme, port, and secure flag.  Overriding these does let application logic see the updated values, but sadly it doesn't fully work: because of the way Tomcat implements HttpServletResponse, it uses the original request object to ascertain the scheme, secure flag, and port.  So you get into trouble when you call encodeRedirectURL() or sendRedirect() with non-absolute URLs.
Luckily for us, Tomcat supports a way to inject code into the connection-handling phase via Valves.  A valve can query and alter the Catalina and Coyote request objects before the first filter runs.
To make your Valve work, you'll need to configure your load balancer to send a special header when SSL is in use.  On the Netscaler this can be done by setting owa_support on.  With that enabled, the HTTP header Front-End-Https: On is sent for requests that used SSL.
Once we have these pieces in place the Valve is fairly straightforward:


import java.io.IOException;

import javax.servlet.ServletException;

import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.valves.ValveBase;

public class NetscalerSSLValve extends ValveBase {

        public void invoke(Request req, Response resp) throws IOException, ServletException {
                // The Netscaler sets this header for requests that arrived over SSL
                if ("On".equals(req.getHeader("Front-End-Https"))) {
                        req.setSecure(true);
                        req.setServerPort(443);
                        // Coyote holds the scheme reported by getScheme() and used in redirects
                        req.getCoyoteRequest().scheme().setString("https");
                }
                // Continue down the valve pipeline
                if (getNext() != null) {
                        getNext().invoke(req, resp);
                }
        }
}
Compile this, stick it in the Tomcat lib directory, add an entry to your server.xml, and away you go.
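For completeness, the server.xml entry is a single Valve element inside your Engine (or Host); the package name here is hypothetical -- use whatever you compiled the class under:

```xml
<Engine name="Catalina" defaultHost="localhost">
  <!-- Runs before any filters, so getScheme()/isSecure() are fixed up early -->
  <Valve className="com.example.NetscalerSSLValve" />
  <Host name="localhost" appBase="webapps" />
</Engine>
```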

Paul Lindner

Google I/O Today

1 min read

Speaking at "Meet the Containers", "Shindig 101" and "OpenSocial Fireside Chat".

All at Moscone West, check it out!

Paul Lindner

The Mysteries of Java Character Set Performance

4 min read

"Two Character Sets?  Seems like plenty!"

So I've been pushing Java to its limits lately and finding some really nasty concurrency issues inside the JRE code itself.  Here's one particularly ugly one -- we had 700 threads stuck here:

       java.lang.Thread.State: BLOCKED (on object monitor)
         at sun.nio.cs.FastCharsetProvider.charsetForName(
         - waiting to lock <0x00002aab4cdf91b8> (a sun.nio.cs.StandardCharsets)
         at java.nio.charset.Charset.lookup2(
         at java.nio.charset.Charset.lookup(
         at java.nio.charset.Charset.isSupported(
         at java.lang.StringCoding.lookupCharset(
         at java.lang.StringCoding.decode(
         at java.lang.String.<init>(
Digging deeper, we find that lookupCharset is called all over the place.  The app in question functions as a web proxy, so it's constantly reading and writing web-page data in a variety of character sets.  The method charsetForName() uses a synchronized data structure to look up defined character sets.  (Yay, serialized access...)
But wait, lookup and lookup2 provide a cache so we can avoid the big bad synchronized method.  Sigh.  Here's the implementation:
     private static Charset lookup(String charsetName) {
         if (charsetName == null)
             throw new IllegalArgumentException("Null charset name");

         Object[] a;
         if ((a = cache1) != null && charsetName.equals(a[0]))
             return (Charset)a[1];
         // We expect most programs to use one Charset repeatedly.
         // We convey a hint to this effect to the VM by putting the
         // level 1 cache miss code in a separate method.
         return lookup2(charsetName);
     }

     private static Charset lookup2(String charsetName) {
         Object[] a;
         if ((a = cache2) != null && charsetName.equals(a[0])) {
             cache2 = cache1;
             cache1 = a;
             return (Charset)a[1];
         }

         Charset cs;
         if ((cs = standardProvider.charsetForName(charsetName)) != null ||
             (cs = lookupExtendedCharset(charsetName))           != null ||
             (cs = lookupViaProviders(charsetName))              != null) {
             cache(charsetName, cs);
             return cs;
         }

         /* Only need to check the name if we didn't find a charset for it */
         checkName(charsetName);
         return null;
     }
Yes, a whopping 2-entry cache!!
Also, the keys used are not canonical, so if my app asks for "UTF-8", "utf-8", and "ISO-8859-1" with regularity, this 2-entry cache is worthless: every call ends up blocking in the evil thread-synchronized data structure.
Someone send them a copy of the ConcurrentHashMap docs.  Please.
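In the meantime, an application can sidestep the contention by memoizing its own lookups in front of Charset.forName().  A minimal sketch (CharsetCache is my own name, not a JDK class; it deliberately keys on the caller's spelling, trading a few duplicate entries for a lock-free fast path):

```java
import java.nio.charset.Charset;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical app-level workaround: a lock-free memo table in front of
// Charset.forName(), so hot paths never reach the synchronized provider.
public class CharsetCache {
    private static final ConcurrentHashMap<String, Charset> CACHE =
            new ConcurrentHashMap<String, Charset>();

    public static Charset forName(String name) {
        Charset cs = CACHE.get(name);
        if (cs == null) {
            // Benign race: forName is idempotent, and putIfAbsent
            // keeps whichever instance was stored first.
            cs = Charset.forName(name);
            Charset prev = CACHE.putIfAbsent(name, cs);
            if (prev != null)
                cs = prev;
        }
        return cs;
    }
}
```

After the first miss per spelling, every lookup is a single ConcurrentHashMap.get() with no monitor to block on.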

Paul Lindner

Social Graph Meat-up

1 min read

Dinner not for vegans at O'Reilly.


Paul Lindner

Paul Lindner


1 min read

Why am I so tired?

Been working hard to implement the features described here:

hi5 Launches New Music Applications By iLike and Qloud

No more music royalties for hi5.  Cost center is now a profit center...

Paul Lindner


1 min read

'nuff said...

Paul Lindner

OpenSocial Roundup

3 min read

At hi5 we've been busy busy busy getting OpenSocial up and running.  We released our developer sandbox and are rapidly implementing features.  So check out the following URLs:

Campfire One Highlights: Introducing OpenSocial

Also, here's a copy of my response to Tim O'Reilly's blog post:

OpenSocial: It's the data, stupid

Hi folks,

Good comments all around. However, I'd like to posit that data access is _not_ the problem. We've had universal standards for years now with little uptake. Typepad, LiveJournal and others have supported FOAF for many, many years, which encompasses the OpenSocial Person and Friends APIs. Not much has come of that -- there isn't a large enough base there to get people interested.

Now you have a broad industry consensus on a single way to provide all of the above plus activity stream data. You have a rich client platform that allows you to crack open that data and use it in interesting ways, and finally you have a common standard for social networks to interact with each other based on the REST api.

So Patrick's statement at the Web 2.0 Expo is correct: an app running inside a container only lets you see what that container shows you. However, that does not mean a container couldn't include friend references to external social networks via its own federation mechanism. Movable Type 4.0 has shown that you can support any OpenID login in a single system; there's no reason to believe social networks couldn't leverage OAuth to do the same.

And here's a final point to consider -- you have MySpace opening up to developers. That's huge. That alone is going to draw more developer attention to this problem than much of the oh-so-academic discussion of the past few years.

I suggest that people who _want_ OpenSocial to solve all the social-graph ills get involved on the API mailing list and make sure those elements are addressed as OpenSocial evolves.

There's a tremendous amount of momentum. Let's not waste this chance.

Paul Lindner


1 min read

Paul Lindner


1 min read

Paul Lindner

ILike at Campfire One

1 min read

In hi5, Orkut, and Ning!

Paul Lindner


1 min read

This has got to be a bug....

Dear Customer,

We've noticed that customers who have purchased or rated White Noise Critical: Text and Criticism (Viking Critical Library) by Don DeLillo have also purchased Caught in the Machinery: Workplace Accidents and Injured Workers in Nineteenth-Century Britain by Jamie Bronstein. For this reason, you might like to know that Caught in the Machinery: Workplace Accidents and Injured Workers in Nineteenth-Century Britain will be released on October 10, 2007.  You can pre-order yours by following the link below.

Caught in the Machinery: Workplace Accidents and Injured Workers in Nineteenth-Century Britain
Jamie Bronstein
Price:    $55.00
Release Date: October 10, 2007

Paul Lindner

Found in Hi5 Lunch Room

1 min read

Update:  On the back we find the fine, fine web site (enter if you dare!) and a bio of Romeo, a rapper I had never heard of, but my colleague Brett tells me was once a featured artist on Hi5.

Paul Lindner

Free WiFi in San Francisco

1 min read

Meraki is building a free mesh network in San Francisco.  This is probably the best hope for getting this type of service in the city now that the Google/Earthlink deal fell apart.

Join up!

Go to and help build the network.  When the router comes in, I'll have 7th and Howard covered with 1 Mbps of donated bandwidth.

Paul Lindner

Widgets, APIs and more

2 min read

I'm happy to announce that Hi5 has widget support.  Yes, I know this is soooo last year.  However, there's a twist that makes it better.

We worked closely with RockYou and Slide to integrate tightly with our site, using open standards wherever possible.  For example, for slideshows we created Atom feeds for each photo album, and a feed-of-albums for the list of all albums.  And when it came time to share profile information for horoscopes (birthday) and languages spoken, we used FOAF.  Thus we get partners to adopt open standards, and the work we did for them is usable by everyone.
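As a sketch of what one of those album feeds could look like (the element contents and IDs here are invented for illustration, not Hi5's actual schema):

```xml
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Summer Photos</title>
  <id>tag:hi5.com,2007:user/12345/album/678</id>
  <updated>2007-09-01T12:00:00Z</updated>
  <entry>
    <title>Golden Gate</title>
    <id>tag:hi5.com,2007:photo/91011</id>
    <updated>2007-09-01T12:00:00Z</updated>
    <link rel="enclosure" type="image/jpeg"
          href="http://photos.hi5.example/91011.jpg"/>
  </entry>
</feed>
```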

The only tricky part was authentication and authorization.  Right now it's using our own AuthToken implementation, but it could probably be done better.  I looked into OpenID as a mechanism, but it's way too end-user-centric for this type of thing.

Coming soon we should have full Atom endpoints (both in and out, with WSSE auth), an OpenID provider, and a few other standards-based things like XMPP vCard support.  All of this is being done with a Web Services aspect-oriented toolkit called Enunciate, which has made writing these services a very enjoyable experience.

Paul Lindner

Peruvian Earthquake

1 min read

Earthquake in Peru, logins drop immediately.  Hope everyone is safe....

Paul Lindner

Got yer back 6A

1 min read

So I've spent a good chunk of today defending Six Apart from the cheap shots being leveled at them.  I won't link to them; they don't deserve the PageRank.

I'm particularly angered at the audacity of the bald-faced lies in some comments.

I may not be employed at Six Apart today, but I put my heart and soul into building it.  I won't let a bunch of hacks harm the people still there.  So, if you see anyone anywhere putting the hurt on Six Apart let me know.  I'll use discourse, reason and wit to set the record straight.

Paul Lindner

Skins, Updates, More

1 min read

Just caught up on 10 days' worth of Neighborhood posts.  I now have Vox fatigue combined with Vox guilt.  I didn't even read comments, for shame :(  After this post I'll need to check on the 'ol LiveJournal Friends page.  Don't even ask about the umpteen Bloglines blogs stuck at 200 posts...

Hi5 has a new Skins system that can actually make profile pages look good.  I had some input early on and made sure Vox and the Six Apart styles were part of the inspiration.  It's coming out really well; we've received over 200 submissions.  Check out the snazzy new profile page!  Designers can check out the specs page.

Embeds are evil.  They mess up divs and tables and are often pasted in haphazardly.  Amit came up with an amazing solution: use JTidy to clean up user-submitted content.  Tags match and broken HTML goes bye-bye!

Now back to the super-secret Hi5 Project Funk.

Paul Lindner

Internet Blackout 2007

1 min read

Like many others (including Vox/LJ itself), Hi5 was affected by the power outage in Colo 4 at 365 Main.  We blogged about it over at the Hi5 Blog.

Paul Lindner


1 min read

Paul Lindner

Mmmm Lunch 2.0 @ Socializr

1 min read

Primo Patio catered food and specialty cookies...

Paul Lindner

Hi5 Blog goes live today

1 min read

We're living on the edge over here at Hi5.  Our new Movable Type 4-based blog is now available at

The whole company is getting involved and you'll see plenty of interesting information to come.

Also, from a technical standpoint, MT4 has proved a winner.  The memcached support in Data::ObjectDriver means we can run via plain CGI, saving a bunch of time and effort getting this up and going.

We should have 3-4 posts per week.  Sadly I didn't get a chance to finish implementing userpics for MT4, but that should come shortly.

Paul Lindner

Playstation Party?

1 min read

Big crowd at the Metreon...

Paul Lindner


1 min read

Saw demos of Loopt, and others. I didn't see anyone I knew, though.

Paul Lindner

Paul Lindner

Hi5 Winery Trip

1 min read

Sebastiani winery - wine and cheese pairing, yum!

Paul Lindner

Yelp for Toilets

1 min read

I suppose it was just a matter of time...

Found via uncov.

Paul Lindner

PostgreSQL & Hi5 - Users Group Meeting

1 min read

We had a great turnout at the latest PostgreSQL users group meetup -- around 35 people showed up.  (Oh, and not the group of stylish "Hi5 folk" you see to the right. :)

Ram and I went over the PostgreSQL-based DB architecture we use at Hi5, after the obligatory pizza feed.  Quite an interesting crowd: some newbies, and some old hands.

My best line of the night was in response to a question asking us when we were going to use a specific feature -- my answer was that there were more people in the room than there were employees at Hi5.  :)

The complete presentation is online for the curious.

Paul Lindner

Privacy International - Fools

1 min read

I see that Hi5 made Privacy International's list of sites posing a substantial threat to users' privacy.  I find their methodology extremely suspect; I can't spot any consistency in the way they treat sites.

These guys dinged us because our point of contact for privacy is our legal counsel.  He is, but he's also the guy calling Malaysia at 3 AM to get phishing sites shut down.  We do a lot around here.

Also, these guys claim a pop-up advertisement showed up when they clicked on the privacy page.  I know for a fact that this is not possible: no advertising code is used on those pages -- never has been, never will be.  These idiots must have had some kind of malware installed to cause that.

In any case, we'll let Google and them fight it out.  We don't need validation from some poor excuse for a privacy group.  We protect our users and give them the tools to protect their privacy.