Paul Lindner

Everything in our world will soon be technology-mediated. @anildash offers some wisdom on how we can make these changes in a net-positive way. Recommended.

https://www.linkedin.com/pulse/12-things-everyone-should-understand-tech-anil-dash/

Paul Lindner

When Pong played Humans

3 min read

It was a blistering July day in Las Vegas, with temps hitting 109.  Inside the SIGGRAPH '91 convention hall, Yello's "Rubberbandman" looped on the speakers. On each chair: a red/green paddle.

I was a student volunteer, stamping the finest hands in computer graphics.  Those hands (and my own) each controlled one of those paddles.  Then 5,000 people looked up and saw a Pong game appear on the screen.

And then... the machine started playing us.

In response to visual stimuli we changed the color of our paddles.  The ball moved left, then right.  The crowd shouted "red, red, red!" and "green!", cheering as the game played on.

The rules of the game and the feedback loops directed our actions.  It was a complex adaptive system with emergent behavior.
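The mechanic is simple enough to sketch in a few lines (an illustrative model, not the actual SIGGRAPH code): each person shows red or green, and the on-screen paddle tracks the fraction of the crowd showing green.

```python
import random

def paddle_position(paddle_bits):
    # Each audience member contributes one bit: 0 = red, 1 = green.
    # The paddle sits at the fraction of the crowd showing green.
    return sum(paddle_bits) / len(paddle_bits)

# 5000 people, each flipping their paddle in response to what they see.
crowd = [random.randint(0, 1) for _ in range(5000)]
print(f"paddle at {paddle_position(crowd):.2f} of screen height")
```

The feedback loop closes when each person flips their bit based on where the ball is heading; the aggregate moves the paddle, which changes what everyone sees next.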

And luckily there is some footage of this moment.  Watch this excerpt from "Machines of Loving Grace" that talks about this moment in history:

Loren Carpenter Experiment at SIGGRAPH '91 from Zachary Murray on Vimeo.

Loren Carpenter cofounded Pixar.  Check out the TurboGopher appearance at the 5:00 mark.

Today that simple Pong game has become the multilayered technological environment we interact with daily. Instead of red/green paddles carrying one bit of data, we carry phones that generate a wealth more.  These devices also provide the aural, visual and haptic stimuli.  With that, our collective actions power all kinds of "games" today:

  • Aggregated location data and movement speed generates traffic data in maps.
  • Aggregated search queries and click data deliver better search results.
  • Aggregated likes, views and interactions with content power trending data and even news and politics.

As technologists we need to remember that by controlling the game, we are indirectly controlling the players.  The choices we allow (and forbid) define the behavior.  The game "plays" the player.  And often the only way to be free is to not play at all.

Except, that is, if maybe, just maybe, the people start playing a different game than the one we designed.  In the giddy demonstration it was assumed that people wanted to win at Pong.  But we didn't play long enough for abuse or scheming to emerge.  It would have taken only a few people crossing over to sabotage the other side, or a handful of trolls, to change the outcome.

Finally, this level of power and control demands great responsibility.  The only thing worse than control used for malicious purposes is control wielded thoughtlessly, without considering the consequences.  So the next time you're designing a product, think about the whole system and all its inputs, and ask "who's really in control?"

h/t to the General Intellect Unit podcast and their Machines of Loving Grace episode for reminding me of this unsung moment in history.

Paul Lindner

Scrobbling for @resonatecoop is now available thanks to the efforts of Malachi Soord @inversechi
https://github.com/web-scrobbler/web-scrobbler

Paul Lindner

Heard a CashCall radio ad to refinance and “buy the bitcoin dip”. Shades of 1999 when our WaMu loan officer told us to invest our down payment...

Paul Lindner

Highly recommended talk by @aparrish that illustrates principles of The Law of Requisite Variety and the Good Regulator Theorem.

http://opentranscripts.org/transcript/programming-forgetting-new-hacker-ethic/

Paul Lindner

brb checking my memcached commits on github.

I also think there's a future for a branding agency specializing in vulnerability names. We're no longer content with mere CVEs.

https://blog.cloudflare.com/memcrashed-major-amplification-attacks-from-port-11211/

Paul Lindner

Friday update for @resonatecoop. I discovered the esteemed electronica duo @Coldcut has their catalog there. Nine listens means I now own "Quality Control" and other tracks.

https://resonate.is/profile/546/

Paul Lindner

Here's my first friday update for @resonatecoop. With 9 listens I now own 'Doll' by @feralfive off the excellent Man Cat Doll Machine EP.

https://resonate.is/song/3095/feral_five-doll/

Paul Lindner

As a long time @matrixdotorg supporter I'm very happy with this new funding that provides long term viability.

https://matrix.org/blog/2018/01/29/status-partners-up-with-new-vector-fueling-decentralised-comms-an...

Paul Lindner

Investing in a better Internet: Resonate, a music coop

4 min read

Do you want a better internet?  One that balances the needs of creators and consumers?  A more democratic internet?  I do.  That's why I'm investing in a music coop: Resonate.

Stream to Own

I've been a member-owner of Resonate for a while, and listen every day.  It provides an eclectic mix similar to a high-quality college radio station.  At first glance Resonate is a streaming service like Soundcloud or Spotify.  But dig deeper and you'll find major differences:

  • You only pay for what you listen to.
  • Each listen debits your balance a small amount.
  • On the 9th listen you own the track. 

This tiered pricing model incentivizes discovery.  Owning actual tracks helps fans develop deeper ties to the music they love.
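The model is easy to sketch (with illustrative numbers only; Resonate's actual per-listen prices are set by the coop): each listen costs a bit more than the last until, by the ninth, you've paid the full price and own the track.

```python
def listen_costs(first=0.002, plays_to_own=9):
    # Illustrative stream-to-own schedule: each listen costs twice the
    # previous one (made-up numbers, not Resonate's real price list).
    return [first * 2 ** n for n in range(plays_to_own)]

costs = listen_costs()
print(f"ninth listen: {costs[-1]:.3f}, total paid: {sum(costs):.3f}")
# After the ninth listen the track is yours; further plays are free.
```

The point of the curve is that sampling a track is nearly free, while repeated listens converge on a normal purchase price.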

Stream to Own Model and Graph 

And I own more than just tracks.  My member share means I own a portion of Resonate: I can vote on how the business is operated, and at the end of the year I share in the profits.

Over the past year Resonate has added more content, more features, and most importantly a sustainable organization where fans, musicians, employees and labels can work together towards common goals.  This is the kind of "cooperative internet" I always imagined would emerge back in the pre-web era.

 

“Purpose above Profits”

REI

"Purpose above Profits" was the slogan at REI as I shopped for the holidays.  It's a reminder that REI is a member cooperative.  With my $20 lifetime membership I get dividends based on my purchases while supporting outdoor and environmental causes.  In 2016 REI gave back 70% of its profits.

This is but one example of how coops can offer sustainable services for the communities they serve.  Growing up, I got my electric power from a coop.  When I lived in Switzerland, there was a huge retail chain literally named "Coop".  And today I use and support my credit union.

Overall, coop businesses are more sustainable and oriented to the long-term interests of their member-owners.

But the growth of the Internet and the Web bypassed the cooperative model, despite the fact that open source and much of the shared internet infrastructure are structured like coops.  It wasn't until 2014 that the term "Platform Cooperative" was coined.  The rise of pseudo-"sharing" platforms like Uber and AirBnB, along with decentralized technologies like blockchains, are two key reasons many now embrace the concept.

 

Early Stage Capital

But a problem emerges: how do you bootstrap a cooperative when there are significant barriers to entry?  That's where Supporter Shares come in.  Anyone can invest in these shares.  Each year the co-op sets aside 10% of profits and issues dividends to Supporter Share owners.

Resonate Voting Diagram

But remember that Supporter Shares don't get you extra voting power.  A cooperative is still one-person, one-vote.  The upside is that there are no leveraged buyouts, no dual share structures or non-voting shares.

 

The Future Internet

The Internet I want is a democratic one where creators, consumers, supporters and employees can work together towards common, sustainable goals.  By using and investing in Resonate I hope to advance those goals.  Liz Pelly captured the sentiment in "Protest Platforms" that "Resonate is particularly interesting for the way it advocates for broad decentralization of data, power, and money in music".

The Resonate Project Map details where the project is going and the plan to achieve it.  I'll admit the content catalog is small (but growing!) and the technology is very beta (but improving!).  I still use and enjoy it every day.

I hope that you'll consider joining the coop as a member owner and see for yourself.  If you want to accelerate this type of work consider purchasing Supporter Shares.

And finally, I hope that you'll consider supporting a new generation of online platforms that include the same kind of values that Resonate promotes.  All while listening to and supporting the artists we love.

Paul Lindner

Moving your Google +1s to Pinboard

2 min read

So the +1 button on the web is riding off into the sunset.  But you can still make good use of the data that you've collected over the years via Google Takeout!  I like to keep my bookmarks in Pinboard, so here's how I did it and you can too.

Export

1. Visit https://takeout.google.com/settings/takeout in your browser.  You'll see something like this:

2. Click Select None, then click on the checkmark next to +1s.

3. Scroll to the bottom and click Next

4. The next screen has some choices for file format.  Change them if you want, but the defaults should be fine.

5. You'll receive an email with a link to the zip file.  Expand the file and you'll find something like this:

 

Import to Pinboard

Now that you have the +1s.html file you can import it to Pinboard.  (Or other sites that support the Netscape Bookmark file format)
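For reference, the exported +1s.html is a plain Netscape-format bookmark file, roughly like this (the URL and timestamp here are invented for illustration):

```html
<!DOCTYPE NETSCAPE-Bookmark-file-1>
<TITLE>Bookmarks</TITLE>
<H1>Bookmarks</H1>
<DL><p>
    <DT><A HREF="https://example.com/some-page" ADD_DATE="1466715356">A page I +1'd</A>
</DL><p>
```

Anything that reads this format (Pinboard included) will pick up the link, title, and add date.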

1. Pinboard 'tags' imports with the name of the file.  I wanted to use the tag 'plusones', so I renamed my file from +1s.html to plusones.html

2. Next visit the Pinboard settings page, then click import (or just click on this link)  You'll see something like this:

3. Click on the Choose File button, select your html file (in my case plusones.html) and click upload.

4. After a little bit of time Pinboard will have your imported bookmarks!  You can then view all of them based on the tag (plusones).  Click on the tag and you can browse/clean them up. Woohoo!

 

Other Places

Once you have the exported bookmark html file you can also import to other products.

Contact me if you have more.  I'll add them here!

 

Paul Lindner

Challenge: Redesign classic games for DAU and ARPU metrics. I'll start: Monopoly - do not pass go for 24 hours unless you buy these gems.

Paul Lindner

The Mail Must Go Through - Decentralized Customer Service

1 min read

Some kudos to the US Postal Service.  I sent Express Mail to a PO Box for Saturday delivery.  Saturday comes and  I realize that the post office is only open from 8 to 10:30, but delivery is only guaranteed by 3pm.  Oops.

So I look up the Post Office and notice that a local number is available.  With skepticism I called it.  Three rings later I'm talking to a small-town Postmaster.  She knows the recipient, takes the tracking number and promises to call back.  Fifteen minutes later she has found out where the package is and promises to receive it after hours and deliver it.

Shocked, I ask her what I can do to thank her.  Her response is simple: "The mail must go through!"

Paul Lindner

Typo of the day -- MEATADATA

Paul Lindner

Telling the Guatemala based @UPS phone support reps that "Even Comcast gives you a 2 hour window" is a losing strategy

Paul Lindner

“Digital objects last forever—or five years, whichever comes first."

1 min read

“Digital objects last forever—or five years, whichever comes first."

You owe it to yourself to read "Through A Glass, Darkly: Technical, Policy, and Financial Actions to Avert the Coming Digital Dark Ages"  Saving the bits isn't enough.

Paul Lindner

Paid for AF4 shipping, got BE1

Paul Lindner

Dyn acquired by Oracle?

I've been wanting to move my secondary DNS off them for a while. Recommendations anyone?

https://techcrunch.com/2016/11/21/oracle-acquires-dns-provider-dyn-subject-of-a-massive-ddos-attack-...

Paul Lindner

Typo of the day -- derp learning

Paul Lindner

Listening to DEF CON soundtracks today, including what must be the only song ever written about ssh...

http://music.gravitasrecordings.com/track/ssh-to-your-heart-featuring-shannon-morse

Paul Lindner

Did someone say DNS DDoS Attack? Remembering PharmaMaster vs Blue Security, 2006

1 min read

Blue Security Graph

Yeah, I was there... Back in May of 2006, Typepad, LiveJournal and TuCows got taken down by a massive (at the time) DDoS.  I recall it was 2-4 Gbps of reflective DNS traffic.  Scott Berinato covered it pretty well in the Wired article Attack of the Bots.

For the record we were able to get back up using Akamai DNS Hosting, MCI/UUNet DDoS mitigations, and a cleverly placed GRE tunnel.  Oh and a bunch of great Ops work from Lisa Phillips, Matt Peterson, Peter Wohlers and others.  I think I still have the commemorative t-shirt we did with TuCows.

And here we are 10 years later.  Same stuff, yet in many ways worse.

It's high time we get to fixing the underlying protocols and infrastructure to make these types of attacks a thing of the past.  It's time to Redecentralize.

 [Fancy graph from: Netcraft, Blue Security Shuts Down, Citing DDoS Attacks]

 

Paul Lindner

The Whiz Kids - Tech Role Models of the 80s

2 min read

Reading this passage from Ready Player One [1], I was reminded of a major influence that I had all but forgotten:

It was a Friday night, and I was spending another solitary evening doing research, working my way through every episode of Whiz Kids, an early-'80s TV show about a teenage hacker who uses his computer skills to solve mysteries.  (Ready Player One, Ernest Cline, Chapter 18)

So I was prepared when I was recently asked "What brought you here?" (in relation to technology). My answer? ... The Whiz Kids. I can directly trace my interest in online services to that white-hat hacking, war-dialing, speech-synthesizing, BASIC-programming gang of kids [2].  I can only hope that today's teens have something as good or better.

Trying to find the video also made me realize that Youtube is providing a vital preservation service.  You see, the Whiz Kids episodes were never released, not on DVD, not even on VHS. You won't find them in any library. Anywhere. But there it is, in 10-minute chunks [3], captured and uploaded off a grainy, noisy videotape recording.

Cultural Artifacts, preserved... for now.


  1. RP1, soon to be a major motion picture from Steven Spielberg.
  2. It was also probably the first time I ever heard about the NSA ("No one knows if they even exist").
  3. Here's the full playlist.
Image from IMDB

Paul Lindner

Slack no more. Why you should use Riot.im and Matrix.org

3 min read

There's been a trend where open source projects start a Slack for team communication.  I understand why.  The Slack UI is refined; you get searchable, synced conversations on all devices, and even emails when you're away.  Nice!  Except the price you pay is vendor lock-in and a closed source code base.  Plus, aren't you fed up with creating a separate Slack account for every project?  I know I am.

What if I told you there was an open alternative?  One that even included access to your favorite IRC channels? Well there is.  For the past month I've replaced Slack usage with Riot.im (aka vector.im) and Matrix.org and I am very, very happy with the results.  

Let's start with the UI.  Here's my Web UI right now:

 

 

On the left: rooms/channels. I've customized mine into high/low priority with full control over notification settings.

In the middle: an IRC channel on Freenode.  Read/unread state is maintained on the server, so I can easily switch to the Android or iOS app and participate there.

On the right: the member roster.  You can hide it, or use it to initiate direct messages.

And look, here's the same UI, on Android showing the Matrix HQ Room:

As you can see Riot supports video/audio calls using WebRTC and file upload too.  Works really well!

Did I mention that these super high quality clients are all open source?

So what about the underlying service?  Well, we're in luck.  The matrix.org service is also well designed, fast, interoperable and open.  So what exactly is it?  From their FAQ:

Matrix’s initial goal is to fix the problem of fragmented IP communications: letting users message and call each other without having to care what app the other user is on - making it as easy as sending an email.

The longer term goal is for Matrix to act as a generic HTTP messaging and data synchronisation system for the whole web - allowing people, services and devices to easily communicate with each other, empowering users to own and control their data and select the services and vendors they want to use.

Bold and ambitious, and the FAQ has answers to some common questions like why not XMPP and more.

What all this means in practice is that anyone can run the Matrix protocol on their own servers.  Want your own private internal system?  Run your own server, disconnected from the network.  Want your chats to stay on your own server?  Run your own, with the benefit of interoperating and communicating with other servers in the mesh.  Want to bridge to another chat system, like IRC?  Yes, you can.

And the IRC integration is very, very good.  As you saw above, identity and channel state are carried through, and direct messages are supported. Offline for a while?  Scroll back to your unread indicator.  Or just check your email:

A Matrix notification shown in an email browser window

So there you have it.  An open system that enables chat.  A highly polished front end.  Full support for one to one and one-to-many conversations. Yes, it's beta, so there are some rough edges.

Give it a try.  You can find me at @lindner:matrix.org or just drop into some IRC channels, my nick is plindner.

Paul Lindner

Just finished the six-part documentary Capitalism. Feels like the first time I read A People's History of the United States. Oh and

http://capitalism.vhx.tv/

Paul Lindner

1500 Word MTU has a POSSE: Week 2 Update

3 min read

I'm still pretty happy with my indieweb publishing experiment.

Content is flowing in all the right ways.  Posts end up as Posts.  Photos are uploaded native with backlinks. POSSE via brid.gy just works.  You can see that Brid.gy polls Google+, and then saves what it finds back to the original post by sending Webmentions.  The result is a full archive of activity around this content.

Oh and cross posting to SoundCloud worked perfectly.  And so do embeds..

 

After a fix from the Known team, webhooks are working: I get a POST whenever content changes.  To test this out I send each URL to the Internet Archive's Save Page.  Voila!  Instant archiving of my content.  [Next up, backups in IPFS]
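The hook itself is tiny. Here's a sketch of the idea (the "url" field name is my guess at the webhook payload, not Known's documented format):

```python
from urllib.parse import parse_qs
import urllib.request

ARCHIVE_SAVE = "https://web.archive.org/save/"

def extract_url(body: bytes) -> str:
    # Pull the changed page's URL out of a form-encoded webhook body.
    return parse_qs(body.decode())["url"][0]

def archive(page_url: str) -> int:
    # Fetching /save/<url> asks the Wayback Machine to snapshot the page.
    req = urllib.request.Request(ARCHIVE_SAVE + page_url,
                                 headers={"User-Agent": "known-webhook"})
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Wire extract_url into whatever HTTP handler receives the POST, then call archive on the result.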

I was able to set up the Known open source software on my own server.  Next step is to pull a backup from the hosted version I'm using so I can experiment further and contribute back to the project.

Mobile Posting via Chrome on Android is working well.  You can access the Camera and a rudimentary file picker.  HTML editing is workable, but not great.  I installed the Url Forward app so I can also have native sharing intents.

 

Bumps

Of course there are some issues encountered...

Spelling errors mean you Publish Once, Edit Everywhere.  Or, if you messed up the URL, Publish Once, Delete Everywhere.

I tried using a native web mention to reply to another post, but it didn’t appear on the target site.  There wasn't any visible UX feedback.

I found that there's no UI support for backdating posts.  Okay, I'll try Micropub to post.  Nope: implementations are very rough, though Quill seems nice.  Eventually I wrote a stub post in Wordpress, exported, imported and edited.  Phew!

But... it appears that brid.gy doesn't syndicate to old posts like this one, even when I went back and pointed the links at each other.  I'll have to follow up on that.

Also, I lost the first version of this post due to a CSRF error since I left it sitting too long in the browser.  Oops.

TinyMCE is still a pain: it loves inserting &nbsp;, and CMD-9 is bound to <address>.  I might have to use Markdown instead.

I miss @ mentioning people, and wish there was a UI for that.

Native Google+ support in brid.gy needs an API.

 

But still overall quite happy with the way this is going.  I hope you're enjoying the journey with me.


Paul Lindner

1500 Word MTU Experiment: Day #1

2 min read

End of day #1 with Known.  I'm quite pleased with the results.

Good Stuff

  • brid.gy is awesome.  Having +1's, likes and comments consolidated is so nice.
  • Webhooks!  I'm thinking of writing one to automatically archive pages to archive.org.
  • PuSH appears to be fully working.  Again, could extend things there..
  • Google+ renders images well.
  • The editor saves drafts.
  • Lightweight page editor should be useful.
  • AMP support is there (add ?_t=amp to any page).  Some validation issues, but it works.
  • Real anchor tags and hyperlinks.  No more writing [1] [2] in posts with multiple links (like lynx).

Rough Edges

  • The built-in Photo type doesn't send the permalink to Twitter, so now I have a weird post without context.  Flickr and Facebook work perfectly; might try another setting.
  • I need to get to writing a Google+ outbound connector.  I'm doing those by hand now.
  • TinyMCE sucks.  It has always sucked!  If only Medium would open source their editor.  At least Markdown is an option.
  • Looks like syndicated Google+ links are using profiles.google.com instead of plus.google.com.
  • Some profile pics cloned from G+ are coming back with size 0.  This shows as broken images.
  • Long status posts have extra long permalink URLs.
  • Built-in analytics are weak.  Would rather avoid using GA for that.
  • Limited import options.  Will need to convert Typepad export file to Wordpress format.
  • Bulleted lists line-height is tight, tight, tight.

Overall I'm pretty happy and excited about getting more content in place.

And who knew that a post on SSL/TLS certs would be soooo exciting?

 

Screenshot of a Known Post

 


Paul Lindner

I just got an SSL cert as a one-liner. Exciting!

https://letsencrypt.org/

letsencrypt --text --email lindner@inuus.com \
    --domains www.inuus.com,inuus.com,mirth.inuus.com \
    --agree-tos --renew-by-default \
    --standalone --standalone-supported-challenges http-01 \
    certonly

Paul Lindner

Welcome to 1500 Word MTU

2 min read

This is an experiment.  Can I take control of my online life and move it to a place where I have more say?  Can I pull my content out of multiple silos?  And can I import existing content from other platforms and keep it (somewhat) synced over time, so I have a full record of my public online life?

We're going to find out..

The trigger for me was an article about my early days working with the Internet Gopher community.  I had saved most of my email from back then, and it was quite easy to reconstruct and remember what happened.  I don't think I'll have that luxury for much of what's happened recently.  The digital ephemera are spread too far and wide to reconstruct and reflect on.

To get there I'm experimenting with the hosted version of Known, a publishing platform that supports the things that matter to me.  I like that it's open source, interoperable and respectful of human effort -- it supports a number of Indieweb technologies out of the box, like WebMention, and works with brid.gy to pull content back from the silos.

So.. you're going to see more content in more places as I'll be syndicating out to Facebook, Twitter, LinkedIn and Google+.  And I'll be sharing more as I document this process.

 

Silos

Silos by Doc Searls / CC BY 2.0


Paul Lindner

Gopher 25 years on. Long, fun read

1 min read

Twenty-five years ago, a small band of programmers from the University of Minnesota ruled the internet. And then they didn’t.

 Gopher Team 

Read more at The rise and fall of the Gopher protocol via MinnPost


On: Google+, Facebook, LinkedIn

Paul Lindner

Social Search Part 1 - Connect All the Accounts

3 min read

Do you create content on the web?  Do you want to make that content eligible for inclusion in Google's new social search?  Of course you do! 

Read on for the first part in my series of tips and tricks on how to make social search work better for your content.

1: Connect All the Accounts.

Social search uses your Google identity plus your extended social graph to help you find personalized content.  The extended social graph is found via links everyone adds to their Google+ profile.  More links means more personalized data.

Connect and Verify the accounts you use across the web on the Connected Accounts settings page.  Then add these and other profile links on your Google+ profile.  Remember to add links to accounts across the web, places where you actually create content: your postings, comments, photos, videos and so on.

The best results come from two-way links, so consider adding links back to your Google+ profile.  Paste in your Google+ profile URL and remove the /u/# and any other suffixes.  Your profile link should look like this:

https://plus.google.com/117259934788907243749
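If you're scripting this cleanup, it amounts to stripping the /u/# segment and any trailing path. A quick sketch (the helper name is mine, and it assumes numeric profile IDs):

```python
import re

def canonical_profile(url):
    # Drop the per-session /u/<n>/ segment, then keep only the bare
    # numeric profile URL (assumes a numeric Google+ profile ID).
    url = re.sub(r"/u/\d+/", "/", url)
    m = re.match(r"https://plus\.google\.com/\d+", url)
    return m.group(0) if m else url

print(canonical_profile("https://plus.google.com/u/0/117259934788907243749/posts"))
```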

I recently added links to my Google+ profile on these sites. I've included the direct link so you can too.  I'd love to know about more, just leave the site name and link in the comments!

And for those of you self-hosting your own blog or site, you can manually link back to your Google+ profile by editing your HTML markup.  Here's a simple example:

   <a rel="me" href="https://plus.google.com/117259934788907243749">
     My Google+ Profile
   </a>

The important part is the rel="me" attribute.  It tells Google that the linked page is your profile.

That wraps it up for Part 1 -- stay tuned for Part 2 where I go over how to mark up authorship for your content!  Thanks for plussing!

Paul Lindner

Making the Internet Better - Google Edition

2 min read

I've been very fortunate in my career.  I've had many opportunities and been successful in making the Internet a better place for end-users and developers.  From the early days of Gopher to the mainstreaming of open source at Red Hat, to the rise of blogging at Six Apart, and on to forming the social web with OpenSocial -- I've been a part of many game-changing technologies first hand.  It's one of the most satisfying parts of my work.

That's why I'm happy to announce that I'm joining Google today.  My gut tells me that this is the right company, the right team, and the right time to contribute to and help define another major change that betters the internet and the entire world.

The decision to work for Google did not come easy.  My time at LinkedIn has been truly amazing. The people are smart, the technology is stellar, and the opportunities to learn and contribute are limitless.  In the past year and a half the company doubled in size while the Platform team launched dozens of great new products and enhancements. I'm especially proud of the small parts I played in helping launch LinkedIn's open developer program, and equally excited about a number of projects that will launch in the near future.  I cherish the friendships and knowledge gained and will miss everyone there greatly.

I look forward to the exciting things that I'll be able to accomplish soon.  Here's to the next evolution and revolution!

 

Paul Lindner

paul.vox.com lives here now...

1 min read

I just completed exporting my Vox to Typepad. Quite a trip down memory lane; back to the golden age of blogging. I'm thinking kind thoughts for Six Apart right now -- I know this can't be an easy transition they're going through.

Paul Lindner

Fedora 12, Dracut, dmraid, mdadm, oh my!

3 min read

It appears that Fedora 12 moved to a new boot init system called dracut.  Sadly, due to a number of odd circumstances, this caused me much pain.  Here's my basic config:

  • /boot and /  on /dev/sda
  • /var and /home on a partitioned software raid on /dev/sd{cd}
After a yum-based upgrade to Fedora 12 I rebooted.  We got to the point where the software raid initializes and boom: failure.  I'd seen this before; partitioned raid has always had some trouble in Fedora.  Previously I had to modify the rc.sysinit script to reset the raid partitions, so I tried that again, moving that init later in the boot sequence.  Reboot and yes, it works.

However, I then noticed some odd things.  I was only getting a single drive in my mirrored RAID.  Further investigation revealed a device dm-1 instead of sdc or sdd listed in /proc/mdstat...  Uh oh.

Looking more closely, it appears my drives were getting set up by dmraid as a fake-raid mirror:

# dmraid -r 
/dev/sdd: sil, "sil_aiabafajfgba", mirror, ok, 488395120 sectors, data@ 0
/dev/sdc: sil, "sil_aiabafajfgba", mirror, ok, 488395120 sectors, data@ 0

I tried adding the nodmraid option to grub.conf but then the new dracut system started an infinite spew of messages generated by this mdadm error message string (lifted from Assemble.c)

fprintf(stderr, Name ": WARNING %s and %s appear"
        " to have very similar superblocks.\n"
        " If they are really different, "
        "please --zero the superblock on one\n"
        " If they are the same or overlap,"
        " please remove one from %s.\n",
        devices[best[i]].devname, devname,
        inargv ? "the list" :
                 "the\n DEVICE list in mdadm.conf");

Drats!  The mirrored fake raid had already mangled my second drive by duplicating the superblock!  Plus, since all this was going on in dracut, I couldn't fix it.  So I removed the nodmraid option in grub during boot and dug a little deeper.  I found that I could keep dracut from doing all this nonsense by adding the following kernel options:

rd_NO_MD rd_NO_DM nodmraid

This allows for a minimal boot without dmraid or mdadm.  After that I was dropped into single user mode with the duplicate superblock message.  Fixing it required zeroing the superblock of sdd:

mdadm --zero-superblock /dev/sdd1

And then rebooting (again!)

Once past this things started working somewhat normally.  To get my raid mirrored again I did the normal thing:

# mdadm --manage /dev/md_d0 --add /dev/sdd1

To get rid of the false-positive fake raid setup I found that you can do this with the dmraid tool itself:

[root@mirth ~]# dmraid -E -r /dev/sdd

Do you really want to erase "sil" ondisk metadata on /dev/sdd ? [y/n] :y

[root@mirth ~]# dmraid -E -r /dev/sdc

Do you really want to erase "sil" ondisk metadata on /dev/sdc ? [y/n] :y

The really odd thing about this whole incident is that I never had these drives in a fake raid setup before.

In any case, I hope this helps the few other people who might hit this same problem.

Paul Lindner

Gopher on MTV

1 min read

I dug this little gem out of the archives.  Enjoy!

Gopher World Tour T-Shirt on MTV

Paul Lindner

Email Clients Full Circle

2 min read

In the beginning I used elm to read my mail.  This was somewhat radical, especially as I worked with the team that created POPMail for the Mac and Minuet for the PC, and everyone else moved to pine.  Then came Mutt -- happy days -- I was able to slice and dice email with amazing speed.

A couple of years ago I converted over to Mail.app, mostly because of the contacts and calendar integrations, and the fact that I could merge personal and corp email accounts.  In the intervening time I had to move to Comcast, which meant running my own imap server proved more difficult than it was worth, so I moved to Google Apps for Your Domain.  Suddenly my personal domain is running Gmail, and I discovered it has key bindings.

All of a sudden it's mutt deja-vu. Navigation with vi j/k keys? Yes.  Single window view (inbox/message)? Yes again.  Tagging messages? Yes.  Blazingly fast? You bet.  The only thing I miss is keystroke filtering of messages.

That's one reason why I see things like Google Wave working out so well.  I might be late to the gmail party, but plenty of folks have been using this as their primary mode of communication for a long, long time.

Paul Lindner

Tomcat and SSL Accelerators

3 min read

Using an SSL accelerator like a Netscaler is really useful: you can offload a lot of work to a device that handles SSL in hardware and can use SSL session affinity to send requests to the same backend.  In the simplest setup the SSL accelerator accepts the request and proxies it to your internal set of hosts running on port 80.

However, code that generates redirects and URLs works poorly, because servletRequest.getScheme(), getSecure() and getServerPort() will return http/false/80 for both SSL and non-SSL connections.
One way to solve this is to listen on multiple ports.  Create a Connector on 80 and another on 443, run SSL on neither, and configure the 443 Connector with secure="true" and scheme="https".  This is suboptimal, however: you have to manage yet another server pool in your load balancer, and you end up sending twice the health checks.  Not so good.
You might try to solve this with a ServletFilter, using an HttpServletRequestWrapper instance to change the scheme, port, and secure flag.  Overriding these lets application logic see the updated values, but it breaks down when you call encodeRedirectURL() or sendRedirect() with non-absolute URLs: because of the way Tomcat implements HttpServletResponse, it consults the original request object to determine the scheme, secure flag, and port.
Lucky for us, Tomcat supports a way to inject code into the connection handling phase via Valves.  A valve can query and alter the Catalina and Coyote request objects before the first filter is run.
To make your Valve work you'll need to configure your load balancer to send a special header when SSL is in use.  On the Netscaler this can be done by setting owa_support on.  With that enabled, the HTTP header Front-End-Https: On is sent for requests that use SSL.
Once we have these pieces in place the Valve is fairly straightforward:

import java.io.IOException;

import javax.servlet.ServletException;

import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.valves.ValveBase;

public class NetscalerSSLValve extends ValveBase {

    @Override
    public void invoke(Request req, Response resp) throws IOException, ServletException {
        // The Netscaler sends "Front-End-Https: On" for SSL-terminated requests
        if ("On".equals(req.getHeader("Front-End-Https"))) {
            req.setSecure(true);
            req.getCoyoteRequest().scheme().setString("https");
            req.getCoyoteRequest().setServerPort(443);
        }
        if (getNext() != null) {
            getNext().invoke(req, resp);
        }
    }
}

Compile this, stick it in the Tomcat lib directory, add an entry in your server.xml, and away you go.
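For reference, here's a sketch of what that server.xml entry might look like (the placement inside the Engine element and the package-less class name are assumptions based on this post; adjust for your deployment):

```xml
<!-- server.xml: register the valve so it runs before any filters -->
<Engine name="Catalina" defaultHost="localhost">
  <Valve className="NetscalerSSLValve" />
  <Host name="localhost" appBase="webapps" />
</Engine>
```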

Paul Lindner

Google I/O Today

1 min read

Speaking at "Meet the Containers", "Shindig 101" and "OpenSocial Fireside Chat".

All at Moscone West, check it out!

http://code.google.com/events/io/

Paul Lindner

The Mysteries of Java Character Set Performance

4 min read

"Two Characters Sets?  Seems like plenty!"

So I've been pushing Java to its limits lately and finding some real nasty concurrency issues inside the JRE code itself.  Here's one particularly ugly one -- we had 700 threads stuck here:

       java.lang.Thread.State: BLOCKED (on object monitor)
         at sun.nio.cs.FastCharsetProvider.charsetForName(FastCharsetProvider.java:118)
         - waiting to lock <0x00002aab4cdf91b8> (a sun.nio.cs.StandardCharsets)
         at java.nio.charset.Charset.lookup2(Charset.java:450)
         at java.nio.charset.Charset.lookup(Charset.java:438)
         at java.nio.charset.Charset.isSupported(Charset.java:480)
         at java.lang.StringCoding.lookupCharset(StringCoding.java:85)
         at java.lang.StringCoding.decode(StringCoding.java:165)
         at java.lang.String.<init>(String.java:516)
Digging deeper we find that lookupCharset is called all over the place.  The app in question functions as a web proxy, so it's constantly reading and writing web pages in a variety of character sets.  The method charsetForName() uses a synchronized data structure to look up defined character sets.  (Yay, serialized access...)
But wait, lookup and lookup2 provide us with a cache so we can avoid the big bad synchronized method...  Sigh, here's the implementation:
     private static Charset lookup(String charsetName) {
         if (charsetName == null)
             throw new IllegalArgumentException("Null charset name");
 
         Object[] a;
         if ((a = cache1) != null && charsetName.equals(a[0]))
             return (Charset)a[1];
         // We expect most programs to use one Charset repeatedly.
         // We convey a hint to this effect to the VM by putting the
         // level 1 cache miss code in a separate method.
         return lookup2(charsetName);
     }
 
     private static Charset lookup2(String charsetName) {
         Object[] a;
         if ((a = cache2) != null && charsetName.equals(a[0])) {
             cache2 = cache1;
             cache1 = a;
             return (Charset)a[1];
         }
 
         Charset cs;
         if ((cs = standardProvider.charsetForName(charsetName)) != null ||
             (cs = lookupExtendedCharset(charsetName))           != null ||
             (cs = lookupViaProviders(charsetName))              != null)
         {
             cache(charsetName, cs);
             return cs;
         }
 
         /* Only need to check the name if we didn't find a charset for it */
         checkName(charsetName);
         return null;
     }
Yes, a whopping 2-entry cache!!
Also, the keys used are not canonical, so if my app asks for "UTF-8", "utf-8", and "ISO-8859-1" with regularity, this 2-entry cache is worthless: every call ends up blocking in the evil thread-synchronized data structure.
Someone send them a copy of the ConcurrentHashMap doc.  please.
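As a sketch of one workaround (the class and method names here are mine, not the JRE's): memoize resolved Charset objects in a ConcurrentHashMap, keyed by the exact spelling the caller uses, so each distinct name hits the synchronized provider at most once:

```java
import java.nio.charset.Charset;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CharsetCache {
    // Cache keyed by the name exactly as the caller spells it,
    // so "UTF-8" and "utf-8" each resolve only once.
    private static final ConcurrentMap<String, Charset> CACHE =
            new ConcurrentHashMap<String, Charset>();

    public static Charset forName(String name) {
        Charset cs = CACHE.get(name);
        if (cs == null) {
            // Only the first lookup of each spelling pays for the
            // synchronized provider inside Charset.forName().
            cs = Charset.forName(name);
            Charset prev = CACHE.putIfAbsent(name, cs);
            if (prev != null) {
                cs = prev;  // another thread won the race; use its instance
            }
        }
        return cs;
    }

    public static void main(String[] args) {
        // Decode with a cached Charset object instead of a name,
        // which also bypasses StringCoding's per-call lookup.
        byte[] bytes = {104, 105};
        System.out.println(new String(bytes, CharsetCache.forName("utf-8")));  // prints "hi"
    }
}
```

Passing a Charset object (rather than a String name) to the String and OutputStreamWriter constructors sidesteps the lookup entirely; that overload exists as of Java 6.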
....

Paul Lindner

Social Graph Meat-up

1 min read

Dinner not for vegans at O'Reilly.

Social Graph Meat-up


Paul Lindner

Tired

1 min read

Why am I so tired?

Been working hard to implement the features described here:

hi5 Launches New Music Applications By iLike and Qloud

No more music royalties for hi5.  Cost center is now a profit center...


Paul Lindner

Bugathon!

1 min read

'nuff said...

Paul Lindner

OpenSocial Roundup

3 min read

At hi5 we've been busy, busy, busy getting OpenSocial up and running.  We released our developer sandbox and are rapidly implementing features.  So check out the following URLs:

Campfire One Highlights: Introducing OpenSocial


Also, here's a copy of my response to Tim O'Reilly's blog post:

OpenSocial: It's the data, stupid

Hi folks,

Good comments all around. However I'd like to posit that data access is _not_ the problem. We've had universal standards for years now with little uptake. Tribe.net, Typepad, LiveJournal and others have supported FOAF for many, many years, which encompasses the OpenSocial Person and Friends APIs. Not much has come of that -- there isn't a large enough base there to get people interested.

Now you have a broad industry consensus on a single way to provide all of the above plus activity stream data. You have a rich client platform that allows you to crack open that data and use it in interesting ways, and finally you have a common standard for social networks to interact with each other based on the REST api.

So Patrick's statement at the Web 2.0 Expo is correct: an app running inside a container only allows you to see what that container shows you. However, that does not mean that a container could not contain friend references to external social networks via its own federation mechanism. Movable Type 4.0 has shown that you can support any OpenID login in a single system; there's no reason to believe that social networks could not leverage OAuth to do the same.

And here's a final point to consider -- you have Myspace opening up to developers. That's huge. That alone is going to draw more developer attention to this problem than much of the oh-so academic discussions of the past few years.

I suggest people who _want_ OpenSocial to solve all the social graph ills get involved on the API mailing list and make sure those elements are addressed as OpenSocial evolves.

There's a tremendous amount of momentum. Let's not waste this chance.

Paul Lindner

11/1/07

1 min read

Paul Lindner

11/1/07

1 min read

Paul Lindner

ILike at Campfire One

1 min read

In hi5, Orkut, and Ning!

Paul Lindner

Suggestions

1 min read

This has got to be a bug....

Dear Amazon.com Customer,

We've noticed that customers who have purchased or rated White Noise Critical: Text and Criticism (Viking Critical Library) by Don DeLillo have also purchased Caught in the Machinery: Workplace Accidents and Injured Workers in Nineteenth-Century Britain by Jamie Bronstein. For this reason, you might like to know that Caught in the Machinery: Workplace Accidents and Injured Workers in Nineteenth-Century Britain will be released on October 10, 2007.  You can pre-order yours by following the link below.

Caught in the Machinery: Workplace Accidents and Injured Workers in Nineteenth-Century Britain
Jamie Bronstein
Price:    $55.00
Release Date: October 10, 2007

Paul Lindner

Found in Hi5 Lunch Room

1 min read




Update:  On the back we find the fine, fine web site http://www.rapsnacks.com/ (Enter if you dare!) and a bio of Romeo, a rapper I have never heard of, but my colleague Brett tells me was once a featured artist on Hi5.