Thursday, February 4, 2016

Graphing with Goats

Slides, presented comme ça! Links at the end. The presentation from the meetup is posted on YouTube. Slide-by-slide notes follow.



1.
2.
3. (I am known for teh kittehs)
4. (graphs-я-borin' is another talk I gave at SanFran GraphConnect 2015)
5. The Beach (not Plastic)
6.
7.
8. ACID means something very important to DBMSes(eses)(eses)
9. neo4j Graph of Amino Acids (data table, Haskell code)
10. (geddit? links to linkurio.us? geddit?)
11. "sed-butt, sed-butt, sed-butt" my daughters chant around the house all day

Now: Graph-applications:

12. Social Media
13. The Markets
14. The Markets (again) / Managing Complexity
15. Search / (Fraud) Detection
16. Scoping / (Requirements) Analysis
17. Clustering
18. Errybody say: "YAAAAAAYYYYYY!"
19. Links, en-text-ified:
20. Buh-bay!

Monday, February 1, 2016

January 2016 1HaskellADay Problems and Solutions

  • January 29th, 2016: Yesterday we monaded, for today's #haskell problem, we COMonad! ... with STREAMS! Oh, yeah! http://lpaste.net/2853437990695337984 onesies and twosies, duplicate to our solutionseis! http://lpaste.net/2531970919929217024
  • January 28th, 2016: Today: Monads. Tomorrow? COMonads! But today's #haskell problem: monads. http://lpaste.net/3895602141393321984 Todayed we Monaded! Oh, yeah! http://lpaste.net/8618821627204861952
  • January 27th, 2016: Today's #haskell problem: A Date-client! http://lpaste.net/3557542263343022080 Not what you're thinking, naughty children! *scold-scold*
  • January 26th, 2016: For today's #haskell problem we create a DayOfWeek web service! Woot! http://lpaste.net/5212178701889830912 (a sketch of such a µservice follows this list)
  • January 25th, 2016: Per @SirElrik idea, this week we'll do #Haskell #µservices Today's problem is to JSONify a Day -> DayOfWeek function http://lpaste.net/150850 Date, JSONified http://lpaste.net/7633349000409645056
  • January 20th, 2016: Yesterday's problem showed us MLK-day was not a trading day, but WHAT WEEK DAY WAS IT? Today's #haskell problem: http://lpaste.net/3912063664412164096 The solutioneth giveth us the dayth of the weeketh! http://lpaste.net/703919211096834048
  • January 19th, 2016: Today's #haskell problem asks: Was yesterday a #trading day? http://lpaste.net/1968281888535609344 And a #haskell solution to the trading calendar? Monoids, of course! http://lpaste.net/1299918534133940224
  • January 18th, 2016: Today's #haskell problem is a mathematical conundrum concerning poetry ... yes, poetry http://lpaste.net/4733337870415167488 Langston Hughes and Rob't Frost give us the solution: http://lpaste.net/8014739098407272448
  • January 15th, 2016: Yesterday was the Repeatinator2000, for today's #haskell problem we have the GAPINATOR3004!! YES! http://lpaste.net/1481736263689043968 Well, we see HALF the stocks are only mentioned once. But minGaps are NOT telling! Hm. http://lpaste.net/5017845158461308928 
  • January 14th, 2016: In the sea of data we look for some repeaters for today's #haskell problem http://lpaste.net/781423227393015808 AN (H)istogram? A HISTogram? eh, whatevs. #haskell soln shows LOTS of low frequency mentions http://lpaste.net/8518180312847482880
  • January 13th, 2016: One chart to rule them all, one chart to find them, one chart to bring them all, and in the darkness bind them http://lpaste.net/161563874967945216 Big Up Chart ... in #haskell, ya! http://lpaste.net/2722111763528024064 
  • January 12th, 2016: Printing out buy/sell Orders for further analysis http://lpaste.net/2893303782647529472 The charts, ... with the #haskell program that generated them: http://lpaste.net/333157576608841728


  • January 11th, 2016: Prolog. Lists. *drops mic http://lpaste.net/8013339712162889728 For the solution we represent PrologList as a difference list http://lpaste.net/3349987882864476160
  • January 8th, 2016: '$NFLX and Chili?' is today's #haskell problem http://lpaste.net/3944517274819362816 What is this fascination with eating chili whilst watching movies? Case study: $NFLX a solution with several buy/sell scenarios and some open questions remaining http://lpaste.net/6187369537755676672
  • January 5th, 2016: We are Y2K16-compliance officers for today's #haskell problem http://lpaste.net/4805789218464858112
  • January 4th, 2016: Happy New Year! Today's #haskell problem looks at the World of WarCr–... Oops, I mean the World of Work-flow! http://lpaste.net/5383485916327182336
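
A minimal sketch of the kind of Day -> DayOfWeek µservice the January 25th-27th problems describe (referenced from the January 26th entry above), written here with scotty and aeson. The route, port, and JSON shape are my own illustrative choices; the original solutions live at the lpaste links above.

    {-# LANGUAGE OverloadedStrings #-}

    -- A sketch of a Day -> DayOfWeek µservice in the spirit of the January
    -- 25th-27th problems. Route, port, and JSON shape are illustrative
    -- assumptions, not the original lpaste solution.
    import Web.Scotty
    import Data.Aeson (object, (.=))
    import Data.Time (Day, defaultTimeLocale, parseTimeM)
    import Data.Time.Calendar.WeekDate (toWeekDate)

    -- toWeekDate yields (year, week, weekday), weekday 1 = Monday .. 7 = Sunday.
    dayOfWeek :: Day -> String
    dayOfWeek d = names !! (w - 1)
      where (_, _, w) = toWeekDate d
            names = [ "Monday", "Tuesday", "Wednesday", "Thursday"
                    , "Friday", "Saturday", "Sunday" ]

    main :: IO ()
    main = scotty 3000 $
      get "/dayOfWeek/:date" $ do              -- e.g. GET /dayOfWeek/2016-01-18
        str <- param "date"
        case parseTimeM True defaultTimeLocale "%Y-%m-%d" str :: Maybe Day of
          Just d  -> json $ object ["date" .= str, "dayOfWeek" .= dayOfWeek d]
          Nothing -> json $ object ["error" .= ("unparseable date: " ++ str)]

Hitting /dayOfWeek/2016-01-18 (the MLK-day date from the January 18th-20th problems) answers "Monday" in a little JSON object.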

Tuesday, January 5, 2016

December 2015 1HaskellADay 1-Liners

One-liners
  • December 30th, 2015: You have a string of 'digits' in base whatever to convert to an Int
    debase :: [Int] -> Int -> Int
    debase [12,21,3] 26 ~> 8661
    Define debase
    • Gautier DI FOLCO @gautier_difolco
      import Data.Bifunctor
      debase = curry (sum . uncurry (zipWith (*)) . bimap reverse (flip iterate 1 . (*)))
    • bazzargh @bazzargh
      debase a b = sum $ zipWith (*) (reverse a) (map (b^) [0..])
    • obadz @obadzz
      or debase l b = foldl (\ p n -> p * b + n) 0 l
    • bazzargh @bazzargh
      that's better than mine. how about:
      flip (foldl1 . ((+) .) . (*))
  • December 12th, 2015: #math You have this sequence: [1,1,1,1,1,1,2,1,1,1,3,3] What is this pattern? Is there one? Write #haskell to generate this list.
  • December 3rd, 2015: Lens-y again. Points-free-itize the following:
    correctName :: Row -> Row
    correctName r = set lastName (init (view lastName r)) r
  • December 3rd, 2015: Let's get a little lens-y with this one:
    accumer :: Getter a t a -> t -> [a] -> [a]
    accumer f s acc = ans where ans = view f s:acc
    • Define the curried-accumer function that curries away the acc-argument.
    • What would the curried definition be if the function type were: accumer :: Getter a t a -> t -> Set a -> Set a
  • December 3rd, 2015: Define minimax :: Ord eh => eh -> (eh, eh) -> (eh, eh) such that, e.g.:
    minimax 1 (2,3) ~> (1,3)
    minimax 10 (2,3) ~> (2,10)
    • Thomas Dietert @thomasdietert In that case, minimax n (x,y) = (minimum [n,x,y], maximum [n,x,y])
    • joomy @cattheory minimax = liftM2 (***) min max
  • December 1st, 2015: Define
    (->>) :: (a -> m ()) -> (a -> m ()) -> a -> m ()
    All monadic effects must be evaluated.
    • Jérôme @phollow (->>) f g a = f a >> g a 
      • then liftM2 (>>)
      • Nicoλas @BeRewt but the full applicative is: liftA2 (*>)

Monday, January 4, 2016

December 2015 1HaskellADay Problems and Solutions

December 2015

  • December 30th, 2015: For today's #haskell problem we convert valid airport IATA codes to ints and back http://lpaste.net/9126537884587786240 And a Happy New Year solution: http://lpaste.net/2748656525433110528 Safe travels should you be Haskelling by air!
  • December 28th, 2015: So, remember ADVENT? What happens when your INV becomes full? Today's #haskell problem looks at that http://lpaste.net/5153847668711096320
  • December 23rd, 2015: Warm and fuzzy December, so we have a warm and fuzzy #haskell problem for today http://lpaste.net/6855091060834566144
  • December 21st, 2015: For today's #haskell problem we are to deduce stock splits using LOGIC and SCIENCE http://lpaste.net/7920293407518359552
  • December 18th, 2015: Today's #haskell problem... 'may' be thematic with that 'Star ...' what was the name of that movie? http://lpaste.net/7186856307830292480 Gosh! Star ... something! eh, whatevs: just let the Wookie win (always the best policy) http://lpaste.net/4229765238565634048 
  • December 17th, 2015: For today's #haskell problem we look at reporting out periodically on an investment and, BONUS! charting it! http://lpaste.net/638111979885559808 And we've charted our AAPL investment growth, too! http://lpaste.net/5472780506909114368 
  • December 16th, 2015: For (coming onto) today's #haskell problem we demask the masked data to unmaskify it, yo! http://lpaste.net/3793517022239784960 And then the solution unmasked that masked data! (that's convenient.) http://lpaste.net/5286190256939335680
  • December 15th, 2015: So yesterday we masked some rows, but what happened to the masking dictionary? Today's #haskell problem we save it http://lpaste.net/7703498271758483456 Ah! So that's where that cipher went! http://lpaste.net/2548817005729808384
  • December 14th, 2015: We look at a way of masking data for today's #haskell problem http://lpaste.net/4428582328419221504 And the solution gives us some lovely masked rows http://lpaste.net/7169250386480463872
  • December 11th, 2015: Today's #haskell problem asks: WHAT DOES THE TRANSFORMED JSON SAY! http://lpaste.net/5253460432890363904 (okay, that was weaksauce)
  • December 10th, 2015: For today's #Haskell problem we try to find relevancy in our daily lives http://lpaste.net/4994238781251911680 ...well, in #JSON, which is the same thing.
  • December 9th, 2015: Today's #haskell problem asks you to read in some rows of #JSON http://lpaste.net/3949063995918385152 We'll be looking into this data set through the week "Watcha readin'?" "JSON." "Cool! ... No, ... wait." http://lpaste.net/218583096285462528
  • December 8th, 2015: My main man, magic Mike (m^5 to his friends) said: "You're doing it wrong." Do it right for today's #haskell problem http://lpaste.net/6186236014281883648 A first stab at the solution, not taking into account splits, is posted at http://lpaste.net/6854975053767901184 And the split-adjusted solution here: http://lpaste.net/2656933684896071680 ~3700%-gain. It's time for me to sing the "You're the Top"-song to @MacNN_Mike
  • December 7th, 2015: In today's #haskell problem we cry «On y va!» and have at it! http://lpaste.net/5243343693259210752 En garde, you pesky investment problem, you! The solution shows Dr. Evil does NOT get ONE MILLION DOLLARS! http://lpaste.net/177540921380831232 Nor piranhas with laser beams on their heads, either.
  • December 4th, 2015: We write a web-proxy to help out poor, little Ajax go cross domain for today's #haskell problem http://lpaste.net/5120963316033781760
  • December 3rd, 2015: We meet the Ip-man for today's #haskell problem. Hiya! http://lpaste.net/8926203451507998720 Today's solution shares a little-known fact http://lpaste.net/5525895581479731200 AND ALSO uses <$> as well! Is there a weekly limit on these thingies?
  • December 2nd, 2015: Synthetic data generation for today's #haskell problem http://lpaste.net/4208367754446635008 NOT ONLY did we write a Synthetic data generator in a day http://lpaste.net/6059310371951345664 but we learned all 50 states AND used <$> and <*> – BONUS!
  • December 1st, 2015: In today's #haskell problem, we see @geophf lose his cool. http://lpaste.net/6645397898311761920 No... wait... That's not news now, is it. (RID analysis). Unwordin' down-low on the arrr-eye-dee, LIKE A GANGSTA! http://lpaste.net/7371239202307964928

Tuesday, December 1, 2015

November 2015 1HaskellADay One-liners

  • November 3rd, 2015:
Why is 'a' the standard label for type-variables? If you don't care what the type is, shouldn't the type be 'eh'? #imponderables
  • November 3rd, 2015:
    {-# LANGUAGE OverloadedStrings #-}
    import Network.HTTP
    type URL = String
    respBodyAsText :: URL -> IO String
    Define respBodyAsText.
    • respBodyAsText url = simpleHTTP (getRequest url) >>= getResponseBody
  • November 2nd, 2015: 
    You have f :: a -> IO b, g :: b -> IO (), h :: b -> IO Ans
    You wish to sequence f, g, h as j :: a -> IO Ans
    Define j points-free.
    • Dimitri Sabadie @phaazon_
      fmap snd . runKleisli (Kleisli g &&& Kleisli h) . f

November 2015 1HaskellADay Problems and Solutions

November 2015

  • November 30th, 2015: Pride and Prejudice on the command-line? No. Today's #haskell problem: read in a stream http://lpaste.net/7832470045098770432 The solution defines a new Kleisli arrow. http://lpaste.net/4713907846389956608 ... AND Jane Austen prefers the pronouns SHE and HER. So there's that.
  • November 27th, 2015: Simply getting the command-line arguments for #BlackFriday #haskell problem http://lpaste.net/131531689812819968 ...and then there's that bonus. OH NOES! 'Simple' solution, am I right? http://lpaste.net/1418503822422048768
  • November 26th, 2015: A little worker-pool in #haskell wishes you Happy Thanksgiving from the #USA for today's problem: Erlangesque-Haskell http://lpaste.net/2732286163095126016 And today, a #haskell solution says ('sez') "Go get'm Black Friday dealz, yo!" http://lpaste.net/7453476641931526144 (but: caveat emptor!)
  • November 25th, 2015: Today's #haskell problem has a Secret Decoder Ring! http://lpaste.net/317245813698854912 ... as long as you use the HaHaJK-type. BREAKING: SHA1-HASH DECODED using #haskell! http://lpaste.net/7305841715271696384 Reported here first show my bonnie lies over BOTH the ocean AND the sea!
  • November 24th, 2015: For today's #haskell problem we look at parsing URI ... not Andropov https://en.wikipedia.org/wiki/Yuri_Andropov ... not today. http://lpaste.net/3031598688741883904 Today's #haskell URI-parsing exercise makes Yuri (Andropov) SAD and MAD ... Don't worry, Yuri: URIs are just a FAD http://lpaste.net/8338275656215298048
  • November 23rd, 2015: For today's #haskell problem we ride West on ol' Silver declaiming: "JSON! Ho!" http://lpaste.net/7278810874737852416 And the solution allows us to look at JSON and declaim: HA! http://lpaste.net/2880528191972179968
  • November 20th, 2015: Today's #haskell problem comes with honorable mentions and stuff! http://lpaste.net/7575693578471473152 ♫ My heart...beats...fasta!
     
    ... AAANNNNNDDDDD our solution, down to 4.6 seconds from 151 seconds. http://lpaste.net/2479927048856928256 Not a bad start!
  • November 19th, 2015: In today's #haskell problem we say: '@geophf your RID-analyzer is SO efficient!' http://lpaste.net/6802158616863309824 ... NOT! Update: today geophf cries Efficienc-me? No! Efficienc-you! http://lpaste.net/7547436765292789760
  • November 18th, 2015: Today JSON and the Argonauts sail off into the high seas of the RID to adventures beyond the Thunderdome! http://lpaste.net/7016479864345591808 No...wait.
  • November 17th, 2015: Today's #haskell problem generates a report with no title... o! the irony! http://lpaste.net/4139233297970495488 The solution shows Jane Austen getting her Aggro on ... even if just a little bit http://lpaste.net/8111201736003158016 
  • November 16th, 2015: New Regressive Imagery Dictionary/RID(-structure)? That means New-NEW JSON for today's #Haskell problem http://lpaste.net/40452467304955904 And there is the RID, in all its JSON-iferific glory! http://lpaste.net/262135232898007040
  • November 13th, 2015: Today's #haskell problem–Project RIDenberg–classifies a big-ole document with FULL! ON! RID! http://lpaste.net/2340251327956779008 (exclamation mandatory) Today's solution shows us that the RID is as fascinating as ... well: Mr. Wickham. http://lpaste.net/5788646192996941824 (There. I said it.)
  • November 11th, 2015: Today's #haskell problem goes the Full Monty... NO! WAIT! NOT 'MONTY'! WE GO FULL RID! (Regressive Imagery Dictionary) http://lpaste.net/2598821222603030528 ... annnnnndddd this #haskell solution gives us the full RID as a graph http://lpaste.net/3885537396636254208 
  • November 10th, 2015: For today's #haskell problem we look at parsing a (small) document and matching it to a (small) RID http://lpaste.net/237534433320632320 QWERTY-style! Our solution (also) answers that age-old nagging question: "What DOES the fox say?" http://lpaste.net/6858775962385907712 … No, really: I need to know this.
  • November 9th, 2015: LAST week we looked at cataloguing the RID/Regressive Imagery Dictionary. Today's #haskell problem completes that. http://lpaste.net/7116276040808267776 Not that every problem and every solution can be modeled as a Graph, but ... the solution-as-Graph is here: http://lpaste.net/2599543283914899456 *blush
  • November 6th, 2015: Today's #haskell problem looks at the RID as Friends and Relations http://lpaste.net/3230687675795111936 ... actually: it just looks at RID-as-relations Ooh! Pritty Bubblés for a solution to the RID-as-relations problem http://lpaste.net/3885537396636254208 
  • November 5th, 2015: Today's #haskell problem is to JSONify the RID because JSON, and because indentation as semantic-delimiters is weird http://lpaste.net/3601240609232257024 A solution shows PRIMARY-RID-JSON in 420 lines, as opposed to the raw text at over 1800 lines. Cool story, bro! http://lpaste.net/2808636942017626112
  • November 4th, 2015: For today's #Haskell problem please Graph that RID! YEAH! http://lpaste.net/7822865785260343296 I WANT YOUR SEX(y graph of the RID) poses a solution at http://lpaste.net/5625628956930605056
  • November 3rd, 2015: YESTERDAY we used a Python program to map a document to the RID in #Haskell TODAY we map part of the RID to #Haskell http://lpaste.net/4829042411923570688 A solution gives us a Pathway to BACON, http://lpaste.net/3419022958092353536 ... because priorities.
  • November 2nd, 2015: We look at EQ for today's #Haskell problem; not the Class Eq, but the Emotional Quotient of a document. Fun! http://lpaste.net/856570908666494976 runKleisli (from @phaazon_) and (>=>) to the rescue for the solution today http://lpaste.net/5618280040253882368

Tuesday, November 24, 2015

(Really) Big Data: from the trenches

Okay, people throw around 'big data' experience, but what is it really like? What does it feel like to manage a petabyte of data? How do you get your hands around it? What is the magic formula that makes it all work seamlessly, without Bridge Lines opened on Easter Sunday with three Vice Presidents on the line asking for status updates by the minute during an application outage?

Are you getting the feel for big data yet?

Nope.

Big data is not terabytes. 'Normal'/SQL databases like Oracle or DB2 or GreenPlum or whatever can manage those, and big-data vendors don't have a qualm about handling your 'big data' of two terabytes, even as they scoff into your purchase order.

"I've got a huge data problem of x terabytes."

No, you don't. You think you do, but you can manage your data just fine and not even make Hadoop hiccough.

Now let's talk about big data.

1.7 petabytes
2.5 billion transactions per day.
Oh, and growing to SIX BILLION transactions per day.

This is my experience. When the vendor has to write a new version of HBase because their version that could handle 'any size of data, no matter how big' crashed when we hit 600 TB?

Yeah. Big data.

So, what's it like?

Storage Requirements/Cluster Sizing


1. Your data is bigger than you think it is/bigger than the server farm you planned for it.

Oh, and 0. first.

0. You have a one-million-USD budget ... per month.

Are you still here? Because that's the kind of money you have to lay out for the transactional requirements and storage requirements you're going to need.

Get that lettuce out.

So, back to 1.

You have this formula, right? From the vendor, the one that says: elastic replication is at 2.4, so for 600 TB you need 1.2 petabytes of space.

Wrong.

Wrong. Wrong. WRONG.

First: throw out the vendors' formulae. They work GREAT for small data in the lab. They suck for big data IRL.

Here's what happens in industry.

You need a backup. You make a backup. A backup is the exact same size as your active HTables, because the HTables are already compressed, in bz2 format.

Double the size of your cluster for that backup-operation.

Not a problem, right? You just shunt that TWO PETABYTE BACKUP to AWS S3?!?!?

Do you know how long that takes?

26 hours.

Do you know how long it takes to do a restore from backup?

Well, boss, we have to load the backup from S3. That will take 26 hours, then we ...

Boss: No.

Me: What?

Boss: No. DR ('disaster recovery') requires an immediate switch-over.

Me: Well, the only way to do that is to keep the backup local.

Boss: Okay.

Double the size of your cluster, right?

Nope.

What happens if the most recent backup, that is, today's backup, is corrupted? Because you're backing up every day, just before the ETL-run and then right after the ETL-run, because you CANNOT have data corruption here, people. You just can't.

You have to go to the previous backup.

So now you have two FULL HTable backups locally on your 60-node cluster!

And all the other backups are shunted, month-by-month, to AWS S3.

Do you know how much 2 petabytes, then 4 petabytes, then 6 petabytes in AWS S3 costs ... per month?

So, what to do then?

You shunt the 'old' backups, older than x years old, every month, to Glacier.

Yeah, baby.

That's the first thing: your cluster is 3 times the size of what the formula says it needs to be, or else you're dead in one month. Personal experience bears this out. First, you need the wiggle room, or else you stress out the poor nodes of your poor cluster, and you start getting HBase warnings, and then critical error messages, about space utilization. Second, you need that extra space when the ETL job loads in a billion-row transaction out of the 2.5 billion transactions you're loading in that day.

Been there. Done that.
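
If you want to poke at the storage arithmetic yourself, here is a tiny back-of-the-envelope sketch in Haskell. The 600 TB, 2.4 replication, two local backups, 2 PB backup, and 26 hours are the figures from this post; the rest is rough, order-of-magnitude estimation, not a sizing formula to trust any more than the vendor's.

    -- Back-of-the-envelope storage sizing, using the figures quoted in this post.
    -- The vendor formula: raw data times replication. Reality: add space for
    -- (at least) two full on-cluster backups, and then ETL headroom on top.
    rawTB, replicationFactor, localBackups :: Double
    rawTB             = 600   -- active HTable data, in TB
    replicationFactor = 2.4   -- the replication figure quoted above
    localBackups      = 2     -- today's backup plus the previous one, kept local for DR

    vendorEstimateTB, realisticTB :: Double
    vendorEstimateTB = rawTB * replicationFactor              -- what the lab formula says
    realisticTB      = vendorEstimateTB * (1 + localBackups)  -- ~3x: the "3 times" above
    -- ... and that is before headroom for the billion-row ETL loads just mentioned.

    -- The 2 PB backup that took 26 hours to reach S3 works out to roughly:
    backupGB, transferSeconds, aggregateGBperSec :: Double
    backupGB          = 2 * 1000 * 1000              -- 2 PB, expressed in GB
    transferSeconds   = 26 * 3600
    aggregateGBperSec = backupGB / transferSeconds   -- ~21 GB/s, sustained, cluster-wide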

Disaster Recovery


Okay, what about that DR, that Disaster Recovery?

Your 60-node cluster goes down. Because, first, you're not an idiot: you didn't build a data center and put all those computers in there yourself; you shunted all that to Amazon and let them handle that maintenance nightmare.

Then the VP of the AWS Oregon region contacts you and tells you everything's going down in that region: security patch. No exceptions.

You had a 24/7 contract with 99.999% availability with them.

Sorry, Charlie: you're going down. A hard shutdown. On Thursday.

What are you going to do?

First, you're lucky if Amazon tells you: they usually just do it and let you figure it out on your own. So that means you have to be ready, at any time, for the cluster to go down for no apparent reason.

We had two separate teams monitoring our cluster: 24/7. And they opened that Bridge Line the second a critical warning fired.

And if a user called in and said the application was non-responsive?

Ooh, ouch. God help you. You have not seen panic in ops until you see it when one user calls in and, come to find out, it's because the cluster is down and no warning caught it.

Set up monitoring systems on your cluster. No joke.

With big data, your life? Over.

Throughput


Not an issue. Or, rather, it becomes an issue when you're shunting your backup to S3 and the cluster gets really slow. We had 1600 users we rolled out to, and we stress-tested it, you know. Nobody had problems during normal operations; it's just that when you ask the cluster to do something like an ETL run or a backup transfer, that engages every disk on every node in reads and writes.

A user request hits all your region servers, too.

Do your backups at 2 am or on the weekends. Do your ETL after 10 pm. We learned to do that.

Maintenance


Amazon is perfect; Amazon is wonderful; you'll never have to maintain or monitor your cluster again! It's all push-of-the-button.

I will give Amazon this: we had in-house clusters with in-house teams monitoring our clusters, 'round the clock. Amazon made maintenance this: "Please replace this node."

Amazon: "Done."

But you can't ask anything other than that. Your data on that node? Gone. That's it, no negotiations. But Hadoop/HBase takes care of that for you, right? So you're good, right?

Just make sure you have your backup/backout/DR plans in place and tested with real, honest-to-God we're-restarting-the-cluster-from-this-backup data or else you'll never know until you're in hot water.

Vendors


Every vendor will promise you the Moon ... and 'we can do that.' Every vendor believes it.

Then you find out what's what. We did. Multiple times, with multiple vendors. Most couldn't handle our big data when push came to shove, even though they promised they could handle data of any size. They couldn't. Or they couldn't handle it in a manageable way: if the ETL process takes 26 hours and it's daily, you're screwed. Our ETL process got down to 1.5 hours, but that was after some tuning on their part and on ours: we had four consultants from the vendor in-house every day for a year running. Part of our contract agreement. If you are blazing the big data trail, your vendor is, too: we were inventing stuff on the fly just to manage the data coming in, and to ensure the data came out in quick, responsive ways.

You're going to have to do that, too, with real big data, and that costs money. Lots.

And it also costs you the effort of cutting through what vendors are saying to you versus what their product can actually handle. Their sales people have their sales pitch, but what really happened is that we had to go through three revisions of their product just so it could be a Hadoop HBase-compliant database that could handle 1.7 petabytes of data.

That's all.

Oh, and grow by 2.5 billion rows per day.

Which leads to ...

Backout/Aging Data


Look, you have big data. Some of it's relevant today; some of it isn't. You have to separate the two, clearly and daily. If you don't, then a month, two months, two years down the road you're screwed, because you're now dealing with a full-to-the-gills cluster AND having to disambiguate data you've entangled, haven't you? ... with the promise of looking at aging data gracefully ... 'later.'

Well, later is right now, and your cluster is full and in one month it's going critical.

What are you going to do?

Have a plan to age data. Have a plan to version data. Have a data-correction plan.

These things can't keep being pushed off to be considered 'later,' because 'later' will be far too late, and you'll end up crashing your cluster (bad) or corrupting your data when you slice and dice it the wrong way, come to find (much, much worse). Oh, and version your backups, tying them to the application version: when you upgrade your application, your old data gets all screwy on the new version, and your new data format gets all screwy on the old application when somebody pulls up a special request to view three-year-old data.

Have a very clear picture of what your users need, the vast majority of the time, and deliver that and no more.

We turned a 4+ hour query on GreenPlum, one that terminated when it couldn't deliver a 200k+ row result...

Get that? 4+ hours to learn your query failed.

No soup for you.

To a 10 second query against Hadoop HBase that returns 1M+ rows.

Got that?

We changed peoples' lives. What was impossible before for our 1600 users was now in hand in 10 seconds.

But why?

Because we studied all their queries.

One particular query was issued 85% of the time.

We built our Hadoop/HBase application around that, and shunted the other 15% of the queries to other tools that could manage that load.

Also, we studied our users: all their queries were on transactions from within the last month.

We kept two years of data on-hand.

Stupid.

And that two years grew to more, month by month.

Stupider.

We had no graceful data aging/versioning/correcting plans, so, 18 months into production we were faced with a growing problem.

Growing daily.

The users do queries going back up to a month? No problem: here's your data in less than 10 seconds, guaranteed. You want to do research? You put in a request.

Your management has to put their foot down. They have to be very clear what this new-fangled application is delivering and the boundaries on what data they get.

Our management did, for the queries, and our users loved us. Going from 'you put in a query, it takes four hours, and only 16 queries are allowed to run against the system at any one time' to 'anyone, anywhere can submit a query and it returns right away'?

Life-changing, and we did psychological studies as well as user-experience studies, too, so I'm not exaggerating.

What our management did not do is put bounds on how far back you could go into the data set. The old application had a 5 year history, so we thought two years was good. It wasn't. Everybody only queried on today, or yesterday, or, rarely: last week or two weeks ago. We should have said: one month of data. You want more, submit a request to defrost that old stuff. We didn't and we paid for it in long, long meetings around the problem of how to separate old data from new and what to do to restore old data, if, ever (never?) a request for old data came. If we had a monthly shunt to S3 then to Glacier, that would have been a well-understood and automatic right-sizing from the get-go.

You do that for your big data set.
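
To put rough numbers on the hot-versus-cold split, here is a quick Haskell estimate using the 2.5 billion rows/day and 1.7 petabyte figures from this post. The average row size is back-derived from those two numbers, so treat it as an assumption, not a measurement.

    -- Rough hot-tier sizing for a data-aging plan: keep one month hot,
    -- shunt everything older to S3 and then on to Glacier.
    rowsPerDay, totalPB, retainedDays :: Double
    rowsPerDay   = 2.5e9   -- daily transaction volume quoted in this post
    totalPB      = 1.7     -- what roughly two years of history actually weighed
    retainedDays = 730     -- the two years we (unwisely) kept on-hand

    bytesPerRow :: Double  -- ~930 bytes/row: back-derived, so an assumption
    bytesPerRow = totalPB * 1e15 / (retainedDays * rowsPerDay)

    hotTierTB :: Double    -- ~70 TB hot, versus 1.7 PB kept hot "just in case"
    hotTierTB = 30 * rowsPerDay * bytesPerRow / 1e12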

Last Words


Look. There's no cookbook or "Big Data for Dummies" that is going to give you all the right answers. We had to crawl through three vendors to get to one who didn't work out of the box but who could at least work with us, night and day, to get to a solution that could eventually work with our data set. So you don't have to do that. We did that for you.

You're welcome.

But you may have to do that anyway, because you're using Brand Y, not our Brand X, or you're using graph databases, not Hadoop, or you're using Hive, or you're using ... whatever. Vendors think they've seen it all, and then they encounter your data set, with its own particular quirks.

Maybe, or maybe it all will magically just work for you.

And let's say it does all magically work, and let's say you've got your ETL tuned, and your HTables properly structured for fast in-and-out operations.

Then there's the day-to-day grind of keeping a cluster up and running. If your cluster is in-house ... good luck with that. Have your will made out and ready for when you die from stress and lack of sleep. If your cluster is from an external vendor, just be ready for the ... eh ... quarterly, at least, ... times they pull the rug out from under you, sometimes without telling you and sometimes without reasonably fair warning, so it's nights and weekends for you to prep, with all hands on deck and everybody looking at you for answers.

Then, ... what next?

Well: you have big data? It's because you have Big Bureaucracy. The two go together, invariably. That means your Big Data team is telling you they're upgrading from HBase 0.94 to HBase whatever, and that means all your data can go bye-bye. What's your transition plan? We're phasing in that change next month.

And then somebody inserts a row in the transaction, and it's ... wrong.

How do you tease a transaction out of an HTable and correct it?

An UPDATE SQL statement?

Hahaha! Good joke! You so funny!

Tweep: "I wish twitter had an edit function."

Me: Hahaha! You so funny!

And, ooh! Parallelism! We had, count'm, three thousand region servers for our MapReduce jobs. You got your hands around parallelism? Optimizing MapReduce? Monitoring the cluster as the next 2.5 billion rows are processed by your ETL-job?

And then a disk goes bad, at least once a week? Stop the job? Of course not. Replace the disk (which means replacing the entire node because it's AWS) during the op? What are the impacts of that? Do you know? What if two disks go down during an op?

Do you know what that means?

At a replication factor of 2.4, two bad disks mean that one more disk going bad gives you a real possibility of data corruption.

How are your backups doing? Are they doing okay? Because if they're on the cluster, now your backups are corrupted. Have you thought of that?

Think about that.
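
To put a number on "real possibility," here is a rough Haskell sketch of the odds of losing every copy of some block when disks die, assuming replicas are spread uniformly at random across nodes. The 60 nodes come from this post; the whole-copy replication of 3, the block count, and the uniform-placement assumption are mine (real HDFS placement is rack-aware, and the 2.4 above is an average).

    -- Rough odds of a block losing all of its replicas when disks fail.
    -- 60 nodes is from this post; 3 replicas, ~10M blocks, and uniform
    -- random placement are illustrative assumptions.
    choose :: Integer -> Integer -> Integer
    choose n k = product [n - k + 1 .. n] `div` product [1 .. k]

    nodes, replicas, failedDisks, blocks :: Integer
    nodes       = 60
    replicas    = 3                  -- whole-copy replication; the 2.4 above is an average
    failedDisks = 3                  -- the "one more disk goes bad" scenario
    blocks      = 10 * 1000 * 1000   -- plausible block count at this scale (assumption)

    pBlockLost :: Double   -- chance one given block had every replica on a failed disk
    pBlockLost = fromIntegral (choose failedDisks replicas)
               / fromIntegral (choose nodes replicas)

    expectedLostBlocks :: Double   -- ~10,000,000 / 34,220: a few hundred blocks gone outright
    expectedLostBlocks = fromIntegral blocks * pBlockLost

With only two failed disks, pBlockLost comes out to zero at replication 3; it is that third failure where the corruption risk materializes, which is exactly the point above.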

And, I think I've given enough experience-from-the-trenches for you to think on when spec'ing out your own big data cluster. Go do that and (re)discover these problems and come up with a whole host of fires you have to put out on your own, too.

Hope this helped. Share and enjoy.

cheers, geophf