Thursday 3 January 2013

Big data software techniques in automation - as an analogy & in use

I really liked this post (HT @stewarttownsend):

http://gigaom.com/data/why-big-data-might-be-more-about-automation-than-insights/

As we (@sabisu) do a lot of work in the process industries (oil & gas, chemicals, manufacturing), the analogy between 'big data' techniques and robotic manufacturing processes really worked for me.

What do robots do in manufacturing? We give them small tasks that require relentless repetition and accuracy. Robots don't do anything you couldn't do by hand, but they're many times faster, more reliable and less expensive.

That's a pretty good analogy for many of the big data software tools, which work by distributing repetitive tasks across many processing nodes - work that demands accuracy, speed, reliability and low cost (in all senses). It's a good fit. The 'reduce' part of MapReduce, for example, is in effect a software robot executing an algorithm; and for a Hadoop cluster, read any redundant architecture in your manufacturing process.

It's easy to see this translating to the real world. Need to aggregate all the flaring incidents at your petrochems plant? MapReduce will do what your Climate Change Manager might spend hours collating.
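Here's a minimal sketch of the idea in plain Python rather than a real distributed Hadoop job; the log format, unit names and incident data are made up for illustration:

```python
from functools import reduce
from collections import Counter

# Hypothetical incident log: timestamp, process unit, event type.
log_lines = [
    "2012-12-01,FCC-1,flaring",
    "2012-12-01,CDU-2,normal",
    "2012-12-02,FCC-1,flaring",
    "2012-12-03,FCC-1,flaring",
]

# Map: emit (unit, 1) for every flaring incident in a log line.
def map_phase(line):
    timestamp, unit, event = line.split(",")
    return [(unit, 1)] if event == "flaring" else []

# Reduce: the 'software robot' - relentlessly sum the counts per unit.
def reduce_phase(totals, pair):
    unit, count = pair
    totals[unit] += count
    return totals

mapped = [pair for line in log_lines for pair in map_phase(line)]
totals = reduce(reduce_phase, mapped, Counter())
print(totals)  # Counter({'FCC-1': 3})
```

The reduce step is the robot here: a dumb, relentless tally that scales out because every node can run it independently over its own slice of the logs.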

Need resilience in data acquisition from real-time manufacturing systems? Plenty of options.

Need someone to check all the logs for benzene readings more than 1.2 standard deviations over the mean? Get an algo to do it.
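A quick sketch of that check, assuming the readings have already been parsed out of the logs as a list of numbers (the values here are invented):

```python
from statistics import mean, pstdev

# Hypothetical benzene readings (ppm) extracted from the logs.
readings = [0.8, 0.9, 1.1, 0.7, 2.4, 0.85, 0.95]

mu = mean(readings)
sigma = pstdev(readings)  # population standard deviation
threshold = mu + 1.2 * sigma

# Flag every reading more than 1.2 standard deviations over the mean.
flagged = [(i, r) for i, r in enumerate(readings) if r > threshold]
print(f"mean={mu:.2f}, threshold={threshold:.2f}, flagged={flagged}")
```

The algo doesn't get bored and doesn't skim; it runs the same way over ten readings or ten million.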

In fact, I think most 'big data' technology is needlessly expensive and perhaps sub-optimal for some of these use cases, but the analogy holds.

