When do we learn to trust the machine?

[Image: an inpainting demo in which a giraffe is seamlessly removed from a photograph]

Machine learning can sometimes look like magic, as in the example above. The problem shown, called inpainting, is widely studied in image processing. As you can see, whatever magic the scientists applied was powerful enough to make one of our tall friends from nature disappear altogether.
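For the curious, the core idea behind many inpainting methods is to fill in the missing pixels so they stay consistent with the structure of the rest of the image. Here is a toy sketch of that idea in NumPy: not the paper's weighted sparse NMF, just plain iterative low-rank completion on a made-up rank-2 "image".

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": an exactly rank-2 matrix, so a low-rank model can recover it.
U = rng.normal(size=(30, 2))
V = rng.normal(size=(2, 30))
image = U @ V

# Mask out a rectangular block of pixels (the part we want to inpaint).
known = np.ones(image.shape, dtype=bool)
known[10:18, 12:20] = False

# Iterative low-rank completion: guess the missing pixels, fit a rank-2
# SVD approximation to the whole matrix, re-impose the known pixels, repeat.
X = np.where(known, image, image[known].mean())
for _ in range(500):
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    fit = (u[:, :2] * s[:2]) @ vt[:2]
    X = np.where(known, image, fit)

err = np.abs(fit - image)[~known].max()  # error on the inpainted block only
```

On this toy example the masked block is recovered almost exactly, because the rest of the image pins down the low-rank structure; real photographs need the heavier machinery of the cited paper.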

What may surprise you, though, is that the powerful technique they used is not far from algorithms that could be solving business problems such as next-best-offer or customer segmentation. The catch: we just don't use them.
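To make the business side concrete, here is a minimal customer-segmentation sketch: plain k-means clustering (Lloyd's algorithm) in NumPy. The feature names and numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up customer features: [monthly spend, store visits per month].
big_spenders = rng.normal([90.0, 2.0], [8.0, 0.5], size=(50, 2))
browsers = rng.normal([15.0, 12.0], [4.0, 1.5], size=(50, 2))
X = np.vstack([big_spenders, browsers])

# Lloyd's algorithm with k = 2, seeded with one point from each group.
centers = X[[0, 50]].copy()
for _ in range(20):
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    segment = dists.argmin(axis=1)
    centers = np.array([X[segment == k].mean(axis=0) for k in (0, 1)])
```

After a handful of iterations the two segments line up with the two behavioral groups; each center is simply the average customer of its segment, which is easy to hand to a marketing team.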

I have been working professionally on predictive modeling in credit risk for a few months now. As in most data mining tasks you will run into in a large corporation's marketing or risk department, any algorithm you can't immediately crack open and inspect under the hood is out of bounds. In other words, if understanding what's going on and what the result means takes much more than an hour, the technique is out.
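It helps to see why the transparent techniques win that argument. A logistic regression fit by plain gradient descent yields one weight per feature that reads like points on a scorecard; the simulated data and feature names below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Invented credit features: credit-line utilization in [0, 1] and
# years of credit history.
utilization = rng.uniform(0.0, 1.0, n)
history = rng.uniform(0.0, 20.0, n)

# Simulated ground truth: utilization raises default risk, history lowers it.
true_logit = 2.0 * utilization - 0.15 * history
default = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-true_logit))

# Standardize features (keeps plain gradient descent well conditioned).
def z(v):
    return (v - v.mean()) / v.std()

# Logistic regression via gradient descent on the log-loss.
X = np.column_stack([np.ones(n), z(utilization), z(history)])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - default) / n

# w[1] and w[2] read directly: a positive weight pushes toward default,
# a negative one away from it. That transparency is exactly what a
# one-hour model review is looking for.
```

A gradient-boosted ensemble would likely score better on the same data, but there is no three-number summary of it to show a validator or a regulator.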

I was looking through papers presented at a prestigious credit risk conference and telling a senior colleague about it when it struck me: no one seemed to be using any algorithm invented after about 1995! And believe it or not, those techniques have come a long, long way since then.

The question is: why?

1. They are not packaged well. Many of the packaged programs commonly used to develop and maintain predictive models, such as SAS EM or SPSS Modeler, simply do not include the state of the art. This is amplified by the fact that most organizations will not readily invest in upgrading their "mining tools"; it is not uncommon to find a bank still running a tool from the late 1990s.

2. They require more training to understand and control. As machine learning algorithms advance, the resulting models become more opaque and their parameters harder to control. That is why the few global organizations that do work with these techniques hire top-notch scientists and full-stack developers instead of business people with a few weeks of stats-software training. Not surprisingly, the first breed is a bit harder to find.

3. They do not promise huge gains. Going from old algorithm A to new algorithm B will yield some profit, but a more fundamental change in business strategy, or an acquisition, can easily have a hundred times the impact. So this sort of algorithmic tinkering is, no doubt, for the companies on the frontier, not for everyone.

4. We are still reluctant to let go. This may be more of a philosophical question, but I sometimes think we are inherently reluctant to hand over control to machines. My earlier posts on this blog make it clear that I'm a big fan of gut-feel decisions from seasoned executives, but maybe we can cut the computers some slack. After all, they are capable of seeing complex relationships that we cannot, and of calculating possibilities and simulating outcomes that we never could. Maybe it is time to start trusting them a bit?
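On point 2, a quick back-of-the-envelope calculation illustrates the control burden. The tuning grids below are invented for the example, but the shapes are typical: a scorecard-style logistic regression exposes one or two knobs, while a gradient-boosted ensemble exposes several that interact.

```python
from math import prod

# Illustrative tuning grids (parameter names and values are made up).
logit_grid = {
    "regularization_strength": [0.01, 0.1, 1.0],
}
gbm_grid = {
    "n_trees": [100, 300, 1000],
    "learning_rate": [0.01, 0.05, 0.1],
    "max_depth": [2, 4, 6],
    "subsample": [0.5, 0.8, 1.0],
}

def n_candidates(grid):
    """Number of models an exhaustive grid search would have to fit."""
    return prod(len(values) for values in grid.values())

logit_runs = n_candidates(logit_grid)  # 3 candidate models to review
gbm_runs = n_candidates(gbm_grid)      # 81 candidate models to review
```

Three candidate models can be reviewed by a business analyst in an afternoon; eighty-one, each needing cross-validated evaluation, is a job for the scientists and engineers mentioned above.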

Image source: Wang, Y.-X. and Zhang, Y.-J., "Image inpainting via Weighted Sparse Non-negative Matrix Factorization," in Proc. 18th IEEE International Conference on Image Processing (ICIP), 2011.



Caner Türkmen
