
Machine Learning will be called Challenge Processing in the Future


This article was published prior to the listed publication date.

I hope I get this right. When machine learning is so integral to our computing tasks, and so cheap for our computing needs, it will be called “challenge processing”, or “CP” for short. (I initially thought “problem processing” could be a good name for it, but “PP” somehow doesn’t ring right for me.) The name comes from the idea of giving the computer a challenge that it needs to overcome.

The portion of the integrated circuit that carries out these tasks will be called the “challenge processor”. And the way programmers will access challenge processing will be through functions called “challenge functions”. They will be functions that make it very simple to use the complicated power that is machine learning.

Challenge processing takes us from where we are now, which is thinking of machine learning as an end-all, be-all for humanity’s problems and for artificial intelligence, and brings us to a place where it’s just one more step in the computing process. This is a good thing, because it will no longer be seen as a problem for a computer to complete certain tasks that we once thought only a human could do, but rather, it will be seen as another step in writing a computer program.

DeepMind’s Go-playing program, AlphaGo, went through several versions. Go is the ancient Chinese board game that is orders of magnitude more complex than chess, and until Google’s bot beat top professionals, it was widely believed that computers would not challenge the best human players for years to come. The key challenge matches for each version of AlphaGo: Fan (October 2015), Lee (March 2016), Master (May 2017), Zero (October 2017).

This is the power of technology. And it will keep advancing. The advances are so rapid and so powerful that the hardware and power budget that today allow the game of Go to be mastered at a superhuman level should fit into our smartphones within a few years. (And, perhaps still eerie to some, into a chip implanted in our brains a decade after that.)

But perhaps machine learning isn’t challenge processing yet because the software and hardware behind it aren’t advanced enough. Soon they should be. When they are, machine learning will be so ubiquitous and easy to use that a programmer will plunk it into a program they’re working on just as they would any function call, API call, or database connection. It will be as easy as feeding one typical Go game record into the program and getting an ideal, superhuman “Go bot” within a few minutes or even seconds, with the challenge processor learning the rules of the game and mastering it behind the scenes. It will be as easy as writing the function “ChallengeProcessing(translation, english, chinese);” and having the program take care of the rest of the English-to-Chinese translation, whether the input it is fed is digital text, visual text, the spoken word, sign language, or braille.
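To make the idea concrete, here is a minimal sketch of what such a “challenge function” interface might look like. Every name here is invented for illustration; no such library exists. A real implementation would dispatch to a trained model, while this stub only simulates the one-call shape of the interface with a trivial lookup:

```python
# Hypothetical sketch of a "challenge function" (all names invented).
# The point is the interface: one call in, a ready-to-use solver out,
# with all of the learning hidden behind the scenes.

def challenge_processing(task, source, target):
    """Imagined one-call interface: hand the processor a challenge,
    get back a function that solves instances of it."""
    # Stand-in for a trained model: a tiny lookup table.
    lookup = {("translation", "english", "chinese"): {"hello": "你好"}}
    table = lookup.get((task, source, target), {})
    # Unknown inputs pass through unchanged in this toy version.
    return lambda text: table.get(text, text)

translate = challenge_processing("translation", "english", "chinese")
print(translate("hello"))  # 你好
```

The design point is that the caller never sees training loops, hyperparameters, or model files; the challenge processor owns all of that, the way a database driver owns connection pooling.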

Reframing is important because at present we see machine learning as cutting-edge and difficult, with PhDs required to fine-tune the algorithms before we get anything useful at all. Challenge processing is a child’s toy: plunk it into any program and get the desired result. But it’s more than that. It solves a very specific problem: it takes virtually any replicable challenge that a human could do with their mind (sometimes only with great difficulty and training), and then does it better than any human ever could. But it doesn’t necessarily do it perfectly. Challenge processing gets us to a probability of success. For example, it still makes mistakes when reading visual text, or when translating. And for that reason challenge processing is confined to a much smaller realm than where machine learning is perceived by the public today.

We don’t need machine learning to calculate 2 + 2. That problem is better solved by a normal arithmetic processor, not a challenge processor. An arithmetic processor should get that problem right 100% of the time, whereas a challenge processor may only ever get it right 99.99% of the time. A challenge processor’s answer for 2 + 2 might be 3.99999547264, and the next time it might answer 3.99998365837. And that might be the best it can do. But 2 + 2 isn’t a challenge for humans or computers. It’s simple math.
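The exact-versus-approximate contrast above can be demonstrated in a few lines. This is a toy, not a real ML framework: a two-weight model “learns” addition from random examples by gradient descent, and even after training it only ever gets close to 4, while the arithmetic path is exact every time:

```python
import random

random.seed(0)  # deterministic for reproducibility

def arithmetic_processor(a, b):
    # Exact, every single time.
    return a + b

def train_challenge_processor(steps=5000, lr=0.001):
    # Toy "learner": fit f(a, b) = w1*a + w2*b to examples of addition.
    w1, w2 = random.random(), random.random()
    for _ in range(steps):
        a, b = random.uniform(0, 5), random.uniform(0, 5)
        err = (w1 * a + w2 * b) - (a + b)   # prediction minus true sum
        w1 -= lr * err * a                   # gradient step on each weight
        w2 -= lr * err * b
    return lambda a, b: w1 * a + w2 * b

learned_add = train_challenge_processor()
print(arithmetic_processor(2, 2))  # 4
print(learned_add(2, 2))           # very close to 4, but not exactly 4
```

The learned version converges toward the right weights but reports something like 3.9999…, which is exactly the article’s point: a probability-of-success answer where a deterministic one was available for free.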

When we have a simple, well-defined view of challenge processing, we no longer look to it alone to drive a car, or to beat a game of Go, or to translate language. There are many parts of driving a car that should not be treated as a challenge. When someone slams on the brakes in front of you, you need to avoid hitting them; there’s no question. So in the code, you’ll see absolute “if” statements for certain situations: “if (pedestrian is in car’s current path) then…”, or “if (car is headed over an embankment) then…”
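A sketch of that division of labor, with invented names: the non-negotiable cases are plain if-statements checked first, and only the genuinely open-ended part of driving falls through to a learned component:

```python
# Illustrative only (all names invented): absolute rules are hand-coded
# and checked before any learned component is consulted.

def plan_action(pedestrian_ahead, heading_over_embankment, learned_planner):
    # Non-negotiable situations: these are requirements, not "challenges".
    if pedestrian_ahead:
        return "emergency_brake"
    if heading_over_embankment:
        return "steer_back_and_brake"
    # Everything open-ended (lane choice, speed in traffic, ...)
    # is delegated to the challenge processor.
    return learned_planner()

print(plan_action(True, False, lambda: "cruise"))   # emergency_brake
print(plan_action(False, False, lambda: "cruise"))  # cruise
```

The hard rules act as a guardrail around the probabilistic component: the learned planner can be wrong 0.01% of the time without the safety-critical behavior ever depending on it.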

One robotics team used machine learning to first teach a robot basic abilities along the lines of extending an arm, grasping something, and so on. Then the team had the robot use those various abilities, with challenge processing (they might have called it deep learning, or machine learning) as the intermediary that figured out how best to combine those functions to solve a specific problem, such as taking the cap off a jar and setting it aside. Using this strategy, they were able to train their program in a fraction of the cycles that DeepMind used. (Perhaps they could have made the process even faster and more effective, and more in line with the heart of “challenge processing”, if they had first hard-coded functions to extend an arm or grasp something, instead of using machine learning for that, and then used the challenge processor to combine those abilities to open a jar.)

DeepMind tried to use pure machine learning to do something like this, without first getting the machine to define various abilities for itself. Along the way, DeepMind had virtual robots flail totally at random millions or billions of times until they got “good” at what they were trying to do.

Flailing is a lot more effective when you’re flailing with coordination. Machine learning is a lot more effective when it’s used for challenge processing: the concrete things can all be hand-coded just like they always were in the past, and anything that doesn’t have a proven solution can be solved with a probabilistically effective and efficient outcome using an easy-to-use function that employs challenge processing.
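The hand-coded-primitives-plus-learned-coordinator idea can be sketched as follows. All names are invented, and a brute-force search over orderings stands in for learning; the point is that when the primitives are reliable, the “challenge” shrinks to discovering which sequence of them achieves the goal:

```python
from itertools import permutations

# Sketch of the composition idea (names invented). Primitives are
# hand-coded and deterministic; the "challenge processor" only has to
# find the ordering that reaches the goal. Brute-force search is a
# toy stand-in for a learned coordinator.

def extend_arm(state):
    return {**state, "arm_extended": True}

def grasp_cap(state):
    # Grasping only succeeds if the arm is already extended.
    return {**state, "holding_cap": state.get("arm_extended", False)}

def twist(state):
    # Twisting only removes the cap if we are holding it.
    return {**state, "cap_off": state.get("holding_cap", False)}

PRIMITIVES = [extend_arm, grasp_cap, twist]

def challenge_processor(goal):
    # Try orderings of the hand-coded skills until one satisfies the goal.
    for order in permutations(PRIMITIVES):
        state = {}
        for skill in order:
            state = skill(state)
        if goal(state):
            return [skill.__name__ for skill in order]
    return None

plan = challenge_processor(lambda s: s.get("cap_off"))
print(plan)  # ['extend_arm', 'grasp_cap', 'twist']
```

Because the primitives never fail, the search space is tiny (3! orderings here) compared with learning arm control from scratch, which mirrors the article’s claim about training in a fraction of the cycles.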

The world seems to think that computers will eventually program for us. I think that’s a long way off. But I think the computer’s ability to solve particular problems within a program’s functionality, to take on challenges and come up with near-ideal solutions, is near at hand.

Are we a little confused about the future of programming, and the future of work in general?

/* I’ve taken the liberty, throughout this article, of oversimplifying some things and putting some things into layman’s terms, partly because I don’t fully understand this stuff, partly because it’s easier, and partly because more people might understand what I’ve written. Thank you for reading. */
