17

This question was inspired by What advantage humans have over computers in mathematics? and the answer of Brendan McKay, part of which is quoted below:

The day will come when not only will computers be doing good mathematics, but they will be doing mathematics beyond the ability of (non-enhanced) humans to understand. Denying it is understandable, but ultimately as short-sighted as it was to deny computers could ever win at Go.

The question and the answer got 42 and 35 votes, respectively, so at least part of the community acknowledges the importance of AI in the future of mathematics, or even the possibility that it will eventually replace mathematicians by developing mathematics much faster than humans do.

Recently, machine learning algorithms have been getting better at lower-level tasks such as computer vision, motion control and speech recognition, thanks to deep learning, reinforcement learning and the massive amounts of data available online, which have boosted the power of supervised learning. However, without more sophisticated unsupervised learning algorithms (such as the adversarial networks discussed here), higher-level tasks such as reasoning remain difficult, and this is where research is most active right now.
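To make concrete what "supervised learning" means in the above, here is a minimal toy sketch in plain Python (the data, labeling rule, and learning-rate choices are all hypothetical illustrations, not taken from any of the works discussed): a logistic-regression classifier recovers a labeling rule from examples alone via stochastic gradient descent.

```python
import math
import random

# Toy supervised learning: fit a logistic-regression classifier by
# stochastic gradient descent on synthetic 2-D data. The labeling rule
# (1 if x + y > 1, else 0) is known to us but not to the learner, which
# must recover it from labeled examples alone.
random.seed(0)
data = []
for _ in range(200):
    x, y = random.random(), random.random()
    data.append(((x, y), 1 if x + y > 1.0 else 0))

w, b, lr = [0.0, 0.0], 0.0, 0.5  # weights, bias, learning rate

def predict(p):
    """Sigmoid of the linear score: estimated probability of label 1."""
    z = w[0] * p[0] + w[1] * p[1] + b
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(200):  # epochs
    for p, label in data:
        err = predict(p) - label  # gradient of the log-loss w.r.t. z
        w[0] -= lr * err * p[0]
        w[1] -= lr * err * p[1]
        b -= lr * err

accuracy = sum((predict(p) > 0.5) == bool(label)
               for p, label in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

Real systems differ mainly in scale and model class (deep networks instead of a linear score), but the principle is the same; unsupervised learning, by contrast, must find structure without the labels that drive this gradient.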

Most of mainstream ML doesn't require sophisticated mathematics, and traditionally such areas haven't attracted many mathematicians. However, since ML will eventually achieve human-level AI, which will develop every area of math like a far-reaching theory, ML should be of interest to mathematicians. (There is nothing that prevents mathematicians from working on a problem whose solution may require a non-purely-mathematical process. For example, the proof of the four color theorem used a computer, and experimental mathematics has been gaining popularity.) I'm not proposing that mathematicians construct a purely mathematical model of AI such as Universal AI. Rather than such theoretical pursuits or simulating the human brain, leading AI researchers argue that the future of AI will be along the lines of current mainstream ML. If mathematicians of this generation do not contribute to ML research, and if ML researchers do achieve human-level AI, that means ML researchers of this generation will have contributed to the development of math more than we did. This would be a nightmare for us.

My question is the following:

Why is the math community not devoting more manpower to developing ML, even though the earlier human-level AI is achieved, the faster the long-run development of math will be?

  • 2
    I guess they don't want to lose their jobs. Studying hard for so many years only to see a computer do the same job faster and better, and to end up working in a bank or a tax office, may not be that fun. – Sylvain JULIEN Aug 19 '16 at 05:49
  • What is Mathematics? If one tries to define Mathematics in terms of an object of study, you end up having to include so many exceptions to the rule that it renders the definition useless. If current ML leads to "classical" Mathematics developments, I posit that ML is doing (non-classical?) Mathematics. There are opportunities to show that what ML does fits within standard Mathematics. We can include ML examples in Calculus and Linear Algebra to leverage the connections. Of course, some areas of Mathematics might never cross paths with AI or ML, but that is true in many other cases. – VictorZurkowski Aug 19 '16 at 07:08
  • 12
    Nothing personal against @SylvainJULIEN but I would just like to place a comment here, disagreeing with the opinion/sentiment/joke in his comment above. – Yemon Choi Aug 19 '16 at 07:49
  • 18
    @SylvainJULIEN If the day comes when computers can replace mathematicians, I am quite sure they will be able to replace clerks in banks and tax offices, too. – Federico Poloni Aug 19 '16 at 10:47
  • 3
    You have a point there. By the way, I do work in a tax office :-) – Sylvain JULIEN Aug 19 '16 at 12:22
  • 6
    For what it's worth, when I read discussions on the topic of computers doing mathematics in the future, my imagination tends to drift into thoughts like the following: Suppose, 200 years from now, the Riemann hypothesis is finally proved, but the proof is made by a nucleonic-matter quantum computer-entity and requires the reader of the proof to have an intuitive grasp of expressions involving 15 quantifier alternations and to be able to visualize 6-dimensional explanatory diagrams. Other similar computer-entities claim to verify the proof, but no human can make heads or tails of the proof. – Dave L Renfro Aug 19 '16 at 16:53
  • @DaveLRenfro Although this kind of question tends to be closed, I hope mine will not be, since the question I quoted from was eventually reopened. The proofs of many theorems are incomprehensible to the vast majority of humans, so it wouldn't matter whether only machines or only a few mathematicians can understand a proof. Also, Brendan McKay mentioned that humans can incorporate AI into themselves to enhance their intelligence, so such problems may not matter. – Math.StackExchange Aug 19 '16 at 18:32
  • 1
    Actually, it's probably not realistic for me to be thinking of humans vs computers in the way that my previous comment assumes, as @Math.StackExchange indicates. I suspect it's very likely that in a few hundred years, a thousand at most, this dichotomy of people and computers will be analogous to life on Earth before multicellular organisms developed. – Dave L Renfro Aug 19 '16 at 22:02
  • 8
    I would just like to say that within mathematics I have, since the days when I was a PhD student, seen several variants of this question along the lines of "why is the current math community not contributing much to X", where X is always some thing that is clearly The Way To The Radiant Future, that Boring and Timid People are afraid will sweep them away into the dustbin of history. As someone who works in a department that has people interested in ML, I remain to be convinced that the OP's phrasing of his or her own question differs substantively from the cases I have just mentioned – Yemon Choi Aug 22 '16 at 18:19

3 Answers

42

This is an interesting question but, in my opinion, it rests on several misguided or at least questionable conceptions:

1) The future is what matters.

Scientists should concentrate now on what will have the most impact in the very long run. Why is that? I certainly do not agree.

2) The future is clear.

Even if you agree with predictions about the future (like Brendan McKay's prediction, which I tend to agree with), you should acknowledge that even a plausible prediction of this kind might be wrong, that there are various competing plausible developments, and that the time scale is also very unclear and of great importance.

3) Long term projects are the most fruitful.

Even if you do give high weight to long term projects, and it is clear to you, say, that computers will have a vast impact on mathematics, it is not at all clear that your own efforts in such a direction will be more fruitful than efforts in other directions. Risky long term projects are more likely to fail.

4) It is problematic if non-mathematicians make major (or even the major) contributions to future mathematics.

It is possible that current non-mathematicians will have major, even dominant, long term contributions to mathematics. What's wrong with that?

5) Mathematicians should mainly be concerned with ML's impact on mathematics.

There are more important and more immediate applications of ML than those to mathematics. These could be as tempting, or even more tempting, to mathematicians who want to get involved in ML. If you can see how your mathematical ideas could contribute through ML to autonomous driving, go for it! This can be more important than automated theorem proving.

6) The "math community" allocates its manpower.

7) The impact of mathematics on a new area is measured by the current mathematicians working on it.

It is probably the case that mathematics and the work of mathematicians in the past (along with statisticians, computer scientists, physicists, ...) had much influence on ML's current development. A related misconception is:

8) The impact of mathematics on a new area is measured by mathematicians' direct attempts aimed at that area.

And finally

9) Mathematicians do not contribute to current ML.

I am not sure this is correct.

As a matter of fact, I recommend looking at the book Understanding Machine Learning: From Theory to Algorithms by Shai Shalev-Shwartz and Shai Ben-David to learn more.


Gil Kalai
  • 1
    For 2), the future of math isn't that clear either. AI is a 70-year-old (or older) unsolved problem, but there are many active older unsolved problems in math as well, like RH, as well as long-term programs. Nobody knows whether any important unsolved math problem will be solved this century, but we feel some of them will be solved within our lifetime, either because other important problems have recently been solved, or because there have been breakthroughs in the relevant areas. Many other unsolved problems in ML have been solved too, so I disagree that the future of ML is less clear than that of math. – Math.StackExchange Aug 19 '16 at 19:03
  • I read a similar book: Foundations of Machine Learning by Mehryar Mohri, Afshin Rostamizadeh and Ameet Talwalkar. A large portion of these books discusses classical topics (before 2000), and most of the topics covered were developed by researchers in CS, EE and statistics. The concentration inequalities mentioned in the books were genuinely developed by mathematicians, but not recently. For example, Hoeffding's inequality was introduced in 1963. McDiarmid's inequality (which I'm not sure is presented in your book or not) was introduced in the 80's, which is still not recent. – Math.StackExchange Aug 23 '16 at 04:13
  • 1
    Dear MSE, another thing I was not sure about in your question is the division between "lower level" and "higher level" machine learning; also, I am not sure that "higher level" ML requires "higher level" mathematics. – Gil Kalai Aug 23 '16 at 17:11
  • I'm sorry, but I couldn't find the page where LeCun or some other leading ML researcher mentioned "lower/higher level" cognitive tasks. These are clearly not formal terms. He meant that tasks directly related to input/output such as (computer) vision are lower level tasks, and that other aspects of brain not directly related to i/o, such as reasoning, are higher level. He also mentioned that some new unsupervised learning may enable higher level tasks. – Math.StackExchange Aug 26 '16 at 05:48
  • Since the current researchers think such unsupervised learning will be along the lines of current ML development, they don't think they will use higher level mathematics to design such algorithms. Therefore, it has nothing to do with higher level mathematics. – Math.StackExchange Aug 26 '16 at 05:51
4

Actually there are a lot of mathematicians working on machine learning techniques, but it is unlikely that you can access their work. In the U.S., the NSA "is actively seeking mathematicians to join a vibrant community of mathematicians, statisticians, computer scientists, and other intelligence professionals to work on some of our hardest signals intelligence and information security problems." There are many mathematicians employed by the NSA; you can apply there if you are interested. Here is an article describing some of the consequences of their research in the field of machine learning. I guess this answers the first part of your question: there is a lot of manpower dedicated to ML from a mathematical viewpoint (but not all mathematicians want to be part of it).

You seem to believe that the completion of a human-level A.I. is within reach. There is no evidence that this is the case, despite the billions in grants national science agencies are pouring on scientists. We have been there already. Here is a famous quote from Herbert Simon, Nobel Prize-winning AI researcher: "There are now in the world machines that think". This quote is from the sixties and probably got him to harvest a few million in grants for his research. This should not prevent you from thinking on your own, and taking these kinds of assertions with a grain of salt.

Finally, there are many efforts going on now when it comes to automated solvers. ML is one of them, but mathematicians have long since learned not to put all their eggs in one basket.

coudy
3

Well, there is Mikhaïl Gromov, who considers universal AIs and argues "that the essential mental processes in humans and higher animals can be represented by an abstract universal scheme and can be studied on the basis of few (relatively) simple principles".

Personally, I think that machine learning together with mathematics is not enough to create a universal AI; we also need a lot of neuroscience and physics.

Also, for some people, mathematics is not about results or long-run development, but more about personal understanding of things and theorems.

Anyway, there is a very related question:

Why is the ML community not devoting more manpower to developing mathematics, even though "Mathematics is the queen of sciences"?

kerzol