Latest AI news. Learn about LLMs, generative AI, and prepare for AGI deployment. Wes Roth covers the latest happenings in the world of OpenAI, Google, Anthropic, NVIDIA, and open-source AI.

LINKS:
AI achieves silver-medal standard solving International Mathematical Olympiad problems
International Mathematical Olympiad
Functional Programming in Lean

By admin

39 thoughts on “Google DeepMind’s AlphaProof MASSIVE MATH BREAKTHROUGH – AI teaches itself mathematical proofs”
  1. I want to know how this relates to more trivial real-world problems. We need systems that are better and faster at math, logic, and reasoning: taking a problem from text or vision and solving it as fast as a human can, or at least making a fast, good guess.

  2. It can't get fingers correct, which means it can't count to 10 on fingers; a child can do that. That sums up this whole video. It can't write text accurately either. Three basic things. Meta is better at text but still struggles.

  3. Wes Roth: I'm the opposite: I'd rather work with mathematical notation than code – it's more compact, and it makes the structure of formulas more obvious and hence easier to reason about.

  4. While this is very impressive, I still feel like the comparison is not entirely fair since the AI has way more time available than the human contestants. That said, it will most likely be only a matter of time before the AI becomes much faster.

  5. Isn't this apparent success analogous to Gödel's problem? Ultimately, if you have a system of rules, there is a vast number of potentially solvable problems in that system, and likewise falsifiable ones, but there are also problems that have no solution without extra rules – and it's the useful extra rules we seek, not just fancy maths problems.

  6. I would like to see a system that could hold the solutions to those ~100 million problems and extract their core mathematical truths. But I guess at best we would get some sort of point-cloud visualization of whatever clusters of similarity were found (see the sketch after this thread).

  7. Premise: incredible work.

    But, like the game engines (AlphaGo), what this is doing is applying rules, and the search space is surely big – though, for all I know, maybe not as big as Go's. From Gödel's theorems we know that in all of mathematics we live and work in systems where we start with ground truths and then apply rules repeatedly, so it boils down to doing exactly that over a very big space of experience with RL self-play. This approach had to work, because it worked for chess and Go, so I'm not surprised it worked here. It is surely a big step towards automatic proof assistants that can suggest which step to take next. The system lacks creativity at this point, and a formal system cannot create genuinely new things, like the square root of negative one.

  8. Your videos are always great and educational. I have a simple question: Can an artificial neural network solve difficult math problems on its own, or do we need new technology with more sophisticated interactions? Some believe the brain functions differently than an ANN because of quantum entanglement. I don't think chain of thought or feedback loops are new models. Is the ANN enough, or do we need something more advanced? Thanks!

  9. Makes me wonder if some of the most sought after proofs – the Riemann Hypothesis, for example – might within a few years be solved by AI rather than by humans.

  10. Do you know about FractalMath, a multi-agent system (MAC) that learns to do basic math tasks from just a few samples? It implements logic and reasoning but, unlike DeepMind's system, uses only 5 sample math tasks to learn solution patterns.

  11. Personally I think AI maths can be incredibly useful – if we can trust the proof. A little while back I asked Claude and Gemini to solve a quadratic equation. Claude got it correct. Gemini got it wrong, but the proof looked so plausible that it took me 5 minutes of squinting at it to see what it did wrong. So what happens when AI gets it wrong on maths where we don't already know the answer? (A verification sketch appears after this thread.)

  12. DeepMind has not published the paper for AlphaProof or AlphaGeometry 2. The backbone of AlphaGeometry is the "deductive database", based on Wu's method – a completely human-designed directed acyclic graph (DAG). The transformer, which is the AI engine here, only did the search over that DAG. So don't get your hopes up too high for "machine intelligence". This is not to diminish the engineering feat that it is, no doubt. (A toy sketch of deductive-database forward chaining follows the thread.)

  13. Nah, the goalposts haven't moved. Gemini still can't long-multiply two arbitrary numbers. Far from college or PhD level – I just want it to prove it's at a 5th-grade level. (See the arithmetic check after the thread.)

  14. "AI" is completely fine, but labeling it with human-only characteristics "intelligence", "sentience", "reasoning", etc. is seriously not a good idea

    I mean this as a warning, not an ego-filled rant:

    The One God Hates those who imitate the ability to create animate beings.

    LLMs are fine, and using statistical models is fine, but acting as if a bunch of 1s and 0s suddenly leapt into a whole different categorical domain – beyond simply being binary combinations of energy on a computer – is both illogical (no combination of blue paint can ever turn into red paint purely by virtue of the combination) and arrogant (to think that we are now the Creators and not just the created).

    Please consider refraining from labeling literal computer programs with human qualities (qualities that transcend binary configurations and combinations of atoms)

    Use phrases like "text output" instead of "it said",

    and "I'm querying it" instead of "I'm talking to it".

    Just because the outputs are varied doesn't mean they aren't still the results of purely binary, mathematical operations, like any other program.

  15. Reminded me of a 30 Rock scene. Jack Donaghy: "Of course there are multiple types of intelligence... practical, emotional... and then there's actual intelligence, which is what I'm talking about." Can't find the clip, sadly.

  16. It's like the cosmological distance ladder. We are at a point where humans will have a difficult time understanding how and why the AI progresses. We need AI training AI, and AI generating material for AI to work on. There is a point where we won't know IF we are steering the boat. Humans have a very limited capacity to check their own work, and it takes teams to check on AI after the fact. Most of the time we'll take results as-is, without understanding how or why. Humans are slow and can be convinced; if an AI produces a proof, are we qualified to discuss and check it? The design of a problem always starts from the designed answer and then builds the problem to solve from that. So we are good for now …
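
On comment 6's point-cloud idea: a minimal sketch of embedding-space clustering, assuming a vector embedding already exists for each solved problem. Random vectors stand in for real proof embeddings, and scikit-learn's KMeans and PCA are one arbitrary choice of tools.

```python
# Toy sketch: cluster (stand-in) proof embeddings and project them to 2-D
# for a "point cloud" view. Real embeddings of the solved problems are
# assumed to exist; random vectors are used as placeholders here.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 128))  # placeholder proof embeddings

labels = KMeans(n_clusters=8, n_init="auto", random_state=0).fit_predict(embeddings)
points_2d = PCA(n_components=2).fit_transform(embeddings)  # for plotting

for k in range(8):
    print(f"cluster {k}: {(labels == k).sum()} proofs")
# points_2d can be scattered with matplotlib, colored by `labels`
```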
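On comment 11's trust problem: one pragmatic hedge is to re-check a model's claimed answer with an independent solver instead of reading its derivation. A minimal sketch using sympy; the equation and the "claimed" roots are made up for illustration.

```python
# Independently verify an LLM's claimed roots of a quadratic, instead of
# squinting at its written derivation. Equation and claims are examples.
import sympy as sp

x = sp.symbols("x")
equation = sp.Eq(x**2 - 5*x + 6, 0)   # the problem we posed
claimed_roots = [2, 3]                # roots the model reported

# Solve independently and compare.
true_roots = sp.solve(equation, x)
assert sorted(claimed_roots) == sorted(true_roots), "model's roots are wrong"

# Cheaper spot check: substitute each claimed root back in.
for r in claimed_roots:
    assert sp.simplify(equation.lhs.subs(x, r)) == 0
print("claimed roots verified")
```

This only checks final answers, not proofs; checking a full proof mechanically is what proof assistants like Lean (which AlphaProof targets, per the link above) are for.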
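On comment 12's "deductive database": a toy illustration of the general idea of forward chaining – start from known facts and apply human-written rules until a fixed point. The rule, facts, and representation here are invented for illustration and are not DeepMind's actual system.

```python
# Toy forward chaining: compute the deductive closure of a fact set under
# one hand-written, geometry-flavored rule (transitivity of parallelism).
from itertools import permutations

facts = {("parallel", "AB", "CD"), ("parallel", "CD", "EF")}

def derive(facts):
    """One round of rule application; returns facts not already known."""
    new = set()
    for (p1, a, b), (p2, c, d) in permutations(facts, 2):
        # Rule: parallel(X, Y) and parallel(Y, Z) => parallel(X, Z)
        if p1 == p2 == "parallel" and b == c and a != d:
            new.add(("parallel", a, d))
    return new - facts

while True:
    fresh = derive(facts)
    if not fresh:
        break            # fixed point: nothing new is derivable
    facts |= fresh

print(sorted(facts))     # now includes ("parallel", "AB", "EF")
```

In a system like the commenter describes, a learned model would only choose which constructions or rule applications to try; the derivations themselves stay rule-based and checkable.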
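On comment 13's multiplication complaint: exact integer arithmetic is trivial for ordinary code, so an LLM's arithmetic claims can always be checked (or simply computed) outside the model. The numbers and the "model answer" below are made up.

```python
# Python integers are arbitrary precision, so exact long multiplication
# is a one-liner; any model-claimed product can be checked against it.
a, b = 80_205_817, 4_962_113        # arbitrary test numbers
model_answer = 398_070_000_000_000  # hypothetical (wrong) LLM output

print(a * b)                  # ground truth, computed exactly
print(model_answer == a * b)  # False: the claimed product is wrong
```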
