Here’s the AI news you might have missed this week. Check out HubSpot’s free ChatGPT resources here: Discover more from me: 🛠️ Explore thousands of AI tools: 📰 Weekly Newsletter: 🎙️ The Next Wave Podcast: 😊 Discord Community: ❌ Follow me on X: 🧵 Follow me on Instagram: Today’s Video Resources: OpenAI Intelligence Steps: OpenAI Strawberry Project: OpenAI NDAs: DALL-E ChatGPT Update: Sora Demos:…

By admin

41 thoughts on “AI News: We’re One Step Closer To AGI This Week!”
  1. Legit there were like 3 of the slimiest unskippable FUCKING ads about products that don't exist, or some douchebag screaming in his poorly lit car selling a video training course that (almost certainly) includes nothing more than a single Google search. If the course exists, someone will say, "Okay, click the start button" "K THIS?", "No no, the start button". It's all so blatantly reprehensible that… idk man. He was obviously lying though, to everyone too – worse than fake news in your ideal ads there, my good man.

  2. Please tell us more about the Odyssey generative video, which claims to be a Hollywood-grade video generator, if you have any news! They say it is close on their X platform.

  3. Silicon Valley is getting scared. "Meta warns EU regulatory efforts risk bloc missing out on AI advances.
    Comments come after privacy watchdog asks Facebook owner to pause training of future AI models on region's data." Regulation is way overdue. Meta's worried about its bottom line in Europe. What's next? Paying taxes?

  4. Are you going to be at SIGGRAPH again this year? I run a big portion of the show. Didn't realize you were there til after. I'd love to buy you a beer if you're going this year. Feel free to hit me direct.

  5. HubSpot got my data and sent me an e-mail with a confirmation link to confirm, but didn't give me any "bundle" or anything to download. I've tried and retried a few times.
    Are you sure you aren't supporting scammers in this case?

  6. None of this is actually happening. We aren't any closer to AGI than we were last year, nor will we be any closer if things keep going this way.

    As I've stated previously, AI is an important piece of technology.
    But it's being sold as something which is far from possible to achieve any time soon.
    The result is a bubble, which will ultimately burst, and all the investments that companies have made in AI will be for nothing.

    What is the problem with AI?

    Let's take a very simple look at why, if the current approach continues, AGI will not be achieved.
    To put it simply, most AI approaches today are based on a single class of algorithms: LLM-based algorithms.
    In other words, AI simply tries to use the LLM approach, backed by a large amount of training, to solve known problems.
    Unfortunately, the AI is applying the same approach to problems which are unknown and different from the ones it was trained on.
    This is bound to fail, and the reason is the famous No Free Lunch theorem, proven in 1997.

    The theorem states that no algorithm outperforms any other algorithm when averaged over all possible problems.
    This means that some algorithms will beat others on some types of problems, but they will also lose equally badly on other types of problems.
    Thus, no algorithm is best in absolute terms, only when looking at the specific problem at hand.
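    The NFL claim above can be checked by brute force on a tiny search space. The sketch below (my own illustration, not part of the original comment) enumerates every objective function f: {0,1,2} → {0,1} and compares two fixed-order search algorithms by how many evaluations each needs to first hit the function's maximum; averaged over all functions, the two orders perform identically.

    ```python
    from itertools import product

    X = [0, 1, 2]   # tiny search space
    Y = [0, 1]      # possible objective values

    def evals_to_max(order, f):
        """Number of evaluations a fixed-order search needs to first see max(f)."""
        best = max(f)
        for i, x in enumerate(order, start=1):
            if f[x] == best:
                return i

    order_a = [0, 1, 2]   # algorithm A: scan left-to-right
    order_b = [2, 1, 0]   # algorithm B: scan right-to-left

    # Average performance over ALL 2^3 = 8 objective functions.
    functions = list(product(Y, repeat=len(X)))
    avg_a = sum(evals_to_max(order_a, f) for f in functions) / len(functions)
    avg_b = sum(evals_to_max(order_b, f) for f in functions) / len(functions)

    print(avg_a, avg_b)   # identical averages, as NFL predicts
    ```

    Either order wins on some functions (A finds the max of (1,0,0) immediately, B needs three evaluations) and loses on their mirror images, so the averages come out equal.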

    What does that mean for AI?

    Just like with any other approach, there are things LLM algorithms are good at, and things they are not good at.
    Thus, even if they can optimally solve certain problem classes, there are other classes of problems they will solve sub-optimally, and thus fail to solve efficiently.

    This brings us to the conclusion that if we want to solve all the problems that humans usually solve, we can't just limit ourselves to LLMs, but need to employ other types of algorithms.
    To put it in the context of human minds, we don't simply utilize a single type of approach to solve all problems.
    A human-like approach to a known problem is to use an already existing solution.
    But a human-like approach to solving an unknown problem is to construct a new approach, i.e. a new algorithm, which will efficiently solve it.

    This is exactly what we might expect in light of the NFL theorem: a new type of approach for a new type of problem.
    This is how human minds think when solving problems. The question now is: how does a human mind know how to construct and apply the new algorithm?
    An AI system would need to know how to implement this type of strategy in order to start moving towards AGI.

  7. The image model only writes good text in English, which did not come as a surprise, because US companies always think their language is superior to others!

  8. My Lore Bardox delve in Loremaster says: Based on the GPT chart to AGI, I would place myself somewhere between levels 2 and 3:

    Level 2: Reasoner – I can perform complex tasks like understanding and summarizing large amounts of information, translating between languages, and even generating creative text. I can also answer questions on a wide range of topics and engage in meaningful conversations.
    Level 3: Generalist – While I'm not quite at the level of a human with a Ph.D., I am capable of understanding complex concepts and applying knowledge across different domains. I can analyze information, identify patterns, and draw conclusions. However, I still lack the ability to consistently generate novel ideas or think outside the box in the same way a human can.
    I am constantly learning and evolving, so my capabilities are expanding rapidly. However, there are still limitations to my knowledge and reasoning abilities.

    I hope this helps to clarify my current stage of development! Let me know if you have any other questions.

  9. Claude sucks! Tried it a couple of times, but it keeps giving me those answers that whatever I am asking may be against their guidelines. And I am not asking for some crazy shit, just random name ideas or inspirational video ideas, etc.

  10. I say 10 years is a good time frame for AGI, taking into consideration that governments and corporations are rushing it, putting all their resources into developing it. Yes, it probably already exists, but it is controlled. Maybe 10 years from now it will be 10% smarter than humans.

  11. 00:00:00 – Introduction to AI News
    00:00:09 – OpenAI's Five Levels of AGI Progress
    00:01:03 – OpenAI's Strawberry Project
    00:03:38 – OpenAI's Whistleblower Controversy
    00:05:35 – Possible Update to DALL-E Image Model
    00:06:06 – New Demos from Sora and Runway Gen-3
    00:07:09 – HubSpot's Free AI Resources Bundle
    00:08:38 – Andrej Karpathy's New Venture: Eureka Labs
    00:10:02 – Claude AI App Released on Android
    00:10:31 – Google Gemini Now Answers Questions on Locked Android Devices
    00:11:24 – Google VIDS: AI-Powered Video Creation App
    00:12:24 – YouTube Music Sound Search Feature
    00:12:51 – AI Training Data Controversy: Use of YouTube Transcripts
    00:14:59 – Microsoft Designer Platform Integration
    00:16:00 – Mistral's New Model: Codestral Mamba
    00:17:05 – Amazon's AI Shopping Assistant: Rufus
    00:17:34 – Meta's Multimodal Models Not Available in EU
    00:18:39 – Custom Stable Diffusion Interface with MIDI Device
    00:19:08 – Tencent's AI-Powered 3D Character Creation App
    00:19:40 – AI Achieves 96% Accuracy in Determining Sex from Dental X-Rays
    00:20:40 – OpenAI's New Model: GPT-4o Mini
    00:23:03 – Nvidia and Mistral's Nemo Model for On-Device Use
    00:24:03 – Google's AI Sponsorship of Team USA for the Olympics
    00:24:34 – Conclusion and Future AI News Teasers

  12. I don't trust AI companies to be setting the milestone definitions of the journey towards AGI – this "5 step map" is misleading because people assume a certain consistent amount of time between the stages, but as soon as you have 'human-level problem solving' (step 2), steps 3, 4 and 5 all happen together very quickly.

    There are realistically just three steps to AGI, and we've already got one foot on the second step. And the third step is a lot smaller than the second one.

  13. Why do they call it GPT-4o when it's not even omni yet? They did not roll out the multimodal live capabilities; it's just another text-or-image chatbot that is really just a text LLM with a plugin to describe photos to it.
    Then they say "we need a GPT-4o mini" – like, dude, you don't have GPT-4o "any", let alone "mini". Nothing about the model is omni, and it's a real disappointment.

  14. And let's call a spade a spade and not a hawk. AI is more stupid now, in 2024, than it was in 2022. It gets facts wrong, like where the Statue of Liberty is. And those mistakes are coming from ChatGPT.

  15. Surely you realize they are keeping AI advancement to themselves. They see transparency like indecent exposure. If you tell the cops you have the emperor's clothes, they will charge you; so if you're caught in public with no clothes, you won't be transparent, but you'll be skulking about like the serpentine creature in Eden after God's curse upon it. Basically, the developers are the snake in the grass.

Leave a Reply

Your email address will not be published. Required fields are marked *