Today I’m going to show you how to make a Python AI chatbot in just a few minutes, and the best part is that this AI chatbot will run locally! No need to pay for a subscription or connect to something like OpenAI.

To try everything Brilliant has to offer for free for a full 30 days, visit . You will also get 20% off a premium subscription. If you want to find a job as a developer, check out my program with CourseCareers:

Video Resources
Ollama Download Page:
Ollama GitHub Repo:

Timestamps
00:00 | Overview
00:23 | Ollama Setup
01:52 | Running Llama3 Locally
03:47 | Setting Up a Python Environment
05:36 | Using Ollama from Python
06:45 | Complete AI ChatBot Script

Hashtags
#coding #python #chatbot
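As a quick sketch of what the video builds up to — calling a locally running Llama 3 model from Python — here is a minimal example against Ollama's default local HTTP endpoint (`http://localhost:11434/api/generate`). This is not the exact script from the video (which uses LangChain); it only assumes a stock Ollama install with `ollama pull llama3` already done.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(prompt, model="llama3"):
    """Build the JSON request body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask(prompt):
    """Send one prompt to the local model and return its text response."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# With `ollama serve` running and the llama3 model pulled, you could try:
#   print(ask("In one sentence, what is Python?"))
```

No API key and no internet connection are needed at inference time; everything stays on your machine.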
Does anybody know what type of data the llama software is exchanging?
Simple and useful! Great content!
The captions with keywords are like built-in notes, thanks for doing that
Thanks to your tutorial I recreated Jarvis with a custom GUI, using the llama3 model. I use it in Italian because I'm Italian, but you can also use it in English and other languages.
Hello, do you know if it's possible to use this model as a "pre-trained" one and add some new, let's say, local information to the model, to use it for a specific task?
How much RAM is required to make this program run well? I only have 4GB of RAM.
Nice
Is it possible to host this on a cloud server, so that I can access my custom bot whenever I want?
Will this run on an android tablet?
Hey, Tim! Thanks for your tutorial. I have a problem: the bot isn't responding to me. Maybe someone else has had the same problem. Give me some feedback, please.
Timmy! Great explanation, concise and to the point. Keep 'em coming boss =).
This just inspired me to save on GPT costs for our SaaS product. Thanks, Tim!
Hi Tim,
I recently completed your video on the django-react project, but I urgently need your help: could you make a video on how to deploy a django-react project on Vercel, Render, or another well-known platform? This would really be helpful, as many users on the Django forum are still confused about deploying django-react projects to popular hosting sites.
Kindly help with this.
A dumb question: where is the template used?
I gave this vid a thumbs up, but truthfully I feel it's a step backwards from the great RAG series Tim put up previously. I was waiting to see what Tim was gonna cook up next, but this was a little underwhelming.
Thanks Tim, ran into a bunch of errors when running the script. Guess who came to my rescue: ChatGPT
Hey, how can I show this with a UI?
I want to create a chatbot which can give me programming-related answers, with user authentication via OTP.
Please tell me how I can create this using this model,
and create my own UI. I am a full-stack developer and new to ML; please reply.
How do I deploy it to my website?
Is there anyone here who can help me with an IRC bot, maybe? Any help would be appreciated... I'm useless at this :/
Thanks, super useful and simple!
I just wondered with the new Llama model coming out, how I could best use it – so perfect timing xD
Would have added that Llama is made by Meta – so despite being free, it's comparable to the latest OpenAI models.
If you combine this with a webview, you can make a sort of artifact in your local app.
I started doing programming projects with a high-level language, Python. I hope you will see them and give me your opinion.
Wow, thanks! This is a really simple, straightforward guide to get me started writing the Python myself rather than just using other people's UIs. Love the explanations.
Great video! Is there any way to connect a personal database to this model, so that the chat can answer questions based on the information in the database? I have a database in Postgres and have already used RAG on it, but I have no idea how to connect the db and the chat. Any ideas?
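This database question isn't covered in the video, but one common pattern is to retrieve the few rows most relevant to the user's question and paste them into the prompt before invoking the model. Below is a deliberately naive keyword-overlap sketch of that retrieval step; a real setup would use embeddings and something like the pgvector Postgres extension, and the row data here is made up for illustration.

```python
# Toy "database" rows (made-up data for illustration)
rows = [
    "Order 1001: shipped 2024-05-01, customer Alice",
    "Order 1002: pending, customer Bob",
    "Return policy: items may be returned within 30 days",
]

def retrieve(question, rows, k=1):
    """Rank rows by how many whitespace-separated words they share with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        rows,
        key=lambda r: len(q_words & set(r.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question, rows):
    """Stuff the best-matching rows into the prompt as context for the model."""
    context = "\n".join(retrieve(question, rows))
    return f"Use this context:\n{context}\n\nQuestion: {question}\nAnswer:"

prompt = build_prompt("what is the return policy?", rows)
print(prompt)  # the row about returns is placed into the prompt as context
```

The resulting prompt string is what you would then pass to the local model, instead of the raw user question.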
I had been using the program Ollama on my laptop, and it was utilizing 101% of my CPU's processing power. This excessive usage threatened to overheat my device and decrease its performance. Therefore, I decided that I would discontinue using the program.
"Now we have a prompt and a model, and we need to 'chain' these two together using… LangChain"
Your way of teaching is very simple to understand.
Wow, so cool! You really nailed the tutorial.
I have not implemented it myself, but I have a doubt: you are using LangChain, where the model is llama 3.1, and LangChain manages everything here, so what's the use of Ollama?
I have found the FAISS vector store provides an effective, large-capacity "persistent memory" with CUDA GPU support. Yes?