This is how I use AI in my everyday life. I’m just some guy and not super technical. This post might be useful if you’re also just some guy/girl/person! If you want advice from someone who knows how to use AI for high-level stuff, just watch Andrej Karpathy’s 2 hour video (you should also sit back and watch his 3 hour deep dive into how ChatGPT works when you have time). Here’s another good piece from someone more technical.
There are a lot more guides like this that are still pretty great but out of date. 2 years is a long time in AI! Mine will be out of date soon too.
Large Language Models
I use LLMs hourly at this point. I mostly use them to learn, so a lot of what I say here will continue on from my post on how I learn.
The value you get out of LLMs is limited by their tendency to hallucinate and their sometimes very shallow knowledge of topics, but it’s also often limited by the creativity of your questions and how genuinely you want to learn. There have been a few topics I’ve been thinking “I should really know more about this” for years, and when I actually sit down with an LLM to learn about them I get extremely bored and averse to reading. A hard truth is revealed. I have this basically utopian technology: a smart patient teacher I can ask to clarify anything I want in any style I want, and I still don’t want to learn about the topic. I just want to be seen as a guy who’s learned about the topic. LLMs can be an unpleasant window into our real motivations.
Once I work through my real motivations and start learning, it helps to sit back and let my mind go blank for a moment. There are so many ways to approach a topic with an LLM and I want to be open to new ideas. If I want to learn about the deep learning revolution, I could ask Claude to write me a series of limericks or flashcards or just go all in and write me a textbook1 catered to my exact level of understanding. I could poke it with questions about each individual topic it wrote about that I didn’t perfectly understand, or feed it papers and text from other authors on the topic to clarify what they mean. We’re probably all just scratching the surface of useful prompts.
I think the most useful way to think about LLMs is as if they’re each an extremely massive (metaphorical) database2 of almost all expert knowledge in all fields, with a very smart friend to act as a guide to that database. Google + the internet is in some ways a database of most human written knowledge and thinking as well, but it takes much longer to get the same information out of it because you don’t have the friendly guide that an LLM provides. Your LLM friend is extremely good at drawing out the data from this imagined database and presenting it to you in the way you like. They’re good at doing some basic reasoning with it. Nothing super advanced though. They’re very bad at drawing new knowledge out of the data.
I think a lot about this observation while I’m using LLMs:
One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven't been able to make a single new connection that has led to a discovery?
Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There's a medical cure right here.
Shouldn't we be expecting that kind of stuff?
I can expect LLMs to be amazing summarizers, but not to discover fundamentally new knowledge for me and make new observations. They often lack the subtlety and visible love for the subject of human nonfiction authors. Despite these limitations, having an infinitely patient helper can be surprisingly useful in a lot of areas of learning and life. It just takes a little creativity to figure out where. You can pepper them into the margins of your learning and generate a ton of value, but I don’t expect them to be a main way I learn things.
Things they’re good at
Fields where there’s clear expert consensus, but that might not have clear routes for outsiders to easily learn that consensus.
Back of the envelope calculations for problems where there’s publicly available data to pull from.
Common knowledge stuff in general.
Summarizing documents and drawing out important points.
Answering lots of follow-up questions about a topic.
Some creative tasks, like spoofing song lyrics. I wrote most of my AI musical Rumsfeld! using Claude.
Coding (compared to me).
Video mode in general, especially for repair tasks around the house.
Things they’re not good at
Very specific knowledge questions about very specific not-well-known fields where access to information is very limited.
Some creative tasks. I don’t like LLM creative writing.
Vague topics where there’s not much consensus. They usually only speak in generalizations with these.
I enjoyed Gavin Leech’s post about all the things he doesn’t like about LLMs. He’s much more bearish on them and a lot of what he wrote contradicts what I say here, so I’d recommend his post as a good counter-weight.
Language
Default LLM language can be annoyingly general. You can fix this by going to their personalization option and telling them to only respond very directly.
This is especially important for ChatGPT’s voice mode. ChatGPT is trained to speak more naturally in voice mode, which often means vaguer, more general language. Making it clear that you want direct language can save a lot of time.
In general, if there are aspects of the LLM’s responses you don’t like, consider asking it not to do that in the customization option. You might be surprised at how different you can make it. Getting creative with prompts can also fix style issues. If you don’t like an LLM’s output, it’s usually easy to just say “rewrite that in (style I like).”
Avoiding hallucinations
"You can't trust anything LLMs say" is starting to have the same flavor as people in the 2010s saying you can't trust anything you read on Wikipedia. It’s true that both can make mistakes (chatbots do still make many, many more mistakes than Wikipedia etc.), but saying this is a sign that you don't use them enough to notice how low the error rate is.
If you took someone from 1990 and put them on the 2025 internet, it’s likely that they’d get overwhelmed with fake or strange ideas, at least for the first week. They might have trouble understanding that even though anyone can edit Wikipedia, it’s one of the most trustworthy sources online, whereas a lot of official looking news websites where only specific people are allowed to post actually have a lot of lies.
Chatbots are like this. Getting good at detecting when a chatbot might hallucinate is kind of like getting good at navigating Google and avoiding lies and conspiracy theories. They’re a strange new situation and we haven’t developed a lot of intuitions for how to navigate it. After using chatbots for a bit, you develop intuitions for the places they’re more likely to hallucinate.
You should assume that LLMs are great summarizers and synthesizers of most human knowledge, but if you ask them to draw out too many new ideas from that knowledge they’ll often fail. Writing nonfiction with patience and love for a topic produces some of the best writing that exists, and LLMs noticeably fall way short of that. When things get very abstract, they’re bad at knowing what they don’t know. This will probably change in the future, but for now LLMs are still very limited. Basically avoid asking LLMs to come up with new complex knowledge, or asking about very specific knowledge that isn’t available as an understood expert consensus online, and a lot of issues with hallucinations go away.
One other specific place to watch out for is anything that has to do with the letters in specific words. LLMs learn by encoding chunks of words (not individual letters) into a separate mathematical representation they can work with. This means they’re not 100% clear on the letters that make up those chunks. A famous question that some chatbots still struggle with is “How many R’s are in ‘strawberry’?”, though models are improving on it quickly. So be a little careful if you’re asking for things that involve counting characters or similar letter-level questions.
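A toy sketch of why this happens (the chunk boundaries below are made up for illustration, not any real model’s tokenization):

```python
# Illustrative only: a pretend "tokenizer" that splits a word into chunks,
# roughly how LLMs see text. These chunk boundaries are invented for the
# example, not taken from any actual model.
toy_tokens = ["str", "aw", "berry"]

# The model sees three opaque chunks, not nine letters, which is why a
# letter-level question is surprisingly hard for it.
word = "".join(toy_tokens)
print(word)             # strawberry
print(word.count("r"))  # 3 -- trivial in code, tricky for a chatbot
```

The point is just that the model’s native unit is the chunk, so anything below chunk level (spelling, counting letters) is a question it has to reason about indirectly.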
You’re probably going to run into hallucinations. Try to think about where and how this can be useful to you even if it occasionally fails. These models are far from perfect. If they were perfect things would get weird. If it’s really really important that you know that something is 100% factual, you can ask the model for sources for everything it says and double check them.
Models I use
These are what I use right now. I expect they’ll change a lot over the year:
ChatGPT - Deep research, reasoning problems, voice & video mode, internet search, images.
Claude - Writing and reading long text, summarizing papers. I love Claude’s aesthetics, and because it has my favorite writing it’s my favorite model to use.
Gemini Pro - Coding, honestly should be doing a lot more with this. It seems like the best overall model right now, except for deep research.
Perplexity - Internet search.
NotebookLM - This is on my to-try list, will get to it soon. Seems really useful. I think their podcast tool is fun but basically a gimmick right now.
How I use LLMs
Deep research
I think deep research is the single most useful thing chatbots do right now. ChatGPT’s is the only one that’s truly great (I was really unimpressed by Gemini’s deep research recently, it’ll get better though). If there’s no clear overview of a complex topic, I can get a better one than what I can find online by asking Deep Research for it. It’s great at doing a lot of back of the envelope calculations to come up with estimates for important statistics about the real world. Here’s a report I asked it to do on the overall situation with all factory farmed chickens in the US and their overall welfare. It’s the best single thing I’ve read on the topic.
I’ve asked it to generate reports for me on complex topics I’m trying to communicate better about. It was really useful in writing my Using ChatGPT is not bad for the environment post. I double-checked all the numbers after, but it saved me hours of scouring the internet. If I could convince everyone to try only one chatbot tool or trick, ChatGPT’s deep research would be it.
Talking to ChatGPT voice mode while traveling
If you’re walking somewhere and it won’t disturb the people around you, you can talk to ChatGPT voice mode as if you’re talking to someone over the phone about a topic you’re interested in. I do this on my walk to work. This is maybe the most engaging way I know of learning new information over audio. It’s much more engaging than an audiobook or podcast because I can trick my brain into thinking it’s in an active conversation with someone and I feel more obligated to listen closely to what it’s saying and follow up with direct specific questions. This pressures me to pay a lot more attention to what’s happening in my brain, and notice what I don’t actually know much about, and follow that thread with more questions. It’s great for retention.
Building a basic narrative of a new field I don’t know much about
When I’m trying to learn about a new field, it’s helpful to have a kind of skeleton I can attach facts to later; a single coherent narrative of what’s happening. Even if that narrative is mostly wrong or oversimplified, it’s helpful to start there and make it more complex later. When I was trying to learn a lot more about China, I started with the book Wealth and Power, which is a series of profiles about the most important individuals in 19th and 20th century Chinese history. I don’t think understanding history as a series of profiles of people is super useful for actually understanding the world, but having a pile of simple narratives to build a broader understanding of an era is a good first step, otherwise I find that it’s hard to retain new more complex facts.
A prompt that works well here is “Tell me the basic story of (topic) as if you’re explaining it to someone completely new to it, and so that by the end they understand most of the key ideas.”
One of my favorite prompts has been to ask for a series of profiles of important people in a new field and how they contributed, framed as parts of a single overall narrative of how the field developed. This harnesses my need for a simple narrative as a way into understanding the field as a whole.
When I’m ready to go deeper I sometimes prompt it with “If you were to write a textbook introducing (subject) to someone with (detailed description of my exact level of understanding and background), what would each chapter be titled? Include a description of the contents” and then ask it to expand on specific chapters I’d like to read.
Aggressively asking follow-up questions
We’ve all been socialized against asking a lot of specific follow-up questions when presented with new information. It feels annoying to do, but it’s really important. It’s very easy for me to read pages of books before realizing that I don’t actually have a deep understanding of what I’m reading. I can nod along to terms I feel like I understand, but if I actually try to test myself on what they mean I realize I’ve just been playing pretend.
Asking follow-up questions that make me feel a little stupid is the way around this, and LLMs are always available to answer. If I even vaguely don’t understand a chunk of text, I’ll just copy it into a chatbot and ask it to explain it as if I’m new to the topic. This almost always helps a lot. It makes reading take longer, but I come away retaining and understanding a lot more.
It’s possible now to upload whole articles to an LLM and then ask questions about the article as you go as if you were interviewing the author. This can be especially useful for very technical topics you don’t have a strong background in. Econ papers in general have always been a challenge for me. LLMs make them much easier. Chatbots are also surprisingly good at continental philosophy. I recently had a great conversation about some essays by Heidegger where the LLM consistently helped me understand them better.
Follow-up questions in conversations with LLMs are always useful too. In general, remembering that the goal of reading is actual understanding rather than just making it to the end of a page matters a lot. Getting into the habit of grilling AI on specific topics until you’re sure you completely understand them can help you avoid just pretending to learn.
LLMs can be useful in building up your tolerance for asking questions that feel stupid and low status but that will actually help you learn. A key skill for adult life.
BOTECs
Chatbots are great at back of the envelope calculations (BOTECs). If you’re trying to get a rough estimate of a large number of something in the world, chatbots usually produce great results. If the LLM has a reasoning mode (ChatGPT has o1 and o3), make sure to use that for the BOTEC.
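A BOTEC is just Fermi-style arithmetic over rough inputs, which is exactly what a reasoning model narrates for you. A minimal sketch of the shape of one (every input here is a made-up assumption, not real data):

```python
# A back-of-the-envelope (Fermi) estimate: multiply a few rough,
# clearly-labeled assumptions together and keep only the order of magnitude.
reading_minutes_per_day = 30   # assumption
words_per_minute = 250         # roughly typical adult reading speed
days_per_year = 365

words_per_year = reading_minutes_per_day * words_per_minute * days_per_year
print(f"~{words_per_year:,} words/year")  # ~2,737,500 words/year
```

The value of asking a chatbot for this isn’t the multiplication, it’s that it can suggest plausible inputs from public data and flag which assumption the answer is most sensitive to.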
Summarizing papers
Before I read a paper I’ll often upload it to an LLM to give me an 80/20 of the contents. Knowing the overall message of the paper makes it much easier to retain when I’m actually reading it.
Flashcards
Spaced repetition is one of the most effective ways of actually learning and retaining new information. You can have the LLM generate a large collection of flashcards on a specific topic you’re trying to learn more about and ask it to format them to easily add to Anki, or Obsidian’s flashcard plugin.
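Anki imports plain text files where each line is one card with the front and back separated by a tab, so you can ask the LLM for that format directly and save it to a file. A minimal sketch (the cards here are placeholders standing in for whatever the chatbot generated):

```python
import csv

# Turn LLM-generated Q/A pairs into a tab-separated file that Anki can
# import (File -> Import). Each row becomes one card: front, then back.
cards = [
    ("What year did AlexNet win the ImageNet competition?", "2012"),
    ("What does 'spaced repetition' optimize for?", "Long-term retention"),
]

with open("cards.txt", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerows(cards)
```

Asking the chatbot to output “one card per line, question TAB answer, no extra commentary” usually gets you something you can paste straight into a file like this.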
Solving problems around the house
I solved a simple plumbing problem by pointing the ChatGPT camera mode at it and explaining what was happening. It gave me a clear set of instructions for how to solve it. Thanks ChatGPT. It can pick up on a surprising amount of context over video.
Vibecoding
I’m planning to sit down with Gemini soon and vibecode a browser-based game. I have some coding experience but this will make it much easier. As Andrej Karpathy said:
If you don’t know what to do with the code the model generates, just ask the model!
Gemini Pro 2.5 seems to be the clear favorite for coding right now.
This is a great article on what you can do with vibecoding.
Automating work tasks
You can connect Claude and other LLMs to databases and other files for your work using Zapier. This can take table data, feed it to Claude with specific requests for what to do with it, and then fill in Claude’s answers in a separate column in the data. I’m only just starting to scratch the surface for what we can do with this.
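The same loop Zapier runs can be sketched directly in Python: read rows from a table, build a per-row prompt for Claude, and write the answers into a new column. The column names and the prompt here are assumptions for illustration, and the actual API call is kept behind a callable so the sketch runs offline:

```python
import csv
import io

def build_prompt(row: dict) -> str:
    """Build a per-row request for the LLM. The 'feedback' column name
    is an assumption for this example."""
    return f"Summarize this customer feedback in one sentence:\n\n{row['feedback']}"

def annotate_rows(rows, ask):
    """`ask` is any callable that sends a prompt to an LLM and returns text.
    With the Anthropic SDK it might wrap something like
    client.messages.create(...) -- kept out of this sketch so it stays
    runnable without an API key."""
    out = []
    for row in rows:
        row = dict(row)
        row["summary"] = ask(build_prompt(row))  # new column with the answer
        out.append(row)
    return out

# Demo with a stand-in `ask` so the sketch works offline:
table = io.StringIO("feedback\nThe app is great but crashes on login\n")
rows = list(csv.DictReader(table))
annotated = annotate_rows(rows, ask=lambda prompt: "(Claude's answer here)")
print(annotated[0]["summary"])
```

Zapier is doing roughly this plumbing for you, which is why it’s a good first step even if you don’t code.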
Life advice
A lot of good life advice usually just requires someone who doesn’t have your specific neuroses examining your situation from the outside and saying something pretty obvious. It usually doesn’t take a profound personal connection or insight into every last detail about you. If you’re struggling with something in your personal life, go to a friend or therapist, but also consider pinging a chatbot. It just takes a few seconds.
A lot of people my age underestimate how often young people use TikTok, Instagram, and YouTube for life advice. In my opinion this is often pretty bad. A lot of that content is insight porn, something to simulate the experience of having a deep insight without actually giving you useful knowledge. It’s also optimized for maximum attention, so more dramatic ideas win out over level headed advice. It’s very noticeable how a lot of the most popular podcasts among young men now frame almost every problem as “They’re lying to you about” (topic).
This works well for the algorithm, but seems bad for keeping people ideologically sane.
I’d be willing to bet that if someone searched TikTok, Instagram, or YouTube for life advice on school, friends, dating, exercise, or difficult emotions, there’s a very high chance that the results they’d get would be much worse than what an LLM would give them, including ideas that would be actively harmful. None of this is a replacement for a real person, but of the other options available, LLMs clearly win out. They have the advantage of not being designed to hack our dopamine.
If there’s a problem you’ve been having trouble talking about with other people, try typing a lot of details about it into a chatbot. You might be surprised.
Career advice
Career advice, like life advice, often doesn’t require a crazy amount of insight into the human subtleties of your situation. A lot of good career advice is good for a lot of people, and people just have trouble accessing it because they can’t tailor it just slightly to their specific circumstances at work. I’d trust Claude’s career advice for someone over a lot of online articles. If you’re on the market, try writing a lot about the specifics of your career situation and goals into a chatbot and see what happens.
Clearing ugh fields
An ugh field is a task that you don’t want to do, and that’s causing you so much shame for not having done it that even thinking about it is painful. From Rob Wiblin’s blog:
From this, your brain gradually learns that thinking about this task is the mental version of stubbing your toe. Just as your brain learns to avoid whacking your foot into things, it learns to find creative ways to prevent the task you’re avoiding rising into your conscious awareness.
I’ve had a few ugh field tasks recently I’ve been really averse to thinking about. Just planning out how to deal with them is tough. I realized that I could just ask an LLM “Hey I’m trying to do (task) and really don’t want to. Here are all the details. Please break down exactly what I need to do for each step and make a checklist for me, including every minute little step. If anything requires external information, find the website where I can get that information” and then just unthinkingly follow the checklist. Going into this saying “I’m basically going to be a big baby about this and get the most help from the AI I can, even though this is all really silly” can actually give you an easy way through an otherwise painful experience and make the whole process much faster. Clearing ugh field tasks can be one of the most liberating experiences in everyday life. A little helper tool to make those easier is so nice.
Editing and fact-checking writing
I actually don’t especially like any AI’s writing style. It’s good for learning, but not good for convincing essays (and definitely not for fiction). However, if I’m writing something, LLMs can be a useful way to review and edit. Uploading your final doc to an LLM and asking for a full list of all grammar mistakes and ways to improve the quality of the writing can be great. I disagree with over 50% of LLM advice on how to improve the quality, but it still gives me a lot of useful ideas and updates.
You can do the same to fact check something you’ve written (or any other writing). Just upload the doc and ask the LLM to fact-check every significant claim. It’ll produce a useful list of its best guesses at where the piece is wrong.
Debating and learning about rival viewpoints
If you feel like you don’t fully understand where the other side of a debate is coming from, you can ask a chatbot to behave as if it believes the other side and argue against your points as convincingly as possible. You can go back and forth with it for a while. This can draw out a lot of useful ideas about the other side you might not have noticed by just reading about it on your own. It’s sometimes challenging to find how the other side would respond to your specific objections if you don’t know anyone who actually believes it.
Some other thoughts on chatbots
Chatbots are anti-addictive
An underrated aspect of LLM apps is that they aren’t built to hack your dopamine, and often require thoughtful focused reading. When I use ChatGPT or Claude or Gemini I sit down knowing I’m going to read a lot of sometimes dense material that will require calm focus. Looking at the other most popular apps, I’m happy that ChatGPT is where it is.
AI can be amazing without being magic
Some people interpret me saying “AI is so useful! It’s great! I use it all the time!” to mean “AI can answer every question and is always useful! Everyone has to like it and use it,” which isn’t the case at all. AI is a really useful tool in the same way previous tech has been useful. It can be a crazy step up while still having clear, obvious limits. YouTube is also a crazy useful tool for learning. If I found out someone had never even considered using YouTube to learn something, I’d hype it up and push them to try it, because they’d obviously be missing out. But YouTube also has a lot of bad/fake info and other noticeable problems. If someone responded “But look! There are all these fake bad videos on YouTube! And it’s using as much energy as whole countries. It has a ton of stolen content on it too. You should really stick to reading books,” I’d say they were overreacting to a fairly harmless and clearly useful new tech that has a lot of very obvious benefits if you just spend the time to learn how to avoid the downsides. A lot of AI discourse feels like living through the early days of YouTube again, only this time everyone is much more brittle and conspiratorial about new tech, constantly assuming it’s trying to trick them.
Images
One of the clearest signs of AI progress is how far AI image generators have come in the last few years.
Prompt: A cat dressed as a computer programmer

The progress in AI image generation was so rapid that you quickly saw common knowledge about it develop, and then everyone acted like that common knowledge was going to be around forever, and then in a year it was invalidated. “AI can’t draw hands” is still a popular idea but isn’t the case anymore.
ChatGPT/Dall-E
ChatGPT’s advantage in image generation is its understanding of very subtle requests, picking up on the exact look of a picture, and transferring relevant parts of that picture over into the new style. Here’s a picture where it turned Neutral Milk Hotel into muppets:
Take a second to look at all the little details the AI picked up on, and the extremely subtle decisions it made about what muppets with those exact details would look like.
Here’s a Mao-era propaganda poster of me over a crowd of farmed animals holding copies of Reasons and Persons:

My favorite use of AI images in general is making extremely personalized images like the Maoist poster above for friends. You can include a ton of very specific details about them in the picture. I’m not sharing these for their privacy, but you can make images of Maoist posters or action figures or Renaissance art that feature your friends doing the very specific things they like.
ChatGPT works great even when you’re just prompting it without any reference images. Here’s “Solarpunk Dunkin’ Donuts” in different styles:
It’s also pretty good at making comic strips:
ChatGPT is good for very specific graphic design that needs a lot of context, or fun pleasant images of specific people and places in different art styles to make them happy. Both are great but limited.
I personally prefer MidJourney’s aesthetic style to ChatGPT’s. ChatGPT follows instructions well, but often feels kind of rigid. Here are two examples with ChatGPT on the left and MidJourney on the right. In both cases I prefer the look of MidJourney.
Prompt: An AI large language model in the sky. Below it people exist in a confused system where they're constantly changing their minds. Illustration style
Prompt: A landscape architect sketch of a concrete brutalist monument in a forest
ChatGPT is better at capturing all the subtle details I might be asking for, but it often just doesn’t look as great as MidJourney, even if MidJourney deviates from what I’m asking for. If I’m just trying to make an image I like a lot (that’s not an image of a person or place I know) I’ll usually stick to MidJourney.
Prompting
Finding styles that work and that you like
ChatGPT is better at some things than others. It can turn people into muppets, but has trouble putting their faces on Mount Rushmore etc. I think its impressionist paintings mostly look tacky and often obviously AI generated, but really enjoy looking at the cartoons it can generate. There’s a lot more to explore. It’s pretty new as of writing so I’m sure people will find more good prompts.
Refining an image
If you don’t like how an image looks or want a specific detail changed, ask ChatGPT to modify it a few times. Just use very clear specific language. I sometimes combine the final results from a few different images by editing them together separately or just uploading them to ChatGPT and asking to combine the vibes of each. Here’s ChatGPT’s result when I asked it to combine the vibes of images of Biden and Trump:
The written detail it can handle is limited
You can include a surprising amount of very specific details for what you want your picture to look like, but it’s still limited. Experiment with finding the limit for what you’re trying to do. You can actually smuggle in a lot more information if you draw a basic version of the picture you want ChatGPT to create, upload it, and then say to add a lot more detail and describe the specific style of art you’d like. It can handle a lot more detail and context sent over images than text.
MidJourney
MidJourney’s my favorite AI image generator. It’s the most reliable at actually making images I like looking at.
Its controls are specifically designed to modify images, so you get a lot more options for the layout of your image, and you can edit it in much more specific ways than ChatGPT. The basic plan costs $10/month; it’s worth paying once, experimenting with it to see if you like it, and then deciding whether to keep it.
The latest version of MidJourney is the first image generator I’ve seen that can make extremely convincing looking images of people and places. I wouldn’t have been able to tell that any of these were AI unless I looked really close:
Image generators can easily create hands now. The common wisdom “Just look at the hands, AI always messes them up” is over:
Here are a few images I’ve made recently that I liked a lot. I mostly use these for desktop wallpapers, covers for other stuff I’m making (like blog posts), or just keep them to have and look at:
Here’s an album with some more pictures I’ve made that I liked. I share a few more in my post about AI art.
Prompting
I haven’t found any special tricks to prompting MidJourney beyond getting good at the controls (especially the edit feature), giving it a sense of what style of art you like, and exploring a lot of different art styles to see what it’s especially good at.
Music
Suno
From what I can tell Suno’s the best AI music app right now.
Prompting
Suno lets you write lyrics and add tags for the style of music you’d like. You can also opt to make your song entirely instrumental.
Lyrics
It’s very important in the Suno lyrics section to mark which parts of the song are a “(verse)”, “(bridge)”, or “(chorus)” at the beginning of each section. This doesn’t always work, but a fun trick is to also add “(spoken word)” for the intro if you’d like your singer to just talk at the beginning of the song. I did that here.
If you include parentheses around words in the main lyrics line, the model will read that as backup singers, which can add fun texture to songs.
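Putting those two tricks together, a lyrics box might look something like this (the lyrics are placeholders, just showing the tag format):

```
(spoken word)
Let me tell you how this all started.

(verse)
First verse lyrics go here
More of the verse (echoed by backup singers)

(chorus)
The chorus goes here
```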
Styles
Suno has a spot below the lyrics where you can write a 200 character description of the style of music you’re looking for. It’s built to read this as a series of individual tags rather than whole sentences, so you should write things like “Glitch, triphop, nostalgic, eerie” instead of “A glitch song with triphop elements” etc.
If I want to make a song sound like an artist I like, I can just go to ChatGPT or Claude and ask “Please write me a 200 character Suno prompt to make the song have the same sound as (band). Do not use any proper nouns, make it very info-dense, as a series of tags instead of whole sentences. Separate everything by commas.” I generated the first AI music I really enjoyed listening to by doing this with Boards of Canada. ChatGPT’s description was:
Lo-fi analog synths, VHS warble, warm tape hiss, dreamy nostalgia, slow breakbeats, deep reverb, eerie speech samples, minor-key pads, hypnotic loops, surreal, hazy, downtempo, 90s IDM.
Another good place to find style ideas is to click “Home” and listen to the popular songs other people have made, and if you hear any styles you like look at the tags the person used to make it.
Original music
Writing lyrics
The only song I’ve made on Suno that I actually sat down and wrote all the lyrics for is Get Bigger Than the Carnists. For most other songs I asked Claude to write lyrics (and modified them a lot, you need to push Claude to not use overly general language, even in music!). With Claude’s help (and ChatGPT for prompt ideas to make Suno use the right style of music for each song) I wrote and published Rumsfeld! The Musical in a few hours of playing with the model. I found myself actually listening regularly to this punk cover of Thoughts for the 2001 Quadrennial Defense Review, and Defying Intelligence.
Covers
Suno really cracked down on copyrighted material, so you unfortunately can’t just take lyrics from songs you like and make them different styles. Before they did that I made a few. I found that country music is probably the best-sounding genre for Suno covers, so most of these are some variant of country:
Golden Boy (Mountain Goats) - hillbilly, and in French (you notice how good these models are when you hear a song in another language)
Northfield (a favorite sacred harp hymn) in a bunch of different styles
Jammable
I’m closing this with the most tasteless way I use AI: making covers of songs I love with stupid character voices. Jammable makes it really easy to get stupid AI covers of songs you like with a lot of characters.
Here’s Patrick Star singing Trouble by Cat Stevens:
Maybe the most tasteless thing I’ve created is Plankton singing Gardenhead/Leave me Alone by Neutral Milk Hotel:
There’s something stupidly cathartic about it. The song was one of my favorites as a teenager and I listened to the Plankton cover over and over one day. Inexcusable.
1. Gavin Leech observes that this textbook is subpar
2. This is a metaphor; I understand that LLMs don’t actually have databases