
What Can AI Do?

Learn what AI does best and where it falls short.

On this page, you’ll learn about the capabilities and limitations of AI, especially in academic settings. We’ll explore where AI excels, where it falls short, and how different kinds of models approach problems. By the end, you’ll know when AI boosts your productivity and when it drags it down.


By AI, this site refers to LLMs (large language models), the most common type of AI used behind ChatGPT, Google Gemini, Claude, and more.

1. So, When Does AI Work Best?

a. Fast Text Generation

AI can draft essays (use responsibly; see the Ethics section), summaries, flashcards, and study notes almost instantly. Whether you’re restating definitions or rewriting your thoughts, it’s fast and fluid.

Example: You ask: “Summarize the causes of the Civil War in two paragraphs.” AI delivers a concise and coherent summary—perfect as a study starting point.

b. Idea Generation

AI is excellent at idea generation. Need essay topics, project themes, or bullet-point plans? It can produce those in seconds.

Example: You ask “Give me five creative project ideas for environmental science.” AI quickly lists ideas like “urban microclimate mapping with DIY sensors,” or “bioplastics from kitchen waste.”

c. Reasoned Problem-Solving (with techniques)

When guided with the right prompts—like “Let’s think step by step”—AI can tackle multi-step problems in math, logic, or coding with logical precision. You can also use a reasoning model, which automatically guides itself through steps to solve a problem. We’ll discuss this later.

Example: “If the cafeteria had 23 apples, used 20, then bought 6, how many remain? Let’s think step by step.” AI breaks it down: 23 − 20 = 3, then 3 + 6 = 9.
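The same breakdown can be checked with a few lines of Python that mirror the step-by-step reasoning. This is an illustrative sketch for verifying the arithmetic yourself; the `solve_step_by_step` function is invented for this example:

```python
def solve_step_by_step(start: int, used: int, bought: int) -> int:
    """Walk through the cafeteria apple problem one step at a time."""
    remaining = start - used
    print(f"Step 1: {start} - {used} = {remaining}")
    total = remaining + bought
    print(f"Step 2: {remaining} + {bought} = {total}")
    return total

print(solve_step_by_step(23, 20, 6))  # 9
```

Asking the AI to show its intermediate steps, like this code does, makes it much easier to spot where a calculation went wrong.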


2. Where Does AI Fall Short?

  • Hallucinations: AI sometimes fabricates information—invented dates, fake citations, or baseless quotes. Always double-check facts.

  • Pattern matching, not understanding: AI recognizes patterns, not meaning. It can mimic reasoning, but it doesn’t “understand” concepts the way a human does.

  • Knowledge cutoff: Unless connected to real-time sources, AI only “knows” what it was trained on.

  • Reduced engagement: Studies show students often engage less cognitively when AI provides instant solutions. Without effort, critical thinking can suffer.

  • Limits on complex reasoning: Even advanced AI models falter on highly complex reasoning tasks, especially when hallucinations arise.


3. Language Models vs. Reasoning Models: What’s the Difference?

If you are using ChatGPT, you can choose between GPT-5 Instant (a language model) and GPT-5 Thinking (a reasoning model) as of September 2025.

What is the difference between language models and reasoning models, and when should you use each? Let’s find out.

Language Models

  • What they are: Models trained to predict and produce text based on patterns. This is the classic type of large language model.

  • Best at: Creative writing, summarizing, translating, and quick conversation.

  • Trade-offs: They respond without an explicit reasoning process, so they may give weaker answers to complicated or STEM-heavy questions.


Reasoning Models

  • What they are: LLMs fine-tuned or architected to generate structured, multi-step reasoning. They produce intermediate “private” reasoning steps, not visible to the user, to arrive at more reliable answers.

  • Best at: Solving STEM-related problems or answering complicated questions that require “thinking”.

  • Trade-offs: Reasoning models usually take longer to respond, since they work through a reasoning process before answering. They may also overthink simple, straightforward problems.


| Task | Use AI? | Use Reasoning Model? | Why It Works |
| --- | --- | --- | --- |
| Drafting essay topics or intros | Yes (responsibly) | Only if the topic is complicated or requires “thinking” | Quick idea generation is perfect for LLMs. Reasoning adds cost without benefit. |
| Tackling a multi-step algebra problem | Possible | Yes | Reasoning models help trace logic and avoid simple mistakes. |
| Summarizing a book chapter | Yes (responsibly) | No | Fast and fluent—perfect for LLMs. |
| Debugging a coding logic error | Possible | Yes | Complex logic demands reasoning steps over fluent output. |
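The table’s guidance can be compressed into a tiny decision helper. This is purely illustrative: the task strings and the `pick_model` function are invented for this sketch and are not part of any real AI client library.

```python
# Illustrative decision logic mirroring the table above (hypothetical names).
REASONING_TASKS = {"multi-step algebra", "debugging logic errors"}
LANGUAGE_TASKS = {"drafting essay intros", "summarizing a chapter"}

def pick_model(task: str) -> str:
    """Return which kind of model the table above would suggest."""
    if task in REASONING_TASKS:
        return "reasoning model"   # slower, but traces multi-step logic
    if task in LANGUAGE_TASKS:
        return "language model"    # fast, fluent generation is enough
    return "language model"        # default: start cheap, escalate if needed

print(pick_model("multi-step algebra"))     # reasoning model
print(pick_model("summarizing a chapter"))  # language model
```

The default branch reflects a practical habit: start with the faster, cheaper language model and switch to a reasoning model only when the answer needs careful multi-step logic.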

Key Takeaways

  • Know your AI’s strengths: Use it to generate, brainstorm, and summarize.
  • Guard against its limitations: Always cross-check facts, especially with hallucination risk.
  • Think critically: Don’t substitute understanding with convenience.
  • Choose the right AI for the task: Use LLMs for creative tasks, reasoning models for complex logic.
  • Maintain your learning: Use AI as a tool to support—not replace—your thinking.