What Can AI Do?
Learn what AI does best and where it falls short.
On this page, you’ll learn about the capabilities and limitations of AI, especially in academic settings. Let’s explore where AI excels, where it disappoints, and how different kinds of models approach problems. After finishing this page, you’ll know when AI helps your productivity and when it can drag it down.
1. What AI Does Best
By AI, this site refers to LLMs (large language models), the most common type of AI and the technology behind ChatGPT, Google Gemini, Claude, and more.
So, when does it work best?
a. Writing & Content Generation
AI can draft essays (use responsibly; see the Ethics section), summaries, flashcards, and study notes almost instantly. Whether you’re restating definitions or rewriting your thoughts, it’s fast and fluid.
Example: You ask: “Summarize the causes of the Civil War in two paragraphs.” AI delivers a concise and coherent summary—perfect as a study starting point.
b. Brainstorming & Outlining
AI is excellent at idea generation. Need essay topics, project themes, or bullet-point plans? It can produce those in seconds.
Example: You ask “Give me five creative project ideas for environmental science.” AI quickly lists ideas like “urban microclimate mapping with DIY sensors,” or “bioplastics from kitchen waste.”
c. Reasoned Problem-Solving (with techniques)
When guided with the right prompts—like “Let’s think step by step”—AI can tackle multi-step problems in math, logic, or coding with logical precision. You can also use a reasoning model, which automatically guides itself through steps to solve a problem. We’ll discuss this later.
Example: “If the cafeteria had 23 apples, used 20, then bought 6, how many remain? Let’s think step by step.” AI breaks it down: 23 − 20 = 3, then 3 + 6 = 9.
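To see why decomposition helps, the same arithmetic can be written out one small, checkable step at a time—exactly what the “step by step” prompt asks the model to do. This is a plain Python sketch for illustration, not an AI call; the function name is our own invention:

```python
# The cafeteria problem, decomposed the way a "let's think step by step"
# prompt encourages: one small, verifiable operation per step.

def cafeteria_apples(start: int, used: int, bought: int) -> int:
    after_lunch = start - used        # step 1: 23 - 20 = 3
    remaining = after_lunch + bought  # step 2: 3 + 6 = 9
    return remaining

print(cafeteria_apples(23, 20, 6))  # 9
```

Breaking a problem into explicit intermediate values like this is also a good way to double-check an AI’s answer by hand.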
2. Where AI Falls Short
a. Hallucinations & Confabulations
AI sometimes fabricates information—invented dates, fake citations, or baseless quotes. Always double-check facts.
b. No Genuine Understanding
AI recognizes patterns, not meaning. It can mimic reasoning but doesn’t “understand” concepts like a human does.
c. Knowledge Cutoffs & Outdated Info
Unless connected to real-time sources, AI only “knows” what it was trained on, so anything after its training cutoff may be missing or outdated.
d. Encouraging Over-Reliance
Studies show students often engage less cognitively when AI provides instant solutions. Without effort, critical thinking can suffer.
e. Complex Challenges Still Tough
Even advanced AI models falter on highly complex reasoning tasks—especially when hallucinations arise.
3. Language Models vs. Reasoning Models: What’s the Difference?
If you are using ChatGPT, you can choose between GPT-5 Instant (a language model) and GPT-5 Thinking (a reasoning model) as of September 2025.
What is the difference between language models and reasoning models, and when should you use each? Let’s find out.
Language Models (LLMs)
- What they are: Models trained to predict and produce text based on statistical patterns in language. This is the classic type of large language model.
- Best at: Creative writing, summarizing, translating, and quick conversation.
- Trade-offs: They do not work through an explicit reasoning process before responding, so they may not give the best answer to complicated or STEM-heavy questions.
Reasoning Models (RLMs)
- What they are: LLMs fine-tuned or architected to generate structured, multi-step reasoning. They produce intermediate “private” reasoning steps, not visible to the user, to arrive at more reliable answers.
- Best at: Solving STEM-related problems or answering complicated questions that require “thinking”.
- Trade-offs: Reasoning models usually take longer to respond, since they work through a reasoning process before answering. They may also overthink simple, straightforward problems.
4. Table: When to Use Which AI?
| Task | Use AI? | Use Reasoning Model? | Why It Works |
|---|---|---|---|
| Drafting essay topics or intros | Yes (responsibly) | Only if the topic is complicated or requires “thinking” | Quick idea generation suits LLMs; for simple topics, reasoning adds latency without benefit. |
| Tackling a multi-step algebra problem | Possible | Yes | Reasoning models help trace logic and avoid simple mistakes. |
| Summarizing a book chapter | Yes (responsibly) | No | Fast and fluent—perfect for LLMs. |
| Debugging a coding logic error | Possible | Yes | Complex logic demands reasoning steps over fluent output. |
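The table above can be condensed into a small routing heuristic. The sketch below is purely illustrative—the task categories and the `pick_model` helper are this page’s invention, not any vendor’s API:

```python
# Hypothetical helper mirroring the table: route a task to a model type.
# The category names are illustrative assumptions, not a real API.

REASONING_TASKS = {
    "multi-step math",       # trace logic, avoid simple mistakes
    "debugging",             # complex logic demands reasoning steps
    "complex analysis",      # questions that require "thinking"
}

def pick_model(task_type: str) -> str:
    """Return 'reasoning' for step-by-step logic tasks,
    'language' for fast, fluent generation tasks."""
    if task_type in REASONING_TASKS:
        return "reasoning"
    return "language"

print(pick_model("multi-step math"))  # reasoning
print(pick_model("summarizing"))      # language
```

In practice the boundary is fuzzier than a lookup table—when in doubt, try the faster language model first and escalate to a reasoning model if the answer looks shaky.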
5. Summary
- Know your AI’s strengths: Use it to generate, brainstorm, and summarize.
- Guard against its limitations: Always cross-check facts, especially with hallucination risk.
- Think critically: Don’t substitute understanding with convenience.
- Choose the right AI for the task: Use LLMs for creative tasks, reasoning models for complex logic.
- Maintain your learning: Use AI as a tool to support—not replace—your thinking.