Boris Quiroz

Technologist. Sometimes I do some coding.


Things I Learned After a Few Weeks Coding With AI


Recently, I started using AI tools to assist with my coding projects. This is not a comprehensive guide, but rather a collection of insights I’ve gathered over the past few weeks. The motivation behind this initiative was a company suggestion to explore AI tools for coding, and I was curious to see how they could enhance (or not) my workflow.

So far, I’ve been using a combination of AI tools, but most notably Copilot, Codex, Claude.ai and Cursor. I won’t go into the details of each tool, but rather share my experiences and what I’ve found to be the most “interesting” mindset to have when using AI for coding.

AI is a tool, not a replacement.

One of the first things I learned is that AI is a tool, not a replacement for a developer. It can assist with tasks, but it doesn’t replace the need for human judgment and creativity. AI can help with repetitive tasks, suggest code snippets, and even debug issues, but it doesn’t understand the context of your project or the nuances of your codebase. At the moment, AI is very bad at contextual understanding and even worse at system design. It can help you write code, but it won’t write the entire application for you. Not at all. Or maybe it will, but it won’t be professional code. It will be a mess that you’ll have to clean up and refactor anyway.

Below you’ll find a list of things I learned while coding with AI tools that I think are worth sharing. It is not an exhaustive list, but it is (I think) a good starting point for anyone looking to use AI tools for coding.

Be clear and specific.

Provide context, describe the goal, the constraints and the language you are using. Avoid vague prompts like “parse this CSV file”. Instead, try something like:

Write a Python function that reads a CSV and returns the average of the ‘price’ column. The CSV file is in the following format: date,price,quantity.
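
For reference, a prompt like that usually gets you something close to the following. This is a minimal sketch assuming the date,price,quantity format with a header row; the function name and file-path argument are my own placeholders:

import csv

# A plausible first answer: read the CSV with csv.DictReader and average the 'price' column.
def average_price(csv_path):
    with open(csv_path, newline="") as f:
        prices = [float(row["price"]) for row in csv.DictReader(f)]
    return sum(prices) / len(prices)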

Iterate with follow-up prompts.

Most of the tools I tried work best in a conversational loop. Ask, evaluate, then refine your request.

  • Refactor this code to be more readable/testable.
  • What’s the Big-O complexity of this code?
  • Add type hints and a docstring
  • Now make it asynchronous in the most concise way

I’ve found that if I use the “in the most concise way” prompt, it will often produce a more elegant solution. This is especially useful when you want to avoid unnecessary complexity.
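
To make that loop concrete, here is roughly how the CSV-average sketch from earlier might look after the “add type hints and a docstring” and “now make it asynchronous” follow-ups. Wrapping the blocking read with asyncio.to_thread is just one reasonable interpretation of “asynchronous”, not the only answer a tool will give:

import asyncio
import csv

def average_price(csv_path: str) -> float:
    """Return the average of the 'price' column of a date,price,quantity CSV."""
    with open(csv_path, newline="") as f:
        prices = [float(row["price"]) for row in csv.DictReader(f)]
    return sum(prices) / len(prices)

async def average_price_async(csv_path: str) -> float:
    """Run the blocking file read in a worker thread so callers can await it."""
    return await asyncio.to_thread(average_price, csv_path)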

Show examples of input and output.

If you’re using a chat-based AI tool, providing examples of input and output can help the AI understand what you’re looking for. For instance, if you’re asking it to write a function that processes a specific type of data, show it a sample input and the expected output:

Write a Python function that takes a list of dictionaries and returns a new list with only the dictionaries that have a ‘price’ key greater than 100. For example, given the input:

data = [
    {'name': 'item1', 'price': 50},
    {'name': 'item2', 'price': 150},
    {'name': 'item3', 'price': 200}
]

The expected output should be:

[
    {'name': 'item2', 'price': 150},
    {'name': 'item3', 'price': 200}
]

If you’re using a code editor with AI integration, you can also provide examples in the code comments or as part of the code itself. This helps the AI understand the context and the expected behavior of the code. You can use docstrings, type hints or explicit goals in the comments to guide the AI. For example:

from typing import Any, Dict, List

def process_data(data: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """
    Process a list of dictionaries and return a new list with only the dictionaries
    that have a 'price' key greater than 100.

    Example:
    Input: [{'name': 'item1', 'price': 50}, {'name': 'item2', 'price': 150}]
    Output: [{'name': 'item2', 'price': 150}]
    """
    pass
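
For completeness, here is one plausible implementation the AI might come back with given that stub; the item.get default is my own defensive choice, not something the docstring requires:

from typing import Any, Dict, List

def process_data(data: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Keep only the dictionaries whose 'price' value is greater than 100."""
    return [item for item in data if item.get("price", 0) > 100]

# Matches the earlier example: item1 (price 50) is dropped, item2 and item3 are kept.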

Break complex tasks into small parts.

Ask for smaller building blocks first, then stitch them together. For example, first ask for the fetcher, then the parser and finally the processor. This way, you can ensure each part is well-defined and works correctly before integrating them into a larger solution.
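
As a sketch of what that can look like (the URL, the JSON payload shape and the price filter here are all hypothetical, just to show the stitching):

import json
from urllib.request import urlopen
from typing import Any, Dict, List

def fetch(url: str) -> str:
    """Step 1: download the raw payload (asked for and reviewed on its own)."""
    with urlopen(url) as resp:
        return resp.read().decode("utf-8")

def parse(raw: str) -> List[Dict[str, Any]]:
    """Step 2: turn the raw JSON text into Python objects."""
    return json.loads(raw)

def process(items: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Step 3: keep only the items priced above 100."""
    return [item for item in items if item.get("price", 0) > 100]

def pipeline(url: str) -> List[Dict[str, Any]]:
    """Stitch the three reviewed building blocks together."""
    return process(parse(fetch(url)))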

Use the right tool for the job.

Not all AI tools are created equal; some are better suited for specific tasks. In my experience with the tools I tried, ChatGPT is very good at analyzing files, summarizing codebases and generating documentation (like README files!). Copilot is great for completing functions from comments, for rapid scaffolding and for generating code snippets. Cursor shines at debugging and refactoring code, while Claude is great for generating code from scratch and for answering questions about the code.

In the end, AI in coding is not magic (but it’s close). The more thoughtful the interaction, the better the results. Treat it like a peer with infinite patience and a vast knowledge base, but remember that it still needs your guidance to produce the best results.


Published June 10, 2025