We’ve all been immersed in the hype surrounding Generative AI. It’s becoming a critical part of business operations, whether for sales, support, or analysis. As a techie, I’ve also been exploring how to leverage these tools to boost my productivity, ideally for code generation and analysis. However, these tools are only as effective as the person wielding them.

“A smartphone is only as smart as the user.”

While everyone around me was still getting accustomed to smart code-completion tools, we entered a new era of agents. Tools like Cursor AI, GitHub Copilot, and others are being used in scenarios ranging from code analysis to code generation and even workflow automation. It was time for me to see how these new agents could help me with my new team and project.

Context: My existing knowledge

I recently transitioned to a new team within the organization. I have a strong background in building backend Java applications and experience with older frontend tooling (pre-ES6 JavaScript) and some React. While I understand how newer frameworks work, I don’t have enough exposure to be an expert.

The new project domain was a candidate engagement/tracking platform. The tech stack for the project included Python/Django, Postgres, and Angular. The project wasn’t overly complex, making it a perfect use case to test out Cursor AI, which was a leading tool at the time. The principles I learned, however, are applicable to today’s most popular agents like GitHub Copilot and others.

Goal: Refine a story with Cursor.

Constraints:

  • Refine the story for any developer who would pick it up.

  • No knowledge transfer of code.

Story: “Phone validation for candidate edit flow was inconsistent.”

This was a vague story header to work with, and I was unaware of the user interaction details. So, I dove directly into figuring out the primary touchpoint.

I checked out the frontend and backend codebases, which lived in a single repository, and got Cursor up and running so it could build its context.

On the surface, the story seemed self-explanatory. (It doesn’t take a genius to figure out those aspects!) Coming from a Java background, understanding the service structure would have taken me some time. Cursor helped by quickly analyzing the context and pinpointing the exact locations to search.

Let’s walk through the process of how I worked with Cursor to refine this one-line story into something any developer could pick up.

P.S. I won’t delve into how we set up Cursor with its rules or discuss how LLMs and context building work.

Prompt 1: “Can you help me refine a story which we have only 1 line about — ‘Phone number validation during editing a candidate is incorrect’. Identify the files that are related to the story.”

Cursor: Went through all the directories and identified two locations where phone numbers were found. Cursor indicated that the add candidate and edit candidate components had phone number validations, and they differed in terms of the regular expressions (regexes) being used.
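To make the mismatch concrete, here’s a quick TypeScript sketch of how two divergent patterns behave on the same input. The regexes below are my reconstruction of the patterns that surface later in the story draft (anchored here for the demo), so treat them as illustrative rather than verbatim from the codebase:

```typescript
// Illustrative reconstruction of the two divergent validation patterns.
// "Add candidate": digits, spaces, hyphens (7+ chars), optional extension.
const addPattern = /^([0-9\s-]{7,})(?:\s*(?:#|x\.?|ext\.?|extension)\s*(\d+))?$/;
// "Edit candidate": a decimal-number pattern -- the wrong tool for phones.
const editPattern = /^\d+(\.\d{1})?$/;

const sample = "020 7946 0958";
console.log(addPattern.test(sample));   // true
console.log(editPattern.test(sample));  // false: spaces are rejected
console.log(editPattern.test("123.4")); // true: accepts a decimal as a "phone"
```

The same number a user successfully entered when adding a candidate would fail validation when editing that candidate, which is exactly the inconsistency the story describes.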

Prompt 2: “Are there any other locations where the phone number for the candidate needs to be handled? Could you confirm if the backend needs to handle any such inconsistencies?”

Cursor: Found the backend model, which had a slightly different regex from the frontend’s, though it wasn’t as restrictive. The form types for add and edit candidate also differed.

My initial analysis of what the story should encompass was now more concrete. We still needed to understand the tests and the limitations of the current regex. Cursor informed me that the “add candidate” regex was mostly correct, though it still didn’t cover international phone numbers. The “add candidate” component used a reactive form input, while the “edit” component used a template-driven form. I decided to set the backend changes aside for now, since its regex was more flexible; we would handle it as a separate story.

Now we knew where we could make a few changes and close this enhancement. But as Robert Martin (Uncle Bob) says:

“Always leave the campground cleaner than you found it.”

So, we needed to refine the story further.

Prompt 3: “Taking the add candidate component as the base, and knowing that Sahaj is an organization that spans India, Australia, US, and UK, what other changes need to be incorporated for phone validation, keeping the changes minimal? Let’s keep the backend changes aside, as its regex is not as restrictive.”

Cursor: Recommended handling phone numbers with a minimum length of 7 and handling “+” as a prefix (with the modified regex). It also suggested modifying the edit candidate component to use a reactive form approach for consistency.
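For illustration, a pattern along those lines might look like the sketch below. This is my own rendering of the stated requirements (optional “+” prefix, minimum length of 7, optional extension), not Cursor’s exact output:

```typescript
// Sketch of the recommended rules (my assumption, not Cursor's exact regex):
// optional "+" prefix, a leading digit, then 6+ more digits/spaces/hyphens,
// with an optional extension suffix.
const phonePattern =
  /^\+?[0-9][0-9\s-]{6,}(?:\s*(?:#|x\.?|ext\.?|extension)\s*\d+)?$/;

["+61 2 9374 4000", "98765 43210", "12345"].forEach((n) =>
  console.log(`${n}: ${phonePattern.test(n)}`)
); // first two pass; "12345" fails the minimum-length rule
```

Note that a regex like this only checks shape, not whether a number is actually reachable; a dedicated library would be the next step if stricter validation were ever needed.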

Prompt 4: “Keeping all the information in check, can we now write a story as a technical product manager to create a common phone number component which will be used in both the add/edit candidate component? The backend validator should match the frontend. Tests should be a critical part of the acceptance criteria. Do add a background section for the story.”

Cursor: (Spits out a one-pager with different sections: background, impacted components, limitations, acceptance criteria.)

This one-pager looked good, but I was feeling lazy, so back to Cursor!

Prompt 5: “Can you give me the same story but this time make it a Markdown style story and in a single context to help me copy to the clipboard?”

Let’s look at the final draft of the story.

### Phone Number Validation Consistency and Form Standardization
There is a critical inconsistency in phone number validation between the Add Candidate and Edit Candidate components, along with a complete lack of test coverage for phone number validation. This creates a poor user experience and potential data integrity issues.

#### Add Candidate Component ✅ (Reactive form - Better Implementation)
Location: ....add-candidate.component.html
Validation Pattern: `([0-9\s-]{7,})(?:\s*(?:#|x\.?|ext\.?|extension)\s*(\d+))?$`

#### Edit Candidate Component ❌ (Template driven form - Problematic Implementation)
Location: ....edit-candidate.component.html
Validation Pattern: `^\d+(\.\d{1})?$`

#### Critical Issues with Edit Candidate Component:
❌ Wrong pattern: Designed for decimal numbers, not phone numbers
❌ Allows decimals: Accepts invalid values like 123.4
❌ No spaces/hyphens: Rejects common phone formatting
❌ No extensions: Can't handle office extensions
❌ Arbitrary restriction: 10-character limit blocks international numbers

#### Add Candidate Tests (add-candidate.component.spec.ts):
✅ Comprehensive validation tests for email and LinkedIn
❌ No phone number tests

#### Edit Candidate Tests (edit-candidate.component.spec.ts):
✅ Basic functionality tests (creation, dialog behavior)
❌ No phone number tests

### Acceptance Criteria
Test Coverage
- [ ] Common phone validation is shared between both components
- [ ] Add Candidate component has comprehensive phone validation tests
- [ ] Edit Candidate component is migrated to reactive forms
- [ ] All existing functionality is preserved
- [ ] Test coverage report shows improvement
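The “common phone validation” criterion essentially boils down to a single shared helper that both components call. A minimal framework-free sketch (the names and pattern here are hypothetical, not from the actual codebase):

```typescript
// Hypothetical shared validator: both add/edit candidate components (and
// ideally the backend) would reference one source of truth instead of
// maintaining diverging regexes.
const PHONE_PATTERN = /^\+?[0-9][0-9\s-]{6,}$/;

function isValidPhone(value: string): boolean {
  return PHONE_PATTERN.test(value.trim());
}

console.log(isValidPhone("+91 98765 43210")); // true
console.log(isValidPhone("123.4"));           // false
```

In the Angular codebase this would sit behind a shared component or a custom validator wired into both forms, so a future regex change happens in exactly one place.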

That story looked far more polished than anything I would have written myself!

After this first activity, I learned a couple of things to effectively work with Cursor:

  • Analyze the requirement with an agent and make sure you instruct it to ask clarifying questions for better understanding.

  • Whether it’s coding or refining, the basics matter.

  • Break down the analysis into multiple phases.

  • Plan out each phase.

  • The bigger the task, the more confused the agent can become.

“As a senior developer, I treat Cursor like an intern who could be a prodigy with the right guidance. That means I vet every analysis and plan it provides.”

A few things to help with costs:

  • Organize your thoughts concisely before prompting; it avoids multiple round trips to the agent.

  • Trim the chat window, or start a new one once a task is completed.

  • I used Auto mode for this exercise. (Auto helps with cost/time, but it could redirect you to an inefficient model.)

  • Pricing models for these tools change frequently, so check whether this cost advice still applies.

Would I use an agent to refine? Yes, definitely. The rapid evolution of AI tools has only reinforced this. Whether it’s Cursor, GitHub Copilot, or the next big thing, the core value remains the same: it’s a powerful tool to accelerate development, as long as I’m vetting and learning from the agent. Ultimately, it’s a highly intelligent assistant for my codebase, a “glorified search engine” that helps me get to the solution faster.