Hey Cloud Architects and AI Enthusiasts!
In our last post, we got our hands dirty (or rather, our fingers on the keyboard!) with Vertex AI Studio and Google Gemini, quickly prototyping a product describer for a retail company. It was awesome to see how fast we could go from an idea to a deployed web app, no code required.

But what if the AI's first response isn't quite what you wanted? Or you need it to be really specific, following a precise format, every single time? That's where Prompt Engineering comes in: it's the art and science of talking to your AI models to get the best possible results.
Today, we're diving deeper. We'll explore powerful techniques like few-shot prompting and how to rigorously test and compare different prompt versions right within Vertex AI Studio. Our new retail challenge? Intelligently routing customer service queries, a task where precision and consistency are paramount.
Ready to level up your Gemini game and build even smarter AI solutions? Let's go!
Part 1: The Art of Prompt Engineering - Why It Matters So Much
Think of an LLM like Gemini as an incredibly knowledgeable, but sometimes unopinionated, intern. If you give vague instructions ("Write something about a product"), you'll get a vague (though often grammatically perfect) output. But if you give clear instructions, examples, and context, your "intern" becomes a highly efficient, specialized assistant.
That's Prompt Engineering! It's about:
- Clarity: Ensuring the model understands exactly what you want.
- Consistency: Getting predictable outputs, especially for repetitive tasks.
- Control: Steering the model's tone, format, and focus.
- Accuracy: Reducing "hallucinations" or irrelevant information.
Our New Retail Challenge: Customer Service Routing
Imagine a busy e-commerce customer service department. Queries come in via email, chat, or social media. Manually reading each one and deciding if it goes to "Returns," "Technical Support," "Billing," or "General Inquiry" is time-consuming and prone to error.
We want Gemini to act as a smart router, categorizing incoming customer messages instantly. This requires precision!
Part 2: Zero-Shot vs. Few-Shot Prompting (and When to Use Them!)
In our last post, we used Zero-Shot Prompting for the product describer. That means we gave the model no examples, just instructions. For simple, creative tasks, zero-shot can be great!
But for structured, consistent outputs, especially classification or data extraction, Few-Shot Prompting is your secret weapon.
Scenario: Routing Customer Queries
Let's start by creating a new prompt for our customer service routing.
- Create a New Prompt: In the Google Cloud console, from the Navigation menu (☰), select Vertex AI > Vertex AI Studio > Create prompt.
- Rename Your Prompt: Click on "Untitled Prompt" at the top left and rename it Customer Service Router - Prototype.
- System Instructions - Setting the Role: First, give Gemini a clear role.
In the System instructions box, paste:
You are an AI assistant for a retail company's customer service department.
Your primary goal is to classify incoming customer messages into specific, predefined categories.
You must only respond with the designated category. If unsure, respond with 'Uncategorized'.

4. Configuration Settings:
- Model: make sure the preselected gemini-2.5-flash model is selected.
- Set Temperature to 0.1. (Why? We want very little randomness for classification; we need precise, deterministic outputs.)
- Set Output token limit to a reasonable number, like 128 (since we only expect a category name).
- Confirm your Region (for example, I picked the Paris region; choose one near you or a cheaper one).

5. Zero-Shot Attempt: Let's see how it does with no examples.
- In the main prompt area, paste:
Customer Message: "My new smart toaster isn't connecting to the app. I followed all the instructions, but the Wi-Fi light just keeps blinking red."
Classify this message into one of the following categories:
- Technical Support
- Billing Inquiry
- Order Status
- Returns & Exchanges
- General Inquiry
- Click the Submit arrow. Observe Gemini's response. It will likely get it right, but sometimes it might add extra text or be slightly off.

⚡ Architect's Insight: Zero-shot is fast, but it relies heavily on the model's pre-trained knowledge. For tasks requiring strict formatting or very specific definitions of categories, it can be less reliable. (If you'd rather drive this from code, there's a quick sketch just below.)
- Clear the Canvas: Click the Clear icon (usually a trash can or X) in the top toolbar to clear the entire prompt canvas. This is crucial before adding few-shot examples!
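By the way, everything we just clicked through can also be driven from code. Here's a minimal sketch (not from the original lab) using the google-genai Python SDK against Vertex AI; the project ID and region are placeholders to swap for your own, and the model, system instructions, temperature, and token limit simply mirror the Studio settings above. The thinking_budget line is an extra assumption so the small output budget isn't spent on reasoning tokens.

```python
# Minimal zero-shot classification sketch (google-genai SDK: `pip install google-genai`).
# "your-project-id" and "europe-west9" are placeholders - use your own project/region.
from google import genai
from google.genai import types

client = genai.Client(vertexai=True, project="your-project-id", location="europe-west9")

SYSTEM_INSTRUCTIONS = (
    "You are an AI assistant for a retail company's customer service department. "
    "Your primary goal is to classify incoming customer messages into specific, "
    "predefined categories. You must only respond with the designated category. "
    "If unsure, respond with 'Uncategorized'."
)

CATEGORY_BLOCK = (
    "Classify this message into one of the following categories:\n"
    "- Technical Support\n- Billing Inquiry\n- Order Status\n"
    "- Returns & Exchanges\n- General Inquiry"
)

message = (
    'Customer Message: "My new smart toaster isn\'t connecting to the app. '
    'I followed all the instructions, but the Wi-Fi light just keeps blinking red."'
)

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=f"{message}\n{CATEGORY_BLOCK}",
    config=types.GenerateContentConfig(
        system_instruction=SYSTEM_INSTRUCTIONS,
        temperature=0.1,        # very little randomness: we want deterministic labels
        max_output_tokens=128,  # we only expect a short category name
        thinking_config=types.ThinkingConfig(thinking_budget=0),  # assumption: keep the budget for the answer itself
    ),
)

print(response.text)  # ideally just: Technical Support
```

Same idea as the UI: one message, one instruction, no examples yet.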
Enter Few-Shot Prompting!
Now, let's guide Gemini with examples. This teaches the model the pattern we expect.
- Add Examples: On the bottom right of the Prompt section, click the Add Examples button (looks like two overlapping squares).

- Input Your First Example:
- For the INPUT of your first example, paste:
Customer Message: "I received a defective jacket. The zipper is completely broken right out of the box. How do I send it back for a replacement?"
Classify this message into one of the following categories:
- Technical Support
- Billing Inquiry
- Order Status
- Returns & Exchanges
- General Inquiry
- For the OUTPUT of your first example, paste the exact desired output:
Returns & Exchanges
- Add More Examples: Repeat the process by clicking Add examples again in the example window. Aim for at least 2-3 diverse examples for robust few-shot learning.
- Second Example:
INPUT:
Customer Message: "My last statement seems to have a double charge for the 'Ultimate Gaming Headset'. Can you look into this for me?"
Classify this message into one of the following categories:
- Technical Support
- Billing Inquiry
- Order Status
- Returns & Exchanges
- General Inquiry
OUTPUT:
Billing Inquiry
- Third Example:
INPUT:
Customer Message: "I ordered the 'Smart Home Hub' three days ago. Has it shipped yet? I'm excited to get it set up!"
Classify this message into one of the following categories:
- Technical Support
- Billing Inquiry
- Order Status
- Returns & Exchanges
- General Inquiry
OUTPUT:
Order Status
- Click the Add examples button to save all your examples and return to the main prompt.

- Re-add System Instructions & New Input: Remember, clearing the canvas also cleared the system instructions. Paste them back in:
You are an AI assistant for a retail company's customer service department.
Your primary goal is to classify incoming customer messages into specific, predefined categories.
You must only respond with the designated category. If unsure, respond with 'Uncategorized'.
Now, in the Input area of the main prompt section (the box with the "Write value here" placeholder), paste the original message from our zero-shot attempt:
Customer Message: "My new smart toaster isn't connecting to the app. I followed all the instructions, but the Wi-Fi light just keeps blinking red."
Then, in the Write your prompt here area (below the Input field), add the instruction that leverages the examples:
Classify this message into one of the following categories, based on the examples provided:
- Technical Support
- Billing Inquiry
- Order Status
- Returns & Exchanges
- General Inquiry

5. Submit and Compare: Ensure your Temperature is still 0.1, then click the Submit arrow.
- Observe: Has the output become more consistent? Does it strictly adhere to the category name, without extra conversational text? Few-shot prompting helps Gemini understand the intent and format you expect. (A code version of this few-shot pattern is sketched below.)
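And here's a hedged code sketch of the same few-shot pattern. Vertex AI Studio stores your input/output pairs as examples; one straightforward way to reproduce that programmatically is to prepend them to the prompt as already-solved examples. The classify helper name is mine; the messages, categories, and settings come from the steps above, and project/region are placeholders as before.

```python
# Few-shot classification sketch: worked examples are prepended to the prompt.
# Same assumptions as the zero-shot sketch (google-genai SDK, placeholder project/region).
from google import genai
from google.genai import types

client = genai.Client(vertexai=True, project="your-project-id", location="europe-west9")

SYSTEM_INSTRUCTIONS = (
    "You are an AI assistant for a retail company's customer service department. "
    "Your primary goal is to classify incoming customer messages into specific, "
    "predefined categories. You must only respond with the designated category. "
    "If unsure, respond with 'Uncategorized'."
)

CATEGORY_BLOCK = (
    "Classify this message into one of the following categories:\n"
    "- Technical Support\n- Billing Inquiry\n- Order Status\n"
    "- Returns & Exchanges\n- General Inquiry"
)

# (input, expected output) pairs - the same three examples we entered in the Studio UI.
FEW_SHOT_EXAMPLES = [
    ('''Customer Message: "I received a defective jacket. The zipper is completely broken right out of the box. How do I send it back for a replacement?"''',
     "Returns & Exchanges"),
    ('''Customer Message: "My last statement seems to have a double charge for the 'Ultimate Gaming Headset'. Can you look into this for me?"''',
     "Billing Inquiry"),
    ('''Customer Message: "I ordered the 'Smart Home Hub' three days ago. Has it shipped yet? I'm excited to get it set up!"''',
     "Order Status"),
]

def classify(message: str) -> str:
    """Build a few-shot prompt for `message` and return the model's category."""
    examples = "\n\n".join(f"{inp}\n{CATEGORY_BLOCK}\n{out}" for inp, out in FEW_SHOT_EXAMPLES)
    prompt = f"{examples}\n\n{message}\n{CATEGORY_BLOCK}"
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=prompt,
        config=types.GenerateContentConfig(
            system_instruction=SYSTEM_INSTRUCTIONS,
            temperature=0.1,
            max_output_tokens=128,
            thinking_config=types.ThinkingConfig(thinking_budget=0),
        ),
    )
    return (response.text or "").strip()

print(classify('Customer Message: "My new smart toaster isn\'t connecting to the app. '
               'I followed all the instructions, but the Wi-Fi light just keeps blinking red."'))
```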

Part 3: Experimenting & Comparing Prompts - The "A/B Testing" of AI
Prompt engineering is an iterative process. You try something, you test it, you refine it. Vertex AI Studio's Compare feature is incredibly powerful for this!
Let's refine our "Customer Service Router" further by adjusting parameters and instruction variations.
- Save Your Current Prompt: Click the Save button at the top. Confirm the name Customer Service Router - Prototype.
- Enter Compare Mode: With your Customer Service Router - Prototype prompt open, click the Compare button on the top toolbar.
- Your current prompt will appear in a column on the left.

3. Add a New Comparison Prompt (Zero-Shot with Stronger Instructions):
- In the central area, click + Compare new prompt. A new pane appears on the right.
- For this new prompt, let's try a strict zero-shot approach, but with very explicit instructions. In the single large text box for the prompt, paste:
You are a highly efficient customer service message classifier.
Your only task is to identify the single most relevant category for the given customer message.
Choose ONLY from the following categories: Technical Support, Billing Inquiry, Order Status, Returns & Exchanges, General Inquiry.
Do NOT add any other text, explanations, or punctuation. Just the category name.
Customer Message: "My smart thermostat keeps losing connection. I've reset my router multiple times."
- Configuration for this comparison:
- Model: gemini-2.5-flash (same as before).
- Temperature: 0.0 (even more deterministic).
- Output token limit: 64 (even tighter for a single word).
- Region: Confirm.
- Scroll down and click Apply.

4. Submit Prompts & Compare! Click the Submit prompts button (at the top of the "Compare" interface).
- Analyze the outputs: How do the few-shot (left) and strict zero-shot (right) perform? Does one give you cleaner, more consistent single-word outputs? This side-by-side comparison is invaluable for prompt optimization!
5. Experiment with Temperature (in a New Comparison):
- Click + Compare new prompt again.
- For this new prompt, use the same few-shot setup as your original Customer Service Router - Prototype (copy its text and examples over, or select it if Vertex AI allows "Compare saved prompt" and then modify).
- Configuration Change: Set the Temperature to 0.5 (a bit more creative/varied).
- Submit prompts and see the difference. Does the higher temperature cause the model to deviate or add extra words? For classification, higher temperatures are usually undesirable. (A small script to reproduce this kind of A/B check appears after this walkthrough.)
6. Prompt Management (Quick Review):
- Once you're done comparing, you can save your preferred version by clicking Save as new from any of the comparison panes, giving it a descriptive name (e.g., Customer Service Router - Optimized Few-Shot).
- To exit the "Compare" view, click the back arrow (←) at the top left.
- You can always find all your saved prompts under Vertex AI > Vertex AI Studio > Prompt Management. This is your library of AI knowledge!
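Finally, if you want to take the same A/B mindset outside the UI, here's a hedged sketch of scripting it: run a couple of prompt/temperature variants against one test message and compare the outputs side by side. The variant labels are mine; the strict zero-shot wording, the thermostat message, and the settings come from the comparison steps above, with placeholder project/region as in the earlier sketches.

```python
# Tiny "A/B test" loop over prompt variants and temperatures.
# Same assumptions as before: google-genai SDK, placeholder project/region.
from google import genai
from google.genai import types

client = genai.Client(vertexai=True, project="your-project-id", location="europe-west9")

TEST_MESSAGE = (
    'Customer Message: "My smart thermostat keeps losing connection. '
    "I've reset my router multiple times.\""
)

STRICT_ZERO_SHOT = (
    "You are a highly efficient customer service message classifier.\n"
    "Your only task is to identify the single most relevant category for the given customer message.\n"
    "Choose ONLY from the following categories: Technical Support, Billing Inquiry, "
    "Order Status, Returns & Exchanges, General Inquiry.\n"
    "Do NOT add any other text, explanations, or punctuation. Just the category name.\n\n"
    + TEST_MESSAGE
)

# (label, prompt text, temperature, output token limit) - labels are illustrative.
VARIANTS = [
    ("strict zero-shot, temperature 0.0", STRICT_ZERO_SHOT, 0.0, 64),
    ("strict zero-shot, temperature 0.5", STRICT_ZERO_SHOT, 0.5, 64),
]

for label, prompt, temperature, max_tokens in VARIANTS:
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=prompt,
        config=types.GenerateContentConfig(
            temperature=temperature,
            max_output_tokens=max_tokens,
            thinking_config=types.ThinkingConfig(thinking_budget=0),  # assumption, as in the earlier sketches
        ),
    )
    print(f"[{label}] -> {(response.text or '').strip()}")
```

It's the same idea as the Compare panes, just in a form you can drop into a notebook or a quick regression check later.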

What's Next? Multimodality & Beyond!
You've just mastered some crucial Prompt Engineering techniques! Understanding few-shot prompting and how to effectively use Vertex AI Studio's compare feature will dramatically improve your ability to build reliable AI applications.
Next up, we're going to dive into one of Gemini's most exciting capabilities: Multimodality! Imagine giving Gemini an image of a product or a customer's handwritten note, and asking it to extract information. Yes, it can do that, and we'll see how in our next post.
Got any questions about prompt engineering or how you're seeing it apply in your own work? Share your thoughts below!
See you in the next one, where we bring images into the AI conversation!