Microsoft’s chief scientist: Step aside, prompt engineers—AI will start prompting you instead


Jaime Teevan is chief scientist and technical fellow at Microsoft.

Millions of users have turned to artificial intelligence (AI) to overcome the blank page problem—that feeling of uncertainty we all know when faced with an expanse of white that seems to demand perfection from the very first word.

However, a year into integrating large language models (LLMs) into our productivity tools, we seem to have created a new blank page to puzzle over: the prompt box.

Prompting is how people tell an AI tool what they want it to do. You’re not alone if you find a blank prompt box overwhelming. Even though we’ve been using natural language to communicate our whole lives, we need to talk to large language models differently than we talk to other people, often in unintuitive ways. What’s more, the best way to prompt constantly evolves as the underlying models improve, meaning that even if you arrive at a prompt strategy that works well today, it might not work tomorrow. And that’s assuming you already know what you want to use AI for—these new models are so capable that it’s hard to even imagine all of the things they can potentially do.


In my role as chief scientist at Microsoft, I’ve spent a lot of time trying to figure out how to help people prompt better, whether it’s by discovering successful practices that can get turned into training materials or making it easy for people to share good prompts. However, my long-term goal isn’t really to help people figure out how to get the most out of AI—it’s to use AI to help people get the most out of themselves. This means that rather than expecting everyone to ask AI exactly the right question, phrased exactly the right way, AI really needs to be prompting us.

My prediction for the next big AI-driven change: Our computers are going to start asking us questions! This will help us prompt better, articulate our thoughts more clearly, and even open new perspectives to explore.

AI as clarifier

The most obvious questions we’ll start to see from AI are follow-up questions. Nobody ever really provides enough context on their own when they first ask for something. If somebody were to ask you to create a slide deck from a document, for example, you wouldn’t just go away and create that deck; you’d first ask for some additional information. “Who’s your audience for this?” you might say, or “How much detail do you want to include?”

Similarly, AI shouldn’t just create a deck for you when you ask it to without first learning a little more about what you want.

The process of working with someone to arrive at a shared understanding is called “grounding.” In fact, a significant part of any conversation between people typically consists of grounding.

When an AI system pulls background information into the prompt in the process of generating a reply, we now use the term “grounding” for that as well. For example, if an AI tool does a web search to help answer a question you have, we say that the answer is grounded in the search results. But rather than always trying to figure out how to ground a conversation by finding what it needs on its own, AI should sometimes ground the old-fashioned way—by asking for clarification.
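To make that concrete, here is a minimal sketch of prompt-level grounding, assuming an OpenAI-style chat API. The model name, the grounded_answer helper, and the idea of passing in pre-fetched search snippets are illustrative assumptions, not a description of how Copilot or any particular product is built.

```python
# Illustrative sketch only: "grounding" a reply in search results by
# injecting them into the prompt before the model answers.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def grounded_answer(question: str, snippets: list[str]) -> str:
    """Answer `question` using only the supplied search-result snippets."""
    context = "\n\n".join(snippets)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer using only the provided search results. "
                    "If they are insufficient, say what is missing."
                ),
            },
            {
                "role": "user",
                "content": f"Search results:\n{context}\n\nQuestion: {question}",
            },
        ],
    )
    return response.choices[0].message.content
```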

Research suggests many of the prompts people use are too vague or broad to produce good replies, and that having the AI ask clarifying questions can improve outcomes. Of course, the questions an AI system asks might not look exactly like the questions a person would ask, because they will target the kind of information AI needs to produce high-quality replies. Generative AI does a much better job of producing ideas when given an example of what you are looking for. If you want, say, a list of ideas for a catchy title, a particularly valuable thing for AI to do would be to ask you for an example.
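One possible way to wire up that behavior, again sketched against an OpenAI-style chat API rather than any particular product, is simply to instruct the model to ask one clarifying question, such as a request for an example, before it generates anything:

```python
# Illustrative sketch: have the model ask a clarifying question
# (e.g., request an example title) before brainstorming ideas.
from openai import OpenAI

client = OpenAI()

messages = [
    {
        "role": "system",
        "content": (
            "Before brainstorming, ask the user exactly one clarifying "
            "question, such as a request for an example of what they like. "
            "Only generate ideas after they answer."
        ),
    },
    {"role": "user", "content": "Give me catchy title ideas for my article."},
]

# First turn: the model should reply with a clarifying question,
# for example asking the user to share a title they already find catchy.
first_turn = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(first_turn.choices[0].message.content)
```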

AI as provocateur

Questions can do more than just get at what we already know—they can also spur new ways of thinking. There’s been a lot of discussion recently about how AI can help people do things faster and more efficiently, and research consistently shows significant productivity gains when people use AI to do the things they tell it to, like writing a first draft or summarizing an article. However, AI can be even more useful when we use it not just to obey us but also to challenge us.

One of my favorite ways to use AI is to get a fresh perspective. When summarizing an article, I prompt, “What questions should I ask?” When writing a hard email, I ask how my message might land with different stakeholders. This is really me instructing the AI to ask me the kind of questions that can push me to refine my arguments and inspire me to come up with ideas I hadn’t even considered.

In fact, while writing this article, I asked, “What counterpoints might I have missed?” which led to the model asking me how I thought the limitations of AI might “affect the quality of interaction and the relevance of the questions it asks?” The cool thing about questions is they don’t need to be perfect to get us thinking—and this question made me think about how people need to be prepared to engage with AI critically and capture some of the most interesting opportunities. We have to be open to having our ideas challenged, and willing to challenge the AI in return. And if we get that right, we will create a two-way street of inquiry that drives us toward growth and discovery.

AI as a conversation catalyst

As AI starts asking us questions, this will not only make the conversations we have with our computers better, but it’s also likely to make our conversations with other people better. Imagine, for example, how much more interesting your meetings would be if you knew the most effective questions to discuss so as to make the best possible use of your time together.

In the early days when we first started building Copilot, we had very limited model access because it was brand new and the necessary infrastructure hadn’t yet been stood up. It was clear, however, that our products needed to figure out how they were going to integrate the new model capabilities as quickly as possible. So I set up a daily call with representatives from the various product teams, such as Word, Outlook, and Teams, where I’d screen-share the prompt box and we’d collaboratively try more and more creative prompts to see what the model might produce. These group sessions were just a hack to make progress, but they ended up being really valuable because each team was able to learn from the questions the other teams asked.

Now, everyone can have the model in the meeting with them. And this makes it possible for all of us to use AI to intentionally spark valuable conversations like the ones we had when first playing around with it. Next time you’re in a Teams meeting, try out one of the pre-populated suggested prompts like, “Suggest follow-up questions,” “What questions are unresolved?” or “List different perspectives by topic,” and see where it takes your conversation.

Ultimately, AI’s greatest impact will not be how it helps us do more of the same faster—but how it helps us work in new, different ways that make the most of our human intelligence and help us think more deeply.

Of course, as chief scientist here at Microsoft, my prediction that AI will start asking us questions isn’t something I plan to just passively observe. I get to help make it come true. And we should all be working to make the future with AI one we are excited to be a part of.


The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

This story was originally featured on Fortune.com