Tips for your legal prompt engineering experiment

Unlike most software vendors, foundation model providers like OpenAI and Anthropic have not released official documentation on how users should engage with, or “prompt”, their models. So the internet has collectively stepped up to offer thousands of listicles with their “top 10 prompting tips”. Some of these are great, and some are a little less than great.

We want to offer some tips, but we do not purport to write a guide to the best prompts to use. This is not your standard listicle of “top 10 prompting tips”. Instead, we want to share a few stories from Steve Goldstein about his experience. Steve has devoted more time to experimenting with prompt engineering than almost anyone else we know in the legal vertical (other than, perhaps, Josh Kubicki, who wrote about his 100 days of experiments on his Brainyacts blog).

In this video chat, Steve talks about his experience of prompting generative large language models and what has worked well for his use cases. We cover topics like:

  1. how Steve interacted with GPT to write a 47-page guide to overcoming barriers to the adoption of technologies

  2. how someone might want to start trying “prompt engineering” with conversational-style prompts

  3. using examples and templates with few-shot prompting to demonstrate the expected output to GPT (sketched in code after this list)

  4. how to always give GPT a persona before asking it to perform a task

  5. solving complex problems by requiring the use of multi-step reasoning

  6. accidentally stumbling into chain-of-thought prompting (and also tree-of-thought prompting)

  7. how to use overly prescriptive prompting to achieve unexpectedly accurate and meaningful results

  8. using GPT to generate synthetic data for classical machine learning tasks (also sketched after this list)
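
To make a few of these ideas concrete, here is a minimal sketch that combines a persona, two few-shot examples demonstrating the expected output format, and a chain-of-thought instruction in a single prompt. It assumes the OpenAI Python client; the clause texts, labels, and model name are our own illustrative placeholders, not examples drawn from the interview.

```python
# Minimal sketch: persona + few-shot examples + chain-of-thought instruction.
# The clauses, labels, and model name are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

messages = [
    # Persona: tell the model who it is before giving it the task.
    {
        "role": "system",
        "content": (
            "You are a senior commercial contracts lawyer. "
            "For each clause, reason step by step, then give a one-line label."
        ),
    },
    # Few-shot examples: demonstrate the expected output format.
    {"role": "user", "content": "Clause: Either party may terminate on 30 days' written notice."},
    {
        "role": "assistant",
        "content": "Reasoning: permits unilateral exit with notice.\nLabel: Termination for convenience",
    },
    {"role": "user", "content": "Clause: Neither party is liable for indirect or consequential damages."},
    {
        "role": "assistant",
        "content": "Reasoning: excludes a category of damages.\nLabel: Limitation of liability",
    },
    # The real task, answered in the same format as the examples above.
    {"role": "user", "content": "Clause: This Agreement is governed by the laws of New York."},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```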

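And here is an equally hedged sketch of the synthetic data idea: asking GPT to generate labeled examples and then training a classical scikit-learn classifier on them. The prompt, labels, and model name are again assumptions for illustration; in practice you would review the generated rows (and keep the confidentiality caveat below in mind) before training on them.

```python
# Sketch: generate synthetic labeled data with GPT, then train a classical
# classifier on it. Prompt, labels, and model name are illustrative assumptions.
import csv
import io

from openai import OpenAI
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

client = OpenAI()

prompt = (
    "Generate 20 short contract clauses as CSV with two columns: text,label. "
    "Use only the labels 'termination' and 'confidentiality'. "
    "Do not include a header row or any other text."
)
raw = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

# Parse the CSV the model returned into texts and labels.
rows = [r for r in csv.reader(io.StringIO(raw)) if len(r) == 2]
texts = [r[0] for r in rows]
labels = [r[1].strip().lower() for r in rows]

# Train a classical classifier (TF-IDF features + logistic regression)
# on the synthetic examples.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(texts, labels)

print(classifier.predict(["This Agreement may be terminated by either party on notice."]))
```
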
Our favorite quote from Steve is probably his insight about how prompting is essentially the same as interacting with a child.

You have to show it the way on the more complex situations... Prompting is what you do with first-graders. Prompting is essentially like, “Here’s 5 minus 4... no, did you forget to move the 1 or something?” This is a prompt. I want you to do this, young person, and I want you to do it this way...
— Timestamp 13:20

Our conversation barely scratches the surface of the capabilities of generative large language models!

Given that prompt engineering is part art and part science, we think our most useful tip from this interview is to engage with prompt engineering as if you were conducting a scientific experiment… try and try again. Don’t be shy! As we always say, “Don’t worry, you can’t break anything that we can’t fix.” (Except for confidentiality and privacy… please be mindful of those risks.)
