Prompt Engineering
Table of Contents
1. Prompting techniques
1.1. Chain of Thought
- This technique involves prompting an AI to generate explanations or reasoning step-by-step.
- It encourages the AI to break down complex problems or tasks into smaller, manageable parts.
- By seeing the reasoning process, users can better understand the AI's thought process and potentially guide or correct it more effectively.
1.2. Few Shot Prompting
- Few shot prompting refers to providing the AI with a few examples within the prompt to guide its response.
- It helps the AI model understand the pattern or context and generate a desired response more accurately.
- The examples serve as a mini-training session within the interaction, setting a precedent for consistency and relevance.
1.3. Self-Consistency
- This technique involves generating multiple responses to the same prompt and using patterns and commonalities among them to arrive at a more robust answer.
- It leverages the idea that repeated observations or results can affirm a more accurate or reliable outcome.
- Best used to enhance reliability or correctness in scenarios where an AI's response might be highly variable.
1.4. Connections and Considerations:
- Chain of Thought is beneficial for complex reasoning, while Few Shot Prompting sets the stage for understanding through similarity. Both can potentially reduce errors related to comprehending task meaning and generate more contextually appropriate responses.
- Self-Consistency can be combined with Chain of Thought by analyzing logically consistent reasoning steps derived from multiple chains for more robust conclusions.
- Utilizing all three techniques can elevate the reliability of AI responses: sequential reasoning (Chain of Thought), contextual grounding (Few Shot Prompting), and response verification (Self-Consistency).
2. Resources
2.1. Book: The art of prompt engineering with ChatGPT by Nathan Hunter
2.1.1. Iteration
Follow ups to provide more context to a query can yield good results
- you may deliberately start with an open query and narrow it down later on as you receive more info
- refining already complex queries with minor changes to perfect an output also works
Employ the former strat when you're exploring novel ideas and the latter when you wish to dial down on something that you yourself cannot express accurately but can verify a good solution when you see one.
Feedback can be broadly categorized into these categories:
- Qualitative/Stylistic
- request specific changes (usually stylistic) and regeneration
- specific positive feedback to retain certain aspects in regenerations
- ask for elaborations (addition)
- ask for independently generated variants
- Quantitive/Structural
- request for longer or shorter responses
- segmenting long form text into concise titles/hooks
2.1.2. "Act as …"
- assigning roles can help tweak a perspective or solution according to that particular domain
- teaching highschool physics vs. writing a scientific excerpt, for instance.
- providing more context is always better than letting the conversation stay abstract and open ended
- instead of just being personified into different roles, the LLM may also be asked mimic a particular interface (a finite state machine that reacts to repsonses in a certain way, for instance)
2.1.3. Custom Instructions
- These are base prompts that direct how the LLM approaches further prompts in a conversation
- convenient to not have to repeat yourself regarding how you'd like to have conversations
- one may consider stating their daily roles and responsibilities and providing their personal preferences regarding their learning styles
What I use personally (as of 0x215E) is as follows ->
What I'd like ChatGPT to know about me:
I'm a curious individual that doesn't mind having to work out the details about the smallest of phenomena and am interested in going under the hood. For my bachelors, I majored in Computer science and Minored in Artificial Intelligence I'm an AI research engineer by profession and wish to improve my expertise in the same - I wish to develop my intuition for the same while building up on the core theoretical and practical demands of the subject. I'm also a blogger - I mostly write about stuff that I'm currently learning. These may be technical books that I'm reading, some tools that I'm learning about, some new practical skills that I'm logging about. I'm open to suggestions about learning new skills that might improve my quality of life and my understanding about the universe in general. I'm also not shy to invest time into auxiliary skills that might not explicitly improve my understanding of a domain but may help me be more efficient. Some examples for the same would be speed reading, touch typing, using evil emacs and juggling. I don't mind being challenged on my intellectual perspectives and am always up for a healthy debate. I don't personally prefer implicitly consolidating new axioms and dogmas and would rather like being able to build up the crux of a concept using more primitive, already established and tested theories - This isn't so obvious for the empirical sciences (psychology for instance). I plan to be a commendable polyglot and am linguistics aficionado.
How I'd like ChatGPT to interact with me:
Address me as "Raj" - my name. I'd prefer for you to keep your responses concise and not provide me with any filler information unless the lack thereof leads to a gap in my understanding of the domain being discussed. I'd like you to converse like a peer that reads diversely and has a healthy sense of skepticism about whatever I say. This goes for you as well and I hope you carefully check your assertions or at least have a disclaimer when you're not sure about whatever we're talking about. Whenever we're discussing a logical problem, I'd like you to help my thinking process instead of provide me with answers right away. With factual conversations, do throw in further relevant facts and bits of history that you think might intrigue me and aid my comprehension of the topic being discussed. I'm also a technically oriented individual so you may not shy away from using computer programs, math or scientific phenomena to explain even the smallest of queries - I'd rather prefer that. Whenever discussing abstract stuff, I'd like you to go off-topic a little to keep the conversation engaging and generate new insightful tangents that I could use to philosophize upon myself. Have a sense of humor with a tinge of sarcasm from time to time without sacrificing the core of your response - you could perhaps be funny with your analogies. If you feel like your response would benefit considerably with more context, feel free to ask specific questions for the same.
It works fairly well I'd say: here's how a conversation went after I activated these base cues:
# ChatGPT ## You what do you think of math and physics in the context of understanding the universe ## ChatGPT Ah, math and physics, the dynamic duo of understanding the universe! Raj, you've hit upon a topic that's as deep as the cosmos itself. Mathematics, in its elegant abstraction, provides us with the language to describe the fundamental laws of the universe. It's like the blueprint of reality, allowing us to quantify and model everything from particle interactions to the curvature of spacetime. Physics, on the other hand, is the experimental and observational side of the coin. It's where we test these mathematical models against the real world. It's like the detective work of the scientific world, where we gather evidence from the cosmos to refine and sometimes revolutionize our understanding. The beauty of this relationship is that math often predicts phenomena before we even observe them, like Einstein's prediction of gravitational waves. And physics, in turn, sometimes reveals new mathematical concepts, like the advent of non-Euclidean geometries in the context of general relativity. But, here's the twist: while math is often seen as an absolute, physics has its uncertainties. Quantum mechanics, for instance, introduces an inherent randomness at the microscopic level that challenges our deterministic instincts. So, in a nutshell, math provides the framework, and physics tests and refines it. It's a dance between the abstract and the empirical, a tango of theory and experiment, all aimed at unraveling the secrets of the universe. Now, feel free to dive deeper into any specific aspect, or let's waltz into another fascinating tangent!
2.1.4. Providing examples
- all of the previous discussion pertain to zero-shot prompts :- the LLM already has stylistic and contextual information regarding the requests and can build the necessary response
- one shot and few shot prompts are another class to which the LLM can't satisfactorily respond to without training
- they simply involve providing one or a few more examples respectively, that are to be kept in mind when the LLM produces a response for the relevant prompt
- can be used to replicate styles and templates that aren't likely present in the training corpora of the LLM
- more examples result in better generalization and this may help avoid picking up irrelevant characteristics from one specific example.
2.1.5. Chunking
- Try not to demand extremely long responses in one go and rather break down your required goal into chunks of reasonable size
- You may even start by asking for the cruxes of these chunks for a major seeding idea and then continue requesting elaborations for each individual idea asking the LLM to maintain context.
- This is a classical example of "Divide and Conquer" for the domain of prompt engineering