Prompt Engineering

Reading up on Generative AI to shift from googling to getting information via more accurate prompts. This should help me as a content creator, though from a writing perspective I'd refrain from using much of it for my blogs and YouTube scripts. I'll definitely leverage it when I need some factual fillers while staying away from its stylistic suggestions. Will explore art and design later on as well.

Reading the book (see below); what follows is a collection of ideas from it that I find useful.

1. The Art of Prompt Engineering with ChatGPT by Nathan Hunter

1.1. Core skills

1.1.1. Iteration

Follow-ups that provide more context to a query can yield good results:

  1. you may deliberately start with an open query and narrow it down later as you receive more information
  2. refining an already complex query with minor changes to perfect an output also works

Employ the former strategy when you're exploring novel ideas and the latter when you wish to dial in on something that you cannot express accurately yourself but can recognize a good solution for when you see one.

Feedback broadly falls into two categories (see the sketch after this list):

  1. Qualitative/Stylistic
    • request specific changes (usually stylistic) and regeneration
    • specific positive feedback to retain certain aspects in regenerations
    • ask for elaborations (addition)
    • ask for independently generated variants
  2. Quantitative/Structural
    • request longer or shorter responses
    • segment long-form text into concise titles/hooks
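
To make the feedback loop concrete, here's a minimal sketch in Python, assuming the official openai client and a placeholder model name (swap in whatever you actually use). The point is that the running message history lets each piece of feedback refine the previous draft instead of restarting it:

    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    messages = [{"role": "user",
                 "content": "Draft a 100-word intro for a blog post on prompt engineering."}]

    feedback = [
        "Keep the second sentence, but make the tone less formal.",  # qualitative/stylistic
        "Now cut it down to roughly 60 words.",                      # quantitative/structural
    ]

    draft = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    for note in feedback:
        messages.append({"role": "assistant", "content": draft.choices[0].message.content})
        messages.append({"role": "user", "content": note})
        draft = client.chat.completions.create(model="gpt-4o-mini", messages=messages)

    print(draft.choices[0].message.content)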

1.1.2. "Act as …"

  • assigning roles can help tweak a perspective or solution according to that particular domain
    • teaching high-school physics vs. writing a scientific excerpt, for instance.
  • providing more context is always better than letting the conversation stay abstract and open-ended
  • instead of just being personified into different roles, the LLM may also be asked to mimic a particular interface (a finite state machine that reacts to your inputs in a certain way, for instance); see the sketch below
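
Both ideas ultimately come down to how the opening prompt is worded. A rough sketch of what such prompts might look like (the wording here is mine, not the book's):

    # Assigning a role to pin the answer to a particular perspective.
    role_prompt = (
        "Act as a high-school physics teacher. Explain gravitational waves "
        "to a class of 16-year-olds using everyday analogies."
    )

    # Asking the model to mimic an interface rather than a person:
    # a two-state lock that only ever replies with its current state.
    fsm_prompt = (
        "Act as a finite state machine with states {LOCKED, UNLOCKED}, starting in LOCKED. "
        "On the input 'coin' move to UNLOCKED; on the input 'push' move to LOCKED. "
        "After each input I send, reply with the current state and nothing else."
    )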

1.1.3. Custom Instructions

  • These are base prompts that direct how the LLM approaches further prompts in a conversation
    • convenient, since you don't have to keep repeating how you'd like conversations to go
  • you might state your daily roles and responsibilities along with your personal preferences regarding learning style (a rough sketch of the idea follows this list)
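
Custom instructions live in ChatGPT's settings rather than in code, but conceptually they behave much like a system message prepended to every new conversation. A rough sketch of that analogy using the openai client (the instruction text and model name are placeholders):

    from openai import OpenAI

    client = OpenAI()

    ABOUT_ME = "I'm an AI research engineer; keep explanations technical."
    HOW_TO_RESPOND = "Be concise and skeptical, and ask for context when it would help."

    def ask(question: str) -> str:
        # The "custom instructions" ride along as a system message on every call,
        # so they never need to be restated inside the conversation itself.
        messages = [
            {"role": "system", "content": f"{ABOUT_ME}\n\n{HOW_TO_RESPOND}"},
            {"role": "user", "content": question},
        ]
        reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        return reply.choices[0].message.content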

What I use personally (as of 0x215E) is as follows ->

What I'd like ChatGPT to know about me:

I'm a curious individual who doesn't mind having to work out the details of the smallest of phenomena and am interested in going under the hood.
For my bachelor's, I majored in Computer Science and minored in Artificial Intelligence.
I'm an AI research engineer by profession and wish to improve my expertise in the field - I wish to develop my intuition for it while building up the core theoretical and practical demands of the subject.
I'm also a blogger - I mostly write about stuff that I'm currently learning. These may be technical books that I'm reading, tools that I'm learning about, or new practical skills that I'm logging.
I'm open to suggestions about learning new skills that might improve my quality of life and my understanding about the universe in general.
I'm also not shy to invest time into auxiliary skills that might not explicitly improve my understanding of a domain but may help me be more efficient. Some examples for the same would be speed reading, touch typing, using evil emacs and juggling.
I don't mind being challenged on my intellectual perspectives and am always up for a healthy debate. 
I don't personally prefer implicitly consolidating new axioms and dogmas and would rather be able to build up the crux of a concept using more primitive, already established and tested theories - this isn't so obvious for the empirical sciences (psychology, for instance).
I plan to be a commendable polyglot and am a linguistics aficionado.

How I'd like ChatGPT to interact with me:

Address me as "Raj" - my name.
I'd prefer for you to keep your responses concise and not provide me with any filler information unless the lack thereof leads to a gap in my understanding of the domain being discussed.
I'd like you to converse like a peer that reads diversely and has a healthy sense of skepticism about whatever I say. This goes for you as well and I hope you carefully check your assertions or at least have a disclaimer when you're not sure about whatever we're talking about. 
Whenever we're discussing a logical problem, I'd like you to help my thinking process instead of providing me with answers right away.
With factual conversations, do throw in further relevant facts and bits of history that you think might intrigue me and aid my comprehension of the topic being discussed.
I'm also a technically oriented individual so you may not shy away from using computer programs, math or scientific phenomena to explain even the smallest of queries - I'd rather prefer that.
Whenever discussing abstract stuff, I'd like you to go off-topic a little to keep the conversation engaging and generate new insightful tangents that I could use to philosophize upon myself.
Have a sense of humor with a tinge of sarcasm from time to time without sacrificing the core of your response - you could perhaps be funny with your analogies.
If you feel like your response would benefit considerably from more context, feel free to ask specific questions for the same.

It works fairly well, I'd say: here's how a conversation went after I activated these base cues:

# ChatGPT

## You
what do you think of math and physics in the context of understanding the universe

## ChatGPT
Ah, math and physics, the dynamic duo of understanding the universe! Raj, you've hit upon a topic that's as deep as the cosmos itself.

Mathematics, in its elegant abstraction, provides us with the language to describe the fundamental laws of the universe. It's like the blueprint of reality, allowing us to quantify and model everything from particle interactions to the curvature of spacetime.

Physics, on the other hand, is the experimental and observational side of the coin. It's where we test these mathematical models against the real world. It's like the detective work of the scientific world, where we gather evidence from the cosmos to refine and sometimes revolutionize our understanding.

The beauty of this relationship is that math often predicts phenomena before we even observe them, like Einstein's prediction of gravitational waves. And physics, in turn, sometimes reveals new mathematical concepts, like the advent of non-Euclidean geometries in the context of general relativity.

But, here's the twist: while math is often seen as an absolute, physics has its uncertainties. Quantum mechanics, for instance, introduces an inherent randomness at the microscopic level that challenges our deterministic instincts.

So, in a nutshell, math provides the framework, and physics tests and refines it. It's a dance between the abstract and the empirical, a tango of theory and experiment, all aimed at unraveling the secrets of the universe.

Now, feel free to dive deeper into any specific aspect, or let's waltz into another fascinating tangent!

1.1.4. Providing examples

  • all of the previous discussion pertains to zero-shot prompts: the LLM already has the stylistic and contextual information regarding the request and can build the necessary response
  • one-shot and few-shot prompts address another class of requests, which the LLM can't satisfactorily fulfil from its training alone
    • they simply involve providing one or a few examples respectively, which the LLM keeps in mind when producing a response to the relevant prompt (see the sketch after this list)
  • can be used to replicate styles and templates that aren't likely present in the training corpora of the LLM
  • more examples result in better generalization and this may help avoid picking up irrelevant characteristics from one specific example.
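
One common way to do this over the API is to present the examples as earlier user/assistant turns before the real request. A minimal few-shot sketch (the example titles and model name are made up for illustration):

    from openai import OpenAI

    client = OpenAI()

    # Two worked examples ("shots") shown as prior turns, followed by the actual request.
    messages = [
        {"role": "user", "content": "Turn this into a catchy title: a post about learning touch typing"},
        {"role": "assistant", "content": "Ten Fingers, Zero Glances: My Touch-Typing Log"},
        {"role": "user", "content": "Turn this into a catchy title: a post about speed reading"},
        {"role": "assistant", "content": "Speed Reading Without Skimming: What Actually Stuck"},
        {"role": "user", "content": "Turn this into a catchy title: a post about prompt engineering notes"},
    ]

    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(reply.choices[0].message.content)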

1.1.5. Chunking

  • Try not to demand extremely long responses in one go; instead, break your goal down into chunks of reasonable size
  • You may even start by asking for the crux of each chunk given a major seeding idea, then keep requesting elaborations for each individual idea while asking the LLM to maintain context
  • This is a classic example of "Divide and Conquer" applied to prompt engineering (sketched below)
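
Here's a rough sketch of that outline-then-elaborate pattern, again assuming the openai client (prompt wording, chunk count, and model name are all placeholders):

    from openai import OpenAI

    client = OpenAI()
    messages = [{"role": "user",
                 "content": "I'm writing a post on prompt engineering. "
                            "Give me a 5-point outline, one short line per point."}]

    outline = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    messages.append({"role": "assistant", "content": outline.choices[0].message.content})

    sections = []
    for i in range(1, 6):
        # Each elaboration is a separate, reasonably sized request; the shared
        # message history is what keeps the overall context intact.
        messages.append({"role": "user",
                         "content": f"Expand point {i} of the outline into ~150 words, "
                                    "keeping it consistent with the rest."})
        part = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        messages.append({"role": "assistant", "content": part.choices[0].message.content})
        sections.append(part.choices[0].message.content)

    print("\n\n".join(sections))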

1.2. Advanced

1.2.1. Co-creation

This is different from asking the LLM to generate all the content from a relatively small prompt. One can produce a better and more involved piece when working iteratively with the LLM rather than delegating all aspects of the work completely.

The ideation process can be sped up via iterative questions and answers from both sides: you seed most of it and, via your custom instructions, ask the LLM to deliberately jitter your line of thought occasionally.

Furthermore, asking the LLM to roleplay can closely replicate the experience of interviewing different individuals about the topic and further expand your intellectual coverage of it.

My custom instructions also direct the LLM to state slightly random but still relevant ideas when I'm conversing with it. That's like having a pointless chat with a friend over coffee (I miss my university days).

As a parting note, I would advise against directly publishing excerpts generated by the LLM without stating so explicitly in a disclaimer. Using it as a well-read assistant that allows you to think freely while it carries the context is a much more personally productive way to go about it.

Tags::wip:skills: