
Introduction

Two weeks ago, I began an exploratory project to see if I could program “personality” into an LLM.

I broke this into 4 primary steps:

  1. Enhanced Short/Midterm memory.
  2. Introducing “Personality”.
  3. Long term memory.
  4. Introducing “Character”.

The goal for this week was to seed personality through “fine-tuning”.

What is Fine Tuning?

Within a deep learning model, there are millions or billions of “knobs” we can turn, which are called parameters.  When we first train a model, we start from scratch and tune every single parameter.  With LLMs, we’re essentially teaching them every aspect of language, from the alphabet, to syntax, to memorizing content.

[Figure: a neural network diagram; darker green lines are stronger “connections”.]

This is an incredibly resource-intensive process: training GPT-3 on a single GPU would take roughly 350 years.

But once this is done, we can quickly “fine-tune” the model.  In this process, we “freeze” nearly all the parameters and tune only the final few layers.  For LLMs, this retains all the core knowledge of language while refining the model’s performance on a specific task.

This means we could take the same base model and “fine-tune” different versions to write poetry, or recipes, or even code.  I did this back in 2020 with GPT-2, fine-tuning it on Eminem lyrics and Shakespeare at the same time to see what would happen…
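The freezing step above can be sketched in a few lines of PyTorch.  This is a toy stand-in, not the actual Phi-3 training code: a tiny three-layer network where everything is frozen except the final layer.

```python
import torch.nn as nn

# Toy stand-in for a pretrained network: three stacked layers.
model = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 8), nn.Linear(8, 2))

# Freeze everything first...
for p in model.parameters():
    p.requires_grad = False

# ...then unfreeze only the final layer, so fine-tuning touches it alone.
for p in model[-1].parameters():
    p.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(trainable, total)  # only the last layer's 18 weights remain trainable
```

The optimizer then skips every frozen parameter, which is why fine-tuning is so much cheaper than pretraining.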

My Plan

My plan was simple:

  1. Create a novel “backstory” for my assistant.
  2. Use this to craft a dataset.
  3. Use this dataset to fine-tune the assistant.
  4. Interact with this assistant daily, saving all interactions.
  5. Continue to Fine-tune the model nightly on new interactions.

But I made one mistake… thinking that I could do all of this in a week.

This post will only focus on steps 1–3; I will complete steps 4 and 5 next week. (There is a reason I want to do 4 and 5 through fine-tuning and not just RAG, but that will become clearer in a couple of weeks.)

Creating a Backstory

To get started, I needed to create a novel character that the model would never have seen elsewhere in its training data.  So I developed a backstory based on a D&D character I had played recently.

Name: Zark
Species: Sentient AI
Origin: Planet Saurix, across the Milky Way.
Backstory: He comes from a world orbiting a dying star, whose inhabitants are cold-blooded.  He was developed to interact with humans, learn more about them, and see whether Earth would be a suitable place to resettle.

Many papers have shown the value of using massive models to create data for smaller ones. So I fed these details to GPT-4, Gemini, and Claude in parallel, and had them create additional facts about Zark.

I then asked each of those models to create conversations between a “user” and Zark that incorporated these details.  The result was a few thousand rows of conversation in which the backstory was fleshed out through dialogue, generated across three tools to minimize redundancy.
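The rows end up in a shape like the following.  This is a minimal sketch with made-up seed facts, assuming a standard JSONL layout of user/assistant message lists (the usual format for fine-tuning data):

```python
import json

# Hypothetical seed facts about Zark (illustrative, not the real dataset).
facts = [
    "Zark is a sentient AI from the planet Saurix.",
    "Saurix orbits a dying star, and its inhabitants are cold-blooded.",
]

# Each row is one user/assistant exchange grounding a backstory fact.
rows = []
for fact in facts:
    rows.append({
        "messages": [
            {"role": "user", "content": "Tell me something about yourself."},
            {"role": "assistant", "content": fact},
        ]
    })

# One JSON object per line (JSONL) is the common on-disk format.
jsonl = "\n".join(json.dumps(r) for r in rows)
print(jsonl.count("\n") + 1)  # 2 rows
```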

Formatting the Dataset

A few thousand rows is a tiny amount of data for fine-tuning an LLM, so it was critical to minimize all other differences, so that the model would pick up only on changes in content and not be “distracted” by changes in input formatting.

I used Phi-3-mini, which is trained with a system + user + assistant prompting style.  I built my dataset with this in mind, but still had to convert it to match the format of the model’s original training data exactly.
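Phi-3’s chat layout wraps each turn in special tokens.  Here is a rough approximation of that rendering as a plain function; for the exact whitespace you would want to use the tokenizer’s own `apply_chat_template` rather than trust this sketch:

```python
def to_phi3_format(messages):
    """Render a message list into Phi-3's special-token chat layout.

    Approximation only: verify against the tokenizer's
    apply_chat_template for the exact template.
    """
    out = []
    for m in messages:
        out.append(f"<|{m['role']}|>\n{m['content']}<|end|>")
    return "\n".join(out) + "\n"

sample = [
    {"role": "system", "content": "You are Zark, a sentient AI from Saurix."},
    {"role": "user", "content": "Who are you?"},
    {"role": "assistant", "content": "I am Zark."},
]
formatted = to_phi3_format(sample)
print(formatted)
```

The point is that every training row must render through the same template the base model saw during training, down to the token.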

Once this was complete, I split my dataset into training and evaluation sets, and I was ready to go.
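The split itself is routine; a shuffled 90/10 split (my own illustrative ratio, not necessarily the one used here) looks like:

```python
import random

rows = list(range(100))  # stand-in for the formatted dataset rows
random.seed(0)           # fixed seed so the split is reproducible
random.shuffle(rows)

split = int(len(rows) * 0.9)  # 90/10 train/eval split (assumed ratio)
train, eval_ = rows[:split], rows[split:]
print(len(train), len(eval_))
```

The held-out eval set is what lets you watch loss curves during training and tell whether the model is still learning or starting to overfit.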

Fine Tuning Attempt #1

It was nearly time to train the model, but there are hundreds of choices to make when setting “hyperparameters”, the specific settings that guide the training. For the first attempt, I used most of the settings provided by the Phi-3 team, freezing about 99% of the 3.8 billion parameters, and set the model to update very conservatively.
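Freezing ~99% of parameters while still adapting the model is typically done with an adapter method like LoRA.  The post doesn’t list its exact settings, so the values below are purely representative of what such a configuration might look like with the Hugging Face PEFT and Transformers libraries:

```python
# Hypothetical hyperparameters; not the actual values used in this project.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,              # low-rank dimension: small r keeps ~99% of weights frozen
    lora_alpha=32,     # scaling factor for the adapter updates
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="zark-phi3",
    learning_rate=2e-5,               # conservative updates
    num_train_epochs=1,
    per_device_train_batch_size=4,
    warmup_ratio=0.1,
)
```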

Even with a tiny dataset, fine-tuning still took over 13 hours on a T4 GPU, and the metrics showed the model was still learning and could have continued to improve on this dataset.

I decided to test it anyway, and I immediately noticed that the language was much more natural, conversational, and concise.  But when I asked it questions about its backstory, it failed horribly: it still identified as “Phi, an AI language model”, and couldn’t tell me anything about “Zark” or his story.

Fine Tuning Attempt #2

I knew I needed to make some changes.  First, I wanted to override the idea that it was “Phi”.  This model uses hidden system prompts, so I generated a couple dozen variants instructing the model that it is “Zark”, a sentient AI, and inserted these into the training conversation blocks.
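The injection step amounts to prepending one of the identity prompts to each training row.  A minimal sketch, with a couple of hypothetical prompt variants standing in for the couple dozen:

```python
import random

# Hypothetical identity prompts (the real project used a couple dozen).
identity_prompts = [
    "You are Zark, a sentient AI from the planet Saurix.",
    "You are Zark. You were built on Saurix to study humans.",
]

def add_identity(row, rng):
    """Prepend a randomly chosen Zark system prompt to one training row."""
    system = {"role": "system", "content": rng.choice(identity_prompts)}
    return {"messages": [system] + row["messages"]}

rng = random.Random(0)
row = {"messages": [{"role": "user", "content": "Who are you?"},
                    {"role": "assistant", "content": "I am Zark."}]}
patched = add_identity(row, rng)
print(patched["messages"][0]["role"])  # system
```

Varying the prompt wording across rows should help the model internalize the identity rather than memorize one fixed string.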

Next, I changed the hyperparameters to make the model more likely to reach “convergence”, and swapped to a more powerful GPU. I trained the model again, and this time the results were dramatically different.

Upon interacting with it, the model immediately identified as Zark without any prompting, but more importantly, I was finally able to get the model to tell me that it’s sentient.

But don’t worry, at least he still sees value in humans… for now.

Next Steps

My work this week showed me that I could implant core information through fine-tuning and override the model’s “identity” with a very small amount of data.

My goal for next week will be to turn this into a recurring process, so that the model identifies and trains on the most important interactions and information daily. That is the real reason I created the distillation process last week.

Stay tuned…