August 16 Lab Room 4 Prerequisites for Rexy

Elizabeth introduced Rexy, the FFA internal prompting tool, and discussed the importance of understanding tokens for advanced plots and calculations. She also explained the API pricing for GPT-4 and the timeline for Project Rexy alpha and beta testing. Elizabeth emphasized the importance of prompt construction and understanding tokens for successful writing with AI.

Key Points:

(1:33) - Elizabeth introduces Rexy, the FFA internal prompting tool from Future Fiction Academy Story Technologies, and explains the taxonomy they are building for it
(4:00) - Elizabeth discusses tokens and limitations of AI, including how AI does not think in words and the importance of understanding token numbers for advanced plots and calculations
(7:59) - Elizabeth demonstrates how different forms of "no" have different token numbers
(14:54) - Elizabeth mentions that Rexy will be paid for by the token and explains what that means
(22:34) - Pricing for GPT-4 includes an input cost of $0.03 per 1,000 tokens and an output cost of $0.06 per 1,000 tokens, with a typical input to output ratio of 4:1
(24:28) - A rough estimate for writing a 50,000 word book with AI is 800,000 tokens for input and 200,000 tokens for output, costing approximately $36 with GPT-4
(27:39) - GPT-3.5 has a significantly lower output cost than GPT-4, with 1000 tokens costing $0.004 instead of $0.06, making it a more cost-effective option for development and testing before using GPT-4
(51:12) - Timeline for Project Rexy alpha and beta testing, and whether it will be included in tuition for Future Fiction Academy
(53:31) - The AI's token selection process is explained, including how it assigns a likelihood percentage to each candidate token and rejects tokens that don't make sense in context
(57:59) - Elizabeth explains how to access the OpenAI playground and different models, as well as limitations of certain models and tools like Poe.com
(1:04:16) - Square brackets are recommended as the most universal option for prompt construction
(1:06:30) - Elizabeth emphasizes the importance of prompt construction for working with Rexy
(1:15:04) - Elizabeth discusses how to plan and construct a prompt for a specific task, using summarizing a book as an example
(1:18:58) - Elizabeth breaks down the steps a human would take to summarize a book and translates it into steps for an AI to follow, including the use of brackets to define boxes of information for Rexy to parse
(1:21:31) - Elizabeth suggests running the prompt as a two-step process in chat mode for better results and explains the difference between chat and API calls in advanced prompt engineering.
(1:32:00) - Elizabeth emphasizes the importance of understanding tokens, breaking tasks down into individual prompts, and practicing with different models for successful writing with AI

 

We are going to be doing prompt basics, and I'm going to tell everyone that some of this is going to be a repeat for you, especially for my regulars. However, I'm constantly reminding myself of these things, and if I'm constantly reminding myself of them, it's a good thing to review and make sure you're solid on these details before Rexy comes.


Quick little thing about the timeline of Rexy. For those who don't know, Rexy is the FFA internal prompting tool: Future Fiction Academy Story Technologies. And what we're doing there is basically creating an on-the-fly prompting tool where you can go, okay, I need to do this, I need to do that.


Actually, it's just easier if I share my screen and give you guys a little bit of a preview of it. You can now see that I managed to turn my mail sounds off; I just had to Google it. So this is Rexy, and just so you can see a little bit of the idea here: we're working on a taxonomy, for example, so that all of your ideas and inspiration prompting is right here.


Your world building prompting is in the 200s. The characters would be in the 300s for generating a character list, and there will eventually be multiple categories here so that... hold on a minute, let me get everyone to just mute. You're welcome to unmute if you are asking questions, but I just wanted to mute everyone so there's no background noise.


Characters, plotting, writing: we have more numbers coming, so that basically every function, from marketing and publishing to writing to developmental editing and copy editing, is going to have a set 100s code, and the prompts underneath that will have numbers. And my hope is that by building this taxonomy,


It's going to help us all because how many of you have been overwhelmed with the prompts you have and they're just all over the place. Some are by date, some are by task, some are by project, and they're just literally scattered about your Notion pages and your work documents or your Evernotes. Yes. How many of you like this taxonomy idea that you'll just know okay if it's a prompt for characters it's in the 300s, and we'll make this taxonomy.


My Notion is a wasteland. Oh, you should see the Future Fiction Academy Notion. We have prompts upon prompts, and trying to find a prompt, you're just searching, hoping you remember a keyword. There is that. Okay, so this is what Rexy is and this is what we're building.


Yeah, we completely identify with the mess. Okay, so the other thing we're going to do today. Let's go ahead and start off with tokens and limitations. And forgive me, some of you have already seen this before, but that's okay. The first thing we're going to talk about is the fact that the AI does not


think in words. It doesn't process words; words are an enigma to it. And we can see that because how many of you have asked the AI to write me 5,000 words, and very clearly it does not understand what that means, because it'll go, here's 4,261 words, and it's actually 2,300. Has anyone noticed that?


And that's probably a failure of its ability to do math and count, and a whole host of other things. Yeah. So it doesn't understand words. Unfortunately, it's also not self-aware. We've tested prompts saying do 2,000 tokens, and it doesn't have an awareness about tokens either.


Tokens are also a mystery to it. So this is the OpenAI tokenizer; the website is open, and everything that you type in has tokens. If we do the phrase "We have no bananas." right here, we can see that it's going to break that down into five tokens: We, have, no, bananas, and a period. All of your punctuation has a token as well.


13 is the number for the period, and you can see the numbers for the others back here. Now, why on earth would we want to turn words into tokens and then tokens into numbers? Because if you have numbers, you can start running advanced plots, mathematical equations, calculations, all the calculus and stuff we had to take in high school if you were advanced but never actually use in day-to-day life unless you became an engineer.


But basically they do this thing called vector search, and I did explain it on my live; I can explain it again, but we'll save that for probably another time. The whole point is that... that's funny, it's actually a song, "We have no bananas today." If you've ever seen Sabrina, it's an older song.


But the tokens here, once we turn them into numbers, now I can ask, what's the frequency of token number 13 in a piece of writing? It would be very high, right? Because we end every sentence with a period. And if we end every sentence with a period, the AI doesn't need to know that's a period.


It just needs to know that the number 13 as a token comes at the end of a run of tokens that forms a coherent thought, and that it's always at the end. And there's going to be a high frequency of 13 all throughout a piece of writing, because we use periods all the time. If you're curious what token zero is, I did figure it out: it's an exclamation point.


I forget what token one is. You have to play with it, and if you guys can find it again, let me know, but it's one of the weird punctuation marks. The asterisk? No, that's number nine. Plus sign? No, that's 10. Equal sign? Nope, that's 28. So see what I mean? We did find it one night because we were adamant about finding which one was token one.


It's not those either. But you do notice a pattern with all the punctuation marks I'm trying, the plus sign, the equal sign. Oh, cool, so it changes the token as soon as you have two of them right together. You can start to see that the different punctuation marks are very close together and very low in the token numbers; they were like the first things they tokenized.


Now, another important concept to understand. Let me show you this sentence here. Yes, numbers have token numbers as well; that's a good question. So if I go one, two, three, four, they all have token numbers. I think that's so it doesn't get confused or something like that. And then this is also translated into, what is it,


hex or binary for the computer. We see a number here, but this is still to make it easier for human eyes; it's not necessarily how it's rendered on the back end. So yeah, it's very strange that they all have one. Apparently we have a space or something. So, "We have no bananas"


here as a sentence. I want to point out that the word "no" in this sense is token number 645. Can everyone see that? 1135, 423, 645. And this means we are out of bananas; there are no bananas to be had today. Actually, it's not correct English, but there: we have no bananas. And then if I say "You did the dishes, no?" with a question mark, does this "no" and that "no" mean the same thing?


No. And apologies, you guys have seen this illustration a couple of times, but I think it's very illuminating, because this is why the AI sometimes gets confused. It used 645 and 645, the same token number, but they have completely different meanings. It also changes, too.


If I do a capital "No", now it's 2949, a completely different token. If I do all caps "NO", now it's 15285, an even bigger token number. If I come over here to the text view, though, I can see that these are all different tokens right here, and it'll show you the color coding of how it broke it down into tokens.


Oh, the quotation mark is one. Thank you, Angie. We know what Angie's been doing for the last five minutes, testing all the different things. So let's see. Yep, number one is a quotation mark. That's cool, I like that. Okay. The other thing I wanted to show you is that the same word can have different tokenization as well.


We saw that a little bit with the word "no". I like to do it with the word "Wednesday", if I can spell it correctly. Although that shows something else: look at that. Because I misspelled the word, I got all kinds of funky tokens here. Okay, so that's the thing about misspellings in your prompts.


Remember how we used to think, oh, it's no big deal if I misspell it because it figures it out anyway? It does, for the most part, but it's also messing up your tokenization on the back end.


But I want to show you this. Let's do an uppercase Wednesday. We come over here, we've got one token: it's 27150. Push down, do a shift-return. Why is it 628 now? Okay, there we go, we had some ghost tokens in there; 198 is the hard return, the shift-and-return. Let's do wednesday all lowercase.


Wait a minute, I got two tokens. That's because a lowercase wednesday is broken up into two tokens. I don't know why; the only reason I can think of is that there are words that start off with "wed". For example, if I do "wedded"... okay, it didn't tokenize it that way either.


So it's not really tokenized by phonics, necessarily, or by syllable, or any rhyme or reason we linguistic people would use. It's just tokenized based on whatever component letters can be combined to make words. That's it. So we have wednesday, which is now in two sections, even though "nesday" is not a component of a word in the sense that there are other "nesdays" out there.
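For reference, you can poke at the same thing from code instead of the web page. This is a minimal sketch using OpenAI's tiktoken package (not something shown in the lab); the exact token IDs you get depend on which encoding you choose, so they won't necessarily match the numbers quoted above.

```python
# A minimal sketch: reproduce the tokenizer demo in code with tiktoken.
# Token IDs depend on the encoding, so they may differ from the lab's numbers.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # the encoding used by GPT-3.5/GPT-4

for text in ["We have no bananas.", "no", "No", "NO",
             "Wednesday", "wednesday", "WEDNESDAY"]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{text!r}: {len(ids)} token(s) {ids} -> {pieces}")
```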


And then finally, what I want to show you is what happens when we yell at it. How many of you, and I started this, have said, hey, when we yell at it, it seems to understand more? Who remembers when we did that experimentation back when I was working for Sudowrite, and I was sharing that if you yell at it in all caps at times, it keeps understanding better?


Now I have a theory as to why that is. We have one, two, three, four tokens. Okay? Yeah, back in the day, aka three months ago. Let's look at the token IDs here. It's an individual token ID each time for that last Wednesday. Now, here's what I think happens when we yell at it. We make it think harder.


And by think, remember, it doesn't actually think. We make it process and calculate a little bit differently. What I mean by that is that the first thing it's going to try to do when it's presented with this string of tokens is apply context to it. And my hunch is about the context for Wednesday all in caps. Think about the training data.


In what situations would you all capitalize Wednesday? Situations where you mean Wednesday and nothing else, right? You don't mean Thursday, you don't mean any day, you mean very specifically a Wednesday. And Wednesday may not be the right word to explain this about, but let's do the word romance, because that'll be a good one to do.


So here we have romance, and this is a good thing to notice: romance is two tokens. My hunch is that with words that have more than one token, sometimes it can be difficult for it to always figure out what they mean. But if I put it in all caps, we still only have two tokens, and I bet you they're very different token IDs, and they are.


So what I think happens is that the training data set would not have had all-caps words as frequently as lowercase words, which probably decreased the ambiguity around the use of those words in the training data. Does that make sense?


Does anyone else like this theory?


And the training data was a whole bunch of public domain content, and, we've now learned, possibly some copyrighted content, especially with BookCorpus, that the AI trained on. Any questions before we move on? No? Okay. The next thing I want to talk about is that when we work with Rexy, we will be paying by the token in most cases.


Some of you may not have any experience with paying by the token unless you already have your OpenAI account going. If you use tools like Sudowrite, it charges by words: you pay an amount of money and you get X amount of words.


Okay, I pay my $20 and I get 90,000 words. Places like Poe.com don't charge you by the words; they charge you by the number of messages. If you click the button 100 times in a month, that's your $20. Rexy is more similar to TypingMind as far as a program goes.


And it is meant to hook into your API key. Now, when you go to OpenAI, I want to explain something: both Anthropic and OpenAI have two separate interfaces. There's the chat interface, and then there's the API interface, or the playground interface as we sometimes call it, because OpenAI calls theirs the playground.


So Claude actually has one too. You all are used to this one here, which is Claude. I actually have an API key, so when I click my name, I can go to the API console. If you don't see this here, you don't have the API console. How do you get the API console on Claude? I would give you the link to apply, but


I don't know anybody that's actually getting through at all; there's such a backlog. So I did reach out to my contact there to find out how we can streamline the process for people. I don't know what the country restrictions will be. And that also involves signing contracts, terms and conditions contracts, not like giving away your firstborn son, but signing terms and conditions contracts with Anthropic.


That means you're only going to use it for good purposes and won't use it for nefarious, evil purposes, because Anthropic is very keen on their AI being harmless. But just so you guys can see, the back end of the console is very similar to their chat interface. The only difference is that here, for the Future Fiction Academy, I can actually pull up my API keys.


I can pull up the logs and the all-important usage and invoices. Each month we pay an invoice for how much we use it, which also means that our chat is not free; everybody else's chat is free, but ours is not. Now, OpenAI has the same thing. I went ahead and went to the usage page, because when you first access this, when you're signed into OpenAI, you're going to click your name here and then go down to Manage account.


I'm not going to click on that, because the very first screen that pops up shows the identity for my organization, and I think that's sensitive information; even though it's not my API key, it's still pretty sensitive, so I don't want to reveal that. But you just click Manage account and that's going to take you to your usage.


Your usage will show you, by day, what you spent on the API key. You can see that I've spent 75 cents on that day, 69 cents on this day, 46 cents on that day. It is possible to make it add up very quickly, depending on which models you use. So once you're using your own API key, you may find yourself going, ooh, I'm not going to use Claude 2 for everything.


Let me bump down to Claude 1.2 for a little bit, and then I'll run Claude 2, because the difference in the pricing on these different models is significant; it's orders of magnitude. I'll go ahead and show that next. So let me go to a Google Sheet.


Yeah, go to Google Sheets; we'll just go with a blank one for right now. All right, so let's figure out how you would calculate pricing for your API keys. So this is pricing for the API. Okay, any questions so far? I know for my new people this is probably very technical, but this is one of the most technical labs that we've made.


No questions? All right, we're going to press on. To find the pricing, let's search for OpenAI pricing. There's a pricing page, how wonderful: only pay for what you use. Now, this is completely different than ChatGPT. I don't want anyone to be like, I paid $20 for chat; that's a separate entity.


And you can tell that because when you come to the OpenAI platform to log in, you have a chat interface and the API interface. So when we're talking about the playground, when we're talking about paying by the token, we're talking about the API, not the chat product. Right now, chat is $20 a month for ChatGPT, and that gets you access to GPT-4 and some other goodies like Code Interpreter and Custom Instructions, depending on the country you're in.


Let's go back here to Pricing. Now, you'll see this interesting calculation here. The input and the output are two different prices, and what the heck does that mean? It means that your prompt, everything that you send to the API, is going to be at one cost, and everything that comes back from the API is going to be at a different cost.
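For reference, this is roughly what "input cost versus output cost" looks like from code (not part of the lab's demo): every API response reports how many tokens went in and how many came back, and each side is billed at its own rate. The sketch uses the 0.x-era openai Python client that was current when this lab was given; newer versions of the package use a different client object.

```python
# A minimal sketch, assuming the 0.x openai client and a placeholder key.
import openai

openai.api_key = "sk-..."  # your own API key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Describe a rose bed in two sentences."}],
)

usage = response["usage"]
print(usage["prompt_tokens"])      # tokens you sent in, billed at the input rate
print(usage["completion_tokens"])  # tokens it wrote back, billed at the higher output rate
```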


So this 8K context is what most people have access to. What this means is that there is an 8,000-token context window, and that has to include anything you send to it and anything that comes out of it. On top of that, the outputs have limits as well. How many of you have been frustrated that the AI will not write,


excuse me, that the AI will not write you 10,000 words back no matter how many times you ask it in one request? You say, hey, write me 10,000 words, and it just doesn't do it. It can't. Everything is capped at a multiple of either 2048 or 4096. GPT-4 has a cap of 4,096 tokens in its response.


That's the most it can reply back to you. If you pull out your handy-dandy calculator, take 4,096, and multiply it by 0.75, you'll find out that's roughly 3,000 words. It comes out to 3,072, but keep it approximate, because as we saw with the tokenizer, can we be 100% certain that one token equals one word?


No, not at all. Some words are multiple tokens; some words are one token. Plus you have punctuation in there too, and that's taking up tokens. So it's really about four characters to one token. If you want to find out your particular rate, take a sample of your writing, look at the character count, divide that by four, and you'll have a good idea of roughly how many tokens that piece of writing is.


Broad strokes, not accounting for historical writers like me; mine's actually not 75%, mine's even lower. To be safe, I use about 70 percent, so about 700 words per 1,000 tokens for me. Children's writers, on the other hand, are going to be higher than 75%, probably in that 80% range, because more of their words are one token versus somebody who's writing historical fiction.
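If it helps, here is that rule of thumb as a quick calculation. These ratios are the lab's approximations, not exact values; your own number depends on your genre, as described above.

```python
# Rough rules of thumb from this lab: ~4 characters per token,
# and ~0.75 words per token (lower for historical, higher for children's).
def estimate_tokens(text: str) -> int:
    return round(len(text) / 4)

def estimate_words(tokens: int, words_per_token: float = 0.75) -> float:
    return tokens * words_per_token

# GPT-4's 4,096-token response cap works out to roughly 3,000 words:
print(estimate_words(4096))        # 3072.0
print(estimate_words(4096, 0.70))  # ~2867 for a wordier historical style
```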


Over here we can talk about the pricing, so we'll put it right here for a second: pricing. We have GPT-4, which everyone should have in their playground. If you don't have it in your playground, contact their support. It has an 8,000-token context window.


We shorten that to 8K when we're talking about it. And you'll hear people say, oh, that's GPT 16K; we mean 3.5 Turbo 16K, which means it has a 16,000-token context window. The input cost is $0.03 per 1,000 tokens, and the output cost is $0.06 per 1,000 tokens. Thankfully, I have written many books and done many analyses and things like that.


And I can tell you right now that when you are writing with AI, typically you're going to be in a three-to-one, no, a four-to-one ratio in terms of input to output. You can see this in the mega prompts we work on: your mega prompt may be 2,000 tokens long to give you 1,000 tokens back, or it could be 3,000 or 4,000 tokens long just to give you 1,000 tokens, but roughly it works out to about four to one.


So out of 1 million tokens, 800,000 would be input and 200,000 would be output in order to get a 50,000-word book. Those are just rough guesstimates, benchmarks that Karen B and I have been able to come up with from our own work with it. And Karen is an accountant, so she's good with numbers.


Very good with numbers. So let's think about what that cost is going to be. I know that my input, and apologies for anybody who's allergic to spreadsheets, I'll be done quickly, I promise, is 3 cents per 1,000 tokens, and my output is $0.06 per 1,000 tokens. Let's pretend it takes 1 million tokens to write a book.


I don't know if that's 1 million; I think it is. I like to put commas in my numbers. Okay, so 1 million tokens, and we'll break this down as 800,000 tokens for input and 200,000 tokens for output, just like this; I'm just making the boxes. What I can do to figure out my cost is take this 800,000.


I'm going to divide it by 1,000, because that's the unit they charge us in. That means I have 800 times that they would bill me three cents. Same thing for the next one: it'll be the 200,000 divided by a thousand. So there we go; this is the number of 1K-token units. All right, and then I just do the cost.


So I'm going to multiply 800 by the $0.03, because they're going to charge me 800 times three cents, and the same thing for the output at $0.06. I'll go ahead and format the number as a currency, and you can see that would cost you about $36 if you were going to run the entire book. Now, how many of you would be very happy with paying just $36 for a ghostwriter to write an entire book for you?
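For anyone who would rather see the spreadsheet math as code, here is a minimal sketch using the GPT-4 prices quoted in this lab ($0.03 per 1K input tokens, $0.06 per 1K output tokens); prices change, so treat this as a snapshot.

```python
# The same math as the spreadsheet: (tokens / 1000) * price-per-1K, for each side.
def book_cost(input_tokens: int, output_tokens: int,
              input_per_1k: float, output_per_1k: float) -> float:
    return (input_tokens / 1000) * input_per_1k + (output_tokens / 1000) * output_per_1k

# The lab's benchmark: ~1M tokens at a 4:1 input-to-output ratio for a 50,000-word book.
print(book_cost(800_000, 200_000, 0.03, 0.06))  # 36.0 -> about $36 on GPT-4
```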


I'll wait.


Anybody with me? Okay, good. So let's look at Claude; I want to give you pricing on Claude. Yeah, I know, see, Claude is in Notion; I knew he was with Notion too. Claude Instant, see pricing. Okay. So here we have... I've given them $50. I know you've been charged way more than that, Leland. All right, now we have a little bit of a difference here.


Oh my goodness, we have these interesting things, but now they're going to charge us by the million tokens. Ew, right? Ew. But we can convert that; this is just the reality we're going to have to deal with. So let me go ahead and insert one row above, and we'll call this OpenAI pricing, and I'll make sure the link to this particular spreadsheet is available to everyone so they can copy it.


Because you can change how many tokens this is, just so everyone can see it. All right, so let's figure out Claude. Good old Claude. Actually, we need to do this again for OpenAI, because I only did GPT-4. So this is actually GPT-4; let me show you what this looks like for 3.5


16K. I'm just going to copy this and paste it underneath, and we're going to call this 3.5 16K, because that's what most people want to use. Down here at the Turbo level you can use the smaller 4K model, which means it has 4,000 tokens of context window; this one is limited to 2048 for the output.


And it's fractions of a penny per thousand, but we're going to focus on the 16K one right here. Here's the interesting thing, and I've never noticed this before: GPT Turbo is optimized for dialogue. I've never even tested that; that seems interesting. So we'll work with these tokens, and we have the same situation where input and output are the two different delineations here.


And yeah, your eyes did not deceive you, that was an extra zero. So $0.003 is the input, no, not six, three is the input, and then $0.004 is the output. That means where a thousand tokens up here cost three cents, down here a thousand tokens doesn't even cost a penny. In fact, you're going to have to run 3,000 tokens before you hit a penny.


Does everyone see the difference in that pricing, and how that can make a significant difference in the price? Now you're talking $3 or $3.60 instead of $36. So when you're selecting your models in the API, just understand that GPT-4 costs 10 times as much as GPT-3.5. Now, does everyone think that GPT-4 writes 10 times better than 16K?


I don't, exactly. And for the most part, we're like, who cares, $36 versus $3.60; I hear you. But I think it's going to be standard practice for us to start using 16K to run our prompts, to check that a prompt is good and we're getting consistent results, before we toss it into GPT-4, because we can save ourselves money on the development, basically.


So there we go. Okay, so let's talk about Claude. For Claude, we're going to bring over the same information over here, except we'll call this Claude 2. We'll start with Claude 2.0.


And this is where it gets a little bit trickier, because the units are different. So this is the price per tokens,


and I'll put that here too: price per 1,000 tokens. Okay, so for Claude I need to change this to 1,000,000.


And I can't put my parentheses or my commas in here, because that'll make it not a number, and we want it to stay a number. Maybe I can, I don't know, but this is 1 million. So we have it there. Now, the pricing for Anthropic: there's Claude Instant, but we want Claude 2.0, which is $11.02 and $32.68. So 11.02 and 32.68. And now you'll notice that my numbers are still fine, because


800,000 divided by a million means 80% of that price is what I would pay for 800,000 tokens, because I didn't get to the million. At a million, they'd charge me eleven dollars and two cents. This is about driving me nuts that it's not in currency mode; let me make this a currency, and this one also a currency, just so we can see what numbers we're talking about. And this does get more complicated, I understand, for people who are not American: you will have to do the conversion for your own currency, which I know does fluctuate. So that's not going to work;


I can't make it a currency here because it just makes it a zero, and that's not accurate at all. Leland's pointing out that you should round up for pricing, because they do. Yes, that is true; it does get very close. They'll round up to the nearest million sometimes, especially with Claude.


However, this still gives a good example of what's going on here. You can see that technically Claude 2.0 is cheaper than GPT-4 but more expensive than 3.5 16K, so it's somewhere in the middle. I would say that Claude 2.0 is probably, let's see, $14, so that's like a 5x increase over 16K.


I would say that Claude writes probably two to three times better than 16K, maybe; probably not five times. All right, let's do this one more time for Claude 1.0. Does anyone have any questions about how I'm doing this spreadsheet? And it's actually Claude 1.2 now for the pricing. Like I said, I'll make sure this is part of the lab report.


I'll give you guys the link to this and make it public to share, so you have it to play with. You'll be able to view it, copy it, and put it into your own Google Sheets, and you can change the numbers and start figuring out the pricing of your own projects.


Okay. So Claude Instant is $1.63 and $5.51. So now it is 1.63 and, I already forgot the other number, 5.51. All right, so here you can see that if you run Claude 1.2, you're looking at $2.40, so it's on par with the cost of 16K. I would absolutely say Claude 1.2 is a better writer than 16K in terms of just writing ability.
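For reference, here is the same spreadsheet math when a provider prices per million tokens instead of per thousand, using the Claude figures quoted in this lab; prices change, so treat these as a snapshot, and the results land in the same ballpark as the numbers read off the sheet above.

```python
# Per-million pricing: divide the token counts by 1,000,000 instead of 1,000.
def cost_per_million(input_tokens, output_tokens, in_per_m, out_per_m):
    return (input_tokens / 1_000_000) * in_per_m + (output_tokens / 1_000_000) * out_per_m

book = (800_000, 200_000)  # the lab's 4:1 benchmark for a 50,000-word book

print(round(cost_per_million(*book, 11.02, 32.68), 2))  # Claude 2.0 -> ~15.35
print(round(cost_per_million(*book, 1.63, 5.51), 2))    # Claude Instant 1.2 -> ~2.41
```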


Now, when it comes down to analytics, though, these models are not the same, not in my experience, and I think in a lot of other people's experience too. If I'm asking for something more analytical, I'm going to go to OpenAI over Claude; if I'm looking for some kind of flowery or marketing writing, I'm going to go to Claude over OpenAI.


These are things that you develop as you practice with them. And I'm going to flesh this out with what I'm actually teaching here, which is that best model for the task is the concept we have to learn. And more models keep coming out: there's Google PaLM now in Poe


.com, and there's a place called HuggingChat. Watch this. If I go to Hugging Face Chat, and this is free at the moment, you can literally play with it. I'm stealing that link, Leland; I'm putting that right now into the lab report. Thank you. Right here. How many of you guys have heard of Llama?


You've heard of different models and things like that. Here, you can actually choose other models. So there's the Llama 30B, which just means 30 billion, what, parameters for the transformer. And this is 70... wait, 70B? I wonder if that's a mistype. Is this something new?


Okay, so if I'm curious, I just click on the model page and make sure it is what it says it is. It won't do steamy romance, unfortunately. This is 70 bi... are you kidding me? 70 billion parameters! Sorry for the field trip, y'all. There are some ways to write steamy; Sudowrite has some ability there.


You can get Claude to write steamy if you give it the full context. But there's a line here, and I'm going to just be blunt: if you are doing age play, anything involving child sexual content, or anything like that, I don't care that it sells other places, and it disgusts me personally, but you're not going to be able to get an LLM to write that.


There are just way too many international laws around that kind of material, including written materials, in other countries. So even if you're in the United States and written materials are fine, they're not fine in other places, and these companies are all international.


Nobody at these companies is going to risk jail so that people can write X-rated content for adults; no one's going to do that. If you just want to write romance at the level that maybe the old Harlequins were, you can get the models to do that as long as you're giving them big enough context.


When Claude first came out, I got him to write an entire sex scene where the man took care of the woman first before they had intercourse, and everybody was happy, and it read like an old Harlequin. It really did. If you want something that's more graphic, it can be challenging.


But again, I've had Claude write a serial killer scene from the serial killer's point of view, stalking someone, stabbing them with a knife, describing the blood gushing down their hand. The difference was, I didn't go right to that. I always start by backing up: we're writing a novel, and I get it into the throes of writing the novel, so we have the beginning chapters there and then we have the serial killer bit.


So basically, if your prompt has chapter one and chapter two where there is no violence, and now we're writing chapter three, it's a 50/50 shot whether the AI writes it or says, I can't write that. Does that make sense to everyone?


Thanks, Ryan. "I've been writing my steamy scenes in Sudowrite, and Claude 2 will analyze the scene from a technical perspective. It's told me it appreciates understanding the mechanics of how it works." That's funny. Yeah. Okay, I know we're doing basics, but I've got to do it. This is 70 billion parameters.


I've got to give it a try. Let's see here. Now Hugging... I almost said Hugging Sex. Hugging Face does not like sex either, so it's the same thing: you can't just say write me a sexy scene. Write me a romantic meet-cute between a male cafe barista


who is not usually working that shift and another man who comes in for his regular order. This is the opening chapter for a romance novel. Let's see. Oh, we have the sun, here comes the sun. Oh, is Puritan romance a thing? I think that's called Amish romance, Ryan.


He enjoyed the fast paced environment. So this is another free one.


"Couldn't help." "Couldn't help"; same thing as all the other LLMs. We have the sun shining brightly. And this wrote 458 words. Great. Expand that even longer; add in more cafe customers and make it more vivid and contemporary.


And then there's the expanded one: people are rushing to start their day, the cafe is bustling. We'll see what it can do. I'm not mad at it, and this is a free model. You can sign up at huggingface.co and get free AI chat and everything like that. And 70 billion parameters; people like to look at how many billion parameters there are to


say this model is more intelligent than that model. Truthfully, you can have a really big, dumb model. I've seen them; Mosaic is one of them. It'll say 70 billion, but it doesn't actually write very well, and it doesn't analyze very well. So I'll put this down as a lab for later this month, to do some Hugging Face stuff on the quality of 70B, but I'm not mad at this Llama. And, oh, go ahead.


Is there a limitation with this one, how many questions you can ask before they kick you out? No, there are no limitations. And it looks like it didn't finish, so I'm going to say continue so it can finish. That was 672 words. It looks very similar to what we had when GPT-4 first came out and it wasn't quite able to give you a lot back. Now, Llama is unique in the fact that you can download the model and spin it up and run it on your own computer,


if you have the processing capacity to do that. I don't recommend that for most computers; even on a gaming computer, you're going to be sitting there waiting a long time. So unless you are nostalgic for the internet loading times we all had in the '90s, I wouldn't necessarily run a model on your own computer.


Do you agree, Leland?
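For anyone curious what running a Llama model locally actually looks like, here is a rough sketch, not something demonstrated in the lab, using the Hugging Face transformers library. The 7B chat variant is used here as an assumption (the 70B one needs far more hardware), the repo is gated behind Meta's license, and even this will be slow without a decent GPU.

```python
# A minimal local-generation sketch with transformers; requires accepting the
# Llama 2 license on Hugging Face and a machine with plenty of RAM or a GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # smaller, assumed stand-in for the 70B model
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = "Write the opening of a romantic meet-cute set in a busy cafe."
inputs = tok(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=300)
print(tok.decode(output[0], skip_special_tokens=True))
```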


Thanks, Joe. That is another one, GPT4All. That's the one that also does not-safe-for-work content.


It depends on the model you pick. Okay, so here we have 474 plus the 672, so I got about 1,200 words. There's no limitation on how many requests. I haven't tested everything it can do, but this is Hugging Face, 70B parameters. And somebody said yesterday, I think it was, that something came out that's 200-something billion parameters.


And again, you can have a big dumb model. This thing is supposed to be three times as smart as 16K, but... what are the parameters for GPT-3.5? It doesn't list those models that way; it doesn't have that phraseology of how many billion parameters it is. How many billion parameters is GPT-3.5? Oh, 175 billion parameters. Okay, so that's why: these other ones are still trying to catch up. All right, that's something else we'll dive deeper into on a different day. Let me get back to our syllabus here. We've gotten to pricing.


Oh dear. Okay. Any questions about the pricing before we move on?


Yeah. Okay, so Dion is sharing that you would have to get a dedicated custom-built desktop that would cost anywhere upwards of $3,000, depending on the models you want to run locally. Yeah. And I've seen them spin up even very powerful models, and like I said, unless you're nostalgic for the loading times of the internet when we were kids in the '90s and early 2000s, don't do it, unless you want to put one request in and then walk away and come back.


We are definitely getting the benefit of these things in the cloud, where they are running them on much more powerful machines than we can all individually afford to have. Okay, so that is Hugging Face Chat, and I will go ahead and put that link into the


lab notes. Free models: Hugging Face Chat. And then this thing, let me go ahead and click share on this, and we will say anyone with the link can view. Done. Then we'll copy this, and I'll put this in here too; this is the pricing spreadsheet. So everybody has that, at least as a link. Okay, moving on!


Any questions so far?


See, I even made sure... even though this is the basics over and over again. Yeah, dial-up sounds for the desktop, oh my gosh, that would be hilarious. Nyan nyan! Crush! Yeah, we don't miss that. I know this is like the basics, but it's good to refresh it. I will say, make yourself note cards.


Okay, in the sense that I want everyone to really get to where you're as fast as I am, where if somebody says, okay, GPT-3.5 16K, I can tell you it has 16,000 tokens for a context window, which means the combination of the inputs and the outputs can't be more than 16,000 tokens, while any one response can't be more than 4,096 tokens, which is about 3,000 words.


I'll work on making a table for you guys, a table of inputs and outputs per model, so that you have it. Make yourself note cards and things like that. Here's the reason you want to get it down cold: how many of you think you have it down cold now? If I asked you about GPT-4, could you tell me instantly what the context window is and how much it can write back in one response?
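In the meantime, here is a sketch of what that table might look like in code. These are the figures quoted in this lab as of August 2023 (plus Claude 2's advertised 100K window); they change over time, so treat them as a snapshot rather than a reference, and the model names are the API identifiers.

```python
# Context windows and the response caps quoted in this lab (August 2023 snapshot).
MODEL_LIMITS = {
    "gpt-4":             {"context": 8_192,   "max_response": 4_096},
    "gpt-3.5-turbo":     {"context": 4_096,   "max_response": 2_048},
    "gpt-3.5-turbo-16k": {"context": 16_384,  "max_response": 4_096},
    "claude-2":          {"context": 100_000, "max_response": 4_096},  # cap per the lab's "2048 or 4096" rule of thumb
}

def prompt_fits(model: str, prompt_tokens: int, desired_output_tokens: int) -> bool:
    """Check whether prompt + expected output fits the model's window and output cap."""
    limits = MODEL_LIMITS[model]
    return (prompt_tokens + desired_output_tokens <= limits["context"]
            and desired_output_tokens <= limits["max_response"])

# An 8,000-token mega prompt is too big for GPT-4's 8K window once you want output back:
print(prompt_fits("gpt-4", 8_000, 1_000))              # False
print(prompt_fits("gpt-3.5-turbo-16k", 8_000, 1_000))  # True
```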


Okay. But how many of you think you could memorize it like that? It's really just a couple of things, four models really, that you need to memorize: Claude and GPT. The reason I say that is because of when you're in Rexy. Okay, let me load a prompt, just so I can give this example here.


If I go to world creation, universe creation, we will default a prompt for you, and I click load prompt. When I come here I can put my elements in, but... oh, my elements go over here now. Okay, cool. Oh, I like this better; this is so awesome. Pay no attention to the metric tokens.


I get to play with it later today; I can't wait. Okay, so the prompt tokens: when you're putting stuff in here from components, from your Notion page, eventually your Notion pages will just be able to be linked in as tokens. We'll talk about that in a minute. We're using the word tokens here; we probably should come up with better phraseology, because these aren't tokens in the AI sense.


A token here just means what's in this box: Genre means whatever's in this box, and whatever's in this box is what's going to go right here in the field. But you have the ability in Rexy to change your model at any time. Yes, variables. I want to come up with a better word than variable, though. It is a variable, but I want a cutesy word, or just something that makes more sense context-wise.


You will be able to change your model. Now, you guys won't have Claude until I can get you API keys for Claude; I'm hoping we can fast-track that. Like I said, I reached out to my contact, and fingers crossed. You need to be able to understand, fundamentally, if your prompt is too big for Turbo, or if it's too big for 4 and has to go into 16K.


If you can't understand that concept of, oh my gosh, my mega prompt is 8,000 words, it's got to be in 16K, then that's going to be a problem for running Rexy. And I can't fix it for you, because we're not going to dumb this thing down. And what I mean by that is... I'll get back to the timeline, Hans, I won't forget.


I promise. It is my very strongly held belief that creatives like us need to have raw access through an API to create prompts on the fly and get amazing things out of the AI. That is what I believe. Every single tool that's out there, apart from possibly TypingMind, but I'll talk about that in just a second,


Poe, Sudowrite, Verb, all of them, they put prompting and stuff between what you put in and what gets passed to the AI. Okay, so Rexy doesn't do that. He's a dinosaur; he's not modern like that. You call the API, and everything that you stick in is what's going to get sent in that API call.


That's what's going to get sent to GPT-4. How many of you have been playing and seen prompt leakage on different tools, whether it's Verb or Sudowrite or even Poe? We've seen this with Poe: we've run something in GPT-4 chat, and when we ran it on Poe, we got a very different result, because it was very clear that there's some extra prompting in the back end in between.


Would this be the same for the text-to-image stuff? We don't have text-to-image stuff yet; that could be a different lab. I haven't played with a lot of those tools; that's usually Christine Breen's department, but that's a good question, Ash. Okay, but does everyone understand the point of Rexy?


Rexy is going to have a lot of power. You're going to be able to directly call the API, even just with OpenAI for everybody right away, because you guys all have OpenAI keys. And Harper's saying, yeah, I hate that I can't get consistent results, because every time I use a tool, sometimes... yeah, something has changed.


So this is why it's critical: start studying now, make sure you have it down cold. Oh, this thing broke, it's giving me an error. I now know I need to change the model to something that has a bigger context window. With Rexy, if I go to 16K, now suddenly my maximum length is bigger. If I bring it down to GPT-4, we have this, or we're going to get this fixed, where it comes back down.


Yeah, it kind of defaults. I can't... If I try to go bigger, it's going to come back down. But it's still very important that you all understand the fundamentals of this so that you're nimble with your prompting. Okay. Timeline. I forgot about timeline. We are doing our alpha testing this month.


I'm aiming for the beginning of September; sometime then the entire student population of Future Fiction Academy will have access to Project Rexy. It'll be in beta. We're hoping to come out of beta in October. And at that point... I know I had said last week that we were going to include this in the tuition.


Unfortunately, that's not possible with the per-user cost and everything like that. I can't say the exact number, but right now we're looking at roughly something in the $20 range in terms of what you would pay per month to have access to Rexy, and that's just to help with the cost of the infrastructure and tech support and everything like that.


You guys will all be getting the training and stuff for free with Rexy. Later on in the fall, for people who are joining the FFA, it's going to be more expensive to come in and work with Project Rexy on a monthly basis. So it's definitely the right time to get in, that's all I can say. And the reason it's going to be expensive is because


think of where everyone started in May and the number of questions we're going to have with people brand new to AI. Yeah, so Rexy is going to be great. You can tell your friends to join the FFA now; before the end of August is best. They'll definitely be locked in there.


The lock-in for the roughly 19, 20 bucks a month or so will probably be sometime in September; that's when we'll do the cutoff and say, okay, everybody who joins the lab after this, you haven't been with us for months and months, so we can't honor that.


You're not going to get in on the good deal. Okay. All right, here we go, more information here. Everybody good so far? I don't mean to be bombastic. Okay. The next thing we'll talk about is troubleshooting prompts.


Stick with me, kid. All right. This is something that even I'm really bad about.


When I was tired last night, you saw it: something broke one time, and I have a bad habit. If I don't get the results I want right away, the very next thing I do is change the prompt. How many of you do the same thing?


Yeah, it's a bad habit; we've got to get out of it. We need to start running a prompt two to three times to make sure it's not just a bad die, a bad dice roll. Okay. Oh, Harper is better than us: she waits, it depends on what broke. Yeah, that's true. But basically it's token selection, right? We've seen that with the tokenizer, and I also do a whole thing of explaining it in the playground here. If I go to a legacy model and say, describe for me a rose bed, and I max out the maximum length.


Temperature controls how random the token selection gets to be, whether it has to stick with the most likely choices or has the freedom to pick less likely ones. I set show probabilities to full spectrum, and I click submit. It will show me the tokens it chooses. And on any token, like "design", I can look at this and go, there was, what is it, a 45% chance that it could have picked that token.


When it was deciding what token could come next, it assigned a percentage for the likelihood of each candidate token making sense in the context. And it has the ability to pick a very low-probability token; see, it picked the one that was 0.35%. That doesn't mean the other tokens don't make sense.


All of these tokens make sense for this particular spot. "A rose bed is a garden area." "A rose bed is a garden feature." "A rose bed is a garden filled..." "A rose bed is a garden design," or "of": all of those tokens would have worked in that sentence, and the choice would have impacted what the next token selected would be.


If I said "a rose bed is a garden feature", this one, that's 10%, I honestly would not pick the token "feature" again next. Does that make sense to everybody? I wouldn't ever say "a rose bed is a garden feature feature"; that doesn't make sense. So what happened here is that it wrote "a rose bed is a garden design."


And then once it chose "design", that's what made "feature" 14.34% likely as a token to follow the token "design" in the context of the larger sentence. Is this making sense to everyone? So every once in a while, the AI will pick a token that doesn't make any sense, like this one right here. If it said "a rose bed is a garden bed", all of us would say that's pretty bad writing, right?


We were all taught that in elementary school: you don't define a word with the same word. So if it started to pick "a rose bed is a garden bed"... matter of fact, let's do this.


I'm going to manually make it choose that token, "a rose bed is a garden bed", and I'm going to click submit. And that changed it right here: after "bed" we now have "of", and you'll notice "of" was one of the tokens that could have been selected after "garden", but now "of" is the token it picked, and then "soil". It also could have gone with the 88% option, but it's rejecting the number-one choice for that same reason: because of its training, because it has training to know, like we do, that


you don't use a word to define a word. "A rose bed is a garden bed of roses." Does that help anybody understand what a rose bed is? No. So I think it's fascinating to start looking at this, and once you start understanding it and really synthesizing it and playing with it... by the way, to get to this mode, excuse me,


you want to go to complete mode; they don't have this in chat. So this is the OpenAI playground: you want to go to complete mode, and then you want to come down here to show probabilities and select full spectrum. Now, I will let you know... I don't know how I got there, but, Bon Jovi, I apologize.


I did not mean to give you Bon Jovi. Why is this not working? Oh, "Bed of Roses", got it. I will warn you that you will be topped out at Davinci-003, which is a model that is not as smart as 3.5. So just understand that when it picks a token, it's not 100% illustrative of how it's going to pick tokens in 3.5


and 4, because those are smarter models; they have more billions of parameters that they can consider. But Davinci will at least illustrate for you the functionality of it selecting tokens one by one. Yeah, click to show more models. It still doesn't have the chat models in here, because this is completion mode.


So if you click to show more models, you'll just have older models like Curie and Babbage and Ada, and different Davincis. Does that make sense, Dion? If you want the other models, you have to go into chat mode, and then these are all the chat ones, and the playground, for whatever reason, doesn't have the percentages for the tokens there.


I'm sure there would be some way for us to do it, for the API to call that back, but I don't know that chat supports that.
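For reference, here is a rough sketch (not from the lab) of pulling those same per-token probabilities through the API instead of the playground. It uses the legacy completions endpoint, which is what exposes logprobs; the exact client syntax depends on your openai library version (the 0.x style is shown), and the Davinci-era model named here may be deprecated by the time you try it.

```python
# A minimal sketch, assuming the 0.x openai client, a placeholder key, and a
# legacy completions model that supports the logprobs parameter.
import math
import openai

openai.api_key = "sk-..."  # your own API key

resp = openai.Completion.create(
    model="text-davinci-003",
    prompt="A rose bed is a garden",
    max_tokens=10,
    temperature=1.0,
    logprobs=5,  # return the top 5 candidate tokens at each position
)

chosen = resp["choices"][0]["logprobs"]["tokens"]
top = resp["choices"][0]["logprobs"]["top_logprobs"]
for token, candidates in zip(chosen, top):
    # Convert log probabilities into the percentages the playground shows.
    pcts = {t: f"{math.exp(lp) * 100:.2f}%" for t, lp in candidates.items()}
    print(repr(token), pcts)
```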


All right, so we have that information for troubleshooting prompts: run them a couple of times. And yes, oh yeah, completion mode supports the percentages; I don't know if chat mode does. Do you use Python to call the chat ones, Deanna, and can you get the percentages of what it considered for each token?


I wasn't aware that was supported; if so, I will research it, because that seems pretty cool. The other thing I'm going to say for best practices is: use square brackets like this


when you are mega prompting, to define sections of mega prompts. Now you're probably going to go, but Elizabeth, there are other ones, and I like this one and I like that one. Okay, here's my reasoning. Parentheses can cause confusion because we use them


in writing for asides that have nothing to do with coding. Okay. The alligator brackets, the angle brackets, can cause issues in tools like Poe.com or any online tool, because HTML uses those. We've seen this in Poe. So when you go to Poe.com, for example... now, this doesn't mean that if you use angle brackets it's automatically going to be a problem or anything like that.


I go to Claude Instant, which I have access to, and I'm like, tell me a joke and put the punchline in tags like this: punchline. It can cause issues, because Poe has a tendency...


FFA chat type? I have not played with that. So it says, here's one: why can't a bicycle stand on its own? Because it's too tired. Punchline: it doesn't have legs. Not exactly... okay, sure, I guess that works. But you'll notice that this one worked. Okay. But with a longer prompt, if I were to just tell Claude


Instant: tell me 16 jokes about bread. Here, 16 bread jokes. Why can't... ha! These are not about bread at all. What do you call a bread that isn't yours? A "crumb-munity" loaf. And it's hallucinating a little bit; it saw the yeast. That doesn't even make any sense. It's not doing it now, but what will often happen when you're prompting in Poe... has anyone seen a word here that'll be purple?


And if you click on it, it will prompt more, and stuff like that. But I don't know how to get back to my old chats. I guess it would be in here, maybe... my old chats. Okay, there we go. This is what I was talking about. See how this says "cultural enrichment"? If I click on this, it'll tell me more about cultural enrichment.


This is something they have baked in, though of course it isn't doing it right now, because it's Claude. But this is a feature they have baked in. That one was something else, but if I come here to Ocean Myths and I click on Ocean Myths, here are some interesting things about ocean myths. But when you start using a bunch of angle brackets, it can impair that functionality, where it's got another layer running that identifies words it could turn into another prompt.


It's trying to predict what other words you would want to expand, as a usability thing or something like that, which is very handy if you're doing research: tell me more about this, and then you click the word and it tells you more. It's simulating that old experience of the internet where everything would be linked and you're like,


tell me more about this, and then you get that click and it expands. So that's why I say the angle brackets are not necessarily a best practice. Okay, and then the other reason: other people like these, the squigglies, the curly brackets. Those can cause issues in Rexy, and they can also confuse an LLM into thinking it's JSON formatting.


Now, I'm going to be upfront with you: I don't do JSON formatting. I haven't had time to sit down and learn it. But the rules of JSON syntax show you how to structure the data, so instead of just name and John with colons, you use these little squiggly brackets to wrap everything in.


And there are people, I know Lynn Jordan and some other people, Leland, who have experimented with using JSON syntax for their prompts, and that's cool. But if you aren't actually using JSON syntax, and the AI is primed to think that you are because of that character, you can see how that could insert some confusion.


So again, it doesn't mean that the squiggly brackets won't work for you, or that the angle brackets or the parentheses won't work. They all work, but for my money, and for the analysis I've done with tokens and things, the square brackets are the only ones I have not run into any problems with.
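As a small illustration, here is my own sketch of what square-bracketed sections in a mega prompt might look like, with a couple of placeholder slots filled in before the text gets sent; the labels and values are just examples, not an actual Rexy prompt.

```python
# A hypothetical mega prompt with square brackets marking each box of information.
GENRE = "cozy mystery with witches and magic"
IDEAS = "lakeside town in the Southern United States"

prompt = f"""[TASK]
Define the elements needed to construct a story world for the series described below.

[GENRE]
{GENRE}

[IDEAS]
{IDEAS}
"""

print(prompt)  # this whole string is what would be sent as the API call's content
```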


They haven't caused an issue for me on Poe, haven't caused an issue for me on Sudowrite; they're the most universal option I can find across all the different tooling. Any questions about that? Does anyone else use other brackets? I don't know what other brackets you would use. I will say that a dinkus or some other denotation


between the task and the information for the task is also a best practice, and you guys are familiar with that from our mega prompting. All right, so now the last little bit of this lab. I know we're over on time, but I think this is important. How is everybody feeling?


All right, this next part here is for Karina. Are you still with us, Karina? Yep, I'm here. Okay, good. Now we are going to go into prompt construction. This is the last major part of this lab, prompt construction, because you are going to need to know how to construct a prompt for Rexy when you move into Rexy. Rexy is all about prompt construction.


If nothing else, at first we will be making all the prompts for you, so if you look at Rexy, this is doubling as Rexy training. That's okay. We will have here the generic prompt that takes your genre and ideas: bring your genre and ideas so far, and this will help you learn the elements needed for your story world.


This is a very generic story world prompt, like pretending you know nothing about what to do for a story world. So I did one last night that I can share with you guys for this specific prompt, just to show you what it outputs. Let me go to Notion. I know, we're on the boot camp, huh. And I'll go to the dashboard and the brontosaurus pages and my projects.


Okay, so here it is, the world building one. That particular prompt exports this, and all I gave it was cozy mystery, lakes, and I wanted witches and magic and the Southern United States. That was the information I gave Rexy. And what that does for me is, number one, define the elements I would need to construct a story world for this series.


It even came up with the Lakeside Charm series. I wouldn't call it that, I'd call it something else, but that's fine. So it says we're going to need a setting and it explains what I need, and then all the information is there. The town could have a name like Southern Charm, Virginia. That doesn't really work for Virginia; we don't have live oaks and Spanish moss, that's further south. Maybe in some very rural spots, but nobody would see live oaks and Spanish moss and think Virginia. So anytime the AI gives you information, you do need to validate it and double check it.


That is, if it's something that needs some grounding in reality. If I were writing fantasy, not a problem; we can have Spanish moss and live oaks anywhere we want. It tells me which characters I need: a protagonist, an antagonist, and a supporting character. And what we're working on is, if I bring this in here, let's look at Rexy for a second.


This is not fully fleshed out yet, but just to give you an example of what we're thinking, in terms of walking yourself through the sequence: I go to Characters, Generating a Character List, and I load this prompt. You'll see this prompt has genre and ideas. I could just put that there, and I could put cozy mystery witches, and it has here that she's a witch in secret.


And all of that: she runs a charm shop, antagonist, supporting characters. And how many characters do I want? Let's say I want 10 characters. Then it's going to go based on this genre, cozy mystery witches. I can take the witches out because it already says witches in there, so we'll go with paranormal cozy mystery.


Yeah, cypress and Spanish moss would look better in Virginia swamps, most definitely. Ideas, so the ideas are going in here, and then Generate, Number of Characters. I'll keep all the settings here, and I'm not going to actually write to Notion. We'll do GPT-4, that's fine. The reason I'm not going to write to Notion is because I want to really show you the completion, and I always forget that.


And I can always copy and paste it to Notion later. Ooh, this is really cool.


I don't know what's going on here with the completion preview. I don't know what these are. I don't know how this works yet. I'll have to talk with Joe. I'll find out here. We can't see what you're seeing. Can you go up to the completion thing there? Let's take a look at the request. Because I don't know if that request came through the way that we expected.


You're a writing assistant, based on this genre and... oh, it didn't bring in the box. Yeah, the tokens didn't come through. So that was last night's build. That was me at one o'clock in the morning adding these features. All right, that's okay, because you know what? Guess what? I know what goes in the boxes.


I know what goes where. This actually still demonstrates what I'm trying to teach, more fundamentally. This box here goes right here, and ideas. And then generate, boop, 10 characters. Okay, cool. Now it'll process. Sure, I'd be happy to. No worries. And Rexy won't be like this when you guys have access to him. We're building it.


Joe's building it. Yeah. It's very stable right now, because every night it's changing and we're adding stuff. I feel bad, I feel like we're running you ragged, sir. No, it's a good thing. And I can't wait to show you those completions, the magic tokens. Pop the magic token.


Okay. It's running. So it's definitely giving me a better response than 47 words. And it's running this list.


But that's the philosophy here. And what's really cool is that even though I changed this, if I reload, this prompt will go back to normal. It won't have this information in there; it'll have genre and ideas, and we'll fix it so that genre goes back into genre. It's just in flux right now.


I should have picked Turbo or something like that, because GPT-4 just takes so long to run. But it doesn't actually take longer than running in Playground. If I grab this entire prompt here and bring it over into Playground, type it into User, go here to GPT-4, I'll go with one, Submit. It starts running, but you'll see that we're waiting.


It looks like it's faster because we can actually see it as it's populating. When you do an API call, you don't get that; it just comes back all at once. So we can compare the completions. Here I have the request, which has all the information that was sent over, and then I have the completion right here.


And we do have... oh, this is cool. It groups some characters together. So here are characters six and seven, Sylvia Rodriguez and Angie Johnson; it went ahead and made them ally witches. Sylvia's tall and Angie's short. I didn't ask it to group them together. And then Alex Ridley, nosy reporter.


We were creating this prompt literally last night; Karen Brown, Christine and I were formulating it. They had wonderful ideas about this prompt. For the characters they wanted Enneagram and we want this and we want that, and I scaled it back a little bit, because I was like, let's just do a generic writing prompt for the characters right now.


And that way, where their ideas are, that's where I can put the information in. Meanwhile, Playground, I don't know which playground I was in. Yeah, see, this is still running. So this just goes to show one's not really faster; Rexy is not faster or slower than running it in the Playground.


It's just that in Playground you can actually see it as it goes, and you can stop it if there's a problem. Okay, so that's a little bit about constructing a prompt. But now we're going to do planning, planning how we do prompt construction. So I'm going to walk you guys through how I would actually create a prompt like the one I did last night.


But we'll do a different prompt than that. So the first thing I always start with is steps a human would take. Okay, so I would say what is the task? What is the task that we want to build a prompt around? It can be a task that we do prompting around all the time. It doesn't have to be a complicated task.


I'll let Karina pick it if Karina would like to. Karina, do you want to give me a task that you want the AI to do? What about summarizing a past book? Okay, so summarizing, that's a good one,


summarizing a past book. So let's think of what a human would need to do.


How does a human do it,


right? And we would go, okay, they would probably read the book entirely, if I'm honest. Then they would start with chapter one


and reread that. Then write a summary of chapter one, having the context of the whole story so they could note things. I guess, what kind of summary would you want? Did you want one that would have the foreshadowing and stuff, or would you only want a strict summarization of chapter one in isolation?


I think I would just want to know what the chapter is about. Okay, so then we don't need to read the book entirely, thank God. That makes it easier for the AI to do it. So start with chapter one, read that.


Maybe, you know what, if I was really doing it, as I was reading I would jot notes of things to add into the summary, and then I would write the summary. Is that how everyone else would do it too? Yeah. You would pull out two or three key points as you're reading, and then you would write the summary.


So then we translate that: how does an AI do that? The AI would need to read the chapter, so we would feed it the chapter. If I was exactly mimicking what a human does, I would ask it to make a bullet point list of major elements in the chapter that should be in the summary,


then write the summary, write the chapter summary. Does that seem good? Yeah. And this is where we lend our thinking power to the AI, because the AI probably has training on summarization; that's probably one of the tasks it could do out of the box. But I take the time to really hone this down into the component steps and then figure out the prompting that would go with that.


I don't know why I can't delete that out of the chat; Zoom is just being silly. I have a word in there and I can't change it. There we go. Okay. By breaking it down into component steps, now I can start to build the prompt. So if I was prompt building, the prompt is: read this chapter. And then I always say the word, and then I put the squiggly marks, because if I'm building a prompt for Rexy, I have to define a box that the chapter text is going to go into.


So I would put chapter right here, in the squiggly boxes, right? And then I will say, read this chapter, then make a bullet point list


of major elements in the... Now, if I put squigglies again, it's going to repeat that entire chapter. I don't need to do that, and this is where, as a best practice, to make sure it's really super clear, I actually will put this into square brackets


and close it. Because in Rexy, when it parses over the information that's in that box, it doesn't parse over the name of the box. Let me show you in Rexy here real fast. It's not going to send over the word genre in squiggles when it's actually working. So it's important: if I have something in squiggles like this, and there's no context in that box, nothing saying that box is chapter one so it understands that whole thing is chapter one, then


I can put brackets around it, like a container wrapping, saying, okay, this is the chapter. Then I can call back to that when I want it to do different tasks: then make a bullet point list of major elements in the chapter. And I just put that in brackets, square brackets, if I can spell it right.


Finally: major elements in the chapter that should be in a summary, then write the chapter summary. Now, two or three steps an AI can handle, and you do want to spell those steps out one by one, because this is where you lend it the ability to think: you make it make that bullet list of major elements first. That's the first task.
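Put together, the prompt would look roughly like this. It's a sketch, not the exact Rexy prompt; the box name and the wording are placeholders:

Read this chapter:

[chapter]
{chapter}
[end chapter]

Then make a bullet point list of the major elements in the [chapter] that should be in a summary.

Then write the chapter summary.

Here {chapter} is the squiggly box Rexy fills with the pasted chapter text, and the square brackets act as the container label, so the later instructions can call back to the [chapter] without repeating the whole thing.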


Then it has something to put it in the right mindset to write the summary. If you wanted the absolute best results, you would actually run this as a two-stepper. You would do this as prompt number one in chat mode, and it would give you that bullet list.


And then you would say, great, now write that chapter summary. That gives you the chance to read the bullet list and make sure it makes sense, and it also gives the AI a chance to read that bullet list, because every time you chat with it, it goes back and reads the whole context.


Not in an API call, by the way, only in chat. If you're doing an API call with Rexy, the difference is that you have to send the information over again each time, or you have to take a bunch of information, make it into a report, and then send the report with the next prompt for whatever it needs to do next.
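A minimal sketch of that difference, not Rexy's code: it assumes the OpenAI Python client with an API key in the environment, and the prompt wording is only for illustration.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

chapter_text = "..."  # placeholder: paste the chapter text here

# Step 1: ask only for the bullet list of major elements.
messages = [
    {
        "role": "user",
        "content": "[CHAPTER]\n" + chapter_text + "\n[END CHAPTER]\n\n"
                   "Read this chapter, then make a bullet point list of the "
                   "major elements that should be in a summary.",
    }
]
step_one = client.chat.completions.create(model="gpt-4", messages=messages)
bullets = step_one.choices[0].message.content

# Step 2: a chat interface would remember step 1 on its own. With raw API
# calls nothing is remembered, so the chapter and the bullet list have to be
# sent back along with the new instruction.
messages += [
    {"role": "assistant", "content": bullets},
    {"role": "user", "content": "Great, now write the chapter summary."},
]
step_two = client.chat.completions.create(model="gpt-4", messages=messages)
print(step_two.choices[0].message.content)

In chat mode the earlier turns ride along automatically, so step two can just say "now write the summary." With API calls, every call starts from scratch, which is why the second call carries the chapter and the bullet list again, or a condensed report of them.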


This is very advanced prompt engineering, the difference between chat and API calls. Does this help, Karina? Yeah, and then you can repeat the process once you have all the chapters to get a book summary, right? Yes, and our hope for Rexy is that we're going to be able to say... oh, go ahead, Joe.


Oh, I was just going to say, I did a run on Rexy. I apologize for interrupting. I did a run on Rexy, and it seems to be working. I think we're missing a button, so we might be able to take another stab at a run and just click somewhere else to force it to commit. I'll get your button in there later today, but I think we've got it working online.


We don't need to run it again. This wasn't designed to be teaching Rexy or anything like that; I was just showing Rexy to explain why I'm teaching these basic components and how they relate to Rexy. But I'll try it again since you're here. Okay. So I actually have a prompt like this.


So let me go back to the catalog. It's not anywhere yet; now that I think about it, I don't know where we would put this prompt. I think it's under novel writing, and it is in... not novel summaries. It's in brainstorming. I wasn't really good at my naming yet; I was just throwing things in here.


Story constitution, chapter by chapter. Eventually we're gonna get to this thing, which is multi-chapter. I designed this as a hack of it, but eventually, with the magic tokens, it's just gonna sequence. So basically: read the chapter given and write a detailed report on character, setting, and plot elements.


And then the system prompt has an example of that report, so it knows what it needs to look like. This one is designed so that you literally copy and paste one, two, three, four, five, six chapters. That's okay, because it's there on the page. Then we can just run this really fast. I'll show you how this works.
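The shape of that setup is roughly this. It's an illustration only; the headings and wording are made up, and the real version puts a fully filled-in example report in the system prompt rather than just a skeleton:

System: You are a story analyst. Read the chapter you are given and write a detailed report in this format:
CHARACTERS: who appears, what they want, what changes
SETTING: where and when the chapter takes place
PLOT ELEMENTS: what happens, in order, including clues and reveals

User: [CHAPTER 1]
(paste the chapter text here)
[END CHAPTER 1]

Because the example report sits in the system prompt, the model sees the target format before it ever reads the chapter text.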


Let me go with... give me one second. Google Drive, and we'll eventually get it to where you can say, okay, these are the sequences, run this. Not that one, go to this one. And then, yeah, we'll go with mom no notes, Book in a Day, pretty close. We're starting off with chaining just the summarizations.


Okay, so let me... oh, I don't have Jumpcut on. Give me one second here. Launchpad, Jumpcut, turn that on. For those who don't know, Jumpcut is a tool on Mac that lets me copy a bunch of things. So this is chapter one,


and I'm going to copy that. This is chapter two. Once I highlight it and press Control-C, it saves it... let me stop the share for just a second, I'll show you.


It keeps it so that I have chapter one, I have multiple things that I've copied, and I can select them from my bar up here. So I'll stop the share and do a new share. Cancel. I highly recommend Jumpcut or some similar tool; I think PC has something like it already built in. So I already have two chapters, and we'll just do three chapters for the illustration.


We won't do all six. I called this parked content, and it was like a hack of Rexy, a way to set myself up. So I'll put in chapter one, I put in chapter two, I put in chapter three. Basically, these are static pieces of content; this isn't going to change. And now I just change right here. This will eventually be a magic token.


Chapter 1. Let's not do GPT-4, that takes forever and a day; we'll do 16K. And we will auto-write it to Notion, and I will call it BOM number 6, chapter 1 summary. I click go, and then I have to... oh, I don't know why that was an issue, but okay, update it. Okay. Then I have to click one more space. What do you say, Joe?


Yeah. I think what might've happened last time is the left side wasn't filled in. Is it showing right now, or does it look like it's blank? Do you see something in there? It looks like it's blank. Yeah. So I think something must be clearing the tokens somehow, or clearing this part out, when we're updating the LLM, because that seems to be what happened last time. So that's what happened: they cleared out behind the scenes, and that's why nothing got passed. Although it looks like something showed up on the right. It hallucinated completely.


Yeah. It's all right. We'll run it again, because this time I won't update the LLM. So that's a bug right there. And just, yeah, just click somewhere else, like in the white space, to simulate a button press there. Yeah. And then go ahead and click Process. Let's see if we get better results this time.


Yeah, this time it didn't clear it out. So something about changing the LLM is clearing the prompt tokens. Yeah, because I was working on the slots last night, which you're gonna love when you see it. But anyway, I was working on that and testing those out, and I didn't test the original stuff, which I probably should have. But anyway, I got the results.


I got the completion, and here's chapter one, and everything's there. And if I come over to Notion, to Elizabeth's projects, there is the chapter one summary right here. Not that one, that was the hallucination one. But it did go to my Notion. Sometimes Notion is a pain in the butt. Move that down.


Okay. And then what I can do very quickly, because it's Rexy, right? I don't have to change anything, because chapter one... Actually, it's faster than that, because I think you were doing it the long way last time. Now you can just click that, and if you go up to those little red ones on the top... yeah, they're all new, like the S1 where it says assign completion; click the S1 button on the right side.


There you go. No, I don't want... this is not the right sequence for this prompt. If that's going to assign the completion to S1, I don't want to do that. Okay, gotcha. That'll be for writing, but I'm excited about that. This is for sequencing. This is possibly even like a filter, or a layer, almost.


So now I want to do chapter two for this element. I come here, I rename this chapter two, and I click go. Now it'll just summarize chapter two, because it has the parked content over here. It doesn't matter that chapter one and chapter three are also here in boxes, because my prompt doesn't call them out.


No harm, no foul, it's just sitting there. So I can rapidly call chapter two, chapter three, chapter four, and it shoots the summaries out to Notion for me very quickly.


And then chapter three, change it here and click go. For my veterans: how much work would this actually be the other way, in Playground? How many times did you have to copy and paste the chapters?


Oh, I have to have my chat open. Yeah, too many times. And this way you only have to validate the data once, too; make sure your chapters 5 and 6 are all there one time. Now if I come over to Notion, you guys will see I've got chapter 1, chapter 2, chapter 3, and bam. The reason we like Notion is because if I want this to be all one document, you just right click, sorry, you left click here, turn this into text, right here, turn this into text.


Okay, that one looks a little bit ugly. And then turn this into text. That one looks better. I don't know what happened here; it looks a little bit wonky, but I can fix it. Is there a character limit on what you can put in the left-hand field, size-wise? There is not. That's why it's important to think about your context window.


10,000 words here, there are not very many models that are going to handle that other than Claude. 10,000 words is really about the high end of what Turbo can process. So if you have longer chapters, you still want to work in slices of no more than three to four thousand words at a time for it to summarize.
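A rough back-of-the-envelope check. The 1.3 tokens-per-word figure is a common rule of thumb for English prose, not an exact number, and the 16K figure is the advertised context size of the 16K Turbo model:

words = 10_000
tokens_per_word = 1.3               # rough rule of thumb for English prose
estimated_tokens = int(words * tokens_per_word)
print(estimated_tokens)             # about 13,000 tokens
# A 16K context window has to hold the prompt AND the reply, so a
# 10,000-word chapter is already crowding it. A 3,000-4,000 word slice
# comes in around 4,000-5,200 tokens and leaves comfortable room for
# the summary.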


Does that make sense, Karina? Yeah, I did the long way around. Yeah, you did the manual one. I did too, so you're in good company. You did it the right way, you learned it, you're educated about it, but now you can see how this would make your life so much faster in the future. So much faster. Yes, and as we are making lots of books, we are going to need to get them very quickly into these kinds of formats.


This is a lot easier to validate, as an outline, for stories going off the rails, at least as a preliminary check before you have to read the whole thing. So this is a little bit of Rexy. Hopefully everybody understands the concept: we take a task we want, like summarizing a past book.


How does a human do that? Start with chapter one, read it, jot notes, and so on, and then you can make actual prompts to do that. In this case, my prompt does not say make summary bullet points and then write it. Instead, what I did is a different method of lending it some thinking.


I gave it an example report so it knows what kind of information it's looking for before it even starts reading that text document, because for OpenAI, the system prompt gets parsed first. All right. This was an hour and 45 minutes, so I want to thank everyone for staying. Any questions?


Let me stop my share. I know this was very much a deep dive on the very basics and things like that, but I think these are going to be the critical skills we are all going to need to adopt. This is the stuff you need to be able to do in your sleep, the way we can with the other skills we have with writing.


These are the core skills, I think, of writing with AI: understanding tokens, understanding what the cost is going to be, and understanding how to break tasks down into individual prompts. If you start working on and practicing those three critical skill sets, when you get access to Rexy, you're going to be dangerous, as we like to say in the FFA camp, or going back to the Club Strat.


How many pen names will we need? All right. Thank you guys so very much. Thank you to everyone who volunteered to be part of the alpha group. I am eternally grateful. They're going to be hacking away at this over the next week and a half or so. And that way we have something very robust and strong for when September comes rolling around.


So your assignment between now and September: practice. Some more practice with different models, practice on projects that don't necessarily need to see the light of day. You're just practicing, you're prompting.
