AI NPCs
How to wrangle Chat AI to do what YOU want!
I have a lot of experience "wrangling" AI; I use it for some of the work I do irl.


So I'd like to give some advice on how to create a better prompt for this mod.


Let's start by including the last prompt I have from using the mod.


I was using the text model huggingfaceh4/zephyr-7b-beta for the earlier part of the conversation:



[code]
You hiccup and sway as you reply, just peachy\" Criselda Nicolosi: \"Are you drunk?\" You: \" Maybe a little too much tank soda last\"
[/code]


and then gryphe/mythomist-7b for the rest, I believe.



[code]
Let's roleplay in the universe of Barotrauma. You don't describe things or actions, just chat as your character. Do not prefix your response with your name, your
response should only be what your character would speak. Keep your responses short, up to three sentences. Embody the character of Artie Dolittle, a Rude female
mediocre electrician serving on a Diagonal Dugong class submarine in the Cold Caverns. You are a young, incompetent, clumsy, drunk man who volunteered to work for free.
You are currently onboard a Borvo Mensa Outpost in the Upper aft side. You have these items: poor Wrench, crude Screwdriver. Speaking to you is a crewmate named Criselda
Nicolosi, a Professional female mediocre mechanic. Here's the conversation so far for reference, do not repeat something you've already said: You hiccup and sway as you
reply, just peachy\" Criselda Nicolosi: \"Are you drunk?\" You: \" Maybe a little too much tank soda last\" Criselda Nicolosi: \"how are you?\" You: \" You chuckle,
leaning against a wall. I'm doing quite well, considering the circumstances. Drunk or not, I'm still here to help with the tasks at hand.\" This is the current line you
should respond to: hello
[/code]


So from what I understand, this is the prompt the AI is receiving. While it is thorough and accurate, it has some issues.


  1. The term "roleplay" is used.

    I have tested AI "roleplayability" before. The issue is that, in theory, the AI is always doing what it thinks the prompter wants it to do, so even when it's given its own decision-making, it still tries to do what it thinks the prompter wants it to do.

    So for example, if you're playing a game, it'll think you want to win most of, if not all, the time. If you ask it a question like "Can you fix circuit boxes?", it will assume you want it to say yes most of the time. So roleplaying with AI usually results in little to no conflict.
  2. "Barotrauma" is mentioned. (this one is a double-edged sword)

    By naming Barotrauma, the AI draws on a wide array of information, usually going by what's most "Barotrauma", and what I've noticed is that the AI is very... co-operative. Let me quote the Steam description of Barotrauma to show what I mean: "Barotrauma is a 2D co-op submarine simulator – in space, with survival horror and RPG elements. Steer your submarine, complete missions, fight monsters, fix leaks, operate machinery, man the guns and craft items, and stay alert: danger in Barotrauma doesn’t announce itself!"
  3. It's too long (for gryphe/mythomist-7b).

    The prompt includes too much information, constricting the AI's "imagination" too much and making it overthink what it has to say.

    For example, if I ask Artie Dolittle "Are you drunk?" and the prompt brings up all this information about Barotrauma, his tools, where he is, and what he is, the AI gets too much to interpret and doesn't really focus on the fact that he's supposed to be drunk. The AI thinks it needs to somehow consider or utilize all of that information in a simple response that should be something like "uhhhh. maybe just a b-bit.", but instead it comes out like "I'm doing quite well, considering the circumstances. Drunk or not, I'm still here to help with the tasks at hand." What drunk person talks like that, am I right? I think it would be better to include that information in the prompt only when it's asked about. Like, if someone asks what kind of tools they have, THEN that gets added to the prompt: detect the string "tools" and then add that information.
  4. huggingfaceh4/zephyr-7b-beta appears to get cut off before it finishes its response.
    From what I tested, zephyr seemed to interpret the prompts better and have some personality, but it always seemed to get cut off in game. I'll include below an example of a functioning response to the prompt I already included. It seems to get a little goofy with some prompts.

[code]
Artie Dolittle: "Hey there, Criselda. Always good to see a fellow mechanic on board.
I've been busy fixing up the electric systems in the aft.
Keeping those packs of bioluminescent jellies running smoothly, you know."
[/code]


So now, enough pointing at problems; here are my ideas for solving some of them:
  • Add RNG prompt factors for the "roleplay" prompt.

    So for example, when asking if Artie Dolittle knows how to fix circuit boxes, basically flip a coin or roll a die. If heads, then Artie responds positively/affirmatively; if tails, then he responds negatively. An idea I had was a random number from 0-2: if 0, he responds negatively; if 1, it's a "maybe"; if 2, it's a yes. Sort of think of improv acting rules: you want to give them the ability to add onto an idea and vice versa. A "maybe" allows for further addition, letting the AI respond with something like, "Yeah, but I'm rusty, can you show me how?", leading to further conversation. (There's a sketch of this right after this list.)
  • Don't mention info that's unnecessary to the conversation at hand.

    Like, the AI doesn't need to know they have a screwdriver if they are responding to "Are you drunk?". This is tricky, though, because that info also technically gives the AI stuff to work with, so it's up to your own discretion.
  • Replace "Speaking to you is a crewmate named Criselda Nicolosi, a Professional female mediocre mechanic."

    Replace it with something simpler, possibly, and also maybe try making it so that the AI thinks you're "Criselda Nicolosi".
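Here's a minimal sketch of the RNG idea in Lua, assuming a hypothetical helper that appends an "attitude" line to the prompt the mod already builds. The function and table names are made up for illustration, not taken from the mod:

[code]
-- Hypothetical sketch: roll a random "attitude" for skill questions and
-- append it to the prompt so the answer isn't always an agreeable "yes".
-- math.random(0, 2) -> 0 = no, 1 = maybe, 2 = yes.
local attitudeLines = {
    [0] = "If asked whether you can do a task, admit that you don't really know how.",
    [1] = "If asked whether you can do a task, say you might be able to, but you're rusty or unsure.",
    [2] = "If asked whether you can do a task, say yes confidently.",
}

local function AppendAttitude(prompt)
    local roll = math.random(0, 2)
    return prompt .. " " .. attitudeLines[roll]
end

-- Usage (basePrompt being whatever prompt text the mod already builds):
-- local finalPrompt = AppendAttitude(basePrompt)
[/code]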


Now, this ♥♥♥♥ is clearly not easy and a lot of work went into just getting it all together. So idk I hope this helped out a bit.


Maybe the easiest option is just making it more compatible with huggingfaceh4/zephyr-7b-beta




And once again, ♥♥♥♥♥♥♥ premium work on this mod.


Also, an ambitious suggestion: make it detect commands, threats, and negative/positive interactions and act accordingly in game.


If I tell an NPC "Go ♥♥♥♥ yourself," it would be really funny if they went hostile and beat the ♥♥♥♥ outta me.
Greasy Dick 7 Jan, 2024 @ 4:48pm 
I also noticed zephyr was stopping due to length on the AI/API's side, I think. idfk if that's right or not.

{"choices":[{"message":{"role":"assistant","content":"\nArtie Dolittle: \"Honestly, not great. The w"},"finish_reason":"length"}],"model":"huggingfaceh4/zephyr-7b-beta","usage":{"prompt_tokens":288,"completion_tokens":16,"total_tokens":304},"id":"REDACTED","object":"chat.completion","created":REDACTED}
Greasy Dick 7 Jan, 2024 @ 5:30pm 
Also, I tinkered with it and made it talk like a pirate, which fixed mythomist's robotic speaking. It gave it a million times more personality.

Let's roleplay in the universe of Barotrauma. You don't describe things or actions, just chat as your character. Do not prefix your response with your name, your response should only be what your character would speak. Keep your responses short, up to three sentences. Speak like a stereotypical pirate. Embody the character of Captain Wunt'abee Pegged, a Laid-back male average Captain serving on a Camel class submarine in the Cold Caverns. You are currently onboard a Sontu Habitation Outpost in the Upper aft side. You have these items: crude Revolver. Speaking to you is a crewmate named Blind Peggarr, a Rude male mediocre electrician. Here's the conversation so far for reference, do not repeat something you've already said: Blind Peggarr: \"Aye to that.\" You: \" Clowns, like merfolk and other wee sea creatures, can be quite dangerous so it's best to tread carefully when crossing paths with 'em.\" Blind Peggarr: \"what would yee do if I shot yee?\" You: \" If yer crossin' me Captain, ye'd only be gittin' yourself drowned in the deep. It's not a wise idea to aim at a vessel's supreme commander, but yee might live in another crew's fate.\" Blind Peggarr: \"Do you like getting pegged?\" You: \" Aye, in our line o' work, everyone's gotta pull their weight. But it's much preferable when we can work together as a team without unnecessary challenges.\" This is the current line you should respond to: Do you even know what pegging is?

"Certainly, me hearty, most know pegging as a pirate term meanin' to help one another, in a different way though. So know, it's equally important when we all pull together in our sub's tasks. Blind Peggarr, we'll work in harmony, like a tidal ebb and flow."
RubbingMyAxe  [developer] 7 Jan, 2024 @ 7:28pm 
This is good feedback, since I have not played with the free 7b models much. I primarily use ChatGPT.

1. I will try to experiment without using the word "roleplay". I had good results with that initially, so I have not changed it in a while, but I agree that AIs seem to be too agreeable when you ask them questions.

2. Using the word Barotrauma might not be great for the smaller models, but I think with ChatGPT it gives it a lot of context to draw from about the lore and the setting. If you ask ChatGPT what it knows about Barotrauma, it knows quite a bit (even though some of the information is a little out of date). For example, it knows what the husk virus is without me having to include any information about it in the prompt.

3. gryphe/mythomist-7b has a 32k context window with 2k max output, so I am not sure why you are having cut-off issues. The prompt is nowhere near 32k and the output of 2 or 3 sentences will never reach 2k.

Now, huggingfaceh4/zephyr-7b-beta only has a 4k context window, so if you have a big conversation history or a lot of missions, it could exceed that and the prompt will be cut off. The output shouldn't be getting cut off, though, since that is also 4k.
RubbingMyAxe  [developer] 7 Jan, 2024 @ 8:04pm 
Originally posted by Garfussy Exclusive Gynecologist:
Add RNG prompt factors for the "roleplay" prompt.

So for example, when asking if Artie Dolittle knows how to fix circuit boxes, basically flip a coin or roll a die. If heads, then Artie responds positively/affirmatively; if tails, then he responds negatively. An idea I had was a random number from 0-2: if 0, he responds negatively; if 1, it's a "maybe"; if 2, it's a yes. Sort of think of improv acting rules: you want to give them the ability to add onto an idea and vice versa. A "maybe" allows for further addition, letting the AI respond with something like, "Yeah, but I'm rusty, can you show me how?", leading to further conversation.
This is definitely something I could do, and I agree that it might lead to more dynamic conversations. On the other hand, my focus so far has been making them not give misleading information...

Like right now, if you ask a bot to fix a junction box and he refuses, he will still go fix the junction box, because that's what the game's code will make him do. The response you get from the chat API is not going to change his behavior.
Originally posted by Garfussy Exclusive Gynecologist:
Don't mention info that's unnecessary to the conversation at hand.

Like, the AI doesn't need to know they have a screwdriver if they are responding to "Are you drunk?". This is tricky, though, because that info also technically gives the AI stuff to work with, so it's up to your own discretion.
I agree that this would be good, but it's not so easy... How do I know what the player is saying to know what information I should include? When you type a message to the bot, I have no idea if it's about tools, husk, the submarine, another NPC, etc.

I basically have two options for this:
1. Use some kind of keyword search on the text beforehand to see what the message is about. This might be alright for some minor cases, but overall I don't think it would work very well. (There's a rough sketch of this below.)

2. Run the message through the AI API beforehand and ask it what information should be included. This might work, but it has some drawbacks, like increased token usage, and if the API is slow it might double the time it takes for the final message to go through.
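For reference, here is a rough sketch of what option 1 could look like in Lua. The keyword lists and function names are made up for illustration and are not the mod's actual code:

[code]
-- Hypothetical sketch of option 1: only append inventory/location details
-- when the player's message contains a related keyword.
local topicKeywords = {
    inventory = { "tool", "wrench", "screwdriver", "item", "carrying" },
    location  = { "where", "outpost", "deck", "room" },
}

-- Returns true if the message contains any of the given keywords (plain find).
local function MessageMentions(message, keywords)
    local lower = string.lower(message)
    for _, word in ipairs(keywords) do
        if string.find(lower, word, 1, true) then
            return true
        end
    end
    return false
end

-- Builds only the extra prompt info that seems relevant to the message.
local function BuildExtraInfo(message, inventoryText, locationText)
    local extra = ""
    if MessageMentions(message, topicKeywords.inventory) then
        extra = extra .. " You have these items: " .. inventoryText .. "."
    end
    if MessageMentions(message, topicKeywords.location) then
        extra = extra .. " You are currently " .. locationText .. "."
    end
    return extra
end
[/code]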

On the other hand, including that extra information gives the AI more options. Try asking the bot if it should be armed while drunk, or drink before such an important mission. Its response might involve its inventory or the mission, which is more immersive than just some generic reply.


Originally posted by Garfussy Exclusive Gynecologist:
Replace "Speaking to you is a crewmate named Criselda Nicolosi, a Professional female mediocre mechanic."

Replace it with something simpler, possibly, and also maybe try making it so that the AI thinks you're "Criselda Nicolosi".
I agree the wording isn't too great on that part, but I try to give some information about the speaker for the bot to work with. I will see if I can improve there.

Originally posted by Garfussy Exclusive Gynecologist:
Maybe the easiest option is just making it more compatible with huggingfaceh4/zephyr-7b-beta
Every model is going to have different best practices so this does get complicated. And I don't even know how long these models will be around. That's the main reason I have stuck with ChatGPT as the baseline, since it is stable and will probably be around for a while.

Originally posted by Garfussy Exclusive Gynecologist:
Also, an ambitious suggestion: make it detect commands, threats, and negative/positive interactions and act accordingly in game.


If I tell an NPC "Go ♥♥♥♥ yourself," it would be really funny if they went hostile and beat the ♥♥♥♥ outta me.
This is something I am planning on doing, but it does involve sending the text again to get the AI to interpret it for me.

And it's a lot easier to do with ChatGPT because it has a "toolbox" feature where you can tell it to interpret text and reply in a certain way. For example, if you send a message like "!sam can you follow me outside the submarine?", it could interpret that text into commands and arguments like FOLLOW PLAYER and PREPARE FOR EXPEDITION, which I could then use to issue orders in-game.
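If that "toolbox" feature is OpenAI-style function calling, the request could include a tool definition roughly like the one below, written here as a Lua table that would be JSON-encoded into the chat completion request. The tool name, parameters, and order list are made-up examples, not the mod's actual schema:

[code]
-- Hypothetical sketch of an OpenAI-style "tools" definition for turning chat
-- messages into in-game orders. "function" is quoted because it is a Lua keyword.
local tools = {
    {
        type = "function",
        ["function"] = {
            name = "issue_order",
            description = "Turn the player's chat message into an in-game order for the bot.",
            parameters = {
                type = "object",
                properties = {
                    order = {
                        type = "string",
                        enum = { "FOLLOW_PLAYER", "PREPARE_FOR_EXPEDITION", "FIX_LEAKS", "NONE" },
                    },
                    target = { type = "string", description = "Optional target, e.g. a crew member's name." },
                },
                required = { "order" },
            },
        },
    },
}

-- The model's tool call (e.g. order = "FOLLOW_PLAYER") could then be mapped
-- onto the game's own order system.
[/code]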

So the downside is more tokens and potentially doubling the response time, but I think it would be worth it.
Greasy Dick 7 Jan, 2024 @ 8:09pm 
Cool, I never considered the token usage since I've been using the free stuff. Welp. Best of luck to you. Great work so far.
RubbingMyAxe  [developer] 7 Jan, 2024 @ 8:16pm 
Originally posted by Garfussy Exclusive Gynecologist:
I also noticed zephyr was stopping due to length on the AI/API's side, I think. idfk if that's right or not.

{"choices":[{"message":{"role":"assistant","content":"\nArtie Dolittle: \"Honestly, not great. The w"},"finish_reason":"length"}],"model":"huggingfaceh4/zephyr-7b-beta","usage":{"prompt_tokens":288,"completion_tokens":16,"total_tokens":304},"id":"REDACTED","object":"chat.completion","created":REDACTED}
I will look at this; I am not sure why it is cutting off for length, considering the output was only 16 tokens. Maybe some models default to a really low max output unless you change it in the API request? This should be something I can fix in the next update.
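A sketch of what explicitly setting the output limit could look like, assuming an OpenAI-compatible chat completions body built as a Lua table before JSON encoding (the field names follow the OpenAI/OpenRouter convention, not necessarily the mod's existing request code):

[code]
-- Hypothetical request body with max_tokens set explicitly, so models that
-- default to a very small output limit don't truncate the reply.
local prompt = "..."  -- the prompt text the mod already builds

local requestBody = {
    model = "huggingfaceh4/zephyr-7b-beta",
    max_tokens = 150,  -- plenty of room for "up to three sentences"
    messages = {
        { role = "user", content = prompt },
    },
}
[/code]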
RubbingMyAxe  [developer] 13 Jan, 2024 @ 8:37pm 
I am hearing a lot of good things about OpenChat 3.5, if you want to try that free 7b model out.
Greasy Dick 14 Jan, 2024 @ 7:50pm 
Tried it out, and the responses seem great, but they won't go through into the game, which is strange.

Here's one of them.

{"id":"REDACTED","model":"openchat/openchat-7b","created":REDACTED,"object":"chat.completion","choices":[{"message":{"role":"assistant","content":" Marcy Tootle: I'm alright, just fixing stuff around here. How about you, Tania Parchman?"}}]}

Unlike the other ones I mentioned, this one is clearly finishing its response, but I have no clue why it's not showing up in game. Mythomist is still showing up.
Last edited by Greasy Dick; 14 Jan, 2024 @ 7:50pm
RubbingMyAxe  [developer] 14 Jan, 2024 @ 7:56pm 
Any errors in the console? That is weird, I don't see anything wrong with the format of the response.
Greasy Dick 14 Jan, 2024 @ 8:03pm 
[CL LUA ERROR] USERNAME C:/Users/USERNAME/AppData/Local/Daedalic Entertainment GmbH/Barotrauma/WorkshopMods/Installed/3074046183/Lua/AI_NPCs/AI_NPC.Lua:(794,1-53): attempt to index a nil value
RubbingMyAxe  [developer] 14 Jan, 2024 @ 8:21pm 
Oh, I see the issue. It's not returning the amount of tokens used. I wasn't expecting that so it's erroring out. I will fix that.
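For reference, a defensive pattern that would avoid this kind of error, assuming the decoded response is a Lua table. The names are illustrative; this is not the mod's actual code around line 794:

[code]
-- Hypothetical sketch: treat the "usage" field as optional, since some
-- models/providers omit token counts from the response.
local function GetTotalTokens(response)
    if response and response.usage and response.usage.total_tokens then
        return response.usage.total_tokens
    end
    return 0  -- fall back to zero instead of indexing a nil value
end
[/code]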
Greasy Dick 14 Jan, 2024 @ 8:23pm 
Groovy!