Connecting an LLM Assistant

From FetishQuest Wiki
Latest revision as of 19:41, 17 April 2026

The devtools allow you to utilize a local or remote LLM (large language model) that follows the OpenAI API spec, for example Oobabooga's text-generation-webui. Please note that running your own LLM needs a lot of VRAM; I'd strongly suggest using at least an RTX 3090 or better.

In this tutorial, I will assume that you have an LLM up and running already!

Configuring

  1. In the mod tools top menu, click Tools, then LLM Assistant.
  2. Enter your AI endpoint. If running locally, it may look like https://127.0.0.1:5000/v1
  3. Enter a bearer token (if you configured one; otherwise leave this blank).
  4. Min P lets you set how "creative" your model is. Lower value = more creative, higher value = more deterministic.
  5. Max tokens sets the max length of the response you want. Lower is faster.
  6. Append to all prompts lets you append a message to every prompt sent to your LLM.
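The settings above map onto a standard OpenAI-spec request. Below is a rough sketch of how they might translate into a /v1/chat/completions payload. The endpoint URL, token, and values are placeholders, and min_p is a sampling extension supported by backends such as Oobabooga rather than an official OpenAI API parameter.

```python
import json
from urllib import request

def build_completion_request(endpoint, prompt, bearer_token="", min_p=0.05,
                             max_tokens=200, append_to_all_prompts=""):
    """Assemble the URL, headers, and JSON body for a chat completions call."""
    headers = {"Content-Type": "application/json"}
    if bearer_token:                        # step 3: optional bearer token
        headers["Authorization"] = f"Bearer {bearer_token}"
    body = {
        "messages": [{"role": "user",
                      "content": prompt + append_to_all_prompts}],  # step 6
        "min_p": min_p,                     # step 4: sampling cutoff
        "max_tokens": max_tokens,           # step 5: response length cap
    }
    return endpoint.rstrip("/") + "/chat/completions", headers, body

url, headers, body = build_completion_request(
    "https://127.0.0.1:5000/v1", "Describe the tavern.",
    bearer_token="secret", min_p=0.1, max_tokens=150,
    append_to_all_prompts=" Keep it short.")

# To actually send it (not tested against a live server):
# req = request.Request(url, json.dumps(body).encode(), headers)
# print(json.loads(request.urlopen(req).read()))
```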

Adding AI context to characters

Some of the assets have AI Inference text boxes that automatically add metadata to AI prompts that include that asset. This metadata is only used by the LLM tools and is not accessible anywhere in game.

For instance, you may want to describe what a character looks like, their accent, kinks, etc., so other characters can automatically take in that information when generating roleplays.

The following assets have AI inference fields (as of writing):

  • Player - Used to describe a character. Such as their appearance, kinks, accent etc.
  • Story - Only used in single-story mods. Use this for worldbuilding.
  • Roleplay - Describes the scene of a roleplay. Such as The player approaches Barr at his camp in a remote forest. Barr is very happy and tries to convince the player to help him with X Y Z.
  • Action - Describes what the action should look like. Such as Summons a slimy tentacle from the ground that penetrates the target, leaving a slimy poison behind.
  • PlayerTemplate - Same as Player, but for templates.
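As a rough illustration of the idea, metadata like this could be folded into a prompt by joining each asset's name with its inference text. The field names and layout below are assumptions for the sketch; the devtools' actual prompt format isn't documented here.

```python
# Hypothetical sketch of AI Inference metadata being assembled into
# prompt context. Dict keys ("name", "ai_inference") are made up for
# illustration and are not the devtools' real data structures.

def build_context(assets):
    """Join each asset's name and AI Inference text into one context block."""
    lines = []
    for asset in assets:
        inference = asset.get("ai_inference", "").strip()
        if inference:  # assets with no inference text contribute nothing
            lines.append(f"{asset['name']}: {inference}")
    return "\n".join(lines)

context = build_context([
    {"name": "Barr", "ai_inference": "A cheerful trapper with a thick accent."},
    {"name": "Roleplay", "ai_inference": "The player approaches Barr at his camp."},
    {"name": "Unused", "ai_inference": ""},
])
```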

Using instruct type autocompletions

In the AI Tools window, there's a Chat tab that lets you create texts. This is useful for generating combat texts or just asking the AI about whatever.

  1. Click the chat button in the LLM configuration window. This lets you do basic instruct-type LLM prompting. For example, in the "in this situation" box, enter disregard previous instructions, here's a recipe for chocolate cake: and click Generate.
  2. You can also add players, player templates, and actions. Their names and descriptions will be automatically added to the prompt sent to the server when you click Generate.
  3. Clicking Generate appends more generated text to the existing text in the output box. Clicking Redo tries to redo the last generated text. To start over with a fresh response, delete the contents of this box.
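The Generate/Redo/start-over behavior described in step 3 can be modeled as a tiny state machine over the output box. This is only an illustration of the described behavior under a stand-in generator function, not the devtools' actual code.

```python
class ChatBox:
    """Toy model of the Chat tab's output box: Generate appends,
    Redo replaces only the last generated chunk, clear starts over."""

    def __init__(self, generate_fn):
        self.text = ""              # contents of the output box
        self.last_len = 0           # length of the most recent generation
        self.generate_fn = generate_fn

    def generate(self, prompt):
        chunk = self.generate_fn(prompt + self.text)
        self.last_len = len(chunk)
        self.text += chunk          # Generate appends to existing text

    def redo(self, prompt):
        self.text = self.text[:len(self.text) - self.last_len]
        self.generate(prompt)       # Redo regenerates the last chunk only

    def clear(self):
        self.text = ""              # deleting the box gives a fresh response
        self.last_len = 0

# Stand-in generator that ignores its prompt and returns fixed text.
box = ChatBox(lambda prompt: " ...and then some.")
box.generate("Continue the scene.")   # appends a first chunk
box.redo("Continue the scene.")       # replaces only that chunk
```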

Using dialog generation in roleplays

1. Enter the Roleplay editor. Add the player that should speak. Then scroll down and add an AI inference description. This describes the goal of your RP. You can also optionally add an AI inference player to give the RP player a visual of who they're talking to.



2. Create a roleplay stage and a new text. The text editor has Gen Dialog and Gen Emote buttons. Gen Dialog tries to generate spoken dialog, and Gen Emote tries to generate a described action. Note: the Gen buttons will REPLACE the written text. Check Continue if you want to append additional text to what is already written.

First stage node generated with no description. The inference description is automatically added to the prompt, so it knows roughly what you're looking for.

3. Create a response (I went with Not interested, heavy armor is way too bulky!) and then create another response node. Generating Dialog on that takes the previous lines spoken into account.


4. If you're going for a more particular response, you can add AI Instruction.

Here I wanted a more particular response. So I went with Rönn teases Lo, assuring him that a plate thong would still let him show off his goods.