If you are interested in knowledge graphs, I did a whole bunch of research and work on fine-tuning Inkbot to create knowledge graphs. The structure returned is proper YAML, and I got much better results with my fine-tune than using GPT4.
https://huggingface.co/Tostino/Inkbot-13B-8k-0.2
Here is an example knowledge graph generated from an article about the Ukraine conflict: https://gist.github.com/Tostino/f6f19e88e39176452c1a765cb7c2caff
Still, it’s sort of cool for us non programmers to be able to do this: https://poe.com/s/MLqxYzcczvnfnUkozR52
Agreed that it is quite cool, but you don’t need to be a programmer to use a custom model.
Inkbot works just fine with ooba or SillyTavern if you want to use a UI, and TheBloke has done quants.
Is your approach to constructing the F/T dataset written up anywhere?
Thanks for sharing the model!
See the info I just posted here: https://www.reddit.com/r/LocalLLaMA/comments/186qq92/comment/kbbpnel/?utm_source=share&utm_medium=web2x&context=3
I haven’t written up anything more comprehensive yet.
Great work! Would you mind sharing the datasets you used and/or how you augmented the data for training?
I’ll give you some better examples; I just didn’t have time right then. Give me a few.
It was trained on a whole bunch of different prompts for each task, so it isn’t reliant on the exact wording of any one training prompt. Set the task in the meta section to “kg”, and the model will respond with a knowledge graph if you ask for one (and sometimes even if you don’t).
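For concreteness, here is a rough sketch of what the prompt header could look like with the task set, based on the <#bot#> tag style used elsewhere in this thread (treat the exact tag names and fields as assumptions; check the model card for the canonical template):

```
<#meta#>
- Date: 2023-12-01
- Task: kg
<#system#>
You are a helpful assistant.
<#chat#>
<#user#>
Create a Knowledge Graph based on the provided document.
<#bot#>
```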
Here are a few of them:
Create a Knowledge Graph based on the provided document.
Create a Knowledge Graph based on the details in the conversation.
Your task is to construct a comprehensive Temporal Knowledge Graph
1. Read and understand the Document: Familiarize yourself with the essential elements, including (but not limited to) ideas, events, people, organizations, impacts, and key points, along with any explicitly mentioned or inferred dates or chronology
- Pretend the date found in 'Date written' is the current date
- Create an inferred chronology (e.g., "before the car crash" or "shortly after police arrived") when exact dates or times are not available
2. Create Nodes: Designate each of the essential elements identified earlier as a node with a unique ID using random letters from the Greek alphabet. Populate each node with relevant details.
3. Establish and Describe Edges: Determine the relationships between nodes, forming the edges of your knowledge graph. For each edge:
- Specify the nodes it connects
- Describe the relationship and its direction
- Assign a confidence level (high, medium, low) indicating the certainty of the connection
4. Represent All Nodes: Make sure all nodes are included in the edge list
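To give a sense of the shape of graph those steps ask for, here is a minimal sketch in the YAML style shown later in the thread (the field names and values are illustrative assumptions, not actual model output; see the linked gists for real examples):

```
nodes:
  - id: Alpha
    event: Construction of the Eiffel Tower
    date: 1889
  - id: Beta
    entity: Gustave Eiffel
edges:
  - from: Beta
    to: Alpha
    relationship: led the construction of
    confidence: high
```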
I haven’t noticed a huge difference in the outcome at inference time depending on prompt used, but sprinkling in some more detailed instructions helped lower loss when training.
As far as the dataset goes, I used a little of the Dolphin dataset so the model wouldn’t lose its usual conversational ability, a little of the SponsorBlock dataset as a seed (which I then improved), and the rest is custom…I spent ~$1k or so on API calls creating it. I plan on releasing it at some point, but I want to improve some aspects of it first.
Total dataset size I used for training is ~85mb.
Alright, here are two full logs; Inkbot generated everything below the <#bot#> response.
Simple prompt: https://gist.github.com/Tostino/c3541f3a01d420e771f66c62014e6a24
Complex prompt: https://gist.github.com/Tostino/44bbc6a6321df5df23ba5b400a01e37d
So in this case, the complex prompt did perform better.
Great work, this is impressive, especially for a 13B model!
Getting it working as well as it does took a significant amount of work, tbh.
For example, one of the tweaks with the biggest impact: you’ll notice the node IDs are all Greek letters. They were originally contextually relevant IDs, like the name of the entity in the graph.
```
- id: Eta
event: Construction of the Eiffel Tower
date: 1889
```
would have been
```
- id: eiffel
event: Construction of the Eiffel Tower
date: 1889
```
But that led to the model relying on context clues from that piece of text rather than being forced to actually look up the data in the knowledge graph during training. Switching to a symbolic-ID approach made the model rely much more on the data in the graph rather than its built-in knowledge.
I was planning on testing that out on my own, but then I ran into this paper: https://arxiv.org/abs/2305.08298, which made me pull the trigger and convert my whole dataset and creation process to support symbolic identifiers.
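The conversion itself is mechanical; here is a hypothetical sketch (not my actual pipeline code) of remapping contextual node IDs to random Greek-letter symbols while keeping edge endpoints consistent:

```python
import random

GREEK = ["Alpha", "Beta", "Gamma", "Delta", "Epsilon", "Zeta", "Eta", "Theta",
         "Iota", "Kappa", "Lambda", "Mu", "Nu", "Xi", "Omicron", "Pi", "Rho",
         "Sigma", "Tau", "Upsilon", "Phi", "Chi", "Psi", "Omega"]

def symbolize_ids(nodes, edges, seed=None):
    """Replace contextual node IDs (e.g. 'eiffel') with unique random Greek
    letters, rewriting edge endpoints to match, so the model must consult the
    graph rather than lean on the ID text. Assumes len(nodes) <= 24."""
    rng = random.Random(seed)
    symbols = rng.sample(GREEK, len(nodes))  # one unique symbol per node
    mapping = {node["id"]: sym for node, sym in zip(nodes, symbols)}
    new_nodes = [{**node, "id": mapping[node["id"]]} for node in nodes]
    new_edges = [{**edge,
                  "from": mapping[edge["from"]],
                  "to": mapping[edge["to"]]} for edge in edges]
    return new_nodes, new_edges

nodes = [{"id": "eiffel", "event": "Construction of the Eiffel Tower", "date": 1889},
         {"id": "gustave", "entity": "Gustave Eiffel"}]
edges = [{"from": "gustave", "to": "eiffel",
          "relationship": "led the construction of", "confidence": "high"}]
new_nodes, new_edges = symbolize_ids(nodes, edges, seed=0)
```

The same mapping is applied to both the node list and the edge list, so relational structure is preserved even though every surface identifier changes.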
Curious if you’ve tried GoLLIE for generating knowledge graphs from text?