
Graphrag - Private Server - Gemma2/Mistral with Nomic Embeddings running on Ollama with Streamlit

Step by Step Process

Install Miniconda

mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm -rf ~/miniconda3/miniconda.sh

After installing, initialize your newly-installed Miniconda. The following commands initialize for bash and zsh shells:

~/miniconda3/bin/conda init bash
~/miniconda3/bin/conda init zsh

Create your new Environment

conda create -n graphrag python=3.10

Activate Conda Environment

conda activate graphrag

Install Ollama

curl -fsSL https://ollama.com/install.sh | sh

ollama pull mistral

ollama pull nomic-embed-text

Install Graphrag

pip install graphrag

Initialize graphrag

python -m graphrag.index --init --root .
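Initialization writes a settings.yaml and a .env file into the project root. To point the chat model at Ollama's OpenAI-compatible endpoint and the embeddings at Ollama's native API, the generated file typically needs edits along these lines (the key names follow graphrag 0.x defaults; the exact structure may differ in your version, so compare against your generated settings.yaml):

```yaml
llm:
  api_key: ${GRAPHRAG_API_KEY}   # any non-empty string works for Ollama
  type: openai_chat
  model: mistral
  api_base: http://localhost:11434/v1

embeddings:
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding
    model: nomic-embed-text
    api_base: http://localhost:11434/api
```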

Find the embeddings file and amend it

sudo find / -name openai_embeddings_llm.py

Amend the file as follows:

from typing_extensions import Unpack

import ollama

from graphrag.llm.base import BaseLLM
from graphrag.llm.types import (
    EmbeddingInput,
    EmbeddingOutput,
    LLMInput,
)

from .openai_configuration import OpenAIConfiguration
from .types import OpenAIClientTypes


class OpenAIEmbeddingsLLM(BaseLLM[EmbeddingInput, EmbeddingOutput]):
    _client: OpenAIClientTypes
    _configuration: OpenAIConfiguration

    def __init__(self, client: OpenAIClientTypes, configuration: OpenAIConfiguration):
        self._client = client
        self._configuration = configuration

    async def _execute_llm(
        self, input: EmbeddingInput, **kwargs: Unpack[LLMInput]
    ) -> EmbeddingOutput | None:
        # args is kept for compatibility with the original implementation,
        # but the embeddings below are routed through Ollama instead.
        args = {
            "model": self._configuration.model,
            **(kwargs.get("model_parameters") or {}),
        }
        # Embed each input string with Ollama's nomic-embed-text model
        embedding_list = []
        for inp in input:
            embedding = ollama.embeddings(model="nomic-embed-text", prompt=inp)
            embedding_list.append(embedding["embedding"])
        return embedding_list
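The amended _execute_llm simply maps each input string to an Ollama embedding vector. The loop's behavior can be sketched in isolation with a stand-in for ollama.embeddings (the stub below returns fixed 3-dimensional vectors; a real nomic-embed-text call returns 768-dimensional vectors and requires a running Ollama server):

```python
# Stand-in for ollama.embeddings: returns a dict with an "embedding" key,
# mirroring the shape of the real response.
def fake_embeddings(model: str, prompt: str) -> dict:
    # A real call would be: ollama.embeddings(model="nomic-embed-text", prompt=prompt)
    return {"embedding": [float(len(prompt)), 0.0, 1.0]}

def embed_all(texts: list[str]) -> list[list[float]]:
    # Same loop structure as in the amended _execute_llm above
    embedding_list = []
    for inp in texts:
        embedding = fake_embeddings(model="nomic-embed-text", prompt=inp)
        embedding_list.append(embedding["embedding"])
    return embedding_list

vectors = embed_all(["hello", "world!"])
print(vectors)  # one vector per input text
```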

Index your knowledge base (first upload your .txt files to the ./input folder)
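GraphRAG reads plain-text files from the input folder under the project root; a minimal preparation step might look like this (the sample file name and contents are just illustrations):

```shell
mkdir -p ./input
printf 'Your knowledge base text goes here.\n' > ./input/sample.txt
ls ./input
```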

python -m graphrag.index --root .

Test with command line query

python -m graphrag.query --root . --method global "Tell me about FLAST-AI"

Install Streamlit

pip install streamlit

Create Streamlit APP

import streamlit as st
import subprocess
import os

def query_graphrag(query):
  result = subprocess.run(
    ['python', '-m', 'graphrag.query', '--root', '.', '--method', 'global', query],
    capture_output=True, text=True
  )
  return result.stdout, result.stderr

st.title("GraphRAG Query Interface")
user_query = st.text_input("Enter your query:")

if st.button("Submit"):
  output, error = query_graphrag(user_query)
  st.write("Output:")
  st.write(output)
  if error:
    st.write("Error:")
    st.write(error)

Run Streamlit

streamlit run app.py --server.port=8080

Test with curl

curl -X POST \
 -H "Content-Type: application/json" \
 -d '{"model": "nomic-embed-text", "input": "Hello, world!"}' \
 http://localhost:11434/api/embed
