Introduction

Welcome to the internal documentation for the Artificial Intelligence (AI) group at HVL. This comprehensive guide is designed to provide our team members with essential information about accessing and utilizing various Large Language Model (LLM) services within our organization.

Purpose of This Documentation

The primary objectives of this documentation are to:

  • Streamline access to LLM services
  • Provide clarity on deployment locations and configurations
  • Ensure consistent usage across projects and teams
  • Facilitate onboarding of new team members
  • Serve as a reference for best practices and troubleshooting

What You'll Find Here

This documentation covers:

  • Detailed instructions for accessing different LLM services
  • Information on deployment environments (e.g., cloud, on-premises)
  • Configuration guidelines and best practices
  • Authentication and security protocols
  • API documentation and usage examples
  • Troubleshooting tips and known issues

We encourage all team members to familiarize themselves with this documentation and to contribute to its ongoing improvement. If you notice any discrepancies or have suggestions for additions, please don't hesitate to reach out to the documentation maintenance team.

Let's leverage our collective knowledge to drive innovation and excellence in AI at HVL!

Available Models for Testing on HVL Azure Cloud

Note that you will need a key to access these APIs. You can acquire a key by contacting an admin.

Meta Llama 3.1

The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 8B, 70B, and 405B sizes (text in/text out). The Llama 3.1 instruction-tuned, text-only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open-source and closed chat models on common industry benchmarks.

Meta Llama 3.1 is deployed and accessible at this address:

https://Meta-Llama-3-1-405B-Instruct-nvn.eastus2.models.ai.azure.com

You can access the chat completions API at /chat/completions or, alternatively, at /v1/chat/completions.
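As a minimal sketch of calling this endpoint, assuming it follows the OpenAI-compatible chat completions schema and accepts the key as a Bearer token (some deployments expect an `api-key` header instead, so check your deployment):

```python
import json
import urllib.request

BASE_URL = "https://Meta-Llama-3-1-405B-Instruct-nvn.eastus2.models.ai.azure.com"

def build_chat_request(api_key: str, messages: list[dict],
                       path: str = "/v1/chat/completions") -> urllib.request.Request:
    """Build an OpenAI-style chat completions request.

    The Bearer auth scheme is an assumption; adjust if your deployment
    uses a different header.
    """
    body = json.dumps({"messages": messages, "max_tokens": 256}).encode("utf-8")
    return urllib.request.Request(
        BASE_URL + path,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumption: Bearer auth
        },
        method="POST",
    )

req = build_chat_request("YOUR_API_KEY", [{"role": "user", "content": "Hello!"}])
# To actually send it (requires a valid key and network access):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same sketch works for the Mistral deployment below; only `BASE_URL` changes.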

Pricing:

  • Input tokens: $0.00533 per 1,000 tokens
  • Output tokens: $0.016 per 1,000 tokens
  • Cost per 1M input + 1M output tokens: $21.33

Mistral Large V2

https://Mistral-large-2407-kfedo.eastus2.models.ai.azure.com

You can access the chat completions API at /chat/completions or, alternatively, at /v1/chat/completions.

Pricing:

  • Input tokens: $0.009 per 1,000 tokens
  • Output tokens: $0.003 per 1,000 tokens
  • Cost per 1M input + 1M output tokens: $12
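The per-million figures follow directly from the per-1,000-token rates. A small cost estimator, with the rates copied from the pricing lists above:

```python
# USD per 1,000 tokens, as listed in the pricing sections above
RATES = {
    "Meta-Llama-3-1-405B-Instruct": {"input": 0.00533, "output": 0.016},
    "Mistral-large-2407": {"input": 0.009, "output": 0.003},
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for the given token counts."""
    r = RATES[model]
    return (input_tokens / 1000) * r["input"] + (output_tokens / 1000) * r["output"]

# Cost of 1M input + 1M output tokens for each model
print(round(cost_usd("Meta-Llama-3-1-405B-Instruct", 1_000_000, 1_000_000), 2))  # 21.33
print(round(cost_usd("Mistral-large-2407", 1_000_000, 1_000_000), 2))  # 12.0
```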

Getting started with Hugging Chat

Clone the repo and install the dependencies using npm, pnpm or bun:

git clone https://github.com/HVL-ML/chat-ui
cd chat-ui
npm install

Once MongoDB is running and .env.local is in place (see below), start the dev server:

npm run dev -- --open

Make sure a MongoDB instance is running, for example via Docker:

docker run -d -p 27017:27017 --name mongo-chatui mongo:latest

Create a .env.local file in the repository root with the following contents (make sure to replace YOUR_API_KEY with a real key):

MODELS=`[
  {
    "name": "Meta-Llama-3-1-405B-Instruct",
    "displayName": "Meta-Llama-3-1-405B-Instruct",
    "chatPromptTemplate": "<s>[INST] <<SYS>>\n{{preprompt}}\n<</SYS>>\n\n{{#each messages}}{{#ifUser}}{{content}} [/INST] {{/ifUser}}{{#ifAssistant}}{{content}} </s><s>[INST] {{/ifAssistant}}{{/each}}",
    "endpoints": [
      {
        "type": "openai",
        "baseURL": "https://Meta-Llama-3-1-405B-Instruct-nvn.eastus2.models.ai.azure.com/v1",
        "apiKey": "YOUR_API_KEY"
      }
    ]
  },
  {
    "name": "Mistral-large-2407",
    "displayName": "Mistral-large-2407",
    "chatPromptTemplate": "<s>{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}} {{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s> {{/ifAssistant}}{{/each}}",
    "endpoints": [
      {
        "type": "openai",
        "baseURL": "https://Mistral-large-2407-kfedo.eastus2.models.ai.azure.com/v1",
        "apiKey": "YOUR_API_KEY"
      }
    ]
  }
]`
MONGODB_URL=mongodb://localhost:27017
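chat-ui reads MODELS as a JSON array, so a stray trailing comma or quoting mistake will break startup. A quick sanity check you can run before starting the server (the extraction logic here is illustrative, not chat-ui's own parser):

```python
import json
import re

def parse_models_env(env_text: str) -> list:
    """Extract the backtick-quoted MODELS value from .env.local text and parse it as JSON."""
    match = re.search(r"MODELS=\s*`(.*?)`", env_text, re.DOTALL)
    if match is None:
        raise ValueError("no MODELS entry found")
    return json.loads(match.group(1))  # raises json.JSONDecodeError on invalid JSON

# Minimal illustrative .env.local fragment
sample = (
    'MODELS=`[{"name": "Meta-Llama-3-1-405B-Instruct", "endpoints": []}]`\n'
    "MONGODB_URL=mongodb://localhost:27017"
)
models = parse_models_env(sample)
print(models[0]["name"])  # Meta-Llama-3-1-405B-Instruct
```

To check your real file, read it with `open(".env.local").read()` and pass the text to `parse_models_env`.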