What are Conversations?

Conversations in Noxus represent interactive chat sessions with AI models. They provide a structured way to build chatbots, virtual assistants, and other conversational AI applications, with support for multiple AI models, custom tools, context management, and file handling.

Key Features

Multi-Model Support

Choose from various AI models including GPT-4, Claude, and more

Tool Integration

Enhance conversations with web search, knowledge bases, and custom tools

Context Management

Maintain conversation history and context across multiple interactions

File Handling

Process and discuss documents, images, and other file types

Async Support

Build responsive applications with asynchronous operations

Customizable Settings

Fine-tune model parameters, temperature, tokens, and behavior

Core Concepts

Conversation Settings

Every conversation is configured with settings that control its behavior:
from noxus_sdk.resources.conversations import ConversationSettings

settings = ConversationSettings(
    model=["gpt-4o-mini"],           # AI model(s) to use
    temperature=0.7,                 # Creativity level (0.0-1.0)
    max_tokens=500,                  # Maximum response length
    tools=[],                        # Available tools
    extra_instructions="Be helpful"  # Additional instructions
)

Message Flow

Conversations follow a simple request-response pattern:
  1. Create a conversation with specific settings
  2. Send messages with text, files, or tool requests
  3. Receive AI responses with generated content
  4. Continue the conversation with follow-up messages

Basic Usage

Creating a Simple Conversation

from noxus_sdk.client import Client
from noxus_sdk.resources.conversations import ConversationSettings, MessageRequest

# Initialize client
client = Client(api_key="your_api_key_here")

# Configure conversation settings
settings = ConversationSettings(
    model=["gpt-4o-mini"],
    temperature=0.7,
    max_tokens=150,
    extra_instructions="You are a helpful assistant. Be concise and friendly."
)

# Create conversation
conversation = client.conversations.create(
    name="My Chat Assistant",
    settings=settings
)

print(f"Created conversation: {conversation.id}")

Sending Messages

# Send a simple text message
message = MessageRequest(content="Hello! How can you help me today?")
response = conversation.add_message(message)

print(f"AI Response: {response.message_parts}")

# Continue the conversation
follow_up = MessageRequest(content="Can you explain quantum computing in simple terms?")
response = conversation.add_message(follow_up)

print(f"AI Response: {response.message_parts}")

Working with Files

import base64

# Prepare file for upload
with open("document.pdf", "rb") as file:
    file_content = base64.b64encode(file.read()).decode("utf-8")

from noxus_sdk.resources.conversations import ConversationFile

# Create file object
conversation_file = ConversationFile(
    name="document.pdf",
    status="success",
    b64_content=file_content
)

# Send message with file
message = MessageRequest(
    content="Please summarize this document",
    files=[conversation_file]
)

response = conversation.add_message(message)
print(f"Document Summary: {response.message_parts}")

Conversation Tools

Tools extend conversation capabilities beyond basic text generation:

Available Tools

Web Research
Enable the AI to search the web for current information:

from noxus_sdk.resources.conversations import WebResearchTool

web_tool = WebResearchTool(
    enabled=True,
    extra_instructions="Focus on recent and reliable sources"
)

Knowledge Base QA
Access your knowledge bases for specialized information:

from noxus_sdk.resources.conversations import KnowledgeBaseQaTool

kb_tool = KnowledgeBaseQaTool(
    enabled=True,
    kb_id="your_knowledge_base_id",
    extra_instructions="Provide detailed answers with citations"
)

Workflow
Execute workflows from within conversations:

from noxus_sdk.resources.conversations import WorkflowTool

workflow_tool = WorkflowTool(
    enabled=True,
    workflow={
        "id": "workflow_id",
        "name": "Data Processor",
        "description": "Process and analyze data"
    }
)

Noxus QA
Get help with Noxus platform features:

from noxus_sdk.resources.conversations import NoxusQaTool

noxus_tool = NoxusQaTool(
    enabled=True,
    extra_instructions="Explain features clearly with examples"
)

Using Tools in Conversations

from noxus_sdk.resources.conversations import (
    ConversationSettings,
    WebResearchTool,
    KnowledgeBaseQaTool
)

# Configure tools
web_research = WebResearchTool(
    enabled=True,
    extra_instructions="Focus on recent developments and reliable sources"
)

kb_tool = KnowledgeBaseQaTool(
    enabled=True,
    kb_id="company_kb_id",
    extra_instructions="Reference company policies and procedures"
)

# Create conversation with tools
settings = ConversationSettings(
    model=["gpt-4o-mini"],
    temperature=0.7,
    max_tokens=300,
    tools=[web_research, kb_tool],
    extra_instructions="Use tools when needed to provide accurate, up-to-date information"
)

conversation = client.conversations.create(
    name="Research Assistant",
    settings=settings
)

# Ask questions that trigger tool usage
message = MessageRequest(
    content="What are the latest developments in renewable energy technology?",
    tool="web_research"  # Explicitly request web research
)

response = conversation.add_message(message)
print(f"Research Results: {response.message_parts}")

Advanced Features

Asynchronous Operations

For high-performance applications:
import asyncio

async def async_conversation_example():
    client = Client(api_key="your_api_key_here")

    # Create conversation asynchronously
    settings = ConversationSettings(
        model=["gpt-4o-mini"],
        temperature=0.7,
        max_tokens=200
    )

    conversation = await client.conversations.acreate(
        name="Async Chat",
        settings=settings
    )

    # Send message asynchronously
    message = MessageRequest(content="Explain machine learning briefly")
    response = await conversation.aadd_message(message)

    return response.message_parts

# Run async conversation
result = asyncio.run(async_conversation_example())
print(result)

Conversation Management

# List all conversations
conversations = client.conversations.list(page=1, page_size=10)
for conv in conversations:
    print(f"Conversation: {conv.name} (ID: {conv.id})")

# Get specific conversation
conversation = client.conversations.get("conversation_id")

# Get conversation messages
messages = conversation.get_messages()
for msg in messages:
    print(f"Message: {msg.content[:50]}...")

# Update conversation settings
new_settings = ConversationSettings(
    model=["gpt-4o"],  # Upgrade to more powerful model
    temperature=0.5,
    max_tokens=400
)

updated_conversation = client.conversations.update(
    conversation_id=conversation.id,
    name="Updated Chat",
    settings=new_settings
)

# Delete conversation
client.conversations.delete(conversation_id=conversation.id)

Use Cases

Customer Support

Build intelligent support bots that can access knowledge bases and escalate to humans

Content Creation

Create writing assistants that help with blogs, marketing copy, and documentation

Research Assistant

Build tools that can search the web and analyze documents for insights

Educational Tutor

Create personalized learning experiences with adaptive questioning

Code Assistant

Build programming helpers that can explain code and suggest improvements

Data Analyst

Create assistants that can interpret data and generate reports

Conversation Patterns

Question-Answer Bot

# Simple Q&A bot with knowledge base
qa_settings = ConversationSettings(
    model=["gpt-4o-mini"],
    temperature=0.3,  # Lower temperature for factual responses
    tools=[KnowledgeBaseQaTool(
        enabled=True,
        kb_id="faq_kb_id",
        extra_instructions="Provide accurate answers based on the knowledge base"
    )],
    extra_instructions="You are a helpful FAQ bot. Always check the knowledge base first."
)

qa_bot = client.conversations.create(name="FAQ Bot", settings=qa_settings)

Research Assistant

# Research assistant with web access
research_settings = ConversationSettings(
    model=["gpt-4o"],
    temperature=0.4,
    tools=[
        WebResearchTool(
            enabled=True,
            extra_instructions="Use recent, authoritative sources"
        )
    ],
    extra_instructions="You are a research assistant. Always cite your sources and provide comprehensive answers."
)

research_assistant = client.conversations.create(
    name="Research Assistant",
    settings=research_settings
)

Multi-Tool Assistant

# Comprehensive assistant with multiple tools
multi_tool_settings = ConversationSettings(
    model=["gpt-4o"],
    temperature=0.6,
    tools=[
        WebResearchTool(enabled=True),
        KnowledgeBaseQaTool(
            enabled=True,
            kb_id="company_kb_id"
        ),
        WorkflowTool(
            enabled=True,
            workflow={
                "id": "data_analysis_workflow_id",
                "name": "Data Analyzer",
                "description": "Analyze datasets and generate insights"
            }
        )
    ],
    extra_instructions="You are a versatile assistant. Use the most appropriate tool for each request."
)

multi_assistant = client.conversations.create(
    name="Multi-Tool Assistant",
    settings=multi_tool_settings
)

Best Practices

Choose the right model for your use case:
  • gpt-4o-mini: Fast, cost-effective for simple tasks
  • gpt-4o: More capable for complex reasoning
  • claude-3-sonnet: Good balance of speed and capability

# For simple Q&A
settings = ConversationSettings(model=["gpt-4o-mini"])

# For complex analysis
settings = ConversationSettings(model=["gpt-4o"])

Adjust creativity based on your needs:
# Factual, consistent responses
settings = ConversationSettings(temperature=0.1)

# Balanced responses
settings = ConversationSettings(temperature=0.7)

# Creative responses
settings = ConversationSettings(temperature=0.9)

Manage conversation length and context:
# Limit response length
settings = ConversationSettings(max_tokens=300)

# Provide clear instructions
settings = ConversationSettings(
    extra_instructions="Keep responses under 100 words. Be direct and helpful."
)

Implement robust error handling:

def safe_send(conversation, message):
    """Send a message, returning a fallback reply on failure."""
    try:
        response = conversation.add_message(message)
        return response.message_parts
    except Exception as e:
        print(f"Error in conversation: {e}")
        return "I'm sorry, I encountered an error. Please try again."
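Transient API failures are often worth retrying before giving up. A minimal sketch of an exponential-backoff wrapper (`send_with_retry` is illustrative, not part of the SDK):

```python
import time

def send_with_retry(send_fn, message, retries=3, base_delay=1.0):
    """Call send_fn(message), retrying with exponential backoff on failure."""
    for attempt in range(retries):
        try:
            return send_fn(message)
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries; surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))

# Usage: response = send_with_retry(conversation.add_message, message)
```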

Performance Considerations

Response Time

  • Use faster models for simple tasks
  • Set appropriate max_tokens limits
  • Consider async operations for multiple conversations
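When several conversations must be driven at once, the async methods shown above can be fanned out with asyncio.gather. A sketch (the helper name is ours; aadd_message is the SDK call from the async example):

```python
import asyncio

async def gather_responses(send_coroutines):
    """Run several message-sending coroutines concurrently,
    returning their responses in the same order."""
    return await asyncio.gather(*send_coroutines)

# Usage (assuming two conversations created with acreate):
# responses = asyncio.run(gather_responses([
#     conv_a.aadd_message(MessageRequest(content="Summarize report A")),
#     conv_b.aadd_message(MessageRequest(content="Summarize report B")),
# ]))
```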

Cost Optimization

  • Choose cost-effective models when possible
  • Limit token usage with max_tokens
  • Use tools strategically to reduce model calls

Context Length

  • Monitor conversation length
  • Implement context pruning for long conversations
  • Use summarization for context compression
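Context pruning can be sketched as a small client-side helper that caps how much history you keep or replay. This is an illustration of the idea, not an SDK feature; it keeps the first message (often instructions) plus the most recent ones:

```python
def prune_context(messages, max_messages=20, keep_first=True):
    """Trim a message history to at most max_messages entries,
    optionally preserving the first message (e.g. instructions)."""
    if len(messages) <= max_messages:
        return list(messages)
    if keep_first:
        return [messages[0]] + list(messages[-(max_messages - 1):])
    return list(messages[-max_messages:])
```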

Tool Usage

  • Enable only necessary tools
  • Provide clear tool instructions
  • Monitor tool usage and costs
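One lightweight way to monitor tool usage is to tally the tool field on the messages your application sends (count_tool_requests is an illustrative helper, not part of the SDK):

```python
def count_tool_requests(messages):
    """Tally how often each tool name appears across sent messages."""
    counts = {}
    for msg in messages:
        tool = getattr(msg, "tool", None)  # tool is optional on MessageRequest
        if tool:
            counts[tool] = counts.get(tool, 0) + 1
    return counts
```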

Next Steps