yaze::cli::OllamaConfig Struct Reference

#include <ollama_ai_service.h>
Public Attributes

std::string  base_url = "http://localhost:11434"
std::string  model = "qwen2.5-coder:7b"
float        temperature = 0.1
int          max_tokens = 2048
std::string  system_prompt
bool         use_enhanced_prompting = true
Definition at line 16 of file ollama_ai_service.h.
Member Data Documentation

std::string yaze::cli::OllamaConfig::base_url = "http://localhost:11434"
Definition at line 17 of file ollama_ai_service.h.
Referenced by yaze::cli::OllamaAIService::CheckAvailability(), yaze::cli::CreateAIService(), yaze::cli::OllamaAIService::GenerateResponse(), and yaze::cli::OllamaAIService::ListAvailableModels().
std::string yaze::cli::OllamaConfig::model = "qwen2.5-coder:7b"
Definition at line 18 of file ollama_ai_service.h.
Referenced by yaze::cli::OllamaAIService::CheckAvailability(), yaze::cli::CreateAIService(), and yaze::cli::OllamaAIService::GenerateResponse().
float yaze::cli::OllamaConfig::temperature = 0.1
Definition at line 19 of file ollama_ai_service.h.
Referenced by yaze::cli::OllamaAIService::GenerateResponse().
int yaze::cli::OllamaConfig::max_tokens = 2048
Definition at line 20 of file ollama_ai_service.h.
Referenced by yaze::cli::OllamaAIService::GenerateResponse().
std::string yaze::cli::OllamaConfig::system_prompt
Definition at line 21 of file ollama_ai_service.h.
Referenced by yaze::cli::OllamaAIService::GenerateResponse(), and yaze::cli::OllamaAIService::OllamaAIService().
bool yaze::cli::OllamaConfig::use_enhanced_prompting = true
Definition at line 22 of file ollama_ai_service.h.
Referenced by yaze::cli::OllamaAIService::OllamaAIService().