Interface AgenticaContext<Model>

Context of the Nestia A.I. agent.

AgenticaContext is a structure defining the context of the internal agents that compose the Agentica, such as the function selector, executor, and describer. For example, if an agent has been configured to use OpenAI, the context will be delivered to the components below.

  • ChatGptAgent
    • ChatGptInitializeFunctionAgent
    • ChatGptSelectFunctionAgent
    • ChatGptExecuteFunctionAgent
    • ChatGptDescribeFunctionAgent
    • ChatGptCancelFunctionAgent

Also, as its name suggests, it is a context: it contains all the information required to interact with the AI vendor such as OpenAI. It contains every operation for LLM function calling, and the configuration used when constructing the agent. It also contains the prompt histories, and facade functions for interacting with the Agentica, like dispatch.

For these reasons, if you're planning to customize some internal agents, or to add new agents with new process routines, you have to understand this context structure. If you have no plans to customize the internal agents, this context information is not important to you.

Samchon

interface AgenticaContext<Model extends ILlmSchema.Model> {
    config: undefined | IAgenticaConfig<Model>;
    dispatch: (event: AgenticaEvent<Model>) => Promise<void>;
    histories: AgenticaPrompt<Model>[];
    initialize: () => Promise<void>;
    operations: AgenticaOperationCollection<Model>;
    prompt: AgenticaTextPrompt<"user">;
    ready: () => boolean;
    request: (
        source: AgenticaEventSource,
        body: Omit<ChatCompletionCreateParamsStreaming, "model" | "stream">,
    ) => Promise<ReadableStream<ChatCompletionChunk>>;
    stack: AgenticaOperationSelection<Model>[];
}
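Given the fields above, a customized internal routine is essentially an async function over this context. The sketch below illustrates that shape with heavily simplified stand-in types; SketchContext and describeAgent are hypothetical names for illustration, not part of Agentica's API:

```typescript
// Hypothetical sketch: an internal agent routine is an async function
// over the context. These minimal types are stand-ins, not Agentica's API.
interface SketchContext {
  histories: string[];
  prompt: string;
  dispatch: (event: { type: string; text: string }) => Promise<void>;
}

// A trivial "describer"-style routine: reads the user prompt, records it
// in the histories, and dispatches a descriptive event.
async function describeAgent(ctx: SketchContext): Promise<void> {
  ctx.histories.push(ctx.prompt);
  await ctx.dispatch({ type: "describe", text: `echo: ${ctx.prompt}` });
}
```

The real context carries richer structures (typed prompts, operation collections, a streaming request function), but the control flow of a custom routine follows this pattern: read from the context, then report results through dispatch.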

Type Parameters

  • Model extends ILlmSchema.Model

Properties

config: undefined | IAgenticaConfig<Model>

Configuration of the agent.

Configuration of the agent, used when constructing the Agentica instance.

To be documented in detail once the agent customization feature is supported.

dispatch: (event: AgenticaEvent<Model>) => Promise<void>

Dispatch event.

Dispatch an event so that it can be handled through the Agentica.on function.
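The dispatch-to-listener flow can be sketched as a minimal event bus. MiniEvent and MiniBus below are hypothetical stand-ins, not Agentica's actual types:

```typescript
// Minimal sketch of the dispatch -> Agentica.on flow described above.
// MiniEvent and MiniBus are stand-ins for illustration only.
type MiniEvent = { type: string; payload?: unknown };
type MiniListener = (event: MiniEvent) => void | Promise<void>;

class MiniBus {
  private listeners = new Map<string, MiniListener[]>();

  // Analogous to Agentica.on(type, listener): register a handler per event type.
  on(type: string, listener: MiniListener): void {
    const list = this.listeners.get(type) ?? [];
    list.push(listener);
    this.listeners.set(type, list);
  }

  // Analogous to AgenticaContext.dispatch(event): resolves only after
  // every registered listener has finished handling the event.
  async dispatch(event: MiniEvent): Promise<void> {
    for (const listener of this.listeners.get(event.type) ?? [])
      await listener(event);
  }
}
```

Note that dispatch returns a Promise: awaiting it gives the dispatching agent a completion guarantee before it moves on to the next step.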

histories: AgenticaPrompt<Model>[]

Prompt histories.

initialize: () => Promise<void>

Initialize the agent.

operations: AgenticaOperationCollection<Model>

Collection of operations.

Collection of operations from every controller, with their groups composed by the divide and conquer rule for efficient operation selection, if configured.
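The divide and conquer grouping can be illustrated with a simple chunking sketch. The function name `divideOperations` and the `capacity` parameter are assumptions for illustration, not Agentica's actual API:

```typescript
// Sketch: splitting a flat operation list into groups of bounded size,
// a simple instance of the divide-and-conquer grouping described above.
// `divideOperations` and `capacity` are hypothetical names.
function divideOperations<T>(operations: T[], capacity: number): T[][] {
  const groups: T[][] = [];
  for (let i = 0; i < operations.length; i += capacity)
    groups.push(operations.slice(i, i + capacity));
  return groups;
}
```

Bounding each group's size keeps the candidate list shown to the LLM small, so the selector can narrow down operations group by group instead of scanning every controller's functions at once.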

prompt: AgenticaTextPrompt<"user">

Text prompt of the user.

Text conversation written by the user through the Agentica.conversate function.

ready: () => boolean

Whether the agent is ready.

Returns a boolean value indicating whether the agent is ready to perform function calling.

If AgenticaContext.initialize has been called, it returns true; otherwise, it returns false.
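The ready/initialize contract can be sketched with a minimal stand-in; MiniContext and ensureReady below are simplified assumptions, not the real AgenticaContext:

```typescript
// Minimal stand-in for the ready/initialize contract described above.
interface MiniContext {
  ready: () => boolean;
  initialize: () => Promise<void>;
}

function createMiniContext(): MiniContext {
  let initialized = false;
  return {
    ready: () => initialized,
    initialize: async () => {
      initialized = true; // the real agent would also warm up its state here
    },
  };
}

// Lazily initialize exactly once before performing function calling.
async function ensureReady(ctx: MiniContext): Promise<void> {
  if (!ctx.ready()) await ctx.initialize();
}
```

This guard pattern is how an internal routine would typically use the pair: check ready first, and only pay the initialization cost on the first call.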

request: (
    source: AgenticaEventSource,
    body: Omit<ChatCompletionCreateParamsStreaming, "model" | "stream">,
) => Promise<ReadableStream<ChatCompletionChunk>>

Request to the OpenAI server.

Type declaration

    • (
          source: AgenticaEventSource,
          body: Omit<ChatCompletionCreateParamsStreaming, "model" | "stream">,
      ): Promise<ReadableStream<ChatCompletionChunk>>
    • Parameters

      • source: AgenticaEventSource

        The source agent of the request

      • body: Omit<ChatCompletionCreateParamsStreaming, "model" | "stream">

        The request body to the OpenAI server

      Returns Promise<ReadableStream<ChatCompletionChunk>>

      Response from the OpenAI server
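Consuming the returned ReadableStream<ChatCompletionChunk> might look like the sketch below. MiniChunk is a simplified assumption of OpenAI's streaming chunk shape, and collectText is a hypothetical helper:

```typescript
// Sketch: draining the stream returned by context.request() and
// concatenating the delta text. MiniChunk is a simplified assumption
// of OpenAI's ChatCompletionChunk shape.
interface MiniChunk {
  choices: { delta: { content?: string } }[];
}

async function collectText(stream: ReadableStream<MiniChunk>): Promise<string> {
  const reader = stream.getReader();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break; // stream exhausted
    text += value.choices[0]?.delta.content ?? "";
  }
  return text;
}
```

In practice an internal agent would inspect each chunk as it arrives (for tool calls, finish reasons, etc.) rather than only concatenating text, but the reader loop is the same.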

stack: AgenticaOperationSelection<Model>[]

Stacked operations.

In other words, the list of candidate operations for LLM function calling.
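The stack's role in the select/cancel lifecycle can be sketched as below; MiniSelection and MiniOperationStack are hypothetical names for illustration:

```typescript
// Hypothetical sketch of maintaining the candidate-operation stack:
// a "select" pushes a candidate, a "cancel" removes it again.
interface MiniSelection {
  name: string;   // operation (function) name
  reason: string; // why the selector chose it
}

class MiniOperationStack {
  readonly items: MiniSelection[] = [];

  // Roughly what a select-function agent does with a chosen operation.
  select(selection: MiniSelection): void {
    this.items.push(selection);
  }

  // Roughly what a cancel-function agent does with a withdrawn operation.
  cancel(name: string): void {
    const index = this.items.findIndex((s) => s.name === name);
    if (index !== -1) this.items.splice(index, 1);
  }
}
```

Whatever remains on the stack is what the executor considers for actual LLM function calling.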