-
Open the project you created in 03 Add Plugin (Function Call) in VS Code or Visual Studio.
-
Install the Microsoft.Extensions.Logging NuGet package:
dotnet add package Microsoft.Extensions.Logging
-
Install the Microsoft.Extensions.Logging.Console NuGet package:
dotnet add package Microsoft.Extensions.Logging.Console
-
Add the following using statements at the top of the Program.cs file:
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
-
Add logging services to the builder before initializing the kernel:
// Add logging services to the builder
builder.Services.AddLogging(b => b.AddConsole().SetMinimumLevel(LogLevel.Trace));
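If it helps to see where this line fits, here is a minimal sketch of the kernel setup with logging registered before the kernel is built. The model id and API key are placeholders, not values from this tutorial:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;

// Create the kernel builder (model id and API key below are placeholders)
var builder = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion("gpt-4o-mini", "YOUR_API_KEY");

// Register console logging before calling Build();
// Trace is the most verbose level and includes prompt and settings details
builder.Services.AddLogging(b => b.AddConsole().SetMinimumLevel(LogLevel.Trace));

var kernel = builder.Build();
```

Because the kernel resolves its logger factory from the service collection, the registration must happen before `builder.Build()` is called.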
-
Run the application by entering
dotnet run
into the terminal. Experiment with the user prompt "Hello"; you will get output similar to the following:

Q: Hello
trce: Microsoft.SemanticKernel.Connectors.OpenAI.OpenAIChatCompletionService[0]
      ChatHistory: [{"Role":{"Label":"user"},"Items":[{"$type":"TextContent","Text":"Hello"}]}], Settings: {"temperature":1,"top_p":1,"presence_penalty":0,"frequency_penalty":0,"max_tokens":null,"stop_sequences":null,"results_per_prompt":1,"seed":null,"response_format":null,"chat_system_prompt":null,"token_selection_biases":null,"ToolCallBehavior":null,"User":null,"logprobs":null,"top_logprobs":null,"model_id":null}
info: Microsoft.SemanticKernel.Connectors.OpenAI.OpenAIChatCompletionService[0]
      Prompt tokens: 8. Completion tokens: 9. Total tokens: 17.
Hello! How can I help you today?
Q:
Note: In the console output, notice the log entries that provide detailed information about the model settings and token usage.
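If the Trace output is too noisy for everyday use, you can raise the minimum level when registering logging. Based on the output above, the ChatHistory/Settings dump is logged at Trace ("trce:") while the token-count summary is logged at Information ("info:"), so for example:

```csharp
// Information suppresses the trce: ChatHistory/Settings lines
// while keeping the info: token-count summary
builder.Services.AddLogging(b => b.AddConsole().SetMinimumLevel(LogLevel.Information));
```

Set the level back to LogLevel.Trace whenever you need to inspect the full prompt and settings sent to the model.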
View the completed sample in the 04 Add Logging project.