Debug Sessions in Azure Cognitive Search
Debug Sessions is a visual editor that works with an existing skillset in the Azure portal, exposing the structure and content of a single enriched document as an indexer and skillset produce it, for the duration of the session. Because you are working with a live document, the session is interactive: you can identify errors, modify and invoke skill execution, and validate the results in real time. If your changes resolve the problem, you can commit them to the published skillset to apply the fixes globally.
How a debug session works
When you start a session, the search service creates a copy of the skillset, the indexer, and a data source containing a single document that will be used to test the skillset. All session state is saved to a new blob container that the Azure Cognitive Search service creates in an Azure Storage account you provide. The name of the generated container has the prefix "ms-az-cognitive-search-debugsession". When you choose the target storage account for the debug session data, the portal always asks you to select a container explicitly; no container is preselected. This prevents debug session data from being exported by mistake to a customer-created container that may hold unrelated data.
A cached copy of the enriched document and skillset is loaded into the visual editor so that you can inspect the content and metadata of the enriched document, check each document node, and edit any aspect of the skillset definition. Any changes made within the session are cached and don't affect the published skillset unless you commit them. Committing overwrites the production skillset.
Even if the enrichment pipeline has no errors, you can use a debug session to incrementally enrich a document, testing and validating each change before committing it.
Managing the Debug Session state
After a debug session is created and completes its first run, you can run it again by selecting the Start button. You can cancel a session while it's still executing by selecting the Cancel button, and delete it by selecting the Delete button.
AI Enrichments tab > Skill Graph
The visual editor is organized into tabs and panes. This section introduces the components of the visual editor.
The Skill Graph provides a visual hierarchy of the skillset and its order of execution from top to bottom. Skills that depend on the output of other skills are positioned lower in the graph. Skills at the same level in the hierarchy can execute in parallel. Color-coded labels on the skills in the graph indicate the types of skills being executed in the skillset (TEXT or VISION).
Selecting a skill in the graph will display the details of that instance of the skill in the right pane, including its definition, errors or warnings, and execution history. The Skill Graph is where you will select which skill to debug or enhance. The details pane to the right is where you edit and explore.
Skill details pane
When you select an object in the Skill Graph, the adjacent pane provides interactive work areas in a tabbed layout. An illustration of the details pane can be found in the previous screenshot.
Skill details include the following areas:
- Skill Settings shows a formatted version of the skill definition.
- Skill JSON Editor shows the raw JSON document of the definition.
- Executions shows the data corresponding to each time a skill was executed.
- Errors and warnings shows the messages generated upon session start or refresh.
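As a sketch of what the Skill JSON Editor displays, here is a typical OCR skill definition in the skillset's JSON format (the name, description, and paths are illustrative):

```json
{
  "@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
  "name": "ocr-skill",
  "description": "Extracts text from embedded images",
  "context": "/document/normalized_images/*",
  "defaultLanguageCode": "en",
  "detectOrientation": true,
  "inputs": [
    { "name": "image", "source": "/document/normalized_images/*" }
  ],
  "outputs": [
    { "name": "text", "targetName": "text" }
  ]
}
```

Because the context is `/document/normalized_images/*`, this skill runs once per extracted image, which is why a single skill can show multiple entries in the Executions area.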
On Executions or Skill Settings, select the </> symbol to open the Expression Evaluator, used for viewing and editing the expressions of a skill's inputs and outputs.
Nested input controls in Skill Settings can be used to build complex shapes for projections, output field mappings for a complex type field, or an input to a skill. When combined with the Expression Evaluator, nested inputs make it easy to build, test, and validate expressions.
A skill can execute multiple times in a skillset for a single document. For example, the OCR skill will execute once for each image extracted from a single document. The Executions pane displays the skill's execution history providing a deeper look into each invocation of the skill.
The execution history lets you trace a specific enrichment back to the skill that generated it. Clicking a skill input navigates to the skill that generated that input, providing a stack-trace-like view. This helps you identify the root cause of a problem that manifests in a downstream skill.
When you debug an error with a custom skill, you can generate a request for a skill invocation from the execution history.
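The generated request follows the custom skill web API interface: a JSON body with a "values" array of records, where each record carries a "recordId" and a "data" payload, and the response echoes each recordId with enriched data. A minimal sketch of a handler that processes such a body (the `uppercaseText` enrichment and function name are illustrative; a real custom skill runs behind a Web API endpoint):

```python
import json

def run_custom_skill(request_body: str) -> str:
    """Process a custom skill request body and return a response body.

    Follows the custom skill web API shape: each input record in
    "values" has a "recordId" and "data"; each output record echoes
    the recordId with new "data", plus "errors" and "warnings".
    """
    request = json.loads(request_body)
    results = []
    for record in request["values"]:
        text = record["data"].get("text", "")
        results.append({
            "recordId": record["recordId"],
            "data": {"uppercaseText": text.upper()},  # illustrative enrichment
            "errors": [],
            "warnings": [],
        })
    return json.dumps({"values": results})

# A request similar to what a debug session could generate:
body = json.dumps({"values": [{"recordId": "1", "data": {"text": "hello"}}]})
response = json.loads(run_custom_skill(body))
print(response["values"][0]["data"]["uppercaseText"])  # HELLO
```

Replaying the generated request against a handler like this, outside the indexer, is a quick way to isolate whether a failure originates in the custom skill itself or in the skillset wiring around it.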
AI Enrichments tab > Enriched Data Structure
The Enriched Data Structure pane shows the document's enrichments through the skillset, detailing the context for each enrichment and the originating skill. The Expression Evaluator can also be used to view the contents for each enrichment.
Expression Evaluator gives a quick peek into the value of any path. It allows for editing the path and testing the results before updating any of the inputs or context for a skill or projection.
You can open the window from any node or element that shows the </> symbol, including parts of a dependency graph or nodes in an enrichment tree.
Expression Evaluator gives you full interactive access for testing a skill's context and inputs and for checking its outputs.
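Paths entered in Expression Evaluator use the enrichment tree syntax, rooted at `/document`. A few illustrative examples (the specific node names depend on your skillset):

```
/document/content                      the text extracted from the source document
/document/normalized_images/0/text     OCR output for the first extracted image
/document/pages/*/keyPhrases/*         all key phrases across all pages
```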
Now that you understand the elements of debug sessions, start your first debug session on an existing skillset.