
Draft spec for auto-discovering feedback provider and tab-completer #386


Open · wants to merge 8 commits into base: master

Conversation

daxian-dbw
Member

This is a draft spec for auto-discovering feedback provider and tab-completer. Relevant GitHub issues:


Today, to enable a feedback provider or a tab-completer for a native command,
a user has to import a module or run a script manually, or do that in their profile.
There is no way to auto-discover a feedback provider or a tab-completer for a specific native command.
Collaborator

Should we consider predictors in this set of supported auto-discoverable "things"?

Collaborator

For example, the completion predictor may be a good candidate for auto-registration, since it doesn't really have any commands.

Member Author

There won't be a specific trigger for any predictor, so if needed, a predictor module can just be placed under the _startup_ folder of Feedbacks or Completions.

Thinking about it more, I guess we may want to unify the _startup_ folders from feedbacks and completions, since they will all be loaded at startup. Maybe we should have a startup folder at the same level as feedbacks and completions, so any predictor/feedback provider/tab-completer that needs to be loaded at session startup can be put there.


2. Should we add another key to indicate the target OS?
- A feedback provider may only work on a specific OS, such as the `"WinGet CommandNotFound"` feedback provider only works on Windows.
- Such a key could be handy if a user wants to share the feedback/tab-completer configurations among multiple machines via a cloud drive.
Collaborator

I think this could be good, thinking about the Linux command-not-found predictor. Also, if a user wants to bring this configuration across their different machines, they don't need to have multiple different folder structures. I am not entirely sure how they would necessarily do it, but I know some community folks use external tools to share their $PROFILE across machines.

Member Author

they don't need to have multiple different folder structures?

Folder structures under feedbacks and completions are the same. I guess a user can create a symbolic link to make `<personal>/powershell/feedbacks` or `<personal>/powershell/completions` point to the folders from a cloud drive.
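For illustration, such a symlink could be created like this. The paths below are made up, and creating symbolic links on Windows may require elevation or Developer Mode:

```powershell
# Shared configuration lives on a cloud drive; the local config folders link to it.
$cloud = "$HOME\OneDrive\powershell"     # hypothetical cloud-synced folder
$local = "$HOME\Documents\PowerShell"    # hypothetical local config root

New-Item -ItemType SymbolicLink -Path "$local\feedbacks"   -Target "$cloud\feedbacks"
New-Item -ItemType SymbolicLink -Path "$local\completions" -Target "$cloud\completions"
```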

- A feedback provider may only work on a specific OS, such as the `"WinGet CommandNotFound"` feedback provider only works on Windows.
- Such a key could be handy if a user wants to share the feedback/tab-completer configurations among multiple machines via a cloud drive.

3. Do we really need a folder for each feedback provider?
Collaborator

Is there a reason why this all can't be in a single JSON file, i.e. `feedbackproviders.json`?

Member Author

Tool installers need to easily deploy and remove the auto-discovery configuration. To make that easy, we want to avoid requiring updates to a single shared file.
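For instance, under the per-folder design an installer would own exactly one file, so install and uninstall never rewrite a shared `feedbackproviders.json`. A sketch, with hypothetical paths and configuration content:

```powershell
# Hypothetical per-tool deployment: the installer drops one folder/file of its own.
$dir = "$HOME\Documents\PowerShell\feedbacks\mytool"   # hypothetical location
New-Item -ItemType Directory -Path $dir -Force | Out-Null

@"
@{
    module = 'MyTool.Feedback@1.0.0'   # hypothetical module name
    disable = `$false
}
"@ | Set-Content "$dir\feedback.ps1"

# Uninstall is equally simple, with no shared file to edit:
# Remove-Item -Recurse $dir
```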

@iSazonov
Contributor

I see the key elements here - autodiscover, autoload, and trigger.
All this is already available in PowerShell modules.
Why not just adapt the existing mechanism to meet new needs?

I suggested one option a few years ago in PowerShell/PowerShell#13428.
This allows us to load any necessary extensions for native commands.

For example, if we introduce a naming standard, we can put a `Native-dotnet` function in any module, and the engine (`NativeCommandProcessor`) will trigger finding this module and loading, for example, an assembly/script with a completer for the `dotnet` command.
(If we have to strictly follow the existing naming rules for cmdlets, then the name could be `Get-NativeCommand<name>`, e.g. `Get-NativeCommanddotnet`.)

If we need a CommandNotFound feedback provider for WinGet, we can agree to look for a module with a name like `FeedbackProviderWinGet`, where `FeedbackProvider` is a search key for the scenario.

Everything about modules is familiar and understandable to users, and it can be implemented in small steps.

whose file names should match the names of the commands.
Those completion scripts are loaded only when their corresponding commands' completion is triggered for the first time.

We will have separate directories for feedback providers and tab-completers, for 2 reasons:
Member

This will make it impossible for an Appx app to participate, since such apps can't place files outside of their own install folder.

Member Author

I mentioned Appx and MSIX packages in the "Discussion Points" below. I think for those apps:

  • If they want to provide a tab-completer, it needs to be exposed by running the tool with a special flag, such as `<tool> --ps-completion`; the user can then manually create the "deployment+configuration" for the tool using the output script. Or, even better, the user can just run the output script, which will create the deployment automatically.
  • If they want to provide a feedback provider, I'm not sure how that will be possible. According to Add a way to lazily autoload argument completers PowerShell#17283 (comment), the tool's DLL should not be used by another process, but feedback providers and predictors can only be implemented in binary form.
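A sketch of that flow. The tool name, the `--ps-completion` flag, and the target path are all hypothetical:

```powershell
# Option 1: save the emitted registration script where auto-discovery would find it.
mytool --ps-completion > "$HOME\Documents\PowerShell\completions\mytool.ps1"

# Option 2: run the emitted script directly so it creates the deployment itself.
mytool --ps-completion | Out-String | Invoke-Expression
```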

@andrewducker

Possibly silly question, particularly at this late stage, but since a DLL can contain a bunch of information about how to access it through PowerShell, and in .NET an exe is just another assembly, why not make it queryable for what its capabilities are?
The first time you access the exe, PowerShell can query it to see what commands it supports, and from then on you just get autocompletion.
Effectively, use Import-Module on the EXE, and then direct the commands to the cmdlet classes you just loaded.

Or am I missing something obvious?

@andrewducker

Or am I missing something obvious?

I am! This is for entirely non-PowerShell commands that we want to have autocompletion for. Sorry, my misunderstanding.

if there is any, will be from the same module.
So, when the command becomes available, the feedback provider and/or tab-completer will become available too.

Therefore, the auto-discovery for feedback provider and tab-completer targets native commands only.
Contributor

Just for the record, there's been a lot of discussion in Discord chat where people are going so far as to use dynamic parameters just to try and create argument completion for standalone scripts. It might be worth considering them as well.

Comment on lines +94 to +99
```powershell
@{
    module = '<module-name-or-path>[@<version>]' # Module to load to register the feedback provider.
    arguments = @('<arg1>', '<arg2>')            # Optional arguments for module loading.
    disable = $false                             # Control whether auto-discovery should find this feedback provider.
}
```
Member Author

[discussion point] With this design, a user can specify only one feedback provider for a native command.
But would there be any reason we'd want to allow multiple feedback providers for a single native command?

Note that it's not possible to have multiple completers for a single native command, because it's a 1-to-1 mapping in the native completer table, unless the user provides a custom completer script block that aggregates the completion results from multiple tab-completers.

Given that we don't support it for native completers, I don't think we should support it for feedback providers either.
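For reference, the aggregation workaround mentioned above could look like this: a single user-supplied script block passed to `Register-ArgumentCompleter -Native` that forwards to several inner completers. The command name and the inner completers are placeholders:

```powershell
# Two placeholder completers for the same native command (illustrative only).
$completerA = { param($wordToComplete, $commandAst, $cursorPosition) 'alpha', 'apply' }
$completerB = { param($wordToComplete, $commandAst, $cursorPosition) 'build', 'bump' }

# Only one completer can be registered per native command, so register a single
# script block that calls each inner completer and emits all of their results.
# GetNewClosure() captures $completerA/$completerB for use at completion time.
$aggregate = {
    param($wordToComplete, $commandAst, $cursorPosition)
    foreach ($completer in @($completerA, $completerB)) {
        & $completer $wordToComplete $commandAst $cursorPosition
    }
}.GetNewClosure()

Register-ArgumentCompleter -Native -CommandName mytool -ScriptBlock $aggregate
```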

@{
module = '<module-name-or-path>[@<version>]', # Module to load to register the completer.
script = '<script-path>', # Script to run to register the completer.
arguments = @('<arg1>', '<arg2>'), # Optional arguments for module loading or script invocation.
Member Author

[Discussion point] Do we really need the `arguments` key?
I personally think it provides additional flexibility. Any concerns if we keep this key?

Different discussions:
1. Do we really need a folder for each feedback provider?
- [dongbo] Yes, I think we need.
Appx and MSIX packages on Windows have [many constraints](https://github.com/PowerShell/PowerShell/issues/17283#issuecomment-1522133126) that make it difficult to integrate with a broader plugin ecosystem. The way for such an Appx/MSIX tool to expose tab-completer could be just running the tool with a special flag, such as `<tool> --ps-completion`,
Member Author

[Discussion point] Appx/MSIX package cannot integrate with a broader plugin ecosystem easily [reference]:

  • An MSIX package cannot change $PATH
    • An MSIX package cannot set a global system environment variable of any sort on install
  • An MSIX package cannot write to a user folder (or a system folder outside of its package root) on install.

Additional constraints that make it difficult to integrate Appx/MSIX with a broader plugin ecosystem:

  • MSIX packages are "sealed": you're not supposed to be able to poke at their insides without explicit permission
  • Packages are not designed to have their DLLs used directly in-proc from another process, because that may break the updatability guarantees around files inside the package not being used.

We should support Appx/MSIX packages without requiring manual work from the user. But how do we achieve that given the constraints?


That's trivial. An extensibility model has 2 requirements

  1. Discovery
  2. Access

Both have multiple options with the simplest being Discovery=AppExtension and Access=DynamicDependencies. More details below.

Discovery

A PowerShell extension (be it a completer or other scenarios) needs to advertise that it exists so that PowerShell can discover it. MSIX's standard recommendation is for the packaged app to declare an AppExtension in its manifest; PowerShell then uses the AppExtensionCatalog API to enumerate these extensions and any needed info about them.

Example snippet in appxmanifest.xml

<uap3:Extension
    Category="windows.appExtension">
    <uap3:AppExtension
        Name="com.microsoft.powershell.completer"
        Id="winget.completer"
        PublicFolder="public"
        DisplayName="WinGet Completer"/>
</uap3:Extension>

and sample enumeration

var catalog = AppExtensionCatalog.Open("com.microsoft.powershell.completer");
var extensions = await catalog.FindAllAsync();
foreach (var extension in extensions)
{
    ...use it...
}

Access

Now that you know a package has PowerShell extension(s) of interest, you need to access file(s) in the package. This requires that Windows knows PowerShell is using the package's content. The simplest solution is to use the Dynamic Dependencies API to tell Windows you're using the package's content. This ensures Windows doesn't try to service the package (e.g. updating or removing it) while it's in use. It also adds the package to the PowerShell process' package graph, making its resources available to LoadLibrary, ActivateInstance, and other APIs.

Recommended But Not Only

These are the commonly recommended solutions for Discovery and Access but there are other options.

Discovery Alternatives

For discovery, windows.packageExtension can be a good alternative. The windows.appExtension extension has app scope, i.e. you must declare it under an <Application>, so it can only be defined in Main or Optional packages. windows.packageExtension has package scope (no app required), can be declared in any package (including Resource and Framework packages), and is enumerated with the PackageExtensionCatalog API. This is new in 24H2. If you need downlevel support, the recommendation is to use AppExtension (supported since ~RS1), or to use both: using both supports all package types and a simpler developer experience in some cases, providing the best experience while still providing downlevel support.

Access Alternatives

Dynamic Dependencies enables you to use several technologies to interact with a packaged PowerShell extension, e.g. WinRT in-proc and out-of-proc (ActivateInstance), in-proc DLLs (LoadLibrary), in-proc .NET assemblies (Assembly.Load*()), reading a text file (CreateFile(GetPackagePath() + "\foo.ps1")), and similar.

NOTE: In-proc is generally discouraged for extensibility models given the reliability and security implications. It's technically feasible, of course, and there are times it's necessary (especially for older legacy software), but if you can do better, do better :-)

There are other options if you need to run code from a package -- packaged COM out-of-proc servers, AppServices, etc. Some don't require Dynamic Dependencies, but they bring their own caveats to the table, so while they're supported, the Dynamic Dependencies options are recommended.


And note, the MSIX discovery+access recommendations are also secure. Package content is strongly secured and verifiably trustable. When PowerShell uses a package's content, it can be assured the content is authentic and unmodified from the developer.

Package authors can be Store signed or declare

<uap10:PackageIntegrity>
  <uap10:Content Enforcement="on"/>
</uap10:PackageIntegrity>

and its content is hardened so that even a process running as LocalSystem can't alter the package's files. PowerShell can use various APIs to confirm the package is signed and unmodified, e.g. Package.SignatureKind, Package.Status.VerifyIsOK(), Package.VerifyContentIntegrityAsync(). This can fit well with PowerShell's ExecutionPolicy options.

@iSazonov
Contributor

@daxian-dbw Could you please explain why the PowerShell module infrastructure cannot be used/adapted for auto-discovering feedback providers and tab-completers?
I described one approach earlier. Here is a second one.

All we need to do is

  • Add new triggers. This is unavoidable with any approach.
  • Provide meta information. Here we have alternatives.

We can use the standard psd1 module manifest to expose metadata. For compatibility reasons, we cannot add a new keyword (CompletersToExport), but we can use an existing one, FunctionsToExport, provided that we accept the naming convention.
For example, it could be in psd1:

FunctionsToExport = @(
    'PSTabCompleter-git',
    'PSFeedbackProvider-msiexec'
)

Such a function could return an IArgumentCompleter/IFeedbackProvider and/or register the entity.

This provides, with minimal effort, almost all the capabilities we expect: auto-discovery, lazy loading, signing, versioning, and so on.
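As a sketch of that naming convention, an exported function might register the completer when invoked. The function body below is illustrative, not part of the proposal:

```powershell
# Exported via FunctionsToExport so the engine could discover it by name
# when the 'git' native command first triggers completion.
function PSTabCompleter-git {
    Register-ArgumentCompleter -Native -CommandName git -ScriptBlock {
        param($wordToComplete, $commandAst, $cursorPosition)
        # Illustrative: complete a few common subcommands only.
        'clone', 'commit', 'checkout' | Where-Object { $_ -like "$wordToComplete*" }
    }
}
```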


so for a predictor to be auto-discovered, it has to be loaded at the startup of an interactive session.

Given that, maybe it's better to have a unified location for all the load-at-startup configurations:


MSIX packages can explicitly declare this sort of information e.g.

<uap3:Extension
    Category="windows.appExtension">
    <uap3:AppExtension
        Name="com.microsoft.powershell.completer"
        Id="winget.completer"
        PublicFolder="public"
        DisplayName="WinGet Completer">
        <uap3:Properties>
            <Tab-Completers>
                <Tab-Completer Type="Powershell" Name="foo" File="foo\bar.ps1" />
                <Tab-Completer Type="WinRT" Name="blah" ActivatableClassId="blah.de.blah.de.blah" Load="startup"/>
                <Tab-Completer Type="DLLExport" Name="meh" File="in\proc.dll" Function="CallMe" Discovery="auto"/>
            </Tab-Completers>
        </uap3:Properties>
    </uap3:AppExtension>
</uap3:Extension>

The data under <uap3:Properties> can be as elaborate as you need and desire.

This model can securely provide explicit details and avoid ambiguities (No startup folder? Is that intentional? Accidentally deleted? CHKDSK removed it? Malware tampered?).

This also offers better performance than walking the filesystem and reading/parsing files to determine what to do.

And it ensures clean behavior on uninstall - packages registered for the user are discoverable and cleanly become invisible when the package is deregistered for the user. No concerns about incomplete uninstalls or manual hackery or otherwise 'winrot' leaving the system in inconsistent states.

@daxian-dbw
Member Author

daxian-dbw commented Apr 29, 2025

@iSazonov There are mainly 2 reasons for not choosing to extend the module manifest for auto-discovery/auto-loading of completers, feedback providers, and potentially predictors.

  1. The completer/feedback provider/predictor is not necessarily backed by a fully fledged module. It could be a .ps1 file for a native command completer, or a single DLL serving as a bare binary module. In those cases, there is no fully fledged module in the module path. Steve also described this in the related issue:

    the intent is to enable apps/tools that DON'T have a PowerShell module, they are native commands that want PowerShell integration. We have some partners asking for this, so this isn't a hypothetical situation.

  2. We want to make it easy for users to
    • discover what is deployed for auto-discovery/loading (eagerly or lazily);
    • disable auto-discovery/loading for a particular native command.

There are other reasons too, such as

  • Depending on updating the module manifest means existing modules cannot participate in this new feature, while the described design allows users to quickly enable auto-discovery/loading for existing completer/feedback provider/predictor modules.
  • Complexity and risk of touching the module auto-loading code path; e.g., the module analysis cache and code would need to be updated to accommodate the changes.

@iSazonov
Contributor

@daxian-dbw Thanks for the clarifications! I'm somewhat discouraged as I was expecting arguments like "it's impossible for security reasons" or "it's technically impossible".

I was convinced that modules are the preferred distribution method for public use; the manifests were designed just for convenience. Therefore, I am perplexed that significant effort will be directed not at expanding the capabilities of modules, but at creating yet another config/plug-in model.

I guess the only reason to create new code is if you need to do it quickly and avoid regression. Although I don't see how the implementation of these requirements would affect the existing module functionality.

Also, I don't see the need to enable and disable on the fly. Since we are talking about an interactive session, the user installs the module if he needs it and deletes it otherwise, in seconds. It's always been like this and it wasn't a problem. Or, if you are talking about a public pre-prepared environment where the user has no rights at all, then he won't be able to disable any feature in any case. If he has some rights, then he can save the desired state or configuration. In that case, why not make the solution more general and allow enabling and disabling the loading of any module? The implementation costs are the same, and the benefits are greater.

Depending on updating module manifest means existing modules cannot participate in this new feature, while the described design allows users to quickly enable auto-discovery/load for existing completer/feedback provider/predictor modules.

I don't understand this argument. Creating or updating a manifest file is not a problem at all.

Complexity and risk of touching the module auto-loading code path. e.g. the module analysis cache and code would need to be updated to accommodate the changes.

If we talk about my proposal, there is only a naming convention, i.e. this code does not change.
Although I understand you if the goal is to avoid the risk of regression and to build something new quickly; I believe that is the main factor.

@daxian-dbw
Member Author

daxian-dbw commented Apr 30, 2025

I don't understand this argument. Creating or updating a manifest file is not a problem at all.

@iSazonov not all existing modules will be updated, and we cannot ask users to locally update a module that they don't own.

I don't see the need to enable and disable on the fly. Since we are talking about an interactive session, the user installs the module if he needs it and deletes it otherwise in seconds.

It's very possible that a user needs a module that offers more than a completer/feedback provider/predictor, but just doesn't want the module to participate in auto-discovery/auto-loading for tab completion, feedback, etc.

if the goal is to avoid the risk of regression and create a new one quickly. I believe that this is the main factor.

The main reasons and some other concerns are listed clearly in my last comment. You can read it again to make sure you don't miss any points.


8 participants