Local-Test-Validator Plugins

Problem: Local testing is difficult because mainnet-beta programs are not available locally

Many people give up on local testing because the programs they depend on in production don’t exist there. There is a way to load each program into a local environment, but it is cumbersome, and everyone ends up reinventing the wheel when loading a commonly used program set such as Metaplex for NFTs.

Not only that, but a decent number of programs, such as Openbook, require web2 infrastructure to work correctly. Without good documentation, developers cannot correctly test these programs locally.

Solution: A Helm-like plugin framework for loading programs

If someone could download plugins to load programs and run them locally, local testing would be much easier.

On startup, we run solana-test-validator. To load individual programs, we pass --bpf-program for each program we’d like to load, e.g. --bpf-program 9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin binary.so. Anchor makes this a little easier by letting you list the programs in the Anchor.toml file.
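For reference, a sketch of how this might look in Anchor configuration. The section and field names below are based on Anchor’s test configuration and may differ across Anchor versions; the addresses and paths are placeholders:

```toml
# Anchor.toml (sketch): load a prebuilt program binary into the test
# validator at genesis, and clone an account from a public cluster.
[[test.genesis]]
address = "9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin"
program = "binary.so"

[test.validator]
url = "https://api.mainnet-beta.solana.com"

# Placeholder address: clone an account from the cluster above.
[[test.validator.clone]]
address = "someAccountAddress"
```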

This brings us closer to the solution, but it is not enough to make local development a breeze, since everyone still redoes this work for their own set of programs.

Let’s bring it into a folder structure.

Under a specified plugin folder, each individual plugin can be added with the configuration required to run it. A config.yml lists all the mainnet-beta programs and accounts that must be loaded locally, and a Dockerfile can run any web2 infrastructure the program needs to work. Any program developer can provide this structure so that it is copy-pastable for anyone working locally.

Proposed config.yml structure:

```yaml
programs:
  - programId1   # or programName, as referenced in the explorer
  - programId2
accounts:
  - account1
  - account2
overrides:
  - accountAddress: address
    accountOwner: address
```
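To make the mechanics concrete, here is a minimal sketch of how a plugin loader might turn a parsed config.yml into validator CLI arguments. The `--url`, `--clone`, and `--clone-upgradeable-program` flags are assumptions based on the current solana-test-validator and should be verified against your version; `--account-override` is hypothetical, since account overrides don’t exist yet (see “Work required”):

```python
# Sketch: convert a parsed config.yml into solana-test-validator flags.
def build_validator_args(config: dict, cluster_url: str) -> list[str]:
    args = ["--url", cluster_url]
    # Programs are cloned from the public cluster as upgradeable programs.
    for program in config.get("programs", []):
        args += ["--clone-upgradeable-program", program]
    # Plain accounts are cloned as-is.
    for account in config.get("accounts", []):
        args += ["--clone", account]
    # Hypothetical flag: overrides need new validator support first.
    for override in config.get("overrides", []):
        args += ["--account-override",
                 override["accountAddress"], override["accountOwner"]]
    return args

config = {
    "programs": ["programId1"],
    "accounts": ["account1", "account2"],
}
print(build_validator_args(config, "https://api.mainnet-beta.solana.com"))
```

The CLI would then merge the argument lists from every plugin under the plugin folder into a single validator invocation.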

Each plugin’s Dockerfile can be attached to a shared Docker network so that the local-test-validator can run with each piece of web2 infrastructure running separately.
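One way this could be wired up is with a compose file per plugin set; the service and network names below are purely illustrative:

```yaml
# docker-compose.yml (sketch): each plugin's web2 infra joins a shared
# network so the validator and tests can reach it by service name.
version: "3.8"
services:
  openbook-crank:
    build: ./plugins/openbook   # plugin-provided Dockerfile
    networks: [local-validator]
networks:
  local-validator:
    driver: bridge
```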

Option 1: Add --plugin-dir to local-test-validator

We can add a new flag called --plugin-dir to the local-test-validator.

Pros:

  1. All-in-one tool for local testing
  2. Works with Anchor by default

Cons:

  1. Bloats the local-test-validator more
  2. Not aligned with Labs toolset
  3. Outside maintainers may cause some friction

Option 2: Completely new CLI using local-test-validator

We could start a whole new CLI that uses local-test-validator under the hood, much like Anchor-CLI.

Pros:

  1. Fully maintained by new team
  2. Freedom to manage and update outside of local-test-validator
  3. Can add new features

Cons:

  1. Yet another CLI

Option 3: Add to Anchor CLI

Pros:

  1. Immediately available to all users of Anchor

Cons:

  1. Tied directly to Anchor

Recommendation

Let’s try for option 2 and build it out so that it can extend more than just the plugin framework. This CLI can work under the hood with something like Lava Suite to give even better capabilities.

Work required

  • Account overrides need to be added and exposed on the local-test-validator
  • Based on the option chosen, create CLI or add new CLI command to load plugins
  • Setup plugin repository for anyone to add new plugins to
  • Create a few example plugins (Metaplex, Openbook) for usage

👋

I am quite impressed with the parity between testing using the local-test-validator and e.g. devnet. Mostly, I enjoy being able to leave (for the most part) the local validator running for my whole dev day, and I just interact with it as though it were persistent. The recovery of the validator through the test-ledger is excellent.

So, it would be nice to have an opt-in flag so that, when trying to interact with an account that does not exist on the local validator, the account is pulled in from a public cluster (configurable). Then, as expected, this is persisted with the whole test-ledger.

Essentially, there would be a hierarchy when finding an account on the ledger: local > mainnet
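The proposed hierarchy could be sketched as a simple fallback lookup; the function and parameter names here are illustrative, not actual validator internals:

```python
# Sketch of the proposed account-lookup hierarchy: local ledger first,
# then a configurable public cluster; fetched accounts are persisted
# locally so subsequent lookups hit the local ledger.
def get_account(address, local_ledger: dict, fetch_from_cluster):
    account = local_ledger.get(address)
    if account is None:
        # Fall back to the public cluster (e.g. mainnet-beta) ...
        account = fetch_from_cluster(address)
        if account is not None:
            # ... and persist it in the local test-ledger.
            local_ledger[address] = account
    return account

ledger = {"local-acct": b"local data"}
mainnet = {"mainnet-acct": b"remote data"}
print(get_account("local-acct", ledger, mainnet.get))    # served locally
print(get_account("mainnet-acct", ledger, mainnet.get))  # pulled in
print("mainnet-acct" in ledger)                          # now persisted
```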

This would be a mostly non-configurable option; external tools just need to rely on a public account, and it would be pulled in and deployed.

Considerations

I have no idea what this would mean for CPIs, but I imagine any initial pull/deploy would result in a large increase in CUs. CUs could be paused during the fetch, or the limit could default to much more than 200_000 when the flag is enabled.

Just my 2 cents. I hope this is alright to just randomly comment on

This is a great idea. As a potential first step we should have account auto downloading via the local-test-validator.

The problem people will run into is when the program relies on cranks or other external infra to keep running.