Test Validator Plugin Framework

Context

When developers are building locally using local-test-validator, they often want to build on top of protocols already live on mainnet-beta. They can load each individual program and required account with CLI flags, but it is tedious and takes a lot of time for each developer. Not only that, but some programs require a level of traditional infrastructure to work properly, which the developer is also forced to learn just to build locally.
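
For reference, the manual flow this framework would abstract away looks roughly like the following today (the addresses are placeholders, and the exact set of flags varies by CLI version):

solana-test-validator --reset \
  --url https://api.mainnet-beta.solana.com \
  --clone <PROGRAM_ADDRESS> \
  --clone <REQUIRED_ACCOUNT_1> \
  --clone <REQUIRED_ACCOUNT_2>

Every program and account has to be listed by hand, and nothing in this command starts the off-chain pieces (cranks, bots) that some protocols need.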

See the RFP outlining a framework that can:

  • Load programs from mainnet-beta on start
  • Load any account from mainnet-beta on start
  • Update an account’s data on start
  • Run traditional infrastructure as needed to run the program

Logistics

Take note of the end date (11/15) and make sure all criteria are met prior to sending in an application. The listed grant amount is a maximum allocation; it is issued in USD-equivalent locked SOL and gated behind delivery milestones.

Ground Rules

This thread can be used for comments, questions, praise, and / or criticism, and is intended to be an open forum for any prospective responders. This thread is also an experiment in increasing the transparency with which RFPs are fielded by the Solana ecosystem, so please be mindful that we’re all here to learn and grow.

Responses to this RFP are not required to be public, but if it is helpful to share notes or combine forces, then please use this thread for such purposes.

10 Likes

Here is my suggestion for implementing the plugin marketplace.

Instead of building it from the ground up, I recommend looking into the feasibility of utilizing the APR (Anchor Program Repository) for retrieving programs and their corresponding versions.

By leveraging the existing APR infrastructure, we can expedite the development process and ensure a more efficient marketplace integration.

At the moment it is not up, but talking to the Anchor team should be helpful.

4 Likes

I want this to exist outside of Anchor - the plugins can be built for both Native and Anchor programs.

3 Likes

Hi @jacobcreech,

We have a few questions regarding this RFP:

Ability to run traditional infrastructure as needed to operate the program

What do you mean by “traditional” infrastructure?

The solution must offer a means of discovering different plugins for each program

Are you suggesting that users should be able to discover plugins by providing the program or account address, similar to Docker image discovery or the crates.io homepage?

One plugin must include traditional infrastructure as part of the load

Could you provide an example or clarify your specific requirements for this?

Distribution model of plugins

Are you looking for a separate application to facilitate the discovery of plugins, similar to something like npm?

3 Likes

What do you mean by “traditional” infrastructure?

Like cranks. Not going as far as to say something like Postgres, but I want to make sure something basic like cranks can run. Maybe going as far as to make sure something like a bot that can market make to create fake activity can run in a Docker container as well.

Are you suggesting that users should be able to discover plugins by providing the program or account address, similar to Docker image discovery or the crates.io homepage?

Yes. There needs to be a way for people to both upload and retrieve different plugins.

One plugin must include traditional infrastructure as part of the load

Something like creating an Openbook plugin and making sure the crank is running when the plugin is installed and the local validator is started.
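
Purely as a sketch (the services section and every field name here are hypothetical, not something the RFP specifies), an Openbook plugin config could bundle that crank as a container definition that the framework starts alongside the validator:

programs:
  - <OPENBOOK_PROGRAM_ID>            # program cloned from mainnet-beta
accounts:
  - <MARKET_ACCOUNT>                 # market the crank should service
services:
  - name: openbook-crank
    image: <registry>/openbook-crank:latest            # placeholder image
    command: <crank start command for MARKET_ACCOUNT>  # placeholder command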

Are you looking for a separate application to facilitate the discovery of plugins, similar to something like npm?

Discovery and upload.

3 Likes

Hey @jacobcreech,

I have a couple of questions. I’d be grateful to understand more, so that I can look into this thoroughly and send a proposal over for review.

When developers are building locally using local-test-validator, they often want to build on top of protocols already live on mainnet-beta. They can load each individual program and required account with CLI flags, but it is tedious and takes a lot of time for each developer

To properly assess the required functionality, particularly the “Ability to load programs from mainnet-beta on start” and “Ability to load any account from mainnet-beta on start,” it would help to have precise examples of the commands used today to load individual programs. That would greatly assist in developing a proof of concept (POC) and exploring integration possibilities. Although it seems straightforward, validation is important because the current documentation says little about the envisioned capabilities. A plausible example command could resemble the following:

solana program dump -u m 9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin serum_dex_v3.so && \
  solana-test-validator --bpf-program 9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin serum_dex_v3.so --reset

Solution must support a configuration file for each plugin, denoting how to load each program and plugin - Could you elaborate on the envisioned configurability?

Solution must have a way of packaging these plugins per program - Could you provide further details on the anticipated outcome? For instance, should a developer possess a configuration file for a program to be cloned from mainnet-beta and initiated with the validator? Subsequently, if the developer wishes to load another program, would the local-test-validator be capable of starting with a distinct configuration file for the latter program, specified as a path argument to the plugin?

3 Likes

Hi,

For the RFP’s requirement of a configuration file for each plugin, could you clarify its function?

Is it meant to:

  1. Act as a descriptive, recipe-like file for setting up the plugin, or
  2. Serve as a means to input parameters into the plugin? Additionally, if it’s the latter, should there be an option to modify these parameters via the user interface?

Also, concerning the requirements for packaging, distributing, and discovering plugins, are you envisioning a system akin to an API marketplace, complete with a website and a CLI tool for searching, acquiring, and uploading plugins? If this is the case, are there any specific hosting constraints or requirements we should be aware of?

3 Likes

The commands you posted are correct. The goal, though, is to hide or abstract all of those commands away so loading happens more gracefully. You find a lot of people who just want NFTs locally rebuilding the same command, but what if they need 100+ accounts? That no longer scales.

Solution must support a configuration file for each plugin, denoting how to load each program and plugin

Sure. Let’s say you have a config file for a plugin formatted something like this in yml:

programs:
  - programId1   # or program name as referenced in the explorer
  - programId2

accounts:
  - account1
  - account2

overrides:
  - accountAddress: address
    accountOwner: address

This is just an example. Programs within the config give the full list of programs to load for that specific plugin, same with accounts. Overrides could potentially overwrite account data so that you can get more use out of it locally. For example, you pull a USDC account from mainnet down to local, but you’re not the owner, so you override the owner so you can transfer the USDC around.

Could you provide further details on the anticipated outcome? For instance, should a developer possess a configuration file for a program to be cloned from mainnet-beta and initiated with the validator? Subsequently, if the developer wishes to load another program, would the local-test-validator be capable of starting with a distinct configuration file for the latter program, specified as a path argument to the plugin?

Let’s say you have a local directory containing plugins that your local validator plugin framework picks configs up from. For the sake of discussion, it is structured as follows:

/plugins
 /mango
 /drift
 /openbook
 /metaplex

Each plugin would then have a config file like the one mentioned earlier in this post, giving the information about which accounts to load to successfully run the program locally, plus potentially some additional accounts (like USDC) to make developing locally on these programs even easier. The framework would crawl through each directory, grab the accounts, load the validator with those accounts, override any accounts necessary, and then start.
Ideally there’s also an easy way with the distribution to just “install” these plugins so that devs are not moving folders around. That’s in a separate milestone.
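
Under the hood, the result of that crawl would be roughly equivalent to hand-writing a command like the one below (addresses are placeholders; exact flags depend on the CLI version). One way to apply an override is to dump the account to JSON, edit the owner field, and pass the edited file back in with --account:

solana-test-validator --reset \
  --url https://api.mainnet-beta.solana.com \
  --clone <MANGO_PROGRAM> --clone <DRIFT_PROGRAM> --clone <OPENBOOK_PROGRAM> --clone <METAPLEX_PROGRAM> \
  --clone <REQUIRED_ACCOUNT_1> --clone <REQUIRED_ACCOUNT_2> \
  --account <OVERRIDDEN_USDC_ACCOUNT> overridden_usdc_account.json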

3 Likes

Configuration is ideally like a recipe for each program that the framework then uses to load on test validator start. It’d be cool to have input parameters, but that would increase the scope of the current RFP.

Packaging, distributing, discovering - this could be just a website that helps with discovery plus easy install of plugins, something like how mods are discovered and installed for Minecraft or Skyrim today.
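
To make that concrete, the developer-facing flow could end up feeling something like this (the tool name and subcommands are made up purely for illustration):

solana-plugins search openbook     # hypothetical CLI, not an existing tool
solana-plugins install openbook    # drops the plugin config into the local plugins directory
solana-test-validator              # the framework then loads the installed plugins on validator start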

3 Likes

@jacobcreech
Hi,
I have attempted to submit our proposal several times, but I consistently encounter the same error. How can I successfully submit our proposal?

3 Likes

Hey @SE7EN, I just ran a test submission and it worked. Could you reload and try again?

3 Likes

It worked, thank you

4 Likes

Hi @jacobcreech,

We’ve submitted our proposal, but haven’t received any feedback yet. Could you please provide an update on the status of the RFP?

Thanks,

3 Likes

We should be reaching out to everyone this week. Apologies, holidays in the US got in the way.

5 Likes

What is local-test-validator? Do you mean the solana-test-validator binary?

The solana-genesis command already supports a “primordial accounts file”, which can be used to deploy programs and accounts from mainnet-beta on start. This can then be used with a solana-validator. We do this regularly with Firedancer development.
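
For anyone unfamiliar with it: the file is roughly a YAML map from account pubkeys to their fields (balance, owner, base64-encoded data, executable flag); the field names below are my best recollection of the format, so double-check them against the monorepo version you are on:

<ACCOUNT_PUBKEY>:                      # base58 address to create at genesis
  balance: 1000000000                  # lamports
  owner: <OWNER_PROGRAM_ID>
  data: <base64-encoded account data>
  executable: false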

I don’t think any Solana monorepo changes are required to support this, apart from maybe a 100-line shell script to glue things together.

3 Likes