Publish a Proxy-Wasm roadmap #74
Conversation
This is sure to be a living document, subject to much discussion. Signed-off-by: Martijn Stevenson <mstevenson@google.com>
docs/Roadmap.md
Outdated
(envoyproxy/envoy#36996). Documentation, security scanning, tests, bug
fixes, etc.
* (TBD: @mpwarres) Implement the v0.3 Proxy-Wasm ABI.
* (Help wanted) Decouple from the thread-local execution model. As wasm
I am very interested in this, as it would be a significant capability. If I understand correctly, it would allow wasm modules to run an event loop in a separate thread without blocking the Envoy worker threads. This makes it possible to reuse native IO-related libraries from the Go/Rust/... SDKs (such as HTTP/Redis/MySQL clients).
@martijneken @mpwarres Will we set achieving this capability as a goal for this work item?
Huh, interesting point. AIUI proxy-wasm already has some support for scheduling work (e.g. event callbacks, timers, shared queues), but I think it all ends up back on the calling VM thread or the Envoy main thread (via singletons).
Our interest was mostly about decoupling wasm CPU (so running wasm doesn't block Envoy threads) and limiting overheads (little-used plugins don't need a VM per Envoy worker thread). So we imagined a similar model as today (streams/requests sticky to independent threads), but a manager component that scales each plugin's threads up/down dynamically. The major risk in all of this is blowing up CPU/NUMA cache locality and incurring CPU scheduling overhead.
IIUC you'd like to take it further and give the wasm runtime access to a shared thread pool? This would require new primitives for scheduling work and/or launching threads?
https://github.com/WebAssembly/component-model/blob/main/design/mvp/Explainer.md#-threading-built-ins
This would be very interesting to discuss in the next community meeting.
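The scaling model described above (streams sticky to independent threads, with a manager component scaling each plugin's threads up and down) could be sketched roughly as follows. This is a purely illustrative Go sketch, not any existing proxy-wasm or Envoy API; the names `PluginPool`, `Resize`, and `Submit` are hypothetical.

```go
// Hypothetical sketch: one plugin's wasm invocations run on a dynamic set
// of worker goroutines, decoupled from the host's request-handling threads.
// A manager component would call Resize periodically based on queue depth.
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type PluginPool struct {
	tasks chan func()
	mu    sync.Mutex
	stops []chan struct{} // one stop channel per live worker
}

func NewPluginPool(queueCap int) *PluginPool {
	return &PluginPool{tasks: make(chan func(), queueCap)}
}

// Resize grows or shrinks the worker set to n goroutines.
func (p *PluginPool) Resize(n int) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for len(p.stops) < n { // scale up
		stop := make(chan struct{})
		p.stops = append(p.stops, stop)
		go func() {
			for {
				select {
				case <-stop:
					return
				case t := <-p.tasks:
					t()
				}
			}
		}()
	}
	for len(p.stops) > n { // scale down; queued tasks stay queued
		close(p.stops[len(p.stops)-1])
		p.stops = p.stops[:len(p.stops)-1]
	}
}

// Submit enqueues one wasm invocation; the caller (an Envoy worker thread,
// in the model above) does not block on its execution.
func (p *PluginPool) Submit(t func()) { p.tasks <- t }

func runDemo() int64 {
	pool := NewPluginPool(64)
	pool.Resize(4) // manager decided this plugin deserves 4 threads

	var sum int64
	var wg sync.WaitGroup
	for i := 1; i <= 10; i++ {
		i := i
		wg.Add(1)
		pool.Submit(func() {
			atomic.AddInt64(&sum, int64(i))
			wg.Done()
		})
	}
	wg.Wait()
	pool.Resize(0) // plugin idle: manager reclaims all threads
	return sum
}

func main() { fmt.Println(runDemo()) }
```

The performance risks called out above (scheduling latency, cache misses, NUMA hopping) come precisely from the handoff between the submitting thread and the pool workers in a sketch like this.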
docs/Roadmap.md
Outdated
map to components?
* What are the API gaps? How should we evolve Proxy-Wasm to become
WASI-compatible? What are good incremental steps?
* Are there any performance gaps?
@martijneken FYI, it's by no means a ready solution, but I started playing with a shim that converts a proxy-wasm plugin into WASI. Ideally, once it's more or less ready, we could use it to build proxy-wasm plugin code into a wasi-http proxy component. The goal is to smooth the migration from proxy-wasm to WASI by providing some level of backward compatibility for existing proxy-wasm code.
If you think it's interesting, you can find the work in progress at https://github.com/krinkinmu/wasi-http. I'm happy to hear any feedback you may have on the approach, as well as on the implementation details.
Awesome! Added a link here.
This whole roadmap is really exciting. I appreciate the effort in compiling all these initiatives and am excited to make some progress here!
I'd suggest adding one more goal: harden doAfterVmCallActions, since the current mechanism may lead to unexpected calls on the host side, resulting in an Envoy crash (see proxy-wasm/proxy-wasm-cpp-host#326).
Two high-level comments:
- While I agree that we need to publish a (high-level) roadmap, a lot of items here (especially for the host and Envoy integration) are effectively implementation issues that should be tracked in the issue tracker and not here.
- This roadmap completely ignores hosts other than Envoy, so it would be great to rewrite some of those items in a proxy-agnostic way (where applicable, but see above).
This PR should be moved to the community repo, now that we have it.
- (Help wanted) Expand the use of SharedArrayBuffer to reduce memcpy into wasm
runtimes. This is promising for HTTP body chunks (see relevant
[WASI issue](https://github.com/WebAssembly/WASI/issues/594)) and
[wasm binaries](https://github.com/proxy-wasm/proxy-wasm-cpp-host/blob/21a5b089f136712f74bfa03cde43ae8d82e066b6/src/v8/v8.cc#L272).
I'm not sure why this is linked here?
- (Help wanted) Support dynamic (per VM) limits for RAM and CPU.
- (Help wanted) Expand the use of SharedArrayBuffer to reduce memcpy into wasm
runtimes. This is promising for HTTP body chunks (see relevant
[WASI issue](https://github.com/WebAssembly/WASI/issues/594)) and
This is unlikely to ever work without the host being specifically architected to read data from the network directly into Wasm's linear memory block.
AFAIK, the "multiple memories" proposal doesn't allow attaching/detaching memories on the fly.
independent thread scaling (expensive wasms get more CPU), improved
parallelism (multiple requests' wasm at the same time), and reduced memory
costs (one VM serves multiple Envoy threads). It adds performance risks (CPU
scheduling latency, CPU cache misses, NUMA hopping).
Drive-by comment: I would love this as an option, but I'm not sure I would love it as a one-size-fits-all default, due to the concerns mentioned. Thinking out loud, I wonder if the problem could instead be addressed by offering the decoupling as an API to modules: modules could opt in to posting work on a thread pool (and register for completions, or some such)?
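The opt-in shape suggested above could look something like the following. This is a hypothetical Go sketch, not a real proxy-wasm API; `EventLoop`, `PostWork`, and `DrainOne` are invented names. The key property is that the completion callback runs back on the module's own (single-threaded) event loop, so module code never observes concurrency even though the work ran elsewhere.

```go
// Hypothetical sketch: a module posts blocking work to a shared pool and
// receives the completion on its own event loop, preserving the module's
// single-threaded semantics.
package main

import "fmt"

type completion struct {
	result string
	onDone func(string)
}

// EventLoop models the per-module thread that the host normally drives.
type EventLoop struct {
	completions chan completion
}

// PostWork runs fn off-thread (a goroutine stands in for a shared host
// thread pool); onDone is not called there, but queued back to the loop.
func (l *EventLoop) PostWork(fn func() string, onDone func(string)) {
	go func() {
		l.completions <- completion{result: fn(), onDone: onDone}
	}()
}

// DrainOne waits for one completion and dispatches it on the caller's
// (the loop's) thread -- this is the "register for completions" half.
func (l *EventLoop) DrainOne() {
	c := <-l.completions
	c.onDone(c.result)
}

func runDemo() string {
	loop := &EventLoop{completions: make(chan completion, 8)}
	var got string
	// Module opts in: the expensive call happens off-thread, the
	// callback on-thread.
	loop.PostWork(
		func() string { return "redis: OK" }, // e.g. a blocking client call
		func(r string) { got = r },
	)
	loop.DrainOne()
	return got
}

func main() { fmt.Println(runDemo()) }
```

Modules that never call the opt-in API would keep today's behavior, which is what makes this attractive as an option rather than a one-size-fits-all change.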
Share in-flight efforts and outline the next high-priority features.
This PR is a request for comment. Please tell us what's missing!