If you can get Lua code (or rather, its underlying native code) to cooperate with the Erlang VM, you have several options.
Consider one of the most important jobs of the Erlang VM: scheduling the execution of (potentially very many) lightweight Erlang processes across a relatively small set of scheduler threads. The VM uses several techniques to know when a process has used up its timeslice or is waiting, and should therefore be scheduled out to give another process a chance to run.
You seem to be asking how you can run your own native code inside the VM, but, as you have already hinted, the reason native code can cause problems for the VM is that the VM has no practical way to preempt it: native code can monopolize a scheduler thread and thereby prevent regular Erlang processes from executing. Because of this, native code has to cooperatively yield the scheduler thread back to the VM.
For older NIFs (prior to Erlang 17.3), the choices for such cooperation are:
- Keep the duration of each NIF call on the scheduler thread to 1 ms or less.
- Create one or more private threads. Hand each long-running NIF invocation off from the scheduler thread to a private thread for execution, then return the scheduler thread to the VM immediately.
The problems here are that not all work can be split into calls of 1 ms or less, and that managing private threads can be error-prone. To get around the first problem, some developers break the work into chunks and use an Erlang function as a wrapper that drives a series of short NIF calls, each of which performs one chunk of work. As for the second problem, well, sometimes you simply can't avoid it, despite its inherent difficulties.
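As a rough illustration of the chunking approach, here is a minimal sketch of such a NIF. The module name chunk_nif, the function do_chunk/1, and the CHUNK size are all invented for the example; each call handles at most a fixed amount of work so it stays well under the ~1 ms budget, and an Erlang wrapper function (not shown) keeps calling it until nothing is left.

```c
#include "erl_nif.h"

#define CHUNK 4096  /* sized so one call fits the time budget; tune for real work */

static ERL_NIF_TERM
do_chunk(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
{
    unsigned long remaining, todo, i;

    if (argc != 1 || !enif_get_ulong(env, argv[0], &remaining))
        return enif_make_badarg(env);

    todo = remaining < CHUNK ? remaining : CHUNK;
    for (i = 0; i < todo; i++) {
        /* ... one small unit of the real work goes here ... */
    }

    /* Report how much is left; the Erlang wrapper decides whether to call again. */
    return enif_make_ulong(env, remaining - todo);
}

static ErlNifFunc funcs[] = {
    {"do_chunk", 1, do_chunk}
};

ERL_NIF_INIT(chunk_nif, funcs, NULL, NULL, NULL, NULL)
```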
NIFs running on Erlang 17.3 or later can also yield the scheduler thread using the enif_schedule_nif function. To use it, the native code must be able to do its work in chunks such that each chunk fits within the usual 1 ms NIF execution window, similar to the approach described above but without the need for an artificial return trip through an Erlang wrapper. My bitwise example code goes into a lot of detail about this.
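For what that pattern looks like in practice, here is a minimal sketch loosely modeled on the bitwise approach. The module name yield_nif, the function work/2, and the placeholder process_one are invented for the example; enif_consume_timeslice and enif_schedule_nif are the actual NIF API calls involved.

```c
#include "erl_nif.h"

static void process_one(unsigned long i) { (void)i; /* one small unit of work */ }

static ERL_NIF_TERM
work(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
{
    unsigned long pos, total;

    if (argc != 2 ||
        !enif_get_ulong(env, argv[0], &pos) ||
        !enif_get_ulong(env, argv[1], &total))
        return enif_make_badarg(env);

    while (pos < total) {
        process_one(pos++);
        /* Claim roughly 1% of the timeslice per unit; a nonzero return means
         * the timeslice is exhausted and it is time to yield. */
        if (enif_consume_timeslice(env, 1)) {
            ERL_NIF_TERM newargv[2];
            newargv[0] = enif_make_ulong(env, pos);  /* resume from here */
            newargv[1] = argv[1];
            /* Reschedule ourselves and yield the scheduler thread to the VM. */
            return enif_schedule_nif(env, "work", 0, work, 2, newargv);
        }
    }
    return enif_make_atom(env, "done");
}

static ErlNifFunc funcs[] = {
    {"work", 2, work}
};

ERL_NIF_INIT(yield_nif, funcs, NULL, NULL, NULL, NULL)
```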
Erlang 17 also introduced an experimental feature, disabled by default, called dirty schedulers. This is a set of VM schedulers that do not have the same native-code execution-time constraints as the regular schedulers; work running there can block for essentially unlimited periods without disrupting normal VM operation.
Dirty schedulers come in two flavors: CPU schedulers for CPU-bound work and I/O schedulers for I/O-bound work. In a VM compiled to enable dirty schedulers, there are by default as many dirty CPU schedulers as regular schedulers, and 10 dirty I/O schedulers. These numbers can be changed via command-line options, but note that, to prevent scheduler starvation, you can never have more dirty CPU schedulers than regular schedulers. Applications use the same enif_schedule_nif function mentioned earlier to run NIFs on dirty schedulers. My bitwise example code covers this in detail as well. Dirty schedulers will remain an experimental feature in Erlang 18.
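As a sketch of what that looks like, the following assumes a VM built with dirty scheduler support (the dirty flags are only defined in that case); the module name dirty_nif, the entry function run/1, and the placeholder heavy_work are invented for the example. The entry NIF runs briefly on a regular scheduler and simply hands the real work off to a dirty CPU scheduler.

```c
#include "erl_nif.h"

/* Runs on a dirty CPU scheduler, so it may keep the thread busy for a long
 * time without disturbing the regular schedulers. */
static ERL_NIF_TERM
heavy_work(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
{
    /* ... long-running, CPU-bound work stands in here ... */
    return enif_make_atom(env, "ok");
}

/* Entry point called from Erlang: it runs on a regular scheduler and
 * immediately reschedules heavy_work onto a dirty CPU scheduler. */
static ERL_NIF_TERM
run(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
{
#ifdef ERL_NIF_DIRTY_SCHEDULER_SUPPORT
    return enif_schedule_nif(env, "heavy_work", ERL_NIF_DIRTY_JOB_CPU_BOUND,
                             heavy_work, argc, argv);
#else
    return heavy_work(env, argc, argv);  /* no dirty schedulers: run inline */
#endif
}

static ErlNifFunc funcs[] = {
    {"run", 1, run}
};

ERL_NIF_INIT(dirty_nif, funcs, NULL, NULL, NULL, NULL)
```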
Native code in linked-in port drivers is subject to the same scheduler-thread execution-time constraints as NIFs, but drivers have two features NIFs lack:
- Driver code can register file descriptors with the VM's polling subsystem and be notified when any of those file descriptors becomes ready for I/O.
- The driver API provides access to a pool of non-scheduler async threads, the size of which is configurable but which defaults to 10 threads.
The first feature allows a linked-in driver to avoid blocking a scheduler thread on I/O. For example, instead of making a blocking recv call, driver code can register a socket file descriptor so that the VM can poll it and call the driver back when the descriptor becomes readable.
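To make the polling idea concrete, here is a rough sketch of the relevant driver callbacks. The driver name select_drv and the helper connect_socket are invented for the example; connect_socket stands in for however the real driver would obtain a connected, non-blocking socket.

```c
#include <unistd.h>
#include "erl_driver.h"

/* Hypothetical helper: returns a connected, non-blocking socket fd. */
int connect_socket(void);

typedef struct {
    ErlDrvPort port;
    int fd;
} drv_state;

static ErlDrvData drv_start(ErlDrvPort port, char* command)
{
    drv_state* st = driver_alloc(sizeof(drv_state));
    st->port = port;
    st->fd = connect_socket();
    /* Register the fd with the VM's polling subsystem: call drv_ready_input
     * when it becomes readable, instead of blocking in recv ourselves. */
    driver_select(port, (ErlDrvEvent)(long)st->fd, ERL_DRV_READ, 1);
    return (ErlDrvData)st;
}

/* Called back by the VM on a scheduler thread when the fd is readable. */
static void drv_ready_input(ErlDrvData data, ErlDrvEvent event)
{
    drv_state* st = (drv_state*)data;
    char buf[512];
    ssize_t n = read(st->fd, buf, sizeof(buf));   /* won't block: fd is ready */
    if (n > 0)
        driver_output(st->port, buf, n);          /* hand data to the port owner */
}

static void drv_stop(ErlDrvData data)
{
    drv_state* st = (drv_state*)data;
    driver_select(st->port, (ErlDrvEvent)(long)st->fd, ERL_DRV_READ, 0);
    driver_free(st);
}

static ErlDrvEntry entry = {
    .start           = drv_start,
    .stop            = drv_stop,
    .ready_input     = drv_ready_input,
    .driver_name     = "select_drv",
    .extended_marker = ERL_DRV_EXTENDED_MARKER,
    .major_version   = ERL_DRV_EXTENDED_MAJOR_VERSION,
    .minor_version   = ERL_DRV_EXTENDED_MINOR_VERSION,
};

DRIVER_INIT(select_drv) { return &entry; }
```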
The second feature provides a separate thread pool useful for driver tasks that cannot fit within the scheduler thread's native-code execution limits. You can achieve the same thing with NIFs, but you have to set up your own thread pool and write your own native code to manage and access it. Regardless of whether you use the driver async thread pool, your own NIF thread pool, or dirty schedulers, keep in mind that they are all ordinary operating system threads, so trying to start a huge number of them is simply impractical.
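Here is a comparable sketch of the async thread pool, again with invented names (the job struct, submit_job, and the doubling that stands in for real work). driver_async queues the job on an async pool thread, and the VM calls the driver's ready_async entry on a scheduler thread when it finishes; the ErlDrvEntry registration is the same as for any linked-in driver and is omitted here.

```c
#include "erl_driver.h"

typedef struct {
    ErlDrvPort port;
} drv_state;

typedef struct {
    int input;
    int result;
} job;

/* Runs on an async pool thread, not a scheduler thread, so it may block. */
static void job_invoke(void* data)
{
    job* j = (job*)data;
    j->result = j->input * 2;   /* stands in for the real long-running work */
}

/* Only used to free the job if it is cancelled before completion. */
static void job_free(void* data)
{
    driver_free(data);
}

/* Called from a port callback (e.g. output) on a scheduler thread: queue the
 * job and return immediately so the scheduler thread is not tied up. */
static void submit_job(drv_state* st, int input)
{
    job* j = driver_alloc(sizeof(job));
    j->input = input;
    driver_async(st->port, NULL, job_invoke, j, job_free);
}

/* The driver's ready_async entry: called back on a scheduler thread once
 * job_invoke has completed on the async thread. */
static void drv_ready_async(ErlDrvData data, ErlDrvThreadData async_data)
{
    drv_state* st = (drv_state*)data;
    job* j = (job*)async_data;
    driver_output(st->port, (char*)&j->result, sizeof(j->result));
    driver_free(j);
}
```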
Native driver code does not yet have access to dirty schedulers, but that work is in progress, and it may become available as an experimental feature in an 18.x release.
If your Lua code can use one or more of these features to cooperate with the Erlang VM, then what you are attempting might be possible.