TL;DR:
No, you cannot get the memory location of a variable.
Discussion
In principle, everything is copied. Each process has its own heap, and that is that.
In practice, there are several major hacks. The most notable are:
- Literals known at compile time are referenced from a shared area rather than copied (in some cases a huge performance win).
- Binaries larger than 64 bytes are reference-counted and stored off the process heaps in a shared area (which also makes binaries a slightly leaky abstraction, hence binary:copy/1,2).
- Updates to most structures do not actually require copying the entire structure (especially what happens inside maps), but how much gets copied, and when, keeps changing as more efficient techniques land in the runtime.
- Garbage collection happens per process, so Erlang can appear to have some magical incremental GC scheme when it actually has a fairly boring generational heap collection underneath (roughly, anyway: the approach is in fact somewhat hybrid, another part of the ever-shifting landscape of EVM performance work...).
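The shared-binary hack above is the one most likely to bite in everyday code. A rough shell sketch of the point behind binary:copy/1,2 follows; the 64-byte threshold and exact behavior are release-dependent, so treat this as illustrative, not a contract:

```erlang
%% Rough sketch (assumes the usual 64-byte refc-binary threshold;
%% exact thresholds and internals vary by OTP release).
Big  = binary:copy(<<"x">>, 1000),   %% 1000 bytes: stored off-heap, reference-counted
Part = binary:part(Big, 0, 10),      %% sub-binary: still keeps all 1000 bytes alive
Own  = binary:copy(Part).            %% private 10-byte copy; lets Big be collected
```

Holding on to `Part` pins the whole 1000-byte binary in memory, which is exactly why binary:copy/1,2 exists: copying the small slice severs the reference to the large shared binary.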
If you are writing code for the EVM, in whatever language, you should abandon the idea that you are going to outsmart the runtime. This is for the same reason that trying to outsmart the optimizers of most C (and especially C++) compilers is considered bad practice almost everywhere.
Each major release includes newly implemented performance improvements that do not violate the language's assumptions. If you start writing code that is "more efficient" for whatever memory scheme happens to underlie R20, you may get a slight performance bump here and now, but when R21 comes out there is a good chance all that code will break, and you will simply be stuck on R20 forever.
Just read through the R20.0 release announcement. Keeping up with changes like these would consume most of your development time.
Some projects attempt to take over the whole of the runtime. For example, consider Twisted. Such projects exist precisely so that all of this (large and non-trivial) effort does not have to be duplicated in every project downstream of them. With that in mind, the Erlang runtime, the Core Erlang compiler, LFE, the Elixir project, and so on are themselves the place for hacks like these, and absolutely not downstream client code. The happy thing to note here (and yes, there is a happy ending to my harsh story!) is that this is exactly what we see.
Performance Note
Efficient with respect to what? Cycles? Bus traffic? Cache misses? Money? I/O ops? More general batch throughput? And so on.
Unless you are writing the inner loop of a super-tight game engine for known hardware that must perform this year (because next year's hardware will outstrip most of those hacks anyway), paying for more processor time is far cheaper than the developer time required to figure out what is actually happening in a massively concurrent system with thousands of processes sending millions of ephemeral messages, each doing its own garbage collection at different times.
The "need to know what is happening" case I have seen most often is people trying to use Elixir as a "faster Ruby", writing not a massively concurrent system but one massive single-threaded program on top of the EVM. The Erlang runtime is simply not built for that approach to writing fast programs.
In the case where you have a very specific, CPU-intensive task that absolutely must be fast, pick one of:
- A port written in Rust or C
- A NIF written in Rust or C
- A high-performance compute node that communicates over the network with the main node(s), written in whatever language is ideal for your heavy task (BERT is very useful here)
- Waiting a year or two for runtime improvements and hardware speedups; the rate of this kind of speedup is absolutely insane for concurrent systems, especially if you run on your own hardware (if you run in the "cloud", these improvements of course benefit the provider rather than you, and even then it is cheaper to just pay for more instances than to try to outsmart the runtime).
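Of the options above, a port is usually the safest starting point, since the external program crashes in its own OS process rather than taking down the EVM. A minimal sketch, where the executable path and the term_to_binary wire format are illustrative assumptions, not a fixed convention:

```erlang
-module(solver_port).
-export([solve/1]).

%% Sketch: drive a hypothetical external program ("./my_solver") as a port.
%% {packet, 4} length-prefixes every message with a 4-byte big-endian size,
%% so the external side reads a length header, then the payload.
solve(Input) ->
    Port = open_port({spawn_executable, "./my_solver"},
                     [binary, {packet, 4}, exit_status]),
    Port ! {self(), {command, term_to_binary(Input)}},
    receive
        {Port, {data, Bin}} ->
            port_close(Port),
            binary_to_term(Bin);
        {Port, {exit_status, Status}} ->
            %% The external program died; our node is unaffected.
            {error, {solver_exited, Status}}
    after 5000 ->
        port_close(Port),
        {error, timeout}
    end.
```

Contrast this with a NIF: if the code behind this port segfaults, the caller gets an `exit_status` message; if a NIF segfaults, the whole runtime goes with it.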
Writing separate programs (or NIFs) lets whatever developer or team tackles a given problem work in a single, unified problem space, concerned only with implementing whatever protocol was handed down from the main project. This is significantly more efficient than having an Erlang or LFE or Elixir dev/team flip between writing some Erlang, then some LangX, then Erlang again, taking the context switching for granted (or, even worse, context switching between one language in which they are expert and one in which they are inexperienced and naive).
(Keep in mind also that a NIF is a last resort, to be considered only when you know your actual use case: in particular, when your processing task is small per call, predictable, and well bounded, and the overhead of frequent calls, rather than raw processing speed, is your bottleneck. A NIF undermines every safety guarantee of the runtime. A broken NIF is a broken runtime, and that is exactly the kind of problem Erlang was designed to avoid. Anyone who feels comfortable enough in C to casually recommend a NIF clearly does not have the C experience to be writing NIFs.)
At the project level, efficiency questions are primarily business decisions, not technical ones. It is bad business (or bad community management, if decided by a FOSS community) to try to outsmart the runtime.