Is there any advantage to defining val over def in a trait?

In Scala, a val can override a def, but a def cannot override a val.

So is there any advantage to declaring it in a trait like this:

 trait Resource { val id: String } 

instead of this?

 trait Resource { def id: String } 

The next question is: how does the compiler actually treat calls to a val and a def differently, and what optimizations does it really perform for vals? The compiler insists that a val is stable - what does that mean in practice for the compiler? Suppose a subclass actually implements id with a val. Is there a penalty for it being declared as a def in the trait?
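
For concreteness, here is a minimal sketch of what I mean by a subclass implementing id with a val (the class name is invented for illustration):

 trait Resource { def id: String }

 // A val in the implementing class satisfies the abstract def:
 class FileResource(path: String) extends Resource {
   val id: String = path // evaluated once, stored in a field
 }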

If my code does not require the stability of the id member, can we assume that the recommendation is to use def and switch to val only once a performance bottleneck has been identified - however unlikely that may be?

+11
compiler-optimization scala traits
Oct 29 '12 at 16:50
4 answers

Short answer:

As far as I can tell, values are always accessed through an accessor method. Using def defines a simple method that returns the value. Using val defines a final private[*] field with an accessor method. So in terms of access there is very little difference between them. The difference is conceptual: a def is re-evaluated each time it is called, whereas a val is evaluated only once. This can obviously affect performance.

[*] Java private

Long answer:

Take the following example:

 trait ResourceDef { def id: String = "5" }

 trait ResourceVal { val id: String = "5" }

ResourceDef and ResourceVal generate the same interface code, ignoring the initializers:

 public interface ResourceVal extends ScalaObject {
     volatile void foo$ResourceVal$_setter_$id_$eq(String s);
     String id();
 }

 public interface ResourceDef extends ScalaObject {
     String id();
 }

For the helper classes that are generated (which contain the method implementations), ResourceDef produces what you would expect, noting that the method is static:

 public abstract class ResourceDef$class {
     public static String id(ResourceDef $this) {
         return "5";
     }

     public static void $init$(ResourceDef resourcedef) {}
 }

and for the val, the initializer simply calls the setter on the implementing class:

 public abstract class ResourceVal$class {
     public static void $init$(ResourceVal $this) {
         $this.foo$ResourceVal$_setter_$id_$eq("5");
     }
 }

When we start extending these traits:

 class ResourceDefClass extends ResourceDef { override def id: String = "6" }

 class ResourceValClass extends ResourceVal {
   override val id: String = "6"
   def foobar() = id
 }

 class ResourceNoneClass extends ResourceDef

Where we override, we get a method in the class that does just what you would expect. The def is a simple method:

 public class ResourceDefClass implements ResourceDef, ScalaObject {
     public String id() {
         return "6";
     }
 }

and the val defines a private field plus an accessor method:

 public class ResourceValClass implements ResourceVal, ScalaObject {
     public String id() {
         return id;
     }

     private final String id = "6";

     public String foobar() {
         return id();
     }
 }

Note that even foobar() does not use the id field directly, but goes through the accessor method.

And finally, if we do not override, we get a method that calls the static method in the trait's implementation class:

 public class ResourceNoneClass implements ResourceDef, ScalaObject {
     public volatile String id() {
         return ResourceDef$class.id(this);
     }
 }

I cut out the constructors in these examples.

So, an accessor method is always used. I presume this is to avoid complications when mixing in multiple traits that could implement the same methods. That gets complicated very quickly.
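
As an illustration of how such conflicts can arise when mixing in traits (a hedged sketch; the trait and member names here are made up, not taken from the decompiled output above):

 trait A { def label: String = "A" }
 trait B { def label: String = "B" }

 // A class mixing in both traits must resolve the conflict itself; at the
 // bytecode level the resolution still goes through the generated accessors.
 class C extends A with B {
   override def label: String = super[A].label + super[B].label
 }

 // new C().label == "AB"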

Even longer answer:

Josh Suereth gave a very interesting talk on Binary Resilience at Scala Days 2012, which covers the background to this question. The summary is:

This talk focuses on binary compatibility on the JVM and what it means to be binary compatible. It outlines how Scala compiles to bytecode and where binary incompatibilities can arise, followed by a set of rules and recommendations that will help developers ensure their own library releases are binary compatible and binary resilient.

In particular, the talk covers:

  • Traits and binary compatibility
  • Java serialization and anonymous classes
  • Hidden creations of lazy vals
  • Developing code that is binary resilient
+14
Oct 30 '12 at 9:11

The difference basically comes down to the fact that you can implement/override a def with a val, but not vice versa. Moreover, a val is evaluated only once while a def is evaluated every time it is used, so using def in the abstract definition gives the code that mixes in the trait more freedom in how to compute and/or optimize the implementation. So my advice is to use def whenever there is no obvious, good reason to force a val.
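
A hedged sketch of what that freedom looks like in practice (the class names and the compute helper are invented for illustration):

 object ResourceStrategies {
   trait Resource { def id: String }

   def compute(): String = java.util.UUID.randomUUID().toString

   // Implementers are free to choose an evaluation strategy for the abstract def:
   class EagerResource   extends Resource { val id: String = compute() }      // evaluated once, at construction
   class CachedResource  extends Resource { lazy val id: String = compute() } // evaluated once, on first access
   class DynamicResource extends Resource { def id: String = compute() }      // re-evaluated on every call
 }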

+2
Oct 29 '12 at 17:04

A val expression is evaluated once, when the variable is declared; it is strict and immutable.

A def is re-evaluated every time you call it.
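
A minimal sketch of that difference, using a side-effecting counter (the names are invented for illustration):

 object EvalDemo extends App {
   var count = 0

   val once: Int  = { count += 1; 42 } // body runs exactly once, right here
   def every: Int = { count += 1; 42 } // body runs again on every call

   val x = once + once   // no further evaluation of the val's body
   val y = every + every // two more evaluations of the def's body

   println(count) // prints 3: one for the val, two for the def
 }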

0
Oct 29 '12 at 17:05

A def is evaluated by name and a val by value. This means, more or less, that a val always holds the actual value, while a def computes the value each time it is evaluated. For example, suppose you have a function

 // note the => in the parameter definition: s is a by-name parameter,
 // so the argument expression is only evaluated where s is used
 def trace(s: => String): Unit = {
   if (level == "trace") println(s)
 }

which only logs the message if the log level is set to trace, and suppose you want to log the toString of some objects. If you had overridden toString with a val, that value would already be computed before being passed to the trace function. If toString is a def, however, it is only evaluated once it is certain that the log level is trace, which can save you some overhead. def gives you more flexibility, while val is potentially faster.
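
A hedged sketch of that scenario (the Payload class and the level value are made up for illustration):

 object TraceDemo extends App {
   val level = "info" // assumed log-level setting for this sketch

   def trace(s: => String): Unit = if (level == "trace") println(s)

   class Payload {
     // As a def, this body only runs if trace actually evaluates its argument
     override def toString: String = {
       println("computing expensive dump...")
       "payload"
     }
   }

   trace(new Payload().toString) // with level = "info", nothing here is evaluated or printed
 }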

As for the compiler: traits are compiled to Java interfaces, so when you declare a member on a trait it does not matter whether it is a val or a def. The difference in performance will depend on how you decide to implement it.

-2
Oct 29 '12 at 17:18
