For reliable code: NModel, Spec Explorer, F#, or another?

I have a C# business application with unit tests. Can I increase reliability and reduce testing time and cost with NModel or Spec Explorer? Alternatively, if I were to rewrite it in F# (or even Haskell), what kinds of increased reliability (if any) could I expect to see?

Code Contracts? AsmL?

I understand that this is subjective and perhaps argumentative, so if possible, back up your answers with data. :) Or, failing that, with a well-developed example, such as Eric Evans' cargo shipping system?

If we consider

Unit tests are concrete, strong theorems, checked quasi-statically on particular "interesting instances"; types are general but weak theorems (usually checked statically); and contracts are general, strong theorems, checked dynamically for the particular cases that occur during regular program operation. (from B. Pierce's Types Considered Harmful)

Where do these other tools fit?

We could pose a similar question for Java using Java PathFinder, Scala, etc.

2 answers

Reliability is a function of several variables, including the overall software architecture, the capability of the programmers, the quality requirements, and the maturity of your configuration management and general quality-control processes. All of these will affect the reliability of the delivered software.

Having said that, language certainly has a significant impact. Other things being equal:

  • Defects are roughly proportional to the amount of SLOC. Terser languages see fewer coding errors. Haskell seems to need about 10% of the SLOC required by C++, Erlang about 14%, Java about 50%. I assume C# probably sits near Java on this scale.
  • Type systems are not all equal. Languages with type inference (such as Haskell and, to a lesser extent, OCaml) will have fewer defects. Haskell in particular lets you encode invariants in the type system, so that a program will compile only if they can be proved. Doing so takes extra work, so consider the trade-off in each case.
  • State management is the source of many defects. Functional languages ​​and especially pure functional languages ​​avoid this problem.
  • QuickCheck and its relatives let you write unit and system tests that verify general properties rather than individual test cases. This can greatly reduce the work required to test the code, especially if you are aiming for high test coverage. A set of QuickCheck properties resembles a formal specification, and the concept fits nicely with Test Driven Development (write your tests first; when the code passes them, you are done). A minimal sketch follows this list.
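
To make the QuickCheck point concrete in the question's own language, here is a minimal property-based test sketch using FsCheck, the .NET port of QuickCheck (the calls shown are from FsCheck's C# facade; details vary between versions):

    using System;
    using System.Linq;
    using FsCheck;

    static class ReverseProperties
    {
        static void Main()
        {
            // One property stands in for many hand-picked cases: for ALL
            // integer arrays xs, reversing twice gives back the original.
            Prop.ForAll<int[]>(xs =>
                    xs == null || xs.Reverse().Reverse().SequenceEqual(xs))
                .QuickCheckThrowOnFailure(); // ~100 random cases by default
        }
    }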

Put all these things together and you should have a powerful toolkit for driving quality through the development life cycle. Unfortunately, I don't know of any serious studies that actually prove this. All the factors I listed at the beginning would confound any real study, and you would need a lot of data before an unambiguous pattern could be demonstrated.


Some comments on the quote, in the context of C#, which is my "first" language:

Unit tests are concrete, strong theorems,

Yes, but they may not give you first-order logic checks such as "for all x there exists a y such that f(y)"; they are more like "there exists a y, here it is (!), f(y)", a.k.a. arrange, act, assert. ;) *
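
For contrast, this is what that "there exists a y, here it is (!)" shape looks like as an ordinary arrange/act/assert unit test (xUnit-style; the class under test is invented for illustration):

    using Xunit;

    public class PriceCalculatorTests
    {
        [Fact]
        public void Discount_is_applied_for_one_particular_order()
        {
            // Arrange: ONE hand-picked witness, not "for all orders"
            var calculator = new PriceCalculator(discountPercent: 10m);

            // Act
            decimal total = calculator.Total(orderAmount: 200m);

            // Assert: a strong claim, but only about this single instance
            Assert.Equal(180m, total);
        }
    }

    // Hypothetical class under test, just enough to make the sketch compile.
    public class PriceCalculator
    {
        private readonly decimal _discountPercent;
        public PriceCalculator(decimal discountPercent) { _discountPercent = discountPercent; }
        public decimal Total(decimal orderAmount) => orderAmount * (1m - _discountPercent / 100m);
    }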

checked quasi-statically on particular "interesting instances", and types are general but weak theorems (usually checked statically),

Types are not necessarily weak. **

and contracts are general, strong theorems, checked dynamically for the particular cases that occur during regular program operation. (from B. Pierce's Types Considered Harmful)


Unit testing

Pex + Moles, I think, comes close to first-order logic checks, since it generates edge cases and uses the Z3 solver to work its way through integer constraints. I would really like to see more Moles tutorials (Moles is for substituting implementations), in particular together with some inversion-of-control container that could exploit the abstract classes and interfaces that already exist and swap implementations in and out.
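
For reference, a parameterized unit test of the kind Pex explores looks roughly like this (the attributes come from Microsoft.Pex.Framework; treat the details as a sketch rather than a working Pex setup):

    using Microsoft.Pex.Framework;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [PexClass, TestClass]
    public partial class ParsingTests
    {
        // Pex generates concrete inputs for 'value', using the Z3 constraint
        // solver to steer execution toward edge cases (boundaries, overflow).
        [PexMethod]
        public void RoundTrip(int value)
        {
            string text = value.ToString();
            Assert.AreEqual(value, int.Parse(text));
        }
    }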

Weak types

In C# they are pretty weak, of course: generics let you attach protocol semantics one operation at a time, i.e. constrain type parameters to interfaces, which are, in a sense, the protocols that classes implement. However, the static typing of a protocol covers only a single operation.
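
As a small illustration of that one-operation protocol, a generic constraint can demand a single capability of a type parameter, but it cannot express the ordering or timing of calls (names are illustrative):

    using System;

    static class Ordering
    {
        // The constraint is a one-operation protocol: T must support CompareTo.
        // The type says nothing about when, how often, or in what order the
        // operation may be called.
        public static T Max<T>(T a, T b) where T : IComparable<T>
        {
            return a.CompareTo(b) >= 0 ? a : b;
        }
    }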

Example: Reactive Extensions API

Take Reactive Extensions as a discussion topic.

The contract required of the consumer, invoked by the observable:

    interface IObserver<in T>
    {
        void OnNext(T value);                 // zero or more data notifications
        void OnCompleted();                   // terminal notification: success
        void OnError(System.Exception error); // terminal notification: failure
    }

There is more to the protocol than this interface shows: calls to the methods of an IObserver<in T> instance must follow this grammar:

Order:

OnNext{0,n} (OnCompleted | OnError){0,1}

And along another axis, time:

Time:

    for all t : (method -> time). t(OnNext) < t(OnCompleted)
    for all t : (method -> time). t(OnNext) < t(OnError)

i.e. no OnNext call may be made after whichever of OnCompleted or OnError (exactly one) has occurred.

In addition, the axis of parallelism:

Parallelism:

    no invocations of OnNext may be made in parallel

i.e. there is a scheduling constraint that implementors of IObservable must satisfy: an IObservable cannot push to its observers from multiple threads at once without first synchronizing the calls around a context.

How do you check this contract easily? In C#, I don't know of a static way; the best I can see is checking it dynamically, as sketched below.
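
Here is a minimal runtime-checking wrapper for the grammar and parallelism constraints above; the wrapper name is invented, and a production version would need to treat memory-model subtleties more carefully:

    using System;
    using System.Threading;

    // Wraps any observer and asserts the Rx protocol dynamically:
    // OnNext{0,n} (OnCompleted | OnError){0,1}, and no parallel OnNext.
    sealed class ProtocolCheckingObserver<T> : IObserver<T>
    {
        private readonly IObserver<T> _inner;
        private int _terminated; // 0 = live, 1 = OnCompleted/OnError seen
        private int _inOnNext;   // guards against parallel OnNext calls

        public ProtocolCheckingObserver(IObserver<T> inner) { _inner = inner; }

        public void OnNext(T value)
        {
            if (Volatile.Read(ref _terminated) == 1)
                throw new InvalidOperationException("OnNext after terminal notification");
            if (Interlocked.Exchange(ref _inOnNext, 1) == 1)
                throw new InvalidOperationException("parallel OnNext invocation");
            try { _inner.OnNext(value); }
            finally { Volatile.Write(ref _inOnNext, 0); }
        }

        public void OnCompleted()
        {
            if (Interlocked.Exchange(ref _terminated, 1) == 1)
                throw new InvalidOperationException("second terminal notification");
            _inner.OnCompleted();
        }

        public void OnError(Exception error)
        {
            if (Interlocked.Exchange(ref _terminated, 1) == 1)
                throw new InvalidOperationException("second terminal notification");
            _inner.OnError(error);
        }
    }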

Consumer API

From the consuming side of the application, there may be interactions between different contexts, such as the Dispatcher and background/other threads, and preferably we would like a guarantee that we never end up in a deadlock.

In addition, there is a requirement to dispose of observables deterministically. It may not be clear whether an extension method that returns an IObservable instance takes ownership of its IObservable arguments and disposes of them, so there is a requirement to know about the inner workings of the black box (alternatively, you can let the references go out of scope in a "reasonable way" and let the GC collect them at some point).

<<< Things are not necessarily easier without Reactive Extensions:

There is the thread pool, on top of which the TPL is built. In the thread pool we have queues of delegates to be invoked on worker threads.

Using APM/begin-end or the async pattern (which queues work to the thread pool) can leave us open to callback bugs if we mutate state. Also, the protocol of begin-invocations and their callbacks can become too convoluted to follow. The other day I read a post-mortem of a Silverlight project that had trouble seeing the business logic: the forest for all the callback trees. One could then implement the poor man's async monad: an IEnumerable with an asynchronous "driver" iterating it and calling MoveNext() each time an IAsyncResult completes.
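
A minimal sketch of that poor man's async monad, assuming a driver that resumes the iterator whenever the yielded IAsyncResult signals its wait handle (all names are illustrative):

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Threading;

    static class PoorMansAsync
    {
        // Advances the iterator one step; re-arms itself when the yielded
        // IAsyncResult's wait handle signals, so no thread blocks in between.
        public static void Drive(IEnumerator<IAsyncResult> steps)
        {
            if (!steps.MoveNext()) return; // iterator finished: work done
            ThreadPool.RegisterWaitForSingleObject(
                steps.Current.AsyncWaitHandle,
                (state, timedOut) => Drive(steps),
                null, Timeout.Infinite, true); // resume exactly once
        }

        // Usage sketch: each 'yield return' parks the workflow until the
        // asynchronous read has completed.
        static IEnumerable<IAsyncResult> CopyOneBlock(Stream source, Stream target)
        {
            var buffer = new byte[4096];
            IAsyncResult read = source.BeginRead(buffer, 0, buffer.Length, null, null);
            yield return read;
            target.Write(buffer, 0, source.EndRead(read));
        }
    }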

... and don't get me started on the nuuuumerous hidden protocols in IAsyncResult.

Another problem with not using Reactive Extensions is the turtles problem: if you decide you want a blocking I/O operation to be asynchronous, it has to be turtles all the way down, right down to the p/invoke call that posts the overlapped I/O to the Win32 I/O completion port! If you have three layers, plus some logic inside the very top layer, you need to make all three layers implement the APM pattern and fulfill IAsyncResult's many contractual obligations (or leave it partially broken), and there is no public, standard AsyncResult implementation in the base class library.

>>>

Handling exceptions across interfaces

Even with the memory-management + parallelism + contract + protocol pieces described above in place, there are still exceptions to be handled (and not just caught and forgotten) in a good, reliable application. I want to give an example:

Context

Let's say we find ourselves catching an exception from a contract/interface (not necessarily from Reactive Extensions' IObservable implementations, which have monadic exception handling rather than stack-frame-based handling).

Hopefully the programmer was diligent and documented the possible exceptions, but the possibility of an exception may also have been omitted entirely. If everything is properly specified with Code Contracts, at least we can be sure we are able to catch some of the exceptions; but many different causes can be lumped into a single exception type, and once an exception is thrown, how do we ensure that the smallest possible unit of work fails?

Goal

Let's say we push a data record from the user onto a bus in our application and receive it on a background thread, which decides what to do with it.

Example

A real-life example here might be Spotify, which I use every day.

My $100 router/access point throws in the towel at random. I assume it has a cache bug or something of the sort, as it happens every time I push more than 2 MB/s of LAN/WAN data through it.

I have two network adapters: a Wi-Fi card and an Ethernet card. The Ethernet connection drops. Spotify's socket event-handler loop gets an invalid return code (I think it's written in C or C++) or an exception. Spotify has to deal with it, but it doesn't know what my network topology looks like (and has no code to probe all routes/update the routing table and hence pick the interface to use); I still have a route to the internet, just not on the same interface. Spotify crashes.

Thesis

Exceptions are simply not semantic enough. I believe one can view exceptions from the perspective of the error monad in Haskell. We either continue or we break: unwind the stack, run the catch and finally handlers, and pray we don't race with other exception handlers, the GC, or asynchronous exceptions from outstanding I/O completion ports.

But when the connection / route of one of my interfaces goes down, Spotify will crash.

Right now we have SEH (Structured Exception Handling), but I think in the future we will have "SEH2", where each exception source supplies, along with the actual exception, a discriminated union (i.e. statically known from the linked library/assembly) of possible compensating actions. In this example, the Windows networking API could ask the application to perform a compensating action: open the same socket on another interface, handle it itself (as it does now), or restart the socket with some retry policy. Each of these options is a case of the discriminated union type, so the developer is forced to handle all of them.

I think that when we have SEH2, they will no longer be called exceptions.
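
Sketching that hypothetical SEH2 idea in present-day C# terms, with every type and member here invented for illustration:

    using System;
    using System.Net.NetworkInformation;

    // A closed set of compensating actions handed back with the failure.
    abstract class SocketFailureResolution { }
    sealed class ReopenOnInterface : SocketFailureResolution
    {
        public NetworkInterface Alternative; // another route that still works
    }
    sealed class HandleYourself : SocketFailureResolution
    {
        public Exception Original;           // fall back to plain SEH
    }
    sealed class RestartWithRetryPolicy : SocketFailureResolution
    {
        public TimeSpan Backoff;             // suggested retry delay
    }

    static class Demo
    {
        // The analogue of "the developer must handle each case"; a real
        // discriminated union would make the compiler check exhaustiveness.
        static void Compensate(SocketFailureResolution resolution)
        {
            switch (resolution)
            {
                case ReopenOnInterface r:      /* rebind socket to r.Alternative */    break;
                case HandleYourself h:         /* rethrow or log h.Original */         break;
                case RestartWithRetryPolicy p: /* schedule retry after p.Backoff */    break;
                default: throw new ArgumentOutOfRangeException(nameof(resolution));
            }
        }
    }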

^^

In any case, I have digressed too much already.

Instead of reading my mind, listen to some of Erik Meijer's talks; there is a very good roundtable between him and Joe Duffy, where they discuss handling the side effects of calls. Or look through this search list.

Today I find myself consulting on a system where stronger static semantics would be welcome, and I am looking at tools that can give me both programming speed and verification that is precise and accurate. I haven't found them yet.

I simply think we are another 20 years away, if not more, from developer-centric reliable computing. There are just too many languages, frameworks, marketing BS and concepts in the air right now for ordinary developers to stay on top of things.

Why is this under the heading of "weak types"?

Because I find that the type system has to be part of the solution; types should not be weak! Concise code and strong type systems (think Haskell) help programmers create reliable software.


Source: https://habr.com/ru/post/1308795/

