Is F# Better Than C# For Math?

Leaving unmanaged languages aside, is F# really better than C# for implementing math? And if so, why?

+30
math c# f#
Dec 18 '08 at 23:39
7 answers

I think most of the important points have already been mentioned by someone else:

  • F# allows you to solve problems the way mathematicians think about them.
  • Thanks to higher-order functions, you can use simpler concepts to solve complex problems.
  • Everything is immutable by default, which makes programs easier to understand (and also easier to parallelize).

Of course, you can use some of the F# concepts in C# 3.0, but there are limitations. You can't really use recursive computations (because C# doesn't have tail recursion), and that's how you write primitive calculations in a functional/mathematical style. Writing complex higher-order functions (those that take other functions as arguments) in C# is also awkward, because you have to write the types explicitly, while in F# types are inferred and automatically generalized, so you don't have to explicitly mark a function as generic.
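To make the tail-recursion and type-inference point concrete, here is a minimal, hypothetical F# sketch (my example, not part of the original answer): a recurrence written the way its mathematical definition reads, plus a tail-recursive variant that the compiler turns into a loop.

```fsharp
// Naive power function, written the way the mathematical recurrence reads.
let rec power x n =
    if n = 0 then 1.0 else x * power x (n - 1)

// Tail-recursive variant: the recursive call is the last operation,
// so F# compiles it to a loop and it cannot overflow the stack.
let powerTail x n =
    let rec loop acc i =
        if i = 0 then acc else loop (acc * x) (i - 1)
    loop 1.0 n
```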

Also, I don't think the following paragraph from Marc Gravell's answer is a valid objection:

From a maintenance angle, I believe that suitably named properties, etc., are easier to use (over the full lifecycle) than tuples and head/tail lists, but that may just be me.

This is certainly true. However, the great thing about F# is that you can start writing a program using tuples and head/tail lists, and later in the development process turn it into a program that uses .NET IEnumerables and types with properties (and that is, I believe, how a typical F# programmer works*). Tuples and the F# interactive development tools give you a great way to quickly prototype solutions (and when doing something mathematical this is essential, because most of the development is just experimenting while you look for the best solution). Once you have a prototype, you can use simple source-code transformations to wrap the code inside an F# type (which can also be used from C# as an ordinary class). F# also gives you many ways to optimize the code later with respect to performance.
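As a hypothetical illustration of that workflow (the names are mine, not the answerer's): start with bare tuples in F# Interactive, then wrap the settled design in a type that C# callers see as an ordinary class with named properties.

```fsharp
// Prototype: a 2D point is just a tuple, quick to sketch in F# Interactive.
let midpointTuple (x1, y1) (x2, y2) =
    ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

// Later: the same idea wrapped in a type, usable from C# as a regular class.
type Point(x: float, y: float) =
    member this.X = x
    member this.Y = y
    member this.MidpointWith(other: Point) =
        Point((this.X + other.X) / 2.0, (this.Y + other.Y) / 2.0)
```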

This gives you the benefits of easy-to-use languages (like Python) that many people use for the prototyping phase, but you won't have to rewrite the whole program in an efficient language (such as C++ or perhaps C#) once you're done prototyping, because F# is both "easy to use" and "efficient", and you can move freely between the two styles.

(*) I also use this style in my functional programming book.

+32
Dec 19 '08 at 11:11

F# has huge advantages over C# in the context of mathematical programs:

  • Interactive F# sessions let you run code on the fly and get immediate results, and even visualize them, without having to build and run a complete application.

  • F# supports some features that can give significant performance improvements in the context of mathematics. In particular, the combination of inline and higher-order functions makes it possible to factor mathematical code elegantly without sacrificing performance; C# cannot express this (see the sketch after this list).

  • F# supports some features that let you implement mathematical concepts much more naturally than you can in C#. For example, tail calls make it much simpler to implement recurrence relations reliably; C# cannot express this either.

  • Mathematical problems often require more sophisticated data structures and algorithms. Expressing complicated solutions is far easier in F# than in C#.
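As a minimal sketch of the inline plus higher-order-function point (my example, not the answerer's): marking a higher-order function inline makes the compiler specialize it at each call site, so the function argument is inlined away and static member constraints let the same code work over any numeric type.

```fsharp
// 'inline' causes call-site specialization: the lambda disappears and the
// (+) and zero constraints are resolved per numeric type at compile time.
let inline sumBy f xs =
    List.fold (fun acc x -> acc + f x) LanguagePrimitives.GenericZero xs

// The same function works over ints and floats, with no boxing or delegates.
let sumOfSquaresInt   = sumBy (fun x -> x * x) [1; 2; 3]        // 14
let sumOfSquaresFloat = sumBy (fun x -> x * x) [1.0; 2.0; 3.0]  // 14.0
```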

If you want a case study, I converted an implementation of QR decomposition over System.Double from 2 kLOC of C#. The F# was only 100 lines of code, runs more than 10× faster, and is generalized over the type of number, so it works not only on float32, float and System.Numerics.Complex but can even be applied to symbolic matrices to get symbolic results!

FWIW, I write books on this subject, as well as commercial software.

+18
Feb 14 '10 at 0:09

F# supports units of measure, which can be very useful for mathematical work.
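For instance, a minimal sketch (the units here are my own, hypothetical choices) of how units of measure catch dimensional mistakes at compile time with no run-time cost:

```fsharp
[<Measure>] type m   // metres
[<Measure>] type s   // seconds

let distance = 100.0<m>
let time = 9.58<s>
let speed = distance / time       // inferred as float<m/s>
// let wrong = distance + time    // does not compile: unit mismatch
```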

+13
Dec 24 '08 at 13:51

I'm from a mathematical background and have looked at F#, but I still prefer C# for most purposes. There are a few things that F# makes easier, but in general I still prefer C# by a wide margin.

Some of the suggested advantages of F# (immutability, higher-order functions, etc.) can still be achieved in C# (using delegates, etc., for the latter). This is even more apparent with C# 3.0's lambda support, which makes it very easy and expressive to write functional code.

From a maintenance angle, I believe that suitably named properties, etc., are easier to use (over the full lifecycle) than tuples and head/tail lists, but that may just be me.

One area where C# lets itself down for math is generics and their lack of support for operators. So I've spent some time addressing this :-p My results are available in MiscUtil, with an overview.

+7
Dec 19 '08 at 8:34

This post looks like it might be relevant: http://fsharpnews.blogspot.com/2007/05/ffts-again.html

Also: C#/F# Performance Comparison

The biggest advantage for pure math is what PerpetualCoder said: F# reads more like the math problem itself, which makes math-heavy code easier to write. It reminded me a lot of MATLAB when I looked at it.

+4
Dec 19 '08 at 0:00

I'm not sure whether it's better or worse, but there is certainly a difference in approach. Static languages prescribe how the problem is to be solved. Functional languages such as F# or Haskell don't, and are closer to how a mathematician would solve a particular problem. Then there are books like this on using Python for the same purpose. If you're talking performance, nothing beats C. If you're talking libraries, I believe functional languages (F# and the like), Fortran (yes, it's not dead yet), and Python have excellent libraries for math.

+1
Dec 18 '08 at 23:50

One important advantage of functional languages is that they can run in parallel on multiprocessor or multi-core systems without requiring you to change any code. This means you can speed up your algorithms simply by adding cores.
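A minimal F# sketch (mine, not the answerer's) of the kind of change this refers to: because the mapped function is pure, swapping Array.map for Array.Parallel.map spreads the work across cores without touching the function itself.

```fsharp
// The work function is pure, so it can safely run on many cores at once.
let work x = sqrt (float x) + sin (float x)

let inputs = [| 1 .. 1_000_000 |]

let sequentialResults = inputs |> Array.map work
let parallelResults   = inputs |> Array.Parallel.map work
```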

-3
Dec 19 '08 at 8:39


