Friday, 4. January 2013
In my last post I described how we can access Dynamics NAV 2009 SOAP web services from F# and the benefits we get by using a type provider. Since version 2013 it’s also possible to expose NAV pages via OData. In this article I will show you how the OData type provider, which is part of F# 3.0, can help you to easily access this data.
Exposing the data
First of all, follow this walkthrough and expose the Customer Page from Microsoft Dynamics NAV 2013 as an OData feed.
Show the available companies
Let’s try to connect to the OData feed and list all available companies. To do this, we create a new F# console project (.NET 4.0) in Visual Studio 2012 and add references to FSharp.Data.TypeProviders and System.Data.Services.Client. With the following snippet (a minimal sketch; the service URI and entity set names depend on your installation) we can access and print the company names:
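```fsharp
open Microsoft.FSharp.Data.TypeProviders

// the OData endpoint of the NAV 2013 server - adjust to your installation
type NAV = ODataService<"http://localhost:7048/DynamicsNAV70/OData/">

let ctx = NAV.GetDataContext()

// print the name of every available company
for company in ctx.Company do
    printfn "%s" company.Name
```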
As you can see we don’t need to use the “Add Service Reference” dialog. All service types are generated on the fly.
Access data within a company
Unfortunately Dynamics NAV 2013 seems to have a bug in the generated metadata. In order to access data within a company we need to apply a small trick. In the following sample we create a modified data context which points directly to a company:
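A sketch of the workaround (the company name is a placeholder, and the generated context type, here NAV.ServiceTypes.NAV, is an assumption whose exact name depends on the service metadata):

```fsharp
open System

// point the context directly at one company to work around the metadata bug
let companyUrl =
    "http://localhost:7048/DynamicsNAV70/OData/Company('CRONUS%20AG')"

let ctx = NAV.ServiceTypes.NAV(Uri companyUrl)
```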
Now we can start to access the data:
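For example, we can print all customer names with an F# 3.0 query expression (assuming the Customer page from the walkthrough is exposed):

```fsharp
// query the exposed Customer page
query {
    for customer in ctx.Customer do
    select customer.Name
}
|> Seq.iter (printfn "%s")
```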
As you can see this approach is very easy and avoids manual code generation. If you expose more pages, they are instantly available in your code.
As with the Wsdl type provider you can expose the generated types from this F# project for use in C# projects.
Further information:
Tags:
fsharp,
Microsoft Dynamics NAV,
Navision,
type providers
Thursday, 3. January 2013
If you are a Dynamics NAV developer you have probably heard of the web services feature which comes with the 2009 version. In this walkthrough you can learn how to create and consume a Codeunit web service. This method works very well if you only need to create the C# proxy classes once. If you have more than one developer, an automated build system, changing web services or many web services, then you will reach a point where this code generation approach becomes very difficult to handle.
Microsoft Visual F# 3.0 comes with a feature called “type providers” that helps to simplify your life in such a situation. In the case of WCF, the Wsdl type provider allows us to automate the proxy generation. Let’s see how this works.
Web service generation
In the first step we create the Dynamics NAV 2009 Codeunit web service in exactly the same way as in the MSDN walkthrough.
Connecting to the web service
Now we create a new F# console project (.NET 4.0 or 4.5) in Visual Studio 2012 and add references to FSharp.Data.TypeProviders, System.Runtime.Serialization and System.ServiceModel. After this we are ready to use the Wsdl type provider:
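A minimal sketch (the WSDL URI below is a placeholder for your installation):

```fsharp
open Microsoft.FSharp.Data.TypeProviders

// points at the Letters Codeunit web service from the walkthrough
type Letters =
    WsdlService<"http://localhost:7047/DynamicsNAV/WS/CRONUS%20AG/Codeunit/Letters">
```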
At this point the type provider creates the proxy classes in the background – no “add web reference” dialog is needed. The only thing we need to do is configure the access security and consume the web service. A minimal sketch (the port accessor GetLetters_Port and the DataContext property are assumptions based on the generated proxy; NAV 2009 web services use Windows authentication by default):
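```fsharp
let letters = Letters.GetLetters_Port()

// NAV 2009 authenticates via Windows credentials
letters.DataContext.ClientCredentials.Windows.AllowedImpersonationLevel <-
    System.Security.Principal.TokenImpersonationLevel.Impersonation

letters.Capitalize "hello" |> printfn "%s"
```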
Access from C#
This is so much easier than the C# version from the walkthrough. But if you want you can still access the provided types from C#. Just add a new C# project to the solution and reference the F# project. In the F# project rename Program.fs to Services.fs and expose the service via a new function:
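A sketch following the snippet above – we wrap the configured client in a function that other projects can call:

```fsharp
module Services

open Microsoft.FSharp.Data.TypeProviders

type Letters =
    WsdlService<"http://localhost:7047/DynamicsNAV/WS/CRONUS%20AG/Codeunit/Letters">

/// Creates a configured service client for use from C# projects
let GetLettersService() =
    let letters = Letters.GetLetters_Port()
    letters.DataContext.ClientCredentials.Windows.AllowedImpersonationLevel <-
        System.Security.Principal.TokenImpersonationLevel.Impersonation
    letters
```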
In the C# project you can now access the service like this:
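Something like this (the names follow the sketch above):

```csharp
using System;

class Program
{
    static void Main()
    {
        var letters = Services.GetLettersService();
        Console.WriteLine(letters.Capitalize("hello"));
    }
}
```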
Changing the service
Now let’s see what happens if we change the service. Go to the Letters Codeunit in Dynamics NAV 2009 and add a second parameter (allLetters:Boolean) to the Capitalize method. After saving the Codeunit go back to the C# project and try to compile it again. As you can see the changes are directly reflected as a compile error.
In the next blog post I will show you how you can easily access a Dynamics NAV 2013 OData feed from F#.
Tags:
fsharp,
Microsoft Dynamics NAV,
nav,
Navision
Thursday, 13. September 2012
After the official Visual Studio 2012 launch yesterday I think it’s a good idea to announce two new type providers which are based on the DGMLTypeProvider from the F# 3.0 Sample Pack.
Synchronous and asynchronous state machine
The first one is only a small extension to the DGMLTypeProvider by Tao, which allows you to generate state machines from DGML files. The extension is simply that you can choose between the original async state machine and a synchronous version, which allows easier testing.

If you want the async version, which performs all state transitions asynchronously, you only have to write AsyncStateMachine instead of StateMachine.
State machine as a network of types
The generated state machine performs only valid state transitions, but we can go one step further and model the state transitions as compile time restrictions:

As you can see the compiler knows that we are in State2 and allows only the transitions to State3 and State4.
If you write labels on the edges of the graph, the type provider will generate the method names based on the edge labels. In the following sample I’ve created a small finite-state machine which checks whether a binary number has an even or odd number of zeros.
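The original post showed the generated transitions as a screenshot; here is a hand-written sketch of the idea (not the provider’s actual API): every state is its own type, and only the valid transitions exist as methods.

```fsharp
// parity of the number of zeros read so far
type EvenZeros() =
    member this.Zero() = OddZeros()    // reading a 0 flips the parity
    member this.One() = EvenZeros()    // reading a 1 keeps it

and OddZeros() =
    member this.Zero() = EvenZeros()
    member this.One() = OddZeros()

// walk the input 10100 through the state machine
let state = EvenZeros().One().Zero().One().Zero().Zero()
// val state : OddZeros - the type records that 10100 has an odd number of zeros
```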

As you can see, in this case the compiler has already calculated that 10100 has an odd number of zeros – no need to run the test.
This stuff is already part of the FSharpx.TypeProviders.Graph package on NuGet, so please check it out and give feedback.
Tags:
F#,
FSharpx,
type providers
Tuesday, 29. May 2012
In the last post I wrote about the PersistentVector which I ported from Clojure to .NET. It’s implemented as an immutable hash array mapped trie which optimizes the constant factors so that we can assume O(1) for count, nth, conj, assocN, peek and pop.
But for some applications Rich Hickey shows us that we can optimize the constant factors even further. If we are sure that we are holding exactly one reference to the vector, then we can mutate the underlying representation instead of copying the arrays on every change. This idea is implemented in the TransientVector which I also ported from Clojure.Core to FSharpx.
Let’s look at a small example. Here we want to convert a collection to a PersistentVector:
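A minimal sketch, written against today’s FSharpx.Collections names (the namespace differed at the time of this post):

```fsharp
open FSharpx.Collections

// fold the items into an empty PersistentVector - every conj creates
// a new version and the previous one is immediately forgotten
let toPersistentVector items =
    items
    |> Seq.fold (fun vector item -> PersistentVector.conj item vector)
                PersistentVector.empty
```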
As you can see we iterate over the collection and conj the items to the current PersistentVector. But in every step we forget the reference to the last version. This is exactly the point where we can use a TransientVector. A sketch, assuming the API mirrors Clojure’s (conj on the transient, then a final persistent() call to freeze it):
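```fsharp
// mutate a TransientVector internally and publish an
// immutable PersistentVector exactly once at the end
let toPersistentVectorFast items =
    let transient = TransientVector()
    for item in items do
        transient.conj item |> ignore
    transient.persistent()
```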
Here are the results:
Most higher-order functions on vectors like map are also implemented with this trick.
Tags:
clojure,
F#,
FSharpx,
persistent data structures
Tuesday, 29. May 2012
Rich Hickey created a very nice set of persistent collections for Clojure. I started to port them to FSharpx and today I want to present the PersistentVector. The basic idea is that we want to have something like an array but immutable.
Vectors (IPersistentVector)
A Vector is a collection of values indexed by contiguous integers. Vectors support access to items by index in log32N hops. count is O(1). conj puts the item at the end of the vector.
From http://clojure.org/data_structures
These vectors are very fast in practical applications since the depth of the underlying tree is not greater than 7. First performance tests show the following on my machine:
These results are not that far away from the Clojure/Java implementation (see below). The lookup seems to be a bit faster but assoc is slower. Maybe that has something to do with the internal array copy function of .NET:
After installing the FSharpx NuGet package you can use this Vector<T> from C#. A sketch – the member names here are illustrative; see the PersistentVectorTest.fs file for the exact API:
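```csharp
using System;
using FSharpx.DataStructures;

// build a vector by conjing items onto the end
// (names are illustrative - see PersistentVectorTest.fs for the exact API)
var vector = Vector.Empty<string>()
    .Conj("a")
    .Conj("b")
    .Conj("c");

Console.WriteLine(vector.Count());   // 3
Console.WriteLine(vector.Nth(1));    // "b"
```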
More samples can be found in the PersistentVectorTest.fs file.
Additional resources:
Tags:
clojure,
F#,
FSharpx,
Java,
persistent data structures,
vector
Thursday, 22. March 2012
F# 3.0 brings the new type providers feature which enables a lot of different applications. Today I’m happy to announce that the FSharpx project brings you a WPF designer for F# in the Visual Studio 11 beta.

A big kudos to Johann Deneux for writing the underlying XAML type provider.
- Create a new F# 3.0 Console application project
- Set the output type to “Windows Application” in the project settings
- Use nuget and install-package FSharpx.TypeProviders.Xaml
- Add references to WindowsBase.dll, PresentationFramework.dll, System.Xaml.dll and PresentationCore.dll
- Add a Xaml file to your project, something like the sample shown after this list
- Now you can access the Xaml in a typed way (see the F# snippet after this list)
- Start the project
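A minimal Window.xaml could look like this (the file and control names are just examples):

```xml
<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Demo" Height="200" Width="300">
    <Grid>
        <Button Name="Button1">Click me</Button>
    </Grid>
</Window>
```

And the typed access from F# – a sketch, assuming the provided Root member exposes the window (member names may differ between FSharpx versions):

```fsharp
open System
open System.Windows
open FSharpx

// the type provider parses Window.xaml at compile time
type MainWindow = XAML<"Window.xaml">

[<STAThread>]
[<EntryPoint>]
let main _ =
    let window = MainWindow()
    // Button1 is available as a typed property
    window.Button1.Click.Add(fun _ -> MessageBox.Show "Hello!" |> ignore)
    Application().Run(window.Root)
```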
Enjoy!
BTW: No code generation is needed for this. No more nasty *.xaml.cs files.
BTW2: Try to change the name of Button1 in the Xaml and press save. 
Tags:
F#,
FSharpx,
type providers,
WPF,
xaml
Tuesday, 31. January 2012
This is yet another blog post in my Currying and Partial application series. This is what I have posted so far:
In this post I want to show you a way to simulate type classes in C# and F#. Type classes are a wonderful feature in Haskell which allows you to specify constraints on your polymorphic types. We have this neither in C# nor in F#.
Let’s start with the following problem: We want to compute the sum of the squares of arbitrary numbers. We want to write something like this:
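In other words, something like this hypothetical generic method, which of course does not compile:

```csharp
// wishful thinking: C# has no generic * or + for an arbitrary T
public static T SumOfSquares<T>(IEnumerable<T> values)
{
    return values.Select(x => x * x).Aggregate((a, b) => a + b);
}
```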

The problem is we don’t have a generic * function and of course we’d also need a generic + operator and maybe a generic zero. Obviously we need a constraint on the generic parameter T since + might not be defined for any type. So let’s define an interface for numbers:
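A sketch of such an interface (the member names are my choice):

```csharp
// a poor man's "Num" type class: a dictionary of operations for T
public interface INumber<T>
{
    T Zero { get; }
    T Add(T a, T b);
    T Multiply(T a, T b);
}
```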
Nothing special here, so let’s get straight to the implementation for integers and doubles:
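Continuing the sketch:

```csharp
public class IntNumber : INumber<int>
{
    public int Zero { get { return 0; } }
    public int Add(int a, int b) { return a + b; }
    public int Multiply(int a, int b) { return a * b; }
}

public class DoubleNumber : INumber<double>
{
    public double Zero { get { return 0.0; } }
    public double Add(double a, double b) { return a + b; }
    public double Multiply(double a, double b) { return a * b; }
}
```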
So far so good. With this in our pocket we rewrite SumOfSquares() into this:
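Following the sketch above, the number implementation travels in as an explicit argument:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class Numbers
{
    public static T SumOfSquares<T>(INumber<T> num, IEnumerable<T> values)
    {
        return values
            .Select(x => num.Multiply(x, x))   // square each value
            .Aggregate(num.Zero, num.Add);     // and sum them up
    }
}
```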
The trick is that we pass the concrete implementation as the first parameter into our function. This works exactly like a type constraint or, as Simon Peyton-Jones would say: the vtable travels into the function. Notice that we don’t have access to the definitions of int or double. There is no way for us to express that int or double implement a number interface.
Now let’s try this out:
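For example:

```csharp
var ints = new[] { 1, 2, 3, 4 };
Console.WriteLine(Numbers.SumOfSquares(new IntNumber(), ints));       // 30

var doubles = new[] { 1.5, 2.5 };
Console.WriteLine(Numbers.SumOfSquares(new DoubleNumber(), doubles)); // 8.5

// Numbers.SumOfSquares(new DoubleNumber(), ints) - does not compile
```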
As you can see, this is perfectly type safe. We now have poor man’s type classes in C#. Yay!
Now what does this have to do with partial application? Let’s look at a sketch of the same thing in F#, with the type class as a record of operations:
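```fsharp
type Number<'a> =
    { Zero : 'a
      Add : 'a -> 'a -> 'a
      Multiply : 'a -> 'a -> 'a }

// instances for int and float
let intNumber = { Zero = 0; Add = (+); Multiply = (*) }
let floatNumber = { Zero = 0.0; Add = (+); Multiply = (*) }

// the vtable travels in as the first argument
let sumOfSquares (number : Number<'a>) values =
    values
    |> Seq.map (fun x -> number.Multiply x x)
    |> Seq.fold number.Add number.Zero

// partial application gives us specialized functions for free
let sumOfIntSquares = sumOfSquares intNumber
printfn "%d" (sumOfIntSquares [1; 2; 3; 4])   // 30
```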
We’re using a lot of partial application here. Exercise: Try to spot all the places.
Ok, you’re right. This post might be a little bit far away from the partial application stuff, but it’s still related. Somehow.
Tags:
C-Sharp,
Currying,
F#,
partial application,
type classes
Monday, 30. January 2012
In the last couple of days I started to write some posts about Currying and Partial application:
This time I want to show you how we can write a higher-order function which allows us to curry another function. Remember the multiplication function from the first post and its curried form, sketched here in C#:
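```csharp
// the normal form
Func<int, int, int> multiply = (x, y) => x * y;

// the curried form takes one argument at a time
Func<int, Func<int, int>> curriedMultiply = x => y => x * y;
```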
Currying
The question is: how can we automate this transformation process? Remember, we want the curried form for partial application.
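For example, with the curried multiply from above:

```csharp
// partial application: fix the first argument, get a new function back
var multiplyWith5 = curriedMultiply(5);
Console.WriteLine(multiplyWith5(3));   // 15
```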
Let’s look at the signature of the desired Curry function: in our case it has to take a Func<int, int, int> and return a Func<int, Func<int, int>>.
If we generalize the ints to generic parameters and fix the signature then the implementation is trivial (Compiler Driven Programming). There is exactly one way to make this work:
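A sketch as an extension method:

```csharp
public static class FunctionExtensions
{
    public static Func<T1, Func<T2, TResult>> Curry<T1, T2, TResult>(
        this Func<T1, T2, TResult> func)
    {
        return x => y => func(x, y);
    }
}
```

Now multiply.Curry() gives us the curried form from above.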
The F# implementation does exactly the same, but without all the annoying type hints:
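```fsharp
// transforms a function on a tuple into its curried form
let curry f x y = f (x, y)
```

Since F# functions are curried by default, curry only needs to handle tupled functions.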
Uncurrying
Of course you can undo the currying by applying a generic Uncurry-function:
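A sketch, placed next to Curry in the same static class:

```csharp
public static Func<T1, T2, TResult> Uncurry<T1, T2, TResult>(
    this Func<T1, Func<T2, TResult>> func)
{
    return (x, y) => func(x)(y);
}
```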
And in F# this looks like this:
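Again a one-liner:

```fsharp
// uncurry is the inverse: back to a function taking a tuple
let uncurry f (x, y) = f x y
```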
Libraries
Currying and Uncurrying are two very important concepts in functional programming so they are included in a couple of libraries:
- You can find it at the top of the Prelude in FSharpx (read more).
- You can find it in the Haskell Prelude.
- You can find similar functions in Scalaz.
- Adrian Lang wrote a library called partial-js which allows you to do something similar in JavaScript.
Tags:
C-Sharp,
Currying,
F#,
partial application
Sunday, 29. January 2012
My last blog post was yet another introduction to Currying and Partial application. Now I want to put the focus more on the why part. Why do we want to have our functions in curried form most of the time? This is the first part of a small blog post series and shows partial application in F# pipelines.
Using “fluent interfaces” is a popular technique to write code in a more readable form. In languages like C# they also provide a way to write code much faster: on every “.” we get IntelliSense, which gives us a “fluid” way of writing.
Let’s consider the following task: we want to compute the sum of the square roots of all odd numbers between 1 and 100. In C# we can use the LINQ method chaining approach in order to do something like this:
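Something like this:

```csharp
// using System; using System.Linq;
var result =
    Enumerable.Range(1, 100)
              .Where(x => x % 2 == 1)
              .Select(x => Math.Sqrt(x))
              .Sum();
```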

Now how does this look in F#? It’s basically the same. We replace every . with the |> operator and use the analogous Seq.* functions:
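A first attempt (which does not compile):

```fsharp
let result =
    [1..100]
    |> Seq.filter (fun x -> x % 2 = 1)
    |> Seq.map (fun x -> Math.Sqrt x)   // error: Math.Sqrt expects a float, x is an int
    |> Seq.sum
```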

Oops! What happened here? The F# compiler noticed a type error: Math.Sqrt needs a float as input, but we gave it an int. C# uses implicit casts between int and float, so we didn’t notice the problem there. Implicit casts are a little bit problematic, at least if you want proper type inference, so F# doesn’t have this feature. No problem, we are programmers, so let’s add the conversion manually:
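```fsharp
let result =
    [1..100]
    |> Seq.filter (fun x -> x % 2 = 1)
    |> Seq.map (fun x -> Math.Sqrt(float x))   // convert explicitly
    |> Seq.sum
```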
Notice that float is a function from int to float and not a cast.
Now you might ask: how does this all relate to partial application? The answer is simple: In every pipeline step we use a higher-order function (Seq.*) and apply the first parameter with a lambda. The second parameter is given via the |> operator from the line above.

By applying our rule of thumb from the last post we are able to remove the x parameters:
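```fsharp
let result =
    [1..100]
    |> Seq.filter (fun x -> x % 2 = 1)
    |> Seq.map (float >> Math.Sqrt)   // composition instead of a lambda
    |> Seq.sum
```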
Now let’s step back to C#. Keeping this knowledge in mind we try to apply the same rule in order to get rid of the x parameters:
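A sketch of the attempt:

```csharp
var result =
    Enumerable.Range(1, 100)
              .Where(x => x % 2 == 1)
              .Select(Math.Sqrt)   // error: no implicit int -> double conversion here
              .Sum();
```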

Oops again! Now we see the same error in C#. In this case the compiler doesn’t know how to apply the implicit cast. As I said, implicit casts are “problematic”, but we know how to fix this:
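```csharp
var result =
    Enumerable.Range(1, 100)
              .Where(x => x % 2 == 1)
              .Select(x => (double)x)   // make the conversion explicit
              .Select(Math.Sqrt)
              .Sum();
```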
Tags:
Currying,
F#,
partial application
Sunday, 20. June 2010
At the .NET OpenSpace Süd in Karlsruhe I had the opportunity today to take part in a coding dojo with Ilker Cetinkaya. Unfortunately the dojo never really got going, and we practically failed to implement even a single rule of Kata.RomanNumerals. The reason for this was (certainly not only) in my view that there were two strong factions within the team: on one side the “coders”, who actually wanted to let themselves be driven by the tests, and on the other side a group of “puzzlers”, who cannot identify with the test-first approach and prefer to develop an algorithm at the whiteboard.
In my view both approaches are perfectly fine on their own and can lead to very good solutions.
If you don’t believe, for example, that the strict test-first approach works for Kata.RomanNumerals, feel free to ask the participants of the 2nd Hamburg coding dojo, or browse the corresponding GitHub repository and follow the evolution of the commits.
What obviously does not work at all, however, is the middle way. If the two groups cannot agree on an approach and, in case of doubt, only the “loudest” change proposal wins, the result can be unsatisfying for both sides. At the Karlsruhe dojo, after a failed implementation phase and an abrupt stop, we ended up with just 7 red tests and a single green one. As for the green one, I am still asking myself why that particular test passed at all.
To avoid such a situation you should therefore decide at the beginning how you want to run the dojo together. In the following I will focus on my preferred variant, strict TDD, although a puzzler session with an extended analysis phase would also interest me for comparison.
As I said: I am explicitly convinced that both approaches can work – depending on the problem, sometimes better and sometimes worse.
Strict TDD and the simplest proposal wins
An argument often raised by the “puzzler” faction against the TDD approach is that it does not design ahead. Hey guys – that is exactly the trick. As long as I have no requirement for something (not in my head or on paper, but as a test), no complexity is added. It is then claimed that test-first leads to meaningless implementations at the beginning and that this code would obviously be thrown away at the end anyway. But that is simply not true. Even as I add more and more tests and my implementation becomes more and more generic, the previous solution always remains a special case of the generic variant and is therefore never lost.
It is precisely the focus on the minimal possible generalization of the current implementation that makes the correct algorithm fall out at the end. Of course, the puzzling re-enters the process at this point. But now we puzzle over much smaller and therefore easier problems – naturally much more often and in shorter cycles.
At future coding dojos I would therefore like to see the rule that the simplest proposal that leads to green gets implemented.
Increasing the pace by creating red tests in groups
At certain points, and especially at the beginning of many katas, you often see implementation patterns that could cover a whole class of tests at once. If you have such an idea you should of course use it and increase the pace. But instead of simply programming the idea, I would first write the complete group of tests and make sure that all the new tests are really red. In my view this shortcut is methodically completely unproblematic, since at this point only new requirements are being documented in the system, and these can then be used to justify new complexity.
But the same rule should apply here as well: the simplest implementation wins. If a participant can offer a simpler solution, that one gets taken.
Training through repetition
As Ilker already said in the retrospective, I would like to do another dojo with the same crew. Maybe that will even happen at the next OpenSpace. Because a coding dojo is of course not only about code but also about the people involved.
Although we could not show a solution at the end, I learned a lot about the development methodology of the other participants in that short time – I think it really gave me a lot.
Training through repetition – Reloaded
Since repetition is an important element of katas (and I am bored on the train), I have uploaded a cleaned-up variant of the kata with minimal commits to the repository, for myself and for others. At http://github.com/forki/DojoHamburg/commits/Karlsruhe you can follow the changes exactly from bottom to top. If anybody finds a spot where I implemented too much or where a change does not seem sensible and simple, I would be happy to discuss it.
Tags:
.Net Open Space Süd,
CodingDojo