Extending the AL Language

Over the past year, I’ve taught many people how to develop extensions for Business Central using Visual Studio Code. Usually I try to keep the workshop to standard features in VSCode and the standard AL Language extension. One of the things I don’t usually cover in any detail is an additional extension that extends the language: the “CRS AL Language Extension”. Since I am one of the owners of CRS, I could take part of the credit for it, but you should know that it was developed pretty much 100% by Waldo. If you want to read more about the extension itself, read Waldo’s latest blog post about it; he’s much better at explaining it than I am.

There are a bunch of really useful features in this extension, but I want to specifically mention a couple that I think are indispensable. In fact, I would bet quite a bit of money that Microsoft will include some of these features in the official AL Language sooner rather than later. Strictly speaking that wouldn’t even be necessary, since the extension is open source and available to everyone anyway.

Rename/Reorganize

The feature that I use the most myself is the rename and reorganize feature. The extension provides a way to set up how you want files to be organized, and what you want the naming convention to be. Personally I don’t care all that much about the specifics of any particular convention, as long as it is applied consistently, so that things end up in the same place in every project that you work on. I usually just leave the default settings in there, and I know exactly where to find my objects. Go here to read more about how you can customize it to your needs.

Run in Web Client

There are a few standard ways to run a page that you are currently working on. If you’ve added the page to a role center, you can just start the web client and browse to it. If not, you can use the Search feature and start your page from there. You could also set a startup object in launch.json (see the sketch below), and when you start the web client from VSCode, it will open on that object. Waldo’s AL extension provides a really easy way to start the current object from the Command Palette, using the ‘Run Current Object’ command. In the new version of the extension, this command also shows up in the status bar. Finally, you can right-click an object and select ‘Run Current Object’ from the context menu.
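To give you an idea of the launch.json approach, here is a minimal sketch. The server settings and the page number are hypothetical; the two properties that matter here are “startupObjectType” and “startupObjectId”:

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "al",
            "request": "launch",
            "name": "My sandbox",
            "server": "http://bcserver",
            "serverInstance": "BC",
            "authentication": "UserPassword",
            // When you start the web client from VSCode, it opens on this object.
            "startupObjectType": "Page",
            "startupObjectId": 50100
        }
    ]
}
```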

These two features are the ones that I use the most, and they alone make the extension worth getting. I could not do AL development work without it. Download this extension and use it. If you have ideas to improve it, let Waldo know; he loves getting feedback and making the extension better.

My Take on Using Docker

This past week, there was another post by my good friend Arend-Jan Kauffmann, about using Docker directly on Windows 10 (what are you still doing here? Go read AJ’s post!). He had previously written about using Docker in a Hyper-V VM, and he has helped me understand how this all works a number of times. For all the technical details you can read Tobias Fenster’s blog, but that goes over my head very quickly.

The reason why I am writing this is that I am very reluctant to take the step of installing Docker directly on my laptop. What works for me at the moment is this: I have Hyper-V enabled on my laptop, and I have a VM with just Windows Server 2016 (creating one with Windows Server 2019 is very high on the to-do list). Docker is installed in a snapshot of that VM, and that is where I do all of my development work. I wrote about this before; read it here.

See… I am the king of screwing up my computer. If there is anything, ANYTHING, that will mess up my computer and render it absolutely useless, I WILL find it, and I will kill my computer (I am hearing that in Liam Neeson’s voice by the way). I have had to re-install my laptop so many times because of things that went wrong. When I have a problem like this in my VM, I don’t even spend any time trying to figure out what went wrong (that gives me a headache just thinking about it). All I need to do is delete the snapshot, create a new one, and I’m back up in a matter of minutes. All my dev work is in repos that I sync regularly, so I never have to worry about losing any work.

I’ve read about running Docker directly on Windows 10, and it sounds very nice and easy to use. At the same time, I read blog posts and even tweets that mention damage to the host OS from normal Docker operations, and I just KNOW that if I try it, it will happen to me. My reluctance to use Docker directly on Windows 10 does not come from wanting to stay in the past; it comes from the knowledge that I’m going to screw up my computer.

Maybe I’m too cautious, but for now I will stick to my setup and continue to use Docker inside a VM. It works for me, and for now that’s good enough.

Docs on GitHub

Maybe you remember, last year I wrote about signing an App Package file, but that post was really about how I got to collaborate with someone at Microsoft, and one of the things we did was improve the online documentation for this topic.

At the time, I had noticed that there was a feedback button on each page in Docs, and underneath the feedback button it said something like ‘feedback is linked to GitHub Issues’, which led me to wonder if we’d ever see Docs in a repo that we could actually contribute to.

Then today, a tweet by one of the managers at Microsoft linked to a blog post about that very topic: here it is.

Just think how great this is! Not only do we get access to the source files of the actual documentation, we have a mechanism to contribute to the content. If you ever find yourself confused by any of the documentation, you can either leave your feedback on Docs, or you can make a change and submit a pull request to the repo itself! Either way, the actual system that is used to maintain the docs source files is also used to track issues, and you can create issues yourself in that very system!

This is what we call a BFD 🙂


Look Out for the Code Police

One of the coolest features in VSCode is the ability to check your code at design time for specific things. This post will explain how you can turn on code analysis, and how to get away with breaking the rules that it tries to enforce.

There are three things you have to know about code analysis. First, it is a feature that can be enabled and disabled at will. Second, there are sets of rules for specific purposes that you can turn on and off. Finally, you can define exceptions to those rules, and what to do when the code analyzer finds a violation of one of them. All three items are configured in the user settings, and the exceptions are then stored in a separate ruleset file (a ‘.ruleset.json’ file).

Open the user settings from the Command Palette. Different projects need different levels of scrutiny: one client may have an on-premises implementation, while another is developing an app for AppSource. These must follow different sets of rules, so they get their own code analyzers. Since each project is different, I would say you should define the code analysis attributes at the workspace level. You can set these features up in the UI rendering of the user settings, but I like to see the JSON file in the editor and use IntelliSense there.

The code analysis feature is turned on by setting “al.enableCodeAnalysis” to true. In the “al.codeAnalyzers” property, you define which sets of rules are enabled. The one you should always enable is the ‘CodeCop’, which enforces some basic syntax rules. Then, depending on whether you are doing development for AppSource or for a tenant-specific extension, you choose either the ‘AppSourceCop’ or the ‘PerTenantExtensionCop’. You should not have both of those last two enabled at the same time, because some rules for AppSource don’t apply per tenant, and vice versa.
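Here’s a minimal sketch of those two settings in a workspace settings.json (the ${...} placeholders resolve to the analyzers that ship with the AL Language extension):

```json
{
    "al.enableCodeAnalysis": true,
    "al.codeAnalyzers": [
        "${CodeCop}",
        "${AppSourceCop}"
    ]
}
```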

In my settings.json, I’ve turned on code analysis, and I have enabled the CodeCop and the AppSourceCop, just like in the sketch above. To show you what it looks like when code analysis finds a violation in the editor, I’ve created a very simple codeunit.
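It was something along these lines (a hypothetical codeunit; the single statement wrapped in BEGIN..END is the point):

```al
codeunit 50100 MyCodeunit
{
    trigger OnRun()
    begin
        // CodeCop rule AA0005: only use BEGIN..END to enclose compound statements.
        if GuiAllowed then begin
            Message('Hello, World!');
        end;
    end;
}
```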

Code analysis doesn’t like my code: the CodeCop does not approve of using BEGIN..END for a single statement. Personally I don’t agree with that rule, because I always use BEGIN..END in IF statements; I make fewer mistakes that way. The violation is not really a big problem, because the squiggly line under my code is green. If I had violated a really important rule, like missing a prefix in a field name, it would have been red.

Lucky for me, I can define for myself how certain rules are handled. Note that the Problems window shows which rule is broken (number AA0005). Let me show you how you can define what happens.

First, you create a new .json file in your workspace, and you set it up to be a ruleset. I am calling mine ‘Daniel.ruleset.json’ and I am putting it in my workspace root.
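The ruleset file looks something like this (the name and justification text are just what I put in mine; AA0005 is the rule from the Problems window):

```json
{
    "name": "Daniel",
    "description": "My exceptions to the standard code analysis rules",
    "rules": [
        {
            "id": "AA0005",
            "action": "None",
            "justification": "I always use BEGIN..END in IF statements"
        }
    ]
}
```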

Under “action” you set what you want to happen when this rule is broken. I don’t like this rule at all, so I’ve set the action to “None” to ignore it altogether. All you have to do now is tell the settings where to look for additional rulesets, like this:
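The “al.ruleSetPath” setting points at the ruleset file, relative to the workspace root:

```json
{
    "al.ruleSetPath": "Daniel.ruleset.json"
}
```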

The rule itself still works; I’ve just overridden its behavior to something that I like. Going back to my codeunit, the annoying little squiggly line is gone, and the violation is no longer listed in the Problems window.

No problem, I’m happy 🙂

One word of warning about using the ruleset to create exceptions to AppSource rules. Some of these rules are there because they are required for acceptance into AppSource. For instance, you MUST give EVERY field name a specific prefix/suffix. You can turn this rule off, but if any of your fields is missing the prefix/suffix, your app will not be accepted. Be aware of which rules you break, because the code police WILL find you eventually 🙂

NAV Techdays 2018 Recap

I started writing this post while flying across the Atlantic Ocean on the second leg of a three-leg journey home, a BA flight from London to Phoenix. It has been a very long trip that started when I traveled to Holland for Directions EMEA in Den Haag at the end of October. Since Directions and NAV Techdays were relatively close together, I decided to just stay with my family in Holland for those four weeks rather than fly back and forth twice in less than a month. This has been the longest that I’ve ever been away from home, and I was SO ready to be back in my own house.

NAV Techdays ended last Friday, and it was another fantastic week, as we’ve come to expect. As far as I can tell, the attendees in my pre-conference workshop were happy with the content; I can’t wait to get the feedback and see what I can improve for next time.

As per usual, Luc has posted the videos in record time, less than a week after the event. The whole playlist can be found here, and I wanted to highlight some of my favorite sessions. One of the most important developments in current technology is machine learning and AI. Dmitry Katson and Steven Renders put together an awesome session to introduce machine learning to us. The award for most entertaining session goes to Waldo and Vjeko, who put on a concert and wowed the audience with some really cool content. I also want to point out the session about CI/CD, which is going to be one of the most important things for everyone that is serious about implementing a professional development practice. Of course, I have to also mention the Docker session, which is the technology that makes it all possible.

Fortunately, next year’s event is not scheduled on Thanksgiving, which is a national holiday here in the US, one that typically involves lots of friends and family, and lots of food. I’ve had to miss it the past couple of years, and each time I’ve been bummed to hear the stories of all the great meals and gatherings that my family got to have without me. Next year I’ll be home for Turkey Day!

Thanks for another super event, it’s one of my favorite weeks of the year.

Extensible Enums

The AL language has an object type called ‘enum’. This object type defines a list of possible values in the form of a set of key/value pairs, plus captions. You can then create a field in a table or table extension with the enum as its data type, and the field will provide the user with a drop-down list of those values. Just like option fields, the database stores the numerical value of the enum in the field.

To define a new enum, you create a new .al file in which you define the enum as an object, and you list the options of the enum as follows:
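Something like this (the object number, name, and values here are hypothetical):

```al
enum 50100 ShipmentSpeed
{
    Extensible = true;

    value(0; Standard)
    {
        Caption = 'Standard delivery';
    }
    value(1; Express)
    {
        Caption = 'Express delivery';
    }
}
```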

Note that the ‘Extensible’ property is set to true, so it will be possible to extend the enum with additional options when the enum is used in other extensions.

To link a field in a table or a table extension to the enum, you define the field as an enum type field, and specify the enum name as part of the field definition. In the following example we’re adding an enum type field to the Customer table in a new tableextension:
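A sketch of that tableextension (again with hypothetical object numbers and names):

```al
tableextension 50100 CustomerExt extends Customer
{
    fields
    {
        // The field refers to the enum object by name.
        field(50100; ShipmentSpeed; Enum ShipmentSpeed)
        {
            Caption = 'Shipment Speed';
            DataClassification = CustomerContent;
        }
    }
}
```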

Now, in order for this enum to be extended, you would take a dependency on the app that includes the enum (which puts the original enum into the current app’s symbol references), and then you would create a new object called an ‘enumextension’, in which you define additional values:
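Again a sketch, with hypothetical names and values:

```al
enumextension 50100 ShipmentSpeedExt extends ShipmentSpeed
{
    value(50100; Drone)
    {
        Caption = 'Drone delivery';
    }
}
```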

Now when you look at the Customer Card, you can see all the values in the dropdown for the new field:

It is also possible to link an option field in C/SIDE to an enum in AL, as shown in the following screenshot:

When I learned about the extensible enum type, I was salivating at the thought that it would be possible to extend the available options in a ton of tables (type in sales/purchase lines, account type in journals, entry type in ledgers, to name just a few). It IS possible to do just that, and eventually the goal is to replace all option type fields in Business Central with enum type fields; it’s just that this comes with a crap ton of refactoring of existing code.

There is a lot of code that checks for all available option values, with an ELSE leg in the CASE statement for ‘other values’. All of that code will need to be refactored to allow for extended enums, instead of just raising an error for an unrecognized value.
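To make that concrete, here is a hedged sketch of the pattern (the record type is real, but PostCustomer and PostVendor are hypothetical helpers):

```al
// A CASE statement over a journal account type. With a plain option field the
// ELSE leg only catches "impossible" values; once the field is an extensible
// enum, a value added by another extension lands in the ELSE leg and errors out.
local procedure PostLine(GenJnlLine: Record "Gen. Journal Line")
begin
    case GenJnlLine."Account Type" of
        GenJnlLine."Account Type"::Customer:
            PostCustomer(GenJnlLine);
        GenJnlLine."Account Type"::Vendor:
            PostVendor(GenJnlLine);
        else
            Error('Account type %1 is not supported.', GenJnlLine."Account Type");
    end;
end;
```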

Now that you know about enums, start using them instead of option type fields, and make them as extensible as possible.

Directions 2018 Recap

Today’s the last day of Directions EMEA 2018, which was in Den Haag in The Netherlands. This is the town where I was born, and since I haven’t lived in Holland for almost 20 years, it was kind of strange to be here on a business trip. The event was hosted in the World Forum, which used to be called ‘Het Congres Gebouw’ which translates to ‘The Conference Building’. I had never been there for any conference, but it used to also be the home of the famous North Sea Jazz Festival.

My contributions to both events (I did the same workshop and sessions at Directions in San Diego and in Den Haag) were:

  • An all-day workshop to introduce C/SIDE developers to extensions and VSCode. There was a great buzz around the room at both events. Last year there was a bit of anger about the direction of NAV, but now that the dust has settled, I saw a lot of excitement about the new environment, and everyone was eager to learn new things.
  • AppSource Test Drive. In San Diego I was a co-presenter with Mike Glue, one of my fellow MVPs, who has developed the only Test Drive experience that is currently in AppSource. He could not make it to Holland, so I did this session by myself in Den Haag.
  • Source Code Management. I was surprised at how busy this session was; there was pretty much a full room at both events, and the audience in Den Haag even posed for the picture in this post, which was a lot of fun to do with them.

Other than being very busy with my own workshops and sessions, I was able to attend some sessions myself. The ones that I will remember most, and that I want to learn much more about, were the session about Machine Learning and the session about CI/CD for Business Central development. Especially the latter will be important, because if we want to build repeatable software on a bigger scale, we need to, we MUST, learn how to be more professional. The days of flying by the seat of your pants as a partner are over; we must all adapt and become the professionals that we’ve pretended to be for so many years.

During my sessions and workshops I asked almost every staff member who looked old enough to remember whether they knew anything about the history of the rooms. I would have loved to be able to say that I shared a stage with some of the greats of jazz, leaving out the fact that there are decades between those performances, of course. Unfortunately, nobody remembered, and there does not seem to be any history of the building that I can find. I did find old programs for the North Sea Jazz Festival, but nobody seems to know what the rooms used to be called.

Whether I can say I shared a stage with anybody or not, it was cool to be in The Hague for this conference.

AppSource Test Drive

Everybody knows about AppSource by now. Everybody is also struggling with how to make AppSource work for them, and especially with how to provide customers and prospects with a trial of their functionality. You could create a sandbox environment and try things out in there, but that doesn’t have test data that is specific to your product. You could install the product into a production tenant, but then you have an app in there that you might not want after all.

One of the lesser known features of AppSource is the Test Drive. This feature provides an ISV partner with a completely isolated trial experience of their product, in an environment that is completely under their control. What’s even better is that the Test Drive can be set up in a number of different ways, so you can tailor it exactly to your requirements.

The Test Drive can be a part of a comprehensive marketing strategy, in which you can implement an environment that can showcase even the most complex features of your software, in a way that provides ample opportunity to your customers to learn how to use your product in a non-production environment that is still in the cloud, without having to get a team of consultants onsite.

The way it works is essentially that the Test Drive is a standalone tenant with a template company. This template company has your product already installed, and it has proper test data already populated. You can create all the data that you need for your product to run properly. Then, through SaaSification techniques, you would implement a path into the features of your product, taking the user into your product one step at a time.

If you are interested in providing a Test Drive, please watch this video, in which I go into some more detail about this feature.

To find out more about the Test Drive, and for other information about apps for Business Central, visit http://aka.ms/ReadyToGo

Business Central on YouTube

Today is a Very Big Day! Allow me to tell you why 🙂

Maybe you remember, a few months ago when I posted some ‘how do I’ videos, I also mentioned that I was working on hours and hours of training material for Business Central. To make a long story short: my company was commissioned to create a long list of technical training videos. Originally, those videos were going to be published in the Dynamics Learning Portal (DLP). For those of you who don’t know, the DLP is a website where you can find a ton of training resources for a variety of Microsoft Dynamics products, including NAV and Business Central. There are a few caveats about DLP: not only is it inside PartnerPortal, you also have to pay extra to get access to it. Partners who don’t pay this extra fee do not get access to DLP.

As soon as I started working on these training videos, I started mentioning how cool it would be to have them available to a wider audience, and every chance I got I would repeat that to anybody who would listen. At some point the decision was made to lift these videos from behind the paywall. They would still be inside DLP, but anyone with PartnerSource access would be able to see them. I was still not happy with that; after all, PartnerSource is not free.

Now, exactly how much influence I have over these types of decisions is up for debate, but I do know that I recorded these videos, I had daily status calls with people from Microsoft, and I mentioned to everyone that it would be great to make ALL of these videos available to the public.

So, the reason why I am so excited is that I am very proud to share that, as of today, there is a separate channel for Business Central on YouTube, and the first thing that was published there is a playlist with ALL of the technical training videos that are currently available. That’s right: every video that is currently in the Business Central YouTube channel was recorded by ME!!

We still have more videos to create, and they should be added to the playlist as I finish them. At some point I’m expecting Microsoft to add more content to this channel, so it won’t be just me on there, but for now I am almost giddy with excitement.

Now, like I said, I don’t know just how much my insistence played a part in this, but when I first started on this project, the only plan was to publish these videos to DLP. In my mind, I single-handedly convinced Microsoft to release all this great content to the public.

Source Code Management for Business Central

So you’re getting your feet wet with Visual Studio Code, and you’re starting to get the hang of how developing extensions for Business Central works. You’re getting comfortable with all the elements of a VSCode workspace and how to connect your workspace to your Docker container. Now is the time to dive into source code management.

One thing that makes it easy to get started is that Visual Studio Code supports source code management as a built-in feature of the program itself, by integrating with the Git source control management system. You have to install Git separately, but once you have all the right bits in place, VSCode will keep track of the changes that you make to your objects, and you can take action based on that, right inside VSCode.

Install the right bits

From what I understand, the reason why Git itself is not installed as part of VSCode has to do with the open source license that comes with Git. As a result, you have to install Git separately. The installer can be found on the Git website at https://git-scm.com/. Click the download button and just accept all the prompts in the installer (hit ‘next’ about 9 times). All you need to do now is restart VSCode, and VSCode is integrated with Git. It will issue all the commands that you want as if they were features of VSCode itself.

Tell Git who you are

Basically, source control management is a system that keeps track of who made what changes to which files at what point in time. It is the ultimate CYA tool, and it also can and will be used against you :). You have to tell Git who you are by setting two global parameters: the user’s name and email address. Enter these in the Terminal window in VSCode:
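These are the two commands (with your own name and email address, of course):

```sh
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
```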

Sign up for GitHub and create a repository

Go to https://github.com/ and sign up for an account, using the same email address that you entered in VSCode as the Git user email. This is a super easy thing to do; I’m sure you can figure that part out yourself.

Once you’re in your GitHub account, create a repository there. Either click ‘new repository’ from the dropdown button right next to your avatar, or click ‘repositories’ and then ‘new’. Give it a name, and if you want to be adventurous, add a .gitignore and/or a license. You will figure out all the details as you learn to use this.

The repository on GitHub is your ‘remote’. The image shows a new empty repository called ‘Kerplunk’ in my GitHub account. I set it to private because there’s really nothing to share in there at this point.

Initialize your first repository

As you probably already know, the VSCode ‘workspace’ is nothing more than a folder on your local drive. By some lucky coincidence, Git also works with folders, only in SCM terms, a folder is called a ‘repository’. You will have to learn to call these ‘repos’ if you want to look like you know what you’re talking about. A ‘folder’, a ‘workspace’, a ‘repository’, they are really all the same thing.

In order to make this happen, you need to initialize the folder as a repo. The easiest way to do this is to clone the repo from GitHub into VSCode. As you may have noticed, GitHub shows a big green button called ‘Clone or download’. When you click that button, it drops down a little box where you can copy the link to your repo.

Now go back to VSCode, open the Command Palette, and issue the ‘Git: Clone’ command. It will prompt you for a link (paste in the clone link) and a folder. VSCode will then download the content of the repo from GitHub. Just some terms that you have to know: the repo on GitHub is the ‘remote’, the copy of that repo on your hard drive is called the ‘local repo’, and the process of downloading the remote to local is called ‘cloning’.
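The Command Palette command is just a wrapper around the clone command, so you could also run it in the terminal yourself (with your own repo URL; mine would point at the ‘Kerplunk’ repo):

```sh
git clone https://github.com/<your-account>/Kerplunk.git
```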

All Set

You are now ready to rock and roll with source code management. Open Windows Explorer and VSCode at the same time, and note that VSCode hides the .git folder from the workspace content. VSCode now knows that the workspace is also a repo, and it tracks all changes in there. Copy the files of a brand new AL project into the repo and see what happens: VSCode will notice that there are a few new files in the repo.

Note how the files in the workspace get a green color, and there is a badge with the number 3 on the Source Control tab.

Stage, Commit and Sync

Remember, you are always working on a LOCAL copy of the repo; you never work directly in the remote repo. Getting your changes into the remote is a two-step process. First you save the changes to the LOCAL repo, which is called ‘committing’ the changes. You get to decide which files you want to commit by way of a process called ‘staging’.

Click the Source Control button in VSCode, and note that all changed files are listed under a heading called ‘CHANGES’. When you hover your mouse over one of the files, it shows a plus sign. When you click that plus, it moves the file to another heading called ‘STAGED CHANGES’. Those staged changes are the files that will be committed to the local repo. You can stage all or some of the files and then hit the checkmark, which is the commit button. This writes the changes into your local repo.

The last step is to send the commits from your local to your remote repo. This is called the ‘Sync’ process. You can either issue the ‘Git: Sync’ command from the Command Palette, or hit the Sync button in the status bar. VSCode will send any new changes to the remote, and it will download any new changes from the remote.
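If you prefer the terminal, Sync is essentially a pull followed by a push:

```sh
git pull
git push
```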

Now check your repo on GitHub and verify that your changes are up there.

Congratulations my friend, you are now using source control management 🙂

Update Sep 24, 2018: added YouTube link. As I was writing this post, I was also working on a training video to get started with source code management. You can find it on YouTube here:

Update March 30, 2019 – new screenshots to show current look of VSCode