Translation File Names Must Match App.json

This is a quick follow-up on my previous post about creating a container for modified Base App development, specifically about the translation file issue. After publishing that post, I also reported the error message to the AL repo on GitHub and to the MicrosoftDocs repo.

As @NKarolak suggested, the names of the translation files must match the name in app.json. I was very skeptical about this, because this was never the case in any of the AppSource apps I’ve worked on, and the Doc for the translation files specifically says that there is no enforced naming of the translation files. It might be a new requirement though.

When I first created my AL workspace by exporting it from my container, the translation files had ‘%20’ in their names, along these lines (the exact language suffixes depend on your localization):
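Base%20Application.en-US.xlf
Base%20Application.fr-CA.xlf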

The name in app.json is ‘Base Application’, so the space character is replaced with ‘%20’, the URL-encoded representation of a space. Since the original error message did not mention the file name, I did not think that the file name itself was the problem.

I decided to try Natalie’s suggestion and replaced the ‘%20’ with a regular space, and voilà, the app published as expected.

Next, I changed the name in my app.json to ‘Super Base Application’ and it errored out again. Once I changed the translation files to match the name in app.json, it worked again.

Moral of the story: when developing a modified Base App, you have to match the translation files to the name in app.json.

Modified Base App on Docker

How to get started with modifying the Base Application using Docker

Many partners are still focused on doing custom development for their customers with their one-off implementations. MANY of those customers are existing customers with existing NAV systems with existing customized objects. As much as everyone wants to go to extensions only, and most partners see the need and are more than willing to make the necessary changes, the reality is that many of these existing customers do not want to pay for migrating all of their custom modifications. This reality comes with the need to modify the base app. Since C/SIDE is no longer available, the only way to do this is to use VSCode. This post will explain how you can create a Docker container, and use that container to do modifications on the Base Application.

To get started, click here to read the article on docs.microsoft.com. I say ‘get started’ because it was not enough to get me all the way there, which is the reason why I wrote this post. That article seems to have been written for an actual installation from the product DVD, and there are some additional things you need to know to make it all work if you want to use Docker. At least, that is the case as of the date of this post, because things may change :). I’ll try to revisit this post if they do.

Alright, so to make this work, you need a few things:

  • Create a Docker container based on the latest Business Central Docker image
  • Configure the Service Tier in the container
  • Extract the objects from the container into a new AL workspace
  • Uninstall and unpublish the Base Application and its dependencies

Create a new Container

For Business Central development I always use the NavContainerHelper module, so before you use any of the commands in this post, update your module:

Update-Module navcontainerhelper

To get the latest Docker image for Business Central I will be using the ‘mcr.microsoft.com/businesscentral/onprem:na-ltsc2019’ image. You can leave the ‘ltsc2019’ part out if you are not sure about the host OS, or if you are on Windows Server 2016. You can replace ‘na’ with your own localization, or leave that tag out altogether if you want to be on the W1 version. To read about which image to use, visit Freddy’s blog here and follow the links to what you need to know. Here is the script that I used to create my container:

$imageName = 'mcr.microsoft.com/businesscentral/onprem:na-ltsc2019'
$licenseFile = '<path to your BC 15 developer license>.flf'
$ContainerName = 'mysandbox'
$UserName = 'admin'
$Password = ConvertTo-SecureString 'Navision4ever!' -AsPlainText -Force
$Credential = New-Object System.Management.Automation.PSCredential ($UserName, $Password)


New-NavContainer `
    -accept_eula `
    -containerName $ContainerName `
    -imageName $imageName `
    -licenseFile $licenseFile `
    -auth NavUserPassword `
    -alwaysPull `
    -Credential $Credential `
    -includeAL `
    -updateHosts `
    -additionalParameters @("-e customNavSettings=ExtensionAllowedTargetLevel=OnPrem")

I use the ‘-alwaysPull’ switch to make sure that I always have the latest version of the Docker image. The ‘-includeAL’ switch is necessary to include references to the DotNet assemblies in the Docker container. The ‘-additionalParameters’ switch (h/t @tobiasfenster) is used to set the ExtensionAllowedTargetLevel property to ‘OnPrem’. I’ll explain how to set this with a simple PowerShell Cmdlet in a minute.

One more important switch is the ‘-useCleanDatabase’ switch, which can be used to uninstall and unpublish the Base Application and its dependencies, as I will discuss in a little bit. At this point, you have a vanilla Docker container with the latest on premises version of Business Central.

Configure the Service Tier

As the Doc states, there are three things you need to set. It is not very clear exactly how to do that, and not at all how that works on Docker, so let me just explain from scratch.

First, you need to know how to look at, and modify, the Service Tier settings inside the container. Some of these types of commands are available in the navcontainerhelper module, but some of them are not. I did find a Cmdlet to see the settings, but I could not find one to actually modify them. So, to cover all of it, I will show you how you can connect to the container and run regular BC PowerShell Cmdlets from inside the container.

Open a PowerShell ISE window as administrator, and run something like the following two commands (a minimal sketch; ‘mysandbox’ and ‘BC’ are the container and service instance names used throughout this post):
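Enter-BCContainer -containerName 'mysandbox'

Get-NAVServerConfiguration -ServerInstance BC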

Our container name is ‘mysandbox’, and you connect to it by using the ‘Enter-BCContainer’ Cmdlet. You can see how the prompt changes to show you that you are inside the container. At this stage, the navcontainerhelper Cmdlets are not available, so you will have to use the regular BC PowerShell Cmdlets. The second Cmdlet shows you all the properties of the Service Tier that runs inside your container, which in this version of Business Central is called ‘BC’.

According to the Doc, the following settings are important. I am using the names that are used in PowerShell rather than the names in the Doc.

  • ExtensionAllowedTargetLevel should be set to ‘OnPrem’, although it seems that the value ‘Internal’ also works.
  • DeveloperServicesEnabled should be set to true. This should be the default value of this particular setting.
  • The Doc also mentions the EnableSymbolLoadingAtServerStartup property, but I’ve received confirmation (h/t @freddydk) that this property was meant for hybrid C/AL and AL environments, so it is no longer needed for BC 2019 wave 2.

To modify these settings, use the following PowerShell command

Set-NAVServerConfiguration `
      -ServerInstance BC `
      -KeyName ExtensionAllowedTargetLevel `
      -KeyValue OnPrem

After modifying those settings, restart the service tier using the ‘Restart-NAVServerInstance -ServerInstance BC’ command. At that point, the service tier in your container should be configured for doing on premises development. The next thing you need to do is get the application objects out of the container.

Create AL Workspace from Base App

This step is easy, using a navcontainerhelper Cmdlet, so you first need to exit the container (type ‘exit’ and press Enter). Then, run this Cmdlet:

$ContainerName = 'mysandbox'
$UserName = 'admin'
$Password = ConvertTo-SecureString 'Navision4ever!' -AsPlainText -Force
$Credential = New-Object System.Management.Automation.PSCredential ($UserName, $Password)

Create-AlProjectFolderFromBcContainer `
    -containerName $ContainerName `
    -alProjectFolder 'C:\MyProjects\BaseApp' `
    -useBaseAppProperties `
    -credential $Credential 

One thing to note here is that the ‘-useBaseAppProperties’ switch uses the properties from the container. You will end up with a fully functioning AL workspace, with an app.json and launch.json that is configured to look inside the container for the objects and the DotNet probing path. You will need to configure this yourself if your configuration needs to be different. But, since we’re making this work for a standard container, we’re going to use the standard configuration as well.

One other important thing to note… As I am writing this post, I’ve had a persistent error message that prevented me from compiling the app, and I narrowed it down to the translation files. The annoying part is that the error message itself does not mention the translation files, but compiling started working again after I removed them. In your new BaseApp folder, there is a folder called ‘Translations’. Remove all files from that folder, except the ‘*.g.xlf’ file.
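If you want to do that in PowerShell, something like this one-liner does the trick (assuming the project folder from the script above):

Get-ChildItem 'C:\MyProjects\BaseApp\Translations\*.xlf' -Exclude '*.g.xlf' | Remove-Item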

Update 2019/11/27: follow-up on the translation file issue

One final thing to note is that this is just a simple AL workspace. In a real life situation, you are doing this for a particular customer, so you need to think about source control, workspace settings, things like that. The Cmdlet has some capabilities to help with that, so take a look here to see all of its available parameters.

The last thing you will need is to download the symbols for the system apps from the container. The Doc also mentions adding the assemblyProbingPaths to the workspace settings, but if you used the ‘-useBaseAppProperties’ switch, that is already taken care of for you and the setting will point to one of the container’s shared folders.

Uninstall / Unpublish Base App

In the previous step, you’ve created an AL workspace with all of the objects from the Base Application. Now, your container already has a Base App, so in order to create a modified Base App, you will have to get rid of the standard one first. You can be a PowerShell warrior and run the Cmdlets in this section, or you can also use the ‘-useCleanDatabase’ switch in the New-BCContainer Cmdlet in the first section. This will remove the Base App and all its dependencies from your container right away.

On to the PowerShell… In the Doc, under bullet 11, you will find the functions to accomplish this. These are regular NAV PowerShell Cmdlets, so you will need to enter the container first:

function UnpublishAppAndDependencies($ServerInstance, $ApplicationName)
{
    Get-NAVAppInfo -ServerInstance $ServerInstance | Where-Object {
        # If the dependencies of this extension include the application that we want
        # to unpublish, we have to unpublish this extension first.
        (Get-NAVAppInfo -ServerInstance $ServerInstance -Name $_.Name).Dependencies | Where-Object { $_.Name -eq $ApplicationName }
    } | ForEach-Object {
        UnpublishAppAndDependencies $ServerInstance $_.Name
    }

    Unpublish-NAVApp -ServerInstance $ServerInstance -Name $ApplicationName
}

function UninstallAndUnpublish($ServerInstance, $ApplicationName)
{
    Uninstall-NAVApp -ServerInstance $ServerInstance -Name $ApplicationName -Force
    UnpublishAppAndDependencies $ServerInstance $ApplicationName
}

This loads the functions into memory, and then you can run the script:

UninstallAndUnpublish -ServerInstance BC -ApplicationName "Base Application"

This will completely remove the Base App and its dependencies.

Ready to Start Developing

That’s it, you should now be ready to start your development. To see how that works, add a field to a table, add that field to its Card page, and hit Ctrl+F5. It will probably take a while to compile, but you should see your new field on the page.

Now I do need to say that I completely and wholeheartedly agree with the entire community: code customizations should really not be done anymore. All development should be done using extensions instead of changing the Base App itself. It makes everyone’s life a lot easier if you minimize the amount of development done to the Base App, so even if you have no other choice, try to design the solution in such a way that most of it is in an extension, and only modify the Base App for the parts that you can’t figure out how to do in an extension.

Update 2019/11/27: created a GitHub repo with the scripts

Extending the AL Language

Over the past year, I’ve taught many people how to develop extensions for Business Central using Visual Studio Code. Usually I try to keep the workshop to standard features in VSCode and the standard AL Language extension. One of the things I don’t usually cover in any detail is an additional extension that extends that language, the “CRS AL Language Extension”. Since I am one of the owners of CRS, I could take part of the credit for it, but you should know that it was developed pretty much 100% by Waldo. If you want to read more about the extension itself, read Waldo’s latest blog post about it, he’s much better at explaining it than I am.

There are a bunch of really useful features in this extension, but I want to specifically mention a couple that I think are indispensable. In fact, I would bet quite a bit of money that Microsoft will include some of these features in the official AL Language sooner rather than later. They would not even have to build them from scratch, since these extensions are all open source anyway.

Rename/Reorganize

The feature that I use the most myself is the rename and reorganize feature. The extension provides a way to set up how you want files to be organized, and what you want the naming convention to be. Personally I don’t really care all that much about the specifics of any particular convention, as long as what I am doing is consistent, so that at some point things will be in the same place for every project that you work on. I usually just leave the default settings in there, and I know exactly where to find my objects. Go here to read more about how you can customize it to your needs.

Run in Web Client

There are a few standard ways to run a page that you are currently working on. If you’ve added the page to a role center, you can just start the web client and browse to it. If not, you can use the Search feature and start your page from there. You could also set a startup object in launch.json, and when you start the web client from VSCode, it will open on that object. Waldo’s AL extension provides a really easy way to start the current object from the Command Palette, using the ‘Run current object’ command. In the new version of the extension, this command also shows up in the status bar. Finally, you can right-click an object and select the ‘Run Current Object’ command from the context menu.

These two features are the ones that I use the most, and they alone make the extension worth getting. I could not do AL development work without it. Download this extension and use it. If you have ideas to make it better, let Waldo know, he loves getting feedback and making it better.

My Take on Using Docker

This past week, there was another post by my good friend Arend-Jan Kauffmann about using Docker directly on Windows 10 (what are you still doing here? Go read AJ’s post!). He had previously written about using Docker in a Hyper-V VM, and he has helped me understand how this all works a number of times. Just to be sure I mention this here, you can read all about the technical details on Tobias Fenster’s blog but that goes over my head very quickly.

The reason why I am writing this is that I am very reluctant to take the step of installing Docker directly on my laptop. What works for me at the moment is this: I have Hyper-V enabled on my laptop, and a VM with just Windows Server 2016 (creating one with Windows Server 2019 is very high on the todo list). Docker is installed in a snapshot of that VM, and that is where I do all of my development work. I wrote about this before; read it here.

See… I am the king of screwing up my computer. If there is anything, ANYTHING, that will mess up my computer and render it absolutely useless, I WILL find it, and I will kill my computer (I am hearing that in Liam Neeson’s voice by the way). I have had to re-install my laptop so many times because of things that went wrong. When I have a problem like this in my VM, I don’t even spend any time trying to figure out what went wrong (that gives me a headache just thinking about it). All I need to do is delete the snapshot, create a new one, and I’m back up in a matter of minutes. All my dev work is in repos that I sync regularly, so I never have to worry about losing any work.

I’ve read about Docker straight on Windows 10, and it sounds very nice and easy to use. At the same time, I read blog posts and even Tweets that mention damage to the host OS from normal Docker operations, and I just KNOW that if I try it will happen to me. My reluctance to use Docker on Windows 10 directly does not come from wanting to stay in the past, but it is more from the knowledge that I’m going to screw up my computer.

Maybe I’m too cautious, but for now I will stick to my setup and continue to use Docker inside a VM. It works for me, and for now that’s good enough.

Docs on GitHub

Maybe you remember, last year I wrote about signing an App Package file, but that post was really about how I got to collaborate with someone at Microsoft, and one of the things we did was improve the online documentation for this topic.

At the time, I had noticed that there was a feedback button on each page in Docs, and underneath the feedback button it said something like ‘feedback is linked to GitHub Issues’, which led me to wonder if we’d ever see Docs in a repo that we could actually contribute to.

Now today, through a tweet by one of the managers at Microsoft, I came across a link to a blog post about that very topic: here it is.

Just think how great this is! Not only do we get access to the source files of the actual documentation, we have a mechanism to contribute to the content. If you ever find yourself confused by any of the documentation, you can either leave your feedback on Docs, or you can make a change and submit a pull request to the repo itself! Either way, the actual system that is used to maintain the docs source files is also used to track issues, and you can create issues yourself in that very system!

This is what we call a BFD 🙂

Extensible Enums

The AL language has an object type called ‘enum’. This object type defines a list of possible values in the form of a set of key/value pairs, plus captions. You can then create a field in a table or table extension with the enum as its data type, and the field will provide the user with a drop-down list of those values. Just like option fields, the database stores the numerical value of the enum in the field.

To define a new enum, you create a new .al file in which you define the enum as an object, and you list the values of the enum. A minimal sketch (the object ID and all names here are made up for illustration):
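enum 50100 "Shipment Method"
{
    Extensible = true;

    value(0; Ground)
    {
        Caption = 'Ground';
    }
    value(1; Air)
    {
        Caption = 'Air';
    }
}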

Note that the ‘Extensible’ property is set to true, so it will be possible to extend the enum with additional options when the enum is used in other extensions.

To link a field in a table or a table extension, you define the field as an enum type field, and specify the enum name as part of the field definition. In the following sketch we’re adding an enum type field to the Customer table in a new tableextension (again, IDs and names are made up):
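tableextension 50100 "Customer Ext." extends Customer
{
    fields
    {
        field(50100; "Shipment Method"; Enum "Shipment Method")
        {
            Caption = 'Shipment Method';
            DataClassification = CustomerContent;
        }
    }
}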

Now, in order for this enum to be extended, you would have the app that includes the enum as a dependency (which puts the original enum into the current app’s symbol references), and then you would create a new object called an ‘enumextension’, in which you define additional values.
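Staying with the hypothetical enum from above, such an enumextension could look like this:

enumextension 50101 "Shipment Method Ext." extends "Shipment Method"
{
    value(50100; Drone)
    {
        Caption = 'Drone';
    }
}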

Now when you look at the Customer Card, you can see all the values, including the extended ones, in the dropdown for the new field.

It is also possible to link an option field in C/SIDE to an enum in AL.

When I learned about the extensible enum type, I was salivating at the thought that it would be possible to extend the available options in a ton of tables (type in sales/purchase lines, account type in journals, entry type in ledgers, to name just a few). It IS possible to do just that, and eventually the goal is to replace all option type fields in Business Central with enum type fields, it’s just that it comes with a crap ton of refactoring of existing code.

There is a lot of code that checks for all available option values, with an ELSE leg in the CASE statement for ‘other values’. All of that code will need to be refactored to allow for extended enums, instead of just raising an error on an unrecognized value.
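To illustrate, the pattern that has to be refactored looks something like this (a made-up sketch, reusing the hypothetical enum from above; ‘ShipViaGround’ and ‘ShipViaAir’ are imaginary procedures):

case Customer."Shipment Method" of
    "Shipment Method"::Ground:
        ShipViaGround(Customer);
    "Shipment Method"::Air:
        ShipViaAir(Customer);
    else
        // This leg fires for any value added later by an enumextension,
        // so extended values would simply raise an error here.
        Error('Shipment method %1 is not supported.', Customer."Shipment Method");
end;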

Now you know about enums, start using them instead of option type fields, and make them as extensible as possible.

Source Code Management for Business Central

So you’re getting your feet wet with Visual Studio Code, and you’re starting to get the hang of how developing extensions for Business Central works. You’re getting comfortable with all the elements of a VSCode workspace and how to connect your workspace to your Docker container. Now is the time to dive into source code management.

One thing that makes it easy to get started is that Visual Studio Code supports source code management as a built-in feature of the program itself, by integrating with the Git source control management system. You have to install Git separately, but once you have all the right bits in place, VSCode will keep track of the changes that you make to your objects, and you can take action based on that, right inside VSCode.

Install the right bits

From what I understand, the reason why Git itself is not installed as part of VSCode is the open source license that comes with Git. As a result, you have to install it separately. The installer can be found on the Git website at https://git-scm.com/. Click on the download button and just accept all the prompts in the installer (hit ‘next’ about 9 times). All you need to do now is restart VSCode, and VSCode is integrated with Git. It will issue all the Git commands that you want as if they were features of VSCode itself.

Tell Git who you are

Basically, source control management is a system that keeps track of who made what changes to which files at what point in time. It is the ultimate CYA tool, and it also can and will be used against you :). You have to tell Git who you are by setting two global parameters: the user’s name and email address. Enter these in the Terminal window in VSCode, along these lines (with your own name and email, obviously):
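git config --global user.name "Your Name"
git config --global user.email "you@example.com"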

Sign up for GitHub and create a repository

Go to https://github.com/ and sign up for an account, using the same email that you entered in VSCode as the Git user email. This is a super easy thing to do; I’m sure you can figure that part out yourself.

Once you’re in your GitHub account, create a repository there. Either click on ‘new repository’ from the dropdown button right next to your avatar, or click on ‘repositories’ and then on ‘new’. Give it a name, and if you want to be adventurous, add a .gitignore and/or a license. You will figure out all the details as you learn to use this.

The repository on GitHub is your ‘remote’. For this post I created a new empty repository called ‘Kerplunk’ in my GitHub account. I set it to private because there’s really nothing to share in there at this point.

Initialize your first repository

As you probably already know, the VSCode ‘workspace’ is nothing more than a folder on your local drive. By some lucky coincidence, Git also works with folders, only in SCM terms, a folder is called a ‘repository’. You will have to learn to call these ‘repos’ if you want to look like you know what you’re talking about. A ‘folder’, a ‘workspace’, a ‘repository’, they are really all the same thing.

In order to make this happen, you need to initialize the folder as a repo. The easiest way to do this is to copy the repo from GitHub into VSCode. As you may have noticed, in GitHub there is a big green button called ‘clone or download’. When you click on that button, it drops down a little box where you can copy the link to your repo.

Now go back to VSCode, open the command palette and issue the ‘Git: Clone’ command. It will prompt you for a link (paste in the clone link) and a folder. VSCode will then download the content of the repo from GitHub. Just some terms that you have to know: the repo on GitHub is the ‘remote’, the copy of that repo on your hard drive is called the ‘local repo’, and the process of downloading the remote to local is called ‘cloning’.
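By the way, cloning also works from the terminal with a single command; using my ‘Kerplunk’ repo as an example (substitute your own account and repo name):

git clone https://github.com/MyAccount/Kerplunk.git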

All Set

You are now ready to rock and roll with source code management. Open Windows Explorer and VSCode at the same time, and note that VSCode hides the ‘.git’ folder from the workspace content. VSCode now knows that the workspace is also a repo, and it will track all changes in there. Copy the files of a brand new AL project into the repo and see what happens. VSCode will notice that there are a few new files in the repo.

Note how the files in the workspace get a green color, and how the Source Control tab shows a badge with the number of changed files (3 in my case).

Stage, Commit and Sync

Remember, you are always working on a LOCAL copy of the repo; you never work directly in the remote repo. Getting your changes into the remote is a two-step process. First you save the changes to the LOCAL repo, which is called ‘committing’ the changes. You get to decide which files you want to commit by way of a process called ‘staging’.

Click on the Source Control button in VSCode, and note that all changed files are listed under a heading called ‘CHANGES’. When you hover your mouse over one of the files, a plus sign appears. When you click on that plus, the file moves to another heading called ‘STAGED CHANGES’. Those staged changes are the files that will be committed to the local repo. You can stage all or some of the files and then hit the checkmark, which is the commit button. This writes the changes into your local repo.

The last step is to send the commits from your local to your remote repo. This is called the ‘sync’ process. You can either issue the ‘Git: Sync’ command from the command palette, or hit the sync button in the status bar. VSCode will send any new commits to the remote, and it will download any new changes from the remote.
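Under the hood, this maps to plain Git commands. The terminal equivalent of the whole stage/commit/sync flow looks roughly like this (sync is essentially a pull followed by a push):

git add .
git commit -m "Add my first AL objects"
git pull
git push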

Now check your repo in github and verify that your changes are up there.

Congratulations my friend, you are now using source control management 🙂

Update Sep 24, 2018: added YouTube link. As I was writing this post, I was also working on a training video to get started with source code management. You can find it on YouTube here:

Update March 30, 2019 – new screenshots to show current look of VSCode


Translations for Business Central Apps

My struggle to get the new translation files right has been real and too long to recount on this blog. When I finally got it right, it felt like it was more of a coincidence than actual skill to finally get it done.

It started with the app submission checklist, where one of the requirements is to “include all translations of countries your extension is supporting”. The app that I was working on was targeted for the US market, so all I needed was a translation file for ‘en-US’. The page on docs that explains translations explained what I needed to do. Or did it?

Turning on the translation feature in the app.json file and replacing all ML captions and such was easy enough. The trouble begins when you run into the details.
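For reference, turning the feature on is a single property in app.json (only the relevant property is shown here):

"features": [ "TranslationFile" ]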

As soon as you rebuild the app, VSCode generates the default translation file in a new folder called ‘Translations’, and the translation file is called “<YourAppname>.g.xlf”. Because the development language is ‘en-US’, this generated translation file is also specified for the ‘en-US’ language.

The translation page on Docs specifies that you need to copy the generated file “to avoid that the file is overwritten next time the extension is built”. Each time that you build the app it will re-create the generated xlf file. What that means is that you need to make a copy of the translation file and use the copy to do your actual translation work.

So I did make that copy, and because I was working on a US only app, I felt I was done with my translation. I had the generated file in my workspace, I had a copy of it for the ‘en-US’ language, I thought I was good to go. Alas, the app submission failed because they could not find any translation file in my app. Something was missing from my translation file.

The generated file only has nodes for “source”, which come from all of your text strings like labels and captions and comments and such. The translation itself goes in a node that is NOT included in the generated xlf file; you have to create that. The name of this node is “target”, and this is where the actual translation goes for each source element.

So, I copied and pasted all of the source nodes, made target nodes out of them, and by this time I had a direct line to the validation person at Microsoft, who was willing to hop on a Skype call and look at my workspace. He was also surprised to see that my workspace DID have the translation file, while the app file that I sent him did not. As it turns out, the target node needs to have an attribute called “state” with a value of “translated”.
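Putting it all together, a finished trans-unit in your copy of the xlf file should look something like this (a hand-made sketch; the id and the texts are made up):

<trans-unit id="Table 123456789 - Property 2879900210" size-unit="char" translate="yes">
  <source>Shipment Method</source>
  <target state="translated">Shipment Method</target>
</trans-unit>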

In order to get the translation file into its proper state, you should use a proper tool, and I found that the Microsoft Multilingual App Toolkit editor is the easiest one to work with. It puts the right elements into the xlf file, and when I used that tool, my app file was finally accepted.

How Do I Videos

My company was commissioned late last year to create some short ‘How Do I’ videos on how to accomplish a bunch of simple technical tasks in Business Central. In total we created about a dozen videos, and they were all published on the “Dynamics 365” channel on YouTube. This channel has a TON of video content about all sorts of topics related to Dynamics 365 in general. Business Central is actually not a main topic in this channel, they are mostly focused on Sales, Marketing, Operations.

Anyway, I wanted to put links to the videos that I recorded in a blog post, so it would be easy to find them. As a result of these videos, we’ve been commissioned to create 30 hours of real training material for Business Central. Yes you read that right… THIRTY HOURS!!!! That’s almost a whole week!!!! Unfortunately, these larger videos will be published in the Dynamics Learning Portal, which is a subscription service inside PartnerSource. You have to have PartnerSource access and also pay extra to be able to access those videos.

I’ve already brought up in an internal meeting that it would be really cool if those videos could be public, and I did not get shot down. I doubt that they will be though, but I will make it a mission to mention this at every step of the way. I’ll keep you posted. On to the videos…

The first one is an easy one, how to add a field in an extension:

Next, how to add a field with a foreign key relation to another table:

Next, how to add some AL code to an extension:

Then, how to add upgrade logic to an extension:

And finally, how to connect to a web service:

NAV on Docker in a Local Virtual Machine

Do you want to have a local development environment for Dynamics NAV and Dynamics 365 Business Central, where it is easy to spin up and remove new databases, in whatever version you need? Docker makes it all possible, and this post explains how I was able to get my environment ready for prime time.

One of the most common things that happens in my blogging life is that I will be working on a post about a certain topic, and then, as I come near a state where I feel like I can publish, someone else comes along and steals my thunder, and what often happens is that those other people write something much better than what I was working on. It’s demoralizing on one hand, but at the same time great to see so much quality content. Especially when a ton of it comes out on the same day (as it did today), you ask yourself: why am I even trying…

So, having just deleted the content of my attempt at some original Docker content, here are some of the most useful resources for this topic:

  • You can’t start with anything other than the vast amount of material by Freddy Kristiansen, who has been working tirelessly on improving this area. He came out with a truckload of material today. You can just go to his blog and look for it yourself, but let me give you links to the most useful ones:
  • My journey to finally get Docker to work on my local Hyper-V virtual machine was biased, because I am fortunate enough to work with Arend-Jan Kauffmann. Back in December, he wrote an excellent blog about setting up networking into a local VM and setting up Docker access, where the container runs in the VM and you can do development directly on the host machine. Thank you AJ for taking some time to look at my computer and helping me set this up.

I now have Docker containers running multiple versions of Dynamics 365 Business Central and NAV, and it is all working seamlessly.

I’m still figuring out how to utilize Hyper-V most efficiently. For instance, I’m not sure yet if I should have multiple VMs for multiple projects, or just keep a single VM with all of my projects. Especially when the version of the VSCode AL Language extension matters, I might need to modify my setup. I will be experimenting with this and I’ll share what I find as I go along.

One thing’s for sure though: with my current working Docker container, this is about as efficient as I’ve ever been in my entire history as a developer.