Containers Are Now Multi Tenant

Containers are now multi-tenant by default. The New-NavContainer Cmdlet has had a “-multitenant” parameter for a while now; it’s just that not specifying a value for this parameter now means that you get a multi-tenant container. Presumably this is because multi-tenancy is the default for SaaS, and should be for everything. Maybe this was implemented with the switch from NavContainerHelper to BcContainerHelper and I just didn’t pay attention to the details.
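For example, to get the old single-tenant behavior you now have to turn the switch off explicitly. This is just a sketch, assuming BcContainerHelper and that $artifactUrl and $credential are set up the way you normally create your containers:

# leaving out -multitenant now gives you a multi-tenant container;
# pass -multitenant:$false explicitly if you want a single-tenant one
New-BcContainer -accept_eula `
    -containerName 'mycontainer' `
    -artifactUrl $artifactUrl `
    -auth NavUserPassword `
    -credential $credential `
    -multitenant:$false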

The way that I discovered this was that I was working on a training about the BC API, and I had learned that to get to the tenant, you specify it by its ID in the endpoint, like this: https://container:7048/BC/v2.0/[tenant]/[environment]/api/v1.0

Adding “?tenant=default” worked, but I was curious whether including the tenant ID in the URL path was supposed to work in containers. Hint: it is NOT supposed to work that way, at least not based on the replies that I got on Twitter.
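For reference, this is roughly how the working call looks from PowerShell (a sketch; the container name, port, and credentials are placeholders for whatever your container uses):

# NavUserPassword authentication: build a basic authentication header
$user = 'admin'
$password = 'P@ssw0rd'   # placeholder
$basic = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("$($user):$password"))

# the tenant goes into the query string; putting it in the URL path did not work for me
Invoke-RestMethod -Uri 'https://container:7048/BC/api/v1.0/companies?tenant=default' `
    -Headers @{ Authorization = "Basic $basic" }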

As I was working through these issues I had created a new container, and instead of removing it, I had set the -multitenant parameter to true and didn’t think about it again until I was working on another project. New container, different script, this time without the -multitenant parameter.

To make a long story short…. I was expecting my container NOT to be multi-tenant, and was annoyed to see that my Postman scripts (the version without specifying the tenant) did not work anymore. It took me WAY too long to discover what the issue was, but there you have it 🙂

Docker Artifacts

Quick post today to point out some new posts by Freddy about a change that he’s made to the container logic in his PowerShell module: a switch from downloading fully prepared images to getting artifacts and assembling images on the fly. I’ll just link to his blog and summarize. The implications for us Docker consumers, as it turned out, were so small that it was almost uneventful.

Background

Until recently, the process to create a container involved downloading a fully prepared image of that container. This was very easy: download the image, create the container. The problem was the sheer number of images that had to be prepared for every situation. Are you on Windows Server 2016? 2019? Which build? Which version of NAV? Which localization? Business Central OnPrem or Sandbox? All in all, to accommodate the entire community, there were hundreds if not thousands of images just to create these containers.

So, to cut down on the sheer volume of those images, we now have what is called Artifacts. Instead of a full image, you download a set of instructions to fetch and build a local image yourself, which is layered with a bunch of components. There are a few common building blocks for the generic image and SQL Server and other such components, and then there are the pieces that we need to prepare the NST, the database, the localization, etcetera.

Instead of having hundreds of images with the same common elements, each common element is a separate download that can be re-used for all images that need it. I’ll leave it to Freddy to explain the details.

What Changes For You?

When I first became aware of this change, I was very skeptical and concerned. I’ve been having some pretty persistent and annoying issues with Docker, and I had visions of it all crapping out on me with this change.

The actual change itself is not very big. Instead of specifying the image name, you specify an artifact URL (the ImageName parameter still exists, and it serves a very useful purpose, but it’s no longer necessary to create a new container). The script then does its work, just like it did before. I made the change, ran the script, and it created the container without any problem. My containers are usually very straightforward (most of the time I just need the latest US sandbox) and I have had a grand total of zero problems with this particular change.
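To give you an idea of how small the change is, the relevant part of my script went from an image name to an artifact URL, something like this (a sketch for my usual ‘latest US sandbox’ container; the names and authentication are just examples):

# resolve the artifact URL for the latest US sandbox
$artifactUrl = Get-BCArtifactUrl -type Sandbox -country us -select Latest

# same cmdlet as before, just with -artifactUrl instead of -imageName
New-NavContainer -accept_eula `
    -containerName 'mycontainer' `
    -artifactUrl $artifactUrl `
    -auth NavUserPassword `
    -updateHosts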

Posts on the Artifacts

So far, Freddy has written five posts about this change:

Just today, the last full image for OnPrem was uploaded, so it seems that artifacts are here to stay. Lucky for us, this particular change to NavContainerHelper has been seamless, at least for me. My New-NavContainer scripts still work, and I’ve had zero problems with the resulting containers, at least none that are related to artifacts.

PowerShell in Task Scheduler

One of the programs on my computer creates a shortcut on my desktop every time I restart my computer. As people who know me can attest, I am a little compulsive about certain things, and having anything on my computer’s desktop is one of them. The program in question does not have an option to disable this behavior, so someone suggested that I use Task Scheduler to remove the shortcut. Since I’ve been using PowerShell a lot, I thought I’d try that.

The PowerShell Script

First, you need a script to execute the task at hand. In my case I needed to remove the shortcut from my desktop. After searching for a while, I found it in the Users folder. Create a new PowerShell file in the ISE and write this script:

# full path to the shortcut that keeps coming back; replace the placeholders with your values
$MyFile = "C:\Users\<UserName>\Desktop\<TheShortCutYouMustDelete>.lnk"

# only try to delete the shortcut when it actually exists
if (Test-Path $MyFile) {
    Remove-Item $MyFile -Force
}

Obviously, you replace <UserName> with your user name and <TheShortCutYouMustDelete> with the name of the shortcut. I’m sure you can even replace the folder with an environment variable. Save this script as a .ps1 file and put it in a folder, like your Documents folder. I called mine RemoveShortcut.ps1 and put it in the folder “C:\Users\DENSTER\Documents\”.
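If you’d rather not hard-code the user folder, a variation with an environment variable could look like this (same script, just using $env:USERPROFILE):

# build the path from the USERPROFILE environment variable instead of hard-coding it
$MyFile = Join-Path $env:USERPROFILE 'Desktop\<TheShortCutYouMustDelete>.lnk'
if (Test-Path $MyFile) {
    Remove-Item $MyFile -Force
}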

For me, this was a need of the moment, because this program was stubbornly creating this shortcut. You can use PowerShell for a million different things, so anything you can do with PowerShell you can put in the Task Scheduler.

Update July 24, 2020 – Based on a conversation with @steveendow on Twitter the other day, automatically updating NavContainerHelper would be a good example. I had already written most of this post (started it back in June) and was about to add the update as a second example before publishing it when Steve created his version of this same post himself. I actually gave him the update script that he used, and I suggested using the Task Scheduler, so I don’t feel too bad for this redundant post, and it saves me from having to test the NavContainerHelper update script in the scheduler :).
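For what it’s worth, the core of such an update script can be a one-liner (a sketch I have not run from the scheduler myself; depending on how you installed the module you may want Install-Module with -Force instead, or the newer BcContainerHelper module name):

# pull the latest NavContainerHelper from the PowerShell Gallery
Update-Module -Name navcontainerhelper -Force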

Task Scheduler

Next, you need to create a task in the Windows Task Scheduler. Hit the Start button and type ‘task scheduler’, and the app will come up. When the Task Scheduler opens, you’ll see the “Task Scheduler Library”. To make it easier on yourself, create a new folder by right clicking on the Library node and then clicking “New Folder”. Enter a name and hit Enter. This creates a folder where you can save your own tasks. Trust me, it is VERY easy to lose tasks in the standard folders. Having your own folder is going to save you a ton of time.

My new folder “theDenSter”

To create a new task, right-click your new folder and select ‘Create Task’. Give it a name, and then select the SYSTEM account under security options. You can leave it under your own account, but then you will see PowerShell pop up while you are using the computer. Selecting the SYSTEM account makes the task run in the background.

You need a trigger to execute the task. I chose to run it at log on, and I set the task to repeat every 5 minutes for 30 minutes. The reason is that the program sometimes took a while to create the shortcut, so running my script just once did not always remove it; repeating it a few times did. Take a look at all the available triggers and select the one that makes sense to you. Make sure that the trigger is enabled.

Now the meat of the task: the action. We are going to run the PowerShell script, so select ‘Start a program’ and enter ‘powershell’ in the “Program/script” box. The file name goes into the “Add arguments” box:

-File "C:\Users\DENSTER\Documents\RemoveShortcut.ps1"

If you want to be able to manually execute the task, you have to check the box for “Allow task to be run on demand” on the Settings tab. This way you can run the task at any time right from the Task Scheduler.
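If you’d rather script the whole thing than click through the UI, registering an equivalent task from PowerShell could look roughly like this (a sketch using the ScheduledTasks cmdlets; the task and folder names are just the ones from this post):

# the action: run PowerShell with the script file as its argument
$action = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-File "C:\Users\DENSTER\Documents\RemoveShortcut.ps1"'

# the trigger: at log on (I added the 5-minute repetition on the Triggers tab afterwards)
$trigger = New-ScheduledTaskTrigger -AtLogOn

# register the task in my own folder, running as SYSTEM so it stays in the background
Register-ScheduledTask -TaskName 'RemoveShortcut' -TaskPath '\theDenSter' `
    -Action $action -Trigger $trigger -User 'SYSTEM'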

That’s it. Hope this is useful for you. I had not thought of using the Task Scheduler before, but it feels like I could automate a bunch of things with it. It’s nice to have the option. Let me know if you use the Task Scheduler, and what you use it for.

Span Some (but not all) Monitors

If you use multiple monitors, and you would love to span your RDP session or your Hyper-V session across some, but not all, monitors, then this is the post for you. Say you have three monitors: I will explain how you can span your RDP/VM session across two of those monitors, while still using the software that runs on the host on the remaining monitor.

Why I Wanted to Know This

My desk has my laptop on a stand on the left side, plus two external monitors to the right of the laptop. The first external monitor is my primary work screen, so it is positioned right in the middle of my desk. I think of the middle screen as “screen 1” and the one on the right as “screen 2”. My laptop screen is just ‘my laptop screen’ :).

My screen setup. Note that the laptop screen shows a third of a fantastic looking multi-screen spacescape, mostly hidden by my RDP session into my VM, which spans the two external monitors ‘screen 1’ and ‘screen 2’

For my daily development work I use Hyper-V virtual machines, and I used to run them full screen on either screen 1 or screen 2, depending on what I was doing. For development I’d have my VM full screen on screen 1, and other supporting programs on the host on screen 2. When writing documentation, I’d have the VM full screen on screen 2 and Word on screen 1, and I’d use Snagit for taking screenshots.

When I started including documentation (in markdown files) in my source code, it became difficult to work on documentation, because I would want to take screenshots of the app and then incorporate them into the markdown file. I would have to switch between VSCode and the app inside the VM, switch to the host to activate Snagit, and then switch back to the VM to process the screenshot in the documentation. A very tedious situation with lots of clicks. I did try to use ‘all monitors’ in the Hyper-V connector, but then it was difficult to take proper screenshots because I only had Snagit installed on the host. I’d have to minimize the VM, start the Snagit screenshot with a delay, re-activate the VM, and hope that it would be back up in time for me to take the screenshot.

So, then I thought it would be nice if I could span my VM across just 2 of my 3 screens. I could have VSCode on screen 1, the app on screen 2 (both inside the VM), and Snagit on the host on the laptop. Kudos go to Martin, one of the IT leaders at one of my clients, for teaching me how to do this for RDP. Although the Hyper-V connection uses RDP technology, there does not seem to be a way to do the same thing for Hyper-V – it’s either only 1 monitor, or all monitors. “No problem”, Martin said, “you can use RDP to connect to your VM”, and he showed me how to make that work as well. Let me share this useful nugget with you.

RDP Into your VM

First you need to make sure that your VM is set up to accept remote connections. This section explains how to do that for Windows Server 2019, which is what I use in my VMs.

Inside the VM:

  • Click Start and open Settings
  • Go to the “Remote Desktop” page, and turn on Enable Remote Desktop
  • While you are in this settings screen, note the name of your VM. You’ll use this to create an rdp file for each of your VMs

Back on the host, open Hyper-V Manager and select your VM. At the bottom you will see a pane with three tabs; the Networking tab shows the IP address of your VM. Be aware that this IP address can change without any clear indication of when or why: Hyper-V will often keep the same IP address across restarts of the same VM, and then all of a sudden change it.
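As a side note, you can also get the current IP address without opening Hyper-V Manager (a sketch, assuming the Hyper-V PowerShell module on the host and a VM named ‘MyVM’):

# list the IP addresses that Hyper-V reports for the VM's network adapter(s)
Get-VMNetworkAdapter -VMName 'MyVM' | Select-Object -ExpandProperty IPAddresses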

Now open Remote Desktop Connection, and expand the options. I used to use the VM’s IP address as the name, but because Hyper-V changes the IP pretty much every time that the host computer reboots, I was constantly updating the rdp file. Then I discovered that you can also enter the VM’s computer name into the name box. Now click on ‘Save As’ and save the rdp file with the VM name in a folder that is convenient for you; mine are in ‘My Documents’.

Very cool, congratulations, you are now using RDP to connect to your VM.

X of Y Monitors in RDP

The rdp file is nothing more than a text file with key-value pairs. We are going to edit this file using notepad, and add some screen properties. Feel free to use any other text editor. You can even open it with VSCode if you want.

You need the following settings in the rdp file (a sample of the relevant lines follows the list):

  • screen mode id:i:2 – Determines whether the session is opened in full screen, and the value 2 stands for ‘full screen’. I tried leaving this out and for fun tried to use the laptop monitor and screen 1 at the same time, and that did not work well for me. It seems that this only works when the screens have the same resolution capabilities.
  • span monitors:i:1 – I think this is a boolean parameter – 1 means on
  • use multimon:i:1 – Same here, I think this is a boolean so 1 for on
  • selectedmonitors:s:1,2 – The monitor index is specific to the hardware. To get the numbers, run “mstsc.exe /l” from a command prompt. On one computer, this was 1 and 2 for me. My monitors are connected through a USB-C dock, and when I connected a new computer to the same dock, this no longer worked for me. Same monitors, same dock, different computer gave me screen indexes 3 and 4.
The ‘MyVM.rdp’ file in Visual Studio Code, showing the relevant screen properties
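In case the screenshot is hard to read, the relevant lines in my rdp file look something like this (the computer name ‘MyVM’ and the monitor numbers 1,2 are from my setup; yours will differ):

full address:s:MyVM
screen mode id:i:2
use multimon:i:1
span monitors:i:1
selectedmonitors:s:1,2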

If you’re interested, here is where you can read more about rdp properties.

Now, instead of opening the VM through the Hyper-V Connection Manager, just double click the rdp file and it will connect. You should now see the desktop across the monitors that you defined in your rdp file. If you take a closer look at the image above, you can see that it shows an RDP session spanning my ‘screen 1’ and ‘screen 2’ monitors, with my laptop showing the host.

One other thing that is very convenient is that RDP remembers all the connections you make. If you pin RDP to the taskbar, you can right-click it and select which connection you want to use. I have rdp files for each of my VMs. All I have to do is start the VM, select the right file in RDP, and it connects across two of my three screens.

Great Improvement

For me this was a really big improvement to my workflow. I used to have to maximize, minimize, and switch between VM and host, and it was just distracting to me. Now I can have Outlook, Spotify, and Snagit all visible to my left, and have the real estate of two full monitors to work on whatever I want to run inside the VM, all at the same time. The most I need to do is click on my host desktop to make my Snagit keyboard shortcut work.

Personally I do most of my development work in my local VMs, but this would also work for regular RDP sessions. This gives you total control over the screens that you want to use.

Hope this helps you, let me know in the comments or send me a message on Twitter.

Bye Bye WITH

This past week there was a PGI for the MVPs about something that the Business Central team is preparing, and they were asking the MVPs for feedback. The topic was their plan to discontinue the WITH statement from the AL language. For me personally this is not a big deal, because I’ve always hated using WITH, especially when it spans more than a page of code. I get distracted very easily, and I lose track of which variable the fields apply to. As a result, I’ve always tried to write code that mentions variable names explicitly.

As you can imagine, emotions run high about this one, especially with people who like to get upset about stuff. I won’t point to specific instances of this because I don’t like to call people out and get them even more upset. I just want to share some details with you.

There is no ‘official announcement’, but Esben replied to an issue in the AL repo on GitHub here. Microsoft put together a virtual event and created a bunch of videos where they present a lot of content about Business Central here: https://aka.ms/virtual/businesscentral/2020RW1. The details about why they are getting rid of WITH are in the “Interfaces and extensibility” video, which you can find under the Developer track in the Library.

Good Reason

Microsoft is not just implementing this change because they like making our lives difficult. There are some very serious problems related to the WITH statement, especially surrounding dependencies between apps. There are two types of WITH statements:

  • Explicit WITH – this is when you see the keyword ‘with’ in your AL code, and it is meant to save you from repeating the variable name. Very handy if you want to set a bunch of field values in, say, a Customer record variable: you can type ‘with Customer do begin’ and then access the fields directly without having to type the variable name for each one of them
  • Implicit WITH – this is when a record is implied, like on page objects or in a report dataitem. You can simply type field names, and the record is implied because of where the code is written. By the way, it looks like for now they are letting us keep the implicit WITH (meaning we won’t have to type out ‘Rec’) in table and tableextension objects.

Let’s say you add a procedure called ‘IsImportant’ to a codeunit in which you have a WITH statement, or to a page (doesn’t matter which table the page shows). You call the IsImportant procedure to run some business logic. Everything works great.

The problem occurs when Microsoft then adds a field or a procedure with the same name. Let’s say your IsImportant function does something in logic that involves the Vendor table. If Microsoft now adds a field called IsImportant to the Vendor table, there is ambiguity about what the name refers to. In your code, the reference will never reach the Vendor table, because the compiler finds the function in your object before it gets to the Vendor field. Or vice versa, depending on how the code is written and what the scope is at that point. The presentation that I mentioned before has a bunch of examples to explain this, so come back in a few days and I will add a link to the recording to this post.

Not leaving us hanging

One thing to remember is that Microsoft is NOT going to just throw this out at us and leave us high and dry, without any help in fixing this. One of the people on the call had already done some investigating, and figured out that he will have to fix this in 21,000 places in a variety of apps for his customers. This is a LOT of work, and we all need some time to process this.

Some things to remember:

  • This is currently in preview in the AL insider build, and the target is the next major version of the AL language
  • At that time, these will still not be errors, but warnings. The statement will not actually go away until 2021. I can’t find in my notes if it will be wave 1 or 2, but at the earliest this will be spring 2021
  • You will not have to go searching for them yourself, the code analyzer will show you exactly where they are
  • Microsoft is working on tools to help us. The proof of concept that was shown to us was a first version where you have to open each page object and click on the tool and it will fix the implicit WITH on the whole page for you
  • There was also a tool to fix an explicit WITH in code. Both of these tools were still first versions, but they worked and looked like they were easy to use
  • There were already some discussions about options on maybe creating some external tools that utilize these tools. We have a wonderful community of people that are creating AL tools, and I am pretty sure that by the time this becomes really important (meaning by the time you can’t postpone this any longer) we will have really handy tools that will make this a piece of cake

Start NOW

You could suppress the warnings in your ruleset.json file (it’s rule AL0606 for the explicit WITH and AL0604 for the implicit WITH) and completely ignore the issue. You would be doing yourself a disservice, though. I would recommend that you stop using WITH immediately and start fixing it in every object that you touch from now on, at the very least the explicit WITH. I might wait to fix implicit WITH statements on page objects until there is a more user-friendly version of the tool, but I am absolutely going to try and see how much work it actually is.

If it’s one of those point-and-click fixes and it takes no effort at all, I can totally see myself whipping out 4-5 pages at a time while a container is rebuilding. If you have a ton of customers with a ton of customizations then yes, it could be a big task, and you might want to wait until there is a better tool to help you through it.

This is one of those things that you can’t postpone forever; you will have to address it at some point. I don’t like having to “fix” something like this either, but there are more important things in this world right now, and I just can’t get upset about it.

Sign App File – part 2

Quite a while ago I wrote about signing your app file, which is a requirement for AppSource. It’s been a while since I had to do this, so I went back to my blog and found the article quite lacking. This post is an attempt to fill in the blanks and give you all the information that you need to sign your app, all in one place.

Your first stop to read about this is right here, the Learn page about signing the app file specifically for Business Central. Most of what I’m about to tell you is in there, I’ll just elaborate a little bit more.

Basically, signing an app file, or an executable file, is a way to tag that file with an attribute that certifies where the file came from. If Acme Rockets signs their rocket skate app, the file has an attribute that shows Acme indeed digitally signed it. Take a look at the properties for ‘explorer.exe’, the executable for Windows Explorer. You can check out the digital signature that verifies that this file was signed by Microsoft.

In a nutshell, you need the following:

  • A Code Signing Certificate, in ‘pfx’ format
  • A code signing tool (I’m using ‘signtool’ here)
  • The SIP from your BC container (don’t ask, I still don’t really know)
  • A script to actually sign

Code Signing Certificate

The first thing that you need is the Code Signing certificate. This is a particular type of certificate (NOT the same as an SSL certificate) that you must get from an Authenticode licensed certificate authority (there’s a link in the Docs article mentioned above) such as this one or this one or this one or this one. I’m not affiliated with any of them, and GoDaddy doesn’t seem to provide code signing certificates anymore, but I’ve worked with certs from two of those companies and they both worked as advertised. For AppSource submissions, you need the regular “Code Signing” certificate, not the extended one or the one for drivers. Go shopping, because I’ve seen prices range between $199 and $499 per year for the same thing.

In order for the signtool to be able to use the certificate, it must be in ‘pfx’ format. One of the providers that I mentioned has a page here that explains how you can create this file format. The actual file will have a password on it, and you can save it on the computer where you have NAV/BC installed, or where your container lives. I usually have a working folder right in the C root where I do this kind of thing.

The Signing Tool

You’ll need a tool to sign the app file – Microsoft recommends SignTool or SignCode. Since their sample script is for SignTool, that’s the one that I used. Now, the text in Docs says that SignTool is automatically installed with Visual Studio, but that is only partially true. I actually downloaded Visual Studio to see if that worked, but the installation configuration that I chose did not include SignTool.

Signtool is part of the Windows SDK, which probably comes in one of the standard Visual Studio configurations. I don’t know which one, so you’ll have to make sure that it is selected when you are installing it. Another way to get it installed is to install the Windows SDK directly, which you can download here. I installed the one for Windows 7 on a Windows Server 2019 Hyper-V VM, and it worked for me. I know, I should have looked a little longer and used the Windows 10 one, but by that time my app file was already signed and dinner smells were filling my office.

The SIP

If you try to sign your app file now, you will probably get an error message that the app file is not recognized. The SignTool program needs to be able to recognize the app file, and for that purpose it needs something called ‘the SIP’ registered on the machine where you run the SignTool command. This appears to be some sort of hash/validation calculation package that is used to create digital signatures, and each program on your computer seems to have its own.

One way to get ‘the SIP’ is to install NAV/BC on the computer. If you’re like me, and you use containers exclusively, you won’t want to do this. Luckily, the NavContainerHelper module has a Cmdlet to retrieve ‘the SIP’ out of the container.

 Install-NAVSipCryptoProviderFromBCContainer YourContainerName 

This Cmdlet gets ‘the SIP’ out of the container and registers it on the host. At this point, you should be all set to sign your app file.

Script to Sign

The last element is the command to actually create the digital signature. Not much to say about that, so here it is:

"C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\signtool.exe" sign 
    /f "C:\WorkFolder\CodeSignCert.pfx" 
    /p "Your Password" 
    /t http://timestamp.verisign.com/scripts/timestamp.dll "C:\YourRepo\Publisher_AppName_1.0.0.0.app"

As you can see, my SignTool is in the Windows 7 SDK folder; you may need to search around for it. Installing the SDK is supposed to register SignTool so that you can just use ‘signtool’ as a command, but for some reason that did not work for me, which is why I specified the entire path. I split the command up to make it look better in this post; it needs to be all on one line.
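If you can’t find signtool.exe on your machine, a quick way to locate any copies is something like this (a sketch; the SDK install locations vary by version):

# search the usual SDK locations for signtool.exe and print the full paths
Get-ChildItem -Path 'C:\Program Files (x86)\Windows Kits', 'C:\Program Files\Microsoft SDKs' `
    -Filter 'signtool.exe' -Recurse -ErrorAction SilentlyContinue |
    Select-Object -ExpandProperty FullName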

One more thing – the timestamp specifies that the file was signed with a certificate that was valid at the time of signing, so the file itself will never expire. Of course, if you want to submit a new file after the certificate has expired, you will need to get a new certificate. If you don’t specify the timestamp, your app file will expire on the same date as your certificate.

Update March 26, 2020 – The timestamping service was provided by Symantec and it looks like they are rebranding that to ‘digicert’. Here is an article that explains the situation. You will need to change the timestamp part in your script:

Replace:
/t http://timestamp.verisign.com/scripts/timestamp.dll 
With this:
/t http://timestamp.digicert.com?alg=sha1

All Set

That’s it, you should be all set to sign your app file. I have to be honest and confess that I wrote this mainly for myself, because I spent WAY too much time trying to re-trace my steps and figure out how this works again. It’s now in a single post, hope it helps you as much as it helped me.

Update – March 18, 2020

Turns out, there is a simple command for this….

$MyAppFile = "C:\ProgramData\NavContainerHelper\Extensions\Publisher_AppName_1.0.0.0.app"
$MyPfx = "C:\ProgramData\NavContainerHelper\Extensions\CodeSignCert.pfx"
$MyPassword = ConvertTo-SecureString "Your password" -AsPlainText -Force
$MyContainerName = "YourContainer"

Sign-NavContainerApp -appFile $MyAppFile -pfxFile $MyPfx -pfxPassword $MyPassword -containerName $MyContainerName

No need to install anything. All you need is the app file and your pfx file with a password, and everything else happens in the container (as Freddy puts it “without contaminating the host”). Just copy both files into a shared folder where NavContainerHelper can read the files.

Mark Down your Documentation

We all have great intentions when we start a new project that THIS TIME we are going to create the best awesomest documentation! Yet somehow, when we are knee deep in testing, when we are wrestling product owners and project management types about scope creep, documentation seems to always fall by the wayside. What if I told you that there was a relatively easy way to include documentation in your development workflow?

Over the past few months I have been working on a number of apps that one of my clients is planning to submit to AppSource. One of Microsoft’s requirements (which you can read here in the Technical Validation Checklist) is that each app submission has what they call ‘User Scenario Documents’. Another requirement, this one from the Marketing Checklist, is that the publisher provides online help. These sound awfully similar to the requirement that you must provide a test app with automated tests for the user scenarios that I just mentioned. These three requirements result in an awful lot of writing, and they all essentially cover the same thing.

As I was organizing the apps that I am working on, I started looking around at how various companies provide their documentation, with Microsoft of course as the shiny example. You see, I’ve been a big fan of the way that they have moved their documentation to docs.microsoft.com. Not only do they have a way to provide direct feedback (which I’ve written about before here); as it turns out, their actual documents are also in a public repository that we as members of the community can contribute to. The source documentation is written in what is called ‘Markdown’, and there is a build process that publishes the documents to the Docs website. Markdown tools are often capable of outputting the content in multiple formats, so you might be able to generate your user documentation as well as your online help from the same source content.

Now before we get all fancy and automated, let’s go over the basics first.

Workflow

The documentation itself is done in so-called markdown files. Basically Markdown is a way to tag content with formatting. It is very similar in structure (not syntax) to HTML, which is not a coincidence because markdown was originally conceived as a text-to-HTML conversion tool.

You create *.md files with the text, plus image files for screenshots and such, and you structure the help content using the markdown formatting. It’s very simple and rudimentary, and this is totally by design. It takes a little effort to get used to it, but it’s actually quite simple to master, and it looks great very quickly.

Because you create separate *.md files for each topic, and you keep the images as separate files as well, markdown is absolutely PERFECT to be source controlled. At the moment I am simply including a ‘Documentation’ folder in my repository, so I am keeping my documentation inside my development repository. If you read my “Two Apps, One Repo” post, the documentation folder is at the same level as the other app root folders.

The Business Central documentation, for instance, is on a separate repository on its own, completely separate from the source code of the application itself. You could totally choose to track the documentation separately. There is no one way to do it, you can fit your process to your own needs.

I’ve been told that there are tools out there that will convert markdown to a number of different targets such as PDF, HTML, and even Word. I don’t really want to spend a bunch of time trying to figure those out, so let me stop at covering markdown itself. As I move into creating the documentation targets, I will follow up here.

The important take-away here is that you can include documentation in source control. As such, you can make it part of your development process, and track it to work items.

Resources

Here are some useful resources for Markdown itself, and some tools that convert markdown to other formats like PDF and HTML:

  • Introduction to Markdown by the guy that invented it. Not very descriptive or a particularly good reference, but kudos to him for coming up with the simple and powerful concept
  • Since we’re working with Microsoft products, and because they have taken a lot of time to establish a good process, I must include their guide for authoring Business Central content. You will want to read the Style Guide; look for the link in that post
  • I asked the community how they create help files and documentation, and was referred to this guy that works for/with Microsoft (as far as I can tell he is an external resource who manages the Docs team). He’s put together a video about their process, included in this blog post. He is very responsive on Twitter, so ask him if you have any questions
  • I have to include this post by Eva Dupont, who is responsible for all Business Central content in Docs. I’ve mentioned this before, but it needs to be repeated as often as I can. She also wrote a handy primer on migrating your help files.

You can also check out the many really good responses that I received when I asked the community:

I’m just collecting information at this point. There are lots of good ideas and lots of tools to help get there; it’s just a matter of picking the ones that fit your process. For me, I’ll try to keep it simple and effective, and I have a feeling that my clients will have strong opinions as well. I’ll keep you posted on what happens.

Two Apps, One Repo

Whether you’re working on AppSource apps or Per Tenant Extensions, or even a code-customized on-premises extension, by now every one of your AL projects includes a test app, right? This means that every development project is really two apps: the app itself and the test app. In this post I’ll show you an easy way to organize your workspace.

As you know, each AL workspace is essentially a folder with a file for each object, plus the necessary files to define the app itself. Since you are now also using source control, each AL workspace is also a Git repository. Logically, you would then have a repo for the app and another repo for the test app. What if I told you that you could have a single repo that includes both AL workspaces at the same time?

Start the Repo

First, you create the repo itself, whether you’re on GitHub or Azure DevOps. This is essentially an empty repository; we will create the AL workspaces in a bit. Let’s call it ‘MyRepo’. I’ll include a .gitignore and a readme.

I’ll clone this repo in VSCode. Set up the .gitignore file for AL, and VSCode is now tracking everything that happens in this repo. Normally, you’d fill the new folder with the AL workspace files, so that this single repo has one single AL workspace. To have the app as well as the test app in this repo, you simply create two AL workspaces in the same folder.

Add the App Workspaces

If you have existing apps, just copy the folders into the repository. If you are starting a new (test) app, use the ‘AL: Go!’ command, saving the project in the ‘MyRepo’ folder. Repeat for the test app. Each time that you create a new AL project, VSCode will automatically open the new workspace. To see the repo itself, re-open the MyRepo workspace and you will find the two apps in the same root folder.
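To give you an idea, after adding both apps the folder structure of the repo looks something like this (the folder names are just the ones I use later in this post):

MyRepo
    .gitignore
    README.md
    TheApp          (app.json, *.al files for the app itself)
    TheTestApp      (app.json, *.al files with the test codeunits)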

Going to the Source Control tab you can see that it tracks changes in both folders, and the single gitignore file works on both of them. How to set this up as an app and test app is for another post, but you now have one single repository with two apps.

Work on Each App Separately

When you are looking at MyRepo in VSCode, at some point you will get a message that the manifest is missing, and you’ll get all sorts of messages in the problems window. You can’t even download symbols like this. In other words, you can’t really work on your app this way. I usually keep a folder with some PowerShell to create my development containers; I plan to keep app documentation in there as well, and it appears to be the right place to store some pipeline files too. To work on the apps themselves, though, you will have to open the app folders in VSCode individually.

It gets a little tricky here because VSCode will show all the modified files for the whole repository in the Source Control tab. If you have modified files in both the app as well as in the test app, whether you are looking at one or the other, you will see them all.

The green one is a change I made to the gitignore file, which is in the root folder for the main repository. The red ones are changes in the AL workspace called ‘TheApp’, and finally the yellow ones are in ‘TheTestApp’. Whether you are looking at MyRepo or either one of the apps, you can commit any changes that you select to your repo from the App/TestApp folders.

The good part though is that you can simply open the app folder and work on that as if it were part of its own repo. Then you can commit and sync, open the test app and work on that. At the end of the day, when all objects are checked in, the single repo includes the app itself as well as the test app.

Each folder is considered its own AL workspace, so you can modify settings for the app and for the test app. What I really like about this way of working is that everything is part of a single repo. Two Apps, One Repo!

Using Workspaces

VSCode has something called ‘workspaces’. You may have noticed a selection on the File menu called ‘Add Folder to Workspace’. When you then save the workspace, VSCode will save a collection of attributes in what is called a ‘code-workspace’ file. I’ve tried to make this work, and I was wrestling with it a little bit. For instance, settings are defined inside the ‘code-workspace’ file instead of a separate settings.json file.

In addition there were some other things that confused me a little bit, so I posted a quick poll to my favorite hashtag, asking the community what they do, and there were quite a few votes.

It seems there are a few people working on posts about workspaces, so I will defer to them. I am looking forward to reading about that!

Excel Buffer for the Cloud

One of my clients asked me to help them convert an add-on that they developed in C/CIDE into an AppSource app. This add-on includes the functionality to export some data into an Excel file, using the Excel Buffer table.

The Excel Buffer table is also available in AL, but one of the issues is that as soon as you set the target of the extension to ‘Cloud’ (which, as you know, is an attribute in app.json), the compiler will scream at you that you can’t use certain functions of the Excel Buffer, because their Scope has been set to on premises. So if your C/AL object uses the ‘OpenExcel’ function, for instance, you can’t use that in an AppSource app because its scope is OnPrem. This type of thing usually takes me days to figure out, so I thought I’d ask Twitter with my favorite community hashtag #bcalhelp

Within a day I received a bunch of helpful suggestions – I just love this community! The one that put me over the top was a phone call with my good friend AJ, who not only showed me how, but also sent me some sample code that he was working on. He’s working on a blog post about this topic himself, so I’ll let him share that and I’ll post a link to his blog once he puts it online. I also want to mention Owen, because he had sent me essentially the same suggestions, but to an email address that I hardly ever use anymore, so I didn’t see them until days later.

As you can see by the trigger name, I had to put this into a report object (which I’ll share when I find time to put it in a repo). My main problem was that I needed to provide a way for the user to open the Excel file. For this you use the OpenExcel function, which does not actually open Excel; instead, it downloads the Excel file into the Downloads folder on your computer, and you can open the file from there.

Some additional pointers:

  • CreateNewBook creates a new file, with a new sheet. If you already have the file created, and you need to add a sheet to the existing file, then you would use the SelectOrAddSheet function
  • The WriteSheet function writes the records from the Excel Buffer table into the sheet. Each record represents a cell value
  • You will need to use the NewRow, AddColumn functions to ‘walk the grid’ of the cells in your sheet. Also very useful functions: ClearNewRow and SetCurrent. I ended up adding a GetCurrentRow function to an Excel Buffer table extension
  • The CurrentRow and CurrentCol variables in the Excel Buffer table are your friend. Forget about the letter/numbers of the Excel file itself, just use the row/column numbers
  • SetFriendlyFileName is not mandatory, but otherwise the file will be called ‘Book1’ or something

Like I said before, AJ is working on a post for this as well, and he said he was going to offer a repo with the objects as well. If I don’t forget I’ll create a sample report and offer that as a PR to AJ’s Excel repo.

Translation File Names Must Match App.json

This is a quick follow-up on my previous post about creating a container for modified Base App development, specifically about the translation file issue. After publishing that post, I also reported the error message to the AL repo on GitHub and to the MicrosoftDocs repo.

As @NKarolak suggested, the names of the translation files must match the name in app.json. I was very skeptical about this, because it was never the case in any of the AppSource apps I’ve worked on, and the Docs page for the translation files specifically says that there is no enforced naming of the translation files. It might be a new requirement, though.

When I first created my AL workspace by exporting it from my container, the translation files were named as follows:

The name in app.json is ‘Base Application’, so the space character is replaced with ‘%20’, which is the URL-encoded representation of the space character. Since the original error message did not mention the file name, I did not think that the file name itself was the problem.

I decided to try Natalie’s suggestion and replaced the ‘%20’ with a regular space, and voila, it published the app as expected.

Next, I changed the name in my app.json to ‘Super Base Application’ and it errored out again. Once I changed the translation files to match the name in app.json, it worked again.

Moral of the story: when developing a modified Base App, you have to match the translation files to the name in app.json.