PowerShell Duration

This is a really quick tip, mostly for myself, so the script is saved where I can easily get to it: a quick way to output the duration of a PowerShell script. When a script takes longer than expected, in my mind I am waiting HOURS for it to complete, but it is probably just a minute or two. I’ve run into this before, where I spend WAY too much time trying to find a good, easy way to output the duration of a script. Without further ado, here’s the PowerShell code:

$startTime = Get-Date

<insert your script here>

$myTimeSpan = New-TimeSpan -Start $startTime -End (Get-Date)

Write-Output ("Execution time was {0} minutes and {1} seconds." -f $myTimeSpan.Minutes, $myTimeSpan.Seconds)

The key elements here are:

  • I was looking for a ‘Time’ function and didn’t realize that in PowerShell you use the ‘Date’ cmdlet to access the time: Get-Date returns a DateTime object
  • Coming from BC, I was looking for something called ‘duration’, so it took quite a bit of time to find New-TimeSpan. This creates a ‘TimeSpan’ object whose members make it easy to compose a user-friendly message. I’m only using the Minutes and Seconds members here, so keep in mind that they roll over every hour; for long-running scripts you may want the Hours member or TotalMinutes as well (a quick alternative sketch follows this list)
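If you prefer not to track the start time yourself, a shorter route is to wrap the work in Measure-Command, which returns the TimeSpan directly. This is just a minimal sketch, assuming your script does not need to write output to the console (Measure-Command swallows the output of the script block); the Start-Sleep line is only a placeholder workload:

$myTimeSpan = Measure-Command {
    # <insert your script here>
    Start-Sleep -Seconds 90   # placeholder workload for illustration
}

# Minutes/Seconds are components that roll over; TotalSeconds gives the full duration
Write-Output ("Execution time was {0} minutes and {1} seconds." -f $myTimeSpan.Minutes, $myTimeSpan.Seconds)
Write-Output ("Total duration: {0:N1} seconds." -f $myTimeSpan.TotalSeconds)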

Hopefully next time I need this I will remember to search my own blog 🙂

Create JSON in AL

Last year I developed an integration with an external REST API from Business Central. One of the things that I had to learn was how to deal with JSON. We now have a bunch of different JSON data types, and if you’re just getting into them, they are hard to keep apart. With this post, I’ll try to explain, as simply as I can, how to create a JSON object in AL.

JSON Basics

First, of course, the basics.

  • JSON is short for ‘JavaScript Object Notation’. Follow this link to read the actual standard, but if you’re new to this, don’t; it will only confuse you
  • It’s kind of like XML in concept but much easier to read. No attributes, no formatting rules, just a collection of key/value pairs
  • The equivalent of an ‘XML document’ is called a ‘JSON Object’. The start and end of these JSON objects are marked by curly braces (these {}).
  • Keys are always between double quotes; values are between double quotes when they are strings, while numbers and booleans are written without quotes. A key and its value are separated by a colon, and multiple key/value pairs are separated by commas. This is a very simple JSON object with some key/value pairs:
{
    "A Key": "Example One",
    "Another Key": "Another Value",
    "Number Value": 42,
    "Boolean Value": false
}
  • Nesting is done by adding a new JSON object as a value, like this:
{
    "A Key": "Example Two",
    "Nested thing": {
        "First nested Key": "nested value",
        "Second nested key": "some other value"
    }
}
  • A list of JSON objects is called a ‘JSON Array’. The JSON standard does not require that every element has the same structure, but in practice the objects in an array are usually structured the same; a mixed array is very hard to consume. The start and end of a JSON array are marked by square brackets. Each object has its own start/end curlies, and the elements are separated by commas. Here’s a simple example:
{
    "A Key": "Example Three",
    "List of Things": [
        {
            "name": "thing 1",
            "age": 10
        },
        {
            "name": "thing 2",
            "age": 12
        }
    ]
}

Note that the objects inside the array are identical in structure. That is really it; you should now be able to decipher any JSON response from any web service.

JSON Data Types in AL

In AL, there are four different JSON data types. All of them are .NET types that are wrapped in AL data types. You don’t have to worry about constructing these variables; they take care of themselves. All you have to do is make sure that you start with a fresh one, so pay attention to the scope of your variables. If you are not sure about that, use the Clear keyword to empty the variable out.

  • JsonObject – this data type represents an actual JSON object. It has properties and methods, and you use those to build the object as shown in the code examples below
  • JsonArray – this data type is also an object with properties and methods, and it contains the stuff that is between the square brackets
  • JsonValue – this represents a single value of a simple data type like a Text, an Integer, or a Boolean
  • JsonToken – in the JSON world, the ‘Token’ is similar to a Variant, and it can be any one of the other three Json data types. If you are not sure about the content of a JSON element, you can always use a Token as the data type

Show Me The Code!

When I started this section I was going down a rabbit hole of long sentences and complicated explanations. As I was reading what I wrote I realized that describing these things actually was not making any sense at all. Let me just give you the code, and if you have any questions about this please leave a comment below.

Each example refers to the JSON examples in the ‘JSON Basics’ section above.

Example One

The first example is very straightforward. This object only has simple key/value pairs, so we can simply add those to the object, like this:

    procedure ExampleOne()
    var
        MyJobject: JsonObject;
    begin
        MyJobject.Add('A Key', 'Example One');
        MyJobject.Add('Another Key', 'Another Value');
        MyJobject.Add('Number Value', 42);
        MyJobject.Add('Boolean Value', false);
    end;
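If you want to see the text that this object represents (for logging, or to paste into a JSON viewer), you can serialize it with WriteTo. A minimal sketch; the JsonText variable name is mine:

    procedure ExampleOneAsText(): Text
    var
        MyJobject: JsonObject;
        JsonText: Text;
    begin
        MyJobject.Add('A Key', 'Example One');
        MyJobject.Add('Number Value', 42);
        // WriteTo serializes the object, curly braces and all, into a Text variable
        MyJobject.WriteTo(JsonText);
        exit(JsonText);
    end;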
Example Two

The second example has another JSON object as one of its values. The ‘Add’ method of the JsonObject data type takes a string as the key name, and a JsonToken as the value parameter. In the first sample those were simple data types; for this second sample we’re going to create another object to use as the value parameter, like this:

    procedure ExampleTwo()
    var
        MyJobject: JsonObject;
        NestedJObject: JsonObject;
    begin
        MyJobject.Add('A Key', 'Example Two');

        NestedJObject.Add('First Nested Key', 'nested value');
        NestedJObject.Add('Second Nested Key', 'some other value');
        MyJobject.Add('Nested thing', NestedJObject);
    end;
Example Three

The final example includes an array of objects, so we’ll build those one step at a time. Everything is coded inline here; I hope you can see how the creation of the array objects could be refactored into a reusable function (there is a sketch of that right after the code below).

    procedure ExampleThree()
    var
        MyJobject: JsonObject;
        ThingJObject: JsonObject;
        MyJArray: JsonArray;
    begin
        MyJobject.Add('A Key', 'Example Three');

        Clear(ThingJObject);
        ThingJObject.Add('name', 'thing 1');
        ThingJObject.Add('age', 10);
        MyJArray.Add(ThingJObject);

        Clear(ThingJObject);
        ThingJObject.Add('name', 'thing 2');
        ThingJObject.Add('age', 12);
        MyJArray.Add(ThingJObject);

        MyJobject.Add('List of Things', MyJArray);
    end;
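Here is a minimal sketch of that refactoring; the AddThing name is mine and not part of the original example. The helper builds one array element, so the calling code only has to state the values:

    local procedure AddThing(var MyJArray: JsonArray; Name: Text; Age: Integer)
    var
        ThingJObject: JsonObject;
    begin
        // A local JsonObject starts out empty on every call, so no Clear is needed
        ThingJObject.Add('name', Name);
        ThingJObject.Add('age', Age);
        MyJArray.Add(ThingJObject);
    end;

With this in place, Example Three shrinks to two calls: AddThing(MyJArray, 'thing 1', 10) and AddThing(MyJArray, 'thing 2', 12).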
What about JsonToken?

We did not use a variable of type JsonToken directly, because when we are creating a JSON object we already know what we have. Indirectly though, we definitely ARE using a Token. The ‘Add’ method of both the JsonObject and the JsonArray data types uses a Token as the value parameter. Since a Token can be any simple type or any other Json type, you can simply throw any of those in and it knows what to do with it.

The JsonToken type will come in very useful when you start processing incoming JSON objects and you need to make sure you correctly convert values into compatible data types in AL.
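As a small preview, this is what retrieving a value through a Token typically looks like. A hedged sketch that assumes the incoming object has a ‘name’ key holding a string:

    procedure ReadName(IncomingJObject: JsonObject): Text
    var
        NameToken: JsonToken;
    begin
        // Get returns false if the key does not exist, so check before converting
        if IncomingJObject.Get('name', NameToken) then
            exit(NameToken.AsValue().AsText());
        exit('');
    end;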

What’s Next?

These are just some very simple examples of how to build a JSON object, but this is really all the logic you’ll ever need to create your own. The JsonObject will take care of the curlies and the commas, the JsonArray will take care of the square brackets and all the other stuff. The only slightly complex thing you’ll ever need to do beyond this is to now take the JsonObject and set it as the body of an HttpRequest.
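To give you an idea of that last step, here is a minimal sketch that serializes the object and sets it as the content of a POST request. The URL is a placeholder and error handling is omitted, so treat this as a starting point rather than a complete integration:

    procedure PostJson(MyJobject: JsonObject)
    var
        Client: HttpClient;
        Content: HttpContent;
        Headers: HttpHeaders;
        Response: HttpResponseMessage;
        JsonText: Text;
    begin
        MyJobject.WriteTo(JsonText);
        Content.WriteFrom(JsonText);
        // Replace the default text/plain content type with application/json
        Content.GetHeaders(Headers);
        Headers.Remove('Content-Type');
        Headers.Add('Content-Type', 'application/json');
        Client.Post('https://example.com/api/endpoint', Content, Response);
    end;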

Let me know in the comments if this is helpful. There are other super useful posts about this topic, but I wanted to explain this in my own words. I will also write one about how to read an incoming JSON object, and at some point I’ll share some helper logic that I’ve created to take care of conversions and such.

Insider Builds on Collaborate

As a Microsoft partner, you have to create a bunch of accounts to get into a bunch of systems. Microsoft is on a continuing mission to make things easier by putting most of it into a single portal called “Partner Center”. In this post I will focus on one particular thing: how to get access to insider builds of BC, which you need in order to validate your extension against future releases.

I’m writing this because the engagement was not listed for one of my clients. Microsoft support was no help, and I could not find this information in the obvious places. Just a quick post, mostly for my own reference, but I can’t imagine that I am the only one having difficulties getting into the program. Thanks to our little Twitterverse for helping me out.

Collaborate

First, you need access to Collaborate itself. From your Partner Center portal dashboard, you should see a tile with the word ‘Collaborate’ (the short link is aka.ms/collaborate). If you do not have access yet, this is where you will find a link to submit a request to be admitted.

Insider Builds

To get to the insider builds, you have to be enrolled in an engagement called ‘Ready! for Dynamics 365 Business Central’. As a Business Central partner, this engagement should be listed; all you do is hit the ‘Join’ button and you should get access within a couple of days.

Access to the insider builds is buried a little bit. Select the Ready! engagement and click the ‘Packages’ button. The package you need is the document called “Working with Business Central Insider Builds”. The thing you need to get to the actual builds is the $sasToken; this value is required to download the artifacts for an insider build, and it is the password that validates that you are allowed to use those artifacts. By the way, don’t forget to take the time to read the material in there, it’s important information.
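To show where that token ends up, here is a hedged sketch using BcContainerHelper. The storage account, country, and container name are examples, and parameter names can differ between BcContainerHelper versions, so follow the document in the package for the current values:

$sasToken = '<paste the $sasToken from the Collaborate package here>'

# Resolve the latest insider sandbox artifact and create a container from it
$artifactUrl = Get-BCArtifactUrl -storageAccount bcinsider -type Sandbox -country us -select Latest -sasToken $sasToken
New-BcContainer -accept_eula -containerName 'Insider' -artifactUrl $artifactUrl -auth NavUserPassword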

What if I don’t see the engagement?

Normally, when you register as a development partner for Business Central, you are supposed to see the “Ready to go” program in the available engagements, and getting access is as easy as clicking the ‘Join’ button. This does not always happen for some reason. The screenshot below shows the available engagements for one of my clients, and as you can see the program did not show up, so we could not click the join button.

If for whatever reason you do not see the engagement, do not submit a support ticket from Partner Center. These tickets go to a central support group that does not know about program-specific issues, and you’ll spend weeks going back and forth. You’ll ask for access, and they will tell you to click the Join button. You’ll tell them there IS no join button to click and they’ll say click the button anyway. This is not a knock on them, they just don’t seem to know the details.

Access to the BC programs is handled by the product team themselves, and their email is Dyn365BEP@microsoft.com. This will get your issue directly to the BC team, and they will know exactly what to do. Just provide your publisher name, your MPN ID, and the first name, last name, and email of the person that needs access. Once I did this, the issue was solved within a few days.

Adding new users

Once you have access to the engagement, you should be able to provide access to your own people by first adding them to the Partner Center account, and then by enrolling them into the engagement.

NAV on Docker in 2022

One of my clients asked me if I would be able to help them with an ‘upgrade’ of an add-on for Dynamics NAV for one of their customers. For this task I would have to get a working C/SIDE environment in a number of versions. It’s been years since I’ve done any C/AL development, and I thought this would be a cool task to work on. This post describes what I found does and does not work if you want to do this using Docker containers.

Background

Just to paint a picture… First of all, the end user is on NAV2017. They had an older version of this add-on, which was developed on NAV2013. The task at hand was to implement a newer version of the add-on, which was developed in NAV2018. Technically, this was a downgrade of the add-on objects so I had to be careful to avoid any incompatible object attributes. I won’t bore you with the details of the actual ‘upgrade’, nobody wants to read about those.

The Environments

To be able to identify the mods of the original add-on, I needed a C/SIDE environment for NAV2013R2. Since this version is not available in containers, I had to actually install this version.

The end user is on NAV2017, and the ‘new’ version of the add-on is in NAV2018. Both of these versions are available in containers, and supposedly all you need to do is put together the correct artifact URL. You can find this information on Freddy’s blog. Mind you though, the localization for the US is called ‘na’ in NAV, not ‘us’ like in BC.
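For reference, this is roughly what that looks like with BcContainerHelper. A hedged sketch: the container name and credential are mine, and depending on your BcContainerHelper version the cmdlets may behave a little differently:

$credential = Get-Credential

# 'na' is the North American localization for NAV versions; BC uses 'us'
$artifactUrl = Get-NavArtifactUrl -nav 2018 -country na -select Latest

New-BcContainer `
    -accept_eula `
    -containerName 'NAV2018' `
    -artifactUrl $artifactUrl `
    -auth NavUserPassword `
    -Credential $credential `
    -includeCSide `
    -updateHosts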

How about Docker?

So what does and does not work? This, to me, is a pragmatic question. I spent quite some time trying to make NAV2017 and NAV2018 work in containers, because I have used them successfully in the past. I have a terrible memory though, so I always start from scratch, and what I could find online was outdated. At some point I just started the NAV2017 DVD download while I was researching a problem. The download completed before I found the answer, so I abandoned the container idea for NAV2017 and just installed it. I have plenty of VMs available to do this, so one way or another I’ll get a working instance.

After going through a bunch of troubleshooting and following obsolete download links, I was able to make the NAV2018 container work. Freddy wrote about troubleshooting here, but not all of the links still work. You have to enable .NET Framework 3.5 and 4 in the Windows features, you need the Visual C++ redistributable for Visual Studio 2015 (“The program can’t start because MSVCP120.dll is missing”), and you have to get working SQL bits (weird: the Windows client did work, but C/SIDE did not until I installed those). What made the ODBC errors go away for me was the SQL Server 2012 Native Client; the SQL link in Freddy’s blog did not work for me.

I could not get NAV2017 to work at all; it would not even start. As I said, I got the DVD downloaded before figuring out the problem, and I installed it from there. It’s not like I have NAV2017 clients lining up, so I did not want to spend a second more than I had to on this.

Lesson Learned

In the end I installed NAV2013R2 and NAV2017, and I was able to get a NAV2018 container up and running. The lesson learned, though? I bet it is still possible to get the NAV2017 container to run right, but just as a safety net I have downloaded all versions of the DVD going back to NAV 5.0. To Microsoft’s credit, they still have most of those available as downloads, but you never know when they will remove them.

This post is mostly for my own benefit, but I wanted to share it in case anyone out there needs it too. Let me know in the comments how you’ve made NAV and C/SIDE work in containers.

THE Book on Automated Testing in BC

It has taken well over a year to write, and a good 8 months to review (and revise, and rewrite certain parts), and it has been delayed almost 3 months. I am SUPER proud to say though that the second edition of THE BOOK on automated testing in Business Central has been published!

As a reviewer I of course knew this was coming, and Luc finally shared the news.

What if I Already Have the First Edition?

Of course you do! You are a BC professional, so of course you’ve been adding automated testing to all of your projects right from the start, and you purchased Luc’s first book right when he wrote it. Still, there are a few reasons why you should purchase the second edition as well.

First of all, at 387 pages the second edition is almost twice the size of the first edition, which comes in at *only* 206 pages. Luc has added about a metric ton of material to the second edition: not just expanded information on existing topics, but chapters on completely new topics altogether.

Major Improvements

One of the things that I thought was lacking in the first edition was more in-depth information about Test Driven Development (TDD for short). The mechanics of automated testing were solidly covered, and I was able to apply this knowledge in my work. What I was missing was how to take this to the next level. I knew there was a large methodological body of work out there about TDD, but I did not know where to start looking for how it would be relevant to ME.

The second edition has filled that gap. Not only does Luc write eloquently about the methodology itself (there’s a whole chapter on TDD itself now), he puts it into the context of Business Central development specifically. He explains HOW you can use TDD in Business Central, he shows you step by step how to approach this, and he even provides a handy set of tools to support this methodology.

Luc spends a LOT of time writing about all aspects of TDD. More than just the nuts and bolts of creating test apps, he covers how to integrate automated tests in your daily development practice.

Advanced Topics

The brand new section called ‘Advanced Topics’ addresses some lesser-known subjects, such as refactoring your code to create more re-usable components, using standard components in more complex scenarios, testing web services, and even making YOUR code more testable.

In short, this second edition goes much more in-depth in just about every aspect of the book, plus it provides a wealth of information into a number of valuable topics that were not addressed in the first edition.

I am VERY proud to have played a part in writing this book, Luc did a phenomenal job in making the second edition a much more mature volume of THE book on automated testing in Business Central. Even if you already own the first edition, your money will not go to waste if you buy the second one.

Where can I get it?

Two places that I know of that you can get it:

  • The Packt Publishing website. Some people complain about delivery delays and such, but I don’t mind waiting a few days. When you get it from Packt directly, you have the option to get the book in print as well as e-book. The online reader on the Packt website is one of the best online readers I know, and with online access you can start reading right away, so to me it is well worth the handful of extra days it takes to deliver the print book.
  • Amazon, of course. You can’t beat the delivery time, but Amazon does not bundle print + eBook, and Packt does.

Import Media Files for SaaS

One of the standard ‘Problems’ when you’re in an AL workspace in VSCode is a warning that you are no longer allowed to use BLOB as a data type for images. This had been at the bottom of my priority list until I got a request to create a new image for a standard field. With this post I’ll show you how easy it is.

Media Field

The first element that you need is a field in the table. Instead of a BLOB field with subtype Bitmap, you now need a field of type ‘Media’. There is also a data type called ‘MediaSet’ but that’s not what we are going to use. Go to Docs to read about the difference between Media and MediaSets. The field is not editable directly because we will be importing the image through a function.

In addition to the field itself, you need a function to import an image file into the field. In the object below I have a simple table called ‘Book’ with a number, a title, and a cover. We use the ImportCover function to do the import, and implement it as an internal procedure so it can only be used inside the app. You can of course set the scope as you see fit.

table 50100 BookDnStr
{
    Caption = 'Book';
    DataClassification = CustomerContent;

    fields
    {
        field(10; "No."; Code[20])
        {
            Caption = 'No.';
            DataClassification = CustomerContent;
        }
        field(20; "Title"; Text[100])
        {
            Caption = 'Title';
            DataClassification = CustomerContent;
        }
        field(30; Cover; Media)
        {
            Caption = 'Cover';
            Editable = false;
        }
    }

    keys
    {
        key(PK; "No.")
        {
            Clustered = true;
        }
    }

    internal procedure ImportCover()
    var
        CoverInStream: InStream;
        FileName: Text;
        ReplaceCoverQst: Label 'The existing Cover will be replaced. Do you want to continue?';
    begin
        Rec.TestField("No.");
        if Rec.Cover.HasValue then
            if not Confirm(ReplaceCoverQst, true) then exit;
        if UploadIntoStream('Import', '', 'All Files (*.*)|*.*', FileName, CoverInStream) then begin
            Rec.Cover.ImportStream(CoverInStream, FileName);
            Rec.Modify(true);
        end;
    end;
}

Factbox for the image

Similar to how Item images have been implemented, you can create a factbox to show the book cover and add that to the Book Card. Using a factbox also makes it easy to keep the related actions close to the control.

page 50100 BookCoverDnStr
{
    Caption = 'Book Cover';
    DeleteAllowed = false;
    InsertAllowed = false;
    LinksAllowed = false;
    PageType = CardPart;
    SourceTable = BookDnStr;

    layout
    {
        area(content)
        {
            field(Cover; Rec.Cover)
            {
                ApplicationArea = All;
                ShowCaption = false;
                ToolTip = 'Specifies the cover art for the current book';
            }
        }
    }
    actions
    {
        area(processing)
        {
            action(ImportCoverDnStr)
            {
                ApplicationArea = All;
                Caption = 'Import';
                Image = Import;
                ToolTip = 'Import a picture file for the Book''s cover art.';

                trigger OnAction()
                begin
                    Rec.ImportCover();
                end;
            }
            action(DeleteCoverDnStr)
            {
                ApplicationArea = All;
                Caption = 'Delete';
                Enabled = DeleteEnabled;
                Image = Delete;
                ToolTip = 'Delete the cover.';

                trigger OnAction()
                begin
                    if not Confirm(DeleteImageQst) then
                        exit;
                    Clear(Rec.Cover);
                    Rec.Modify(true);
                end;
            }
        }
    }
    trigger OnAfterGetCurrRecord()
    begin
        SetEditableOnPictureActions();
    end;

    var
        DeleteImageQst: Label 'Are you sure you want to delete the cover art?';
        DeleteEnabled: Boolean;

    local procedure SetEditableOnPictureActions()
    begin
        DeleteEnabled := Rec.Cover.HasValue;
    end;
}

Add to the Page

All that is left is to add the factbox to the page where you have the import action. In this case I have a very simple Card page for the book, and the factbox is shown to the side.

page 50101 BookCardDnStr
{
    Caption = 'Book Card';
    PageType = Card;
    ApplicationArea = All;
    UsageCategory = Administration;
    SourceTable = BookDnStr;

    layout
    {
        area(Content)
        {
            group(General)
            {
                field("No."; Rec."No.")
                {
                    ToolTip = 'Specifies the value of the No. field.';
                    ApplicationArea = All;
                }
                field(Title; Rec.Title)
                {
                    ToolTip = 'Specifies the value of the Title field.';
                    ApplicationArea = All;
                }
            }
        }
        area(FactBoxes)
        {
            part(BookCover; BookCoverDnStr)
            {
                ApplicationArea = All;
                SubPageLink = "No." = field("No.");
            }
        }
    }
    actions
    {
        area(Processing)
        {
            group(Book)
            {
                action(ImportCover)
                {
                    Caption = 'Import Cover Art';
                    ApplicationArea = All;
                    ToolTip = 'Executes the Import Cover action';
                    Image = Import;
                    Promoted = true;
                    PromotedCategory = Process;
                    PromotedOnly = true;

                    trigger OnAction()
                    begin
                        Rec.ImportCover();
                    end;
                }
            }
        }
    }
}

This was a fun one to figure out. Let me know in the comments if it was useful to you.

Commitment Issues

So you’re evolving your source control practice. You have created a ‘dev’ branch to keep work in progress away from the ‘main’ branch, and maybe you are even using feature branches or branches per developer. As you are putting in place branch policies to prevent just anybody from committing changes, you notice a pattern of commits being created just by merging a change into ‘main’. In this post I will show you how to limit the number of commits.

The Setup

I have a simple project in Azure DevOps, with a repo that has a default branch called ‘main’ and a ‘dev’ branch. The main branch requires a reviewer, which in Azure DevOps means that changes can only be made through pull requests. I’ve committed a couple of changes in dev and created a pull request to move those changes into the main branch to be released. The PR has been approved and I have clicked the ‘Complete’ button on the PR. The Azure DevOps default Merge Type is set to ‘Merge’ and all looks well, so then I click the ‘Complete Merge’ button.

The Issue

When the PR completes, you go back to the Branches view in Azure DevOps, and you notice that the dev branch is 1 commit behind on Main….

Wait, what? Didn’t I just merge 2 commits from dev into main? The purpose of the PR was to synchronize the branches, but it appears that dev is now a commit behind main. The Merge Type of ‘Merge’ created its own commit in the target branch, main in this case. When you go back to VSCode and merge main back into dev, you will notice that the source files themselves have not changed, even though Git has created two extra commits just from the merge actions. You see, any time Git performs a true merge (as opposed to a fast-forward), it generates a new merge commit.

If you were to do a file compare between the two branches, there would be no differences in the source files. The not-so-good part of using the Merge Type ‘Merge’ is that you have to refresh your local main branch and merge it into your dev branch BEFORE you can continue developing. Maybe this works for many people, but I prefer to push out my dev work and continue developing without having to worry about being in sync with main.
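For reference, this is the kind of housekeeping the merge commit forces on you locally. A minimal sketch, assuming the branch names ‘main’ and ‘dev’ from this example:

# Bring the merge commit that the PR created into your local branches
git checkout main
git pull                  # fetches the merge commit on main
git checkout dev
git merge main            # dev now contains that merge commit as well
git push                  # push dev so the remote branches are level again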

A Better Way

There is another (I would argue a ‘better’) way to complete PRs. Another setup: I’ve updated my local main and merged that into my local dev branch; I have made another code change, which I have pushed to the remote from VSCode. The Branches view in Azure DevOps now shows dev as one commit ahead of main.

This time when you complete the PR, you select ‘Rebase and fast-forward’ as the Merge Type.

Because we are not merging, Git will not create a new commit. The result of this Merge Type is that the branches are in sync after completing the PR. There is no need for me to go back to VSCode and update my main branch.

So What is the Difference?

I’ll be totally honest here and tell you that I don’t really understand the difference between ‘Merge’ and ‘Rebase’. Something about how in the first the differences are actually merged and the changes are preserved in a new merge commit, while in the second the branch is rebased and the incoming changes are applied on top of that new base. I am sure that there is a distinct difference, and that it has an impact on the way you manage branches. I’ve looked, but I could not find any posts that clearly explain the two with proper examples. I’m still looking, and I still want to know, but I have more important things to worry about right now. Maybe a good topic to write about in the future :).

For practical purposes though, for the type of projects that I am involved in, I prefer the Rebase option. Personally, I prefer the branches to be synchronized when a PR is completed. Having to go back and pull the merge commit back into my local dev branch feels like a redundant step.

How to Populate a Test Suite

You’ve spent a bunch of time developing test codeunits, and you’ve figured out how to manually pull those into a Test Suite in the Business Central test toolkit. In this post, I will show you how you can automatically populate the test suite, which is especially useful for automatically testing your app in a build pipeline.

What Are We Talking About?

To keep the sample as simple as possible, I started with a Hello World app that I created with the “AL: Go!” command, plus a test app that has a dependency on it. In this test app, I have two test codeunits that don’t do anything. They are completely useless, just meant to show you how to get them into the test tool.

Those test codeunits are deployed into a BC container that has the test toolkit installed. This test toolkit (search for ‘AL Test Tool’ in Alt+Q) is a UI that allows you to run these tests manually. Just like a standard BC journal page, when you first open the tool, it will create an empty record called ‘DEFAULT’ into which you must then get the test codeunits. Click on the “Get Test Codeunits” action and select only the two useless codeunits. You should now see the codeunits and their functions in the Test Tool.

It’s kind of a drag to have to manually import the test codeunits into a Test Suite every time you modify something. To make it much easier on you, you can actually write code to do it for you. Put that code into an Install codeunit, and you never have to worry about manually creating a test suite again.

Show Me The Code!!

First we need an Install codeunit with an OnInstallAppPerCompany trigger, which is executed when the app is installed or re-installed. You could create a separate “Initialize Test Suite” codeunit so you can run this logic in other places as well, but here we are going to write the code directly in the trigger.

The code below mostly speaks for itself. I like recreating the whole suite from scratch, but you can of course modify this to your requirements. The important part of this example is a codeunit that Microsoft has given us for this purpose, called “Test Suite Mgt.”. This codeunit gives you several functions that make this possible.

codeunit 50202 InstallDnStr
{
    Subtype = Install;

    trigger OnInstallAppPerCompany()
    var
        TestSuite: Record "AL Test Suite";
        TestMethodLine: Record "Test Method Line";
        MyObject: Record AllObjWithCaption;
        TestSuiteMgt: Codeunit "Test Suite Mgt.";
        TestSuiteName: Code[10];
    begin
        TestSuiteName := 'SOME-NAME';

        // First, create a new Test Suite
        if TestSuite.Get(TestSuiteName) then begin
            TestSuiteMgt.DeleteAllMethods(TestSuite);
        end else begin
            TestSuiteMgt.CreateTestSuite(TestSuiteName);
            TestSuite.Get(TestSuiteName);
        end;

        // Second, pull in the test codeunits
        MyObject.SetRange("Object Type", MyObject."Object Type"::Codeunit);
        MyObject.SetFilter("Object ID", '50200..50249');
        MyObject.SetRange("Object Subtype", 'Test');
        if MyObject.FindSet() then begin
            repeat
                TestSuiteMgt.GetTestMethods(TestSuite, MyObject);
            until MyObject.Next() = 0;
        end;

        // Third, run the tests. This is of course an optional step
        TestMethodLine.SetRange("Test Suite", TestSuiteName);
        TestSuiteMgt.RunSelectedTests(TestMethodLine);
    end;
}

When you deploy your test app, it will now create a new Test Suite called ‘SOME-NAME’, it will pull your test codeunits with their test functions into the Test Suite, and it will execute all tests as part of the installation.

This code is very useful when you are developing the test code, because you won’t ever have to pull in any test codeunits into your test suite manually. Not only that, it will prove very useful when you start using pipelines, and you will be able to have precise control over which codeunits run at what point.
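To give you an idea of how that plays out in a pipeline, here is a hedged sketch using BcContainerHelper; the container name and credential are placeholders, and parameters may vary between versions. It runs exactly the suite that the install codeunit created:

Run-TestsInBcContainer `
    -containerName 'MyContainer' `
    -credential $Credential `
    -testSuite 'SOME-NAME' `
    -detailed `
    -XUnitResultFileName 'C:\ProgramData\BcContainerHelper\TestResults.xml'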

Dependencies

Here are the dependencies that I’m using :

  "dependencies": [
    {
      "id": "23de40a6-dfe8-4f80-80db-d70f83ce8caf",
      "name": "Test Runner",
      "publisher": "Microsoft",
      "version": "18.0.0.0"
    },
    {
      "id":  "5d86850b-0d76-4eca-bd7b-951ad998e997",
      "name":  "Tests-TestLibraries",
      "publisher":  "Microsoft",
      "version": "18.0.0.0"
    }
  ]

Credits

This post has been in my drafts for a while now, based on a question I posted to my Twitter; click on the Tweet below to see the replies. The code in this post was copied almost verbatim from Krzysztof’s repo; he has a link in one of the replies. I had worked out my own example based on his code, but I lost that when I had to clean up my VMs.

Use SystemId

If you’re like me, and you use the ‘Page Inspector’ a lot, you have undoubtedly scrolled all the way down, and noticed this group of fields at the bottom. One of those fields is called ‘SystemId’. Today’s post describes some of the important things that you need to know about this field.

The Basics

Toward the end of the ‘NAV’ era, you may have noticed a field called ‘id’ in a lot of tables. This field was meant to provide a single-field unique identifier for each record in the database. The original purpose was to offer a more industry-standard way to get to records from a web service, and it was implemented as an actual field in each table for which such functionality was provided.

In BC, the ‘id’ field has been obsoleted and replaced by a new system field called “$systemId”, an attribute that is assigned at the system level. The logic that assigns the values is not accessible to us; it all happens behind the scenes. You can read more about the field itself in Docs.

Some important things to remember:

  • Each SystemId value is unique; you will never see a SystemId value repeat across any table. You could cheat the system and manually assign a value to the field, but if you let BC assign the value it will always be unique
  • Its value cannot be renamed. Unlike the primary key fields of a record, the SystemId is not editable

So How Does It Work?

You will not see the SystemId as a field anywhere. Drill down into a table’s design and it is not there as a field. You can see the value when using the Page Inspector, but none of the pages have the field visible to the user. That would be pointless, because the SystemId does not have a functional purpose. It’s just a random GUID value that holds no meaning. However, EVERY table has a SystemId field, and since its value is always unique, you could theoretically use the field as its primary key.

You can leave the field alone altogether and just continue using the regular PK fields, and BC will automatically take care of the SystemId. If you are just using or developing ‘normal’ functionality, you will probably never even be aware of the presence of the SystemId.

One really handy thing though is that this is a single-field unique value! You could capture a table relationship in a single field regardless of the target table’s primary key! You could link to the Sales Line’s SystemId with a single-field foreign key, and not have to worry about document type or line number. All you need to do is store the SystemId in a GUID field, with a TableRelation to the target table’s SystemId field.

To retrieve a record using its SystemId, you use the GetBySystemId method instead of the regular Get method that you are used to; there is a small sketch of both ideas below.
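Here is a minimal sketch of both ideas; the table, field, and procedure names are mine, purely for illustration. It stores a single GUID foreign key that points at the Sales Line's SystemId, and looks the record up with GetBySystemId:

table 50110 SalesLineLinkDnStr
{
    Caption = 'Sales Line Link';
    DataClassification = CustomerContent;

    fields
    {
        field(1; "Entry No."; Integer)
        {
            Caption = 'Entry No.';
        }
        field(2; "Sales Line SystemId"; Guid)
        {
            Caption = 'Sales Line SystemId';
            // Single-field foreign key: no document type, document no., or line no. needed
            TableRelation = "Sales Line".SystemId;
        }
    }

    keys
    {
        key(PK; "Entry No.")
        {
            Clustered = true;
        }
    }

    procedure FindLinkedSalesLine(var SalesLine: Record "Sales Line"): Boolean
    begin
        // GetBySystemId replaces the usual Get("Document Type", "Document No.", "Line No.")
        exit(SalesLine.GetBySystemId("Sales Line SystemId"));
    end;
}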

How Is It Used?

The main purpose of the SystemId is to facilitate the API. The BC API endpoints are formatted in such a way that you specify the systemId as the unique value for the record that you want to retrieve. Yes you can still do a filter on the PK fields, but the standard way to get a particular record from an API endpoint is to provide its SystemId field value. When posting a new record, the response will return the new record’s SystemId.

You can look at the tables that are part of the API and see that a lot of them have new fields to store these SystemId values. Take for instance the Customer table: you can see fields like “Currency Id” and “Payment Terms Id”, with logic in OnValidate to update the ‘normal’ fields, which are all still there. The new Id fields do not replace the existing related fields; they live side by side.

Still Confused

Yes, it is still confusing; well, maybe ‘cumbersome’ is a better word. The mechanism is fairly straightforward, but it has not made our job easier. Not every table relationship has been updated to use the SystemId, and where you do see those ‘new’ Id fields, there is a double relationship that has to be kept in sync with validation code. Especially when you need to extend standard APIs, you will have to create those additional SystemId fields on top of existing foreign key fields to ensure data integrity.

Containers And Bacpacs

A while ago, an ISV client of mine was working on getting their app into the Embed program. Part of this process was to upload a bacpac with certain characteristics. The characteristics themselves are not relevant for this post, but as I was helping them, I thought I’d write this quick post to share how you can extract bacpac files from a container, and how to use those bacpac files to create another container.

The setup

I’m starting out with a standard BC container, which was created using BcContainerHelper, and it is called DenSterDev. Coincidentally, I am also using BcContainerHelper to extract the bacpacs and to create the new container. I am using the C:\ProgramData\BcContainerHelper folder to store the bacpacs, because that folder is recognized both inside and outside of the container.

Extract Bacpac Files

The container is multi-tenant, so there are two databases that we care about: one is the app database, and the other is the tenant database. Both of those are necessary to create the new container. If you have any apps installed on top of the standard container, those will be included in the bacpac file for the app database, and the bacpac for the tenant database contains the data itself.

The benefit of using BcContainerHelper is that we have very handy cmdlets to get all this stuff into and out of containers, and the bacpacs are no exception. The command is very easy:

Export-BcContainerDatabasesAsBacpac `
    -containerName 'DenSterDev' `
    -tenant default `
    -sqlCredential $Credential `
    -bacpacFolder C:\ProgramData\BcContainerHelper `
    -doNotCheckEntitlements

The tenant is the default tenant called ‘default’ that is created in each standard BC container. The sqlCredential is a PSCredential object that was created during container generation, using a username and a secure string password. As stated above, the bacpacFolder is a folder that can be accessed both inside and outside of the container. The entitlement flag bypasses the entitlement check to prevent an error. When you execute this script, the bacpac files will show up in the bacpacFolder:

Create New Container from Bacpac

We are going to use these same bacpac files to create a new container. I’ll use the same container name:

New-BCContainer `
    -accept_eula `
    -containerName 'DenSterDev' `
    -artifactUrl '<ProperArtifactURL>' `
    -auth NavUserPassword `
    -assignPremiumPlan `
    -updateHosts `
    -accept_outdated `
    -Credential $Credential `
    -additionalParameters @('--env appbacpac=C:\ProgramData\BcContainerHelper\app.bacpac','--env tenantbacpac=C:\ProgramData\BcContainerHelper\default.bacpac')

Same as before, the -Credential parameter contains a PSCredential object. Note that the -additionalParameters spans across multiple lines here, but that should go on the same line in your PowerShell editor.

This command will download all the necessary artifacts and create the same container as the standard. The only difference will be that the app and tenant databases will be created from the bacpac files in your folder, instead of the standard database from the artifact. You can follow along with the script in the terminal window.

Nothing earth-shattering, and made super easy by BcContainerHelper, but it took me a while to find the information and make this work. Hats off to Dmitry in the BC team; he was very patient with me as I got familiar with this process. Let me know in the comments if this was helpful or if you want to add anything.