Dynamic Enums

Although enums are static lists of values, there is a way to restrict which values can be used. With this post I will show you how to do that, and how you can control dynamically which values are available.

Didn’t Think it was Possible

I didn’t think dynamic enums were possible, but I asked the question on Twitter anyway. The golden tip was a page property called ‘ValuesAllowed’. As I was figuring out how to use this property I thought I’d write this blog post. When I returned to Twitter to post my findings, there were two more links to other people’s articles in the replies. I’ve since removed many of the details from this post, since they are essentially the same. Go and follow the links in the Tweet replies to read those details.

Both replies cover an essentially hard coded way to restrict option values. I want to take this one step further, where we provide a setup field that is used to manage the choices that you see. Now… I have to say I do NOT like using a static list of options for this purpose. We are still looking at a static list of values, and we are hardcoding what is visible. In my real-world scenario we had to put something in place quickly, and this was indeed a very quick ‘fix’.

Scenario

My actual scenario involved a rather controversial topic, so let’s use a silly scenario instead. We add a field to the Customer table called ‘Dessert Choice’, with an enum type that has 4 values: <blank>, Icecream, Cookies, and ‘Choice Declined’. You need an enum object, a table extension with a field based on the enum, and a page extension to add the field to the Customer Card. Let’s say you want to restrict the ‘Choice Declined’ option. Easy peasy, lemon squeezy, you add a ‘ValuesAllowed’ property to the field on the page extension, and you specify the values that you do want to allow.
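
If you want to follow along, the enum and the table extension behind that scenario could look something like this. This is just a minimal sketch; the object names, IDs, and captions are my own examples, chosen to match the page extension further down:

enum 60000 DessertChoiceDnStr
{
    Extensible = true;

    value(0; Blank) { Caption = ' '; }
    value(1; Icecream) { Caption = 'Icecream'; }
    value(2; Cookies) { Caption = 'Cookies'; }
    value(3; ChoiceDeclined) { Caption = 'Choice Declined'; }
}

tableextension 60000 CustomerDnStr extends Customer
{
    fields
    {
        field(60000; DessertChoice; Enum DessertChoiceDnStr)
        {
            Caption = 'Dessert Choice';
            DataClassification = CustomerContent;
        }
    }
}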

In my real-world scenario, my client needed a way to restrict the available options for one company, and provide all of them in others. What we ended up doing was add a toggle to a setup table to turn this restriction on or off.
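
A minimal sketch of such a setup table could look like the object below. The table name, the AllowDecline field, and the GetRecordOnce helper are assumptions on my part, chosen to match the page extension code in the next section:

table 60001 MySetupDnStr
{
    Caption = 'My Setup';
    DataClassification = CustomerContent;

    fields
    {
        field(1; "Primary Key"; Code[10])
        {
            Caption = 'Primary Key';
            DataClassification = CustomerContent;
        }
        field(10; AllowDecline; Boolean)
        {
            Caption = 'Allow Decline';
            DataClassification = CustomerContent;
        }
    }

    keys
    {
        key(PK; "Primary Key")
        {
            Clustered = true;
        }
    }

    var
        RecordHasBeenRead: Boolean;

    procedure GetRecordOnce()
    begin
        // read the (single) setup record only once per instance of this variable
        if RecordHasBeenRead then
            exit;
        if not Get() then
            Insert();
        RecordHasBeenRead := true;
    end;
}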

Show Me The Code

As per usual I was writing and writing, using SO MANY words to describe the situation, and decided to just give you the page extension itself, assuming that you can figure out the fields that I am using.

pageextension 60000 CustomerCardDnStr extends "Customer Card"
{
    layout
    {
        addafter(Name)
        {
            field(DessertChoice; Rec.DessertChoice)
            {
                ApplicationArea = All;
                ToolTip = 'Specifies...';
                Visible = AllVisible;
            }
            field(RestrictedDessertChoice; Rec.DessertChoice)
            {
                ApplicationArea = All;
                ToolTip = 'Specifies...';
                ValuesAllowed = Blank, Icecream, Cookies;
                Visible = (not AllVisible);
            }
        }
    }

    var
        MySetup: Record MySetupDnStr;
        AllVisible: Boolean;

    trigger OnOpenPage()
    begin
        MySetup.GetRecordOnce();
        AllVisible := MySetup.AllowDecline;
    end;
}

Basically you create multiple controls in the page extension for the same field, and you toggle visibility based on a field in a setup table. You could even do this at a record level in a list, by using an InDataSet variable, and putting the code in the OnAfterGetCurrRecord trigger. Again, I’m thinking this should have been done with a table with actual functionality, but this way uses very little code and we had to put something in very quickly.

That’s it, nothing fancy. Not very clever, but useful in my client’s scenario.

Partial Records with SetLoadFields

Fetching and updating records has historically been the greatest culprit of performance problems. The standard way that BC retrieves records is very expensive, since it will always get ALL the fields of a table (and its brethren companion tables). This post covers a (relatively) new option called SetLoadFields, which is used to specify the fields that you want to retrieve.

So What’s the Problem?

The database engine for BC is SQL Server for OnPrem and Azure SQL for SaaS; the business logic translates database operations into T-SQL statements at run time. By default, it issues a SELECT *, which means that for every standard database call, BC retrieves ALL fields from the table. Good for us developers, because we never have to think about which fields to fetch. From a performance point of view, though, this causes MASSIVE superfluous overhead in data traffic. Some of the most used tables in BC have bazillions of fields, and in any business logic scenario you never need more than a handful of them.

The problem is exacerbated by the presence of table extensions. Each table extension is represented in the SQL database by a companion table that shares the primary key with the main table. Every time that you retrieve records from the main table, the system also retrieves the fields from the companion tables by issuing a JOIN on the PK fields. You can imagine a popular table like the Sales Line with a dozen table extensions; a SalesLine.FindSet command generates a SQL statement that includes a dozen JOINs.

What makes this worse is that the number of fields included in a SQL statement has a disproportionate effect on query performance. Read the posts that I link to below for more details, but the key thing to know is that the same query with all fields can take hundreds of times longer than one that retrieves just half a dozen fields.

Only Get What you Need

To eliminate this overhead, we now have a SetLoadFields command. Basically, what you do with this command is define the fields that are included in the SQL statement. Instead of getting all fields and retrieving data that you will never use, you tell BC that you only want your handful of fields.

Need an address from a Vendor? The external document number from an invoice? An Inventory Posting Group from an Item? You don’t have to read 6 million fields to do that anymore.

procedure ShowSomeCode()
var
    Vendor: Record Vendor;
begin
    Vendor.SetLoadFields(Address,"Address 2",City,"Post Code");
    Vendor.Get('10000');
    // do stuff with the address fields and ignore the rest
end;

This code sample generates a SQL statement with a SELECT on just those few fields that are defined in the SetLoadFields command, and it should ignore the companion tables altogether since these are all main table fields.
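
The same idea applies to loops. Here is a hedged example (the filter and the field names are just my own illustration): call SetLoadFields before FindSet, and the smaller SELECT is used for every record in the loop:

procedure ShowSomeMoreCode()
var
    Item: Record Item;
begin
    Item.SetRange(Blocked, false);
    Item.SetLoadFields("No.", Description, "Inventory Posting Group");
    if Item.FindSet() then
        repeat
            // do stuff with the loaded fields and ignore the rest
        until Item.Next() = 0;
end;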

Read the documentation here and make sure you read how to use it here. For more technical in-depth information on what to do and what happens under the hood (way above my head there), read Mads’ posts here and here.

New Habits

In my day-to-day life as a BC developer I don’t normally see SetLoadFields commands. I even checked the standard objects and it’s actually quite surprising how little it is used there. In a previous life I did a LOT of performance troubleshooting, and this would have been a tremendous help in solving lots of performance problems. I know I will try to make using this command a habit.

Create JSON in AL

Last year I developed an integration from Business Central to an external REST API. One of the things that I had to learn is how to deal with JSON. We now have a bunch of different JSON data types, and if you’re just getting into them, they are hard to keep apart. With this post, I’ll try to explain, as simply as I can, how to create a JSON object in AL.

JSON Basics

First, of course, the basics.

  • JSON is short for ‘JavaScript Object Notation’. Follow this link to read the actual standard, but if you’re new to this, don’t; it will only confuse you
  • It’s kind of like XML in concept but much easier to read. No attributes, no formatting rules, just a collection of key/value pairs
  • The equivalent of an ‘XML document’ is called a ‘JSON Object’. The start and end of these JSON objects are marked by curly braces (these {}).
  • Keys are always between double quotes; values are between double quotes when they are text (numbers and booleans are not quoted). The key and the value are separated by a colon, and multiple key/value pairs are separated by commas. This is a very simple JSON object with some key/value pairs:
{
    "A Key": "Example One",
    "Another Key": "Another Value",
    "Number Value": 42,
    "Boolean Value": false
}
  • Nesting is done by adding a new JSON object as a value, like this:
{
    "A Key": "Example Two",
    "Nested thing": {
        "First nested Key": "nested value",
        "Second nested key": "some other value"
    }
}
  • A list of JSON objects is called a ‘JSON Array’. Strictly speaking, JSON allows mixed content in an array, but in practice each of the objects in an array is structured the same, and that is what an API will expect. The start and end of a JSON array is marked by square brackets. Each object has its own start/end curlies, and they are separated by commas. Here’s a simple example:
{
    "A Key": "Example Three",
    "List of Things": [
        {
            "name": "thing 1",
            "age": 10
        },
        {
            "name": "thing 2",
            "age": 12
        }
    ]
}

Note that the objects inside the array are identical in structure. This is really it, you should now be able to decipher any JSON response from any web service.

JSON Data Types in AL

In AL, there are 4 different JSON data types. All of these data types are .NET types that are wrapped in AL data types. You don’t have to worry about constructing these variables, they take care of themselves. All you have to do is make sure that you start with a fresh one, so pay attention to the scope of your variables. If you are not sure about that, use the Clear keyword to empty it out.

  • JsonObject – this data type represents an actual JSON object. It has properties and methods, and you use those to build the object as shown in the code examples below
  • JsonArray – this data type is also an object with properties and methods, and it contains the stuff that is between the square brackets
  • JsonValue – this represents a single value of a simple data type like a Text, an Integer, or a Boolean
  • JsonToken – in the JSON world, the ‘Token’ is similar to a Variant, and it can be any one of the other three Json data types. If you are not sure about the content of a JSON element, you can always use a Token as a data type

Show Me The Code!

When I started this section I was going down a rabbit hole of long sentences and complicated explanations. As I was reading what I wrote I realized that describing these things actually was not making any sense at all. Let me just give you the code, and if you have any questions about this please leave a comment below.

Each example refers to the JSON examples in the ‘JSON Basics’ section above.

Example One

The first example is very straightforward. This object only has simple key/value pairs, so we can simply add those to the object, like this:

    procedure ExampleOne()
    var
        MyJobject: JsonObject;
    begin
        MyJobject.Add('A Key', 'Example One');
        MyJobject.Add('Another Key', 'Another Value');
        MyJobject.Add('Number Value', 42);
        MyJobject.Add('Boolean Value', false);
    end;

Example Two

The second example has another JSON object as one of its values. The ‘Add’ method of the JsonObject data type takes a string as the Key name, and a JsonToken as the value parameter. In the first sample those were simple datatypes, and for this second sample we’re going to create another object to use as the value parameter, like this:

    procedure ExampleTwo()
    var
        MyJobject: JsonObject;
        NestedJObject: JsonObject;
    begin
        MyJobject.Add('A Key', 'Example Two');

        NestedJObject.Add('First Nested Key', 'nested value');
        NestedJObject.Add('Second Nested Key', 'some other value');
        MyJobject.Add('Nested thing', NestedJObject);
    end;

Example Three

The final example includes an array of objects, so we’ll build those one step at a time. Everything is just coded inline here; I hope you can see how the array objects could be refactored into a re-usable function.

    procedure ExampleThree()
    var
        MyJobject: JsonObject;
        ThingJObject: JsonObject;
        MyJArray: JsonArray;
    begin
        MyJobject.Add('A Key', 'Example Three');

        Clear(ThingJObject);
        ThingJObject.Add('name', 'thing 1');
        ThingJObject.Add('age', 10);
        MyJArray.Add(ThingJObject);

        Clear(ThingJObject);
        ThingJObject.Add('name', 'thing 2');
        ThingJObject.Add('age', 12);
        MyJArray.Add(ThingJObject);

        MyJobject.Add('List of Things', MyJArray);
    end;

What about JsonToken?

We did not use a variable of type JsonToken directly, because when we are creating a JSON object we already know what we have. Indirectly though, we definitely ARE using a Token. The ‘Add’ method of both the JsonObject and the JsonArray data types takes a Token as the value parameter. Since the Token can be any simple type or any other Json type, you can simply throw any of those in and it knows what to do with it.

The JsonToken type will come in very useful when you start processing incoming JSON objects, and you need to make sure you correctly convert values into compatible data types in AL.
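
To give you an idea, here is a minimal sketch of that (my own example, not taken from any standard object), reading one of the values from the first JSON example back into an AL variable:

    procedure ReadExampleOne(JsonText: Text)
    var
        MyJobject: JsonObject;
        MyJToken: JsonToken;
        NumberValue: Integer;
    begin
        if not MyJobject.ReadFrom(JsonText) then
            Error('The text is not valid JSON');
        if MyJobject.Get('Number Value', MyJToken) then begin
            // the token wraps a JsonValue here, so convert it to the AL data type you need
            NumberValue := MyJToken.AsValue().AsInteger();
            Message('Number Value is %1', NumberValue);
        end;
    end;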

What’s Next?

These are just some very simple examples of how to build a JSON object, but this is really all the logic you’ll ever need to create your own. The JsonObject will take care of the curlies and the commas, the JsonArray will take care of the square brackets and all the other stuff. The only slightly complex thing you’ll ever need to do beyond this is to now take the JsonObject and set it as the body of an HttpRequest.
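
To give you an idea of that last step, here is a hedged sketch of writing the JsonObject to text and using it as the body of an HTTP POST request (the URL is obviously just a placeholder):

    procedure PostExampleOne()
    var
        Client: HttpClient;
        Content: HttpContent;
        Headers: HttpHeaders;
        Response: HttpResponseMessage;
        MyJobject: JsonObject;
        BodyText: Text;
    begin
        MyJobject.Add('A Key', 'Example One');

        // serialize the JsonObject to text and use it as the request body
        MyJobject.WriteTo(BodyText);
        Content.WriteFrom(BodyText);

        // the default content type is text/plain, so replace it with application/json
        Content.GetHeaders(Headers);
        Headers.Remove('Content-Type');
        Headers.Add('Content-Type', 'application/json');

        Client.Post('https://example.com/some-endpoint', Content, Response);
    end;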

Let me know in the comments if this is helpful. There are other super useful posts about this topic, but I wanted to explain this in my own words. I will also write one about how to read an incoming JSON object, and at some point I’ll share some helper logic that I’ve created to take care of conversions and such.

Insider Builds on Collaborate

As a Microsoft partner, you have to create a bunch of accounts to get into a bunch of systems. Microsoft is on a continuing mission to make things easier by putting most of it into a single portal called “Partner Center”. In this post I will focus on one particular thing – how to get access to insider builds of BC, which you need to validate your extension against future releases.

I’m writing this because the engagement was not listed for one of my clients. Microsoft support was no help, and I could not find this information in some of the obvious places. Just a quick post, mostly for my own reference but I can’t imagine that I am the only one that is having difficulties getting into the program. Thanks to our little Twitterverse for helping me out.

Collaborate

First, you need access to Collaborate itself. From your Partner Center portal dashboard, you should see a rectangle with the word ‘Collaborate’; the shortlink is aka.ms/collaborate. If you do not have access yet, this is where you will find a link to submit a request to be admitted.

Insider Builds

To get to the insider builds, you have to be enrolled in an engagement called ‘Ready! for Dynamics 365 Business Central’. As a Business Central partner, this engagement should be listed, and all you have to do is hit the ‘Join’ button; you should get access within a couple of days.

Access to the insider builds is buried a little bit. Select the Ready! engagement and click the ‘Packages’ button. The one you need is the document called “Working with Business Central Insider Builds”. The thing that you need to get the actual build is the $sasToken. This value is needed to be able to download the artifacts for the insider build. It’s the password to validate that you are allowed to use those artifacts. By the way, don’t forget to take the time to read the material in there, it’s important information.

What if I don’t see the engagement

Normally, when you register as a development partner for Business Central, you are supposed to see the “Ready to go” program in the available engagements, and getting access is as easy as clicking the ‘Join’ button. For some reason this does not always happen. For one of my clients the program simply did not show up in the list of available engagements, so there was no Join button to click.

If for whatever reason you do not see the engagement, do not submit a support ticket from Partner Center. These tickets go to a central support group that does not know about program-specific issues, and you’ll spend weeks going back and forth. You’ll ask for access, and they will tell you to click the Join button. You’ll tell them there IS no join button to click and they’ll say click the button anyway. This is not a knock on them, they just don’t seem to know the details.

Access to the BC programs is handled by the product team themselves, and their email is Dyn365BEP@microsoft.com. This will get your issue directly to the BC team, and they will know exactly what to do. Just provide your publisher name, your MPN ID, and the first name, last name and email for the person that needs access. Once I did this, the issue was solved within a few days.

Adding new users

Once you have access to the engagement, you should be able to provide access to your own people by first adding them to the Partner Center account, and then by enrolling them into the engagement.

THE Book on Automated Testing in BC

It has taken well over a year to write, and a good 8 months to review (and revise, and rewrite certain parts), and it has been delayed almost 3 months. I am SUPER proud to say though that the second edition of THE BOOK on automated testing in Business Central has been published!

As a reviewer I of course knew this was coming, and Luc finally shared the news.

What if I Already Have the First Edition?

Of course you do! You are a BC professional, so naturally you’ve been adding automated testing to all of your projects right from the start, and you purchased Luc’s first book right when he wrote it. Still, there are a few reasons why you should purchase the second edition as well.

First of all, at 387 pages the second edition is almost twice the size of the first edition, which comes in at *only* 206 pages. Luc has added about a metric ton of stuff to the second edition. Not just expanded information on existing topics, but chapters about completely new topics altogether.

Major Improvements

One of the things that I thought was lacking in the first edition was more in-depth information about Test Driven Development (TDD for short). The mechanics of automated testing were solidly covered, and I was able to apply this knowledge in my work. What I was missing was how to take this to the next level. I knew there was a large methodological body of work out there about TDD, but I did not know where to start looking for how it would be relevant for ME.

The second edition has filled that gap. Not only does Luc write eloquently about the methodology itself (there’s a whole chapter on TDD itself now), he puts it into the context of Business Central development specifically. He explains HOW you can use TDD in Business Central, he shows you step by step how to approach this, and he even provides a handy set of tools to support this methodology.

Luc spends a LOT of time writing about all aspects of TDD. More than just the nuts and bolts of creating test apps, he covers how to integrate automated tests in your daily development practice.

Advanced Topics

The brand new section called ‘Advanced Topics’ addresses some lesser-known subjects, such as refactoring your code to create more re-usable components, utilizing standard components in more complex scenarios, the approach to testing web services, and even how to make YOUR code more testable.

In short, this second edition goes much more in-depth in just about every aspect of the book, plus it provides a wealth of information on a number of valuable topics that were not addressed in the first edition.

I am VERY proud to have played a part in writing this book, Luc did a phenomenal job in making the second edition a much more mature volume of THE book on automated testing in Business Central. Even if you already own the first edition, your money will not go to waste if you buy the second one.

Where can I get it?

Two places that I know of where you can get it:

  • The Packt Publishing website. Some people complain about delivery delays and such, but I don’t mind waiting a few days. When you get it from Packt directly, you have the option to get the book in print as well as e-book. The online reader on the Packt website is one of the best online readers I know. Plus, with online access you can start reading right away, so to me it is well worth the handful of extra days it takes to deliver the print book.
  • Amazon of course. Can’t beat delivery time, but Amazon does not bundle print + eBook and Packt does.

Import Media Files for SaaS

One of the standard ‘Problems’ when you’re in an AL workspace in VSCode is a warning that you are no longer allowed to use BLOB as a datatype for images. This has been at the bottom of my priorities list until I had a request to create a new image for a standard field. With this post I’ll show you how easy it is.

Media Field

The first element that you need is a field in the table. Instead of a BLOB field with subtype Bitmap, you now need a field of type ‘Media’. There is also a data type called ‘MediaSet’ but that’s not what we are going to use. Go to Docs to read about the difference between Media and MediaSets. The field is not editable directly because we will be importing the image through a function.

In addition to the field itself, you need a function to import an image file into the field. In the object below I have a simple table called ‘Book’ with a number, a title and a cover. We use the ImportCover function to do the import, and implement it as an internal procedure so it can only be used inside the app itself. You can of course set the scope as you see fit.

table 50100 BookDnStr
{
    Caption = 'Book';
    DataClassification = CustomerContent;

    fields
    {
        field(10; "No."; Code[20])
        {
            Caption = 'No.';
            DataClassification = CustomerContent;
        }
        field(20; "Title"; Text[100])
        {
            Caption = 'Title';
            DataClassification = CustomerContent;
        }
        field(30; Cover; Media)
        {
            Caption = 'Cover';
            Editable = false;
        }
    }

    keys
    {
        key(PK; "No.")
        {
            Clustered = true;
        }
    }

    internal procedure ImportCover()
    var
        CoverInStream: InStream;
        FileName: Text;
        ReplaceCoverQst: Label 'The existing Cover will be replaced. Do you want to continue?';
    begin
        Rec.TestField("No.");
        if Rec.Cover.HasValue then
            if not Confirm(ReplaceCoverQst, true) then exit;
        if UploadIntoStream('Import', '', 'All Files (*.*)|*.*', FileName, CoverInStream) then begin
            Rec.Cover.ImportStream(CoverInStream, FileName);
            Rec.Modify(true);
        end;
    end;
}

Factbox for the image

Similar to how Item images have been implemented, you can create a factbox to show the book cover and add that to the Book Card. Using a factbox also makes it easy to keep the related actions close to the control.

page 50100 BookCoverDnStr
{
    Caption = 'Book Cover';
    DeleteAllowed = false;
    InsertAllowed = false;
    LinksAllowed = false;
    PageType = CardPart;
    SourceTable = BookDnStr;

    layout
    {
        area(content)
        {
            field(Cover; Rec.Cover)
            {
                ApplicationArea = All;
                ShowCaption = false;
                ToolTip = 'Specifies the cover art for the current book';
            }
        }
    }
    actions
    {
        area(processing)
        {
            action(ImportCoverDnStr)
            {
                ApplicationArea = All;
                Caption = 'Import';
                Image = Import;
                ToolTip = 'Import a picture file for the Book''s cover art.';

                trigger OnAction()
                begin
                    Rec.ImportCover();
                end;
            }
            action(DeleteCoverDnStr)
            {
                ApplicationArea = All;
                Caption = 'Delete';
                Enabled = DeleteEnabled;
                Image = Delete;
                ToolTip = 'Delete the cover.';

                trigger OnAction()
                begin
                    if not Confirm(DeleteImageQst) then
                        exit;
                    Clear(Rec.Cover);
                    Rec.Modify(true);
                end;
            }
        }
    }
    trigger OnAfterGetCurrRecord()
    begin
        SetEditableOnPictureActions();
    end;

    var
        DeleteImageQst: Label 'Are you sure you want to delete the cover art?';
        DeleteEnabled: Boolean;

    local procedure SetEditableOnPictureActions()
    begin
        DeleteEnabled := Rec.Cover.HasValue;
    end;
}

Add to the Page

All that is left is to add the factbox to the page where you have the import action. In this case I have a very simple Card page for the book, and the factbox is shown to the side.

page 50101 BookCardDnStr
{
    Caption = 'Book Card';
    PageType = Card;
    ApplicationArea = All;
    UsageCategory = Administration;
    SourceTable = BookDnStr;

    layout
    {
        area(Content)
        {
            group(General)
            {
                field("No."; Rec."No.")
                {
                    ToolTip = 'Specifies the value of the No. field.';
                    ApplicationArea = All;
                }
                field(Title; Rec.Title)
                {
                    ToolTip = 'Specifies the value of the Title field.';
                    ApplicationArea = All;
                }
            }
        }
        area(FactBoxes)
        {
            part(BookCover; BookCoverDnStr)
            {
                ApplicationArea = All;
                SubPageLink = "No." = field("No.");
            }
        }
    }
    actions
    {
        area(Processing)
        {
            group(Book)
            {
                action(ImportCover)
                {
                    Caption = 'Import Cover Art';
                    ApplicationArea = All;
                    ToolTip = 'Executes the Import Cover action';
                    Image = Import;
                    Promoted = true;
                    PromotedCategory = Process;
                    PromotedOnly = true;

                    trigger OnAction()
                    begin
                        Rec.ImportCover();
                    end;
                }
            }
        }
    }
}

This was a fun one to figure out. Let me know in the comments if it was useful to you.

Commitment Issues

So you’re evolving your source control practice. You have created a ‘dev’ branch to keep work in progress away from the ‘main’ branch, and maybe you are even using feature branches or branches per developer. As you are putting in place branch policies to prevent just anybody from committing changes, you notice a pattern of commits being created just by merging a change into ‘main’. In this post I will show you how to limit the number of commits.

The Setup

I have a simple project in Azure DevOps, with a repo that has a default branch called ‘main’ and a ‘dev’ branch. The main branch requires a reviewer, which in Azure DevOps means that changes can only be made through pull requests. I’ve committed a couple of changes in dev and I have created a pull request to move those changes into the main branch to be released. The PR has been approved and I have clicked the ‘Complete’ button on the PR. The Azure DevOps default Merge Type is set to ‘Merge’ and all looks well, so then I click the ‘Complete Merge’ button.

The Issue

When the PR completes, you go back to the Branches view in Azure DevOps, and you notice that the dev branch is 1 commit behind main….

Wait, what? Didn’t I just merge 2 commits from dev into main? The purpose of the PR was to synchronize the branches, but it appears that dev is now a commit behind main. The Merge Type of ‘Merge’ created its own commit in the target branch, main in this case. When you go back to VSCode and merge main back into dev, you will notice that the source files themselves have not changed, even though Git has created two extra commits just from the Merge actions. You see, any time that you do a Merge in Git, it will generate a new commit.

If you were to do a file compare between the two branches, there would be no differences in the source files. The not so good part of using the Merge Type ‘Merge’ is that you have to refresh your local main branch and merge that into your dev branch BEFORE you can continue developing. This may work fine for many people, but I prefer to be able to push out my dev work and continue developing without having to worry about being in synch with main.

A Better Way

There is another (I would argue a ‘better’) way to complete PRs. Another setup: I’ve updated my local main and merged that into my local dev branch; I have made another code change, which I have pushed to the remote from VSCode. The Branches view in Azure DevOps now shows dev is one commit ahead of main.

This time when you complete the PR, you select ‘Rebase and fast-forward’ as the Merge Type.

Because we are not merging, Git will not create a new commit. The result of this Merge Type is that the branches are in synch after completing the PR. There is no need for me to go back to VSCode and update my main branch.

So What is the Difference?

I’ll be totally honest here and tell you that I don’t really understand the difference between ‘Merge’ and ‘Rebase’. Something about how in the first the differences are actually merged and the changes are preserved in a new commit, while in the second the branch is rebased and the incoming changes are applied on top of that new base. I am sure that there is a distinct difference, and that this has an impact on the way you manage branches. I’ve looked, but could not find any posts that clearly explain the two with proper examples. I’m still looking, and I still want to know, but I have other more important things to worry about right now. Maybe a good topic to write about in the future :).

For practical purposes though, for the type of projects that I am involved in, I prefer the Rebase option. Personally, I prefer the branches to be synchronized when a PR is completed. Having to go back and pull the merge commit back into my local dev branch feels like a redundant step.

Use SystemId

If you’re like me, and you use the ‘Page Inspector’ a lot, you have undoubtedly scrolled all the way down, and noticed this group of fields at the bottom. One of those fields is called ‘SystemId’. Today’s post describes some of the important things that you need to know about this field.

The Basics

Toward the end of the ‘NAV’ era, you may have noticed a field called ‘id’ in a lot of tables. This field was meant to provide a single-field unique identifier for each record in the database. The original purpose was to provide a more industry-standard way to get to records from a web service, and it was implemented as an actual field in each table for which such functionality was provided.

In BC, the ‘id’ field has been obsoleted and replaced by a new system field called “$systemId”, an attribute that is assigned at the system level. The logic that assigns the values is not accessible to us, it all happens behind the scenes. You can read more about the field itself in Docs.

Some important things to remember:

  • Each SystemId value is unique, you will never see any SystemId value repeat across any table. You could cheat the system and manually assign a value to the field, but if you let BC assign its value it will always be unique
  • Its value cannot be renamed. Unlike PK fields in the database, the SystemId is not editable

So How Does It Work?

You will not see the SystemId as a field anywhere. Drill down into a table’s design and it is not there as a field. You can see the value when using the Page Inspector, but none of the pages have the field visible to the user. That would be pointless, because the SystemId does not have a functional purpose. It’s just a random GUID value that holds no meaning. However, EVERY table has a SystemId field, and since its value is always unique, you could theoretically use the field as its primary key.

You can leave the field alone altogether and just continue using the regular PK fields, and BC will automatically take care of the SystemId. If you are just using or developing ‘normal’ functionality, you will probably never even be aware of the presence of the SystemId.

One really handy thing though is that this is a single-field unique value! You could capture a table relationship in a single field regardless of the target table’s primary key! You could link to the Sales Line’s SystemId field with a single-field foreign key, and not have to worry about document type or line number. All you need to do is store the SystemId in a GUID field, with a table relationship to the target table’s SystemId field.

To retrieve a record using its SystemId, you use the GetBySystemId method instead of the regular Get method that you are used to.
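
As a purely hypothetical illustration (my own field name, object numbers, and code), a single-field link to the Sales Line and the code to retrieve the related record could look like this:

field(50100; "Sales Line System Id"; Guid)
{
    Caption = 'Sales Line System Id';
    DataClassification = CustomerContent;
    TableRelation = "Sales Line".SystemId;
}

procedure ShowRelatedSalesLine(SalesLineSystemId: Guid)
var
    SalesLine: Record "Sales Line";
begin
    // no Document Type, Document No., or Line No. needed, just the one GUID value
    if SalesLine.GetBySystemId(SalesLineSystemId) then
        Message('%1 %2', SalesLine."Document No.", SalesLine."Line No.");
end;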

How Is It Used?

The main purpose of the SystemId is to facilitate the API. The BC API endpoints are formatted in such a way that you specify the systemId as the unique value for the record that you want to retrieve. Yes you can still do a filter on the PK fields, but the standard way to get a particular record from an API endpoint is to provide its SystemId field value. When posting a new record, the response will return the new record’s SystemId.

You can look at the tables that are part of the API and see that a lot of them have new fields that hold these SystemId values. Take for instance the Customer table. You can see a bunch of new fields like “Currency Id” and “Payment Terms Id” with logic in OnValidate to update the ‘normal’ fields, which are all still there. The new SystemId fields do not replace the existing related fields, they live side by side.
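
Conceptually, such an Id field looks something like the sketch below. This is a simplified illustration of the pattern (my own field number and code), not the actual base app implementation:

field(50101; "Payment Terms Id"; Guid)
{
    Caption = 'Payment Terms Id';
    DataClassification = CustomerContent;
    TableRelation = "Payment Terms".SystemId;

    trigger OnValidate()
    var
        PaymentTerms: Record "Payment Terms";
    begin
        // keep the 'normal' foreign key field in sync with the Id field
        if PaymentTerms.GetBySystemId("Payment Terms Id") then
            Validate("Payment Terms Code", PaymentTerms.Code);
    end;
}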

Still Confused

Yes, it is still confusing; well, maybe ‘cumbersome’ is a better word. The mechanism is fairly straightforward, but it has not made our job easier. Not every table relationship has been updated with the SystemId, and where you do see those ‘new’ ones, there is a double field relationship that has to be maintained with validation code. Especially when you need to expand standard APIs, you will have to create those additional SystemId fields on top of existing foreign key fields to ensure data integrity.

Containers And Bacpacs

A while ago an ISV client of mine was working on getting their app into the Embed program. Part of this process was to upload a bacpac with certain characteristics. The characteristics themselves are not relevant for this post, but as I was helping them I thought I’d write this quick post to share how you can extract bacpac files from a container, and how to use those bacpac files to create another container.

The setup

I’m starting out with a standard BC container, which was created using BcContainerHelper, and it is called DenSterDev. Coincidentally, I am also using BcContainerHelper to extract the bacpacs and to create the new container. I am using the C:\ProgramData\BcContainerHelper folder to store the bacpacs, because that folder is recognized both inside and outside of the container.

Extract Bacpac Files

The container is multi-tenant, so there are two databases that we care about: one is the app database, and the other is the tenant database. Both of those are necessary to create the new container. If you have any apps installed on top of the standard container, those will be included in the bacpac file for the app database, and the bacpac for the tenant database contains the data itself.

The benefit of using BcContainerHelper is that we have very handy Cmdlets to get all this stuff in and out of containers, and the bacpacs are no exception. The command is very easy:

Export-BcContainerDatabasesAsBacpac `
    -containerName 'DenSterDev' `
    -tenant default `
    -sqlCredential $Credential `
    -bacpacFolder C:\ProgramData\BcContainerHelper `
    -doNotCheckEntitlements

The tenant name is the default name of ‘default’ that is created in each standard BC container. The sqlCredential is a PSCredential object that was created during the container generation, using a username and a secure string password. As stated above, the bacpacFolder is a folder that can be accessed both in and out of the container. The entitlement flag is there to bypass the entitlement check and prevent an error. When you execute this script, the bacpac files will show up in the bacpacFolder.

Create New Container from Bacpac

We are going to use these same bacpac files to create a new container. I’ll use the same container name:

New-BCContainer `
    -accept_eula `
    -containerName 'DenSterDev' `
    -artifactUrl '<ProperArtifactURL>' `
    -auth NavUserPassword `
    -assignPremiumPlan `
    -updateHosts `
    -accept_outdated `
    -Credential $Credential `
    -additionalParameters @('--env appbacpac=C:\ProgramData\BcContainerHelper\app.bacpac','--env tenantbacpac=C:\ProgramData\BcContainerHelper\default.bacpac')

Same as before, the -Credential parameter contains a PSCredential object. Note that the -additionalParameters spans across multiple lines here, but that should go on the same line in your PowerShell editor.

This command will download all the necessary artifacts and create the same container as the standard. The only difference will be that the app and tenant databases will be created from the bacpac files in your folder, instead of the standard database from the artifact. You can follow along with the script in the terminal window.

Nothing earth shattering, and made super easy by BcContainerHelper, but it took me a while to find the information and make this work. Hats off to Dmitry in the BC team, he was very patient with me as I got familiar with this process. Let me know in the comments if this was helpful or if you want to add anything.

Sign App File – part 2

Quite a while ago I wrote about signing your app file, which is a requirement for AppSource. It’s been a while since I had to do this, so I went back to my blog and found the article quite lacking. This post is an attempt to fill in the blanks and give you all the information that you need to sign your app, all in one place.

Your first stop to read about this is right here, the Learn page about signing the app file specifically for Business Central. Most of what I’m about to tell you is in there, I’ll just elaborate a little bit more.

Basically, signing an app file, or an executable file, is a way to tag that file with an attribute that certifies where the file came from. If Acme Rockets signs their rocket skate app, the file has an attribute that shows Acme indeed digitally signed it. Take a look at the properties for ‘explorer.exe’, the executable for Windows Explorer. You can check out the digital signature that verifies that this file was signed by Microsoft.

In a nutshell, you need the following:

  • A Code Signing Certificate, in ‘pfx’ format
  • A code signing tool (I’m using ‘signtool’ here)
  • The SIP from your BC container (don’t ask, I still don’t really know)
  • A script to actually sign

Code Signing Certificate

The first thing that you need is the Code Signing certificate. This is a particular type of certificate (NOT the same as an SSL certificate) that you must get from an Authenticode licensed certificate authority (there’s a link in the Docs article mentioned above) such as this one or this one or this one or this one. I’m not affiliated with any of them, and GoDaddy doesn’t seem to provide code signing certificates anymore, but I’ve worked with certs from two of those companies and they both worked as advertised. For AppSource submissions, you need the regular “Code Signing” certificate, not the extended one or the one for drivers. Go shopping, because I’ve seen prices range between $199 and $499 per year for the same thing.

In order for the signtool to be able to use the certificate, it must be in ‘pfx’ format. One of the providers that I mentioned has a page here that explains how you can create this file format. The actual file will have a password on it, and you can save it on the computer where you have NAV/BC installed, or where your container lives. I usually have a working folder right in the C root where I do this kind of thing.

The Signing Tool

You’ll need a tool to sign the app file – Microsoft recommends SignTool or SignCode. Since their sample script is for SignTool, that’s the one that I used. Now, the text in Docs describes that SignTool is automatically installed with Visual Studio, but that is only partially true. I actually downloaded Visual Studio to see if that works, but the installation configuration that I chose did not include SignTool.

Signtool is part of the Windows SDK, which probably comes in one of the standard Visual Studio configurations. I don’t know which one, so you’ll have to make sure that it is selected when you are installing it. Another way to get it installed is to install the Windows SDK directly, which you can download here. I installed the one for Windows 7 on a Windows Server 2019 Hyper-V VM, and it worked for me. I know, I should have looked a little longer and used the Windows 10 one, but by that time my app file was already signed and dinner smells were filling my office.

The SIP

If you try to sign your app file now, you will probably get an error message that the app file is not recognized. The SignTool program needs to be able to recognize the app file, and for that purpose it needs to have something called ‘the SIP’ registered on the machine where you run the SignTool command. Apparently this is some sort of hash/validation calculation package that is used to create digital signatures. Each program on your computer apparently has one of these.

One way to get ‘the SIP’ is to install NAV/BC on the computer. If you’re like me, and you use containers exclusively, you won’t want to do this. Luckily, the NavContainerHelper module has a Cmdlet to retrieve ‘the SIP’ out of the container.

 Install-NAVSipCryptoProviderFromBCContainer YourContainerName 

This Cmdlet gets ‘the SIP’ out of the container and registers it on the host. At this point, you should be all set to sign your app file.

Script to Sign

The last element is the command to actually create the digital signature. Not much to say about that, so here it is:

"C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\signtool.exe" sign 
    /f "C:\WorkFolder\CodeSignCert.pfx" 
    /p "Your Password" 
    /t http://timestamp.verisign.com/scripts/timestamp.dll "C:\YourRepo\Publisher_AppName_1.0.0.0.app"

As you can see, my SignTool is in the Windows 7 SDK folder, you may need to search around for it. Installing the SDK is supposed to register SignTool and you should be able to just use ‘signtool’ as a command. For some reason that did not work for me, which is why I specified the entire path. I split this up to make it look better in this post, the command needs to all be on one line.

One more thing – the timestamp certifies that the file was signed using a certificate that was valid at the time of signing, and the signature on the file itself will then never expire. Of course, if you want to submit a new file after the certificate has expired, you will need to get a new one. If you don’t specify the timestamp, your app file will expire on the same date as your certificate.

Update March 26, 2020 – The timestamping service was provided by Symantec and it looks like they are rebranding that to ‘digicert’. Here is an article that explains the situation. You will need to change the timestamp part in your script:

Replace:
/t http://timestamp.verisign.com/scripts/timestamp.dll 
With this:
/t http://timestamp.digicert.com?alg=sha1

All Set

That’s it, you should be all set to sign your app file. I have to be honest and confess that I wrote this mainly for myself, because I spent WAY too much time trying to re-trace my steps and figure out how this works again. It’s now in a single post, hope it helps you as much as it helped me.

Update – March 18, 2020

Turns out, there is a simple command for this….

$MyAppFile = "C:\ProgramData\NavContainerHelper\Extensions\Publisher_AppName_1.0.0.0.app"
$MyPfx = "C:\ProgramData\NavContainerHelper\Extensions\CodeSignCert.pfx"
$MyPassword = ConvertTo-SecureString "Your password" -AsPlainText -Force
$MyContainerName = "YourContainer"

Sign-NavContainerApp -appFile $MyAppFile -pfxFile $MyPfx -pfxPassword $MyPassword -containerName $MyContainerName

No need to install anything. All you need is the app file and your pfx file with a password, and everything else happens in the container (as Freddy puts it “without contaminating the host”). Just copy both files into a shared folder where NavContainerHelper can read the files.