Document Attachments

One of my clients asked me to add Document Attachments to Bank Deposits. Thinking this was a quick and easy one, I added the factbox, set the link, and went on with my day. When my client reported that they could see all attachments for all records, I realized it is a little more involved. It took some time to figure out how it actually works, and this post explains the whole thing, including how to carry document attachments through the posting process.

How Does it Really Work?

The reason it’s not so simple is that the Document Attachment table works with a RecordRef instead of a hardcoded table relationship. Take a look at the Document Attachment table (table 1173 in the base app) for the field definitions. In this post I’ll focus on records with a single-field PK, meaning tables where a single Code[20] field holds the unique identifier of the record. The more complex compound PKs work the same way; you just need to set more fields.

Standard BC has a limited number of tables that have document attachment capabilities. If you want to add another table, whether a custom table or another standard table, you will need to subscribe to some events to make that work. Let’s first look at how standard document attachments work.

Open Document Attachments

Let’s take a look at the Customer Card page in standard BC. The “Attachments” action has the following OnAction trigger code:

trigger OnAction()
var
    DocumentAttachmentDetails: Page "Document Attachment Details";
    RecRef: RecordRef;
begin
    RecRef.GetTable(Rec);
    DocumentAttachmentDetails.OpenForRecRef(RecRef);
    DocumentAttachmentDetails.RunModal();
end;

The “Document Attachment Details” page is where the magic happens. If you drill down into the OpenForRecRef function, you will see that it takes a RecordRef (which in this example points at the current Customer record) and sets filters on the table ID of the RecRef and on the value of the record’s PK field. This is also the function that defines all the tables in the standard BC app that have Document Attachments.

The important thing in this function is the call to OnAfterOpenForRecRef, an event publisher that we will subscribe to later. This event gives you the ability to set a filter for any additional table. Note that this is just a filter on a page; all it does is make sure that you only see the document attachments for this particular record.

The Document Attachments Factbox

Another way to give the user access to document attachments is the “Document Attachment Factbox” that you can see in the factboxes area. In the standard app on the Customer Card, the SubPageLink property links the “Table ID” field to the hardcoded value 18 (the Customer table’s object ID). When you create your own link, use the table name instead of its number: since we are linking to the “Bank Deposit Header” table, our constant value will be Database::"Bank Deposit Header".

Take a look at the OnDrillDown trigger of the NumberOfRecords field in the factbox. It first sets up the RecRef, which is again hardcoded for the standard range of tables that have document attachments. Then it executes the same logic on the Document Attachment Details page as the action mentioned above.

Note the OnBeforeDrillDown function call in the ‘else’ leg of the ‘case’ statement. This is the event that you need to subscribe to in order to properly filter the document attachments for non-standard tables.

What is important to understand about this particular factbox is that attachment records are not entered directly into the factbox. You have to click an action that adds the record, and as a result the “No.” field of a new attachment is NOT populated by the page link.

Creating New Document Attachments

So far we’ve only looked at how to display the proper document attachments to the user. What’s left is how BC actually stores these records. The tricky part is not getting the file itself, but how BC gets the key values from the RecRef. The function that stores the values from the RecRef into the document attachment is called InitFieldsFromRecRef. You can see that this function again goes through the same hardcoded list of standard BC tables that we’ve seen before, and it provides an event publisher called OnAfterInitFieldsFromRecRef that you can use for additional tables.

Document Attachments for Additional Tables

Alright, now let’s put it all together for a new table. I recently added this for Bank Deposits for a client of mine, so I’ll use the same table to illustrate. I’ll focus on the new ‘Bank Deposit Header’ and its Posted sibling. The new implementation of bank deposits can be found in the “_Exclude_Bank Deposits” app that is part of standard BC.

To get started, create a page extension for the Bank Deposit and the Posted Bank Deposit pages, and add the Document Attachment factbox. You can copy this from the Customer Card and change the links appropriately. Note that the records that you will see when you drill down into the details are filtered on the table but not by the PK value of the record that you are looking at. In other words, just adding this factbox will only give you a list of all the Document Attachments that are linked to ALL (Posted) Bank Deposits.
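
Here is a minimal sketch of what that page extension could look like for the unposted Bank Deposit page (the object ID, extension name, and anchor are just examples; the Posted Bank Deposit page extension looks the same, linked to the “Posted Bank Deposit Header” table):

pageextension 50100 BankDepositExt extends "Bank Deposit"
{
    layout
    {
        addlast(FactBoxes)
        {
            part("Attached Documents"; "Document Attachment Factbox")
            {
                ApplicationArea = All;
                Caption = 'Attachments';
                // Copied from the Customer Card, with the table constant changed
                SubPageLink = "Table ID" = const(Database::"Bank Deposit Header"),
                              "No." = field("No.");
            }
        }
    }
}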

Finally, create a new codeunit; I called mine ‘DocAttachmentSubs’.

Filter the Details

First, we need to make sure that the “Document Attachment Details” page is filtered on the correct Bank Deposit number. For this we subscribe to the OnAfterOpenForRecRef event.

[EventSubscriber(ObjectType::Page, Page::"Document Attachment Details", 'OnAfterOpenForRecRef', '', true, true)]
local procedure DocAttDetailsPageOnAfterOpenForRecRef(var DocumentAttachment: Record "Document Attachment"; var RecRef: RecordRef)
var
    MyFieldRef: FieldRef;
    RecNo: Code[20];
begin
    if RecRef.Number in [Database::"Bank Deposit Header", Database::"Posted Bank Deposit Header"] then begin
        MyFieldRef := RecRef.Field(1); // field 1 is the "No." field in both tables
        RecNo := MyFieldRef.Value();
        DocumentAttachment.SetRange("No.", RecNo);
    end;
end;

Filter the Factbox

Next, we need to make sure that the details are filtered properly when the user clicks the DrillDown from the document attachment factbox. For this we subscribe to the OnBeforeDrillDown event in the factbox.

[EventSubscriber(ObjectType::Page, Page::"Document Attachment Factbox", 'OnBeforeDrillDown', '', true, true)]
local procedure DocAttFactboxOnBeforeDrillDown(DocumentAttachment: Record "Document Attachment"; var RecRef: RecordRef)
var
    BankDepositHeader: Record "Bank Deposit Header";
    PostedBankDepositHeader: Record "Posted Bank Deposit Header";
begin
    case DocumentAttachment."Table ID" of
        Database::"Bank Deposit Header":
            begin
                RecRef.Open(Database::"Bank Deposit Header");
                if BankDepositHeader.Get(DocumentAttachment."No.") then
                    RecRef.GetTable(BankDepositHeader);
            end;
        Database::"Posted Bank Deposit Header":
            begin
                RecRef.Open(Database::"Posted Bank Deposit Header");
                if PostedBankDepositHeader.Get(DocumentAttachment."No.") then
                    RecRef.GetTable(PostedBankDepositHeader);
            end;
    end;
end;

So now when the user clicks the drilldown, BC will set the RecRef to look at the (Posted) Bank Deposit, which is then sent into the details page, which then knows how to properly filter.

Set the Right Link

The last thing we need is to make sure that new document attachments have the (Posted) Bank Deposit number, by subscribing to the OnAfterInitFieldsFromRecRef event of the Document Attachment table itself.

[EventSubscriber(ObjectType::Table, Database::"Document Attachment", 'OnAfterInitFieldsFromRecRef', '', true, true)]
local procedure DocAttTableOnAfterInitFieldsFromRecRef(var DocumentAttachment: Record "Document Attachment"; var RecRef: RecordRef)
var
    MyFieldRef: FieldRef;
    RecNo: Code[20];
begin
    if RecRef.Number in [Database::"Bank Deposit Header", Database::"Posted Bank Deposit Header"] then begin
        MyFieldRef := RecRef.Field(1); // field 1 is the "No." field in both tables
        RecNo := MyFieldRef.Value();
        DocumentAttachment.Validate("No.", RecNo);
    end;
end;

You should now be able to create new document attachments for unposted and posted bank deposits. All that’s left is to get document attachments to flow through the posting process.

Posting

Document Attachments are not intrinsically difficult. In the end they are just records in the database, identified by their Table ID and their PK values. For Bank Deposits the PK is a single Code[20] field, and there are two versions of the table. All we need to do is write a little loop that reads the attachment records for the unposted bank deposit, copies them to the posted one, and gets rid of the old ones. It would be nice to simply modify the existing records, but since both the Table ID and the record identifier are part of the PK, that would require a ‘Rename’, and then you’d be sitting there clicking confirmations all day long.

For Bank Deposits, you can use the OnAfterBankDepositPost event in the “Bank Deposit-Post” codeunit.

[EventSubscriber(ObjectType::Codeunit, Codeunit::"Bank Deposit-Post", 'OnAfterBankDepositPost', '', true, true)]
local procedure BankDepositPostOnAfterBankDepositPost(BankDepositHeader: Record "Bank Deposit Header"; var PostedBankDepositHeader: Record "Posted Bank Deposit Header")
begin
    MoveAttachmentsToPostedDeposit(Database::"Bank Deposit Header", BankDepositHeader."No.",
                                    Database::"Posted Bank Deposit Header", PostedBankDepositHeader."No.");
end;

local procedure MoveAttachmentsToPostedDeposit(FromTableId: Integer; FromNo: Code[20]; ToTableId: Integer; ToNo: Code[20])
var
    FromDocumentAttachment: Record "Document Attachment";
    ToDocumentAttachment: Record "Document Attachment";
begin
    FromDocumentAttachment.SetRange("Table ID", FromTableId);
    FromDocumentAttachment.SetRange("No.", FromNo);

    if FromDocumentAttachment.FindSet() then begin
        repeat
            Clear(ToDocumentAttachment);
            ToDocumentAttachment.Init();
            ToDocumentAttachment.TransferFields(FromDocumentAttachment);
            ToDocumentAttachment.Validate("Table ID", ToTableId);
            ToDocumentAttachment.Validate("No.", ToNo);
            ToDocumentAttachment.Insert(true);
        until FromDocumentAttachment.Next() = 0;
        FromDocumentAttachment.DeleteAll();
    end;
end;

I have this in a separate function because I also had to make it work for the old Deposit implementation. You could totally combine this into a single event subscriber.

I’m thinking about creating a video about this topic. Let me know if this was useful in the comments and if you’d like to see the video.

Formatting Booleans

As you know, boolean fields are presented as a checkbox in BC. The issue with that is that checkboxes themselves have no formatting properties. There are a couple of different ways to change the appearance of boolean fields; read on if you want to know what they are.

One of the apps I work on has a custom journal, in which we use the StyleExpr to color fields based on a number of checks on the record. Over time, we’ve added this functionality to a bunch of fields. You have to scroll to the right to check all the fields, so we added a field called ‘Has Alerts’ to tell the user that something is up with this record. My client didn’t like the standard checkbox and asked if we can show the field as ‘Yes’ or ‘No’.

I asked Twitter for help and got a few responses:

Use the OnDrillDown Trigger

Some of the old quirks of NAV made their way into BC, and one of them is that the simple act of adding a trigger can modify the behavior of page controls.

[Image: Using the OnDrillDown trigger]

As you can see, the code in the trigger is absolutely meaningless. All we are doing is setting a dummy variable, and the mere presence of this trigger causes the boolean to be presented as text. The StyleExpr property is tied to a variable that is set in the OnAfterGetCurrRecord trigger of the page itself. I don’t like this solution because the trigger has no meaning: the field value becomes clickable, but because the code is bogus, the user is tempted to click something that does nothing. Of course, there could be situations where you actually want to do something in the drilldown, in which case this is perfectly acceptable.
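
Since the screenshot doesn’t reproduce here, this is roughly what that looks like on the journal page (the field, variable, and style names are made up for illustration):

field("Has Alerts"; Rec."Has Alerts")
{
    ApplicationArea = All;
    StyleExpr = HasAlertsStyle;   // set in OnAfterGetCurrRecord

    trigger OnDrillDown()
    var
        Dummy: Boolean;
    begin
        // The body is meaningless; the mere presence of the trigger
        // makes the boolean render as Yes/No text instead of a checkbox.
        Dummy := true;
    end;
}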

Format the Field

The winning suggestion was to wrap the field in the Format command in the SourceExpr of the control. So elegant, and it makes sense, right? The page recognizes that the source expression is a text, and presents the Yes/No variation of the boolean.

[Image: Formatting the SourceExpr]
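
A minimal sketch of that control (again, the field and style names are made up):

field(HasAlertsText; Format(Rec."Has Alerts"))
{
    ApplicationArea = All;
    Caption = 'Has Alerts';
    StyleExpr = HasAlertsStyle;   // still works with the text representation
}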

I like this because the field value is not clickable, just a text value. Just a quick tip, nothing earth shattering, but something that I know I’ll want to use some time in the future.

Obsolete But Not Gone

This post was born out of a bit of an embarrassing situation, in which I had to obsolete some fields from an AppSource app. Just a quick one to let you know what the official process is, and also that you don’t have to wait for the next major version of BC to fully obsolete a field.

The ‘Situation’

This app has table extensions for the Sales Line table and the posted sales line tables (invoice and credit memo). Not until the first live implementation did we find out that the field numbers in the credit memo line table were not aligned, at which point BC yells at you that the field types are not compatible. Oops….

For those who don’t know, let me explain. Field values in the Sales Line table are copied to their posted counterparts by a command called TransferFields. This command basically copies all the field values from the source record into the target record, matching the fields by number. A couple of things to note: corresponding fields must have the same field numbers, and they must have the same data types. In our case, we had fields 1 and 2 in the Sales Line table, while their corresponding fields in the posted credit memo line had field numbers 2 and 3. The posting process tried to put a field number 2 value into what was designed as field number 1, with the predictable result that there was a data type mismatch.

Why is this a problem? Can’t you just change the field number? Well, no, because once an app is published on AppSource you are not allowed to make any ‘breaking changes’, and renumbering a field is considered a breaking change. It doesn’t matter how I personally feel about that (I don’t agree that field numbers are breaking), so we had no choice but to use the obsoletion process.

As a side note – yes this exposed a serious issue in our process. Posting a credit memo had clearly not been tested, and matching field numbers is something that a BC developer with my experience should never have a problem with. To my client’s credit: no fingers were pointed, we addressed the issue, and we shared the cost of fixing it.

Obsoleting a Field

Alright, so the official process of obsoleting a field is described in the ‘deprecation guidelines’. Skip the preprocessing piece if you have not seen that before, and focus on the steps for making code obsolete. There are plenty of blog posts that explain the process itself, so I will not go into any detail.

My main concern was how fast we could get this into AppSource. The process to make a field obsolete has two stages:

  • First, the ObsoleteState of the field is set to ‘Pending’.
    • This means that the field has been marked, but it can still be used in the app
    • The purpose of this stage is to flag the code to any party that extends it, so that they can take steps to address the change
    • All references to the field will show up in the VSCode Problems window as warnings. There are also a reason and a tag property that are used to document why the field is obsolete and in which BC version the change becomes permanent (see the sketch after this list)
  • Second, the ObsoleteState of the field is set to ‘Removed’.
    • In this state, the field still exists but it can no longer be used
    • All references to the field will show up as errors in the Problems window
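
Here is a minimal sketch of what the first stage looks like on a field (the object, field name, and tag are made up for illustration):

tableextension 50100 SalesCrMemoLineExt extends "Sales Cr.Memo Line"
{
    fields
    {
        // Stage 1: ship a version with the field marked as Pending.
        // In a later version, change ObsoleteState to Removed.
        field(50100; "My Misnumbered Field"; Decimal)
        {
            Caption = 'My Misnumbered Field';
            ObsoleteState = Pending;
            ObsoleteReason = 'Misnumbered field, replaced by a correctly numbered field.';
            ObsoleteTag = '21.0';
        }
    }
}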

There is a perfectly valid reason why there are two steps. My concern was how long it would take us to get the fields to ‘Removed’. The documentation does not address the required AppSource timeline, and I could not find any definite answers on Yammer. The only timeline reference that I could find in any Microsoft documentation was that code must stay in ‘Pending’ state for one entire major version. This issue came up in early October, days after 2022 wave 2 was released. If that were true, we would have to wait until 2023 wave 1 (April of the next year!) to get the fields fully obsoleted.

What We Did

Some partners assured me that there is no mandatory wait time of a whole major cycle. That timeline is not a restriction on partners but a practice that Microsoft itself follows, to make sure that partners always have at least one full release cycle to address any compatibility issues caused by obsolete code in the base app.

So, with that in mind we went to work. What saved us is that this particular app had only one live implementation, so all we had to do was make sure that they upgraded the app to the ‘Pending’ version as soon as humanly possible. Depending on the number of live implementations you have, this could take longer. I actually don’t even know if there is a dashboard where you can see the live versions that are in use.

I have to say that I was very impressed with how smooth the process of pushing a new version of an app into AppSource is now. After properly testing the change, we pushed the ‘Pending’ app to AppSource, and that was done and ready to publish within a day. The end user then upgraded the app, and we pushed the ‘Removed’ app to AppSource right away. We were able to address the issue within days, and the client lived happily ever after.

Reports to Excel

Just a quick post about the new reporting capability to use Excel for the layout. It’s been out for a while now (it came out in v20, the 2022 wave 1 release), but there was a tweet earlier this year pointing out some FAQs and some samples to get you started. Personally I haven’t had the time to really get into it, but I wanted to share this so it doesn’t get lost.

Kind of a Big Deal

I have a feeling that this new capability is being underrated; to me this is really kind of a big deal. We all know how difficult it is to use RDLC, and I’ve been able to keep report development to a minimum. Now that we have the Excel layout as a new target, I can’t help but feel that this is the main direction reporting will take (that and Power BI of course, but that’s a whole other story). I would not be surprised if Excel ends up playing a much more important role now that we can use it as the target for report objects.

OK, so here’s the tweet; it was posted back in March, and I had bookmarked it at the time to look at later.

What I think is really cool about the way some people on the BC team interact with the community is that they keep track of what the community is sharing and actually promote these posts in their own documentation. Granted, the GitHub repo is not official documentation, but they DO link to community blog posts in there.

Like I said I still haven’t taken the time to dig in, but I wanted to get this posted not to lose the link. This will surely be enough to get started and get your feet wet with Excel Layouts.

Remove BC Bloatware

The number of apps that come with a standard container has exploded over the past handful of releases. My local setup involves doing development in Hyper-V virtual machines, so local system resources are at a premium. It is very easy to trim the fat so to speak, so read on if you want to know how.

What’s with these apps?

From foreign languages to IRS reports, from integration and migration tools to email functionality, and even connectors to external systems like the Shopify app. Having these as separate apps is great, because it is very easy to get rid of them.

The downside is that each app comes with its own set of table extensions, which are implemented as companion tables. Each time a database action is taken against the main table, the system also has to maintain each companion table. Multiply this by the number of companies in your system and you can imagine the performance hit this can cause.

Funny as it may seem if you know what my desk looks like, but I HATE clutter in the extension list, especially if it is functionality that I just don’t ever use. I will NEVER have a need for the Norwegian language, nor will I EVER need the PayPal links in a local container. In other words, I will never need the vast majority of these apps. Most importantly, though, I had started noticing my local containers really slowing down. I had even started wondering if it was time to replace my machine.

Get rid of these apps!

Lucky for us, it is VERY easy to get rid of apps. The BCContainerHelper module has two commands for you. One is UnInstall-BcContainerApp, which we will use in today’s post. The second one is UnPublish-BcContainerApp, which is useful in case you want to completely get rid of the app altogether. If you want to be a real ninja about it, follow the links to see the underlying PowerShell logic that you can use for inspiration. Me, I like to keep it simple, so I’ll use the BCContainerHelper Cmdlets.

If you were thinking ‘never say never’ when you were reading the previous paragraph, you were absolutely right. What if I get a Norwegian customer tomorrow? That is why we use the uninstall Cmdlet: it leaves the app in the system, ready to be installed again at a later date.

UnInstall-BcContainerApp `
    -containerName 'MyContainer' `
    -name 'Shopify Connector' `
    -doNotSaveData `
    -doNotSaveSchema `
    -force
  • The name of the app is enough to uninstall it. You could also specify the publisher and version, but for our purposes the name suffices since all apps in the standard container are published by Microsoft
  • The ‘doNotSaveData’ parameter is used to make sure that the data is deleted from the companion tables, important because we are going to get rid of those with the next parameter
  • The ‘doNotSaveSchema’ parameter is used to remove the companion tables from the system. If you do not set this parameter, the schema will remain in the app database

If you are absolutely certain that you will never use the app, you can use the UnPublish Cmdlet instead and REALLY clean up that app list.

Bonus Company Removal

The standard container also comes with a pre-configured company called ‘My Company’ that I personally never use, and we have a command to remove that too:

Remove-CompanyInBcContainer `
    -containerName 'MyContainer' `
    -companyName 'My Company'

Get rid of half the companies and you get rid of half of the unnecessary companion tables.

Put it All Together

In my personal ‘arsenal’ of goodies, I keep a set of scripts to create new containers. One of those is a script called ‘RemoveBloatware.ps1’ that lists just about every app in the standard container, something like this:

$MyContainerName = 'MyContainer'

Remove-CompanyInBcContainer `
    -containerName $MyContainerName `
    -companyName 'My Company'

UnInstall-BcContainerApp `
    -containerName $MyContainerName `
    -name 'AMC Banking 365 Fundamentals' `
    -doNotSaveData `
    -doNotSaveSchema `
    -force

UnInstall-BcContainerApp `
    -containerName $MyContainerName `
    -name 'Business Central Cloud Migration - Previous Release' `
    -doNotSaveData `
    -doNotSaveSchema `
    -force

UnInstall-BcContainerApp `
    -containerName $MyContainerName `
    -name 'Business Central Cloud Migration - Previous Release (US)' `
    -doNotSaveData `
    -doNotSaveSchema `
    -force

UnInstall-BcContainerApp `
    -containerName $MyContainerName `
    -name 'Shopify Connector' `
    -doNotSaveData `
    -doNotSaveSchema `
    -force

# etcetera, add any app that you don't want

Now you can call this script from your NewContainer script and get a nicely trimmed container. It made a real difference for me; the performance improvement is very noticeable, especially when you’re in the thick of coding and need to deploy code changes frequently.

Partial Records with SetLoadFields

Fetching and updating records has historically been the greatest culprit of performance problems. The standard way that BC retrieves records is very expensive, since it will always get ALL the fields of a table (and its brethren companion tables). This post covers a (relatively) new option called SetLoadFields, which is used to specify the fields that you want to retrieve.

So What’s the Problem?

The database engine for BC is SQL Server for OnPrem and Azure SQL for SaaS; the business logic translates database operations into T-SQL statements at run time. By default, it issues a SELECT *, which means that for every standard database call, BC retrieves ALL fields from the table. Good for us developers, because we never have to think about which fields to fetch. From a performance point of view, though, this causes a MASSIVE amount of superfluous data traffic. Some of the most used tables in BC have bazillions of fields, and in any business logic scenario you rarely need more than a handful of them.

The problem is exacerbated by the presence of table extensions. Each table extension is represented in the SQL database by a companion table that shares the primary key with the main table. Every time that you retrieve records from the main table, the system also retrieves the fields from the companion tables by issuing a JOIN on the PK fields. You can imagine a popular table like the Sales Line with a dozen table extensions; a SalesLine.FindSet command generates a SQL statement that includes a dozen JOINs.

The problem is that the number of fields included in a SQL statement has a disproportionate effect on query performance. Read the posts that I link to below for more details, but what you need to know is that a query retrieving all fields can take hundreds of times longer than the same query retrieving just half a dozen fields.

Only Get What you Need

To eliminate this overhead, we now have the SetLoadFields command. Basically, this command lets you define the fields that are included in the SQL statement. Instead of getting all fields and retrieving data that you will never use, you tell BC that you only want your handful of fields.

Need an address from a Vendor? The external document number from an invoice? An Inventory Posting Group from an Item? You don’t have to read 6 million fields to do that anymore.

procedure ShowSomeCode()
var
    Vendor: Record Vendor;
begin
    Vendor.SetLoadFields(Address,"Address 2",City,"Post Code");
    Vendor.Get('10000');
    // do stuff with the address fields and ignore the rest
end;

This code sample generates a SQL statement with a SELECT on just those few fields that are defined in the SetLoadFields command, and it should ignore the companion tables altogether since these are all main table fields.
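
SetLoadFields works the same way when you loop over a filtered set. A quick sketch (the table and field choices are just an example):

procedure SumOutstandingOrderAmounts() Total: Decimal
var
    SalesLine: Record "Sales Line";
begin
    // Only load the field we actually read in the loop
    SalesLine.SetLoadFields("Outstanding Amount");
    SalesLine.SetRange("Document Type", SalesLine."Document Type"::Order);
    if SalesLine.FindSet() then
        repeat
            Total += SalesLine."Outstanding Amount";
        until SalesLine.Next() = 0;
end;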

Read the documentation here and make sure you read how to use it here. For more technical in-depth information on what to do and what happens under the hood (way above my head there), read Mads’ posts here and here.

New Habits

In my day-to-day life as a BC developer I don’t normally see SetLoadFields commands. I even checked the standard objects and it’s actually quite surprising how little it is used there. In a previous life I did a LOT of performance troubleshooting, and this would have been a tremendous help in solving lots of performance problems. I know I will try to make using this command a habit.

Create JSON in AL

Last year I developed an integration with an external REST API from Business Central. One of the things I had to learn was how to deal with JSON. We now have a number of different JSON data types, and if you’re just getting into them, they are hard to keep apart. With this post, I’ll try to explain, as simply as I can, how to create a JSON object in AL.

JSON Basics

First, of course, the basics.

  • JSON is short for ‘JavaScript Object Notation’; follow this link to read the actual standard, but if you’re new to this, don’t; it will only confuse you
  • It’s kind of like XML in concept but much easier to read. No attributes, no formatting rules, just a collection of key/value pairs
  • The equivalent of an ‘XML document’ is called a ‘JSON Object’. The start and end of these JSON objects are marked by curly braces (these {}).
  • Keys are always between double quotes, and so are text values; numbers and booleans are written without quotes (putting them in quotes would turn them into strings). A key and its value are separated by a colon, and multiple key/value pairs are separated by commas. This is a very simple JSON object with some key/value pairs:
{
    "A Key": "Example One",
    "Another Key": "Another Value",
    "Number Value": 42,
    "Boolean Value": false
}
  • Nesting is done by adding a new JSON object as a value, like this:
{
    "A Key": "Example Two",
    "Nested thing": {
        "First nested Key": "nested value",
        "Second nested key": "some other value"
    }
}
  • A list of JSON objects is called a ‘JSON Array’. The objects in an array typically share the same structure; JSON itself does not enforce this, but consumers usually expect it. The start and end of a JSON array are marked by square brackets. Each object has its own start/end curlies, and they are separated by commas. Here’s a simple example:
{
    "A Key": "Example Three",
    "List of Things": [
        {
            "name": "thing 1",
            "age": 10
        },
        {
            "name": "thing 2",
            "age": 12
        }
    ]
}

Note that the objects inside the array are identical in structure. This is really it, you should now be able to decipher any JSON response from any web service.

JSON Data Types in AL

In AL, there are 4 different JSON data types. All of these are .NET types that are wrapped in AL data types. You don’t have to worry about constructing these variables; they take care of themselves. All you have to do is make sure that you start with a fresh one, so pay attention to the scope of your variables. If you are not sure about that, use the Clear keyword to empty the variable out.

  • JsonObject – this data type represents an actual JSON object. It has properties and methods, and you use those to build the object as shown in the code examples below
  • JsonArray – this data type is also an object with properties and methods, and it contains the stuff that is between the square brackets
  • JsonValue – this represents a single value of a simple data type like a Text, an Integer, or a Boolean
  • JsonToken – in the JSON world, the ‘Token’ is similar to a Variant, and it can be either one of the other three Json data types. If you are not sure about the content of a JSON element, you can always use a Token as a data type

Show Me The Code!

When I started this section I was going down a rabbit hole of long sentences and complicated explanations. As I was reading what I wrote I realized that describing these things actually was not making any sense at all. Let me just give you the code, and if you have any questions about this please leave a comment below.

Each example refers to the JSON examples in the ‘JSON Basics’ section above.

Example One

The first example is very straightforward. This object only has simple key/value pairs, so we can simply add those to the object, like this:

    procedure ExampleOne()
    var
        MyJobject: JsonObject;
    begin
        MyJobject.Add('A Key', 'Example One');
        MyJobject.Add('Another Key', 'Another Value');
        MyJobject.Add('Number Value', 42);
        MyJobject.Add('Boolean Value', false);
    end;

Example Two

The second example has another JSON object as one of its values. The ‘Add’ method of the JsonObject data type takes a string as the Key name, and a JsonToken as the value parameter. In the first sample those were simple datatypes, and for this second sample we’re going to create another object to use as the value parameter, like this:

    procedure ExampleTwo()
    var
        MyJobject: JsonObject;
        NestedJObject: JsonObject;
    begin
        MyJobject.Add('A Key', 'Example Two');

        NestedJObject.Add('First Nested Key', 'nested value');
        NestedJObject.Add('Second Nested Key', 'some other value');
        MyJobject.Add('Nested thing', NestedJObject);
    end;

Example Three

The final example includes an array of objects, so we’ll build those one step at a time. Everything is just coded inline here; I hope you can see how the array objects could be refactored into a reusable function.

    procedure ExampleThree()
    var
        MyJobject: JsonObject;
        ThingJObject: JsonObject;
        MyJArray: JsonArray;
    begin
        MyJobject.Add('A Key', 'Example Three');

        Clear(ThingJObject);
        ThingJObject.Add('name', 'thing 1');
        ThingJObject.Add('age', 10);
        MyJArray.Add(ThingJObject);

        Clear(ThingJObject);
        ThingJObject.Add('name', 'thing 2');
        ThingJObject.Add('age', 12);
        MyJArray.Add(ThingJObject);

        MyJobject.Add('List of Things', MyJArray);
    end;

What about JsonToken?

We did not use a variable of type JsonToken directly, because when we are creating a JSON object we already know what we have. Indirectly though, we definitely ARE using a Token: the ‘Add’ method of both the JsonObject and the JsonArray data types takes a JsonToken as the value parameter. Since the Token can be any simple type or any other Json type, you can simply throw any of those in and it knows what to do with it.

The JsonToken type will come in very useful when you start processing incoming JSON objects, and you need to make sure you correctly convert values into compatible data types in AL.
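
To give you a small taste (reading incoming JSON will get its own post), here is a minimal sketch of pulling a value out of an object through a JsonToken; the key name matches Example One above:

procedure ReadNumberValue(MyJobject: JsonObject): Integer
var
    MyJToken: JsonToken;
begin
    // Get hands us the value for the key as a JsonToken...
    if MyJobject.Get('Number Value', MyJToken) then
        // ...and AsValue converts it to a JsonValue that we can read as an Integer
        exit(MyJToken.AsValue().AsInteger());
end;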

What’s Next?

These are just some very simple examples of how to build a JSON object, but this is really all the logic you’ll ever need to create your own. The JsonObject will take care of the curlies and the commas, the JsonArray will take care of the square brackets and all the other stuff. The only slightly complex thing you’ll ever need to do beyond this is to now take the JsonObject and set it as the body of an HttpRequest.
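
For completeness, here is a minimal sketch of that last step (the endpoint URL is obviously a made-up placeholder):

procedure PostJson(MyJobject: JsonObject)
var
    Client: HttpClient;
    Content: HttpContent;
    Headers: HttpHeaders;
    Response: HttpResponseMessage;
    Body: Text;
begin
    // Serialize the JsonObject into a text variable
    MyJobject.WriteTo(Body);
    // Wrap it in the request content and set the content type
    Content.WriteFrom(Body);
    Content.GetHeaders(Headers);
    Headers.Remove('Content-Type');
    Headers.Add('Content-Type', 'application/json');
    // Send it off
    Client.Post('https://example.com/api/endpoint', Content, Response);
end;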

Let me know in the comments if this is helpful. There are other super useful posts about this topic, but I wanted to explain this in my own words. I will also write one how to read an incoming JSON object, and at some point I’ll share some helper logic that I’ve created to help take care of conversions and such.

Import Media Files for SaaS

One of the standard ‘Problems’ when you’re in an AL workspace in VSCode is a warning that you are no longer allowed to use BLOB as a data type for images. This had been at the bottom of my priority list until I got a request to create a new image field. With this post I’ll show you how easy it is.

Media Field

The first element that you need is a field in the table. Instead of a BLOB field with subtype Bitmap, you now need a field of type ‘Media’. There is also a data type called ‘MediaSet’ but that’s not what we are going to use. Go to Docs to read about the difference between Media and MediaSets. The field is not editable directly because we will be importing the image through a function.

In addition to the field itself, you need a function to import an image file into the field. In the object below I have a simple table called ‘Book’ with a number, a title, and a cover. We use the ImportCover function to do the import, and implement it as an internal procedure so it can only be used inside the app. You can of course set the scope as you see fit.

table 50100 BookDnStr
{
    Caption = 'Book';
    DataClassification = CustomerContent;

    fields
    {
        field(10; "No."; Code[20])
        {
            Caption = 'No.';
            DataClassification = CustomerContent;
        }
        field(20; "Title"; Text[100])
        {
            Caption = 'Title';
            DataClassification = CustomerContent;
        }
        field(30; Cover; Media)
        {
            Caption = 'Cover';
            Editable = false;
        }
    }

    keys
    {
        key(PK; "No.")
        {
            Clustered = true;
        }
    }

    internal procedure ImportCover()
    var
        CoverInStream: InStream;
        FileName: Text;
        ReplaceCoverQst: Label 'The existing Cover will be replaced. Do you want to continue?';
    begin
        Rec.TestField("No.");
        if Rec.Cover.HasValue then
            if not Confirm(ReplaceCoverQst, true) then exit;
        if UploadIntoStream('Import', '', 'All Files (*.*)|*.*', FileName, CoverInStream) then begin
            Rec.Cover.ImportStream(CoverInStream, FileName);
            Rec.Modify(true);
        end;
    end;
}

Factbox for the image

Similar to how Item images have been implemented, you can create a factbox to show the book cover and add that to the Book Card. Using a factbox also makes it easy to keep the related actions close to the control.

page 50100 BookCoverDnStr
{
    Caption = 'Book Cover';
    DeleteAllowed = false;
    InsertAllowed = false;
    LinksAllowed = false;
    PageType = CardPart;
    SourceTable = BookDnStr;

    layout
    {
        area(content)
        {
            field(Cover; Rec.Cover)
            {
                ApplicationArea = All;
                ShowCaption = false;
                ToolTip = 'Specifies the cover art for the current book';
            }
        }
    }
    actions
    {
        area(processing)
        {
            action(ImportCoverDnStr)
            {
                ApplicationArea = All;
                Caption = 'Import';
                Image = Import;
                ToolTip = 'Import a picture file for the Book''s cover art.';

                trigger OnAction()
                begin
                    Rec.ImportCover();
                end;
            }
            action(DeleteCoverDnStr)
            {
                ApplicationArea = All;
                Caption = 'Delete';
                Enabled = DeleteEnabled;
                Image = Delete;
                ToolTip = 'Delete the cover.';

                trigger OnAction()
                begin
                    if not Confirm(DeleteImageQst) then
                        exit;
                    Clear(Rec.Cover);
                    Rec.Modify(true);
                end;
            }
        }
    }
    trigger OnAfterGetCurrRecord()
    begin
        SetEditableOnPictureActions();
    end;

    var
        DeleteImageQst: Label 'Are you sure you want to delete the cover art?';
        DeleteEnabled: Boolean;

    local procedure SetEditableOnPictureActions()
    begin
        DeleteEnabled := Rec.Cover.HasValue;
    end;
}

Add to the Page

All that is left is to add the factbox to the page where you have the import action. In this case I have a very simple Card page for the book, and the factbox is shown to the side.

page 50101 BookCardDnStr
{
    Caption = 'Book Card';
    PageType = Card;
    ApplicationArea = All;
    UsageCategory = Administration;
    SourceTable = BookDnStr;

    layout
    {
        area(Content)
        {
            group(General)
            {
                field("No."; Rec."No.")
                {
                    ToolTip = 'Specifies the value of the No. field.';
                    ApplicationArea = All;
                }
                field(Title; Rec.Title)
                {
                    ToolTip = 'Specifies the value of the Title field.';
                    ApplicationArea = All;
                }
            }
        }
        area(FactBoxes)
        {
            part(BookCover; BookCoverDnStr)
            {
                ApplicationArea = All;
                SubPageLink = "No." = field("No.");
            }
        }
    }
    actions
    {
        area(Processing)
        {
            group(Book)
            {
                action(ImportCover)
                {
                    Caption = 'Import Cover Art';
                    ApplicationArea = All;
                    ToolTip = 'Executes the Import Cover action';
                    Image = Import;
                    Promoted = true;
                    PromotedCategory = Process;
                    PromotedOnly = true;

                    trigger OnAction()
                    begin
                        Rec.ImportCover();
                    end;
                }
            }
        }
    }
}

This was a fun one to figure out. Let me know in the comments if it was useful to you.

Commitment Issues

So you’re evolving your source control practice. You have created a ‘dev’ branch to keep work in progress away from the ‘main’ branch, and maybe you are even using feature branches or branches per developer. As you are putting in place branch policies to prevent just anybody from committing changes, you notice a pattern of commits being created just by merging a change into ‘main’. In this post I will show you how to limit the number of commits.

The Setup

I have a simple project in Azure DevOps, with a repo that has a default branch called ‘main’ and a ‘dev’ branch. The main branch requires a reviewer, which in Azure DevOps means that changes can only be made through pull requests. I’ve committed a couple of changes in dev and created a pull request to move those changes into the main branch to be released. The PR has been approved and I have clicked the ‘Complete’ button on the PR. The Azure DevOps default Merge Type is set to ‘Merge’ and all looks well, so I click the ‘Complete Merge’ button.

The Issue

When the PR completes, you go back to the Branches view in Azure DevOps, and you notice that the dev branch is one commit behind main….

Wait, what? Didn’t I just merge two commits from dev into main? The purpose of the PR was to synchronize the branches, but it appears that dev is now a commit behind main. The Merge Type of ‘Merge’ created its own commit in the target branch, main in this case. When you go back to VSCode and merge main back into dev, you will notice that the source files themselves have not changed, even though Git has created two extra commits just from the merge actions. You see, any time you do a merge in Git, it will generate a new commit.

If you were to do a file compare between the two branches, there would be no differences in the source files. The not-so-good part of using the Merge Type ‘Merge’ is that you have to refresh your local main branch and merge it into your dev branch BEFORE you can continue developing. In itself this may work for many people, but I prefer to push out my dev work and continue developing without having to worry about being in sync with main.

A Better Way

There is another (I would argue ‘better’) way to complete PRs. Another setup: I’ve updated my local main and merged that into my local dev branch; I have made another code change, which I have pushed to the remote from VSCode. The Branches view in Azure DevOps now shows dev is one commit ahead of main.

This time when you complete the PR, you select ‘Rebase and fast-forward’ as the Merge Type.

Because we are not merging, Git will not create a new commit. The result of this Merge Type is that the branches are in sync after completing the PR. There is no need for me to go back to VSCode and update my main branch.

So What is the Difference?

I’ll be totally honest here and tell you that I don’t fully understand the difference between ‘Merge’ and ‘Rebase’. Roughly: with a merge, the differences are actually merged and the result is preserved in a new merge commit; with a rebase, the incoming commits are replayed on top of the target branch, so no extra commit is created. I am sure there is more to it, and that it has an impact on the way you manage branches. I’ve looked for, but could not find, any posts that clearly explain the two with proper examples. I’m still looking, and I still want to know, but I have other, more important things to worry about right now. Maybe a good topic to write about in the future :).

For practical purposes though, for the type of projects that I am involved in, I prefer the Rebase option. Personally, I prefer the branches to be synchronized when a PR is completed. Having to go back and pull the merge commit back into my local dev branch feels like a redundant step.

How to Populate a Test Suite

You’ve spent a bunch of time developing test codeunits, and you’ve figured out how to manually pull those into a Test Suite in the Business Central test toolkit. In this post, I will show you how you can automatically populate the test suite, which is especially useful for automatically testing your app in a build pipeline.

What Are We Talking About?

To keep the sample as simple as possible, I started with a Hello World app that I created with the “AL: Go!” command, plus a test app that has a dependency on it. In this test app, I have two test codeunits that don’t do anything. They are completely useless, just meant to show you how to get them into the test tool.

Those test codeunits are deployed into a BC container that has the test toolkit installed. This test toolkit (search for the ‘AL Test Tool’ in Alt+Q) is a UI that allows you to manually run these tests. Just like a standard BC journal page, when you first open the tool, it will create an empty record called ‘DEFAULT’ into which you must then get the test codeunits. Click on the “Get Test Codeunits” action, and only select the two useless codeunits. You should now see the codeunits and their functions in the Test Tool.

It’s kind of a drag to have to manually import the test codeunits into a Test Suite every time you modify something. To make it much easier on you, you can actually write code to do it for you. Put that code into an Install codeunit, and you never have to worry about manually creating a test suite again.

Show Me The Code!!

First we need an Install codeunit with an OnInstallAppPerCompany trigger, which is executed when the app is installed, both during an initial installation and when re-installing the same version. You could probably create a separate “Initialize Test Suite” codeunit so you can run this logic in other places as well, but we are going to write the code in our trigger directly.

The code below speaks mostly for itself. I like completely recreating the whole suite, but you can of course modify to your requirements. The important part of this example is a codeunit that Microsoft has given to us for this purpose called “Test Suite Mgt.”. This codeunit gives you several functions that you can use to make this possible.

codeunit 50202 InstallDnStr
{
    Subtype = Install;

    trigger OnInstallAppPerCompany()
    var
        TestSuite: Record "AL Test Suite";
        TestMethodLine: Record "Test Method Line";
        MyObject: Record AllObjWithCaption;
        TestSuiteMgt: Codeunit "Test Suite Mgt.";
        TestSuiteName: Code[10];
    begin
        TestSuiteName := 'SOME-NAME';

        // First, create a new Test Suite
        if TestSuite.Get(TestSuiteName) then begin
            TestSuiteMgt.DeleteAllMethods(TestSuite);
        end else begin
            TestSuiteMgt.CreateTestSuite(TestSuiteName);
            TestSuite.Get(TestSuiteName);
        end;

        // Second, pull in the test codeunits
        MyObject.SetRange("Object Type", MyObject."Object Type"::Codeunit);
        MyObject.SetFilter("Object ID", '50200..50249');
        MyObject.SetRange("Object Subtype", 'Test');
        if MyObject.FindSet() then begin
            repeat
                TestSuiteMgt.GetTestMethods(TestSuite, MyObject);
            until MyObject.Next() = 0;
        end;

        // Third, run the tests. This is of course an optional step
        TestMethodLine.SetRange("Test Suite", TestSuiteName);
        TestSuiteMgt.RunSelectedTests(TestMethodLine);
    end;
}

When you deploy your test app, it will now create a new Test Suite called ‘SOME-NAME’, it will pull your test codeunits with their test functions into the Test Suite, and it will execute all tests as part of the installation.

This code is very useful when you are developing the test code, because you won’t ever have to pull in any test codeunits into your test suite manually. Not only that, it will prove very useful when you start using pipelines, and you will be able to have precise control over which codeunits run at what point.

Dependencies

Here are the dependencies that I’m using:

  "dependencies": [
    {
      "id": "23de40a6-dfe8-4f80-80db-d70f83ce8caf",
      "name": "Test Runner",
      "publisher": "Microsoft",
      "version": "18.0.0.0"
    },
    {
      "id":  "5d86850b-0d76-4eca-bd7b-951ad998e997",
      "name":  "Tests-TestLibraries",
      "publisher":  "Microsoft",
      "version": "18.0.0.0"
    }
  ]

Credits

This post has been in my drafts for a while now, based on a question I posted on Twitter; click on the tweet below to see the replies. The code in this post was copied almost verbatim from Krzysztof’s repo, which he links to in one of the replies. I had worked out my own example based on his code, but I lost it when I had to clean up my VMs.