Test access to Retail server on The New Dynamics AX

Just a quick heads-up on a feature that lets you test access to the Retail server in The New Dynamics AX.

From a browser open the site:

https://<my server name>ret.cloudax.dynamics.com/healthcheck?testname=ping

It shows you a ping test result looking like this:

RetailServerTest

If you need the results in a more developer-friendly format, you can append &resultFormat=xml, which gives you something like this:

RetailServerTestXML
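If you are scripting the check, the URL and its query parameters can be assembled like this. A small Python sketch; the server name is a made-up placeholder, and note that the first parameter follows the `?` while any further ones are joined with `&`:

```python
from urllib.parse import urlencode, urlunsplit

server = "myserver"  # hypothetical instance name -- substitute your own
params = {"testname": "ping", "resultFormat": "xml"}

# The first parameter follows '?'; any further ones are joined with '&'
url = urlunsplit(("https", server + "ret.cloudax.dynamics.com", "/healthcheck",
                  urlencode(params), ""))
print(url)  # https://myserverret.cloudax.dynamics.com/healthcheck?testname=ping&resultFormat=xml
```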

 

 

Troubleshooting the Commerce Data Exchange

In AX 2012, troubleshooting the Commerce Data Exchange is sometimes necessary. It is somewhat of a black box, but we do have a few tools in the belt and handles to pull.

This post describes a couple of these in AX 2012 R2. Later versions are a bit more informative, but some of the tips are still relevant.

 

What’s in the queues?

The communication flow passes through SQL Server, and during installation you are asked for a message box database. Two of the tables in this database show you the flow of messages and a few related (general) pieces of information. I use the following SQL statements as a standard approach to these tables:

SELECT JobID, Status, TryCount, ErrorNo, ServerMsg, FinishedDateTime, PackageNo, ServiceName 
FROM [RetailMsgDB].[dbo].[IncomingMessages]
ORDER BY FinishedDateTime DESC
SELECT JobID, Status, TryCount, ErrorNo, ServerMsg, FinishedDateTime, PackageNo, ServiceName 
FROM [RetailMsgDB].[dbo].[OutgoingMessages]
ORDER BY FinishedDateTime DESC

/*
delete from IncomingMessages
delete from OutgoingMessages
*/

The result of the query looks something like this:

AX2012_RetailMsgBoxSQL

The first result set shows the messages going into the server, and the second shows the outbound messages. As long as ErrorNo is 0 you are good. If a row does show up with an error, you need to start digging a bit; the JobID column shows you which job to look at.
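To zoom in on the problematic rows only, you can filter on ErrorNo. Here is a small Python/sqlite3 sketch of that filter; the table and sample data are simplified, made-up stand-ins for the message box database:

```python
import sqlite3

# Simplified stand-in for the message box database (columns reduced)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE IncomingMessages (JobID TEXT, ErrorNo INTEGER, ServerMsg TEXT)")
con.executemany(
    "INSERT INTO IncomingMessages VALUES (?, ?, ?)",
    [("A-1000", 0, ""), ("A-1025", 1, "Field mismatch"), ("N-1090", 0, "")],
)

# Only rows with a non-zero ErrorNo need attention; JobID tells you where to dig
failed = con.execute(
    "SELECT JobID, ServerMsg FROM IncomingMessages WHERE ErrorNo <> 0"
).fetchall()
print(failed)  # [('A-1025', 'Field mismatch')]
```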

Notice that I have two delete statements disabled by comment tags. These are handy when you would like to reset the contents of the queue tables. To execute them, select them (without the /* and */) and execute.

 

Getting info from the log file and packages

The data exchange service offers a couple of tricks regarding its log file and the packages. The level of information is set up in the Service Settings:

RetailCDXLog

This offers a lot of settings for the service, but in this post I will only touch on a couple of them.

RetailCDXServiceSettings

First, make sure you are configuring the right service, and click Next until you get access to the Working Directory. This is where the files are dumped for processing by the service.

RetailCDXWorkDir

Moving on we can change the settings for the log file.

RetailCDXLogDir

We have several relevant settings in this part: first of all the directory for the log file, and the Log Level. By default it logs errors only, and that is often enough. If you want the service to be more chatty, just tick what you want to know about. Notice that the log file gets a bit overwhelming to look at if you tick everything.

Reading the log file can give you a very specific error to work with. With a field mismatch it could look a bit like this:

2016.2.24 9:40:27:412 (17528) [1] CSockConn::Listen: bind failed
2016.2.24 9:40:27:417 (17528) [1] AdminThread: ERROR: Listen failed on port 23. Check the Retail Store Connect setup for server HJHVIN02
2016.2.24 9:44:43:318 (14236) [1] CCommMgr::HandleException: Target request handler failed to process target request header: Microsoft.Dynamics.Retail.StoreConnect.Request.SQLHandler.ProcessTargetRequestHeaderException: ProcessTargetRequestHeader failed to execute all write requests. ---> Microsoft.Dynamics.Retail.StoreConnect.Request.SQLHandler.ProcessWriteRequestException: Write request on table:[dbo].[RETAILPOSITIONPOSPERMISSION] failed to execute. ---> Microsoft.Dynamics.Retail.StoreConnect.Request.SQLHandler.RunException: Run() failed while creating temporary table. ---> Microsoft.Dynamics.Retail.StoreConnect.Request.SQLHandler.CreateTempTableException: Query: SELECT TOP 0 [ALLOWBLINDCLOSE],[ALLOWCHANGENOVOID],[ALLOWCREATEORDER],[ALLOWEDITORDER],[ALLOWFLOATINGTENDERDECLARATION],[ALLOWMULTIPLELOGINS],[ALLOWMULTIPLESHIFTLOGON],[ALLOWOPENDRAWERONLY],[ALLOWPASSWORDCHANGE],[ALLOWPRICEOVERRIDE],[ALLOWRESETPASSWORD],[ALLOWRETRIEVEORDER],[ALLOWSALESTAXCHANGE],[ALLOWTENDERDECLARATION],[ALLOWTRANSACTIONSUSPENSION],[ALLOWTRANSACTIONVOIDING],[ALLOWXREPORTPRINTING],[ALLOWZREPORTPRINTING],[MANAGERPRIVILEGES],[MAXIMUMDISCOUNTPCT],[MAXLINEDISCOUNTAMOUNT],[MAXLINERETURNAMOUNT],[MAXTOTALDISCOUNTAMOUNT],[MAXTOTALDISCOUNTPCT],[MAXTOTALRETURNAMOUNT],[NAME],[POSITION],[POSPERMISSIONGROUP],[RECID],[USEHANDHELD] INTO [#dbo_RETAILPOSITIONPOSPERMISSION_6a5067cd-ba66-41c4-bf8f-da7dc918b25e] FROM [dbo].[RETAILPOSITIONPOSPERMISSION];
 ---> System.Data.SqlClient.SqlException: Invalid column name 'ALLOWPASSWORDCHANGE'.

 

 

Another feature is “Keep Package Files”. It keeps the files in the working folder for later review. Again, this will leave you with a lot of data in the directory.

With Keep Package Files set, you get a couple of files with the .tmp extension in the work folder: one ending with an I and one ending with an R. The I file is the definition of the data model, and the R file is the actual data. As is, they are close to unreadable, so to read the contents we can use the Pack Viewer tool. It is started from the Start menu, next to the Service Settings.

RetailCDXPackViewer

Select the file you want to convert. The target folder is filled in automatically; you can change it without any problems. Tick Open Folder to open the target folder when done, and click the Convert button.

The I file gives you one file that looks like this:

RetailCDXPackDataModel

Converting the R file, you get multiple files depending on the number of tables in the package:

RetailCDXPackViewData

They are named after the table names for easy access. The SCTargetRequestHeader file is a content and action overview. The data files are easy(-ish) to read XML files with the complete data set to be transferred.

RetailCDXPackViewDataContents

Remember to disable the keep package flag when done and restart the service.

 

This is not a full and complete troubleshooting guide, but a quick intro to a couple of entry points when struggling with an AX 2012 R2 Retail installation.

Enabling index hints is deprecated in AX 2012 … almost

Sending index hints to SQL Server from AX has been around for a long time, and often it has not done any good, since SQL Server is pretty good at picking the best execution plan itself.

So when it ended up as a parameter in the AOS server configuration in AX 2009, and was then removed from the configuration in AX 2012, we seemed clear of the trouble it could cause. Microsoft stated that it was deprecated with AX 2012 and no longer available …

So it seemed a bit strange that SQL Server apparently received index hints on a server I was working on recently.

While going through just about everything possible to find out why the server was acting as if the non-existing index hint flag was enabled, I went through the registry to compare it against an AOS we knew was working as expected. And there it was … a registry value called “hint”.

I did a bit of research, and I was not the only one struggling with this. As it appears, these are the values to choose from:

Empty = Index hints are enabled and LTRIM (yes, it is there too) is disabled.

0 = Both index hints and LTRIM are disabled. This has to be the recommended setting.

1 = The same as empty. Does that make sense? No, not really. Anyways …

2 = Index hints are disabled and LTRIM is enabled.

3 = Both index hints AND LTRIM are enabled.

 

And just to refresh the memory: the registry keys are located under

HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\services\Dynamics Server\6.0

and in there you find the right AOS instance and configuration.

Locate duplicate values (or the opposite) in a table

The way X++ handles SQL statements often lacks a bit compared to what you can do in standard T-SQL.

One example is getting the set of records from a table where a specific value in a field occurs only once. You could traverse the records and compare values as you go along, but that can pull down performance significantly, depending on the record count. Another way is to group by the field while counting the records on e.g. RecId; traversing that result set and looking at the count then gives you either the unique or the duplicated values, depending on what you need.

A better way is to let SQL Server do some of the work and consequently return as few records as possible. Here is a job that illustrates this, together with the group by version mentioned above.

AX 2012 queries support the HAVING clause, which can in some scenarios do the same, a bit more elegantly than this.

 

static void DEMO_NoDuplicates(Args _args)
{
    CustGroup custGroup;
    CustGroup custGroup2;

    info("TRAVERSE COUNT");

    // Count records per payment term; the count ends up in RecId
    while select count(RecId) from custGroup
        group by PaymTermId
    {
        if (custGroup.RecId == 1)   // the term occurs only once
        {
            while select custGroup2
                where custGroup2.PaymTermId == custGroup.PaymTermId
            {
                info(custGroup2.PaymTermId);
            }
        }
    }

    info("USE JOIN");

    // Let the server filter: keep records where no other record shares the term
    while select custGroup
        order by PaymTermId
        notexists join custGroup2
            where custGroup.RecId != custGroup2.RecId
               && custGroup.PaymTermId == custGroup2.PaymTermId
    {
        info(custGroup.PaymTermId);
    }
}
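The notexists join above corresponds to a GROUP BY … HAVING COUNT(*) = 1 in plain SQL. As a quick illustration outside AX, here is a Python/sqlite3 sketch with made-up customer groups and payment terms:

```python
import sqlite3

# Made-up customer groups; Net30 occurs twice, the others once
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE CustGroup (CustGroup TEXT, PaymTermId TEXT)")
con.executemany(
    "INSERT INTO CustGroup VALUES (?, ?)",
    [("10", "Net30"), ("20", "Net30"), ("30", "Net14"), ("40", "COD")],
)

# Payment terms that occur exactly once -- the same set the notexists join yields
unique_terms = [row[0] for row in con.execute(
    "SELECT PaymTermId FROM CustGroup"
    " GROUP BY PaymTermId HAVING COUNT(*) = 1 ORDER BY PaymTermId"
)]
print(unique_terms)  # ['COD', 'Net14']
```

Flipping the condition to HAVING COUNT(*) > 1 gives you the duplicates instead.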


Manipulating data while ignoring a TTSABORT

OK, the subject of this post will probably be categorised as dangerous, and by some as borderline stupid. The technique should NOT be used in everyday work, since it can very easily give you massive inconsistency in data and a broken system. There! I said it, and I do not want to be the one saying I told you so afterwards. This blog post is for information only, not a recommendation.

So what is all the fuss about? Well, sometimes it would be nice to allow some data to be inserted, updated or deleted even though a TTSABORT command or an error is thrown. That could be relevant for e.g. a log that must be updated no matter what the result of a job is.

The mechanism is known from the batch queue, which has its status updated no matter how the job finishes.

 

The trick is to use the UnitOfWork and UserConnection frameworks within the TTS scope. This allows you to create a connection to the database that is not part of the TTS but is running its own show.

And this is where it gets dangerous/stupid in some scenarios. Imagine an inventory transaction customisation manipulating some data within the TTS and some outside of it. The result could be data that is almost impossible to recover to a consistent state.

 

In this example we want to update the Tax Group Id on the Customer groups and log the changes to a table no matter what happens.

I have created a table – DEMO_Log – with CustGroup as the only field. We would like this table to receive a new record regardless of success or failure in the update of the CustGroup table.

The next step is to create a class doing the work. In this case it is called DemoIgnoreTTSAbort, and it has a run method like this:

private void run()
{
    info("BEFORE UPDATING");
    this.showInfo();
    
    ttsBegin;
    this.updateCustGroup();
    ttsAbort;

    info("AFTER NORMAL UPDATE");
    this.showInfo();
    
    ttsBegin;
    this.updateCustGroup2();
    ttsAbort;

    info("AFTER ALTERNATIVE UPDATE");
    this.showInfo();
}

It starts by showing the current records in a summary like this:

private void showInfo()
{
    CustGroup custGroup;
    DEMO_Log log;

    select count(RecId) from custGroup
        where custGroup.TaxGroupId;

    select count(RecId) from log;

    info(strFmt("Customer groups with Tax group id: %1", custGroup.RecId));
    info(strFmt("Log records: %1", log.RecId));
}

The idea is to give us the count of customer groups with a Tax group id and the count of records in our log table.

Then we – inside a TTS scope – use the updateCustGroup method to try updating the groups like this:

private void updateCustGroup()
{
    CustGroup custGroup;

    while select forUpdate custGroup
        where ! custGroup.TaxGroupId
    {
        custGroup.TaxGroupId = 'TX';
        custGroup.update();
        this.insertInLog(custGroup.CustGroup);
    }
}

Each CustGroup record with no TaxGroupId content is updated with ‘TX’, and a log record is inserted using the insertInLog method:

private void insertInLog(CustGroupId _custGroupId)
{
    DEMO_Log log;
 
    log.CustGroup = _custGroupId;
    log.insert();
}

I think that one is pretty self-explanatory …

The TTS scope is then ended with a TTSABORT, so no records within the scope are updated or inserted.

We then run showInfo once more to see if anything has happened. And nothing has. No surprise.

 

The next part is a new TTS scope, where a method (updateCustGroup2) which is almost identical to updateCustGroup is used:

private void updateCustGroup2()
{
    CustGroup custGroup;
  
    while select forUpdate custGroup
        where ! custGroup.TaxGroupId
    {
        custGroup.TaxGroupId = 'TX';
        custGroup.update();

        this.insertInLog2(custGroup.CustGroup);
    }
}

 

The only difference between updateCustGroup and updateCustGroup2 is that it calls insertInLog2 instead of insertInLog after updating each record.

This method is the key to all this and it looks like this:

private void insertInLog2(CustGroupId _custGroupId)
{
    DEMO_Log log;
    UnitofWork unitOfWork;
    UserConnection userConnection;

    userConnection = new UserConnection();
    unitOfWork = new UnitofWork();

    log.CustGroup = _custGroupId;
    log.insert();

    unitOfWork.insertonSaveChanges(log);
    unitOfWork.saveChanges(userConnection);
}

Compared to insertInLog, it starts by adding two new variables – unitOfWork and userConnection – which give us an extra connection to the database, not included in the TTS scope. First comes the basic instantiation, followed by the insert of the DEMO_Log record as in the insertInLog method. The next statement is where we tell unitOfWork to insert the log record(s) upon the call of the saveChanges method. There are deleteOnSaveChanges and updateOnSaveChanges methods if you want something other than inserts.

Finally, we call saveChanges using the userConnection declared above.

Ending the run method with another call of showInfo reveals that, although we abort our TTS scope and the customer groups remain unchanged, the log is fully updated:

Message (05:09:54 am)
BEFORE UPDATING
Customer groups with Tax group id: 0
Log records: 0
AFTER NORMAL UPDATE
Customer groups with Tax group id: 0
Log records: 0
AFTER ALTERNATIVE UPDATE
Customer groups with Tax group id: 0
Log records: 7
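The underlying pattern – a second database connection that commits independently of the aborted outer transaction – can be sketched outside AX with Python and sqlite3. All names here are made up, and since sqlite allows only one writer per file, the independent log gets its own database file in this sketch:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
main = sqlite3.connect(os.path.join(tmp, "ax.db"))      # the "TTS" connection
log_con = sqlite3.connect(os.path.join(tmp, "log.db"))  # the independent "UserConnection"

main.execute("CREATE TABLE CustGroup (CustGroup TEXT, TaxGroupId TEXT)")
main.execute("INSERT INTO CustGroup VALUES ('10', '')")
main.commit()
log_con.execute("CREATE TABLE DemoLog (CustGroup TEXT)")
log_con.commit()

# ttsBegin ... ttsAbort: the update is rolled back ...
main.execute("UPDATE CustGroup SET TaxGroupId = 'TX'")
log_con.execute("INSERT INTO DemoLog VALUES ('10')")
log_con.commit()  # ... but the log commit happens outside the aborted transaction
main.rollback()   # the ttsAbort equivalent

tax_group = main.execute("SELECT TaxGroupId FROM CustGroup").fetchone()[0]
log_count = log_con.execute("SELECT COUNT(*) FROM DemoLog").fetchone()[0]
print(repr(tax_group), log_count)  # '' 1 -> the update is gone, the log record survived
```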

Once again: Remember that this is only to be used with extreme caution …

Change which index is clustered – directly on the SQL server

Today's post is not exactly Dynamics AX relevant. However, in some cases where you need to do some immediate testing or hot fixing and you are not able to change a running AX environment, you could end up making temporary index modifications directly on the SQL server.

The thing about this is that when you later synchronise from Dynamics AX, such changes are often overwritten in the process. That can be perfectly OK, as long as you are aware of it.

One of the things that needs a bit more work than just adding or removing indexes or fields is changing which index is clustered. For demonstration purposes I have created a small database called DemoDB with a table called PersonTbl.

It contains 3 fields and 2 indexes as shown here:

TableWithWrongClusteredIndex

The thing is that in this case we, for some reason, need to have IdIdx as the clustered index, and it is not an option to wait for the change to be released through the normal procedure in AX.

If we look at the clustered index NameIdx we cannot un-cluster it:

OldClusteredIndex

And clustering the IdIdx index is not possible from the properties form either:

NewClusteredIndex

To make this happen we need to modify the table through design mode. Right click on the table and select Design:

DesignTable

Right click anywhere on the columns overview and select Indexes/Keys:

SelectIndexesKeys

Now we can uncluster the NameIdx index like this:

UnclusterIndex

And make IdIdx clustered:

ClusterIndex

Click Close and save the table modifications.

Voila! The indexes have been changed. You might need to refresh to make the change visible, and should anyone be using the table and indexes while you make your modifications, the save will fail.

TableWithCorrectClusteredIndex

 

Please remember that this procedure is not exactly best practice and should only be used as a last resort.

Prefixing strings with @ – multi-line strings and escape characters

This post shows a couple of ways to use the feature of putting an @ in front of a text string, and what the result is.

The first example shows how to use it when referring to e.g. a file path with backslashes.

Example 2 shows how to split a text line into two or more lines for easier reading.

 

static void coupleOfStringTricks(Args _args)
{
    str text1;
 
    // Example 1
    // ---------

    // Using a backslash in the text messes up the result
    text1 = "c:\temp\someFile.txt";
    info(text1);

    // An option is to use double backslash
    text1 = "c:\\temp\\someFile.txt";
    info(text1);

    // Or use the @ as prefix to the string
    text1 = @"c:\temp\someFile.txt";
    info(text1);


    // Example 2
    // ---------

    // Allowing to extend a text string over multiple lines
    text1 = @"This text is toooooooooooooooooooo loooooooooooooooong
    to keep in one line for easy reading";
    info(text1);
}