Enabling index hints is deprecated in AX 2012 … almost

Sending index hints from AX to the SQL server has been around for a long time, and it has often done more harm than good, since the SQL server is usually pretty good at picking the best execution plan itself.

So when it ended up as a parameter in the AOS server configuration in AX 2009, and was then removed from the configuration in AX 2012, we seemed to be clear of the trouble it could cause. Microsoft stated that it was deprecated in AX 2012 and no longer available …

So it seemed a bit strange that the SQL server apparently still received index hints on a server I was working on recently.

While going through just about everything to find out why the server was acting as if the non-existing index hint flag was enabled, I compared the registry against an AOS we knew was working as expected. And there it was … the registry value called “hint”.

I did a bit of research, and I was not the only one struggling with this. As it turns out, these are the values to choose from:

Empty = Index hints are enabled and LTRIM (yes, it is there too) is disabled.

0 = Both index hints and LTRIM are disabled. This has to be the recommended setting.

1 = The same as empty. Does that make sense? No, not really. Anyways …

2 = Index hints are disabled and LTRIM is enabled.

3 = Both index hints AND LTRIM are enabled.


And just to refresh the memory: the registry keys are located under

HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\services\Dynamics Server\6.0

where you find the right AOS instance and configuration.

Locate duplicate values (or the opposite) in a table

The way X++ handles SQL statements often lacks a bit compared to what you can do in standard T-SQL.

One of those areas is getting a set of records from a table where a specific value in a field occurs only once. You could traverse the records and compare values as you go along, but that could pull down performance significantly depending on the record count. Another way is to group by the field while counting the records in e.g. RecId. Traversing this result set and looking at the count in RecId then gives you either the unique or the non-unique values, depending on what you need.

A better way would be to let the SQL server do some of the work and consequently return as few records as possible. Here is a job that illustrates this together with the group by version mentioned above.

Note that queries in AX 2012 support the HAVING clause, which in some scenarios can do the same a bit more elegantly than this.


static void DEMO_NoDuplicates(Args _args)
{
    CustGroup custGroup;
    CustGroup custGroup2;

    info("TRAVERSE COUNT");

    // Group by PaymTermId while counting on RecId; a count of 1 means the value is unique.
    while select count(RecId) from custGroup
        group by PaymTermId
    {
        if (custGroup.RecId == 1)
        {
            while select custGroup2
                where custGroup2.PaymTermId == custGroup.PaymTermId
            {
                info(strFmt("%1", custGroup2.PaymTermId));
            }
        }
    }

    info("USE JOIN");

    // Let the SQL server do the work: only return records where no other
    // record shares the same PaymTermId.
    while select custGroup
        order by PaymTermId
        notexists join custGroup2
            where custGroup.RecId != custGroup2.RecId
               && custGroup.PaymTermId == custGroup2.PaymTermId
    {
        info(strFmt("%1", custGroup.PaymTermId));
    }
}

Joins, indexes and an example of when they don’t match…

We experienced some data locks that gave a customer some performance issues. Going through the motions we found this statement (scrambled) being the problem:

update_recordset adjustment
    setting Voucher = voucher
    where !adjustment.Voucher
        join trans
            where trans.Status == Status::Open
               && trans.DateTime < perDateTime
               && trans.RecId == adjustment.Trans;

The table Adjustment is connected to the table Trans through a relation like this: Adjustment.Trans = Trans.RecId. And Adjustment has – among others – an unclustered index like this: Trans, Voucher and a couple of other fields.

So you might think that the SQL server was capable of utilising this index since both Trans and Voucher are in play in the attempt to limit the records involved.

Looking at it from the SQL server it ends up like this:

AND ((T1.VOUCHER = @P5)))
AND (((T2.STATUS = @P8)

When executed, this gave an index scan resulting in heavy locking of data. The reason for this – and the reason why the index could not be used – is that the SQL server sees it as two statements: one selecting adjustment records with the Voucher field as the only range, and one selecting the trans records with the specified ranges except the relation range. It then returns the intersection of these two result sets.

Adding an index with Voucher as the first field solves the problem, and the data locking stops.
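As a hedged sketch – the real table and field names are scrambled above, so these are placeholders – the fix looks something like this in T-SQL:

```sql
-- Placeholder names: ADJUSTMENT stands in for the scrambled table above.
-- With VOUCHER first, SQL Server can seek on the only range it sees for T1
-- instead of scanning the TRANS-first index.
CREATE NONCLUSTERED INDEX I_ADJUSTMENT_VOUCHERIDX
    ON dbo.ADJUSTMENT (VOUCHER, TRANS);
```

For the change to survive, create the index from the AOT and synchronise, since a later synchronisation would otherwise remove an index added directly on the SQL server.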

Change which index is clustered – directly on the SQL server

Today’s post is not exactly Dynamics AX relevant. However, in some cases where you need to do some immediate testing or hot fixing, and you are not able to change it in a running AX environment, you could end up making temporary index modifications directly on the SQL server.

The thing about this is that when you later synchronise from Dynamics AX, such changes are often overwritten in the process. That can easily be OK as long as you are aware of it.

One of the things that needs a bit more work than just adding/removing indexes or fields is to change which index is clustered. For demonstration purposes I have created a small database called DemoDB with a table called PersonTbl.

It contains 3 fields and 2 indexes as shown here:


The thing is that in this case we, for some reason, need IdIdx to be the clustered index, and it is not an option to wait for the change to be released through the normal procedure in AX.

If we look at the clustered index NameIdx we cannot un-cluster it:


And clustering the IdIdx index is not possible from the properties form either:


To make this happen we need to modify the table through design mode. Right click on the table and select Design:


Right click anywhere on the columns overview and select Indexes/Keys:


Now we can uncluster the NameIdx index like this:


And make IdIdx clustered:


Click Close and save the table modifications.

Voila! The indexes have been changed. You might need to refresh to make it visible, and should anyone be using the table and its indexes while you make your customisations, the save would fail.
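The same change can also be scripted in T-SQL instead of using design mode. This is a sketch that assumes the columns are called Name and Id (the screenshot of the demo table is not reproduced here), so adjust to the actual schema:

```sql
USE DemoDB;

-- Drop the current clustered index and recreate it as non-clustered ...
DROP INDEX NameIdx ON dbo.PersonTbl;
CREATE NONCLUSTERED INDEX NameIdx ON dbo.PersonTbl (Name);

-- ... and rebuild IdIdx in place as the new clustered index.
CREATE CLUSTERED INDEX IdIdx ON dbo.PersonTbl (Id)
    WITH (DROP_EXISTING = ON);
```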



Please remember that this procedure is not exactly best practice and should only be used as a last resort.

Locking on the InventSumDelta table – additional tweaks

This is a follow up on the post Locking on the InventSumDelta table.

We have made a couple more tweaks that seem to have removed the majority of the remaining locks. Although we managed to remove a lot by going through the motions in the above-mentioned post, the users continuously received deadlock errors when inserting lines in sales orders, inventory journals and so on.

This made us look a bit deeper into the statistics and how they are created and used. What we saw was that although we had proper indexing on the InventSumDelta table, the wrong index was sometimes still picked. Looking at the index statistics, the RecId index was only used for scanning and not really doing anything useful considering how the table is used. Still, the index was used and scanned.

We therefore decided to disable the index, hoping that the statistics would come to their senses and pick a more relevant index. They did not.

Instead the optimizer took the index we created earlier, having ItemId as the first column, although the delete from statement only used TTSId, DataAreaId and Partition. None of these columns were in the RecId index. That is when we realised the optimizer had no clue what to look for. We tried changing the column sequence on the two indexes to have Partition and DataAreaId at the beginning, but nothing happened until we disabled the ItemId index too. That had an instant effect and practically removed the deadlock issues immediately. The index statistics showed an instant change from scanning the ItemId index to seeking the TTSId index.
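For reference, disabling an index directly on the SQL server looks like this. It is a hedged sketch: the actual index names on INVENTSUMDELTA depend on your installation, so look them up in sys.indexes first.

```sql
-- Assumed index name; never disable the clustered index,
-- as that takes the whole table offline.
ALTER INDEX I_174ITEMIDX ON dbo.INVENTSUMDELTA DISABLE;

-- A rebuild re-enables it again (and an AX synchronise may do the same):
-- ALTER INDEX I_174ITEMIDX ON dbo.INVENTSUMDELTA REBUILD;
```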

Please do not hesitate to respond if you have any questions, suggestions or comments to the two posts.

Locking on the InventSumDelta table

With one of our customers we have been struggling with severe locking on the InventSumDelta table. The effect of the locking is that users cannot perform their daily work while, for example, somebody is invoicing a large sales order or running other processes that access inventory data for shorter or longer periods of time.

While trying to fix this issue we have been attacking it from several fronts, and in this blog post I will try to list some of the actions we have taken to work our way towards a solution.

The current status is that we are still experiencing locks. However, they have been minimised to a – more or less – acceptable level. With performance being somewhat subjective, it is difficult to say when enough is enough.

So, what servers are we working on?

5 AOS servers + a load balancer. All running Windows Server 2008 R2

The SQL server is a SQL 2012 Standard edition also running on a Windows Server 2008 R2 machine.

What have we done so far?

On the technical side we have done the following:

To eliminate disk I/O, which we suspected in the beginning, we added an SSD drive directly in the SQL server and moved the databases from the conventional SAN to the new drive. Disk performance rocketed, but we still had locking issues. The current setup has the AX databases on an SSD SAN and the TempDB running on the local SSD to boost performance.

The next step was to remove page locking on the indexes on the InventSumDelta table. The theory here was that records were locked because they overlapped on the same pages. Then the OS and SQL server were patched with every relevant patch and CU, and we doubled the RAM available to the SQL service from 32 to 64 GB, since we could see it was running a bit low on RAM. Being shorthanded on memory actually increases locking, even with page locking disabled. Still no cigar: general performance increased, but the users were still stuck.
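Turning off page locking can be done per index or, as sketched here, for all indexes on the table at once:

```sql
-- Disallow page-level locks so SQL Server takes row locks
-- (or escalates straight to a table lock) instead.
ALTER INDEX ALL ON dbo.INVENTSUMDELTA
    SET (ALLOW_PAGE_LOCKS = OFF);
```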

In the beginning we were convinced that this was due to general performance issues in AX 2012 R2. Yes, they do exist. 🙂

But more and more pointed towards the SQL server being the bottleneck. That made us start fiddling with fill factors, going from 80 to 70 and experimenting with even more extreme numbers, ending back on 80 since we saw no noticeable effect besides a rapidly increasing database size.
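Adjusting the fill factor is a rebuild per index; here is a sketch of the kind of statement we ran (80 is the value we ended up with):

```sql
-- Rebuild all indexes on the table with the chosen fill factor.
-- Lower values leave free space on each page but grow the database.
ALTER INDEX ALL ON dbo.INVENTSUMDELTA
    REBUILD WITH (FILLFACTOR = 80);
```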

Assisted by Jesper from Miracle A/S, we did a lot of tracing on the SQL server and discovered multiple indexes missing from the system. A couple of them were probably specific to the way the customer works, and a couple of them were, I think, of public interest. Both of the latter are created on InventSumDelta:


These index tweaks actually made a difference. Again, we did not reach the finish line. But we did move a fair bit closer.

While diving into this we came across what is probably the actual problem. We discovered that even though we had an excellently indexed table, the SQL server optimizer seemed to randomly ignore that fact and selected indexes not matching the query, thereby locking the full table.

There are several ways to address this. One is to increase the frequency of statistics updates; going down to every two minutes did not do any good. What we then did was to completely disable the automatic statistics update on all indexes on the table. The theory here was that since the content of the table is – with high frequency – continuously switching from no records to x number of records, the statistics could in most cases be based on a wrong perception of how the indexes were best utilised. And this seemed to be a hit.
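Disabling the automatic statistics update for just this table can be done with sp_autostats, leaving the rest of the database untouched:

```sql
-- Turn off auto-update of statistics for every index and statistic
-- on INVENTSUMDELTA only.
EXEC sp_autostats 'dbo.INVENTSUMDELTA', 'OFF';
```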

Since this change we have seen a significant drop in locking. No doubt the new indexes have made a difference, and we would not be on our way out of the woods without them. However, I believe the main handle we pulled was in the way statistics are handled.

Afterwards we have added a couple more indexes on InventSumDeltaDim and SalesInvoiceTmp to further improve performance. I will try to create a follow-up post summarising these steps.

This long path will not necessarily fix or patch your system if you are experiencing the same issues. If it does, I hope it makes your customers happy. If not, I hope you can use a bit of our findings, and that you will share yours by responding to this blog post.


UPDATE: Locking on the InventSumDelta table – additional tweaks