Test access to Retail server on The New Dynamics AX

Just a quick heads-up on a feature that lets you test access to the Retail server in The New Dynamics AX.

From a browser, open this URL:

https://<my server name>ret.cloudax.dynamics.com/healthcheck?testname=ping

It shows you a ping test result looking like this:


If you need the result in a more developer-friendly format, you can append an &resultFormat=xml parameter to the URL, which gives you something like this:




Joins, indexes and an example of when they don’t match…

We experienced some data locks that gave a customer performance issues. Going through the motions, we found that this (scrambled) statement was the problem:

update_recordset adjustment
    setting Voucher = voucher
    where !adjustment.Voucher
        join trans
            where trans.Status == Status::Open
               && trans.DateTime < perDateTime
               && trans.RecId == adjustment.Trans;

The table Adjustment is related to the table Trans like this: Adjustment.Trans = Trans.RecId. And Adjustment has – among others – a non-clustered index on Trans, Voucher and a couple of other fields.

So you might think that SQL Server would be able to utilise this index, since both Trans and Voucher are in play in the attempt to limit the records involved.

Seen from the SQL Server side, it ends up like this:

AND ((T1.VOUCHER = @P5)))
AND (((T2.STATUS = @P8)

Executing this ended up in an index scan, resulting in heavy locking of data. The reason for this – and the reason the index could not be used – is that SQL Server treats this as two statements: one selecting adjustment records with the Voucher field as the only range, and one selecting trans records with the specified ranges except the relation range. It then returns the intersection of these two result sets.

Adding an index with Voucher as the first field solves the problem, and the data locking stops.
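On the SQL Server side, the fix corresponds to an index like the following sketch (the table and index names are hypothetical, mirroring the scrambled names above, not the customer's actual schema; in AX the index would of course be added in the AOT and synchronised):

```sql
-- A non-clustered index with VOUCHER as the first column lets
-- SQL Server seek on the Voucher range directly instead of
-- scanning the index that starts with TRANS.
CREATE NONCLUSTERED INDEX I_VOUCHERTRANSIDX
    ON dbo.ADJUSTMENT (VOUCHER, TRANS);
```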

Change which index is clustered – directly on the SQL server

Today's post is not exactly Dynamics AX relevant. However, in cases where you need to do some immediate testing or hot fixing and you are not able to change it in a running AX environment, you could end up making temporary index modifications directly on the SQL Server.

The thing about this is that when you later synchronise from Dynamics AX, such changes are often overwritten in that process. That can easily be OK, as long as you are aware of it.

One of the things that needs a bit more work than just adding or removing indexes and fields is changing which index is clustered. For demonstration purposes I have created a small database called DemoDB with a table called PersonTbl.

It contains 3 fields and 2 indexes as shown here:


The thing is that in this case we, for some reason, need IdIdx to be the clustered index, and it is not an option to wait for the change to be released through the normal procedure in AX.

If we look at the clustered index NameIdx we cannot un-cluster it:


And clustering the IdIdx index is not possible from the properties form either:


To make this happen we need to modify the table through design mode. Right click on the table and select Design:


Right click anywhere on the columns overview and select Indexes/Keys:


Now we can uncluster the NameIdx index like this:


And make IdIdx clustered:


Click Close and save the table modifications.

Voila! The indexes have been changed. You might need to refresh to make it visible, and should anyone be using the table and its indexes while you make your customisations, the save would fail.
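If you prefer a script over the designer, the same swap can be done in T-SQL. A sketch against the demo table, assuming the columns are named Id and Name (my guess from the index names):

```sql
-- Drop the non-clustered index first, then the clustered one;
-- this avoids rebuilding the non-clustered index twice.
DROP INDEX IdIdx ON dbo.PersonTbl;
DROP INDEX NameIdx ON dbo.PersonTbl;

-- Recreate both indexes with the clustering swapped.
CREATE CLUSTERED INDEX IdIdx ON dbo.PersonTbl (Id);
CREATE NONCLUSTERED INDEX NameIdx ON dbo.PersonTbl (Name);
```

Like the designer approach, this fails if the table is in use and is typically overwritten on the next synchronisation from AX.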



Please remember that this procedure is not exactly best practice and should only be used as a last resort.

The effect of unnecessary tables in a query

The other day I was asked to look at a performance issue with a form opening slowly. After going through the normal motions, I ended up with a method being called hundreds of times. It did not take long to execute, but all in all it was the reason the form slowed down.

What the method did was call a view and summarise a quantity field. Nothing extraordinary, but I gave the view a look anyhow. And that is what triggered me to write this small post on a simple but often effective question: Do I really need that table? 🙂

Let me show you what I found with a simple example. We want a list of all purchase order ids, item ids, line statuses and quantities, sorted by purchase order id, line status and item id.

This is a simplified example of the real life example and the demo view looks like this:


It returns the data as expected, it executes in a few ms and everybody is happy. Well, until you execute it 500 times in a row and get tired of waiting. Looking at what data we actually need returned points us towards the PurchTable. The only thing we get from this table is the PurchId, which is present on the PurchLine table as well – and fully indexed there too.

So if we change the view to look like this instead we get a significant effect:


The result set is the same but the performance has increased. Let us have a quick look at the statistics.
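The two view definitions correspond roughly to this T-SQL sketch (using standard AX 2012 table and field names as assumptions; the real views were generated by AX and include more fields):

```sql
-- Original view: joins PurchTable although every returned
-- column is already available on PurchLine.
CREATE VIEW dbo.PurchLineOverview1 AS
SELECT pt.PURCHID, pl.ITEMID, pl.PURCHSTATUS, pl.QTYORDERED
FROM dbo.PURCHTABLE pt
JOIN dbo.PURCHLINE pl ON pl.PURCHID = pt.PURCHID;
GO

-- Simplified view: one table less, same columns returned.
CREATE VIEW dbo.PurchLineOverview2 AS
SELECT pl.PURCHID, pl.ITEMID, pl.PURCHSTATUS, pl.QTYORDERED
FROM dbo.PURCHLINE pl;
GO
```

Note that the two result sets are only identical because every purchase line has a matching header in PurchTable; dropping a join is only safe when the join neither filters nor contributes columns.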

First what happens when executing the first view:

View1 - statistics

And then what happens with the updated view:

View2 - statistics

Well, several things indicate that we scored. Packets received from the server have been reduced from 6 to 4 (33%) and bytes received from 13921 to 5885 (58%). With this example only returning very few records (9), we will not see a dramatic effect on the execution times, but we do see an improvement.

All we did was simplify, and we still got the exact same result at a much lower cost. This is relevant not only for views but for any query made against the SQL Server.

Locking on the InventSumDelta table – additional tweaks

This is a follow up on the post Locking on the InventSumDelta table.

We have made a couple more tweaks that seem to have removed the majority of the remaining locks. Although we managed to remove a lot by going through the motions in the above-mentioned post, the users continuously received deadlock errors when inserting lines in sales orders, inventory journals and so on.

This made us look a bit deeper into the statistics and how they are created and used. What we saw was that although we had proper indexing on the InventSumDelta table, it sometimes still took the wrong index. Looking at the index statistics, the RecId index was only used for scanning and not really doing anything useful, taking into consideration how the table is used. Still, the index was used and scanned.

We therefore decided to disable the index hoping that the statistics would come to their senses and pick a more relevant index. It did not.

Instead it took the index we created earlier, having ItemId as the first column, although the delete from statement only used TTSId, DataAreaId and Partition. None of these columns were in the RecId index. That is when we realised the Optimizer had no clue what to look for. We tried changing the column sequence on the two indexes to have Partition and DataAreaId first, but nothing happened until we disabled the ItemId index too. That had an instant effect and practically removed the deadlock issues immediately. The index statistics showed an instant change from scanning the ItemId index to seeking the TTSId index.
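Disabling an index without dropping it keeps its definition in the metadata. A sketch (the index names here are illustrative, not the actual AX-generated names):

```sql
-- Disable the indexes the Optimizer kept picking by mistake.
-- A disabled index is neither used nor maintained until it is
-- rebuilt. Only do this for non-clustered indexes; disabling a
-- clustered index makes the whole table inaccessible.
ALTER INDEX RecIdIdx ON dbo.INVENTSUMDELTA DISABLE;
ALTER INDEX ItemIdIdx ON dbo.INVENTSUMDELTA DISABLE;

-- To bring one back later:
-- ALTER INDEX RecIdIdx ON dbo.INVENTSUMDELTA REBUILD;
```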

Please do not hesitate if you have any questions, suggestions or comments to the two posts.

Locking on the InventSumDelta table

With one of our customers we have been struggling with severe locking on the InventSumDelta table. The effect of the locking is that the users cannot perform their daily work while, for example, somebody is invoicing a large sales order or running other processes that access inventory data for shorter or longer periods of time.

While trying to fix this issue we have been attacking it from several fronts, and in this blog post I will try to list some of the actions we have taken to work our way towards a solution.

The current status is that we are still experiencing locks. However, they have been minimised to a – more or less – acceptable level. With performance being somewhat subjective it is difficult to say when enough is enough.

So, what servers are we working on?

5 AOS servers + a load balancer. All running Windows Server 2008 R2

The SQL server is a SQL 2012 Standard edition also running on a Windows Server 2008 R2 machine.

What have we done so far?

On the technical side we have done the following:

To eliminate the disk I/O, which we suspected in the beginning, we added an SSD drive directly in the SQL Server and moved the databases from the conventional SAN to the new drive. The disk performance rocketed, but we still had locking issues. The current setup is the AX databases on an SSD SAN and the TempDB running on the local SSD to boost performance.

The next step was to remove page locking on the indexes on the InventSumDelta table. The theory here was that the records were locked because of overlaps within the pages. Then the OS and SQL Server were patched with every relevant patch and CU, and we doubled the RAM available to the SQL service from 32 to 64 GB, since we could see it was running a bit low on RAM. Being short on RAM actually increases locking, even though we had disabled page locking. Still no cigar. The general performance increased, but the users were still stuck.
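Removing page locking on the indexes can be done like this (ALL hits every index on the table; individual index names can be used instead):

```sql
-- Disallow page-level locks on the InventSumDelta indexes, so
-- SQL Server takes row locks instead of locking whole pages
-- (or escalates straight to a table lock if it must).
ALTER INDEX ALL ON dbo.INVENTSUMDELTA
    SET (ALLOW_PAGE_LOCKS = OFF);
```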

In the beginning we were convinced that this was due to general performance issues in AX 2012 R2. Yes, they do exist. 🙂

But more and more pointed towards the SQL Server being the bottleneck. That made us start fiddling with fill factors, going from 80 to 70 and experimenting with even more extreme numbers, ending back at 80 since we saw no noticeable effect besides a rapidly increasing database size.
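A fill factor change of that kind requires a rebuild; a sketch of the statement involved:

```sql
-- Rebuild all indexes on InventSumDelta, leaving 30% of each
-- leaf page empty. A lower fill factor means fewer page splits
-- but a larger database, which matches what we observed.
ALTER INDEX ALL ON dbo.INVENTSUMDELTA
    REBUILD WITH (FILLFACTOR = 70);
```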

We did – assisted by Jesper from Miracle A/S – a lot of tracing on the SQL Server and discovered multiple indexes missing from the system. A couple of them were probably specific to the way the customer works, and a couple of them were, I think, in the category of public interest. Both of the latter are created on InventSumDelta:


These index tweaks actually made a difference. Again, we did not reach the finish line. But we did move a fair bit closer.

While diving into this we came across what is probably the actual problem. What we discovered was that even though we had an excellently indexed table, the SQL Server Optimizer seemed to randomly ignore that fact and selected indexes not matching the query, thereby locking the full table.

There are several ways to address this. One is to increase the frequency of statistics updates. Going down to every two minutes did not do any good. What we then did was to completely disable the automatic statistics update on all indexes on the table. The theory here was that since the content of the table is – with high frequency – continuously switching from no records to x number of records, the statistics could in most cases be based on a wrong perception of how the indexes were best utilised. And this seemed to be a hit.
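Disabling the automatic statistics update for a single table can be done with sp_autostats; a sketch:

```sql
-- Turn off auto-update for all statistics (including the index
-- statistics) on InventSumDelta only; the rest of the database
-- keeps its normal behaviour.
EXEC sp_autostats 'dbo.INVENTSUMDELTA', 'OFF';
```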

Since this change we have seen a significant drop in locking. No doubt the new indexes have made a difference, and we would not be on our way out of the woods without them. However, I believe that the main lever we pulled was in the way statistics are handled.

Afterwards we added a couple more indexes on InventSumDeltaDim and SalesInvoiceTmp to further improve performance. I will try to create a follow-up post summarising these steps.

This long path will not necessarily fix or patch your system if you are experiencing the same issues. If it does, I hope it makes your customers happy. If not, I hope you can use a bit of our findings and that you will share yours by responding to this blog post.


UPDATE: Locking on the InventSumDelta table – additional tweaks