Azure SQL firewall access

Some things were easier in the good old days before Dyn365 and Azure, and some are better now. Often the things that were easier before tend to be the things we need to relearn.

Allowing firewall access to Azure SQL (for example for BI purposes) is easily scripted, and here are three useful scripts for that purpose:

List current firewall rules:

SELECT * FROM sys.firewall_rules

If you need to grant access to the IPs 111.222.111.222 to 111.222.111.250:

EXEC sp_set_firewall_rule N'The_Name_That_Will_Be_Shown_On_The_List', '111.222.111.222', '111.222.111.250';

Getting rid of the above rule is just as simple:

EXEC sp_delete_firewall_rule N'The_Name_That_Will_Be_Shown_On_The_List'

The three statements above are all server-level rules, so you need to run them against the master database.

You can do the same tricks at the database level:

Listing

SELECT * FROM sys.database_firewall_rules

Granting

EXEC sp_set_database_firewall_rule N'The_Name_That_Will_Be_Shown_On_The_List', '111.222.111.222', '111.222.111.250';

Deleting

EXEC sp_delete_database_firewall_rule N'The_Name_That_Will_Be_Shown_On_The_List'

 

Remember to run them against the desired database.

 

Please keep in mind that you are messing with security, and as usual I don’t take any responsibility for any damage you might cause by using the above.


Enabling index hints is deprecated in AX 2012 … almost

Sending index hints to the SQL server from AX has been around for a long time, and often it has not done any good since the SQL server is pretty good at picking the best execution plan itself.

So when it ended up as a parameter in the AOS server configuration in AX 2009 and was then removed from the configuration in AX 2012, we seemed clear of the trouble it could cause. Microsoft stated that it was deprecated with AX 2012 and no longer available …

So it seemed a bit strange that the SQL server apparently still received index hints on a server I was working on recently.

While going through just about everything to find out why it was acting as if the non-existing index hint flag was enabled, I went through the registry to compare it against an AOS we knew was working as expected. And there it was … the registry value called “hint”.

I did a bit of research, and I was not the only one struggling with this. As it appears, these are the values to choose from:

Empty = Index hints are enabled and LTRIM (yes, it is there too) is disabled.

0 = Both index hints and LTRIM are disabled. This has to be the recommended setting.

1 = The same as empty. Does that make sense? No, not really. Anyways …

2 = Index hints are disabled and LTRIM is enabled.

3 = Both index hints AND LTRIM are enabled.

 

And just to refresh the memory: the registry keys are located under

HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\services\Dynamics Server\6.0

and from there you find the correct AOS instance and configuration.

Locate duplicate values (or the opposite) in a table

The way X++ handles SQL statements often lacks a bit compared to what you can do in standard T-SQL.

One of the areas is when you want to get a set of records from a table where a specific value in a field occurs only once in the table. You could traverse the records and compare values as you go along, but that can pull down performance significantly depending on the record count. Another way of doing it is to group by the field while counting the records in e.g. RecId. Traversing this result set and looking at the value in RecId gives you either the unique or the non-unique values, depending on what you need.

A better way would be to let the SQL server do some of the work and consequently return as few records as possible. Here is a job that illustrates this and the group by version mentioned above.

AX 2012 queries support the HAVING clause, which can in some scenarios do the same a bit more elegantly than this; see the sketch after the job below.

 

static void DEMO_NoDuplicates(Args _args)
{
    CustGroup custGroup;
    CustGroup custGroup2;
    PaymTermId lastPaymTermId;

    info("TRAVERSE COUNT");

    while select count(recid) from custGroup
        group by PaymTermId
    {
        if (custGroup.RecId == 1)
        {
            while select custGroup2
                where custGroup2.PaymTermId == custGroup.PaymTermId
            {
                info(custGroup2.PaymTermId);
            }
        }
    }

    info("USE JOIN");
    
    while select custGroup
        order by PaymTermId
        notexists join custGroup2
            where custGroup.RecId != custGroup2.RecId
               && custGroup.PaymTermId == custGroup2.PaymTermId
    {
        info(custGroup.PaymTermId);
    }
}
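
As mentioned above, the AX 2012 query framework can push the "occurs exactly once" filter to the SQL server through a HAVING clause. Here is a minimal sketch of that variant (the job name is mine, and it assumes the same CustGroup/PaymTermId scenario as the job above):

static void DEMO_NoDuplicatesHaving(Args _args)
{
    Query                query;
    QueryRun             queryRun;
    QueryBuildDataSource qbds;
    CustGroup            custGroup;

    query = new Query();

    // Group by payment terms and count the records per group
    qbds = query.addDataSource(tableNum(CustGroup));
    qbds.addGroupByField(fieldNum(CustGroup, PaymTermId));
    qbds.addSelectionField(fieldNum(CustGroup, RecId), SelectionField::Count);

    // Keep only the payment terms used by exactly one customer group
    query.addHavingFilter(qbds, fieldStr(CustGroup, RecId), AggregateFunction::Count).value(queryValue(1));

    queryRun = new QueryRun(query);

    while (queryRun.next())
    {
        custGroup = queryRun.get(tableNum(CustGroup));
        info(custGroup.PaymTermId);
    }
}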


Manipulating data while ignoring a TTSABORT

OK, the subject of this post will probably be categorised as dangerous and by some as borderline stupid. The content should NOT be used in everyday work, since it could very easily give you massive inconsistency in data and a broken system. There! I said it, and I do not want to be the one saying I told you so afterwards. This blog post is only for information and not a recommendation.

So what is all the fuss about? Well, sometimes it could be nice to allow some data to be inserted, updated or deleted even though a TTSABORT command or an error is thrown. That could be relevant when, for example, a log must be updated no matter what the result of a job is.

The mechanism is known from the batch queue, which has its status updated no matter how the job finishes.

 

The trick is using the UnitOfWork and UserConnection frameworks within the TTS scope. This allows you to create a connection to the database that is not a part of the TTS but is running its own show.

And this is where it gets dangerous/stupid in some scenarios. Imagine an inventory transactions customisation manipulating data partly inside the TTS and partly outside. The result could be data that is almost impossible to recover to a consistent state again.

 

In this example we want to update the Tax Group Id on the Customer groups and log the changes to a table no matter what happens.

I have created a table, DEMO_Log, with CustGroup as the only field. We would like this table to receive a new record regardless of success or failure in the update of the CustGroup table.

Next step is to create a class doing the work. In this case it is called DemoIgnoreTTSAbort and it has a run method like this:

private void run()
{
    info("BEFORE UPDATING");
    this.showInfo();
    
    ttsBegin;
    this.updateCustGroup();
    ttsAbort;

    info("AFTER NORMAL UPDATE");
    this.showInfo();
    
    ttsBegin;
    this.updateCustGroup2();
    ttsAbort;

    info("AFTER ALTERNATIVE UPDATE");
    this.showInfo();
}
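
The run method above is private, so for the demo I start it from a static main method on the class itself; a minimal sketch (the main method is my addition and assumes the class uses the default public new):

public static void main(Args _args)
{
    DemoIgnoreTTSAbort demo = new DemoIgnoreTTSAbort();

    // Kick off the demo; run() is callable here because main lives on the same class
    demo.run();
}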

The run method starts by showing a summary of the current records like this

private void showInfo()
{
    CustGroup custGroup;
    DEMO_Log log;

    select count(RecId) from custGroup
        where custGroup.TaxGroupId;

    select count(RecId) from log;

    info(strFmt("Customer groups with Tax group id: %1", custGroup.RecId));
    info(strFmt("Log records: %1", log.RecId));
}

The idea is to give us a count of customer groups with Tax group ids and the count of records in our log table.

Then, inside a TTS scope, we use the updateCustGroup method to try updating the groups like this:

private void updateCustGroup()
{
    CustGroup custGroup;

    while select forUpdate custGroup
        where ! custGroup.TaxGroupId
    {
        custGroup.TaxGroupId = 'TX';
        custGroup.update();
        this.insertInLog(custGroup.CustGroup);
    }
}

Each CustGroup record with no TaxGroupId content is updated with ‘TX’, and a log record is inserted using the method insertInLog, which goes like this:

private void insertInLog(CustGroupId _custGroupId)
{
    DEMO_Log log;
 
    log.CustGroup = _custGroupId;
    log.insert();
}

I think that one is pretty self-explanatory…

The TTS scope is then ended with a TTSABORT, so no records within the scope are updated or inserted.

We then call showInfo once more to see if anything has happened. And nothing has. No surprise.

 

The next part is a new TTS scope, where a method (updateCustGroup2) that is almost identical to updateCustGroup is used:

private void updateCustGroup2()
{
    CustGroup custGroup;
  
    while select forUpdate custGroup
        where ! custGroup.TaxGroupId
    {
        custGroup.TaxGroupId = 'TX';
        custGroup.update();

        this.insertInLog2(custGroup.CustGroup);
    }
}

 

The only difference between updateCustGroup and updateCustGroup2 is that it calls insertInLog2 instead of insertInLog after updating each record.

This method is the key to all this and it looks like this:

private void insertInLog2(CustGroupId _custGroupId)
{
    DEMO_Log log;
    UnitofWork unitOfWork;
    UserConnection userConnection;

    userConnection = new UserConnection();
    unitOfWork = new UnitofWork();

    log.CustGroup = _custGroupId;
    log.insert();

    unitOfWork.insertOnSaveChanges(log);
    unitOfWork.saveChanges(userConnection);
}

Compared to insertInLog it starts by adding two new variables, unitOfWork and userConnection, which give us an extra connection to the database that is not included in the TTS scope. First comes a basic instantiation, followed by the insert of the DEMO_Log record as in the insertInLog method. The next statement is where we tell unitOfWork to insert the log record(s) upon the call of the saveChanges method. There are also deleteOnSaveChanges and updateOnSaveChanges if you want something other than inserts.

Finally, we call saveChanges using the userConnection declared above.
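
For completeness, the update variant could look roughly like this; a sketch only, with a method name of my own, assuming the DEMO_Log record has already been selected and modified before it is passed in:

private void updateLogOutsideTts(DEMO_Log _log)
{
    UnitofWork     unitOfWork;
    UserConnection userConnection;

    userConnection = new UserConnection();
    unitOfWork     = new UnitofWork();

    // Register the already modified record for update and commit it on the separate connection
    unitOfWork.updateOnSaveChanges(_log);
    unitOfWork.saveChanges(userConnection);
}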

Ending the run method with another call to showInfo reveals that although we abort the TTS scope and the customer groups remain unchanged, the log is fully updated:

Message (05:09:54 am)
BEFORE UPDATING
Customer groups with Tax group id: 0
Log records: 0
AFTER NORMAL UPDATE
Customer groups with Tax group id: 0
Log records: 0
AFTER ALTERNATIVE UPDATE
Customer groups with Tax group id: 0
Log records: 7

Once again: Remember that this is only to be used with extreme caution …

Changing privileges on multiple SQL server tables for a user

This is not really a Dynamics AX thing but with BI in mind it kind of relates.

The thing is that sometimes you have users that need access to AX tables and views on the SQL server to maintain and develop BI. And often these users should have access to almost every table except a few. Payroll tables are a good example.

In a normal situation you would use schemas to handle this, but since AX creates all tables as part of the dbo schema, this is not an option. So I created this little script for a colleague to accelerate the process:

DECLARE @sqlStatement NVARCHAR(max)
DECLARE @user NVARCHAR(30)
DECLARE @filter NVARCHAR(10)

-- ==========
-- Initialize 
-- ==========

SET @sqlStatement = '';
SET @user = 'contoso\abc';
SET @filter = 'payroll%';

-- ================
-- Get REVOKE lines
-- ================

SELECT @sqlStatement = @sqlStatement + 'REVOKE SELECT ON [' + NAME + '] TO [' + @user + '];' FROM SysObjects WHERE (TYPE = 'U' OR TYPE = 'V')

-- ================
-- Get GRANT lines
-- ================

SELECT @sqlStatement = @sqlStatement + 'GRANT SELECT ON [' + NAME + '] TO [' + @user + '];' FROM SysObjects WHERE (TYPE = 'U' OR TYPE = 'V') AND NAME NOT LIKE '' + @filter + ''

-- ==============================
-- Execute privilege change query
-- ==============================

EXECUTE sp_executesql @sqlStatement;

What it does is that it first declares three variables: one for the SQL statement, one for the user identity and one for filtering which tables NOT to grant access to. In this case we want the user abc in the contoso domain to have SELECT rights on all tables except those beginning with “payroll”.

It then starts by creating the REVOKE SELECT statements for all user tables and views and appends them to the @sqlStatement variable.

Then it traverses the tables and views to build the GRANT SELECT part for all objects NOT matching the filter in the @filter variable.

Finally the created statement is executed using the sp_executesql stored procedure.

It is not the fastest statement in the world; but it gets the job done.

Joins, indexes and an example of when they don’t match…

We experienced some data locks that gave a customer performance issues. Going through the motions, we found this statement (scrambled) to be the problem:

update_recordSet adjustment
    setting Voucher = voucher
    where ! adjustment.voucher
        join trans
            where trans.Status == Status::Open
               && trans.DateTime < perDateTime
               && trans.RecId == adjustment.Trans;

The table Adjustment is connected to the table Trans through a relation like this: Adjustment.Trans = Trans.RecId. And Adjustment has, among others, a non-clustered index like this: Trans, Voucher and a couple of other fields.

So you might think that the SQL server was capable of utilising this index since both Trans and Voucher are in play in the attempt to limit the records involved.

Looking at it from the SQL server it ends up like this:

(@P1 NVARCHAR(21), @P2 INT, @P3 BIGINT, @P4 NVARCHAR(5), @P5 NVARCHAR(21), @P6 BIGINT, @P7 NVARCHAR(5), @P8 INT, @P9 DATETIME2) UPDATE T1
SET VOUCHER = @P1, RECVERSION = @P2
FROM ADJUSTMENT T1
CROSS JOIN TRANS T2
WHERE (((T1.PARTITION = @P3)
AND (T1.DATAAREAID = @P4))
AND ((T1.VOUCHER = @P5)))
AND (((T2.PARTITION = @P6)
AND (T2.DATAAREAID = @P7))
AND (((T2.STATUS = @P8)
AND (T2.DATETIME < @P9))
AND (T2.RECID = T1.TRANS)))

Now, executing this ended up giving an index scan, resulting in heavy locking of data. The reason for this, and the reason why the index could not be used, is that the SQL server treats this as two statements: selecting the Adjustment records with the Voucher field as the only range, and the Trans records with the specified ranges except the relation range, and then returning the intersection of these two result sets.

Adding an index with Voucher as the first field solves the problem, and the data locking stops.

Using HAVING in a query

With AX 2012 we now have the option of using HAVING in a query. This allows us to limit a result set based on aggregated fields.

The advantage is that we let the SQL server do some filtering and receive fewer records compared to the old-school version, where we had to receive all records and then use an IF or the like to filter away the records that did not match the criteria.
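
For contrast, an old-school version of the example below could look roughly like this; a sketch where the grouping and summing still happen on the server, but the final filtering is done in code (the job name and threshold are just illustrative):

static void OldSchoolFilterExample(Args _args)
{
    SalesLine salesLine;

    // Sum line amounts per sales order on the server ...
    while select sum(LineAmount) from salesLine
        group by SalesId
    {
        // ... but filter the aggregated result in code
        if (salesLine.LineAmount >= 100000)
        {
            info(strFmt("%1 %2", salesLine.SalesId, salesLine.LineAmount));
        }
    }
}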

In this simple example we want to get all sales ids on orders having lines with a line amount total of 100000 or more.

 

static void SimpleQueryWithHavingExample(Args _args)
{
    Query query;
    QueryRun queryRun;
    QueryBuildDataSource qBDS_SalesTable;
    QueryBuildDataSource qBDS_SalesLine;
    SalesTable salesTable;
    SalesLine salesLine;
 
    // Init query
    query = new Query();
 
    // Add datasources and use standard relations
    qBDS_SalesTable = query.addDataSource(tableNum(SalesTable));
    qBDS_SalesLine = qBDS_SalesTable.addDataSource(tableNum(SalesLine));
    qBDS_SalesLine.relations(true);
 
    // Add a group by on SalesTable
    qBDS_SalesTable.addGroupByField(fieldNum(SalesTable, SalesId));

    // Add aggregation on LineAmount
    qBDS_SalesLine.addSelectionField(fieldNum(SalesLine, LineAmount), SelectionField::Sum);
 
    // Add the having filter
    query.addHavingFilter(qBDS_SalesLine, fieldStr(SalesLine, LineAmount), AggregateFunction::Sum).value(SysQuery::range(100000, ''));
 
    // Create and run the queryRun object
    queryRun = new QueryRun(query);

    while (queryRun.next())
    {
        salesTable = queryRun.get(tablenum(SalesTable));
        salesLine = queryRun.get(tableNum(salesLine));
        info(strFmt("%1 %2", salesTable.SalesId, salesLine.LineAmount));
    }
}

 

I am sure that this will come in handy. 🙂