SSAS Locking: CommitTimeout and ForceCommitTimeout

There are already plenty of good posts out there on this topic.

As mentioned by Andrew Calvett, it is possible to set the CommitTimeout for a Process command.  Here is how to do it.

<Execute xmlns="urn:schemas-microsoft-com:xml-analysis">
  <Command>

    <Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Type>ProcessFull</Type>
      <Object>
        <DatabaseID>AdventureWorks</DatabaseID>
        <DimensionID>Date_d2c7ec3d-c72c-435d-bd43-8283714cc2dd</DimensionID>
      </Object>
    </Process>

  </Command>
  <Properties>
    <PropertyList>
      <CommitTimeout>20000</CommitTimeout>
    </PropertyList>
  </Properties>
</Execute>

 

Setting CommitTimeout to 20 seconds (20000 milliseconds) means it will kick in before the server-level default ForceCommitTimeout of 30 seconds. To try this out, run the following query, which takes about 50 seconds on my laptop.  As soon as the query starts running, execute the process command.  It should roll back the process command and allow the query to run to completion.

DEFINE
MEASURE 'Date'[Bad Measure] =
   COUNTROWS(GENERATE(ALL('Internet Sales'), VALUES('Date'[Day Of Year])))
EVALUATE
ADDCOLUMNS(
   VALUES('Date'[Calendar Year])
   ,"Bad Measure", [Bad Measure]
)

 

Change CommitTimeout to 40 seconds (40000 milliseconds) and the default ForceCommitTimeout of 30 seconds will kick in instead.  The query will fail – instead of the Process command – “because of locking conflicts”.
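
For reference, ForceCommitTimeout is a server-level property. It can be changed in the Analysis Server Properties dialog in Management Studio, or directly in msmdsrv.ini. Here is a minimal sketch of the relevant msmdsrv.ini fragment (the value is in milliseconds; 30000 is the default):

<ConfigurationSettings>
  ...
  <ForceCommitTimeout>30000</ForceCommitTimeout>
  ...
</ConfigurationSettings>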

Experiences & Tabular Tips

A project I worked on for some time went live months ago. My role as “tech lead” had a foot in many camps: solution architecture, business analysis, scrum master, data modeling, BI evangelism and BI development. We built a data warehouse of over half a terabyte with a handful of tables edging half a billion rows. It is a major component of the data-services layer for the organization. Thousands of applications/reports (primarily Excel/Access apps on network shares) that were tightly-coupled with systems of record were either replaced by a corporate BI & reporting solution, or rebuilt to consume data from reusable data services. We retired other competing “data layers” (legacy DW and a data-service + BI layer with a mix of technologies including multidimensional cubes). We also enhanced limited MDM capabilities resulting in greater business ownership of master data, rather than using the MDM repository as an unnecessary duplication of data. A data-governance model has been established to ensure growth and longevity of the solution.

As is the case with many enterprise organizations that have built a culture of encouraging (or even training) business users to query corporate systems of record directly, such a project required a fundamental change in business processes, and faced some resistance. It was not enough to tell stakeholders that a reusable data layer would provide a “single version of the truth”, promote consistent decisions across the organization, and lower the total cost of ownership by addressing inefficiencies such as the same business calculations being applied in hundreds of places, inconsistent terminology, etc. What forced the issue was that the biggest system of record in the organization (containing roughly 50% of the data anyone is interested in) was being replaced. Given that thousands of apps were tightly coupled with this SoR and therefore exposed, they would all break and the business was at risk of grinding to a halt. This was enough to convince key stakeholders of the benefits of a reusable data layer coupled with a governance model, in part to protect them from this situation in the future. So we went live alongside this major SoR migration, replacing or integrating countless apps, cleaning up the architecture of the organization, introducing hundreds of users to business intelligence as a concept, and transforming the way data is accessed for decision making. I personally trained over 200 users on how to use the new BI layer to access corporate data with minimal technical skills. My motto throughout has been “clicky clicky draggy droppy” data access. I think it’s fair to say the project was a success.

Focusing on SSAS …

We started with multidimensional and had performance issues. Despite optimization, multidimensional struggled. We moved to tabular. I did not find it difficult to get 50 to 100 GB of memory on the servers, which was more than enough for tabular when limited to the data set required for BI. However, as pointed out by others in the blogosphere, the better performance of tabular did not come for free …

Personally, I did not find limited functionality in tabular to be a major issue. As I mentioned in my Multidimensional or Tabular post, most functionality can be delivered even if using a “workaround” (often DAX instead of decorating the metadata of the model in multidimensional). True show stoppers like unary operators and write back were not required.

The bigger issues were with the tabular development experience. The tabular-model designer inherited from Power Pivot is sluggish for models with lots of tables. The Excel-like DAX formula bar is, to put it politely, annoying. Without saying anything too controversial, MS corporate BI has played second fiddle lately.

Here are a couple of tips/techniques. This is not a comprehensive list.

Author measures in a query-authoring tool and copy/paste them into the tabular model

Authoring measures in the tabular model designer is not effective because the workspace database contains hardly any data. Unfortunately, my laptop doesn’t have 50 GB of memory. Setting the workspace database to point at the Dev instance is not an option because it would exacerbate the sluggish development experience, not to mention screwing up partitions on the Dev instance, etc. Aside from providing a modeling scratchpad, the workspace database is basically redundant in an enterprise development scenario.

If you prefer DAX queries, author measures in DAX Studio (querying the Dev instance of course) and copy/paste the measure definitions into the tabular model designer when you’re done.

If you prefer MDX queries, author measures in Management Studio and copy/paste the measure definitions into the tabular model designer when you’re done. You can author DAX measures in the WITH clause of an MDX query. I personally do this a lot because an MDX SELECT statement requires less typing than a DAX EVALUATE statement, and I can refer to user hierarchies in the query (though not in the measure definitions).
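
For example, here is a minimal sketch of a DAX measure authored in the WITH clause of an MDX query. The cube name [Model] and the table/column names are assumptions – adjust them to suit your model.

WITH MEASURE 'Internet Sales'[Test Measure] =
   CALCULATE(SUM('Internet Sales'[Sales Amount]), 'Product'[Color] = "Red")
SELECT
   {[Measures].[Test Measure]} ON COLUMNS,
   NON EMPTY [Date].[Calendar Year].MEMBERS ON ROWS
FROM [Model]

Once happy with the expression, copy everything to the right of the = sign into the measure grid.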

For large models, use a subset in the tabular model designer

The tabular model designer inherited from Power Pivot may work well for a typical Power Pivot model. This is not always the case in SSAS tabular. The largest of the models I work with has about 80 tables, many with lots of columns. There are facts at various levels of granularity, reducing the opportunity for denormalization. Including internal measures not visible to users, there are over 800 measures (thankfully organized using display folders)! Due to business requirements to report measures side by side, I cannot split it into multiple tabular models.

Even in Manual Calculations mode (Model > Calculation Options > Manual Calculation), this model freezes the IDE for 10 to 15 seconds every time I do anything trivial. For significant modeling enhancements, this is unworkable. I simply don’t have the patience to sit there while the IDE is “Finishing …” all the time.

For significant enhancements, use BISM Normalizer to export a subset of the model to a temporary tabular project. Do the enhancements in there, where there is a responsive development experience, and then use BISM Normalizer to merge the enhancements back into the main model when you’re done. This saves time and minimizes hair loss.

Place calculation logic at the appropriate level of the architecture

The tabular storage engine is clearly much faster than that of multidimensional. The same cannot be said for the formula engine.

This may sound obvious, but place calculation logic at the appropriate level of the architecture. Some calculations (e.g. traditional aggregated level calcs) are much better suited to a cube/tabular model than the ETL. In other cases there may be a considerable performance gain by performing a calculation in the ETL rather than the tabular model, thereby avoiding the formula engine.

This also promotes reusability of business logic because non-BI applications that submit SQL instead of MDX/DAX can leverage the same calculations.
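
As a hypothetical illustration (the [Net Amount] column is assumed to have been precomputed by the ETL), the first measure below asks the formula engine to evaluate an IF for every row, whereas the second is a plain scan the storage engine can satisfy on its own:

DEFINE
MEASURE 'Internet Sales'[Net Amount Model] =
   SUMX
   (
      'Internet Sales'
      , IF( 'Internet Sales'[Discount Amount] > 0
          , 'Internet Sales'[Sales Amount] - 'Internet Sales'[Discount Amount]
          , 'Internet Sales'[Sales Amount] )
   )
// Same logic precomputed as a column in the ETL
MEASURE 'Internet Sales'[Net Amount ETL] =
   SUM('Internet Sales'[Net Amount])
EVALUATE
ADDCOLUMNS
(
   VALUES('Date'[Calendar Year])
   , "Model", [Net Amount Model]
   , "ETL", [Net Amount ETL]
)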

BISM Normalizer: Version 1.3.11 Released!

Download it from the BISM Normalizer Visual Studio Gallery page.

Enhancements in Version 1.3.11

Support for tabular objects created by BIDS Helper.

  • Actions are first-class objects listed in the differences grid.
  • Display folders are attributes of other objects (tables with columns/hierarchies, measures) and visible in those objects’ definitions.
  • Translations are attributes of other objects (tables with columns/hierarchies, measures, perspectives, actions) and visible in those objects’ definitions.

It is necessary to enable the features in the BISM Normalizer options dialog (accessible from the Connections dialog).

[Screenshot: BISM Normalizer options dialog]

Multidimensional or Tabular

This post is not a list of multidimensional features unsupported by tabular. This has been documented already by various reliable sources, including blog posts and PASS presentations.

The first thing I would like to say is that I agree there is lots of work for tabular to catch up to the feature-rich multidimensional. The tabular-model designer inherited from Power Pivot is sluggish for models with lots of tables. The Excel-like DAX formula bar is, to put it politely, annoying. Without saying anything too controversial, MS corporate BI has been playing second fiddle lately.

However, tabular does make sense for many customers today. For most customers, having fast performance is more important than the unsupported features – which invariably either have “workarounds” or are fringe use cases.

On the workarounds, if the same functionality can be delivered to the business, they don’t care if we technical people see it as a “workaround” because it’s not delivered the same way we are used to. And the business people are the ones that matter. This applies to many-to-many relationships, parent-child hierarchies, role-playing dimensions (create multiple instances of the same table), and various other items.
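
To make the parent-child case concrete, here is a sketch of the usual approach: flatten the hierarchy into level columns with calculated columns based on the PATH functions. The table/column names are from AdventureWorksDW’s DimEmployee; the EmployeePath column name is my own.

EmployeePath (calculated column):
=PATH('Employee'[EmployeeKey], 'Employee'[ParentEmployeeKey])

Level 1 (calculated column):
=LOOKUPVALUE('Employee'[Last Name], 'Employee'[EmployeeKey], PATHITEM('Employee'[EmployeePath], 1, INTEGER))

Repeat the Level column for each level in the hierarchy, then hide the keys and expose the levels as a regular hierarchy.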

For what I’m calling the fringe use cases, the supportability of some of these does not make sense for many customers. Hand a support team a solution that uses MDX stored procs and extensive scoped-cell assignments, and they will struggle. How many implementations use these features because the developer thought they were cool rather than having any real business need? I think quite a few.

Other use cases may be showstoppers like unary operators and writeback, but not for the majority of implementations.

Scoped-cell assignments are on the potential-showstopper list too. In most cases, though, calculation logic can be pushed to the ETL layer – where it belongs unless it plays to the strengths of the cube/tabular model (like aggregated-level calcs) or would cause a data-explosion problem at the relational level – to avoid the formula engine where possible. With that done, DAX is a pretty capable and powerful language for the calculations built into the tabular model.

On the tabular memory limitation, many customers I’ve talked to are worried they won’t fit into memory when they are actually nowhere near the upper limit of what they can relatively easily get on a server (especially when limited to the required data). Also, more memory will only become more viable in the future. For a multi-terabyte data warehouse implementation like a Yahoo.com, then yes, memory may rule out tabular – but again this is a fringe use case.

As noted elsewhere,

  • When a project chooses multidimensional or tabular, it is not possible to change your mind without starting development again.
  • I think it is widely accepted that Microsoft is more likely to build new features and put future development investment into tabular than multidimensional.

It is a valid statement that a project may need some of the features unsupported by tabular at a later date, which could be a problem. Conversely, a project may encounter performance issues with multidimensional that were not anticipated at the start of the project.  One could also argue that, as new features are built into tabular, a project could be stuck with multidimensional and unable to leverage better tabular capabilities in the future.

Update Nov 13 2013 – post from Marco Russo: Updates about Multidimensional vs Tabular

Deploy Tabular Models and Retain Partitions

Kasper de Jonge asked a question on Twitter that I would like to answer here to a) give it more exposure, and b) I pretty quickly ran out of 140 characters in my Twitter reply.

I initially tweeted: Problem when deploy tabular models to Dev server from SSDT & lose dynamic partitions? Deploy with BISM Normalizer & retain partitions

Kasper tweeted: Hey Christian, what do you mean by dynamic partitions?

And here is my response …

Hi Kasper,

We often partition the large tables in a tabular model to speed up processing times (as I’m sure you know).  For example, we might partition by month and then process only the most recent couple of months every night – which is a lot quicker than processing the whole table.  Normally this is done in AMO code that is called from an SSIS package – so it can be managed by the ETL to increment/drop partitions, etc.  The version of the model in source control does not contain the partition objects.
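
As a sketch of what the ETL submits each night (the IDs here are hypothetical – in practice the equivalent AMO calls are generated in code), it processes the recent partitions and then recalculates:

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Process>
    <Type>ProcessData</Type>
    <Object>
      <DatabaseID>SalesTabular</DatabaseID>
      <CubeID>Model</CubeID>
      <MeasureGroupID>Internet Sales</MeasureGroupID>
      <PartitionID>Internet Sales 201311</PartitionID>
    </Object>
  </Process>
  <Process>
    <Type>ProcessRecalc</Type>
    <Object>
      <DatabaseID>SalesTabular</DatabaseID>
    </Object>
  </Process>
</Batch>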

While developing in SSDT, we frequently deploy to a dev server (obviously not test/production).  This is necessary because all the dev reports are pointing at the server – not the workspace database on the developer machine.  If the dev ETL has built partitions, they are lost upon deployment from SSDT (using right-click deploy) – and they take a long time to rebuild/reprocess.

When creating a BISM Normalizer difference comparison, we have the option to “include partitions in table definitions”.  If we leave this unchecked, partitions will not be considered when comparing tables.  BISM Normalizer will treat such tables as equal and will not mess with them – thereby avoiding reprocessing.

[Screenshot: Include Partitions option]

The SSAS Deployment Wizard is the only other deployment method for tabular models that supports “retain partitions” functionality.  However, you would not use it to deploy from SSDT to the dev server when making a change to a DAX calc or something.  BIDS Helper’s Deploy MDX Script feature serves a similar purpose for multidimensional.

For regular Test/Production deployments, we can use the deployment wizard. Alternatively, we can use BISM Normalizer to (more easily) create an xmla script for the release to apply on Test/Production.

Another use case is bug fixes and partial deployments.  BISM Normalizer can create a script to apply on Test/Production – which updates only the calculation(s) that need to be fixed without a full release cycle – and does not mess with everything else that is already there.  The deployment wizard only supports “all-or-nothing” deployments, which are not appropriate for bug fixes and partial deployments.

Cheers!

Christian

Why doesn’t BISM Normalizer work directly with PowerPivot workbooks?

Gerhard Brueckl asked a question on the BISM Normalizer page Q&A section that I would like to answer here to give it more exposure.

Gerhard wrote …

Hi Christian,

first of all – thanks for that awesome tool!
Is it somehow possible to compare a Server/Project to a PowerPivot Workbook and vice-versa?
Office 2013 introduced a new object model for PowerPivot where you can access the PowerPivot model – is it possible to extend BISM Normalizer to make use of this?

greets,
gerhard

My response …

Hi Gerhard,

Thanks for the feedback! Others have asked the same question including Grant Paisley.

I would love to have BISM Normalizer work directly with PowerPivot. Not only allow selection of an Excel file to act as a source/target model, but also have an Excel Addin version of BISM Normalizer to support non-Visual Studio users.

I agree there is a strong use case for this. PowerPivot enables users to easily pull data from uncommon sources – and often mash it up with corporate data. So inheriting (some or all of) another tabular model’s metadata would be useful to say the least.

PowerPivot now has the API you mentioned, but it would be much easier if Microsoft provided a single API for both Analysis Services and PowerPivot. The easiest way for me would be if Microsoft allowed access to the PowerPivot AMO interface. Instead, Microsoft has chosen to keep this interface private.

I built an object model on top of AMO for BISM Normalizer. It contains classes for all the objects that BISM Normalizer displays on the differences grid and abstracts calls to AMO. But even so, under the abstraction layer, BISM Normalizer basically just copies AMO objects from source to target. It does not need to rebuild them, which would be required to get them from AMO to DataModel OM in Excel 2013 and vice versa.

I raised this Connect item for this purpose. Feel free to vote for it.

Thanks,
Christian

IF ISCROSSFILTERED performs better than IF HASONEVALUE

See this post for the functional difference between ISCROSSFILTERED and HASONEVALUE.

This DAX query on AdventureWorks runs in less than a second (SP1).

DEFINE
MEASURE 'Date'[Bad Measure] =
   COUNTROWS(GENERATE(ALL('Internet Sales'), VALUES('Date'[Week Of Year])))
MEASURE 'Date'[Conditional Evaluation] =
   IF(ISCROSSFILTERED('Product'[Product Name]), [Bad Measure], 0)
//  IF(HASONEVALUE('Product'[Product Name]), [Bad Measure], 0)
EVALUATE
ADDCOLUMNS(
   VALUES('Date'[Calendar Year])
   ,"Conditional Evaluation", [Conditional Evaluation] //('Product'[Product Name] = "Mountain Pump")
)
ORDER BY 'Date'[Calendar Year]

Uncomment the product filter ['Product'[Product Name] = "Mountain Pump"] to include it. Now it takes about 8 seconds. Looking at the query, this makes sense because it now has to evaluate the “Bad Measure”. This suggests that IF ISCROSSFILTERED is optimized to perform conditional logic only when necessary. This behavior is similar to short-circuit evaluation, but not quite the same thing.

Now comment out the line containing ISCROSSFILTERED and uncomment the line containing HASONEVALUE. It takes about 8 seconds whether the product filter is applied or not. This suggests that using IF HASONEVALUE is not optimized; it does perform conditional logic unnecessarily.

Having watched Marco Russo’s Inside DAX Query Plans session from the PASS 2012 Summit, I ran Profiler traces to confirm this theory. For the above query at least, this does seem to be the case.

  • Even when the product filter is not applied, IF HASONEVALUE executes a VertiPaq SE Query on 'Internet Sales'[RowNumber]. RowNumber is an internal, unique column on every table.
  • The physical DAX query plan includes the following operation, which I believe means it is iterating 19.2 million rows for the formula engine to deal with.
Spool_Iterator<Spool>: IterPhyOp IterCols(0, 108, 136)('Date'[Calendar Year], 'Internet Sales'[RowNumber], 'Date'[Week Of Year]) #Records=19206564 ...

Note: 19,206,564 = 60,398 (number of rows in Internet Sales) * 53 (number of distinct Week of Year values) * 6 (number of rows in the resultset)

Needless to say, none of this happens with IF ISCROSSFILTERED when the product filter is not applied. There are fewer VertiPaq SE Queries, and the largest Spool_Iterator operation in the physical plan is for 6 rows (the number of rows in the resultset).

SSAS Tabular SP1: Change in how measures are serialized can cause problems

SP1 changed the way measures are stored in the bim file for tabular models with compatibility level of SP1 (1103).  Instead of having a single command with all the measures, it now creates a command per measure with annotations. This can cause issues with old versions of BISM Normalizer.  This is fixed in Version 1.3.7, which is tested with SP1.
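
As a simplified sketch (not the exact schema), the change in the .bim file looks something like this:

<!-- Pre-SP1 (1100): a single command containing all the measures -->
<Command>
  <Text>CREATE MEASURE 'Date'[Measure1]=...; CREATE MEASURE 'Date'[Measure2]=...;</Text>
</Command>

<!-- SP1 (1103): one command per measure, with annotations -->
<Command>
  <Text>CREATE MEASURE 'Date'[Measure1]=...;</Text>
  <Annotations>...</Annotations>
</Command>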

This can also cause problems for other tools that save measures back to the bim file – like the DAX Editor.

Currency Conversion in SSAS 2012 (Multidimensional & Tabular)

Here is a copy of an article I wrote on the TechNet wiki.  It  merges the information in my previous posts (here and here) on currency conversion in Analysis Services. This new article is refreshed for SSAS 2012 and covers all the material in one place.

Types of Currency Conversion

As defined by the Business Intelligence Wizard in Multidimensional, there are 3 types of currency conversion.

  • One-to-many.  In this case we store the transactions (for example in the data warehouse) in a single base currency.  Even if they originally took place in different countries, we can convert to a single currency in the ETL.  The cube or tabular model will then convert to the reporting currencies.
  • Many-to-many.  Here we store the transactions in the original, different currencies.  The cube or tabular model now has (roughly) twice as many calculations that need to be performed.  First convert to a single “pivot” currency, and then to the reporting currencies.
  • Many-to-one.  Again (like many-to-many), we store the transactions in the original, different currencies.  However, in this case there is no business requirement to report in different reporting currencies.  We just choose not to utilize the ETL to convert to a base currency.  Instead, the cube or tabular model is used for this purpose.

The most common of these, in a data warehousing environment at least, is one-to-many.

Business Requirement

The primary business requirement is that users can view all transactions in any of the supported reporting currencies.  As a secondary requirement, the user may also be interested in what country/currency the original transaction took place.

Even the secondary requirement can be delivered by one-to-many.  We can achieve this by storing the lineage of the original currency in the model and report on it as required.

This article will therefore focus on one-to-many.  Apart from being the most common form of currency conversion, the techniques used for the other 2 types (many-to-many and many-to-one) are basically variations of the techniques used by one-to-many.

Extra Columns in Currency Dimension Table

As a prerequisite to the examples in this article, the following 2 columns are added to the DimCurrency table in AdventureWorksDW.  LCID is used for multidimensional and FormatString is for tabular.

LCID

The LCIDs are available here: http://www.microsoft.com/globaldev/reference/lcid-all.mspx  

Tip: an easy way to get the format strings is, in the tabular-model designer in SSDT (using a dummy temporary project), to select Format=Currency and set the currency symbol in the Properties window.

[Screenshot: Currency Properties window]

This internally sets the FormatString property to the custom string.  You can then right-click the .bim file, select View Code, and search for instances of the format string.

<Format Format="Currency" Accuracy="2" xmlns="">
 <Currency LCID="2057" DisplayName="£ English (United Kingdom)" Symbol="£" PositivePattern="0" NegativePattern="1" />
</Format>
 ...
<ddl300_300:FormatString>"£"#,0.00;-"£"#,0.00;"£"#,0.00</ddl300_300:FormatString>

Multidimensional

Consider the following DSV on top of AdventureWorksDW, which includes DimTime, FactInternetSales, FactCurrencyRate and DimCurrency.  It should look something like this.

[Screenshot: DSV]

We are only concerned with one of the 3 relationships between Internet Sales Facts and Date (the one based on OrderDateKey).

Consider the following statements.

  • For a particular date, there will be many exchange rates; one for each of the supported reporting currencies
  • For a particular currency, there will be many exchange rates; one for each of the dates in the date range for which we are loading data.  Exchange rates change over time.
  • Therefore, the correct way to model this is with a many-to-many dimension between Internet Sales and Currency.  This will be shown below when we refer to the Dimension Usage tab.

The only measures in the (simplified) cube are Sales Amount (from Fact Internet Sales) and End Of Day Rate (from Currency Rate).  End Of Day Rate has an AggregateFunction property value of “LastNonEmpty”.  LastNonEmpty is a semi-additive aggregation function.  This means it will sum for all dimensions except Date.  For the Date dimension, it will take the last non-empty child.  For example, the exchange rate we would want to see for a week member in a report would not be the sum of the exchange rate on Monday + exchange rate on Tuesday + exchange rate on Wednesday, …  Instead, we just want the most recent exchange rate (maybe exchange rate on Saturday).

[Screenshot: LastNonEmpty aggregation function]

The Date attribute in the Date dimension is the key attribute.  It has a KeyColumns property of DateKey (which is the surrogate key of the DimTime table) and a NameColumn of SimpleDate (what the users see at Date level).

If we use the Dimension Wizard to create the Date dimension, it makes it easy to flag with Type = Date, etc.  This is one of the few cases where having these properties set correctly actually affects cube behavior.  For example, semi-additive measures and some MDX functions like YTD won’t work without Type = Date.

[Screenshot: Date attribute properties]

Here are the mappings in the Dimension Usage tab.  They are pretty straightforward.  There is a many-to-many relationship between Currency and Fact Internet Sales.

[Screenshot: Dimension Usage relationships]

Here is a screenshot of the Currency dimension.  The main points about the Currency dimension are as follows.

  • The KeyColumns property of the Currency attribute is set to CurrencyKey (which is the surrogate key for the DimCurrency table in AdventureWorksDW).
  • The NameColumn property of the Currency attribute is set to CurrencyName (which is what we want the users to see).
  • The Type property of the dimension (not attribute) is set to Currency.  The only purpose of this (unlike Type=Time, which does affect cube behavior) is to inform client tools in case they want to display currency dimensions differently to regular dimensions.
  • The Type property of the Currency attribute is set to CurrencyName.  Again, this is just to inform client tools.
  • The IsAggregatable property of the Currency attribute is set to False.  This removes the All level for the attribute. We would after all not want to sum the values of different currencies.  500 pounds sterling plus 100,000 cruzeiros does not make any sense.
  • Set the DefaultMember property of the Currency attribute to whatever the Sales Amount values are stored in.  In the case of AdventureWorksDW, it is US Dollars.
  • The ValueColumn property is set to LCID, which is an extra column on the Currency table.  See section below on currency formats for an explanation of this.

[Screenshot: Currency dimension]

Next we need to perform the conversion calculation itself.  There are various options here, some of which are better than others.

  • Measure expressions
  • MDX script cell assignments (which is how the SSAS Multidimensional BI Wizard does it)
  • MDX script calculated members
  • Performing the conversion in the MDX client queries

Measure expressions are arguably the best way to perform currency conversion in Multidimensional.  They tend to perform better than the other options.  Hence this article will use the measure expressions option to perform the conversion calculation.

Measure Expressions

The Sales Amount measure has the following measure expression: “[Sales Amount] / [End Of Day Rate]”.  The facts are stored in US Dollars and the base currency is US Dollars.  The measure expression is a division rather than a multiplication because this is the way the exchange rates are stored in AdventureWorks.  For example, the exchange rate for GBP is roughly 1.5 in the FactCurrencyRate table.  Therefore, to convert $15 from the pivot currency to GBP, 15 / 1.5 gives us £10.  Doing a multiplication would result in £22.50 (obviously wrong).

Note: measure expressions are performed at leaf level.

[Screenshot: measure expressions]

Back in the Dimension Usage tab, it is worth setting the DirectSlice property of the many-to-many relationship.  Setting it to “([Currency].[Currency].&[100])” means that, when querying Sales Amounts by the base/pivot currency (US Dollars), SSAS has the option to just return the value as it appears in the fact table – i.e. without applying the measure expression.  After all, there is no need to convert US Dollars into US Dollars!  This gives SSAS the option of using DirectSlice as a performance optimization.

We need a tuple with the default member for every enabled attribute in the outer many-to-many dimension.  Incidentally, this is what the Root(<dimension_expression>) function would return, but we can’t use any MDX functions in the DirectSlice property (hence the Functions pane is disabled), so we have to explicitly list each default member in a tuple.

Note: SSAS may choose not to use the DirectSlice optimization (depending on internal algorithms beyond the scope of this article).  Therefore, we still need to ensure the FactCurrencyRate table has an exchange rate of 1 for converting US Dollars into US Dollars – and for every single day for which we have data.  Otherwise, Analysis Services would be looking for an End Of Day Rate to divide by, find null and return null.

[Screenshot: DirectSlice property]

Currency Format

Lastly, we need an MDX script cell assignment (in the Calculations tab) to set the Locale ID for currencies.  This will avoid displaying 500 pounds sterling as “$500”.  For a complete list of the Locale IDs, see http://www.microsoft.com/globaldev/reference/lcid-all.mspx

Language([Currency].[Currency].[United Kingdom Pound]) = 2057;
Language([Currency].[Currency].[Brazilian Real]) = 1046;
Language([Currency].[Currency].[US Dollar]) = 1033;

The above assignments are for illustration purposes only.  The main problem with this approach is maintainability.  If we bring in new currencies in the future, we need to modify the MDX script.  The ideal would be to store them in the DimCurrency table, expose them using the ValueColumn property of the Currency attribute, and use that for the assignment.

Language(([Currency].[Currency].Members, [Measures].[Internet Sales Amount])) =
[Currency].[Currency].CurrentMember.MemberValue;

Note: the ValueColumn property of the dimension can be used as a form of “property bag” to store additional information for each dimension member.

Note: the MDX MemberValue (or Member_Value) function will …

  • If no ValueColumn property is specified, it will return the value from NameColumn
  • If neither ValueColumn nor NameColumn is specified, it will return the value from KeyColumns

Now if we run a query we should get back correct numbers with the conversion calculation applied at leaf level, and displayed using foreign currency formats.

[Screenshot: query results with currency formats]

Note: the values in this screenshot are not the actual numbers that ship with AdventureWorks.

Tabular

The tabular modeling experience is much simpler than multidimensional.  There is nowhere to define the many-to-many relationship, IsAggregatable=False, LastNonEmpty aggregation function, etc.  Tabular is basically just a bunch of tables and relationships, with fewer options to set properties.  Responsibility for dealing with these features is handed over to DAX.

[Screenshot: tabular model designer]

Here is a DAX query that handles (almost) all the features.  Again, instead of “decorating the metadata of the model” as we did in multidimensional, responsibility is offloaded to DAX.

DEFINE
MEASURE 'Internet Sales'[Internet Sales] =
    SUMX
    (
        'Currency Rate'
        , CALCULATE( Sum('Internet Sales'[Sales Amount]) / Sum('Currency Rate'[End of Day Rate]) )
    )
MEASURE 'Internet Sales'[Internet Sales FORMATTED] =
    FORMAT
    (
        [Internet Sales]
        , VALUES('Currency'[Format String])
    )
MEASURE 'Internet Sales'[Internet Sales ALL MEMBER DOLLARS] =
    IF
    (
        HASONEVALUE('Currency'[Currency Name])
        , [Internet Sales FORMATTED]
        , [Internet Sales FORMATTED]('Currency'[Currency Name] = "US Dollar")
    )
EVALUATE
ADDCOLUMNS
(
    FILTER
    (
        VALUES('Date'[Calendar Year])
        , [Internet Sales] > 0
    )
    ,"US Dollar", [Internet Sales ALL MEMBER DOLLARS]( 'Currency'[Currency Name] = "US Dollar" )
    ,"United Kingdom Pound", [Internet Sales ALL MEMBER DOLLARS]( 'Currency'[Currency Name] = "United Kingdom Pound" )
    ,"Brazilian Real", [Internet Sales ALL MEMBER DOLLARS]( 'Currency'[Currency Name] = "Brazilian Real" )
    ,"Saudi Riyal", [Internet Sales ALL MEMBER DOLLARS]( 'Currency'[Currency Name] = "Saudi Riyal" )
    ,"ALL MEMBER", [Internet Sales ALL MEMBER DOLLARS]
)
ORDER BY 'Date'[Calendar Year]

Here are some points about this query.

  • All the measures defined could be built into the tabular model for reusability.
  • It is a best practice in DAX to break out calculations into component measures for readability and supportability.  Hence the only measure that would be made visible to the users is “Internet Sales ALL MEMBER DOLLARS” (obviously with a more appropriate name).
  • There are various instances where the shorthand notation for the CALCULATE function is used.  The CALCULATE function allows us to override the filter context.  The following syntax forms are equivalent (the 2nd is the shorthand notation).
    • CALCULATE([Existing Measure],<filter1>,<filter2>…)
    • [Existing Measure](<filter1>,<filter2>…)
  • SUMX is used to create a row context for the rows from the ‘Currency Rate’ table inherited from filter context.  SUMX will iterate each row and apply the conversion calculation at the leaf level.
  • I could not find a way to use LCID in tabular, but we can use the FORMAT function and pass in the FormatString value from the currency table instead.
  • In multidimensional, we can use the following properties to hide the currency all member and choose the default currency of US dollars (summing multiple currencies for the all member obviously doesn’t make sense).
    • IsAggregatable=False removes the all member
    • DefaultMember set to the desired default currency

Given that tabular doesn’t have anything equivalent to these features at this time, we need to check if we are at the all level of the currency dimension using HASONEVALUE('Currency'[Currency Name]).  This is equivalent to setting the all member to US dollars.  This also covers the case where a user selects a multi-value filter on currency (for example US dollars and Japanese yen).

Currency Conversion: Formatting Symbols in Tabular

This article on currency conversion was written on my old blog in August 2006.  It still applies today for Multidimensional in 2012.  Update August 17 2013: see revised version for SSAS 2012.  One of the items covered is how to format the currencies so that each displays its own symbol, rather than all using one currency’s symbol.

I created a couple of extra columns in my currency dimension table.  My examples use the LCID column for multidimensional and FormatString for tabular.

You can get the LCIDs from here.  An easy way to get the format strings is to select Format=Currency and set the currency symbol in the properties window of SSDT.  This internally sets the FormatString property to the custom string.

Multidimensional

Set the currency attribute’s ValueColumn property to the LCID column.

And then use the MemberValue function in the MDX script to set the Language property.

Language(([Currency].[Currency].Members, [Measures].[Internet Sales Amount])) = [Currency].[Currency].CurrentMember.MemberValue;

Tabular

I couldn’t find a way to use LCID in tabular, but we can use the FORMAT function and pass in the FormatString value from the currency table.

DEFINE
MEASURE 'Internet Sales'[Internet Sales] =
    SUMX
    (
        'Currency Rate'
        , CALCULATE( Sum('Internet Sales'[Sales Amount]) / Sum('Currency Rate'[End of Day Rate]) )
    )
MEASURE 'Internet Sales'[Internet Sales FORMATTED] =
    FORMAT
    (
        [Internet Sales]
        , VALUES('Currency'[Format String])
    )
MEASURE 'Internet Sales'[Internet Sales ALL MEMBER DOLLARS] =
    IF
    (
        HASONEVALUE('Currency'[Currency Name])
        , [Internet Sales FORMATTED]
        , [Internet Sales FORMATTED]('Currency'[Currency Name] = "US Dollar")
    )
EVALUATE
ADDCOLUMNS
(
    FILTER
    (
        VALUES('Date'[Calendar Year])
        , [Internet Sales] > 0
    )
    ,"US Dollar", [Internet Sales ALL MEMBER DOLLARS]( 'Currency'[Currency Name] = "US Dollar" )
    ,"United Kingdom Pound", [Internet Sales ALL MEMBER DOLLARS]( 'Currency'[Currency Name] = "United Kingdom Pound" )
    ,"Brazilian Real", [Internet Sales ALL MEMBER DOLLARS]( 'Currency'[Currency Name] = "Brazilian Real" )
    ,"Saudi Riyal", [Internet Sales ALL MEMBER DOLLARS]( 'Currency'[Currency Name] = "Saudi Riyal" )
    ,"ALL MEMBER", [Internet Sales ALL MEMBER DOLLARS]
)
ORDER BY 'Date'[Calendar Year]

In multidimensional, we can use the following properties to hide the all member and choose the default currency – summing multiple currencies obviously doesn’t make sense.

  • IsAggregatable=False removes the all member
  • DefaultMember set to the desired default currency

Given that tabular doesn’t have anything equivalent to these features at this time, we need to check if we are at the all level of the currency dimension using HASONEVALUE('Currency'[Currency Name]).  The above example sets the all member to US Dollars.  This also covers some clever clogs selecting a multi-value filter on currency.