
Tips & Tricks #14: How Attribute Roles Work in MicroStrategy


Attribute Roles

The support of attribute roles is among the most requested features in the schema/modeling area. It is the result of a common modeling practice where two or more attributes are defined using the same lookup table and column. When these attributes are placed on the same report, the result may return no data. To address this issue, a unique table name alias is required.
This Tips & Tricks post demonstrates a business scenario where two possible solutions are available in MicroStrategy:
  1. Explicit logical table aliasing can be used to generate correct report Structured Query Language (SQL).
  2. The MicroStrategy SQL Generation Engine automatically detects tables where the same column is used to create two different attributes.

Business Case Scenario

The Airport Example

A user defines two attributes that have the same definition but play different roles in the business model. In this example, the attributes Origin Airport and Destination Airport are defined using the same lookup table and column (Airport_ID).

TN5800-072-0108A.gif

Both attributes share the same forms, or information about them (Description, Location, etc.). In the fact table, however, a separate column exists for each of their roles (Origin_Airport_ID and Destination_Airport_ID).

TN5800-072-0108B.gif

If the user places both attributes on a report to obtain the number of flights that originated from ‘MIA’ and arrived at ‘LGA’, an empty result set is returned. The SQL statement tries to obtain the description of an airport that is both ‘MIA’ and ‘LGA’ at the same time (Airport_ID = “MIA” AND Airport_ID = “LGA”), thus generating an empty result set as shown below:
Example

 

NOTE: The SQL for this template is similar to the following:

TN5800-072-0108D.gif
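A minimal sketch of the kind of SQL the Engine produces in this situation; the fact table and descriptive column names (FACT_FLIGHTS, Flight_ID, Airport_Desc) are illustrative assumptions, while LU_AIRPORT, Airport_ID, Origin_Airport_ID, and Destination_Airport_ID come from the example above:

SELECT   a12.Airport_Desc,
         COUNT(a11.Flight_ID)  AS flights
FROM     FACT_FLIGHTS  a11
JOIN     LU_AIRPORT    a12
  ON     a11.Origin_Airport_ID      = a12.Airport_ID
  AND    a11.Destination_Airport_ID = a12.Airport_ID   -- both roles forced through one lookup
WHERE    a12.Airport_ID = 'MIA'
  AND    a12.Airport_ID = 'LGA'                        -- contradictory condition: no row can satisfy both
GROUP BY a12.Airport_Desc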

Returning no data as explained previously:

 

SOLUTIONS

1. Explicit Table Alias Definition

The recommended way to model attribute roles in MicroStrategy is to use explicit table aliases. This approach gives project architects direct control over which attributes are treated as different roles, as well as over their mappings to logical tables. This level of control can help ensure consistent Engine behavior.

  1. In the MicroStrategy Desktop Schema Objects/Tables folder, right-click the table and select ‘Create Table Alias’.

     

  2. A logical table alias is created for LU_AIRPORT as LU_AIRPORT (1). The alias name can be modified. In this example, the alias table is named ‘LU_AIRPORT_ALIAS’.
The two attributes, Origin Airport and Destination Airport, can each have its own lookup table.
In this example, the Destination Airport attribute uses LU_AIRPORT_ALIAS as the primary lookup table. Be sure to uncheck LU_AIRPORT as a source table so no join will be made.
Form Definition for the ID

Form Definition for the DESC

TN5800-072-0108J.gif

The Origin Airport attribute’s definition remains the same, using LU_AIRPORT as the source table for both of its forms. LU_AIRPORT_ALIAS is not checked as a source table.
Form Definition for the ID

TN5800-072-0108K.gif

Form Definition for the DESC

TN5800-072-0108L.gif

Report SQL

TN5800-072-0108M.gif
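As a rough sketch of the report SQL once the alias is in place (again using the illustrative FACT_FLIGHTS, Flight_ID, and Airport_Desc names), the physical table LU_AIRPORT is now joined twice, once for each role, so each role can be filtered independently:

SELECT   a12.Airport_Desc       AS origin_airport,
         a13.Airport_Desc       AS destination_airport,
         COUNT(a11.Flight_ID)   AS flights
FROM     FACT_FLIGHTS  a11
JOIN     LU_AIRPORT    a12                              -- lookup for Origin Airport
  ON     a11.Origin_Airport_ID = a12.Airport_ID
JOIN     LU_AIRPORT    a13                              -- lookup for Destination Airport (LU_AIRPORT_ALIAS)
  ON     a11.Destination_Airport_ID = a13.Airport_ID
WHERE    a12.Airport_ID = 'MIA'
  AND    a13.Airport_ID = 'LGA'
GROUP BY a12.Airport_Desc,
         a13.Airport_Desc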

2. Automatic Attribute Role Recognition

A VLDB property called “Engine Attribute Role Options” is available at the database instance level, in the Query Optimizations folder. The option is disabled by default. When enabled, the engine detects columns that support more than one attribute in the same table and automatically creates a separate, in-memory alias for each attribute. These automatic aliases are not saved to the metadata and cannot be edited in MicroStrategy Desktop. Thus, while the Engine Attribute Role option makes it more convenient to use attribute roles, it also means giving up some control over the schema representation that is ultimately used for SQL generation. As a result, some logical schema designs may function incorrectly when automatic role recognition is used.
Example

 

REPORT SQL

TN5800-072-0108G.gif

Enabling/Disabling ‘Automatic Attribute Role Recognition’

Attributes are considered candidates for the unique table name alias feature if their attribute forms are defined off the same column of a lookup table. Because of the specific nature of the modeling design described above, and because users may intentionally want to create two attributes with the same definition, automatic attribute role recognition is an optional setting.

The Very Large Database (VLDB) ‘Engine Attribute Role Options’ property can be found under Project Configuration/VLDB Properties/Query Optimizations folder in the VLDB Settings.

IMPORTANT NOTES:

1. Attribute roles are intended for lookup tables. Since automatic role recognition effectively splits a logical table into several virtual logical tables, the option should not be applied when a fact table supports more than one attribute on the same column. In that case, some levels of analysis will not be available (that is, reports may return the “Fact does not exist at a level that can support the requested analysis” error). In general there should be no need to have attribute roles on fact tables. However, if it is necessary in a specific scenario, explicit table aliases should be used.
2. The automatic detection option will not work if the attributes in the roles situation are in the same hierarchy, meaning that a child attribute is shared. In the example above, the two airport attributes do not have a common child attribute.
———————————————————————————
Source: Jaime Perez, TN6197: How do Attribute Roles Work in MicroStrategy 8.x?, MicroStrategy Knowledgebase, 08-30-2001.

Using the R Integration functionality, how to perform Text Mining on a MicroStrategy report and display the result

Readers:

Here is another great post in the MicroStrategy Community from Jaime Perez and his team. A lot of work went into the preparation of this post, and it shows some great ways to use the “R” integration with MicroStrategy.

Contributors from Jaime’s team include:

ssonobe

Lili

Joanne A

Ohingst

Enjoy!

Michael

Text Mining Using R Integration in MicroStrategy

Users may wish to perform text mining using R on the result of any arbitrary MicroStrategy report and display the result. One of the problems that hinders users from achieving this is that the number of output elements does not always match the number of input rows. For example, a report may have three attributes named ‘Age groups’, ‘Reviewer’, and ‘Survey feedback’, and the report might display four rows of feedback as follows:

01.jpg

If the above report result is sent to R as an input and the R script breaks each sentence of feedback down into term frequencies grouped by age group, the output will have 18 rows.

02.jpg

Since the number of output elements is greater than the number of the MicroStrategy report rows, the report execution will fail. Using the objects in the Tutorial project, this technical note (TN207734) describes one way to display the result of text mining on a MicroStrategy report, using the R integration functionality.

PREMISE:
– Following the instructions in TN43665, the MicroStrategy R Integration Pack has already been installed on the Intelligence Server.

The Steps Involved

STEP 1: Decide on the input values that need to be sent to R via R metrics
The first step is to decide which data you wish to perform text mining on. In this technical note, the sample report will let users select one year element and an arbitrary number of category elements, and specify the Revenue amount, in prompts. The report will then display the value of the normalized TF-IDF (term frequency and inverse document frequency) for every word appearing in the qualified Item attribute elements, grouped by the Category elements.

A user may select the following values for each prompt and the report may look as shown below.

  • Year: 2012
  • Category: Books, Movies, and Music
  • Revenue: greater than $15,000

03.jpg

Eventually, the user may want to see the normalized TF-IDF for every word showing up in the Item attribute elements as shown below:

04.jpg

Since the final output displays each word from the Item attribute, grouped by the Category elements, the necessary input values to R are as follows:

  • The elements of the Category attribute.
  • The elements of the Item attribute.

 

STEP 2: Create metrics to pass the input values to R

The input values to R from MicroStrategy must be passed via metrics. Hence, on top of the current grid objects, additional metrics need to be created. For this sample report, since the inputs are the elements of two attributes, create two metrics with the following definitions so that the elements are displayed as metrics.

Max(Category@DESC) {~}

Max(Item@DESC) {~}

05.jpg

 

STEP 3: R script – Phase 1: Define input and output variables and write R script to obtain what you wish to display in a MicroStrategy report

In the R script, define (1) a variable that receives the inputs from MicroStrategy and (2) a variable that will be sent back to MicroStrategy as the output, as depicted below. Since the number of output elements must match the number of input elements, the output is defined as “output = mstrInput2” to avoid errors. In other words, this script executes R functions to obtain the data that you wish to display in a MicroStrategy report, but its output is the same as its input. More details about how to display the result in a MicroStrategy report follow later in this technical note.

06.jpg

 

In this technical note, we assume that after manipulating the input values, the variable named ‘norm.TF.IDF’ in the R script holds the TF-IDF values for each term.

07.jpg

 

STEP 4: Create tables in the data warehouse to store the value of your R output

In order to display the values of ‘norm.TF.IDF’ in a MicroStrategy report, tables to hold the result need to be created in the data warehouse. In other words, an additional report will later be created in MicroStrategy, and it will extract the data from the database tables created in this section.

In this specific example, the variable ‘norm.TF.IDF’ has the elements of words (terms) and categories and the values of the normalized TF-IDF. Considering the types of data, the first two should be displayed as attributes and the values of the normalized TF-IDF should be presented in a metric. Hence, two lookup tables to hold the term and category elements and one fact table need to be created to store all the data. On top of these tables, one relationship table is also required since the relationship between words and categories is many-to-many.
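As a sketch of what these four tables might look like, using SQL Server syntax to match the PRE/POST statements later in this note; the column names and data types here are assumptions for illustration and should be adjusted to match the data frames produced by the R script:

CREATE TABLE tm_Category (          -- lookup: category elements
    Category_ID    INT          NOT NULL,
    Category_DESC  VARCHAR(255)
)

CREATE TABLE tm_Word (              -- lookup: word (term) elements
    Word_ID    INT          NOT NULL,
    Word_DESC  VARCHAR(255)
)

CREATE TABLE tm_Word_Cat (          -- many-to-many relationship between words and categories
    Word_ID      INT NOT NULL,
    Category_ID  INT NOT NULL
)

CREATE TABLE tm_Fact (              -- fact: the normalized TF-IDF value (the 'weight' fact)
    Word_ID      INT NOT NULL,
    Category_ID  INT NOT NULL,
    weight       FLOAT
)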

 

STEP 5: R script – Phase 2: Populate the tables in your R script

As previously mentioned, the variable named ‘norm.TF.IDF’ contains the values that a user wishes to display in a MicroStrategy report, as shown below.

07.jpg

 

In this R script, four more variables are defined from ‘norm.TF.IDF’, each of which contains a subset of the data that will be inserted into the database tables.

 

tm_Category holds the unique elements of the Category.

10.jpg

 

tm_Word holds the unique elements of the Word (Term).

11.jpg

 

tm_Word_Cat stores the values of the many-to-many relationship.

12.jpg

 

tm_Fact contains the values of TF-IDF for every Word-Category combination.

13.jpg

 

In the R script, populate the database tables with the above four subsets of ‘norm.TF.IDF’.

# Load RODBC
library(RODBC)

# Open an ODBC channel using the DSN that points to the data warehouse
ch <- odbcConnect("DSN_name")

# Delete all the rows of the four tables
sqlClear(ch, "tm_Category", errors = TRUE)
sqlClear(ch, "tm_Word",     errors = TRUE)
sqlClear(ch, "tm_Word_Cat", errors = TRUE)
sqlClear(ch, "tm_Fact",     errors = TRUE)

# Append each data frame to its table (fast = TRUE uses a parameterized INSERT)
sqlSave(ch, tm_Category, tablename = "tm_Category", rownames = FALSE, append = TRUE, fast = TRUE)
sqlSave(ch, tm_Word,     tablename = "tm_Word",     rownames = FALSE, append = TRUE, fast = TRUE)
sqlSave(ch, tm_Word_Cat, tablename = "tm_Word_Cat", rownames = FALSE, append = TRUE, fast = TRUE)
sqlSave(ch, tm_Fact,     tablename = "tm_Fact",     rownames = FALSE, append = TRUE, fast = TRUE)

# Close the channel
odbcClose(ch)

 

STEP 6: Create and add an R metric, which implements the R script

The R script is done. It is time to run this R script from MicroStrategy by creating an R metric. In the deployR interface, open the R script and define the input and output that you specified in Step 3, as follows. Since the elements of the Category and Item attributes are characters, choose “String” as their data type. Likewise, since the output is the same as mstrInput2, its data type is also set to String.

14.jpg

 

Create a stand-alone metric and paste the metric definition generated by the deployR utility. Then, replace the last parameters with the Category and Item metrics that you created in Step 2.

15.jpg

 

Add the R metric to the report.

15.2.png

 

The report and R will perform the following actions after the R metric is added:
i. The report lets users select the prompt answers
ii. MicroStrategy sends the Category and Item elements to R via the R metric
iii. R performs text mining to calculate the TF-IDF based on the inputs
iv. R generates subsets of the TF-IDF
v. R truncates the database tables and populates them with the subsets of the TF-IDF
vi. R sends the output (which is actually the input) back to MicroStrategy
vii. The report displays the values of all objects, including the R metric

 

STEP 7: Create MicroStrategy objects to display the data

From the tables created in Step 4, create the Word and Category attributes and the fact named weight. The object relationship is as depicted below.

08.jpg

09.jpg

 

Now, create a new report with these objects. This report will obtain and display the data from the database tables.

16.jpg

 

STEP 8: Utilize the report level VLDB properties to manipulate the order of the report execution jobs

There are currently two reports; let them be named R1 and R2 as described below:

  • R1: A report which prompts users to specify the report requirements and runs the R script that performs the text mining
  • R2: A report which obtains the result of the text mining from the database and displays it

 

If the two reports are placed in a document as datasets as shown below, there is one problem: R2 may start its execution before R1 populates the database tables with the result of text mining.

17.jpg

 

In order to force R2 to execute after R1 has completed, the Report Pre/Post Statement VLDB properties, along with an additional database table, may be used. The table tm_Flag contains the value 0 or 1; R2 is triggered when R1 sets the value of completeFlag to 1. The detailed steps are described below with scripts for SQL Server.

 

i. Create another table in the database, which holds the value of 1 or 0

CREATE TABLE tm_Flag
(
   completeFlag int
)


INSERT INTO tm_Flag VALUES(0)

 

ii. In the VLDB property ‘Report Post Statement 1’ of the R1 report, define a Transact-SQL statement that changes the value of completeFlag to 1.

DECLARE @query as nvarchar(100)
SET @query = 'UPDATE tm_Flag SET completeFlag = 1'
EXEC sp_executesql @query

 

iii. Define the VLDB property ‘Report Pre Statement 1’ in R2 so that it checks the value of completeFlag every second and loops until it changes to 1. After the loop, it reverts the value of completeFlag back to 0. After this Report Pre Statement, R2 obtains the data from the database tables that R1 has populated.

DECLARE @intFlag INT
SET @intFlag = (select max(completeFlag) from tm_Flag)

WHILE(@intFlag = 0)
BEGIN
	WAITFOR DELAY '00:00:01'
	SET @intFlag = (select max(completeFlag) from tm_Flag)
END

DECLARE @query as nvarchar(100)
SET @query = 'UPDATE tm_Flag SET completeFlag = 0'
EXEC sp_executesql @query

 

Activity Diagram

18_revised.png

 

 

Overall execution flow

  1. Answer prompts

19.png

 

2. Only the text mining result is displayed to users

20.png

 

Third Party Software Installation:

WARNING: The third-party product(s) discussed in this technical note is manufactured by vendors independent of MicroStrategy. MicroStrategy makes no warranty, express, implied or otherwise, regarding this product, including its performance or reliability.

 

Tips & Tricks #12: How to Troubleshoot Cross Joins in SQL Reports for the SQL Generation Engine 9.x

MicroStrategy Community Banner

Readers:

In my last blog post, I blogged about the new MicroStrategy Community. Jaime Perez, VP of Worldwide Customer Services, and his crew have come up with a better way for us to engage with MicroStrategy as well as his team.

Speaking of Jaime, last June, he posted this great tip on the MicroStrategy Knowledgebase site as a TechNote. I am reblogging it since it is one of the most frequent questions I get asked and I find it an extremely useful Tip & Trick. Also, this will give you an idea of the great stuff being posted in the MicroStrategy Community.

Best Regards,

Michael

MicroStrategy and Cross Joins

In some scenarios, one may encounter cross joins in the SQL View of a standard SQL report in MicroStrategy.  Cross joins appear when two tables do not have any common key attributes on which they can inner join.  As a result, the two tables essentially combine into one table that has all the data from both tables, but this results in poorer performance, commonly seen as increased execution times.  Sometimes these execution times, and the performance hit, can be very severe.  Therefore, it is important to understand some simple steps that can be performed to resolve a cross join, as well as some steps to understand why it may be appearing in the SQL View of the report.

One common occurrence of a cross join is when a report contains at least two unrelated attributes in the grid, and no metrics are present to relate the unrelated attributes via a fact table.  Such an occurrence can be resolved in a few ways:

  1. Create a relationship filter, set the output level as the unrelated attributes (or the entire report level), and then relate these by a Logical Table object
  2. Create a relationship filter, set the output level as the unrelated attributes (or the entire report level), and then relate these by a Fact object
  3. Add a metric to the report that uses a fact from a table in which both attributes can inner join to

This provides a pathway from the fact table to the lookup tables from which the unrelated attributes are sourced.  The result is an inner join between the fact table and the lookup tables, which resolves the cross join between the two unrelated lookup tables.
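As a schematic illustration of the difference (all table and column names below are made up for the example), a report with two unrelated attributes and no metric produces a cross join, while adding a metric sourced from a fact table that joins to both lookups produces inner joins instead:

-- Unrelated attributes, no metric: no common key, so the tables cross join
SELECT   a11.Region_Desc,
         a12.Category_Desc
FROM     LU_REGION     a11
CROSS JOIN LU_CATEGORY a12

-- With a metric from a fact table that both attributes can inner join to
SELECT   a12.Region_Desc,
         a13.Category_Desc,
         SUM(a11.Revenue)  AS Revenue
FROM     FACT_SALES    a11
JOIN     LU_REGION     a12  ON a11.Region_ID   = a12.Region_ID
JOIN     LU_CATEGORY   a13  ON a11.Category_ID = a13.Category_ID
GROUP BY a12.Region_Desc,
         a13.Category_Desc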

Options 1 and 2 provide a means by which the report template can remain attribute-only, whereas Option 3 places a metric on the report.  Option 3 may not be desired if you do not want a metric on the report.  Keep in mind that other techniques can also be employed to keep the metric on the report but format it so that it is hidden from display.

More common scenarios involve cross joins between a fact table and a lookup table, and these are typically surprising to a developer.  Such situations can be a bit trickier to troubleshoot and resolve, but here are a few techniques that can be employed to try to resolve the issue:

  1. In SQL View look at where the cross join appears, and between which tables the cross join appears
  2. Open up those tables in the Table Editor by navigating to the Schema Objects\Tables folder, and double-clicking the tables
  3. Select the Logical View Tab of both tables to see all the logical objects mapped to the table
  4. Take note of which attributes have a key icon beside them
  5. These key attributes denote attributes at the lowest level of their hierarchy presently mapped to the table and/or attributes that are in their own hierarchy (meaning they have no parents or children)
  6. The SQL Engine will join 2 tables on common key attributes only, so if none of the key attributes on either table exist on both tables, then a cross join should appear

This means that just because a Region attribute exists on Table_A and a Region attribute exists on Table_B does not necessarily mean that the SQL Engine will join on Region.  If Region has its child attribute on the table, then that attribute should be the key as it is the lowest level attribute of its particular hierarchy mapped to the table.  If Region exists on both tables, and is also a key attribute on both tables, then an inner join should take place on Region.

This essentially means that one can find a cross join, investigate the tables in which it appears, and verify whether at least one common key attribute exists between the tables.  If not, that is the first path to investigate, because a cross join is the expected result in that scenario.

Video

You can find a detailed video on how this issue is reproduced and resolved in Tech Note 71019: Steps to reproduce and resolve.

Note

MicroStrategy Technical Support can assist with resolving cross joins in a specific report, however caution should be taken when resolving such issues.  In some scenarios, the cross join is resolved through modifications to the schema objects, which can have a ripple effect to all other reports in an environment.  For example, if a relationship is changed in the Region attribute to resolve a cross join in one report, this change will be reflected in all other reports that use Region, and potentially the hierarchy in which Region belongs.  As a result, the SQL View of one report will have the cross join resolved, but the SQL may have changed in other reports using Region or its related attributes.  This may or may not be desired.  MicroStrategy Technical Support may not be able to fully understand the impact of such a schema change to the data model, so before a change is made to the data model the consequences of such a change should be fully understood by the developer, and any changes made to the schema should be recorded.

References

[1] Jaime Perez, TN47356: How to troubleshoot cross joins in SQL Reports for the SQL Generation Engine 9.x, MicroStrategy Community, 06/24/2014, http://community.microstrategy.com/t5/Architect/TN47356-How-to-troubleshoot-cross-joins-in-SQL-Reports-for-the/ta-p/196989.

[2] MicroStrategy Knowledgebase, Tech Note 71019: Steps to reproduce and resolve.

How Data Blending Affects the Analytical Engine’s Behavior in MicroStrategy (Part 7)

MicroStrategy Analytics Platform

With the release of MicroStrategy Analytics Enterprise 9.4.1, the Analytical Engine logic has been enhanced with respect to joining data from multiple datasets in a Report Services Document. One of the features that is available with this release is the ability to use objects (e.g., attributes, metrics) from multiple datasets in a single grid in a document.

If an attribute on a grid has elements that can be obtained from multiple datasets used in the document, the elements displayed will be from the global lookup table. Additionally, if one or more of the datasets containing the attribute has missing attribute form data or has different attribute form values from the other datasets, the Analytical Engine will follow the rules noted below to compose the final output:

Rule 1:

If an attribute form has a null value, the Analytical Engine will use the non-null form value from the other datasets instead of the null form.

Rule 2:

If several datasets have different attribute form information for the attribute element, the Analytical Engine will use the attribute form from the biggest dataset.

Rule 3:

If several datasets have different attribute form information for the attribute element, and those datasets have the same number of rows, the Analytical Engine will use the first dataset in the document for the attribute form value (according to the dataset adding sequence).

NOTE: Users should note that the rules are applied for each individual attribute element in the result at the row level rather than at the dataset level.

Example 1:

Users may consider the following datasets – C01 is a dataset with Customer City, Customer and Order:

Part 7 - 1a

 

C02 is a dataset with Customer, Order and a profit metric. Users may note that the Customer attribute is missing the DESC form in the second dataset:

Part 7 - 2a

If a Report Services Document is built with both these datasets, and the attributes are placed on a grid, the following results may be seen. As noted in Rule 1, the Analytical Engine will display the non-Null values from C01 for the Customer attribute elements:

Part 7 - 3a

Example 2:

Now users may consider a different dataset as C02 – similar to the initial dataset, but here the Customer name (DESC) form contains values instead of NULLs. This time the values for the attributes are not consistent – see that Customer ID ‘1’ has different values for the DESC form for different Orders (1 & 6).

Customer Name   Customer ID   Order   Profit
Customer D      1             1       100
Customer B      2             2       200
Customer C      3             3       300
Xia D           4             4       400
Kris Du         5             5       500
Customer A      1             6       610
Customer E      2             7       720
Customer F      6             8       860
Customer G      7             9       970
Customer H      8             10      1080

If a report is built off this dataset, users will observe that the first attribute element value in the dataset is used as the DESC form for Orders 1 & 6, even if the value is different in subsequent rows (this is the same as previous Analytical Engine behavior).

Part 7 - 5

When these datasets are used in a grid in a Report Services Document, the Analytical Engine will choose the attribute element values from dataset C02 to display, because C02 is the bigger dataset. This is Rule 2, explained above.

Part 7 - 6a

Example 3:

Consider the following dataset:

Customer Name   Customer ID   Order   Profit
Customer D      1             1       100
Customer E      2             7       720
Xia D           4             4       400
Kris Du         5             5       500
Customer G      7             9       970

A report built off this dataset appears as follows:

Part 7 - 8a

After replacing the dataset ‘C02‘ from the previous example with the new dataset, the following results are seen. As noted in Rule 3, because both C01 and C02 have the same number of rows, the elements displayed for the Customer attribute will be filled from the first dataset added to the document – in this case C01. However, for the first row in the results, where there is no corresponding customer in dataset C01, Rule 1 is applied and, instead of a NULL value, the non-null Customer Name ‘Customer G’ is picked from C02. (Rules are applied at the individual element level.)

Part 7 - 9a

Next: Why are some metric values blank in documents using multiple datasets in MicroStrategy Analytics Enterprise 9.4.1

———————————————————————————-

References:

[1] MicroStrategy Knowledgebase, Engine behavior for grids on a Report Services Document or dashboard with multiple datasets where some attribute forms are missing or have different values across the datasets in MicroStrategy Analytics Enterprise 9.4.1 and newer releases, TN Key: 45463, 03/13/2014, https://resource.microstrategy.com/support/mainsearch.aspx.

NOTE: You may need to register to view MicroStrategy’s Knowledgebase.

Tips & Tricks #10: How to Remove the Underlining from Hyperlinks Created on an Attribute or Metric

This is another one of those little tricks that can save you from pulling your hair out trying to figure it out.

By default, hyperlinks in a Report Services document are underlined. The underlining cannot be removed by changing the format of the attribute or metric.

Tip 10-1

There is an Enhancement Request open with MicroStrategy on this, but MicroStrategy does currently provide a workaround.

Workaround to Remove the Underlining is to Clear the Default Link

Step 1: Right click the object that is linked and choose Edit Links.

Step 2: Click the Clear Default button while the link is highlighted.

Tip 10-2

 

Step 3: The underlining should now be removed as shown below.

Tip 10-3

CAVEAT: The only limitation to this workaround is that, in order to get to the link, the object now has to be right-clicked. The user cannot click the object and be taken directly to the new page (see screenshot below).

Tip 10-4

Tips & Tricks #9: How Do Changes on the Source Report (Dataset) Get Reflected in MicroStrategy Report Services 9.x Documents

In MicroStrategy Report Services Documents, document datasets and their original source reports (such as a grid report being used as a dataset) are not completely connected to each other. Depending on the change made on the source report, it can be reflected differently on the document. Basically, the changes can be divided into two types.

Type 1 – Formatting Changes

Formatting changes include, for example, changing autostyles, thresholds, and subtotals.

If a user chooses the option Add to Section without Formatting, the grid/graph showing on the document will not use the report’s stored formatting. Any formatting changes on the source report will not be reflected on the document.

Tip 9-1

If a user chooses the option Add to Section with Formatting, the grid will be added with the current format of the report. However, any formatting changes made to the source report AFTER the dataset has been included in the document will NOT be reflected on the document.

For example, if a user disables the subtotal (see screenshot below) for the original report after the report has been included as a dataset in a document, the document will still show the subtotals.

Tip 9-2

To force the document to recognize the report’s formatting changes, the user needs to delete the grid/graph from document section and add it again using the With Formatting option. By doing this, the latest formatting properties of the grid/graph on the source report are retrieved.

Type 2 – Adding/Removing Objects and Modifying Report Filters

The other type of change made on the report involves adding/removing objects and modifying report filters.

Unlike the formatting change, this type of change does carry over from the source report to the document datasets.

For example, if users add/remove/modify report filters (see screenshot below), the change will be reflected on the data when running the document.

Tip 9-3

If an object is removed from the source report (see screenshot below), the user can see the change in the document’s Dataset Objects window. After the user runs the document, the object will be removed from grid/graph on the document.

Tip 9-4

If an object is added to the report (see screenshot below), the change will show in the document’s dataset objects window. However, the object will not be automatically added to the grid/graph. The user has to manually add the object to the grid/graph or add the dataset to the section again to make it show up.

Tip 9-5

NOTE:   As of MicroStrategy 9.0, a new feature was introduced where a user can add a dataset report to a Report Services document as a shortcut by selecting the Add to Section As a shortcut option, as shown below:

Tip 9-6

If the grid/graph is added to the document using this option, then the document would be updated automatically if ANY type of change is made on the source report.

Tips & Tricks #7: How to Enable or Disable Match Case Sensitivity for Prompts using MicroStrategy Web

Readers:

Believe it or not, this is a question I get asked a lot by clients. Based on their requirements or data, many clients do not want the Match Case Sensitivity check box preference enabled to begin with when they are prompted. Fortunately for us, the solution to this is fairly simple as I show below.

If you have a MicroStrategy question you would like me to answer or blog about, please e-mail it to me (include screenshots if it will help) to michael@dataarchaeology.net. I will do my best to answer your questions.

Best Regards,

Michael

Match Case Sensitivity Preference

When you run certain prompt types in MicroStrategy Web, the interface offers a search option followed by a Match Case Sensitivity check box.

This functionality allows users to search for specific elements within the prompt’s available answers. To allow even more flexibility, the check box allows the user to restrict the search results only to those that match the case entered. In the screenshot below, you can see the check box is enabled (checked) and case sensitivity will be matched.

Tip7-0

However, you can control whether this option is enabled or disabled by default for all prompts by setting the following preference in MicroStrategy Web.

This option is under Preferences, Project Defaults, Prompts.

Don’t forget to press the Apply button for the changes to take place.

Tip7-1a

Now, in the screenshot below, you can see the check box is disabled (unchecked) and case sensitivity will not be matched.

Tip7-2

 

Tips & Tricks #6: How the Incremental Refresh Option Works in MicroStrategy 9.4.x

Starting with MicroStrategy 9.2.1, Intelligent Cubes have a feature that updates their information without republishing. It is referred to as Incremental Refresh. Each Incremental Refresh type option works in a different way.

Defining an Incremental Refresh Report

Prior to MicroStrategy 9.2.1, if the data in an Intelligent Cube needed to be updated, users had to re-publish the Intelligent Cube, either manually or using a schedule. This process causes all the data for the Intelligent Cube to be loaded from the data warehouse into Intelligence Server memory, so that the existing data for the Intelligent Cube is overwritten.

MicroStrategy 9.2.1 introduced a new feature known as Incremental Refresh Options, which allow Intelligent Cubes to be updated based on one or more attributes by setting up incremental refresh settings to update the Intelligent Cube with only new data. This can reduce the time and system resources necessary to update the Intelligent Cube periodically.

For example, if a user has an Intelligent Cube that contains weekly sales data, the user may want this Intelligent Cube to be updated at the end of every week with the sales data for that week. By setting up incremental refresh settings, he can make it so that only data for one week is added to the Intelligent Cube, without affecting the existing data and without having to reload all existing data.

Users can select two types of objects for the incremental fetch: a report or a filter.

  • Filter: The data returned by a filter is compared to the data that is already in the cube. By default, the filter defined for the Intelligent Cube is used as the filter for the incremental refresh.
  • Report: The results of a report are used to populate the Intelligent Cube. By default, the report template used is the same as the Intelligent Cube’s template.

In order to set up an incremental refresh report, the user should first right-click on the Intelligent Cube and select Define Incremental Refresh Report:

This will bring up the Incremental Refresh Options editor:

Here, the user can define one of the following Refresh type options:

  • Update:  If new data is available, it is fetched and added to the Intelligent Cube, and if the data returned is already in the Intelligent Cube, it is updated where applicable.
  • Insert:  If new data is available, it is fetched and added to the Intelligent Cube. Data that was already in the Intelligent Cube is not altered.
  • Delete: The data that meets the filter or report’s definition is deleted from the cube. For example, if the Intelligent Cube contains data for 2008, 2009 and 2010, and the filter or report returns data for 2009, all the data for 2009 is deleted from the cube.
  • Update only: If the data available is already in the Intelligent Cube, it is updated where applicable. No new data is added to the Intelligent Cube.

The type of object used for the incremental fetch can be selected in the Advanced tab:

Users simply have to run the incremental fetch report, and this will automatically refresh the data in the Intelligent Cube.

Incremental Refresh Options Examples

In this example, the following database table is used. This is a transaction table for item, status, quantity sold (qty_sold) and transaction number.

Cube definition:

Report Objects:

– Item

– Status

– Transaction Number

– Quantity

Filter: Transaction Number greater than or equal to 100
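As a rough sketch of the warehouse SQL behind this cube definition (and, by default, behind a filter-based incremental refresh, which reuses the cube's template and filter), assuming an illustrative transaction table named ORDER_TRANSACTION with the columns described above:

SELECT   item,
         status,
         transaction_number,
         SUM(qty_sold)  AS qty_sold
FROM     ORDER_TRANSACTION
WHERE    transaction_number >= 100     -- the cube filter, reused by the incremental refresh
GROUP BY item,
         status,
         transaction_number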

Initial data:

Data is updated as below on the database side:

Line 2 – qty_sold number is updated

Line 3 – status is altered from confirmed to canceled

Line 4 – newly added

Line 5 – newly added

UPDATE

Insert new rows from report data and overwrite overlapping rows between old cube data and report data.

Line 2 – qty_sold number is updated.

Line 3 – The status canceled row is newly inserted, and in line 4 the original data is not modified. For a change in any other attribute, a new row is added and the previous row also persists.

Line 5 – Newly added transaction is inserted.

And the new data with transaction_number 1 is not added because it does not meet the filter criteria to have transaction_number >= 100.

INSERT

Only insert new, non-overlapping rows from report data.

Line 2 – qty_sold number is NOT updated.

Line 3 – Status canceled row is newly inserted.

Line 5 – Newly added transaction is inserted.

And the new data with transaction_number 1 is not added.

DELETE

Remove overlapping rows from old cube data.

A Delete incremental refresh report is not executed against the warehouse; it is executed against the Intelligent Cube with the following query. All the data meeting the criteria is deleted.

SQL

Delete from CUBE IncrementalRefreshTest
where [Transaction Number]@[transaction_date] >= 100

UPDATE ONLY

Only overwrite overlapping rows from report data.

Line 2 – qty_sold number is updated.

Summary

In summary, when defining an Incremental Refresh report, take the following behavior into consideration.

  • Update/Update only option does not compare all the attribute elements.
  • Delete option is performed on the Intelligent Cube, and data is not compared with the warehouse.

Tips & Tricks #4: How to Create N-Level Deep Nested Prompts in MicroStrategy Developer (Desktop)

This is something that I actually had the opportunity to use this week. It really came in handy and helped satisfy my client’s requirements.

Just a reminder as we discuss this tip: I am using MicroStrategy v9.4.1 and MicroStrategy Developer. Developer is the new name for MicroStrategy Desktop as of v9.4.1.

This example demonstrates how to create a 3-level deep nested prompt that will prompt the user to select a year, then a quarter within that year, then a month within that quarter.

Prompt-in-prompt is a feature in which the answer to one prompt is used to define another prompt. This feature is only implemented for element list prompts. The following steps describe how to achieve this.

Create the Year Prompt

  1. Create the highest level filter first. This is a filter which contains a prompt on an attribute element list. Create a filter on the attribute “Year” and click “Prompt“.
  2. Click “prompt on attribute element list” and then click “Next” through the rest of the screens to accept the default values.
  3. Do not set any additional conditions. Save the filter as “Year Filter“.

Trick4-1

Trick4-2

From this level on, each filter will contain both a prompt on an element list and will have its element list filtered by the filter created one level above it. There is no limit to the number of such filters that can be layered.

Create the Quarter Prompt

  1. Create a filter on attribute “Quarter” and click “Prompt“.
  2. Select “Use a filter to reduce the number of elements” and select the “Year Filter” created in the previous step.
  3. Save this filter as “Quarter Filter” .

Trick4-3

Create the Month Prompt

  1. Follow the same basic procedure as above to create a Month filter. Create a filter on attribute “Month“.
  2. Click “Prompt“.
  3. Select “Use a filter to reduce the number of elements” and select the “Quarter Filter” created in the previous step.
  4. Save this filter as “Month Filter” .

Trick4-4

Try it Out

Place only the lowest level filter in the filter section of the report. In this case, the “Month Filter” will be placed in the filter section.

When the report is executed, it will prompt for Year, then for Quarters in the selected Year, then for Months in the selected Quarter, as shown in the following sequence of screen shots.

Trick4-5

Trick4-6

Trick4-7

Tips & Tricks #3: How to Define Custom Subtotal Displays in MicroStrategy Desktop

By default, when users apply subtotals in a report, the name of the subtotal is displayed in the subtotal line items that appear in the report. Users can use custom subtotals to give more control over the characteristics of a subtotal. Custom subtotals allow users to define custom subtotal line items that appear on the reports.

Users can make the subtotal name dynamic by typing special characters in the subtotal name field as listed in the following table.

Trick2-1

Trick2-2

To define specific subtotal displays for a report like the one shown above, follow the steps below:

  1. Select Subtotals from the Data menu. The Subtotals dialog box opens. Clear the Totals check box to remove the standard subtotals.
  2. Click Advanced.
  3. Click New to create a custom subtotal.
  4. Type the following for the name: “Total for the #P #0”. Remember that #P displays the parent attribute and #0 (the number zero, not the letter o) displays all the forms of the parent attribute. In this case, only one form exists for each, as shown below.

Trick2-3

 

All the metrics on the report are listed. Users can select the subtotal function to use for each. Total is correct for all of the metrics.

  1. Check the Total for the #P #0 subtotal (shown below).
  2. Click Advanced.
  3. Select Across level and then select the Region and the Employee as the levels.
  4. Click OK to save the new subtotal.
  5. Click OK to return to the Subtotals dialog box.
  6. Click OK.

Trick2-4

The report should now look like this.

Trick2-5