When I was a young boy, I loved to color with my big box of Crayola Crayons. I would pull out blank sheets of paper and create multi-colored masterpieces (at least my mother said so).
Crayola’s crayon chronology tracks their standard box, from its humble eight-color beginnings in 1903 to the present day’s 120-count lineup. According to Crayola, sixty-one of the seventy-two colors from the official 1975 set survive.
A creative dataviz type who goes by the name Velociraptor (referred to from here on as “Velo”) created the chart below to show the historical crayonology (I just made that word up!) of Crayola Crayon colors.
Velo gently scraped Wikipedia’s list of Crayola colors, corrected a few hues, and added the standard 16-count School Crayon box available in 1935.
Except for the dayglow-ski-jacket-inspired burst of neon magentas at the end of the ’80s, the official color set has remained remarkably faithful to its roots!
Ever industrious, Velo also calculated the average growth rate: 2.56% annually. For maximum understandability, he reformulated it as “Crayola’s Law,” which states:
The number of colors doubles every 28 years!
If the Law holds true, Crayola’s gonna need a bigger box, because by the year 2050, there’ll be 330 different crayons! 
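For the curious, a quick sanity check in R (assuming the 120-count lineup circa 2010 and the 2.56% growth rate quoted above) reproduces both the 28-year doubling time and the 330-crayon projection:

growth <- 0.0256
log(2) / log(1 + growth)          # doubling time: about 27.4 years, i.e., roughly 28
120 * (1 + growth)^(2050 - 2010)  # about 330 colors projected for 2050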
A Second Version
Velo was not satisfied with his first version, so he produced the second version below. 
A Third Version (and interactive too!)
Click through to the interactive version for a larger view with mouseover color names!
Source: Stephen Von Worley, Color Me A Dinosaur, The History of Crayola Crayons, Charted, Data Pointed, January 15, 2010, http://www.datapointed.net/2010/01/crayola-crayon-color-chart/.
Source: Stephen Von Worley, Somewhere Over The Crayon-Bow, A Cheerier Crayola Color Chronology, Data Pointed, October 14, 2010, http://www.datapointed.net/2010/10/crayola-color-chart-rainbow-style/.
While called the “Festival of Lights,” Diwali is most importantly a day to become aware of one’s “inner light.” In Hindu philosophy there is an idea of “Atman,” something beyond the body and mind which is pure, infinite, and eternal. Today is a celebration of “good” versus “evil,” a day when the light of higher knowledge dispels ignorance. With this awakening come compassion and joy.
The background story and practices vary region to region. Many people celebrate by lighting fireworks and sharing sweets and candies. Diwali is a holiday celebrated across a vast array of countries and religions. It is celebrated in India, Nepal, Sri Lanka, Myanmar, Mauritius, Guyana, Trinidad & Tobago, Suriname, Malaysia, Singapore and Fiji, by Hindus, Jains, Sikhs and Buddhists.
This informative infographic is from 2012, but I like the information about Diwali it provides and thought I would share it.
Source: Metal Gaia, Happy Diwali!, November 13, 2012, http://metal-gaia.com/2012/11/13/happy-diwali/.
How to perform text mining on a MicroStrategy report and display the result using the R integration functionality
Here is another great post in the MicroStrategy Community from Jaime Perez (photo, right) and his team. A lot of work went into the preparation of this post, and it shows some great ways to use the R integration with MicroStrategy.
Contributors from Jaime’s team include:
Text Mining Using R Integration in MicroStrategy
Users may wish to perform text mining using R on the result of any arbitrary MicroStrategy report and display the result. One of the problems that hinders users from achieving this is that the number of output elements is not always consistent with the number of report rows. For example, a report may have three attributes named ‘Age groups’, ‘Reviewer’, and ‘Survey feedback’, and it might display four rows of feedback as follows:
If the above report result is sent to R as input and the R script breaks each feedback sentence down into term frequencies grouped by age group, the output will have 18 rows.
Since the number of output elements is greater than the number of the MicroStrategy report rows, the report execution will fail. Using the objects in the Tutorial project, this technical note (TN207734) describes one way to display the result of text mining on a MicroStrategy report, using the R integration functionality.
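To make the mismatch concrete, here is a toy R sketch (the feedback strings below are invented, not the technote’s actual rows) that breaks four feedback sentences into per-age-group term frequencies; the resulting (age group, term) pairs easily outnumber the four input rows:

feedback  <- c("great service and friendly staff",
               "staff was friendly",
               "long wait but great service",
               "wait was too long")
age_group <- c("20s", "20s", "30s", "40s")

# Split every sentence into words and count term frequency per age group
words <- strsplit(tolower(feedback), "\\s+")
tf    <- table(rep(age_group, lengths(words)), unlist(words))

# One output row per (age group, term) pair that actually occurs
tf_long <- subset(as.data.frame(tf, stringsAsFactors = FALSE), Freq > 0)
nrow(tf_long)   # more rows than the 4 report rows sent to R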
- It is assumed that, following the instructions in TN43665, the MicroStrategy R Integration Pack has already been installed on the Intelligence Server.
The Steps Involved
STEP 1: Decide on the input values that need to be sent to R via R metrics
The first step is to decide which data you wish to perform text mining on. In this technical note, the sample report will let users select one year element and an arbitrary number of category elements, and specify the Revenue amount, in prompts. The report will then display the value of the normalized TF-IDF (term frequency-inverse document frequency) for every word showing up in the qualified Item attribute elements, grouped by the Category elements.
A user may select the following values for each prompt and the report may look as shown below.
- Year: 2012
- Category: Books, Movies, and Music
- Revenue: greater than $15,000
Eventually, the user may want to see the normalized TF-IDF for every word showing up in the Item attribute elements as shown below:
Since the final output displays each word from the Item attribute grouped by the Category elements, the necessary input values to R are as follows:
- The elements of the Category attribute.
- The elements of the Item attribute.
STEP 2: Create metrics to pass the input values to R
The input values to R from MicroStrategy must be passed via metrics. Hence, on top of the current grid objects, additional metrics need to be created. For this sample report, since the inputs are the elements of two attributes, create two metrics with the following definitions so that the elements are displayed as metrics.
STEP 3: R script – Phase 1: Define input and output variables and write R script to obtain what you wish to display in a MicroStrategy report
In the R script, define (1) a variable that receives the inputs from MicroStrategy and (2) a variable that will be sent back to MicroStrategy as the output, as depicted below. Since the number of output elements must match the number of input elements, the output is defined as “output = mstrInput2” to avoid errors. In other words, this script executes R functions to obtain the data that you wish to display in a MicroStrategy report, but the output it returns is simply the input. How the text mining result is actually displayed in a MicroStrategy report is covered later in this technical note.
For the rest of this technical note, assume that, after manipulating the input values, the variable named ‘norm.TF.IDF’ in the R script holds the TF-IDF value for each term.
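As a rough illustration only (the technote does not publish its full script), a normalized TF-IDF could be computed in base R along the following lines. The Category and Item values below are placeholders, and it is assumed here that mstrInput1 carries the Category elements and mstrInput2 the Item elements:

# Placeholder inputs standing in for the values MicroStrategy would pass in
mstrInput1 <- c("Books", "Books", "Movies", "Music")
mstrInput2 <- c("data mining for beginners", "practical data mining",
                "the great data heist", "greatest hits volume one")

# Treat all Item names within a Category as one "document"
docs  <- tapply(mstrInput2, mstrInput1, paste, collapse = " ")
words <- lapply(docs, function(d) tolower(unlist(strsplit(d, "\\s+"))))

# Term frequency per Category
terms <- sort(unique(unlist(words)))
tf    <- sapply(words, function(w) table(factor(w, levels = terms)))

# Inverse document frequency and a simple max-normalized TF-IDF
idf         <- log(ncol(tf) / rowSums(tf > 0))
norm.TF.IDF <- (tf * idf) / max(tf * idf)

# The output sent back to MicroStrategy mirrors the input to keep row counts equal
output <- mstrInput2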
STEP 4: Create tables in the data warehouse to store the value of your R output
In order to display the values of ‘norm.TF.IDF’ in a MicroStrategy report, tables to hold the result need to be created in the data warehouse. In other words, an additional report will later be created in MicroStrategy, and it will extract the data from the database tables created in this section.
In this specific example, the variable ‘norm.TF.IDF’ has the elements of words (terms) and categories and the values of the normalized TF-IDF. Considering the types of data, the first two should be displayed as attributes and the values of the normalized TF-IDF should be presented in a metric. Hence, two lookup tables to hold the term and category elements and one fact table need to be created to store all the data. On top of these tables, one relationship table is also required since the relationship between words and categories is many-to-many.
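For illustration, the four tables could be created through the same RODBC channel that the Step 5 script uses. The table names below come from that script; the column names and types are assumptions and should be adapted to your warehouse (a real design may add numeric ID columns for the attributes):

# Assumed column layouts; table names match the sqlSave calls in Step 5
library(RODBC)
ch <- odbcConnect("DSN_name")
sqlQuery(ch, "CREATE TABLE tm_Category (Category varchar(255))")
sqlQuery(ch, "CREATE TABLE tm_Word     (Word varchar(255))")
sqlQuery(ch, "CREATE TABLE tm_Word_Cat (Word varchar(255), Category varchar(255))")
sqlQuery(ch, "CREATE TABLE tm_Fact     (Word varchar(255), Category varchar(255), weight float)")
odbcClose(ch)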
STEP 5: R script – Phase 2: Populate the tables in your R script
As previously mentioned, the variable named ‘norm.TF.IDF’ contains the values that a user wishes to display in a MicroStrategy report, as shown below.
In this R script, four more variables are defined from ‘norm.TF.IDF’, each of which contains the subset of data that will be inserted into the database tables.
tm_Category holds the unique elements of the Category.
tm_Word holds the unique elements of the Word (Term).
tm_Word_Cat stores the values of the many-to-many relationship.
tm_Fact contains the values of TF-IDF for every Word-Category combination.
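As one possible sketch (assuming ‘norm.TF.IDF’ is a term-by-category matrix, as in the Step 3 illustration), the four variables could be derived like this:

# Flatten the matrix into (Word, Category, weight) rows
weights <- as.data.frame(as.table(norm.TF.IDF), stringsAsFactors = FALSE)
names(weights) <- c("Word", "Category", "weight")
weights <- weights[weights$weight > 0, ]   # drop zero-weight rows

tm_Category <- data.frame(Category = unique(weights$Category))
tm_Word     <- data.frame(Word = unique(weights$Word))
tm_Word_Cat <- unique(weights[, c("Word", "Category")])   # many-to-many relationship
tm_Fact     <- weights                                    # Word, Category, TF-IDF weight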
In the R script, populate the database tables with the above four subsets of ‘norm.TF.IDF’.
# Load RODBC
library(RODBC)

# RODBC package: assign ch the connectivity information
ch <- odbcConnect("DSN_name")

# Delete all the rows of the tables
sqlClear(ch, "tm_Category", errors = TRUE)
sqlClear(ch, "tm_Word", errors = TRUE)
sqlClear(ch, "tm_Word_Cat", errors = TRUE)
sqlClear(ch, "tm_Fact", errors = TRUE)

# SQL: insert the data into the tables; use a parameterized query
sqlSave(ch, tm_Category, tablename = "tm_Category", rownames = FALSE, append = TRUE, fast = TRUE)
sqlSave(ch, tm_Word, tablename = "tm_Word", rownames = FALSE, append = TRUE, fast = TRUE)
sqlSave(ch, tm_Word_Cat, tablename = "tm_Word_Cat", rownames = FALSE, append = TRUE, fast = TRUE)
sqlSave(ch, tm_Fact, tablename = "tm_Fact", rownames = FALSE, append = TRUE, fast = TRUE)

# Close the channel
odbcClose(ch)
STEP 6: Create and add an R metric, which implements the R script
The R script is done. It is time to implement it from MicroStrategy by creating an R metric. In the deployR interface, open the R script and define the input and output that you specified in Step 3, as follows. Since the elements of the Category and Item attributes are characters, choose “String” as their data type. Likewise, since the output is the same as mstrInput2, its data type is also set to String.
Create a stand-alone metric and paste in the metric expression generated by the deployR utility. Then replace the last parameters with the Category and Item metrics that you created in Step 2.
Add the R metric to the report.
After the R metric is added, the report and R will perform the following actions:
i. The report lets users select the prompt answers
ii. MicroStrategy sends the Category and Item elements to R via the R metric
iii. R performs text mining to calculate the TF-IDF based on the inputs
iv. R generates subsets of the TF-IDF
v. R truncates the database tables and populates them with the subset of the TF-IDF
vi. R sends the output (which is actually the input) back to MicroStrategy
vii. The report displays the values of all objects, including the R metric
STEP 7: Create MicroStrategy objects to display the data
From the tables created in Step 4, create the Word and Category attributes and the fact named weight. The object relationship is as depicted below.
Now, create a new report with these objects. This report will obtain and display the data from the database tables.
STEP 8: Utilize the report level VLDB properties to manipulate the order of the report execution jobs
There are now two reports; let them be named R1 and R2, as described below:
- R1: A report which prompts users to specify the report requirements and implements the R script executing text mining
- R2: A report which obtains the result of text mining from the database and displays it
If the two reports are placed in a document as datasets as shown below, there is one problem: R2 may start its execution before R1 populates the database tables with the result of text mining.
In order to force R2 to execute its job after the completion of R1, the report PRE/POST statement VLDB properties, along with an additional database table, may be used. The table tm_Flag contains the value 0 or 1. R2 is triggered when R1 sets the value of completeFlag to 1. The detailed steps are described below with the script for SQL Server.
i. Create another table in the database, which holds the value of 1 or 0
CREATE TABLE tm_Flag (completeFlag int)
INSERT INTO tm_Flag VALUES (0)
ii. In the VLDB property ‘Report Post Statement 1’ of the R1 report, define a Transact-SQL statement that changes the value of completeFlag to 1.
DECLARE @query as nvarchar(100)
SET @query = 'UPDATE tm_Flag SET completeFlag = 1'
EXEC sp_executesql @query
iii. Define the VLDB property ‘Report Pre Statement 1’ in R2 so that it will check the value of completeFlag every second and loop until it turns to 1. After the loop, it will revert the value of completeFlag back to 0. After this Report Pre Statement, R2 will obtain data from the database, which has been populated by R1.
DECLARE @intFlag INT
SET @intFlag = (SELECT MAX(completeFlag) FROM tm_Flag)
WHILE (@intFlag = 0)
BEGIN
    WAITFOR DELAY '00:00:01'
    SET @intFlag = (SELECT MAX(completeFlag) FROM tm_Flag)
END
DECLARE @query as nvarchar(100)
SET @query = 'UPDATE tm_Flag SET completeFlag = 0'
EXEC sp_executesql @query
Overall execution flow
1. Answer prompts
2. Only the text mining result is displayed to users
Third Party Software Installation:
WARNING: The third-party product(s) discussed in this technical note is manufactured by vendors independent of MicroStrategy. MicroStrategy makes no warranty, express, implied or otherwise, regarding this product, including its performance or reliability.
I received the following e-mail from the Kimball Group. Thought I would share.
Kimball Group Retiring on December 31, 2015
During the past three decades, we have worked with hundreds of clients, written thousands of pages, taught tens of thousands of students, and flown millions of miles. It’s been incredibly rewarding and challenging, but it will soon be time to move on. The members of the Kimball Group will retire at the end of December 2015.
We wanted to give you plenty of notice while there’s still time to engage us or enroll in our classes (or both).
- Kimball University Public Classes: Several Dimensional Modeling and DW/BI Lifecycle classes are scheduled for the remainder of this year. We’ll announce our 2015 “final tour” in mid-December.
- Kimball University Private Onsite Classes: Check out our onsite classes and contact Margy if you have questions.
- Kimball Group Consulting: Check out our consulting offerings and contact Bob if you have questions.
Stay tuned for more details during the next several months.
We have learned a tremendous amount from our clients, students and readers through the years and are extremely grateful for your business, intelligence, wit and kindness. We hope to see as many of you as we can during the coming year as we approach retirement.
Thanks and best regards,
Ralph, Julie, Margy, Bob, Joy and Nancy
Bryan Brandow (photo, right), a Data Engineering Manager for a large social media company, is one of my favorite bloggers out there when it comes to thought leadership and digging deep into the technical aspects of Tableau and MicroStrategy. Bryan just blogged about triggering cubes and extracts. Here is a brief synopsis.
One of the functions that never seems to be included in BI tools is an easy way to kick off an application cache job once your ETL is finished. MicroStrategy’s Cubes and Tableau’s Extracts both rely on manual or time based refresh schedules, but this leaves you in a position where your data will land in the database and you’ll either have a large gap before the dashboard is updated or you’ll be refreshing constantly and wasting lots of system resources. They both come with command line tools for kicking off a refresh, but then it’s up to you to figure out how to link your ETL jobs to call these commands. What follows is a solution that works in my environment and will probably work for yours as well. There are of course a lot of ways for your ETL tool to tell your BI tool that it’s time to refresh a cache, but this is my take on it. You won’t find a download-and-install software package here since everyone’s environment is different, but you will find ample blueprints and examples for how to build your own for your platform and for whatever BI tool you use (from what I’ve observed, this setup is fairly common). Trigger was first demoed at the Tableau Conference 2014. You can jump to the Trigger demo here.
I recommend you click on the link above and give his blog post a full read. It is well worth it.
A star schema is a design that contains only one lookup table for each hierarchy in the data model instead of separate lookup tables for each attribute. With only a single lookup table for each hierarchy, the IDs and descriptions of all attributes in the hierarchy are stored in the same table. This type of structure involves a great degree of redundancy. As such, star schemas are always completely denormalized. Let’s review the star schema above, based on the MicroStrategy Tutorial data model.
The schema contains only two lookup tables, one for each hierarchy. LU_LOCATION stores the data for all of the attributes in the Location hierarchy, while LU_CUSTOMER stores the data for all of the attributes in the Customer hierarchy. As a result, star schemas contain very few lookup tables: one for each hierarchy present in the data model. Each lookup table contains the IDs and descriptions (if they exist) for all of the attribute levels in the hierarchy.
Even though you have fewer tables in a star schema than a snowflake, the tables can be much larger because each one stores all of the information for an entire hierarchy. When you need to query information from the fact table and join it to information in the lookup tables, only a single join is necessary in the SQL to achieve the desired result.
Joins in a Star Schema
As an example, if you run the same report to display customer state sales, only one join between the lookup and fact table is required to obtain the result set as illustrated below.
To join the Customer State description (Cust_State_Desc) to the Sales metric (calculated from Sales_Amt) requires only one join between tables since the Customer State ID and description are both stored in the LU_CUSTOMER table. As a result, the query has to access only one lookup table to obtain all of the necessary information for the report.
Even though achieving this result set requires only a single join, star schemas do not necessarily equate to better performance. Depending on the volume of data in any one hierarchy, you may be joining a very large lookup table to a very large fact table. In such cases, more joins between smaller tables can yield better performance.
Characteristics of a Star Schema
The following is a list of characteristics of a star schema.
- Contains fewer tables (one per hierarchy)
- Contains very large tables (much larger than some forms of snowflake schemas due to storing all attribute ID and description columns)
- Stores the IDs and descriptions of all the attributes in a hierarchy in a single table
- Requires only a single join when querying fact data regardless of the attribute level at which you are querying data
Source: MicroStrategy University, MicroStrategy Advanced Data Warehousing, Course Guide, Version: ADVDW-931-Sep13-CG.