
Best practice working with AnalyticsCreator

Germo Görtz edited this page Mar 1, 2020 · 4 revisions

These recommendations are based on my subjective experience. Other ideas and additions are welcome.

check for a new version on each start

AC will not inform you about new versions. If you use your local copy you could be outdated.

If several developers work on the same repository using different versions of AC, this can be dangerous: when you attach an existing repository, the version of AC and the version of the repository are compared, and a downgrade of the repository could happen.

Launch AC from here: https://www.analyticscreator.com/AnalyticsCreator/publish.htm

Don't click "Install", but click the "launch" link: https://www.analyticscreator.com/AnalyticsCreator/AnalyticsCreator.application

You don't need to visit this website every time. A small file is downloaded to your download folder; you can start it from there. It will check for a new version and download it. Mine is located here:

"d:\Users\ggoer\Downloads\AnalyticsCreator.application"

AC Settings

You can change these settings only when you start AC:

SQL Server Settings: LocalDB vs SQL Server

If you are the only developer and the features of SQL Server 2016 are sufficient, then LocalDB could be OK.

As soon as several developers need to access the repository at the same time, you need a real SQL Server instance.

I also want to use features of SQL Server 2017 that don't work with the standard SQL Server 2016 LocalDB.
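If you are not sure which version a given instance runs, a plain T-SQL query (standard SQL Server, no AC-specific objects) tells you:

```sql
-- Show version and edition of the connected SQL Server instance.
-- ProductVersion 13.x = SQL Server 2016, 14.x = SQL Server 2017.
SELECT
       SERVERPROPERTY('ProductVersion') AS ProductVersion
     , SERVERPROPERTY('Edition')        AS Edition;
```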

Save repositories as Backup or Script

Outdated. Currently only the Script method is supported. Backups can only be restored.

Using Backups to save and restore repositories was the only option for a long time:

  • [+] Backups are fast
  • [-] access to the file system on the server is required ("UNC path to store backup")
  • [--] it is not possible to restore a backup to a SQL Server instance with an older version
    and this is a big issue!

Scripts have been implemented to solve the backup issues:

  • [-] Using scripts is less performant than using backups
  • [+] Scripts are SQL code
    • [++] can be executed on any SQL Server version
    • can be read
    • can be compared

=> I suggest using scripts.

New projects

Configure hidden parameters (HIST_ID_FIELDNAME, HIST_DATEFROM_FIELDNAME, HIST_DATETO_FIELDNAME, VAULT_HUB_FIELDNAME)

Some parameters should be configured at the beginning, as they affect automatically generated columns and code. They should not be changed later. That's why they are not shown in the AnalyticsCreator!

You can see all default parameters which are not settable using this query:

SELECT
       Param_Name
     , Default_Value
     , Description
     , Settable
FROM
     INTERN.PARAMETERS
WHERE Settable = 0
ORDER BY
         Param_Name;

I don't like some of the default values, so I change them directly in the repository.

  • HIST_ID_FIELDNAME
    • default: SATZ_ID
    • my default: Hist_Id
  • HIST_DATEFROM_FIELDNAME
    • default: DAT_VON_HIST
    • my default: HistDateFrom
  • HIST_DATETO_FIELDNAME
    • default: DAT_BIS_HIST
    • my default: HistDateTo
  • VAULT_HUB_FIELDNAME
    • default: VAULT_HUB_ID
    • my default: Tik (Time Independent Key)

Of course, you cannot change the values in the INTERN.PARAMETERS table.

The table CFG.PARAM_VALUES should be used to store the values. You can use a procedure to add or change entries.

EXEC CFG.SET_PARAM_VALUE
     @ParamName = N'HIST_ID_FIELDNAME'
   , @ParamValue = N'HistID';

EXEC CFG.SET_PARAM_VALUE
     @ParamName = N'HIST_DATEFROM_FIELDNAME'
   , @ParamValue = N'HistDateFrom';

EXEC CFG.SET_PARAM_VALUE
     @ParamName = N'HIST_DATETO_FIELDNAME'
   , @ParamValue = N'HistDateTo';

EXEC CFG.SET_PARAM_VALUE
     @ParamName = N'VAULT_HUB_FIELDNAME'
   , @ParamValue = N'Tik';

You can also edit the content of the table directly.
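A direct edit could look like the following sketch. The column names Param_Name and Param_Value are my assumption (carried over from INTERN.PARAMETERS); verify them against your repository before running anything:

```sql
-- Sketch only: upsert one hidden parameter directly in CFG.PARAM_VALUES.
-- Column names Param_Name / Param_Value are assumed, check them first.
IF EXISTS (SELECT 1 FROM CFG.PARAM_VALUES WHERE Param_Name = N'HIST_ID_FIELDNAME')
    UPDATE CFG.PARAM_VALUES
       SET Param_Value = N'HistID'
     WHERE Param_Name  = N'HIST_ID_FIELDNAME';
ELSE
    INSERT INTO CFG.PARAM_VALUES (Param_Name, Param_Value)
    VALUES (N'HIST_ID_FIELDNAME', N'HistID');
```

The CFG.SET_PARAM_VALUE procedure shown above is the safer way; the direct edit is only a fallback.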

But it is much easier to use template repositories! Also include these hidden parameters in your template repositories and compare them against a newly created repository: How to compare repositories

Template repositories

Use template repositories:

  • open a template containing your settings and default objects and use "Save As ..."
  • for existing and new projects:
    • adapt them as required
    • change and add settings
    • compare template repositories against newly created repositories: How to compare repositories

simple objects to compare by name

Some objects are used and referenced by name. The name should be used to compare these objects, not the ID.

  • macros
    [CFG].[MACROS]
    compare by [MacroName]
  • object scripts
    [CFG].[OBJECT_SCRIPTS]
    the unique key used in AC is ([Table], [Name]),
    but [Table] can be NULL; for easy usage it is a good idea to use a unique [Name], so:
    compare by [Name]
  • parameters
    [CFG].[PARAM_VALUES]
    compare by [Param_Name]
  • scripts
    [CFG].[SCRIPTS]
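Such a name-based comparison can be sketched with a FULL OUTER JOIN across two repository databases. Repo_A and Repo_B are placeholder database names, and the macro body column (here called Code) is an assumption; adjust it to the actual CFG.MACROS schema:

```sql
-- Sketch: compare macros of two repositories by [MacroName], not by ID.
-- Repo_A / Repo_B are placeholder database names; Code is an assumed column.
SELECT COALESCE(a.MacroName, b.MacroName) AS MacroName
     , CASE
           WHEN a.MacroName IS NULL THEN 'only in Repo_B'
           WHEN b.MacroName IS NULL THEN 'only in Repo_A'
           WHEN a.Code <> b.Code    THEN 'different'
           ELSE                          'identical'
       END AS CompareResult
FROM Repo_A.CFG.MACROS AS a
FULL OUTER JOIN Repo_B.CFG.MACROS AS b
     ON a.MacroName = b.MacroName;
```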

simple objects to compare by ID

Some objects are used and referenced by ID. When comparing these objects, the content per ID should be kept.

  • predefined transformations
    [CFG].[PREDEFINED_TRANSFORMATIONS]
    [PreTransID] needs to be used for the comparison
    if the same [PreTransID] has a different meaning in different repositories, you have an issue! Try to use the same predefined transformations in all your repositories.
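A quick conflict check between two repositories could look like this sketch. Repo_A and Repo_B are placeholder database names, and the descriptive column (here Name) next to [PreTransID] is an assumption:

```sql
-- Any row returned means the same PreTransID has a different meaning
-- in the two repositories.
SELECT a.PreTransID, a.Name AS Name_in_A, b.Name AS Name_in_B
FROM Repo_A.CFG.PREDEFINED_TRANSFORMATIONS AS a
JOIN Repo_B.CFG.PREDEFINED_TRANSFORMATIONS AS b
  ON a.PreTransID = b.PreTransID
WHERE a.Name <> b.Name;
```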

Macros

Macros can be nested!

Macros are defined in the tree:

Macros can be used as a placeholder for code in tables, transformations (, ...?)

  • to shorten the code
  • to ensure the usage of the same code in different places

Here is an example to generate a hash code: a standard macro "GetVaultHash" is used by AC, but the code can be modified. Here is my personal default, which differs from the AC default:

CONVERT(CHAR(32), HASHBYTES('MD5', LOWER(CONCAT(N'',:1, '|~|', :2, '|~|', :3, '|~|', :4, '|~|', :5, '|~|', :6, '|~|', :7, '|~|', :8, '|~|', :9, '|~|', :10, '|~|', :11, '|~|', :12, '|~|', :13, '|~|', :14, '|~|', :15, '|~|', :16, '|~|', :17, '|~|', :18, '|~|', :19, '|~|', :20, '|~|'))), 2)
  • :1, :2, ... Parameters
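To illustrate the pattern (not the AC expansion itself): with two hypothetical source columns substituted for :1 and :2, the macro boils down to this kind of expression:

```sql
-- Illustration with two hypothetical columns, CustomerID and OrderDate.
-- CONVERT style 2 returns the hex digest without the '0x' prefix,
-- so the 16-byte MD5 hash fits exactly into CHAR(32).
SELECT CONVERT(CHAR(32),
               HASHBYTES('MD5',
                         LOWER(CONCAT(N'', T1.CustomerID, '|~|', T1.OrderDate, '|~|'))),
               2) AS VaultHash
FROM dbo.Orders AS T1;  -- hypothetical source table
```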

todo:

  • what is the effect of "Language"?
    DAX, MDX, SSIS, T-SQL
    in the past I used only "T-SQL"
  • "Referenced Table"
    How to use this? any example?

placeholders

  • where can these placeholders be used?

{TableName}

example in SSAS Tabular measure:
COUNTROWS('{TableName}')

{ColumnName}; @this

  • both should be equivalent (are they?)
  • how is this resolved in SSAS Tabular definitions?

a reference to the current source row; in T-SQL, this usage

MAX(@this)
OVER 
(PARTITION BY T1.Mandant, T1.Objekt, T1.ObjektTyp)

will generate this T-SQL code:

MAX([T1].[Datum_dt])
OVER 
(PARTITION BY T1.Mandant, T1.Objekt, T1.ObjektTyp)

if [T1].[Datum_dt] is the source column.

{ColumnNameFull}

fully qualified OLAP column name:
[TableName].[ColumnName] for multidimensional OLAP
'TableName'[ColumnName] for tabular OLAP

{AggregationName}

  • maybe this is the SSAS Tabular aggregation (Sum, Average, Count, ...)?

{AggregationStatement}

  • maybe this is the SSAS Tabular aggregation statement?

SSAS Tabular Measure definition

  • what does this mean?
    "You can also use aliases in the statement, which, besides macros, also makes things much easier."

AC Cloud: Export and Import

Objects with all required predecessors can be exported and imported.

export

import

all required predecessors will be imported and used schemas will be created:

what happens if you import a persisted table a second time?

This happens. Bug or Feature?

It should not happen that a persisted table persists different transformations at the same time.

Here is a list of related bugtracker entries:

todo:

I need to investigate, but I will do this after implementation of
0000015: Export and Import Repository to and from file

  • are used predefined transformations exported and imported?
  • are used macros exported and imported?

stored procedures existing in each repository

Each repository contains some useful stored procedures to manipulate the repository.

ToDo: describe some of these procedures.

stored procedures existing in each generated DWH

[CFG].[COMPRESS_HISTORY]

CREATE PROCEDURE [CFG].[COMPRESS_HISTORY]
	@Schema sysname, @Table sysname, @Keys nvarchar(4000), 
	@IgnoreFields nvarchar(4000) = NULL, @ValidToMode bit = 0, @Do bit = 0, 
	@ID_Field sysname = 'SATZ_ID', @DateFrom_Field sysname = 'DAT_VON_HIST', @DateTo_Field sysname = 'DAT_BIS_HIST'	
	AS
BEGIN
/*
	This procedure can be used to compress the data in a historized table. 
	Initial State:
	PK	FROM	TO		FIELD1	FIELD2	FIELD3
	---------------------------------------
	1	01.01	05.01	A		B		C
	1	05.01	10.01	A		B		C
	1	10.01	20.01	A		B		C
	1	20.01	25.01	A		D		E
	1	25.01	30.01	A		D		E
	1	30.01	31.12	F		D		E

	Result State:
	PK	FROM	TO		FIELD1	FIELD2	FIELD3
	---------------------------------------
	1	01.01	20.01	A		B		C
	1	20.01	30.01	A		D		E
	1	30.01	31.12	F		D		E

	Parameters:
	@Schema: table schema
	@Table: table name
	@Keys: comma-separated list of PK fields, for example 'InvoiceID,PosNr'
	@IgnoreFields: comma-separated list of column names which will be ignored during comparison. 
			If you don't need to track changes in a specific field, you can add this field to the list 
			and the field's change history will be deleted
	@Do: 1 - make changes, 0 - output SQL Command only, default - 0
	@ValidToMode: 1 if the DateFrom of the next record is equal to the DateTo of the previous record, 
			otherwise 0 (see HIST_VALID_TO_MODE parameter), default - 0
	@ID_Field: name of surrogate key field, default - SATZ_ID
	@DateFrom_Field: name of DateFrom field, default - DAT_VON_HIST
	@DateTo_Field: name of DateTo field, default - DAT_BIS_HIST
	
	Example:
	EXEC CFG.COMPRESS_HISTORY 'STG', 'ORGANIZATION_HIERARCHY', 'ID'
*/
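A usage sketch with named parameters, based on the example from the header comment: start with @Do = 0, which only outputs the generated SQL command, and check it before running with @Do = 1.

```sql
-- Dry run: nothing is changed, the generated command is only output.
EXEC CFG.COMPRESS_HISTORY
     @Schema = 'STG'
   , @Table  = 'ORGANIZATION_HIERARCHY'
   , @Keys   = 'ID'
   , @Do     = 0;
```

If you changed the hidden parameters HIST_ID_FIELDNAME, HIST_DATEFROM_FIELDNAME, or HIST_DATETO_FIELDNAME, also pass @ID_Field, @DateFrom_Field, and @DateTo_Field accordingly, since the defaults are SATZ_ID, DAT_VON_HIST, and DAT_BIS_HIST.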

Bugtracker

use the Bugtracker