[EN] Azure Data Factory V2 – Incremental loading with configuration stored in a table – Complete solution, step by step.

This post explains things that are difficult to find even in English. That's why I will break my rule and not write it in my native language! For the Polish version, please use Google Translate :>

Introduction


Loading data using Azure Data Factory v2 is really simple. Just drop a Copy activity into your pipeline, choose source and sink tables, configure a few properties and that's it – done with just a few clicks!

But what if you have dozens or hundreds of tables to copy? Are you gonna do it for every object?

Fortunately, you do not have to do this! All you need is dynamic parameters and a few simple tricks 🙂

Also, this will give you the option of creating incremental feeds, so that – on the next run – only newly added data is transferred.

Mappings

Before we start diving into details, let’s demystify some basic ADFv2 mapping principles.

  • A Copy activity doesn't need defined column mappings at all,
  • it can map them dynamically using its own mechanism, which retrieves source and destination (sink) metadata,
  • if you use PolyBase, it will map by column order (1st column from the source to the 1st column at the destination, etc.),
  • if you do not use PolyBase, it will map columns by their names – but watch out, the matching is case-sensitive!
  • So all you have to do is keep the same structure and data types on the destination (sink) tables as in the source database.

Bear in mind that if your columns differ between source and destination, you will have to provide custom mappings. This tutorial doesn't show how to do it, but it is possible: use the "Get Metadata" activity to retrieve the column specification from the source, then parse it and pass it as a JSON structure into the mapping's dynamic input (see the sketch below). You can read about mappings in the official documentation: https://docs.microsoft.com/en-us/azure/data-factory/copy-activity-schema-and-type-mapping
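Such a dynamic mapping is just a JSON object of type TabularTranslator. A minimal sketch (the column names here are purely illustrative):

    {
        "type": "TabularTranslator",
        "mappings": [
            { "source": { "name": "EMP_NAME" },    "sink": { "name": "EmployeeName" } },
            { "source": { "name": "UPDATE_DATE" }, "sink": { "name": "UpdateDate" } }
        ]
    }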

String interpolation – the key to success

My entire solution is based on one cool feature called string interpolation. It is part of the built-in expression engine and allows you to inject any value from a JSON object or an expression directly into a string input, without any concatenation functions or operators. It's fast and easy. Just wrap your expression in @{ ... } – it will always be returned as a string.
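For example, assuming a pipeline parameter named ConfigTable holding the value load.cfg (we will define it later), the input:

    SELECT * FROM @{pipeline().parameters.ConfigTable}

evaluates at runtime to the plain string:

    SELECT * FROM load.cfg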

Below is a screenshot from the official documentation that clarifies how this feature works:

Read more about JSON expressions at https://docs.microsoft.com/en-us/azure/data-factory/control-flow-expression-language-functions#expressions

 

So what are we going to do? :>


Good question 😉

In my example, I will show you how to transfer data incrementally from Oracle and PostgreSQL tables into Azure SQL Database.

All of this using configuration stored in a table which, in short, keeps information about the Copy activity settings needed to achieve our goal 🙂

Adding new definitions to the config will also automatically enable transfer for them, without any need to modify the Azure Data Factory pipelines.

So you can transfer as many tables as you want in one pipeline, at once, triggered with one click 🙂

 

Every process needs a diagram :>

 

 

Basically, we will do:

  1. Get the configuration from our config table inside Azure SQL Database using a Lookup activity, then pass it to Filter activities to split the configs for Oracle and PostgreSQL.
  2. In a ForEach activity created for every type of database, we will create simple logic that retrieves the maximum update date from every table.
  3. Then we will dynamically prepare expressions for the SOURCE and SINK properties of the Copy activity. The MAX UPDATEDATE retrieved above and the previous WATERMARK DATE, retrieved from the config, will set our boundaries in the WHERE clause. Every detail, like the table name or table columns, will be passed into the query using string interpolation, directly from the JSON expression. The sink destination will also be parametrized.
  4. Now Azure Data Factory can execute queries evaluated dynamically from JSON expressions; it will run them in parallel to speed up the data transfer.
  5. Every successfully transferred portion of incremental data for a given table has to be marked as done. We do this by saving MAX UPDATEDATE in the configuration, so that the next incremental load knows what to take and what to skip. We will use a Stored Procedure activity here.
This example simplifies the process as much as possible. Remember that in your solution you have to implement logic for every unsuccessful operation. You can achieve that using the On Failure control flow with some activities (chosen depending on your needs) and timeout/retry options set individually for every activity in your pipeline.

 

About sources

I will use PostgreSQL 10 and Oracle 11 XE installed on my Ubuntu 18.04 inside a VirtualBox machine.

In Oracle, tables and data were generated from the EMP/DEPT samples delivered with the XE version.

In PostgreSQL – from the dvdrental sample database: http://www.postgresqltutorial.com/postgresql-sample-database/

 

I simply chose the three largest tables from each database. You can find them in the configuration shown below this section.

 

Every database is accessible from my self-hosted Integration Runtime. I will show an example of how to add a server to Linked Services, but I will skip configuring the Integration Runtime itself. You can read about creating a self-hosted IR here: https://docs.microsoft.com/en-us/azure/data-factory/create-self-hosted-integration-runtime.

 

About configuration

In my Azure SQL Database I have created a simple configuration table:

Id is just an identity value; SRC_name is the type of the source server (ORA or PG).

The SRC and DST tab columns map source and destination objects, Cols defines the selected columns, and WatermarkColumn and WatermarkValue store the incremental metadata.

And finally, Enabled just enables a particular configuration (table data import).

As Andy rightly noted in a comment below this post, it is possible to use "Cols" also to implement SQL logic, like functions, aliases, etc. The value from this column is written directly into the query (more precisely – concatenated between the SELECT and FROM clauses), so you can use it according to your needs, as the example below shows.
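For example, a hypothetical Cols value such as:

    EMPNO, ENAME, UPPER(JOB) AS JOB_NAME

would end up as SELECT EMPNO, ENAME, UPPER(JOB) AS JOB_NAME FROM … – both plain column lists and SQL expressions work.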

 

This is how it looks with initial configuration:
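For illustration, hypothetical rows could look like this (object and column names are examples only):

    Id | SRC_name | SRC_tab        | DST_tab        | Cols                             | WatermarkColumn | WatermarkValue      | Enabled
    1  | ORA      | EMP            | dbo.ORA_EMP    | EMPNO, ENAME, JOB, UPDATE_DATE   | UPDATE_DATE     | 1900-01-01T00:00:00 | 1
    2  | PG       | public.payment | dbo.PG_PAYMENT | payment_id, amount, payment_date | payment_date    | 1900-01-01T00:00:00 | 1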

Create script:
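A minimal sketch of such a script, based on the columns described above (the data types and lengths are assumptions – adjust them to your sources):

    CREATE SCHEMA load;
    GO
    CREATE TABLE load.cfg
    (
        Id              INT IDENTITY(1,1) PRIMARY KEY,
        SRC_name        VARCHAR(10)   NOT NULL, -- source type: 'ORA' or 'PG'
        SRC_tab         NVARCHAR(200) NOT NULL, -- source table
        DST_tab         NVARCHAR(200) NOT NULL, -- destination (sink) table
        Cols            NVARCHAR(MAX) NOT NULL, -- column list concatenated between SELECT and FROM
        WatermarkColumn NVARCHAR(128) NOT NULL, -- last-change (watermark) column name
        WatermarkValue  DATETIME      NOT NULL, -- watermark of the last successful load
        Enabled         BIT           NOT NULL DEFAULT 1 -- 1 = include this table in the load
    );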

 

EDIT 19.10.2018

Microsoft announced that now you can also parametrize linked connections!

https://azure.microsoft.com/en-us/blog/parameterize-connections-to-your-data-stores-in-azure-data-factory/

Let’s get started (finally :P)


Preparations!

Go to your Azure Data Factory portal @ https://adf.azure.com/

Select the Author button with the pencil icon:

 

Creating server connections (Linked Services)

We can't do anything without defining Linked Services, which are just connections to your servers (on-premises and cloud).

  1. Go to Connections and click + New.
  2. Find your database type, select it and click Continue.
  3. Provide all the needed data, like the server IP/host, port, SID (Oracle needs this), login and password.
  4. You can test the connection to check if everything is OK. Click Finish to save your connection definition.
Some types of servers, such as PostgreSQL or MySQL, require separate .NET drivers. Check your server type in Microsoft Docs and search for Prerequisites to match your scenario.

I have created three connections. Here are their names and server types:

 

Creating datasets

Creating linked services just tells ADF what the connection settings are (like connection strings).

Datasets, on the other hand, point directly to database objects.

BUT they can be parametrized, so you can create just ONE dataset and use it to get data from multiple tables within the same source database by passing different parameters 🙂

Source datasets

Source datasets don’t need any parameters. We will later use built-in query parametrization to pass object names.

  1. Click the + symbol near the search box on the left and choose Dataset.
  2. Choose your dataset type, for example Oracle.
  3. Rename it as you like. We will use the name "ORA".
  4. Set the proper Linked service option, just like this for the Oracle database:
  5. And that's it! No need to set anything else. Just repeat these steps for every source database that you have.

In my example, I've created two source datasets: ORA and PG.

As you can see, we also need to create a third dataset. It will work as a source too, BUT also as a parametrizable sink (destination), so creating it is a little different from the others.

Sink dataset

Sinking data needs one extra parameter, which will store the destination table name.

  1. Create a dataset just like in the previous example, choosing your destination type. In my case, it will be Azure SQL Database.
  2. Go to Parameters and declare one String parameter called "TableName". Set the value to anything you like – it's just a dummy value; ADF doesn't like empty parameters, so we have to set a default.
  3. Now, go to Connection and set Table as dynamic content. This will be tricky :). Just click "Select…", don't choose any value, and click somewhere in the empty space. The magic option "Add dynamic content" now appears! Click it or hit Alt+P.
  4. The "Add Dynamic Content" window is now visible. Type "@dataset().TableName" or just click "TableName" in the "Parameters" section below "Functions".
  5. The table name is now parametrized and looks like this:

 

Parametrizable PIPELINE with dynamic data loading.


Ok, our connections are defined. Now it’s time to copy data :>

 

Creating pipeline

  1. Go to your ADF, click the PLUS symbol near the search box on the left and choose "Pipeline":
  2. Rename it. I will use "LOAD DELTA".
  3. Go to Parameters and create a new String parameter called ConfigTable. Set the value to our configuration table name: load.cfg. This simply parametrizes your configuration source, so that in the future it will be possible to load a completely different set of sources by changing only one parameter :>
  4. In case you missed it, SAVE your work by clicking "Save All" if you're using Git, or "Publish All" if not ;]

 

Creating Lookup – GET CFG

First, we have to get the configuration. We will use a Lookup activity to retrieve it from the database.

Bear in mind that the Lookup activity has some limits. Currently, the maximum number of rows that can be returned by a Lookup activity is 5000, up to 2 MB in size. Also, the max duration for a Lookup activity before timeout is one hour. Go to the documentation for the latest info and updates.
    1. Drag and drop a Lookup activity into your pipeline.
    2. Rename it. This is important – we will use this name later in our solution. I will use the value "GET CFG".
    3. In "Settings", choose our parametrizable Azure SQL dataset as the Source Dataset.
    4. Now, don't bother with TableName set to the dummy value :> Just set "Use Query" to "Query", click "Add dynamic content" and type the query (see the sketch below this list).
    5. Unmark "First row only" – we need all rows, not just the first. It should all look like this:
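A minimal sketch of that query, using our ConfigTable parameter (the Enabled filter is an assumption based on the column's purpose):

    SELECT * FROM @{pipeline().parameters.ConfigTable} WHERE Enabled = 1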

 

Creating Filters – ORA CFG & PG CFG

Now we have to split the configs for Oracle and PostgreSQL. We will use Filter activities on the rows retrieved by the "GET CFG" lookup.

  1. Drag and drop a Filter activity twice.
  2. Rename the first block to "ORA CFG" and the second to "PG CFG".
  3. Now go to "ORA CFG", then "Settings".
  4. In Items, click Add dynamic content and type: @activity('GET CFG').output.value . As you can probably guess, this points directly to the GET CFG output rows 🙂
  5. In Condition, click Add dynamic content and type: @equals(item().SRC_name,'ORA') . We have to match the rows with Oracle settings. We know that there is a column in the config table called "SRC_name", so we can use it to filter out all rows except those with the value 'ORA' 🙂
  6. Do the same with the filter activity "PG CFG". Of course, change the value in the condition.

It should look like this:

Creating ForEach – FOR EACH ORA & FOR EACH PG

Now it's time to iterate over the rows filtered into the separate containers (ORA CFG and PG CFG).

  1. Drag and drop two ForEach blocks, rename them to "FOR EACH ORA" and "FOR EACH PG", and connect each one to the proper filter activity, just like in this example:
  2. Click "FOR EACH ORA", go to "Settings", and in Items click Add dynamic content and type: @activity('ORA CFG').output.value . We are telling ForEach that it has to iterate over the results returned by "ORA CFG". They are stored in a JSON array.
  3. Do the same in FOR EACH PG. Type: @activity('PG CFG').output.value
  4. Now, you can edit Activities and add just a "WAIT" activity to debug your pipeline. I will skip this part. Just remember to delete the WAIT block at the end of your tests.

 

Inside ForEach – GET MAX ORA -> COPY ORA -> UPDATE WATERMARK ORA

Place these blocks into FOR EACH ORA. Just go there, click "Activities" and then the edit (pencil) icon.

Every row that the ForEach activity iterates over is accessible using @item() .

And every column in that row can be reached just by using @item().ColumnName .

Remember that you can surround every expression with the brackets @{ } to use string interpolation. Then you can concatenate it with other strings and expressions, just like this: Value of the parameter WatermarkColumn is: @{item().WatermarkColumn}

 

GET MAX ORA

  1. Go to "GET MAX ORA", then Settings.
  2. Choose your source dataset "ORA", set Use Query to "Query" and click Add dynamic content.
  3. Type: SELECT MAX(@{item().WatermarkColumn}) as maxd FROM @{item().SRC_tab} . This will get the maximum date in your watermark column. We will use it as the RIGHT BOUNDARY of the delta slice.
  4. Check that First row only is turned on.
It should look like this:

 

COPY ORA

Now the most important part :> The Copy activity with a lot of parametrized things… So pay attention – it's not that hard to understand, but every detail matters.

Source

  1. In the source settings, set Source Dataset to ORA and in Use query select Query.
  2. Below the Query input, click Add dynamic content and paste the query shown below:
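A sketch matching the explanation that follows (the TO_DATE format mask is an assumption – adjust it to the actual format of your watermark values):

    SELECT @{item().Cols}
    FROM @{item().SRC_tab}
    WHERE @{item().WatermarkColumn} > TO_DATE('@{item().WatermarkValue}', 'YYYY-MM-DD"T"HH24:MI:SS')
    AND @{item().WatermarkColumn} <= TO_DATE('@{activity('GET MAX ORA').output.firstRow.MAXD}', 'YYYY-MM-DD"T"HH24:MI:SS')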

Now, this needs some explanation 🙂

 

 

  • The ORA CFG output has all the columns and their values from our config.
  • We will use SRC_tab as the table name, Cols as the columns for the SELECT query, WatermarkColumn as the LastChange DateTime column name, and WatermarkValue as the LEFT BOUNDARY (greater than, >).
  • The GET MAX ORA output stores the date of the last updated row in the source table, so this is why we are using it as the RIGHT BOUNDARY (less than or equal, <=).
  • And the tricky thing: Oracle doesn't support implicit conversion from a string with an ISO 8601 date, so we need to convert it properly with the TO_DATE function.

So the source is a query on the ORA dataset:

 

Sink

The sink is our destination. Here we will set the parametrized table name and the truncate query.

  1. Select our parametrizable Azure SQL dataset as the Sink Dataset.
  2. Parametrize TableName as dynamic content with the value: @{item().DST_tab}
  3. Also, do the same with the Pre-copy script and put there: TRUNCATE TABLE @{item().DST_tab}

It should look like this:

 

Mappings and Settings

All other settings can be left at their defaults. You don't have to parametrize mappings if you just copy data between tables that have the same structure.

Of course, you can create them dynamically if you want, but it is good practice to transfer data 1:1 – both structure and values – from source to staging.

 

UPDATE WATERMARK ORA

Now we have to confirm that the load has finished and then update the previous watermark value with the new one.

We will use a stored procedure. The code is simple:
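A minimal sketch (the procedure name is just an example; the parameter names match the ones we will import in ADF below):

    CREATE PROCEDURE load.usp_UpdateWatermark
        @Id INT,
        @NewWatermark DATETIME
    AS
    BEGIN
        SET NOCOUNT ON;
        -- store the new watermark, so the next incremental load starts from here
        UPDATE load.cfg
        SET WatermarkValue = @NewWatermark
        WHERE Id = @Id;
    END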

Create it in your Azure SQL database. Then use it in ADF:

  1. Drop a Stored Procedure activity into the project and connect the success constraint from COPY ORA to it. Rename it to "UPDATE WATERMARK ORA" and view its properties.
  2. In SQL Account, set the linked service that points to your Azure SQL database.
  3. Now go to "Stored Procedure", select our procedure name and click "Import parameter".
  4. Now we have to pass values for the procedure parameters, and we will also parametrize them. Id should be @{item().id} and NewWatermark has to be: @{activity('GET MAX ORA').output.firstRow.MAXD} .

 

And basically, that’s all! This logic should copy rows from all Oracle tables defined in the configuration.

We can now test it. This can be done with "Debug" or just by triggering a pipeline run.

If everything is working fine, we can just copy/paste all the content from "FOR EACH ORA" into "FOR EACH PG".

Just remember to properly rename all activities to reflect the new source/destination names (PG). Also, all parameters and SELECT queries have to be redefined. Luckily, PostgreSQL supports ISO dates out of the box, as the sketch below shows.
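For reference, the PostgreSQL source query can be sketched like this (assuming the max-date lookup in this branch is named GET MAX PG; note there is no TO_DATE wrapping, as PostgreSQL casts ISO 8601 strings implicitly):

    SELECT @{item().Cols}
    FROM @{item().SRC_tab}
    WHERE @{item().WatermarkColumn} > '@{item().WatermarkValue}'
    AND @{item().WatermarkColumn} <= '@{activity('GET MAX PG').output.firstRow.MAXD}'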

Source code


Here are all the components in JSON. You can use them to copy/paste the logic directly inside the ADF V2 code editor or save them as files in a Git repository.

Below is the source code for the pipeline only. Everything else can be downloaded as a zip file via "Download all" at the bottom of this article.

Pipeline

 

Download all

IncrementalCopy_ADFv2.zip

 

23 thoughts on "[EN] Azure Data Factory V2 – Incremental loading with configuration stored in a table – Complete solution, step by step."

  1. Dear Mr. Pawlikowski,
    This is the best post on ADF v2 that I have found so far on the internet. I have passed the link to this blog post to a couple of colleagues in Bengaluru, India.
    Thank you very much for sharing your knowledge.

  2. Hi, thank you for this blog post – it's really good. I was looking for a way to simplify and use configurations for loading data from several source databases, and this is perfect. I would also say that the "columns" config could actually contain the SQL itself (if you needed to use any functions on the source data while loading, or even join multiple tables together in the source query).

    Anyway, great job on this and very helpful.


  3. Hi Michał,

    Thanks for sharing. When I follow your steps, I don't know why, but there is an error in the COPY ORA activity saying that no source dataset was found. When I did a simple copy activity, there was no problem defining the same source dataset.

    We have tried troubleshooting it in many ways, but were not able to resolve it. Can you give some suggestions? Thank you very much. =)

    "Code": 11000,
    "Message": "'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=The column 'UPDATE_DATE1' as defined in the source DataSet could not be located in the actual source. Check the configuration to ensure that all columns in the source DataSet does exist in the actual source.,Source=Microsoft.DataTransfer.ClientLibrary,'",

      1. Hi Michał,
        Thanks for the reply. "UPDATE_DATE1" is a column in the Oracle source table, used to compare data in the ORA Copy activity. We did verify that the column exists, and a previous activity also referenced the dataset successfully. It would be helpful if you could give some suggestions for troubleshooting.

      1. Oh… Sorry to hear that – hope you are getting better and recover soon. =)
        The issue was just a mapping error; after we adjusted it to the correct column name, it was resolved. Thank you.

  4. It's an awesome post and really very detailed – thanks for writing this. I am new to ADF and currently working on a project to load multiple JSON files (in different structures) to target tables using ADF v2, so can that also be built dynamically like this? If you have an answer, please throw some light on it.

    1. Devendra Kumar, oh well, honestly it depends.
      Everything that has "Add dynamic content" can be parametrized.
      And if you are asking about dynamic content remapping, it also depends on the structure of the JSON files. They can have a set of objects or arrays, and can have a lot of tables defined inside or just one. Unfortunately, there will always be a problem with semi-structured files, which require parsing and checking their structure 🙁
      If the schema is the same for all files – I think it will be possible to do it dynamically. If not – well, hard to say 😐

    2. Look also at this site:
      https://docs.microsoft.com/en-us/azure/data-factory/supported-file-formats-and-compression-codecs#json-format

      Head to the example starting with:
      Sample 2: cross apply multiple objects with the same pattern from array

      Note this:

      If the structure and jsonPathDefinition are not defined in the Data Factory dataset, the Copy Activity detects the schema from the first object and flatten the whole object.
      If the JSON input has an array, by default the Copy Activity converts the entire array value into a string. You can choose to extract data from it using jsonNodeReference and/or jsonPathDefinition, or skip it by not specifying it in jsonPathDefinition.

      It means that it can be controlled, but it will be quite a challenge 😐

  5. Hi Michal, thank you for the great explanation. Looking at your example, I was able to create an ADF pipeline loading on-prem Oracle tables into Azure Data Lake Gen2 blob containers. Did you come across the issue of loading tables as files incrementally, instead of overwriting previous loads?
    Bruce.

    1. Hi Bruce.
      I'm afraid you must describe it a little more fully 🙂
      I did not use Data Lake Storage as a sink, but as far as I understand that is not the issue (everything is working fine with copying and detecting the delta on the Oracle side?)

      So maybe you just want to know how to handle deltas (incremental load) in structures like Data Lake Storage?
      As far as I know, ADLS does not have any mechanism to apply a new portion of data to an already existing file. So you have to implement it as deltas partitioned by folder path and file names. Every new portion of data will then sink into a different folder and a different file, but automatically 🙂

      Look at this article and its point number 2:
      https://www.blue-granite.com/blog/four-tips-for-using-azure-data-factory-to-load-your-data-to-azure-data-lake-store

      Choose your scenario.

      Then look at the documentation of ADLS (by the way, are you using v1 or v2?)
      https://docs.microsoft.com/en-us/azure/data-factory/v1/data-factory-azure-datalake-connector#the-partitionedby-property

      You have to use the partitionedBy property to store every increment in a separate path/file, then – depending on your architecture – implement a mechanism that will parse all the data and produce what you want, e.g. a particular type of slowly changing dimension, a history table, or maybe a current snapshot? Sky is the limit ;D

      Summing it up:
      make your incremental loads from Oracle sink into ADLS with partitioning; it will put them in different paths and files. Then use them as you want, with a tool of your choice (U-SQL, Spark, Data Warehouse T-SQL, etc.)

    1. I believe fileName can be parametrized the same as folderPath. They are just expression fields.

      "fileName": {
        "value": "EBC.rpt_BriefingActivitySummary.tsv",
        "type": "Expression"
      },
      "folderPath": {
        "value": "@concat('/Snapshots/EBC/rpt_BriefingActivitySummary/', formatDateTime(pipeline().parameters.scheduledRunTime, 'yyyy'), '/', formatDateTime(pipeline().parameters.scheduledRunTime, 'MM'), '/', formatDateTime(pipeline().parameters.scheduledRunTime, 'dd'), '/')",
        "type": "Expression"
      }

  6. Hi Michal, you are absolutely correct – fileName can be declared as a pipeline parameter and filled with a value at the sink destination. The folder path can be simplified:

    "folderPath": {
      "value": "@concat('/dev-raw-data-zone/oracle_erp_full_tables/', formatDateTime(pipeline().parameters.windowStart, 'yyyy/MM/dd'))",
      "type": "Expression"
    }
    I want to thank you again – your example is the most complete, understandable and comprehensive learning guideline I was able to find online.
    Bruce.
