Connecting to Azure Data Lake Storage Gen2 from PowerShell using REST API – a step-by-step guide

This article describes how to connect using the access key. I also prepared another article showing how to connect to ADLS Gen2 using an OAuth bearer token and upload a file. It is definitely easier (no canonical headers and no signing), although it requires an application account. You can read more here: http://sql.pawlikowski.pro/2019/07/02/uploading-file-to-azure-data-lake-storage-gen2-from-powershell-using-oauth-2-0-bearer-token-and-acls/

 

 

Introduction


Azure Data Lake Storage Generation 2 was introduced in the middle of 2018. With new features like hierarchical namespaces and Azure Blob Storage integration, it was something better, faster, cheaper (blah, blah, blah!) compared to its first version – Gen1.

Since then, there has been plenty of time to prepare proper client libraries that would let us connect to our data lake.

Is that right? Well, not really…

Ok, it’s August 2019 and something finally has changed 🙂

Ok, it’s February 2020 and finally, MPA is in GA!

Microsoft introduced a public preview of Multi Protocol Access (MPA), which enables the Blob API on hierarchical namespace accounts.

Initially it worked only in the West US 2 and West Central US regions; now it works in all regions, and of course it has some limitations (click here).

You can read about the details below:

https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-multi-protocol-access

ADLS Gen2 has been generally available since the 7th of February 2019. Thirty-two days later, there is still no support for the Blob API, which also means no support for the az storage CLI or the Blob REST API. So you're going to run into problems like this:

“Blob API is not yet supported for hierarchical namespace accounts”

Are you joking?

And good luck searching for a proper module in the PowerShell Gallery. Of course, this will change in the future.

As a matter of fact – I hope that this article will help someone to write it 🙂 (yeah, I'm too lazy or too busy or too stupid to do it myself 😛 )

So for now, there is only one way to connect to Azure Data Lake Storage Gen2… Using native REST API calls. And it's a hell of a job to understand the specification and make it work in code. But it's still not rocket science 😛

And by the way, I'm just a pure MSSQL Server boy, I need no sympathy 😛 So let me put it this way: web development, APIs, REST and all of that crap are not in my field of interest. But sometimes you have to be your own hero to save the world… I needed this functionality in a project, so I read all the documentation and a few blog posts. However, no source has presented how to do it for ADLS Gen2 😮

So now I'll try to fill this gap and explain it as far as I understand it, but not from the perspective of a professional front-end/back-end developer, which I am definitely not!

 

ADLS Gen2 REST calls in action – sniffing with Azure Storage Explorer


I wrote an entire article about How to sniff ADLS Gen2 storage REST API calls to Azure using Azure Storage Explorer.

Go there, read it, try it for yourself. If you need to implement it in your code just look at how they are doing it.

 

Understanding Data Lake REST calls


If you want to talk to the ADLS Gen2 endpoint in its language, you have to learn "two dialects" 😛

  • REST specification dedicated only to Azure Data Lake Storage Gen2 (URL with proper endpoints and parameters), documented HERE.
  • GENERAL specification for authenticating connections to any Azure service endpoint using a "Shared Key", documented HERE.

Knowing the first one gives you the ability to invoke proper commands with proper parameters on ADLS. Just like in a console: mkdir or ls.

Knowing the second – just how to sign those commands. Bear in mind that no bearer tokens (see the details in the green box at the beginning of this page…), no passwords and no keys are transferred in the communication! Basically, you prepare your request as a bunch of strictly specified headers with their proper parameters (concatenated and separated with a new line sign) and then you "simply" SIGN this one huge string with your Shared Access Key taken from the ADLS portal (or a script).

Then you just send your request over a plain http(s) connection with one additional header called "Authorization", which stores this signature, along with all your other headers!

You may ask: whyyyyy, why do we have to implement such logic?

The answer is easy. Just because they say so 😀

But to be honest, this makes total sense.

The ADLS endpoint will receive your request. Then it will also SIGN it using your secret Shared Key (which, once again, wasn't transferred anywhere in the communication, it's a secret!). After computing the signature, it will compare the result to your "Authorization" header. If they are the same, it means two things:

  • you are the master of disaster, the owner of the secret key that nobody else could use to sign the message for ADLS
  • there was no man-in-the-middle injection of any malicious content, and that's the main reason why this is so tortuous…

How to sign it is just a different kettle of fish… You have to compute a keyed-Hash Message Authentication Code (HMAC) with the SHA256 algorithm, keyed with the byte array decoded from your ADLS shared access key (which, by the way, is itself a Base64 string :D), and then convert the result to a Base64 value. But no worries! It's not as complicated as it sounds, and we have functions in PowerShell that can do it for us.

 

Authorization Header specification


Ok, let's look closer at the Azure authorization REST specification here: https://docs.microsoft.com/en-us/rest/api/storageservices/authorize-with-shared-key

We are implementing an ADLS Gen2 file system request, which belongs to the "Blob, Queue, and File Services" section. If you are going to do this for the Table service, or if you would like to implement the "light" version (Shared Key Lite), please look for the proper paragraph in the docs.

Preparing the Signature String


A quick look at the signature string format, straight from the docs:
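StringToSign = VERB + "\n" +
               Content-Encoding + "\n" +
               Content-Language + "\n" +
               Content-Length + "\n" +
               Content-MD5 + "\n" +
               Content-Type + "\n" +
               Date + "\n" +
               If-Modified-Since + "\n" +
               If-Match + "\n" +
               If-None-Match + "\n" +
               If-Unmodified-Since + "\n" +
               Range + "\n" +
               CanonicalizedHeaders +
               CanonicalizedResource;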

We can divide it into 4 important sections.

  1. VERB – the kind of HTTP operation that we are going to invoke in REST (e.g. GET will be used to list files, PUT for creating directories or uploading content to files, PATCH for changing permissions)
  2. Fixed Position Header Values – these must occur in every call to REST. They are obligatory to send, BUT you can leave them as empty strings (just "" with the obligatory end of line sign)
  3. CanonicalizedHeaders – what should go there is also specified in the documentation. Basically, you have to put all available "x-ms-" headers here. And this is important – in lexicographical order by header name! For example: "x-ms-date:$date`nx-ms-version:2018-11-09`n"
  4. CanonicalizedResource – also specified in the docs. Here come all the query parameters (with lowercase names) that you pass to the ADLS endpoint according to its own specification. They should be ordered exactly the same as they are in the invoked URI. So if you want to list your files recursively in the root directory, invoking the endpoint at https://[StorageAccountName].dfs.core.windows.net/[FilesystemName]?recursive=true&resource=filesystem, you need to pass them to the signature just like this: "/[StorageAccountName]/[FilesystemName]`nrecursive:true`nresource:filesystem"

Bear in mind that every value HAS TO end with the new line sign, except for the very last one in the canonicalized resource section. Header and parameter names also need to be provided as lowercase values!

In the examples above, I'm using `n which in PowerShell is just the equivalent of \n

 

So, if I want to list files in the root directory, first I declare the parameters (the values below are placeholders, use your own):

 
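$StorageAccountName = "mystorageaccount"            # placeholder: your ADLS Gen2 account name
$FilesystemName = "myfilesystem"                    # placeholder: your filesystem (container) name
$AccessKey = "PASTE_YOUR_SHARED_ACCESS_KEY_HERE=="  # placeholder: shared access key from the portal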

Then I need to prepare a date, according to the specification. And watch out! Read really carefully the "Specifying the Date Header" section in the docs! It's really important to understand how the Date headers work! The most important part: if you send the x-ms-date header, you can leave the standard Date field empty in the signature string; the service will use x-ms-date in its place.

Ok, let's create it in PowerShell. I'm also preparing the new line sign and my method as variables, ready to be used later:

 
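$date = [System.DateTime]::UtcNow.ToString("R")   # UTC date in RFC1123 format, e.g. Tue, 18 Feb 2020 10:00:00 GMT
$n = "`n"                                          # the new line sign
$method = "GET"                                    # we are going to list files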

So what about the fixed position headers? For listing files in ADLS, and with x-ms-date defined, I can leave them all empty. It would be different if you wanted to implement PUT and upload content to Azure; then you should use at least Content-Length and Content-Type. For now it looks like this:

 
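$stringToSign = "$method$n"  # VERB
$stringToSign += "$n"        # Content-Encoding
$stringToSign += "$n"        # Content-Language
$stringToSign += "$n"        # Content-Length (empty, the body is empty)
$stringToSign += "$n"        # Content-MD5
$stringToSign += "$n"        # Content-Type
$stringToSign += "$n"        # Date (empty, x-ms-date is used instead)
$stringToSign += "$n"        # If-Modified-Since
$stringToSign += "$n"        # If-Match
$stringToSign += "$n"        # If-None-Match
$stringToSign += "$n"        # If-Unmodified-Since
$stringToSign += "$n"        # Range

# CanonicalizedHeaders: all x-ms-* headers, lexicographical order, each ended with `n
$stringToSign += "x-ms-date:$date" + $n + "x-ms-version:2018-11-09" + $n

# CanonicalizedResource: /account/filesystem plus the query parameters, no trailing `n
$stringToSign += "/$StorageAccountName/$FilesystemName" + $n + "recursive:true" + $n + "resource:filesystem"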

  • Implementing file upload is much more complex than only listing files. According to the docs, you have to first CREATE a file, then UPLOAD content to it (and it looks like you can do this also in parallel 😀 Happy threading/forking!)
  • Don't forget about the required headers. An upload is going to need more than listing does.
  • You can also change permissions here, see setAccessControl
  • There is also a huuuge topic about conditional headers. Don't miss it!

Signing the Signature String


Let's have fun 🙂 Part of the credit goes to another blog with an example of a connection to storage tables.

First, we have to decode our Shared Access Key from its Base64 form into a byte array:

 
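$sharedKey = [System.Convert]::FromBase64String($AccessKey)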

Then we create an HMAC SHA256 object and fill its key with the bytes from our access key:

 
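$hasher = New-Object System.Security.Cryptography.HMACSHA256
$hasher.Key = $sharedKey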

Now we can actually sign our string using the ComputeHash function of our HMAC SHA256 object. But before hashing, we have to convert our string into a byte array.

As the specification requires, we have to encode the result once again into Base64. Both steps fit into one line:

 
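$signedSignature = [System.Convert]::ToBase64String($hasher.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($stringToSign)))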

And use it in the Authorization header, with the storage account name before it.

Here we also have to add the ordinary headers with the date of sending the request and the API version.

Remember to use the proper version (a date), which you can find in the ADLS Gen2 REST API spec:

 
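$authHeader = "SharedKey ${StorageAccountName}:$signedSignature"

$headers = @{"x-ms-date" = $date}
$headers.Add("x-ms-version", "2018-11-09")
$headers.Add("Authorization", $authHeader)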

That’s it!

 

Now you can invoke the request to your endpoint:

 
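$URI = "https://$StorageAccountName.dfs.core.windows.net/" + $FilesystemName + "?recursive=true&resource=filesystem"
$result = Invoke-RestMethod -Method $method -Uri $URI -Headers $headers
$result.paths   # the response JSON contains a "paths" array with the listed objects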

Result:

It works!

 

Now a small debug session in Visual Studio Code, adding a watch for the $result variable to see a little more than in the console output:

From here you can see that recursive=true did the trick, and now I have a list of 4 objects: three folders in the root directory and one file in FOLDER2, sized 49 MB. Just remember that the API has a limit of 5000 objects per request.

Easy peasy, right? 😉 Head down to the example scripts and try it for yourself!

Example Scripts



List files


This example should list the content of the root folder in your Azure Data Lake Storage Gen2 account, along with all subdirectories and all existing files, recursively.
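A complete sketch assembled from the walkthrough above (account name, filesystem name and access key are placeholders, substitute your own):

$StorageAccountName = "mystorageaccount"
$FilesystemName = "myfilesystem"
$AccessKey = "PASTE_YOUR_SHARED_ACCESS_KEY_HERE=="

$date = [System.DateTime]::UtcNow.ToString("R")
$n = "`n"
$method = "GET"

$stringToSign = "$method$n" + ("$n" * 11)   # VERB plus the eleven empty fixed position headers
$stringToSign += "x-ms-date:$date" + $n + "x-ms-version:2018-11-09" + $n
$stringToSign += "/$StorageAccountName/$FilesystemName" + $n + "recursive:true" + $n + "resource:filesystem"

$hasher = New-Object System.Security.Cryptography.HMACSHA256
$hasher.Key = [System.Convert]::FromBase64String($AccessKey)
$signedSignature = [System.Convert]::ToBase64String($hasher.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($stringToSign)))

$headers = @{"x-ms-date" = $date; "x-ms-version" = "2018-11-09"; "Authorization" = "SharedKey ${StorageAccountName}:$signedSignature"}

$URI = "https://$StorageAccountName.dfs.core.windows.net/" + $FilesystemName + "?recursive=true&resource=filesystem"
$result = Invoke-RestMethod -Method $method -Uri $URI -Headers $headers
$result.paths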


List files in a directory, limit results, return recursively


This example should list the content of the requested folder in Azure Data Lake Storage Gen2. You can limit the number of returned results (up to 5000) and decide if files in subfolders should be returned recursively.
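The same skeleton, with only the parameters, the canonicalized resource and the URI changed. Note that in the URI I keep the documented maxResults casing, while in the signature the parameter names are lowercased:

$StorageAccountName = "mystorageaccount"
$FilesystemName = "myfilesystem"
$AccessKey = "PASTE_YOUR_SHARED_ACCESS_KEY_HERE=="
$Directory = "FOLDER1"   # directory to list, without the leading "/"
$MaxResults = 100        # up to 5000
$Recursive = "false"     # "true" to descend into subfolders

$date = [System.DateTime]::UtcNow.ToString("R")
$n = "`n"
$method = "GET"

$stringToSign = "$method$n" + ("$n" * 11)
$stringToSign += "x-ms-date:$date" + $n + "x-ms-version:2018-11-09" + $n
$stringToSign += "/$StorageAccountName/$FilesystemName" + $n + "directory:$Directory" + $n + "maxresults:$MaxResults" + $n + "recursive:$Recursive" + $n + "resource:filesystem"

$hasher = New-Object System.Security.Cryptography.HMACSHA256
$hasher.Key = [System.Convert]::FromBase64String($AccessKey)
$signedSignature = [System.Convert]::ToBase64String($hasher.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($stringToSign)))

$headers = @{"x-ms-date" = $date; "x-ms-version" = "2018-11-09"; "Authorization" = "SharedKey ${StorageAccountName}:$signedSignature"}

$URI = "https://$StorageAccountName.dfs.core.windows.net/" + $FilesystemName + "?directory=$Directory&maxResults=$MaxResults&recursive=$Recursive&resource=filesystem"
$result = Invoke-RestMethod -Method $method -Uri $URI -Headers $headers
$result.paths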


Create directory (or path of directories)


This example should create the folders declared in the PathToCreate variable.

Bear in mind that creating a path here means creating everything declared in that path. So there is no need to create FOLDER1 first and FOLDER1/SUBFOLDER1 second. You can make them all at once just by providing the full path "/FOLDER1/SUBFOLDER1/", as in the sketch below.
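A sketch under the same assumptions as the previous examples:

$StorageAccountName = "mystorageaccount"
$FilesystemName = "myfilesystem"
$AccessKey = "PASTE_YOUR_SHARED_ACCESS_KEY_HERE=="
$PathToCreate = "/FOLDER1/SUBFOLDER1/"

$date = [System.DateTime]::UtcNow.ToString("R")
$n = "`n"
$method = "PUT"

$stringToSign = "$method$n" + ("$n" * 11)   # Content-Length stays empty, the body is empty
$stringToSign += "x-ms-date:$date" + $n + "x-ms-version:2018-11-09" + $n
$stringToSign += "/$StorageAccountName/$FilesystemName$PathToCreate" + $n + "resource:directory"

$hasher = New-Object System.Security.Cryptography.HMACSHA256
$hasher.Key = [System.Convert]::FromBase64String($AccessKey)
$signedSignature = [System.Convert]::ToBase64String($hasher.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($stringToSign)))

$headers = @{"x-ms-date" = $date; "x-ms-version" = "2018-11-09"; "Authorization" = "SharedKey ${StorageAccountName}:$signedSignature"}

$URI = "https://$StorageAccountName.dfs.core.windows.net/" + $FilesystemName + $PathToCreate + "?resource=directory"
$result = Invoke-RestMethod -Method $method -Uri $URI -Headers $headers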


Rename file/folder (or move to another path)


This example should rename a given path to another one; you can also move files with this command (with hierarchical namespaces, the path to a file is just metadata).

Example parameters:
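For example (placeholder paths, both relative to the filesystem and starting with "/"):

$PathToRename = "/FOLDER1/file.csv"      # existing file or folder
$NewPath = "/FOLDER2/file_renamed.csv"   # target path inside the same filesystem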


And the most requested code 😀
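A sketch; note that the source path travels in the x-ms-rename-source header, which, being an x-ms-* header, also has to land in the canonicalized headers in lexicographical order:

$StorageAccountName = "mystorageaccount"
$FilesystemName = "myfilesystem"
$AccessKey = "PASTE_YOUR_SHARED_ACCESS_KEY_HERE=="

$date = [System.DateTime]::UtcNow.ToString("R")
$n = "`n"
$method = "PUT"
$renameSource = "/" + $FilesystemName + $PathToRename   # uses $PathToRename declared above

$stringToSign = "$method$n" + ("$n" * 11)
$stringToSign += "x-ms-date:$date" + $n + "x-ms-rename-source:$renameSource" + $n + "x-ms-version:2018-11-09" + $n
$stringToSign += "/$StorageAccountName/$FilesystemName$NewPath"

$hasher = New-Object System.Security.Cryptography.HMACSHA256
$hasher.Key = [System.Convert]::FromBase64String($AccessKey)
$signedSignature = [System.Convert]::ToBase64String($hasher.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($stringToSign)))

$headers = @{"x-ms-date" = $date; "x-ms-version" = "2018-11-09"; "x-ms-rename-source" = $renameSource; "Authorization" = "SharedKey ${StorageAccountName}:$signedSignature"}

$URI = "https://$StorageAccountName.dfs.core.windows.net/" + $FilesystemName + $NewPath
$result = Invoke-RestMethod -Method $method -Uri $URI -Headers $headers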


Delete folder or file (simple, without continuation)


This example should delete a file or folder, giving you also an option to delete a folder recursively.

Bear in mind that this example uses a simple delete, without continuation. To learn what that is and how to handle it, please refer to the section describing the "continuation" parameter in the documentation here (click)

Usage example:
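A sketch, deleting FOLDER1 with everything inside:

$StorageAccountName = "mystorageaccount"
$FilesystemName = "myfilesystem"
$AccessKey = "PASTE_YOUR_SHARED_ACCESS_KEY_HERE=="
$PathToDelete = "/FOLDER1"
$Recursive = "true"   # "false" works only for files and empty folders

$date = [System.DateTime]::UtcNow.ToString("R")
$n = "`n"
$method = "DELETE"

$stringToSign = "$method$n" + ("$n" * 11)
$stringToSign += "x-ms-date:$date" + $n + "x-ms-version:2018-11-09" + $n
$stringToSign += "/$StorageAccountName/$FilesystemName$PathToDelete" + $n + "recursive:$Recursive"

$hasher = New-Object System.Security.Cryptography.HMACSHA256
$hasher.Key = [System.Convert]::FromBase64String($AccessKey)
$signedSignature = [System.Convert]::ToBase64String($hasher.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($stringToSign)))

$headers = @{"x-ms-date" = $date; "x-ms-version" = "2018-11-09"; "Authorization" = "SharedKey ${StorageAccountName}:$signedSignature"}

$URI = "https://$StorageAccountName.dfs.core.windows.net/" + $FilesystemName + $PathToDelete + "?recursive=$Recursive"
$result = Invoke-RestMethod -Method $method -Uri $URI -Headers $headers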


List filesystems


This example lists all available filesystems in your storage account. Refer to $result.filesystems to get the array of filesystem objects.
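A sketch; note the trailing "/" after the account name in the canonicalized resource:

$StorageAccountName = "mystorageaccount"
$AccessKey = "PASTE_YOUR_SHARED_ACCESS_KEY_HERE=="

$date = [System.DateTime]::UtcNow.ToString("R")
$n = "`n"
$method = "GET"

$stringToSign = "$method$n" + ("$n" * 11)
$stringToSign += "x-ms-date:$date" + $n + "x-ms-version:2018-11-09" + $n
$stringToSign += "/$StorageAccountName/" + $n + "resource:account"

$hasher = New-Object System.Security.Cryptography.HMACSHA256
$hasher.Key = [System.Convert]::FromBase64String($AccessKey)
$signedSignature = [System.Convert]::ToBase64String($hasher.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($stringToSign)))

$headers = @{"x-ms-date" = $date; "x-ms-version" = "2018-11-09"; "Authorization" = "SharedKey ${StorageAccountName}:$signedSignature"}

$URI = "https://$StorageAccountName.dfs.core.windows.net/?resource=account"
$result = Invoke-RestMethod -Method $method -Uri $URI -Headers $headers
$result.filesystems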


Create filesystem


This example should create the filesystem provided in the FilesystemName parameter. Remember that filesystem names cannot contain capital letters or special characters!
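A sketch:

$StorageAccountName = "mystorageaccount"
$FilesystemName = "mynewfilesystem"   # lowercase letters and numbers only
$AccessKey = "PASTE_YOUR_SHARED_ACCESS_KEY_HERE=="

$date = [System.DateTime]::UtcNow.ToString("R")
$n = "`n"
$method = "PUT"

$stringToSign = "$method$n" + ("$n" * 11)
$stringToSign += "x-ms-date:$date" + $n + "x-ms-version:2018-11-09" + $n
$stringToSign += "/$StorageAccountName/$FilesystemName" + $n + "resource:filesystem"

$hasher = New-Object System.Security.Cryptography.HMACSHA256
$hasher.Key = [System.Convert]::FromBase64String($AccessKey)
$signedSignature = [System.Convert]::ToBase64String($hasher.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($stringToSign)))

$headers = @{"x-ms-date" = $date; "x-ms-version" = "2018-11-09"; "Authorization" = "SharedKey ${StorageAccountName}:$signedSignature"}

$URI = "https://$StorageAccountName.dfs.core.windows.net/" + $FilesystemName + "?resource=filesystem"
$result = Invoke-RestMethod -Method $method -Uri $URI -Headers $headers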


Get permissions from filesystem or path


It will return the raw value of the x-ms-acl property, like: user::rwx,group::r-x,other::---
In the Path parameter:
If you want to get permissions for the filesystem itself, just use "/".
If you want to get permissions for a given path, use it without the leading "/", e.g.: "Folder1/Folder2/File.csv"
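A sketch; the ACL comes back in a response header, so I'm using Invoke-WebRequest here instead of Invoke-RestMethod to get access to the response headers:

$StorageAccountName = "mystorageaccount"
$FilesystemName = "myfilesystem"
$AccessKey = "PASTE_YOUR_SHARED_ACCESS_KEY_HERE=="
$Path = "Folder1/Folder2/File.csv"   # or "/" for the filesystem itself

$date = [System.DateTime]::UtcNow.ToString("R")
$n = "`n"
$method = "HEAD"
$cleanPath = $Path.TrimStart("/")

$stringToSign = "$method$n" + ("$n" * 11)
$stringToSign += "x-ms-date:$date" + $n + "x-ms-version:2018-11-09" + $n
$stringToSign += "/$StorageAccountName/$FilesystemName/$cleanPath" + $n + "action:getAccessControl"

$hasher = New-Object System.Security.Cryptography.HMACSHA256
$hasher.Key = [System.Convert]::FromBase64String($AccessKey)
$signedSignature = [System.Convert]::ToBase64String($hasher.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($stringToSign)))

$headers = @{"x-ms-date" = $date; "x-ms-version" = "2018-11-09"; "Authorization" = "SharedKey ${StorageAccountName}:$signedSignature"}

$URI = "https://$StorageAccountName.dfs.core.windows.net/" + $FilesystemName + "/" + $cleanPath + "?action=getAccessControl"
$response = Invoke-WebRequest -Method $method -Uri $URI -Headers $headers -UseBasicParsing
$response.Headers["x-ms-acl"]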

Set permissions on filesystem or path


In the Path parameter:
If you want to set permissions for the filesystem itself, just use "/".
If you want to set permissions for a given path, use it without the leading "/", e.g.: "Folder1/Folder2/File.csv"
Example of setting the default --x (execute) permission for "Other" (set it as the $PermissionString variable):
user::rwx,group::r-x,other::--x,default:other::--x
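And a sketch for setting it; the ACL travels in the x-ms-acl request header, which also has to be signed among the canonicalized headers (lexicographically, x-ms-acl comes first):

$StorageAccountName = "mystorageaccount"
$FilesystemName = "myfilesystem"
$AccessKey = "PASTE_YOUR_SHARED_ACCESS_KEY_HERE=="
$Path = "Folder1/Folder2/File.csv"   # or "/" for the filesystem itself
$PermissionString = "user::rwx,group::r-x,other::--x,default:other::--x"

$date = [System.DateTime]::UtcNow.ToString("R")
$n = "`n"
$method = "PATCH"
$cleanPath = $Path.TrimStart("/")

$stringToSign = "$method$n" + ("$n" * 11)
$stringToSign += "x-ms-acl:$PermissionString" + $n + "x-ms-date:$date" + $n + "x-ms-version:2018-11-09" + $n
$stringToSign += "/$StorageAccountName/$FilesystemName/$cleanPath" + $n + "action:setAccessControl"

$hasher = New-Object System.Security.Cryptography.HMACSHA256
$hasher.Key = [System.Convert]::FromBase64String($AccessKey)
$signedSignature = [System.Convert]::ToBase64String($hasher.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($stringToSign)))

$headers = @{"x-ms-date" = $date; "x-ms-version" = "2018-11-09"; "x-ms-acl" = $PermissionString; "Authorization" = "SharedKey ${StorageAccountName}:$signedSignature"}

$URI = "https://$StorageAccountName.dfs.core.windows.net/" + $FilesystemName + "/" + $cleanPath + "?action=setAccessControl"
$result = Invoke-RestMethod -Method $method -Uri $URI -Headers $headers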

Good luck!

102 thoughts on “Connecting to Azure Data Lake Storage Gen2 from PowerShell using REST API – a step-by-step guide”

  1. Hi Michal,
    Is there a way I can get a list of all the files/folders to which a user has access?
    I want to take a username as an argument and list all the files for that user to which he has access.

    Thanks

    1. Hi Irfan.
      Unfortunately, the REST API does not have such a possibility by default. You always get the list of files and directories in the context of the connection.
      In my example the context is handled by the access key, so it means that you view files in the context of something like an admin account.
      The path object definition contains a “permissions” property. It stores the permissions for different “layers” of security.

      You can parse it to read what permission is set on the object. Of course, these permissions can be granted from different layers: I mean from “OTHER”, owner, group, or explicitly granted to the given user. So it’s not impossible to implement, but it will not be easy 🙁

      The second option: if the connection can be made by the user (so he or she will provide a password or token from the context), you can use bearer authentication (OAuth 2.0) to connect to the ADLS Gen2 REST API. I did not try it, but I know that it is possible, and Azure Storage Explorer and azcopy are using it when the user does not have permission on the service level but does have it on the ACL level.

      Some examples using curl: https://social.msdn.microsoft.com/Forums/sqlserver/ja-JP/dc102604-bdb7-47be-8de4-dc47a42e31a4/azure-data-lake-gen2-rest-api?forum=AzureDataLake

      1. Thanks a lot for the response.
        While setting permissions for a file, it seems to overwrite the previous permissions.
        If I have given access to UserA and I run the script again to give access to UserB, then UserB will have access but not UserA; it looks like it overwrites the access properties. How can I avoid this situation?

        $URI = "https://$StorageAccountName.dfs.core.windows.net/" + $FilesystemName + "/" + $Path + "?action=setAccessControl"

        Thanks,
        Irfan

  2. Hi Michal,

    I was working on moving (uploading) files from local to an Azure ADLS Gen2 storage account. Thanks for your detailed blog, I am able to create the directories. But can you please help me with uploading files to those newly created folders on the Azure ADLS Gen2 storage account?
    I know uploading files to BLOB is relatively easy, but I am not sure how to do the same in ADLS Gen2.

    Thanks & Regards,
    JD

    1. Hi JD,
      I do not have much time lately; anyway, I have in my plans to prepare examples on how to connect to ADLS Gen2 using an OAuth2 token for a service principal. Then I'll prepare another example of file uploading. Probably I'll do it during this weekend.

      If you want to start doing this by yourself, the basic approach is to create a file (like you create a directory, but with a different header “resource:file”), then use the Path – Update API with action=append, position 0 and the body filled with the content of your file. Then you need to flush the content, so once again Path – Update, but with action=flush and the position equal to the length of the appended content.

      I'm still not sure about the maximum request size, so what is the maximum possible file size that the REST API will accept in the input. There is an error: “413 Request Entity Too Large” which indicates that this could be an issue. And with Path – Update and the proper position (offset in bytes) you can upload a file over many connections, in chunks… Anyway, I will try to test it, but I do not have much experience in REST file uploading, so I'm not sure how to handle this… For example, azcopy (and Azure Storage Explorer) always sends files to ADLS Gen2 in many small chunks. If you don't believe it, just look into the logs in your .azcopy catalog after the upload 😀 (it's in your c:/Users/[your_account]/)

  3. Thanks for the tutorial

    But how about reading a file inside the Data Lake Store Gen2, so that I could use the file on a local computer for some processing?

    Any help would be highly appreciated.

  4. Hi, thanks for the gr8 write-up. I do have one question: I am trying to set permissions for all the child folders and files, and I added recursive=true in the above set permission code, but still can't make it work. What am I doing wrong?

    1. Merin Kumar SP,
      unfortunately, it will not work.
      The “recursive” parameter is dedicated only to the delete folder process.
      The Update Path API does not support any recursive update, as you can see here:
      https://docs.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/update

      This can be handled only manually, so you need to traverse all subfolders and files using your own algorithm and set permissions on every level.

      On the other hand, according to this article:

      https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-use-hdfs-data-lake-storage

      you can use the hdfs CLI on ADLS Gen2. And the hdfs CLI supports the -R option in the setfacl command.
      Details: https://www.cloudera.com/documentation/enterprise/5-9-x/topics/cdh_sg_hdfs_ext_acls.html#concept_urf_fhk_q4__section_krq_pwv_ls

  5. Hi, the article is the best, it helped a lot in figuring out the headers. I followed the same approach to PUT a file but I am getting an error. Following is the code I am using. Could you please let me know what I am doing wrong?
    $date = [System.DateTime]::UtcNow.ToString("R")
    $n = "`n"
    $method = "PUT"
    $stringToSign = "$method$n" #VERB
    $stringToSign += "$n" # Content-Encoding + "\n" +
    $stringToSign += "$n" # Content-Language + "\n" +
    $stringToSign += "$n" # Content-Length + "\n" +
    $stringToSign += "$n" # Content-MD5 + "\n" +
    $stringToSign += "$n" # Content-Type + "\n" +
    $stringToSign += "$n" # Date + "\n" +
    $stringToSign += "$n" # If-Modified-Since + "\n" +
    $stringToSign += "$n" # If-Match + "\n" +
    $stringToSign += "$n" # If-None-Match + "\n" +
    $stringToSign += "$n" # If-Unmodified-Since + "\n" +
    $stringToSign += "$n" # Range + "\n" +
    $stringToSign +=

    "x-ms-date:$date" + $n +
    "x-ms-version:2018-11-09" + $n #

    $stringToSign +=

    "/$StorageAccountName/$FilesystemName" + $n +
    "recursive:false" + $n +
    "resource:file"#

    $StorageAccountName = "ctmsdatafiles"
    $FilesystemName = "itsscsqlbackup"
    #$foldername="CTMSDataFiles"
    $filename="outputfile.txt"
    $AccessKey="******************************************************************************"
    $sharedKey = [System.Convert]::FromBase64String($AccessKey)
    $hasher = New-Object System.Security.Cryptography.HMACSHA256
    $hasher.Key = $sharedKey
    $signedSignature = [System.Convert]::ToBase64String($hasher.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($stringToSign)))

    $authHeader = "SharedKey ${StorageAccountName}:$signedSignature"

    $headers = @{"x-ms-date"=$date}
    $headers.Add("x-ms-version","2018-11-09")
    $headers.Add("Authorization",$authHeader)
    $headers.Add("Content-Length", 0)
    #$stringToSign
    $URI = "https://$StorageAccountName.dfs.core.windows.net/" + $FilesystemName + "/one.txt" + "?resource=file"

    $URI
    $headers
    $method
    $result=""
    [System.Net.WebRequest]::DefaultWebProxy.Credentials = [System.Net.CredentialCache]::DefaultCredentials

    $result = Invoke-RestMethod -method $method -Uri $URI -Headers $headers -UserAgent ([Microsoft.PowerShell.Commands.PSUserAgent]::InternetExplorer)
    $result
    I am getting the error while trying to create the file, with the following message:
    {"error":{"code":"AuthenticationFailed","message":"Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.\nRequestId:182d84a3-701f-0034-275b-3dc389000000\nTime:2019-07-18T11:25:56.2893901Z"}}

    1. Hi Siva,
      sorry for the late reply. I'm quite busy this month 🙂
      I do not have the possibility to reproduce your example, BUT so far I can tell that you are not using the proper request and method. This should not be PUT but PATCH, and this should not be resource=file but action. It's kinda different from what was implemented in the BLOB API (AFAIK) (putting is not enough, you need to create the file first, then patch it, then flush it)

      Look, I wrote an article on how to upload a file using a Service Principal, ACL rights and of course PowerShell. Everything is explained over there: http://sql.pawlikowski.pro/2019/07/02/uploading-file-to-azure-data-lake-storage-gen2-from-powershell-using-oauth-2-0-bearer-token-and-acls/

      Try to use it in your example. Hope that helps. Regards, m.

  6. Hi Michał,
    thanks for the good article. And do you know how to update a file (one which is not empty) using the REST API? I'm always getting a position incorrect error.

    1. Hi Ivan,
      I did not try to upload to an already flushed file… But this is interesting. Keep me informed whether you succeed (or fail).
      In theory, you need to get the size in bytes of that file first from ADLS Gen2. So the number of the upload position will be the last byte of the file + 1, right? Then you need to PATCH the additional content using that position. And flush it with the total as the position: BYTESIZE_ORIGINAL + BYTESIZE_ADDITIONAL. Give it a try 🙂

      1. Oh, and if you mean whether there is a possibility to overwrite the file… You can empty the file by using PATH-CREATE without the If-None-Match: “*” clause. Of course, only if there is no lease already taken on it.

  7. I mentioned the first scenario, and it works now!

    But remember: when action=update, the position is the file length, not file_length + 1 (the file meaning the file already in ADLS Gen2).

    Thank you for your help.

  8. Hey Michał, kudos to you, that’s great work!
    But how do you set permissions for a specific user? I need more granular permissions than owner, group and other.
    Any idea?
    Thanks, keep up the good work!

    1. Hi Mark,
      thanks! I am happy that I can help 🙂

      So the PowerShell code for setting permissions is in my examples section. You can try it: [click here].

      Now, you need to prepare a complete value for $PermissionString.
      I could ask you to read about it in the hdfs documentation (like here: [click])… But guess what, there is a better way! 😀

      Do you know Azure Storage Explorer? You can use it to access your storage account and… sniff the REST commands using developer tools 🙂 That's it!
      Just start the network recording log, then set permissions for the user as you would like to do it in your code, and read the “x-ms-acl” header value sent to ADLS Gen2.
      That is exactly what you will pass to $PermissionString. You can learn and understand ACLs faster with this trick 🙂

      For example, setting permissions for a dummy GUID like this: [click for screen]
      makes x-ms-acl like this: [click for screen]

      code:
      x-ms-acl:user::rwx,default:user::rwx,group::r-x,default:group::r-x,other::---,default:other::---,mask::rwx,default:mask::r-x,user:00000000-0000-0000-0000-000000000000:rw-,default:user:00000000-0000-0000-0000-000000000000:--x

      As you can see, it requires you to always get the ACL first and parse it according to your needs, because without that you can overwrite existing permission settings 😐
      But at least you now know how to add other users and what it will look like when you experiment with other permissions. Just sniff them in your ASE ;]

      I wrote an article about how to sniff REST API in ASE. Read it here:
      http://sql.pawlikowski.pro/2019/03/09/how-to-sniff-adls-gen2-storage-rest-api-calls-to-azure-using-azure-storage-explorer/
