RBAC in Azure and how to check its configuration

RBAC (role-based access control) is a security model used to control access based on users' roles in an organization, that is, according to their functions within it. In large organizations it is a classic way to organize permissions, based on the competences, authority, and responsibilities of each job.

One attribute of RBAC is its dynamism: the access control function is granted to a role, and both a person's membership in that role and the permissions associated with the role can change over time. This contrasts with classical access-control methods, where permissions are granted to or revoked from a user object by object.

In Azure we have an RBAC implementation for resources and a number of predefined roles. Roles in Azure can be assigned to users, groups, and applications, at the level of subscriptions, resource groups, or individual resources. As you can see, the options are vast.


There are three basic roles: Owner, Contributor, and Reader. The Owner has full access to resources, including permission to delegate access to others. The Contributor is like the Owner but cannot grant access to others. The Reader can only view resources.

From these three roles derive another set of roles for specific resources. At this link you can find the full list of Azure built-in roles and their functions.

However, you can create as many roles with custom permissions as necessary. They can be created via Azure PowerShell, the Azure command-line interface (CLI), or the REST API. At this link you have more information and examples of how to do it.
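As an illustration of the CLI route, creating a custom role takes a JSON role definition. The sketch below is an assumption-laden example (the role name, actions, and subscription ID are placeholders, not values from this post); it validates the JSON locally, and the final `az` call is left commented out because it requires an authenticated Azure session:

```shell
# Hypothetical custom role definition; name, actions and subscription ID
# are placeholder examples.
cat > customrole.json <<'EOF'
{
  "Name": "Reader Support Tickets",
  "Description": "View everything in the subscription and also open support tickets.",
  "Actions": ["*/read", "Microsoft.Support/*"],
  "NotActions": [],
  "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"]
}
EOF

# Requires "az login" first, so it is left commented out here:
# az role definition create --role-definition customrole.json

# Local sanity check that the definition is well-formed JSON:
python3 -m json.tool customrole.json > /dev/null && echo "customrole.json is valid JSON"
```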

Checking the list of permissions for each role

One way to check what permissions each role has is through the Azure portal. Open a subscription, resource group, or resource, and you will see an icon like two people at the top right:


Selecting it, the users panel appears. Click Roles:


And the list of available roles appears:


Select the role you are interested in to check its permissions, and the role's Members tab appears with a button to see the list of permissions:

Once on the list, we can expand the information for each group of actions by clicking on the corresponding entry:


And within it, each individual action:


At this level, the information provided by the info icon is useful to learn more about each entry, with an explanation of what each action represents:


To learn more about how to create, delete, or list the members of each role, you can consult the following link.
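Besides the portal, a role's permissions can also be listed from the command line. A non-runnable sketch (it needs the Azure CLI and an authenticated session; the role name is just an example):

```
az role definition list --name "Contributor" --query "[0].permissions[0].actions"
```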

Load balancing two Azure WebAPP with nginx

In the previous post we saw how to install an nginx server. One of nginx's capabilities is to act as a powerful proxy server, which can be used as a load balancer. In this post we will see how to use it to balance the load of two WebApps (it could be as many as necessary). This scenario has a peculiarity that requires slightly modifying the normal procedure for this operation.

We start from a Linux machine with nginx installed, as seen in the previous post.

In addition, we will create two simple WebApps, each with a message that distinguishes it, for example as shown in the following images:



Then we configure nginx following the usual guidelines. Log in to the Linux server console and edit the configuration file, with nano for example:

sudo nano /etc/nginx/nginx.conf

And modify the file so it looks like the following code:

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
     worker_connections 768;
     # multi_accept on;
}

http {
     upstream bloqueprimerproxy {
          server xxURL1xx.azurewebsites.net;
          server xxURL2xx.azurewebsites.net;
     }

     server {
          listen 80;
          server_name   localhost;

          location / {
               proxy_pass http://bloqueprimerproxy;
               proxy_set_header  X-Real-IP  $remote_addr;
          }
     }
}
Where xxURL1xx.azurewebsites.net and xxURL2xx.azurewebsites.net are the URLs of the two WebAPPs to balance.

We save the file and restart the nginx service:

sudo service nginx restart
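As a side note, nginx can check the configuration syntax before a restart; on the server (with nginx already installed) the fragment would be:

```
sudo nginx -t
```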

The above configuration would be the normal way to balance two sites with nginx. But if we try it now, we get the following error:


This is because Azure App Service routes requests using the Host header and ARR (Application Request Routing) affinity cookies. You need to ensure that the proxy passes the correct Host header to each WebApp so that the request is identified correctly.

To do this, we edit the configuration file again and leave it as follows:

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
     worker_connections 768;
     # multi_accept on;
}

http {
     upstream bloqueprimerproxy {
          server localhost:8001;
          server localhost:8002;
     }

     upstream servidor1 {
          server xxURL1xx.azurewebsites.net;
     }

     upstream servidor2 {
          server xxURL2xx.azurewebsites.net;
     }

     server {
          listen 80;
          server_name   localhost;

          location / {
               proxy_pass http://bloqueprimerproxy;
               proxy_set_header    X-Real-IP    $remote_addr;
          }
     }

     server {
          listen 8001;
          server_name   servidor1;

          location / {
               proxy_set_header Host xxURL1xx.azurewebsites.net;
               proxy_pass http://servidor1;
          }
     }

     server {
          listen 8002;
          server_name   servidor2;

          location / {
               proxy_set_header Host xxURL2xx.azurewebsites.net;
               proxy_pass http://servidor2;
          }
     }
}

Where, as before, xxURL1xx.azurewebsites.net and xxURL2xx.azurewebsites.net are the URLs of the two WebApps to balance.

In this configuration we apply a double proxy: the incoming load is first balanced against the same nginx instance, targeting ports 8001 and 8002, and the servers listening on those ports forward the requests to the WebApps, adding each real WebApp URL to the Host header.

After saving the file and restarting the nginx service, if we browse to the nginx server we see that requests are balanced from one web to the other without problems.

To learn more about the load-balancing methods available in nginx, you can see this link.
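For reference, nginx balances round-robin by default; other methods are selected with a directive inside the upstream block. A configuration sketch based on the upstream used above:

```
upstream bloqueprimerproxy {
     least_conn;    # send each request to the backend with the fewest active connections
     # ip_hash;     # alternative: pin each client IP to the same backend
     server xxURL1xx.azurewebsites.net;
     server xxURL2xx.azurewebsites.net;
}
```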


Installing Nginx on an Azure Linux Ubuntu 16.04 VM

In this post we will see how to install nginx on an Ubuntu Linux 16.04 LTS virtual machine in Azure. nginx is one of the best HTTP servers and reverse proxies, and it can also act as an IMAP/POP3 proxy. It is open source.

Let's assume that we already have the Linux virtual machine deployed in a basic state. Otherwise, in summary, the steps are:

– Create a virtual machine from the gallery with Ubuntu 16.04. You can see my post about creating a Linux VM.
– Change the default SSH port. You can find instructions for doing this in Azure in my post on the subject.
– Upgrade the system, connecting to a console session and running:

sudo apt-get update
sudo apt-get upgrade

This step is always recommended before installing a package (except on production servers with packages already in production, where you have to weigh whether or not it is convenient).

Since we are going to install an HTTP server, if you have a previous HTTP server such as Apache you have to uninstall it to prevent conflicts.

Once the machine is ready to install nginx, run the following from the SSH console:

sudo apt-get install nginx

And finally we start the nginx service with:

sudo systemctl start nginx

Check that the service is active with:

sudo service nginx status

This provides service information similar to the following screen:


Now we have nginx installed with its default settings on port 80. If we browse to the machine through that port, the following page appears:


You can find more information about nginx at this link.
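As a final check, you can verify from the shell that a web server is answering on a port. A minimal sketch: a stand-in Python web server on port 8080 is used here so the commands can run anywhere; on the real VM you would skip that and query http://localhost/ (nginx's port 80) instead:

```shell
# Start a stand-in web server (on the real VM, skip this and query nginx on port 80).
python3 -m http.server 8080 --bind 127.0.0.1 >/dev/null 2>&1 &
SRV=$!
sleep 1

# Ask only for the HTTP status code; 200 means the server is answering.
CODE=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:8080/)
echo "$CODE"

kill $SRV
```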

The 25 largest files with Powershell

We have all run out of space on our hard disks more than once, usually at the most inopportune moment, and have had to dedicate ourselves to deleting something in order to keep working.

Here I present a small PowerShell script that will find the largest files on your disk. It accepts two parameters:

  • The path where we want to check the size of the files. It can be an entire drive such as C:\ or, which is often more effective, a specific path such as the one where we keep our documents. The script searches recursively in the specified folder and its subfolders.
  • The number of files to display, starting with the largest. Normally the 25 or 50 largest are more than enough.

To use the script, copy the following code:

$Ruta = Read-Host 'Please, enter the path'
$NumFicheros = [int](Read-Host 'Number of files to return')

# List files recursively, compute each size in MB, sort descending
# and show the largest $NumFicheros entries without table headers.
Get-ChildItem -Path $Ruta -WarningAction SilentlyContinue -ErrorAction SilentlyContinue -Recurse -Force -File |
 Select-Object @{Name='Ruta';Expression={$_.FullName}}, @{Name='Tamaño';Expression={$_.Length/1MB}} |
 Sort-Object -Property Tamaño -Descending |
 Select-Object -First $NumFicheros |
 Format-Table Ruta, {$_.Tamaño.ToString("000000.00")} -HideTableHeaders

into an empty text file and name it 25Ficheros.ps1.

The important thing is that the file has the .ps1 extension. You also need PowerShell installed on your system; if you have Windows 10, it is already included. If not, you can install it by following this link.

To execute it, right-click on the file you created with the script and select the option Run with PowerShell.


If this is the first time you execute a PowerShell script, you will be asked whether you want to change the execution policy, since by default script execution is not allowed. Answer yes, and you will then be prompted for the two execution parameters.


Write both values and press ENTER after each one. The script will begin its work and after a moment (the more generic the search path, the longer it takes) you will obtain the results in two columns: on the left, the file name with its full path, and on the right, the size in megabytes that the file occupies.


For example, for my directory C:\Windows\System32 (and all subfolders), these are my top 25 files:


Press Enter again to close the window.

I hope you find the script useful. For any comments, do not hesitate to contact me via the social network links or by email.


Assigning the OneDrive for Business folder to a removable drive

OneDrive, Microsoft's cloud storage service, currently offers 1 TB of storage in its business edition. That is a large amount for regular use. However, unlike the personal version, it does not allow moving the local folder to a removable drive. By default, the business version stores files under the user's path or, if you modify it, in a path on a non-removable drive. In the personal version, during the installation process, you can select a local folder either on or off a removable drive.

In Microsoft's words, the two OneDrives are really different products that share a name, hence the different behavior.

It might not seem like a problem at first, but with a terabyte of possible storage, on devices such as Windows tablets with 32 GB or 64 GB storage units, 10 or 20 gigabytes that could live on a memory card can mean the difference between being able to use the system or not.

In such cases, an SD card permanently inserted in the tablet is essential as a support unit, and it is ideal as the local OneDrive storage location.

The following solution makes this setup possible, but I must clarify that, although functional, it is not an official Microsoft solution, with everything that implies. Nor am I responsible for any problems that may arise.

The first step is to create a folder on a non-removable drive of our system, for example at the root of C:, called SD:




The folder must be empty to continue, so we should not copy anything into it. Now we open Disk Management and look for the removable drive. Right-click on it and select the option to change the drive letter and paths:


Click on the option to add a path:



And select the folder created at the beginning, then press OK:



Now the folder is a mount point for the removable drive. We just have to unlink OneDrive, if it is linked, and link it again, changing the local path during the initialization process to that folder (not the removable drive itself). We can see that it raises no objection to using that folder and synchronization begins normally. Files are sent to the removable drive and take up no space on the non-removable drive.

If we want to remove the drive, we must unlink the OneDrive account first.

Changing the SSH and XRDP ports on an Azure Linux virtual machine


A basic security recommendation is to change the default connection ports of the various communication services available on a system. Let's see how to change the SSH and xrdp ports on an Azure Linux virtual machine.

Changing the SSH port

Immediately after creating the virtual machine, the default SSH port is 22. You can connect to the machine through its public IP or DNS name with a client like PuTTY on that port. Edit the configuration file, with nano for example:

sudo nano /etc/ssh/sshd_config

And change the line that says Port 22 to the value we want (e.g. I use 40167):
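The same edit can be made non-interactively with sed. A sketch, demonstrated on a scratch copy so it can run anywhere; on the real machine the file is /etc/ssh/sshd_config and the command needs sudo:

```shell
# Work on a scratch copy of the relevant line (the real file is /etc/ssh/sshd_config).
printf 'Port 22\nPermitRootLogin no\n' > /tmp/sshd_config.demo

# Replace the Port directive; 40167 is the example value used in this post.
sed -i 's/^Port 22$/Port 40167/' /tmp/sshd_config.demo

grep '^Port' /tmp/sshd_config.demo   # prints: Port 40167
```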


Now, to restart the SSH service, run:

sudo service ssh restart

We close the remote session we are running, which still goes through port 22. Now we need to edit the security rule in the virtual machine's control panel to reflect the port change. To do this, we look for the machine in our Azure subscription; in my case it is called f23uh4733:


Click on the inbound security rules option:


And double-click on the current rule for port 22:


Modify the value from port 22 to the port defined in the configuration file:


Press save after the modification. The rule will take a few seconds to be applied.

Installing a remote desktop and changing the xrdp port

Now we will install a remote desktop. This will be necessary if the Linux image is a server image, for example. Keep in mind that since Ubuntu 12.04 LTS xrdp does not support the GNOME desktop, so we will use Xfce.

First we install xrdp by executing the following command in the terminal:

sudo apt-get install xrdp


After installing xrdp, we install Xfce by running:

sudo apt-get install xfce4


The next step is to configure xrdp to use Xfce. Run the following command:

echo xfce4-session >~/.xsession


Once the desktop is installed, we change the default port for remote connections. We use an editor, for example nano, to modify the xrdp configuration file. Run the command:

sudo nano /etc/xrdp/xrdp.ini

And modify the port with the desired value, in this case for example port 40168:


We save the changes and restart the xrdp service for them to take effect, using the following command:

sudo service xrdp restart


Once the port is configured, as before, we need to create the security rule that grants us access. To do this we return to the list of inbound rules and click the add button:


And we add a rule indicating the destination port that we set in the previous step:


Press the save button and wait for the rule to be applied. Afterwards, we can open a remote desktop connection to the machine on that port:


We have to identify ourselves with a UNIX user. If you have not created any, the administrator user will do:


And we access the Linux machine's desktop:



An Azure VM's public address

Each virtual machine we deploy in Azure has, by default, a public IP assigned, through which we can access it. You can later modify the access ports and, in certain cases, restrict public access.

IP and DNS of a virtual machine

To access the public IP of a virtual machine created in the ARM model, open the machine's panel from the list of virtual machines:


The public IP appears in the main panel along with its DNS name, if one was configured. If the DNS name appears undefined, you can specify one by clicking on the link:


In the Public IP panel, we can see the address and easily copy both the IP and the DNS name.
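The same information can also be obtained with the Azure CLI. A non-runnable sketch (it requires an authenticated session, and the resource group name here is a placeholder):

```
az vm list-ip-addresses --resource-group myResourceGroup --name f23uh4733 --output table
```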


If you click Settings, you will access specific IP options. We can assign a static IP to the virtual machine (it is dynamic by default) and define a DNS label within our geographic region's domain:


Once the changes are saved, they are applied within seconds and become publicly available.



Creating a Linux VM in Azure

The Azure Marketplace offers multiple images ready to deploy. Among them are several Linux distributions created by various companies, with various preinstalled packages where needed.

Creating a Linux virtual machine

Let's walk through the entire process of provisioning a virtual machine (IaaS) with a Canonical Ubuntu Server 15.10 image.

Step 1

We enter our Azure subscription and click on Virtual machines:


Step 2

Click to add a new virtual machine:


Step 3

We search for and select the Canonical Ubuntu Server 15.10 image:


Step 4

The description of the VM image is shown, and we can choose whether we want it in classic mode or Resource Manager. We will choose Resource Manager; you can see the differences at this link. Press Create to start the provisioning process:


Step 5

Now you can fill in the basic data of the virtual machine, paying special attention to the geographical region of deployment and the resource group to assign it to. Select the location closest to where you want to provide the service, or the one where you keep your whole virtual data center.

With regard to grouping, remember that a resource group only organizes related resources; machines that must not restart simultaneously during maintenance operations should additionally be placed in an availability set, which is what covers high-availability situations.

In this step you also define the administrator user and password, so make sure the data is correct.

After filling everything in, press OK.


Step 6

You must now select the size, which determines the cost of the machine. Choose the one you need depending on the estimated use. The DS series, with SSD storage, is suitable for LAMP services, for example.


Step 7

In this step you configure additional options, such as networking, storage type, and others. When you finish, press OK. If you are not yet familiar with these concepts in Azure, the default options are fine to start with.


Step 8

A summary of the process is presented and a final confirmation is requested. If all is well, press OK and the machine will begin provisioning. If not, you can go back and correct it.

In the notification area you will see a notice of the process progress, as well as in the main panel.

Once the deployment is complete, which may take about 5 to 10 minutes, you can connect via SSH with a client like PuTTY, using the machine's public IP on port 22, with the administrator user defined in the basic options of step 5.

However, this default setup is not the safest. In an upcoming post we will see how to change the default ports and install a desktop for remote access. Later we will see how to configure the server as a LAMP stack.