
Query and transfer data between Azure SQL databases

In the DevOps world, there are situations where you need a way to transfer data between different Azure SQL databases.

For example, let's say you have a client who inadvertently deleted a huge number of records from a table in the production database. With the help of Point in Time backups, we could restore the database to an earlier point in time at which those records still existed. This process creates a new database with the restored contents, and we could then swap the two databases by renaming them. However, this approach has a caveat: the restored database loses all the Point in Time history of the original database.
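The rename swap mentioned above can be sketched in T-SQL, run against the master database of the logical server; the database names here are purely illustrative, and any active connections should be drained first:

```sql
-- Run from the master database of the logical server.
-- Assumes the restored copy was named cloudinvoices_restored (hypothetical name).
ALTER DATABASE [cloudinvoices] MODIFY NAME = [cloudinvoices_old];
ALTER DATABASE [cloudinvoices_restored] MODIFY NAME = [cloudinvoices];
```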

A better approach would be to restore the deleted records into the original database. However, this is not as simple as it is with regular SQL Server databases. The databases can't see each other even when they're on the same logical server and, for example, you can't use linked servers in Azure SQL to connect them.

You could export those records as SQL scripts, but in some cases the amount of data to be restored is so large that generating those scripts wouldn't be a practical way to solve the issue.

This is where Elastic Queries come to the rescue.


With this technology, we can easily move data in and out of our databases.

Walkthrough

Let's imagine that the client deleted all the records from the year 2016 in the Estimates table of our cloudinvoices database.

The first step is to connect to the Azure Portal and perform a Point In Time restore of the cloudinvoices database from a date and time at which those records still existed. We will restore it into a database called invoices2016.

 

These would be the steps we need to perform to restore those records:

 

Create database master key and scoped credentials

The master key works as a "vault" that protects credentials stored in the database, and the database scoped credential holds the username and password that will be used to connect to your restored database.

Syntax:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>';
CREATE DATABASE SCOPED CREDENTIAL <credential_name> WITH IDENTITY = '<username>', SECRET = '<password>';

Example:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'p@ssw0rd';
CREATE DATABASE SCOPED CREDENTIAL dbCredential WITH IDENTITY = 'user01', SECRET = 'p@ssw0rd';

 

Create the external data source

The external data source is where we set the connection parameters, along with the previously created credential, to allow the connection between the databases.

Syntax:

CREATE EXTERNAL DATA SOURCE <data_source_name> WITH
(
    TYPE = RDBMS,
    LOCATION = '<fully_qualified_server_name>',
    DATABASE_NAME = '<remote_database_name>',
    CREDENTIAL = <credential_name>
);

Example:

CREATE EXTERNAL DATA SOURCE restoredDatabase WITH
(
    TYPE = RDBMS,
    LOCATION = 'cloudinvoices.database.windows.net',
    DATABASE_NAME = 'invoices2016',
    CREDENTIAL = dbCredential
);

 

Create the external tables

Keep in mind that external tables are extensions of the external database; think of them as linked tables. This means we can't have two tables with the same name in our production database, even if one is a regular table and the other is an external table.

In our example, if we want to connect to the Estimates table in the restored database, we need to rename it there first (for example, to Estimates2016) so it doesn't conflict with the existing one.

Syntax:

sp_rename 'object_name', 'new_name';

Example - in the restored database:

sp_rename 'Estimates', 'Estimates2016';

 

Now we can create our external table:

Syntax:

CREATE EXTERNAL TABLE [table_name] (
    { <column_definition> } [ ,...n ]
)
WITH ( <rdbms_external_table_options> );

Example:

CREATE EXTERNAL TABLE [Estimates2016] (
    [EstimateId] uniqueidentifier NOT NULL,
    [ClientId] int NOT NULL,
    [Year] int NOT NULL,
    [Amount] decimal NOT NULL
)
WITH (
    DATA_SOURCE = restoredDatabase
);

 

After performing the steps required to establish connectivity between both databases, we can do the actual work of restoring the missing records. For example:

 

INSERT INTO Estimates
SELECT * FROM Estimates2016 WHERE [Year] = 2016;
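If the delete was partial rather than total, a guarded variant avoids re-inserting rows that survived. This is a sketch; it assumes EstimateId uniquely identifies a row, as suggested by the table definition above:

```sql
-- Insert only the 2016 rows that are missing from the local table.
-- Assumes EstimateId is the row's unique key (hypothetical constraint).
INSERT INTO Estimates
SELECT e16.*
FROM Estimates2016 AS e16
WHERE e16.[Year] = 2016
  AND NOT EXISTS (
      SELECT 1 FROM Estimates AS e
      WHERE e.EstimateId = e16.EstimateId
  );
```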


Once the task is finished, we should do some cleanup:

DROP EXTERNAL TABLE Estimates2016;
DROP EXTERNAL DATA SOURCE restoredDatabase;
DROP DATABASE SCOPED CREDENTIAL dbCredential;
DROP MASTER KEY;

 

That's it! I hope you find this technology useful for this kind of scenario. Keep in mind that it can help in many more cases: querying external databases, cross-database querying for horizontal partitioning (sharding), querying multiple databases for reporting in Power BI, and so on.
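As a quick illustration of the reporting use case, an external table can be queried like any local table. This sketch reuses the Estimates2016 external table mapping defined earlier:

```sql
-- Aggregate remote data through the external table as if it were local.
SELECT ClientId, SUM(Amount) AS TotalAmount
FROM Estimates2016
GROUP BY ClientId
ORDER BY TotalAmount DESC;
```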

 


 

See you in the next post. Never stop learning.

 

 
