SQLSaturday 764 – Slovakia – Bratislava

Hello folks / Dobrý deň, priatelia

Next Saturday (June 23rd) it is time for Slovakia to host SQLSaturday, and I'm happy to say that I'll be speaking :).

It will be my first time visiting Slovakia, so I'm looking forward to it :). I have two sessions: the first one will be "How to build solution for High Availability/Disaster Recovery" and the second one "High availability for SQL Server using Azure as DR site".

So, in case you are around Bratislava on Saturday the 23rd, show up at the event; it will be a pleasure to meet you there.

As you know, SQLSaturday is totally free 🙂 so hurry up and register yourself!

The schedule:

Room Borneo Madagascar
08:00 AM – 09:00 AM Registration
09:00 AM – 09:15 AM Welcome
09:15 AM – 10:15 AM
10:15 AM – 10:30 AM Coffee Break
10:30 AM – 11:30 AM
11:30 AM – 11:45 AM Coffee Break
11:45 AM – 12:45 PM
12:45 PM – 01:45 PM Lunch Break
01:45 PM – 02:45 PM
02:45 PM – 03:00 PM Coffee Break
03:00 PM – 04:00 PM
04:00 PM – 04:15 PM Coffee Break
04:15 PM – 05:15 PM
05:15 PM – 05:45 PM Raffle

See you there!


SQLSaturday Italy – Catania


Hello folks / Ciao amici

I’ll be speaking at SQLSaturday Catania, Italy on May 19th.

It will be my second time speaking in Italy, and I really love that country. This time I'll talk about SMB 3 and SQL Server.

So if you are from Catania or will be in town on May 19th, show up at the event; it will be a pleasure to meet you there.

As you know, SQLSaturday is totally free 🙂 so hurry up and register yourself!

The schedule:

09:00 AM – 09:30 AM Registration
09:30 AM – 10:00 AM Keynote
10:05 AM – 11:05 AM

SQL Server on Linux

Danilo Dominici

Level: Intermediate

11:05 AM – 11:25 AM Coffee break
11:25 AM – 12:25 PM
12:30 PM – 01:30 PM
01:30 PM – 02:30 PM Lunch break
02:30 PM – 03:30 PM
03:35 PM – 04:35 PM
04:35 PM – 04:55 PM Coffee break
04:55 PM – 05:55 PM
05:55 PM – 06:05 PM Closing remarks

See you there!

AlwaysOn – Distributed AG

Hello Friends!

Distributed availability groups were introduced in SQL Server 2016.

Note
"DAG" is not the official abbreviation for distributed availability group, because that abbreviation is already used for the Exchange Database Availability Group feature. The Exchange feature has no relation to SQL Server availability groups or distributed availability groups.

A DAG allows us to span two separate AGs (configured on two different Windows Server Failover Clusters). Also, the availability groups that participate in a DAG do not need to be hosted at the same location: they can be physical, virtual, on-premises, in the public cloud, or anywhere that supports an availability-group deployment. As long as the two availability groups can communicate, you can configure a distributed availability group with them.

Unlike a traditional AG, which has its resources configured in Windows Server Failover Clustering (WSFC), a DAG does not store any resources in WSFC; all information about the DAG is stored in SQL Server.

We can use a DAG in three main scenarios:

  • Disaster recovery and easier multi-site configurations
  • Migration to new hardware or configurations, which might include using new hardware or changing the underlying operating systems
  • Increasing the number of readable replicas beyond eight in a single availability group by spanning multiple availability groups

How does it work?

A DAG gives the application the ability to connect to a read-only replica through a different listener.
As mentioned before, each side has its own WSFC cluster (they can even be in different domains).

dag1

As we can see in the image above, we can create a DAG between two (or more) AGs. The DAG treats the listeners as its nodes, so it is not possible to create a DAG using server names; we must use the virtual names (like the client access point on an FCI).

In this scenario, the global primary sends the logs to the primary node of the second AG (known as the forwarder), and this forwarder sends the logs on to its own secondary replicas.

You can configure the data movement in distributed availability groups as synchronous or asynchronous. However, data movement is slightly different within distributed availability groups compared to a traditional availability group. Although each availability group has a primary replica, there is only one copy of the databases participating in a distributed availability group that can accept inserts, updates, and deletions. As shown in the above image, AG1 is the primary availability group. Its primary replica sends transactions to both the secondary replicas of AG1 and the primary replica of AG2. The primary replica of AG2 is also known as a forwarder. A forwarder is a primary replica in a secondary availability group in a distributed availability group. The forwarder receives transactions from the primary replica in the primary availability group and forwards them to the secondary replicas in its own availability group.
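For example, the commit mode between the two availability groups can be changed at any time on the distributed AG itself. A minimal sketch, using placeholder names and run on the global primary:

```sql
-- Sketch: switch both sides of the distributed AG to synchronous commit.
-- Placeholder names; run on the global primary.
ALTER AVAILABILITY GROUP [<name of distributed AG>]
MODIFY
AVAILABILITY GROUP ON
'<name of first AG>'  WITH ( AVAILABILITY_MODE = SYNCHRONOUS_COMMIT ),
'<name of second AG>' WITH ( AVAILABILITY_MODE = SYNCHRONOUS_COMMIT );
```

Keep in mind the note below about synchronous movement: the distributed AG will then wait for all synchronous copies to harden the log.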

Considerations

It is important to know that a DAG keeps just one replica writable; the remaining replicas are read-only!

We can also create a DAG from another DAG; this, of course, adds administrative complexity for DBAs.

dag2

Implementation instructions

Implementation prerequisites

  • We can only create a DAG between AGs, and the listeners must be used as the nodes.
  • Distributed availability groups cannot be configured with Standard Edition, or with a mix of Standard and Enterprise Editions.

Technical instructions

To create a DAG you can follow this example.
Note: the code below uses automatic seeding.

--At the primary
CREATE AVAILABILITY GROUP [<name of distributed AG>]
WITH (DISTRIBUTED)
AVAILABILITY GROUP ON '<name of first AG>' WITH
( LISTENER_URL = 'tcp://<name of first AG Listener>:5022',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
SEEDING_MODE = AUTOMATIC ),
'<name of second AG>' WITH
( LISTENER_URL = 'tcp://<name of second AG Listener>:5022',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
SEEDING_MODE = AUTOMATIC );

Now, at the secondary AG, let's join it to the DAG:

ALTER AVAILABILITY GROUP [<name of the distributed AG>]
JOIN
AVAILABILITY GROUP ON
'<name of the first AG>' WITH
( LISTENER_URL = 'tcp://<name of first AG Listener>:5022',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
SEEDING_MODE = AUTOMATIC ),
'<name of the second AG>' WITH
( LISTENER_URL = 'tcp://<name of second AG Listener>:5022',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
SEEDING_MODE = AUTOMATIC );
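Since automatic seeding is used, you can check the seeding progress on either side through the seeding DMVs. A monitoring sketch (not specific to this setup):

```sql
-- Check automatic seeding status for each availability database
SELECT  ag.name               AS ag_name,
        dbs.database_name,
        s.current_state,
        s.failure_state_desc,
        s.performed_seeding
FROM    sys.dm_hadr_automatic_seeding s
JOIN    sys.availability_groups ag
        ON ag.group_id = s.ag_id
JOIN    sys.availability_databases_cluster dbs
        ON dbs.group_database_id = s.ag_db_id;
```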

Limitations

  • Only manual failover is supported for a distributed availability group. In a disaster-recovery situation where you are switching data centers, you should not configure automatic failover (with rare exceptions).
  • You need to monitor network latency for the data transport at a different layer, since each WSFC cluster maintains its own availability.
  • We recommend asynchronous data movement, because this approach is intended for disaster-recovery purposes.
  • If you configure synchronous data movement between the primary replica and at least one secondary replica of the second availability group, and you configure synchronous movement on the distributed availability group, the distributed availability group will wait until all synchronous copies acknowledge that they have the data.
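To illustrate the manual-failover point above, a planned failover to the secondary AG can be sketched like this (placeholder names; a sketch of the documented procedure, where data movement is made synchronous and the hardened LSNs are verified before the role switch is forced):

```sql
-- 1) On the global primary: make movement synchronous, then demote this side
ALTER AVAILABILITY GROUP [<name of distributed AG>]
MODIFY AVAILABILITY GROUP ON
'<name of first AG>'  WITH ( AVAILABILITY_MODE = SYNCHRONOUS_COMMIT ),
'<name of second AG>' WITH ( AVAILABILITY_MODE = SYNCHRONOUS_COMMIT );

ALTER AVAILABILITY GROUP [<name of distributed AG>] SET ( ROLE = SECONDARY );

-- 2) Verify both sides are synchronized (last_hardened_lsn should match)
SELECT  ag.name,
        drs.synchronization_state_desc,
        drs.last_hardened_lsn
FROM    sys.dm_hadr_database_replica_states drs
JOIN    sys.availability_groups ag ON drs.group_id = ag.group_id;

-- 3) On the forwarder (primary of the second AG): complete the failover
ALTER AVAILABILITY GROUP [<name of distributed AG>] FORCE_FAILOVER_ALLOW_DATA_LOSS;
```

Note that once the LSNs match, no data is actually lost; the FORCE_FAILOVER_ALLOW_DATA_LOSS option is simply the only failover command a distributed AG accepts on these versions.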

Monitoring DAG

We can use the dashboard in SSMS to monitor a DAG as we do for a traditional AG, and we can also query the DMVs:

SELECT  ag.[name] AS 'AG Name',
        ag.is_distributed,
        ar.replica_server_name AS 'Underlying AG',
        ars.role_desc AS 'Role',
        ars.synchronization_health_desc AS 'Sync Status'
FROM    sys.availability_groups ag
JOIN    sys.availability_replicas ar
        ON ag.group_id = ar.group_id
JOIN    sys.dm_hadr_availability_replica_states ars
        ON ar.replica_id = ars.replica_id
WHERE   ag.is_distributed = 1;


SQLSaturday #707 Italy – I’m going


Ciao amici!

Next Saturday (February 17th, 2018), Pordenone, Italy will host the next SQLSaturday, edition #707.

It will be a perfect geek Saturday, learning and networking with Data Platform experts.

My session will start at 03:40 PM. So, if you are like me and want to explore Italy 🙂, join us on February 17th.

It is a free event; all the details can be found here: http://www.sqlsaturday.com/707/eventhome.aspx

See you there!

Marcelo Fernandes

How to test your workload before SQL Migration

Hi Friends,

Today I'll talk about how to test SQL Server workloads before a migration, or when trying new features. For this article, let's suppose that I have a SQL Server 2012 instance and have been asked to move to SQL Server 2016. Before moving, I want to test my workload on SQL 2016 to be sure that I won't hit performance problems or breaking changes.
The goal is:
– Capture workload from SQL2012
– Replay the workload on SQL2016
– Generate reports with results.

We have some ways to reach the goal, for example:
– Backup from SQL2012 and Restore on SQL2016, change the application connection to SQL2016;
– Use Distributed Replay to simulate the workload;
– etc….

With the first option we will have business downtime, since we will point the application at the new instance just for testing and we don't know what will happen. This can also be dangerous if we have third-party integrations (when other applications share or pull information, we will probably have data loss when rolling back to SQL 2012).

The second option is a good one: since SQL Server 2012 we can use Distributed Replay. However, this is a command-line tool that only handles the replay; you will spend some time crafting the commands to capture and replay, and you still need to create the report manually.

Now we have a fantastic new tool, DEA (Database Experimentation Assistant), that captures a workload from server A and replays it on server B. DEA supports older versions of SQL Server (2005 and above), so if you want to upgrade a very old instance, you can use this tool to stay a step ahead of possible issues (performance, breaking changes, etc.).

DEA also provides a very nice report with analysis metrics for compatibility errors, degraded queries, query plans, etc., which allows us to be more confident of a successful upgrade.

How to use DEA

First step, download the DEA and install it.

The second step is to set up Distributed Replay; behind the scenes DEA uses Distributed Replay, providing a user-friendly interface to capture and replay the workloads.

After the Distributed Replay and DEA setup, we must take a backup of our database (the initial position). We will restore this backup on the new server, so both servers replay the workload from the same point.
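A minimal sketch of that initial-position backup and restore (the database name and paths are hypothetical):

```sql
-- On SQL2012: take the initial-position backup before starting the capture
BACKUP DATABASE [SalesDB]
TO DISK = N'\\share\backup\SalesDB_initial.bak'
WITH COPY_ONLY, COMPRESSION;

-- On SQL2016: restore it so the replay starts from the same point
RESTORE DATABASE [SalesDB]
FROM DISK = N'\\share\backup\SalesDB_initial.bak'
WITH RECOVERY;
```

COPY_ONLY keeps the backup out of the regular backup chain, so the test does not disturb the production log-backup sequence.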

After the backup you can start DEA; you should see a welcome screen like this:

On the left side we have the menu. The first step is to start a workload capture on the source server, in this case SQL 2012, so just click the camera icon.

Just click on +New Capture.

On this screen you must enter the trace name, the duration (from 5 minutes up to 3 hours), the source SQL instance, the source database name, the path to save the trace, and a confirmation checkbox that you took a backup before starting the capture (we will restore this backup on SQL 2016 before the replay, so SQL compares apples with apples).

You can check the status while SQL Server is running the trace.

After the workload capture, we can replay the workload on the destination server (SQL 2016). To start replaying, click the Play icon in the left menu and then click +New Replay:

Before starting the replay, we must restore the backup that we took before starting the trace (the initial position). After the restore, you must enter the replay name, the distributor controller machine, the path to the source trace (the same one entered on the capture screen), the target SQL Server name, the path to save the trace during the replay (the trace on SQL 2016), and the checkbox confirming that we performed the database restore (initial position).

We need to wait for the replay; during this phase we can check the status:

In the end, you should have a screen like this:

Now it is time for the report. Just click the Report icon in the left menu, type the name of the server that will host the report, and click the Connect button:

Click the +New Analysis Report button. If this is the first time and you do not have R for Windows and R Interop installed, you will see a screen informing you that you must install them.

After the R setup, click the Try Again button and you will see a screen to set up the report analysis: type a name for the report, the source trace file (SQL 2012), and the target trace file (SQL 2016), then click Start.

Wait for the analysis to complete.

After the analysis, you will receive a dashboard report comparing the source and target servers. You can click the graphs to drill down into the details; in this case we will click the green area.

The detailed report will show each query's text and duration analysis; we can also click on a query to see more details.

In this detailed report we will see the performance comparison between the executions, information about compatibility errors (breaking changes), and execution plans.

So that's all for now. What do you think? Replaying workloads before an upgrade is easy now, isn't it?

These articles may help you:
Setup Distributed Replay
Distributed Replay Docs
Distributed Replay Troubleshooting
DEA Capture Trace FAQ
DEA Replay FAQ
DEA Report Analysis FAQ
DEA Solution architecture to compare workloads

Thanks

Marcelo Fernandes

24 Hours of PASS: Portuguese


Hello friends,

This week we will have another edition of 24 Hours of PASS: Portuguese, with SQL professionals from Portuguese-speaking countries. There will be 24 sessions on a wide variety of topics related to the Microsoft data platform.

My session will be on November 29th at 20:00 Brasília time (22:00 in Portugal); the topic will be "Alta disponibilidade para o SQL Server usando Azure como DR" (High availability for SQL Server using Azure as DR).

This event is online and free, bringing together the SQL Server communities that share Portuguese as a common language. The number of attendees per session is limited, so hurry to secure your spot!

https://www.pass.org/24hours/2017/portuguese/registration.aspx

You can also see the session schedule here: http://www.pass.org/24hours/2017/portuguese/Schedule.aspx


SQL Pass Summit 2017 – Review

Hello friends,

From October 30th to November 3rd I had the pleasure of attending PASS Summit 2017. This was my 5th time participating, but this year was very special, because it was my first time as a speaker :).

As you might expect, I was quite anxious and nervous about my session. It was a 10-minute lightning talk, but I wanted to deliver something really good within those 10 minutes.

It was my first talk in the United States :). The topic was "High Availability for SQL Server Using Azure as DR Site in 10 Min". I managed to deliver my session and got good feedback from some attendees (two Chinese ladies came to talk to me and even took photos 🙂).

I also participated for the second time in the Birds of a Feather (Ask the Experts) table, exchanging knowledge about In-Memory and high availability. By the way, this is an excellent way to network and practice English 🙂


This year's event was good. I missed some deep-dive sessions, but overall it was a good event.

My impression of the event was:

  • 50% of the talks were related to BI/AI
  • ~80% of the talks were related to, or mentioned, the cloud
  • 10 talks were dedicated to Linux, not counting the keynote and other talks that certainly mentioned the topic

In the keynote, Bob Ward and Conor Cunningham did a demo of "Persistent Memory" with SQL Server on Linux, a new technology in the HP DL380 (details in this HP document).

So, based on the talks and the keynote, I would say that anyone who does not yet speak the language of cloud, AI, BI, Linux, etc. will have a hard time in the very near future. The world changes, and we must change with it to stay current!

And to finish, on the last day of the event it even snowed in Seattle! 🙂

Regards,
Marcelo Fernandes