Thinking out of the box – SQL Server FCI using the File Share (SMB 3.0) as Storage option [Part 1]

Types of Storage for SQL Server

Before we dive into SMB 3.0, let us first look at the types of storage we can use with SQL Server and the differences between them.

There are several types of storage that can be used with SQL Server:

  • Internal disks – SSDs or traditional HDDs
  • Storage card PCI-E
  • Direct-attached storage (DAS)
  • Storage area networks (SAN)
  • Server Message Block (SMB) 3.0 file shares
    • Available in Windows Server 2012 and 2012 R2
    • SQL Server 2012 and later support file shares as storage (stand-alone or FCI)

As you can see, Microsoft allows us to use several types of storage to install SQL Server and store our databases.

There are also some technologies that allow us to use Failover Clustering without the need for shared storage:

  • iSCSI Target Server
  • SMB FileShare (using SMB 3.0 or 3.02)
  • Cluster Shared Volume (CSV)

In this article we will focus on SMB, which we can use with a SQL Server stand-alone instance or a cluster (FCI).


Storage HDD vs SSD

I/O bottlenecks can occur in any environment, so it is imperative to understand what type of disk we use and the performance it offers. Let's compare the performance of SSDs and mechanical disks (HDDs):

  • HDD – acceptable performance for sequential I/O
    • 100–200 MB/sec per disk
  • HDD – low performance for random I/O
    • 100–200 IOPS per disk
  • SSD – good performance for sequential I/O
    • SAS/SATA 6 Gbps can reach up to 550 MB/sec per disk
    • SAS/SATA 3 Gbps can reach up to 275 MB/sec per disk
    • PCI-E cards can reach up to 6.5 GB/sec
  • SSD – great for random I/O
    • SAS/SATA 6 Gbps can reach up to 100,000 IOPS
    • PCI-E storage cards can reach up to 1.3 million IOPS

As you can see, traditional HDDs do not perform well on random I/O. SSDs have outstanding random I/O performance, and PCI-E cards are the best choice for this type of activity.

Note that slower disks are cheaper and better-performing disks more expensive, so in some I/O-bottleneck scenarios simply swapping the spinning disks for SSDs or PCI-E cards isn't feasible due to cost.
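Converting IOPS into throughput makes the gap concrete: at a fixed block size, MB/sec = IOPS × block size. A quick back-of-the-envelope sketch in Python, using the rough figures quoted above (not measurements of any particular device):

```python
def iops_to_mb_per_sec(iops: int, block_size_kb: float) -> float:
    """Convert an IOPS figure at a given block size (in KB) into MB/sec."""
    return iops * block_size_kb / 1024.0

# A spinning disk doing ~200 random 8 KB I/Os per second moves very little data...
hdd_random = iops_to_mb_per_sec(200, 8)       # ~1.6 MB/sec
# ...while vendor SSD IOPS figures (usually quoted at 4 KB) translate to far more.
ssd_random = iops_to_mb_per_sec(100_000, 4)   # ~390 MB/sec

print(f"HDD random: {hdd_random:.1f} MB/sec")
print(f"SSD random: {ssd_random:.1f} MB/sec")
```

The same spinning disk that streams 100–200 MB/sec sequentially delivers under 2 MB/sec of random 8 KB I/O, which is exactly the pattern a busy OLTP database generates.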

Comparing I/O performance

To compare the performance of the technologies mentioned above, we ran a simple I/O test on shared storage in a datacenter, which is a very common scenario in today's environments. Unfortunately, not every company has the capital to dedicate a storage device to a particular service, and that's OK. I also ran the same test on an SSD and on a dedicated high-end storage array for SQL Server.

As we can see, we have three types of solution: the cheapest is the shared storage with SAS disks, and the most expensive (and I mean really expensive!) is the high-end storage, which can run into millions of dollars.

Note: Testing was conducted with the DiskSpd tool (8 KB blocks, 180-second duration, 16 outstanding I/Os, 4 threads, random I/O, 25% writes, caching disabled, latency statistics, a 10 GB test file):

diskspd -b8K -d180 -o16 -t4 -h -r -w25 -L -Z1G -c10G D:\iotest.dat


Reading the above table, we see that the best solution is the high-end storage, which costs a great deal. But if you look closely at the SSD, you can see performance similar to the high-end storage, and the SSD cost me only $150, which is not far from the price most datacenters charge for shared storage.

Looking at only the total I/O column we have:



In this case the cost-benefit of the SSD is more advantageous than that of the shared disk.

Choosing the Storage based on Workload

The following is a list of tips that will help you choose the best disk for your SQL Server environment based on its workload:

  • SSD offers better performance for random I/O
    • It also offers better sequential performance than HDD
    • SSD is more expensive than HDD (per GB)
    • The price of SSD is becoming more affordable
  • HDD offers reasonable performance for sequential I/O
    • HDDs have poor performance for random I/O (which can be “masked” by a controller’s cache)
    • Flash-based caching can provide better performance for HDDs
  • SSD is the best choice if cost is not a problem
    • Recommended for environments with a heavy random I/O load
    • Recommended for environments that have I/O bottlenecks


SMB 3.0 File Shares

We have seen in the previous sections that SQL Server basically supports two types of disk, HDD and SSD, but what about installing SQL Server onto a shared folder on a file server?

In the past a lot of people would be scared off by this idea, because historically SMB's (Server Message Block) track record wasn't very good. We can highlight some negative points that many infrastructure administrators cited about SMB:

  • File shares are slow
  • The connection with a file share can fail
  • SMB consumes too much CPU

In the new SMB 3 these negative points were addressed or eliminated. Below are some key points about SMB 3 that will be detailed throughout this article:

  • Supports I/O using multiple concurrent network cards
  • Fault tolerant of network interfaces
  • Integrated with Windows Failover Clustering
  • Windows Server 2012 supports “SMB Direct”, which allows the use of network cards that support Remote Direct Memory Access (RDMA)
    • RDMA requires SMB Multichannel
    • RDMA offers high speed with low latency and low CPU consumption

FCI SQL Server using the File Share (SMB 3.0) as a storage option

Windows Server 2012 made available the new version of Server Message Block, SMB 3.0, which was upgraded to 3.02 in Windows Server 2012 R2.
Only SQL Server 2012 or higher supports SMB as a storage solution, whether for a stand-alone SQL Server or for FCIs.


Figure 1.1: Example of using File Server environment as a Storage option


In Figure 1.1 we illustrate a SQL Server clustered environment using a File Server (SMB) as storage solution for SQL.

You can see that we are using SOFS (Scale-Out File Server), which is a new file storage solution, together with a three-node SQL Server cluster.

This is a solution for enterprise-level storage because it is scalable, reliable and always available.

It provides easy provisioning and management because we can use familiar Microsoft tools such as System Center and PowerShell to manage the environment.

It also uses the latest network technologies (converged Ethernet, RDMA), which gives us greater flexibility.

The solution also reduces capital and operational costs because we have storage that can be shared with other services.

An SMB solution offers us some features we don’t have in conventional solutions:

  • SMB Transparent Failover – continuous availability if one node fails
  • SMB Scale-Out – automatically balanced active/active file server clusters
  • SMB Direct (SMB over RDMA) – low latency, high throughput, low CPU usage
  • SMB Multichannel – increased network throughput and fault tolerance
  • SMB Encryption – secure data transmission without a costly PKI infrastructure
  • VSS for SMB File Shares – backup and restore using the existing VSS framework
  • SMB PowerShell, VMM support – manageability through System Center and PowerShell

SMB Transparent Failover

One of the components that doesn't have much protection in a traditional cluster is the disk. If there is a failure in the storage, or in the communication path between the storage and the server (fiber, HBA, etc.), the clustered SQL Server service will fail! SMB 3 has a great feature that lets the file share itself fail over (in our example, SOFS) with zero downtime for SQL Server; there is a small, brief I/O delay during the failover, but that's it.

The transparent failover can be planned or unplanned, which means we increase the availability level of our application as a result. This does not mean that a SQL Server failover becomes transparent to the application; SQL Server still has downtime. What has no downtime is the file server we are using as alternative storage, so we eliminate the protection gap between the storage communication and SQL Server.


Figure 1.2: Simulation of transparent failover in SMB 3

Figure 1.2 illustrates the transparent failover process. It shows a SQL Server cluster that was installed onto a file server share (\\fileserver\SQLtest); the file server service was active on the first node and was moved (failed over) to another node, and during this the SQL Server service is not affected — it remains online. This type of protection is only available with SMB 3.

SMB Direct (SMB over RDMA)

Before we get to SMB over RDMA, let us first understand what RDMA is.

RDMA (Remote Direct Memory Access) is a protocol that allows access to the memory of a remote computer. That is, if you have a network card with RDMA support, the SMB client has direct access to the memory of the SMB server, making file transfers extremely fast with very low CPU consumption.

The benefits of RDMA are:

  • Low latency
  • High throughput
  • Zero-copy capability
  • OS (kernel) bypass

The following technologies leverage RDMA hardware:

  • InfiniBand
  • iWARP – RDMA over TCP/IP
  • RoCE – RDMA over Converged Ethernet


Figure 1.3: Illustration of RDMA operation in SMB 3


SMB Scale-Out

With Scale-Out File Server (SOFS) you can share the same folder from multiple nodes of a cluster. In the example in Figure 1.2 we have a two-node file server cluster using the Scale-Out File Server role; a computer running Windows Server 2012 R2 or Windows Server 2012 can access the file share on either of the two nodes. This is possible thanks to the new Windows Failover Clustering features combined with the SMB 3.0 file server protocol. With this in place, SOFS stays online, and in case of increased demand we can simply add new servers to the cluster — all in a production environment, with the operation totally transparent to the applications.

The main benefits provided by SOFS include:

  • Active-active file server – All cluster nodes can accept and serve SMB client requests, with transparent failover between nodes.
  • Increased bandwidth – The maximum bandwidth of a share corresponds to the sum of the bandwidth of all the file server cluster nodes. In previous versions of Windows Server, total bandwidth was limited to the bandwidth of a single cluster node. You can increase the total bandwidth by adding nodes.
  • CHKDSK with zero downtime – CHKDSK was significantly improved in Windows Server 2012, drastically reducing the time a system is offline for repairs, and CSVs (Cluster Shared Volumes) eliminate the offline phase entirely.
  • Cluster Shared Volumes cache – CSVs in Windows Server 2012 introduce support for a read cache, significantly improving performance.
  • Simpler management – With SOFS you can add new CSVs and create multiple shares on them. You no longer need to create multiple clustered file servers, each with separate cluster disks.
  • Automatic rebalancing of SOFS clients – In Windows Server 2012 R2, automatic rebalancing improves the scalability and manageability of SOFS. SMB client connections are tracked per share rather than per server, so clients are redirected to the node with the best access to the volume behind that share, significantly improving performance and reducing traffic between servers.


SMB Multichannel

SMB Multichannel is another great feature of SMB 3.0 since it provides us increased network throughput and fault tolerance.

Let's imagine two scenarios: a 4-core environment with a 10 GbE network card without Multichannel, and the same environment with Multichannel.

In the first scenario, imagine a session with a lot of I/O: we would see high utilization of only one core while the other three sit idle. This is because older versions of SMB did not support Multichannel, so SMB created only one TCP connection.

If I run the same test in the second scenario, with SMB 3 supporting Multichannel, we see the load spread across the cores. This happens because SMB 3 detects that the network card has the Receive Side Scaling (RSS) feature and creates multiple TCP connections, distributing the load among the CPUs.

You do not need to make any changes or settings in your environment to use Multichannel; SMB 3 detects RSS and automatically creates multiple connections.

Figure 1.4 illustrates this behavior of SMB Multichannel.



Figure 1.4: Comparing the behavior of environments with and without Multichannel
(image kindly provided by Jose Barreto [])


Let's imagine two other scenarios, first without SMB 3: two file servers, the first with two RSS-capable network cards and the other with two network cards without RSS. The second scenario is exactly the same, but with SMB 3 Multichannel support; Figure 1.5 illustrates this.

In the scenario without SMB 3 Multichannel support, although we have two network cards in both file server clusters, we do not have the automatic failover that is present only in SMB 3. If we imagine a session with a lot of I/O, only one network card would be used (both in the cluster with RSS support and in the one without), and we would again have a single hot core due to the single TCP connection.

In the scenario with SMB 3 Multichannel support, we have automatic failover and can use all the available bandwidth on the server, as Multichannel uses both network cards to increase bandwidth. In the cluster whose network cards lack RSS support, but which runs SMB 3, Multichannel still uses the two network cards, opening one connection on each of them.


Figure 1.5 illustrates this behavior of SMB Multichannel with multiple network adapters.


Figure 1.5: Comparing behavior of environments with and without Multichannel using multiple NICs (image kindly provided by Jose Barreto [])


The key points of SMB Multichannel are:

Full throughput

  • Bandwidth aggregation with multiple NICs
  • Uses multiple CPU cores when Receive Side Scaling (RSS) is available

Automatic failover

  • SMB Multichannel implements end-to-end fault detection
  • Takes advantage of NIC teaming if present (but does not require it)

Automatic configuration

  • SMB detects and uses multiple network paths

SMB Multichannel performance

Jose Barreto, a Principal Program Manager at Microsoft who helped develop SMB 3, ran some great performance tests of SMB 3 with the following configuration:

  • Windows Server 2012 using four 10 GbE NICs
  • Linear scaling of bandwidth:
    • 1 NIC – 1,150 MB/sec
    • 2 NICs – 2,330 MB/sec
    • 3 NICs – 3,320 MB/sec
    • 4 NICs – 4,300 MB/sec
  • Network cards with support for RSS (Receive Side Scaling)
  • With small I/Os, the CPU becomes the bottleneck before bandwidth does

The results obtained in the tests were spectacular; note the performance with Multichannel.
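The "linear scaling" claim is easy to verify from the figures above: dividing each measured bandwidth by (NIC count × single-NIC bandwidth) gives the scaling efficiency. A quick sketch in Python:

```python
# Throughput per NIC count (MB/sec), taken from the list above.
bandwidth = {1: 1150, 2: 2330, 3: 3320, 4: 4300}

single_nic = bandwidth[1]
for nics, mb_per_sec in sorted(bandwidth.items()):
    # Efficiency = measured throughput vs. perfect linear scaling.
    efficiency = mb_per_sec / (nics * single_nic)
    print(f"{nics} NIC(s): {mb_per_sec} MB/sec ({efficiency:.0%} of linear)")
```

Even at four NICs the efficiency stays above 90%, which is why adding network cards keeps paying off for large I/Os.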


Figure 1.6: SMB Multichannel performance (image kindly provided by Jose Barreto [])


We note that for small I/Os we do not gain much by increasing the number of network cards; for larger I/Os the gain is significant as we add NICs, reaching about 4,500 MB/sec with 4 network cards for I/O sizes greater than 16,384 bytes.

Remember the test table comparing some types of storage that we discussed at the beginning of this article?


Obviously the test I ran with DiskSpd was different from the one José Barreto ran, but it serves as a reference point to break some paradigms; note the MB/s column of the storage vs. SMB 3.

I'm not telling you to leave the SAN and move to SMB 3.0 tomorrow, as this would have a certain financial and bureaucratic impact, but it's worth a test in your environment, isn't it? And how about considering SMB 3 for new projects?

SMB 3 is another storage alternative we have, with the differentials of performance, availability and scalability. So “think out of the box”!

As shown, we have greater disk protection with Transparent Failover and we use the full bandwidth available on the servers.

PS: It's very important to note that the environment used here had specific network cards. Ordinary NICs will work, but if you have a highly transactional environment you should use NICs with RDMA and RSS support.



Marcelo Fernandes


Token Bloat – Cannot generate SSPI context

Hello friends, a while ago I faced the famous SSPI problem in an atypical form and kept postponing this post; I finally decided to write the article. I hope it helps you.


A user had always connected to SQL Server without problems, but one day, when trying to connect to the same SQL service, he received the following message:


There had been no change by the infrastructure team or DBAs on the server; the user simply could not connect anymore, and the most intriguing part is that some users could connect and others could not.

Well, most of the time the SSPI Context error is caused by a missing SPN during a Kerberos connection attempt to SQL Server, and to solve the problem you just need to create the SPN.

Listing the current SPNs:

At the command prompt, type the following command:

SETSPN -L <domain\SQL_service_account>


Note: If, when running setspn -L, you do not get the SPN for your SQL Server instance and you are receiving the SSPI Context error, you must create the SPN; your problem will likely be solved by creating it manually with the following command:

SETSPN -A MSSQLSvc/<fqdn> <domain>\<SQL service account>

SETSPN -A MSSQLSvc/<fqdn>:1433 <domain>\<SQL service account>

Ex.: SETSPN -A MSSQLSvc/ contoso\sqlServiceAcc

More information in this article:


As we can see in the SETSPN -L output, in my environment the SPNs are correctly created!

That explains why some connections succeed, since the SPN is OK — so why do some connections still get the SSPI error?

To answer this question we need to analyze the Kerberos ticket. Running the tokensz tool, you can confirm the token bloat problem with the following syntax:

tokensz.exe /compute_tokensize /user:<user_with_sspi_error>


As we can see in the image above, the maximum token size is 12000 (12K), and the token of the user contoso\usuarioSQL — with whom I am trying to connect to SQL Server and receiving the SSPI context error — has exceeded that limit.

A token basically contains a user's groups and permissions. It is created at logon time, and this token is passed along to other services/servers as the user needs to authenticate to consume those services (for more details about Kerberos see the article

Well, since the token carries the ACL (Access Control List), let's investigate the users we created:

Both users are members of the SQLacesso group, for which the login was created in SQL Server, but the second user is a member of a few hundred other groups.

Up to Windows 2008 R2, the default maximum token size is 12K; starting with Windows 2012 this value was changed to 48K.

Depending on the size of your environment, this 12K limit is easily reached, with a user belonging to approximately 150 groups.

In a multinational company where, as a best practice, many groups are created to control access to various shares, servers, applications, etc., this limit is reached very quickly.


What is the size of each group?

To calculate the token size we use the following formula:

TokenSize = 1200 + 40d + 8s

where:

d = (domain-local groups + universal groups from outside the domain)

s = (global groups + universal groups from inside the domain)
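The formula is simple enough to evaluate directly. A minimal sketch in Python (the group counts are invented for illustration):

```python
def kerberos_token_size(domain_local: int, universal_outside: int,
                        global_groups: int, universal_inside: int) -> int:
    """TokenSize = 1200 + 40d + 8s, with d and s as defined above."""
    d = domain_local + universal_outside
    s = global_groups + universal_inside
    return 1200 + 40 * d + 8 * s

# Hypothetical user: 120 domain-local, 30 external universal,
# 40 global and 10 internal universal group memberships.
print(kerberos_token_size(120, 30, 40, 10))  # 7600 -- still under the 12K default
```

Note how heavily domain-local and external universal groups weigh (40 bytes each vs. 8): a user collecting domain-local groups hits the 12K ceiling much faster than one with mostly global groups.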

There are several scripts that automate this calculation; below is a very simple one I found in this article:

# Always credit where due - this was found via
#Gets max token size
#Run with .\get_tokensize.ps1 -Username "domain\username"
#tokensize = 1200 + 40d + 8s
param([string]$Username)
$domain = ($Username.split("\"))[0]
$user = ($Username.split("\"))[1]
Import-Module ActiveDirectory
$rootdse = (Get-ADDomain $domain).distinguishedname
$server = (Get-ADDomain $domain).pdcemulator
$usergroups = Get-ADPrincipalGroupMembership -server $server $user | select distinguishedname,groupcategory,groupscope,name
$domainlocal = [int]@($usergroups | where {$_.groupscope -eq "DomainLocal"}).count
$global = [int]@($usergroups | where {$_.groupscope -eq "Global"}).count
$universaloutside = [int]@($usergroups | where {$_.distinguishedname -notlike "*$rootdse" -and $_.groupscope -eq "Universal"}).count
$universalinside = [int]@($usergroups | where {$_.distinguishedname -like "*$rootdse" -and $_.groupscope -eq "Universal"}).count
$tokensize = 1200 + (40 * ($domainlocal + $universaloutside)) + (8 * ($global + $universalinside))
Write-Host "
Domain local groups: $domainlocal
Global groups: $global
Universal groups outside the domain: $universaloutside
Universal groups inside the domain: $universalinside
Kerberos token size: $tokensize"

Other useful scripts

How to solve it

You have two ways to solve the problem: increasing the default token size or removing the user from unnecessary groups.

To increase the token size, you can follow the steps in the KB

On each workstation:

  1. Start Regedt32.exe
  2. Locate the key HKLM\System\CurrentControlSet\Control\Lsa\Kerberos\Parameters
  3. On the Edit menu, click New / DWORD and use the parameters below:

Name: MaxTokenSize

Base: Decimal

Value: 48000

  4. Close the Registry Editor
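Since the change has to be repeated on each workstation, it can also be captured in a .reg file and imported instead of clicking through the editor. A sketch, assuming the default key path from the steps above (48000 decimal = BB80 hex):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters]
"MaxTokenSize"=dword:0000bb80
```

Importing the file (double-click, or reg import via a startup script or GPO) sets the same MaxTokenSize value the manual steps create.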

If we run the Tokensz tool again, we get the following result:



Additional reading




2015 Microsoft MVP Virtual Conference


Hello friends!

I would like to invite you to the great event that Microsoft and the MVPs are organizing, which will take place on May 14 and 15, starting at 12:00 (Brasília time).

Join MVPs from Brazil, the United States and Latin America who will be sharing their knowledge in free, hands-on sessions, with real-world scenarios and the latest news about Microsoft technology.

The MVP Virtual Conference will bring 95 sessions with content for IT Pros, Developers and Consumer experts, designed to help you navigate the mobile-first, cloud-first moment. And to open the event, we will have the pleasure of welcoming Steve Guggenheimer, Microsoft VP of Developer Platform.

Why attend the MVP V-Conf? The conference is divided into 5 simultaneous tracks: IT PRO (English), DEV (English), Consumer (English), and sessions in Portuguese and in Spanish — we have sessions for every audience!

Register and secure your spot!

The conference will be covered on our social channels, and you can follow along in real time by following @mvpaward and using the hashtag #MVPvConf.

Check out the agenda and plan ahead:



My session will be on May 15 at 05:00 PT (21:00 Brasília time) and will be moderated by MVP Nilton Pinheiro (Twitter | Site).


SQL Server 2014: Alta Disp. na Prática com AlwaysOn Failover Cluster Instances


Hello everyone… it is with great pleasure that I share one more achievement in my professional life. After a long road and more than a year of dedication, we are releasing, co-authored with my friend Nilton Pinheiro (@nilton_pinheiro | Site | LinkedIn), the book “SQL Server 2014: Alta Disponibilidade na Prática com AlwaysOn Failover Cluster Instances”…

Those who know me a bit know that high availability is a subject I really enjoy, but I never imagined that one day I would release a book. It really is a great dream coming true; it was a long job, and we dedicated a lot of time and care to this project.

We used as a basis the video series Nilton released on TechNet, SQL Server Failover Clustering End-to-End. This project gained even more momentum when our friend Fábio Gentile (Facebook | LinkedIn), a Premier Field Engineer (PFE), agreed to be the technical reviewer 🙂

As the name itself says, this will be a totally hands-on book! When we planned the book, we wanted it to be didactic, objective and practical. So, aimed at students, database and network administrators and fans of the Microsoft SQL Server database platform, the book covers the main aspects of implementing a high-availability environment with SQL Server and walks through all the steps to implement and configure a two-node cluster running Windows Server 2012 R2 and supporting two SQL Server 2014 instances in a multi-instance configuration.

To make this hands-on approach possible, the book covers step by step the creation of a lab environment with virtual machines created in Hyper-V, and addresses network configuration, creation and configuration of disks in Windows and in the cluster, quorum, and the installation of two SQL Server 2014 instances in a multi-instance configuration. The goal is that by the end of the book you will have absorbed all the knowledge needed to implement SQL Server 2014 AlwaysOn Failover Clustering!

Oh… and if you don't have access to Windows Server 2012 R2 or SQL Server 2014, don't worry: the book guides you through downloading the evaluation versions, which will let you use the products for 180 days.

So, what I can say is… when you buy this book, be prepared not just to read it, but above all to learn by getting your hands dirty. If you are looking for a book that is practical and gives you the foundation needed to implement high-availability environments with SQL Server 2014, this is certainly the book you are looking for :).

I have no doubt this will be a unique book, especially because in Brazil we are quite short of books on the subject, and I am sure it will provide you an excellent learning experience.

Download the book's table of contents!

The book is already available for online purchase through the Livraria Cultura and Livraria Curitiba websites!

Soon it will also be in other bookstores, such as Saraiva, but for those who were waiting for online availability, it is now possible 🙂

Below are the links to the respective bookstores:

Livraria Cultura:

 Livraria Curitiba:,product,LV376835,3429.aspx

Best regards and happy studying!!
Marcelo Fernandes

2015 MVP Virtual Conference


Folks, on May 14 and 15 the MVP Virtual Conference will take place. The MVP V-Conf is an online conference where the sessions are presented by MVPs from the Americas region (North, Central and South), and the keynote will be delivered by the Corporate VP of DX, Steve Guggenheimer.

Register now

I will present a session on the 15th, speaking about Dynamic Quorum. Below is the full event agenda (times in Pacific Time; you can convert them using the Time Zone Converter):

Marcelo Fernandes

Error to install SQL Server 2008 on Windows 2012

Hello friends…

I decided to write this post in English due to the origins of the traffic I receive on my blog; you can read this article in Portuguese at


Hello Friends…

This week I faced an interesting error… My PFE friend Alex Rosa (blog) helped me fix this problem, and I'm sharing the problem/solution because it might help you.

The environment and Goal

A Windows 2012 R2 cluster with 2 nodes; my goal was to set up a SQL Server 2008 failover cluster instance.

The problem

Firstly, I must say that since July 2014 SQL Server 2008 is no longer in mainstream support; SQL 2K8 is in extended support.

When SQL Server 2008 was released, Windows 2012 did not exist, so SQL Server setup was built against Windows 2008. I'm not saying that SQL 2K8 is not supported on Windows 2012 — it is supported — but some functionality was built based on Win2K8.

During the installation of SQL2K8 on Win12 on the first node I faced this error message:

My cluster was online and working well, and I ran the cluster validation successfully; but during SQL setup I got an error message saying that setup couldn't check my cluster service. Below is part of the detail from the Setup Validation Report:

InstallFailoverClusterGlobalRules: SQL Server 2008 Setup configuration checks for rules group ‘InstallFailoverClusterGlobalRules’
  SQLNODE1 Cluster_IsOnline Verifies that the cluster service is online. Failed The SQL Server failover cluster services is not online, or the cluster cannot be accessed from one of its nodes. To continue, determine why the cluster is not online and rerun Setup. Do not rerun the rule because the rule cannot detect a cluster environment.
  SQLNODE1 Cluster_SharedDiskFacet Checks whether the cluster on a computer has at least one shared disk available. Failed The cluster on this computer does not have a shared disk available. To continue, at least one shared disk must be available.
  SQLNODE1 Cluster_VerifyForErrors Checks if the cluster has been verified and if there are any errors or failures reported in the verification report. Failed The cluster either has not been verified or there are errors or failures in the verification report. Refer to KB953748 or SQL Server Books Online for more information.

I also tried running the setup from the command line using the SKIP_RULES:

Setup /SkipRules=Cluster_VerifyForErrors /Action=InstallFailoverCluster

And I got the same error…

Looking at the detail of the first error, the message says that SQL couldn't verify the cluster service; the cluster was online, but setup couldn't access it.

I did some research on the web and I found this article:

The author (Rob-MSFT) gave me a clue:

… These are deprecated features (Failover Cluster Command Interface (cluster.exe) and Failover Cluster Automation Server) in Windows Server 2012 but are made available, as there are still some applications that may need them, SQL Server being one of them.  Installing it may be necessary for any legacy scripts you have built on the old Cluster.exe command line interface. …

During SQL Server 2008 installation, setup tries to check the cluster service using a deprecated feature!

Checking my cluster installation using the articles from Rob, I see this:

Get-WindowsFeature RSAT-Cluster*


These are the features installed by default when we install the Failover Clustering feature.

The Solution

We just need to enable the Failover Cluster Automation Server; to do this, run the PowerShell command below.

Install-WindowsFeature -Name RSAT-Clustering-AutomationServer


After these steps, setup completes successfully:


PS: You need to run these steps on all Cluster nodes.