I am a highly motivated and organised professional with more than ten years of experience as a Database Specialist and Architect.


Tourism recovery mode

It’s wonderful to hear that LinkedIn Editors invited me! Pursuing a passion is an incredible thing. This time, I have been invited by Mitchell Van Homrigh to contribute on how tech can solve tourism pain points. Below are the points that sparked my interest.

 

Like many others, the tourism sector has been significantly impacted by the COVID-19 pandemic (it looks as if pharmaceutical ambition was set to destroy everything in its path just to add a couple of million dollars in revenue, but that is another discussion for later). As we move towards recovery, technology could play a crucial role in reviving the industry. From my perspective as a Data Management Engineer, I would like to highlight two things: virtual reality and data analytics.

  1. Virtual Reality (VR) Tours: The airline industry and travellers alike are recovering their health and, “boom”, their stamina. For anyone planning a trip to another country or through an unfamiliar airport, VR tours can provide immersive experiences. The idea is to offer a VR tour of the destination airport, along with information on what to do before or after disembarking or boarding, such as where to go, what documentation is required, and where to find emergency services. When travelling to another country, it’s easy to feel lost because you don’t know how far away your connecting flight or the immigration/customs area is. VR tours can help passengers move with more confidence and reduce their time in the airport, improving the customer experience.

  2. Data Analytics: Tourism businesses can gain insight into customer behaviour and preferences by utilising data analytics to create more personalised experiences. This can lead to various benefits, such as predicting demand, improving pricing and availability, and optimising inventory. In fact, a study showed that nine hotels experienced a 22% average increase in revenue after implementing pricing optimisation. Additionally, data analytics can inform funding decisions and shape business development based on detailed user preferences. By identifying potential customers at different stages of the trip planning process, analytics can help businesses target specific groups. Finally, data analytics can improve marketing campaign effectiveness by identifying the best channel to reach customers.

The innovative concept of virtual reality tours can potentially transform how we explore airports and destinations. In a world recovering from a pandemic, these immersive experiences offer a much-needed solution for travellers seeking a hassle-free and efficient journey. Imagine arriving in a new country and embarking on a virtual reality tour that guides you through customs procedures, connecting flights, and emergency services. This game-changing technology has the power to revolutionise the travel industry and create an even more satisfying experience for customers.

Regarding data, analytics acts as a compass for tourism companies, guiding them in the right direction. By harnessing the power of data, businesses can gain valuable insights into customer behaviours and preferences, allowing them to create personalised experiences. Data analytics is the key ingredient for achieving operational efficiency and financial success, from predicting demand and optimising pricing to shaping business strategies based on user preferences. This is not just a theoretical concept but a proven technique for boosting revenue and engaging consumers.

In summary, the COVID-19 outbreak had a major impact on the tourism industry, challenging its ability to adapt and recover. Looking ahead, it is clear that technology can play a critical role in supporting the industry. As a data management engineer, I believe virtual reality (VR) and data analytics are promising game changers. In this post-pandemic era, where the race to recovery is fierce, those who leverage technology wisely will survive and thrive, delivering the best service and experiences to passengers worldwide. The future of tourism is tech-infused, and it’s time to take that leap into a brighter, more innovative tomorrow.

 

AI and Data Management: A Powerful Partnership

Artificial intelligence (AI) is transforming every industry and every aspect of our lives. From healthcare to education, from entertainment to finance, AI is enabling new possibilities and creating new value. But AI is not a magic wand that can solve any problem without any effort. AI depends on data – lots of data – to learn, improve, and deliver accurate and reliable outcomes. And data needs management – effective management – to ensure its quality, accessibility, security, and integration.

What exactly does a Data Management Engineer do?

Data management is the process of collecting, organizing, storing, processing, analyzing, and sharing data in a way that supports business goals and complies with regulations. Data management involves various tasks such as data classification, cataloguing, quality control, security enforcement, and data integration. These tasks are often labour-intensive and error-prone when done manually or with traditional tools.

How can AI help us do our job more efficiently?

This is where AI can help. AI can automate and simplify tasks related to data management across discovery, integration, cleansing, governance, and mastering. AI can improve data understanding and identify privacy and quality anomalies. AI can also enhance data security by detecting threats and enforcing policies. AI can enable data integration by matching records across sources and resolving conflicts.
By using AI for data management, Database Specialists can benefit from:
  1. Improved productivity: AI can reduce the time and effort required for data management tasks by automating repetitive or complex operations.
  2. Increased accuracy: AI can improve the quality and consistency of data by identifying errors or inconsistencies and correcting them.
  3. Enhanced scalability: AI can handle large volumes of data from various sources without compromising performance or reliability.
  4. Greater agility: AI can adapt to changing business needs or regulatory requirements by learning from feedback or new information.
  5. Higher value: AI can unlock the potential of data by providing insights or recommendations that support decision-making or innovation.
Some examples of how AI can be useful for data management are:
  • Classification: AI can extract relevant information from documents, images, videos, or other media using natural language processing (NLP), computer vision (CV), or speech recognition techniques.
  • Cataloguing: AI can help locate data by indexing it based on metadata or content analysis using NLP or CV techniques.
  • Quality: AI can reduce errors in the data by validating it against rules or standards using NLP or machine learning (ML) techniques.
  • Security: AI can keep data safe from unauthorized access or misuse by applying encryption or masking techniques driven by ML.
  • Integration: AI can help merge data from different sources by matching records based on similarity measures using ML techniques.
  • Data Cleansing: AI can automatically identify and correct errors in data by using machine learning algorithms. For example, AI can identify and remove duplicate data, identify outliers, and standardize data.
  • Data Governance: AI can help enforce data governance policies by identifying and flagging any inconsistencies or violations in the data. AI can also provide recommendations for policies based on the analysis of the data.
  • Data Discovery: AI can help discover hidden patterns and insights in large datasets that are difficult for humans to uncover. For example, AI can identify trends and correlations between different data sets and provide recommendations for further analysis.
  • Data Mastering: AI can help improve data accuracy and completeness by merging, standardizing, and de-duplicating data from different sources. This can help reduce data inconsistencies and improve data quality.
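The Data Cleansing and Data Mastering items above are easier to picture with a concrete, if simplified, example. Below is a minimal rule-based T-SQL sketch of the kind of de-duplication step an AI-assisted cleansing tool would automate; the dbo.Customers_Staging table, its columns, and the “most recent row wins” rule are hypothetical assumptions for illustration, not part of any specific product.

-- Keep only the most recent row for each (FirstName, LastName, Email) combination
-- and delete the older duplicates from the hypothetical staging table.
;WITH RankedCustomers AS
(
    SELECT *,
           ROW_NUMBER() OVER (
               PARTITION BY FirstName, LastName, Email
               ORDER BY LastUpdated DESC) AS RowNum
    FROM dbo.Customers_Staging
)
DELETE FROM RankedCustomers
WHERE RowNum > 1;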
AI and data management are a powerful partnership that can enable organizations to leverage their most valuable asset – their data – in a more efficient and effective way. By combining human expertise with machine intelligence, organizations can achieve better outcomes faster while reducing costs and risks. If you want to learn more about how AI can help you with your data management challenges, please contact me, and I will be happy to assist you.
Regards,

Configure Data Access – Column Level Encryption

Introduction

In today’s world, data security is more critical than ever, and the ability to encrypt data at the column level is a critical capability in any modern database engine. With the increasing number of data breaches, organizations must protect their sensitive information from unauthorized access. One way to accomplish this is through column-level encryption in SQL Server (a similar approach applies in Oracle). In this article, we will discuss how to implement column-level encryption, including examples and use cases, while emphasizing the importance of security and performance efficiency.

What is Column-Level Encryption?

Column-level encryption is a method of encrypting individual columns within a database table. This type of encryption protects sensitive information such as credit card numbers, social security numbers, and personal information. When implemented correctly, column-level encryption can provide a higher level of security than database-level encryption.

To understand and implement encryption in SQL Server, you need to understand its encryption hierarchy and key management architecture. Each layer of encryption is protected by the layer above it, which can use asymmetric keys, certificates, or symmetric keys.

Extensible Key Management (EKM)
SQL Server EKM enables the encryption keys that protect the database files to be stored outside of the SQL Server environment, such as on a smartcard, a USB device, or the EKM module of a Hardware Security Module (HSM). It also helps secure the SQL Server instance from database administrators, because they will not necessarily have access to the external EKM/HSM module.

Service Master Key
The Service Master Key (SMK) is the root of the database engine’s encryption hierarchy and is generated automatically the first time it is needed to encrypt another key. By default, the SMK is encrypted using the Windows Data Protection API (DPAPI) at the operating system level, which uses the local machine key. The SMK can only be opened by the Windows service account that created it, or by a principal that knows the service account name and its password.

Database Master Key
The Database Master Key (DMK) is a symmetric key used to protect the private keys of certificates and asymmetric keys that are present in the database. When created it is encrypted using AES_256 and a password you provide. Query the [sys].[symmetric_keys] catalog view to get information about the DMK.
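For example, the DMK shows up in that view under the fixed name ##MS_DatabaseMasterKey##:

SELECT name, algorithm_desc, create_date
FROM sys.symmetric_keys
WHERE name = '##MS_DatabaseMasterKey##';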

Asymmetric Key
An asymmetric key consists of a private and corresponding public key. Asymmetric encryption is computationally more expensive, but more secure than symmetric encryption. You can use an asymmetric key to encrypt a symmetric key within a database.

Symmetric Key
A symmetric key is a single key that is used for both encryption and decryption.
Symmetric encryption is generally used over asymmetric encryption because it is faster and less computationally expensive.

Certificate
Certificates are digitally signed security objects that contain a public (and optionally a private) key. SQL Server can generate certificates, and you can also use externally generated certificates. Just like asymmetric keys, certificates can be used in asymmetric encryption.

Implementing Column-Level Encryption in SQL Server

SQL Server offers two options for column-level encryption: symmetric and asymmetric encryption. Let’s take a look at both options.

Symmetric Encryption
Uses the same key to encrypt and decrypt data. This type of encryption is faster and more efficient than asymmetric encryption. However, it is less secure since the same key is used to encrypt and decrypt the data.

To implement symmetric encryption in SQL Server, follow these steps:

  • Step 1: Create a Master Key

The first step is to create a master key that will be used to protect the symmetric key. You can create a master key using the following T-SQL command:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'VeryStrongPasswd';
  • Step 2: Create a Symmetric Key

The next step is to create a symmetric key that will be used to encrypt and decrypt the data. You can create a symmetric key using the following T-SQL command:

CREATE SYMMETRIC KEY MySymmetricKey
WITH ALGORITHM = AES_256
ENCRYPTION BY PASSWORD = 'MyStrongPassword';
  • Step 3: Encrypt the Column

Once you have created the symmetric key, open it and use it to encrypt the column with the following T-SQL commands (note that the column storing the ciphertext should be VARBINARY):

OPEN SYMMETRIC KEY MySymmetricKey DECRYPTION BY PASSWORD = 'MyStrongPassword';

UPDATE MyTable SET MyColumn = ENCRYPTBYKEY(KEY_GUID('MySymmetricKey'), MyColumn);

CLOSE SYMMETRIC KEY MySymmetricKey;

To decrypt the column, open the key again and run:

OPEN SYMMETRIC KEY MySymmetricKey DECRYPTION BY PASSWORD = 'MyStrongPassword';

SELECT CONVERT(varchar, DECRYPTBYKEY(MyColumn)) AS MyColumn
FROM MyTable;

CLOSE SYMMETRIC KEY MySymmetricKey;

Asymmetric Encryption
Uses a public key to encrypt the data and a private key to decrypt it. This type of encryption is more secure than symmetric encryption but is slower and less efficient.
To implement asymmetric encryption in SQL Server, follow these steps:

  • Step 1: Create a Master Key

The first step is to create a master key that will be used to protect the asymmetric key. You can create a master key using the following T-SQL command:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'MyStrongPassword';
  • Step 2: Create an Asymmetric Key

The next step is to create an asymmetric key pair. You can create an asymmetric key pair using the following T-SQL command:

CREATE ASYMMETRIC KEY MyAsymmetricKey
WITH ALGORITHM = RSA_2048 ENCRYPTION BY PASSWORD = 'MyStrongPassword';
  • Step 3: Create a Certificate

A certificate is an alternative container for a public/private key pair. SQL Server cannot create a certificate from an existing asymmetric key, so the usual approach is to create a self-signed certificate whose private key is protected by the database master key from Step 1:

CREATE CERTIFICATE MyCertificate
AUTHORIZATION dbo
WITH SUBJECT = 'Column-level encryption certificate';

(If you would rather use the asymmetric key from Step 2 directly, you can skip the certificate and encrypt with ENCRYPTBYASYMKEY(ASYMKEY_ID('MyAsymmetricKey'), ...) instead.)
  • Step 4: Encrypt the Column

Finally, you can use the certificate to encrypt the column using the following T-SQL command:

UPDATE MyTable
SET MyColumn = ENCRYPTBYCERT(CERT_ID('MyCertificate'), MyColumn);
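To read the data back, decrypt it with the certificate’s private key. A minimal sketch, assuming the certificate’s private key is protected by the database master key from Step 1 (if you protected it with its own password instead, pass that password as the third argument to DECRYPTBYCERT):

SELECT CONVERT(varchar(256), DECRYPTBYCERT(CERT_ID('MyCertificate'), MyColumn)) AS MyColumn
FROM MyTable;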

Check out this other exercise!

USE tempdb;
GO

-- Create sample table
CREATE TABLE Employees
(
    EmployeeID   INT PRIMARY KEY,
    EmployeeName VARCHAR(300),
    Position     VARCHAR(100),
    Salary       VARBINARY(128)
);
GO

-- Create symmetric key SMK_Emp
CREATE SYMMETRIC KEY SMK_Emp
WITH ALGORITHM = AES_256
ENCRYPTION BY PASSWORD = 'Pa$$w0rd';
GO

-- Open SMK_Emp
OPEN SYMMETRIC KEY SMK_Emp
DECRYPTION BY PASSWORD = 'Pa$$w0rd';
GO

-- Verify open keys
SELECT * FROM sys.openkeys;
GO

-- Insert data (salaries are stored as ciphertext)
INSERT Employees VALUES (1, 'Marcus', 'CTO', ENCRYPTBYKEY(KEY_GUID('SMK_Emp'), '$100000'));
INSERT Employees VALUES (2, 'Christopher', 'CIO', ENCRYPTBYKEY(KEY_GUID('SMK_Emp'), '$200000'));
INSERT Employees VALUES (3, 'Isabelle', 'CEO', ENCRYPTBYKEY(KEY_GUID('SMK_Emp'), '$300000'));
GO

-- Query table with encrypted values
SELECT * FROM Employees;
GO

-- Query table with decrypted values
SELECT *, CONVERT(VARCHAR, DECRYPTBYKEY(Salary)) AS DecryptedSalary FROM Employees;
GO

-- Close SMK_Emp
CLOSE SYMMETRIC KEY SMK_Emp;
GO

-- Query table with decrypted values after the key is closed (DecryptedSalary comes back NULL)
SELECT *, CONVERT(VARCHAR, DECRYPTBYKEY(Salary)) AS DecryptedSalary FROM Employees;
GO

-- Clever CTO updates their salary to match the CEO's salary (copying ciphertext needs no key)
UPDATE Employees
SET Salary = (SELECT Salary FROM Employees WHERE Position = 'CEO')
WHERE EmployeeName = 'Marcus';
GO

-- Open SMK_Emp and query the table with decrypted values
OPEN SYMMETRIC KEY SMK_Emp
DECRYPTION BY PASSWORD = 'Pa$$w0rd';

SELECT *, CONVERT(VARCHAR, DECRYPTBYKEY(Salary)) AS DecryptedSalary FROM Employees;
GO

-- Cleanup (optional): close the key before dropping it
CLOSE SYMMETRIC KEY SMK_Emp;
DROP TABLE Employees;
DROP SYMMETRIC KEY SMK_Emp;
GO
Need more review? Encryption authenticators
For more information on authenticators: SQL Server supports a number of different system functions for encryption, decryption, digital signing, and validation of digital signatures. To familiarize yourself with these functions, visit https://technet.microsoft.com/enus/library/ms365192(v=sql.105).aspx.

 

Skill No 2 – Manage Backup and Restore Databases

Introduction:

In the era of digitalisation, data has become the backbone of every business. As a result, data security and reliability have become a top priority for businesses, and losing data can lead to disastrous consequences. Therefore, it is essential to manage backup and restore databases to ensure the safety of business-critical data.

Design a Backup Strategy:

A backup solution is essential for small to middle-sized businesses to ensure that their data is protected and can be recovered in the event of a disaster or system failure. A good backup solution should be reliable, easy to use, and cost-effective.

One option for small to middle-sized businesses is to use a cloud-based backup solution. Cloud-based backup solutions are typically easy to set up and use, and they offer a high level of reliability and security. With a cloud-based backup solution, data is automatically backed up to a remote server, which is stored securely and can be accessed and restored as needed.

Another option for small to middle-sized businesses is to use a local backup solution. This involves backing up data to an external hard drive or other storage device that is kept on-site. Local backup solutions can be cost-effective and offer fast data recovery times, but they require more management and maintenance than cloud-based solutions.

To design a backup strategy, one should consider the frequency of data changes, the volume of data, and the recovery time objective (RTO). In addition, it is essential to have a backup strategy that aligns with your business needs. Some of the critical factors to consider while designing a backup strategy are:

  1. Frequency of backup: How frequently do you need to back up your database?
  2. Backup types: Which types of backup do you need to take, full, differential or incremental?
  3. Backup location: Where to store the backup? Locally or on the cloud?
  4. Recovery time objective (RTO): What is the maximum duration of downtime that you can afford, and how quickly do you need to restore data?
  5. Recovery Point Objective (RPO): The RPO defines the maximum acceptable amount of data loss following a disaster incident. The RPO is commonly expressed in minutes.
  6. Recovery Level Objective (RLO): The RLO defines the granularity of the data that needs to be restored following a disaster incident.

To design a backup strategy, you need to take into account several factors, including:

  • The size of your database: A larger database will take longer to back up. Your database might grow to a size where the backup operation can no longer be completed within an appropriate maintenance window. You might have to use different hardware or redesign your database and backup strategy at that stage.
  • The structure of your database files: A database that consists of a single primary data file will be difficult to back up within an appropriate maintenance window as it gets larger. You have no choice but to back up the database in its entirety. Alternatively, if the database consists of multiple secondary data files, these files can be backed up individually at different frequencies.
  • The speed/throughput of the network and storage.
  • How heavily utilised the processor subsystem is.
  • The volume of data modifications in the database.
  • The size of the data modifications in the database.
  • The type of data modifications in the database, for example whether they are predominantly update or insert operations.
  • How compressible the data in the database is, and whether backup compression consumes additional processor resources.
  • Whether point-in-time recovery is required: If your organization requires a database to be recovered to a specific point in time, you will have no choice but to implement log backups.
  • The recovery objectives you have defined: Your RPO, RTO, and RLO are critical to your backup strategy.
  • How the transaction log (archive logs in Oracle) is managed.
  • The database’s recovery model.
  • The importance of the data within the database. Some databases might not be important to your organization; they may be used in a staging or development environment, or they can be a replica of a production system. In such cases, there might be no business requirement to back up the database at all.

Types of backups:

SQL Server supports full, differential, and transaction log backups, while Oracle (through RMAN) supports full and incremental backups that play a similar role.

  1. Full Backup: A full backup includes the entire database and is taken regularly. It is the most comprehensive backup and takes the most time and space to complete.
  2. Differential Backup: A differential backup includes only the changes made since the last full backup. It takes less time and space than a full backup.
  3. Transaction Log / Incremental Backup: A transaction log backup (SQL Server) or incremental backup (Oracle) includes only the changes made since the previous backup. It takes less time and space than both full and differential backups, and in SQL Server it is what makes point-in-time recovery possible.
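As a minimal T-SQL sketch, assuming a database named SalesDB and a local D:\Backups folder (both placeholder names), the three SQL Server backup types look like this:

-- Full backup: the baseline every other backup depends on
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB_FULL.bak'
WITH COMPRESSION, CHECKSUM;

-- Differential backup: only the extents changed since the last full backup
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB_DIFF.bak'
WITH DIFFERENTIAL, COMPRESSION, CHECKSUM;

-- Transaction log backup: only the log records since the last log backup
-- (requires the FULL or BULK_LOGGED recovery model)
BACKUP LOG SalesDB
TO DISK = N'D:\Backups\SalesDB_LOG.trn'
WITH COMPRESSION, CHECKSUM;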

Manage Transaction Logs:

Transaction logs are records of all the transactions made on a database. Managing transaction logs is critical for database recovery. The transaction logs should be regularly backed up and monitored for unusual growth or issues. In Oracle, for example, the equivalent concept is the archived redo log files.

Why does the transaction log grow so large?

A Shorter Answer:

You probably either have a long-running transaction running (Index maintenance? Big batch delete or update?), or you are in the “default” (more below on what is meant by default) recovery mode of Full and have not taken a log backup (or aren’t taking them frequently enough).

If it is a recovery model issue, the simple answer could be to switch to the Simple recovery model if you do not need point-in-time recovery and regular log backups. Many people, though, make that their answer without understanding recovery models. Read on to understand why it matters and then decide what to do. You could also just start taking log backups and stay in Full recovery.

There could be other reasons, but these are the most common. This answer begins to dive into the most common two reasons and gives you some background information on the why and how behind the reasons as well as explores some other reasons.

A Longer Answer:

What scenarios can cause the log to keep growing? There are many reasons, but usually they fall into one of two patterns: a misunderstanding about recovery models, or long-running transactions.

The recovery model is a database property that controls how transactions are logged, whether the transaction log requires (and allows) backing up, and what restore operations are available. Three recovery models exist: simple, full, and bulk-logged. Typically, a database uses either the full or the simple recovery model, and a database can be switched to another recovery model at any time. We will not talk about the bulk-logged recovery model here because it is not part of the normal University architecture.
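Checking and changing the recovery model is straightforward. A quick sketch, with SalesDB as a placeholder database name:

-- See the current recovery model of every database
SELECT name, recovery_model_desc
FROM sys.databases;

-- Switch to SIMPLE (no log backups, no point-in-time recovery)
ALTER DATABASE SalesDB SET RECOVERY SIMPLE;

-- Switch back to FULL (take a full backup and start log backups afterwards)
ALTER DATABASE SalesDB SET RECOVERY FULL;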

Recovery in General

  • Crash/Restart Recovery

One purpose of the transaction log file is crash/restart recovery: the rolling forward and rolling back of work that was either done (rolling forward/redo) before a crash or restart, or started but not finished (rolling back/undo) when the crash or restart happened. It is the transaction log’s job to see that a transaction started but never finished (it was rolled back, or the crash/restart happened before the transaction was committed) and to say, “Hey… this never really finished, let’s roll it back” during recovery. It is also the log’s job to see that you did finish something and that your client application was told it was finished (even if it hadn’t yet hardened to your data file) and say, “Hey… this really happened, let’s roll it forward, let’s make it like the applications think it was” after a restart. There is more, but that is the main purpose.

  • Point-in-Time Recovery

The other purpose of a transaction log file is to give us the ability to recover to a point in time after an “oops” in a database, or to guarantee a recovery point in the event of a hardware failure involving the data and/or log files of a database. Because the transaction log contains the records of transactions that have been started and finished, SQL Server can use this information to get a database back to the point it was at before an issue happened. But that isn’t always an available option for us: for it to work, the database has to be in the right recovery model and we have to take log backups.

Recovery Models

  • Simple Recovery Model

With the above introduction, it is easiest to talk about Simple Recovery model first. In this model, you are telling SQL Server: “I am fine with you using your transaction log file for crash and restart recovery…” (You really have no choice there. Look up ACID properties and that should make sense quickly.) “…but once you no longer need it for that crash/restart recovery purpose, go ahead and reuse the log file.”

SQL Server listens to this request in Simple Recovery and it only keeps the information it needs to do crash/restart recovery. Once SQL Server is sure it can recover because data is hardened to the data file (more or less), the data that has been hardened is no longer necessary in the log and is marked for truncation – which means it gets re-used.

  • Full Recovery Model

With Full Recovery, you are telling SQL Server that you want to be able to recover to a specific point in time, as long as your log file is available, or to a specific point in time that is covered by a log backup. In this case, when SQL Server reaches the point where it would be safe to truncate the log file under the Simple Recovery Model, it will not do so. Instead, it lets the log file keep growing until you take a log backup (or run out of space on your log file drive) under normal circumstances.

Recovery Models Source Info

https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/recovery-models-sql-server?view=sql-server-2017

Example No 1: When you switch from Simple to Full

There are rules and exceptions here. We’ll talk about long-running transactions in depth below.

But one warning to keep in mind for Full Recovery Mode is this: If you just switch into Full Recovery mode but never take an initial Full Backup, SQL Server will not honour your request to be in the Full Recovery model. Your transaction log will continue to operate as it has in Simple until you switch to Full Recovery Model AND take your first Full Backup.

Full Recovery Model without log backups is bad.

That’s the most common reason for uncontrolled log growth. Answer: Being in Full Recovery mode without having any log backups. Real common mistake.

Why is this such a common mistake?

Why does it happen all the time? Because each new database gets its initial recovery model setting by looking at the model database.

The model database’s initial recovery model setting is always Full Recovery Model, until and unless someone changes that. So you could say the “default recovery model” is Full. Many people are unaware of this and have their databases running in Full Recovery Model with no log backups; therefore, the transaction log file grows much larger than necessary. This is why it is important to change defaults when they don’t work for your organisation and its needs.
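If Full is not what you want new databases to inherit, the default can be changed on the model database itself. A small sketch, assuming your organisation genuinely does not need point-in-time recovery by default:

-- Check what new databases will inherit
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'model';

-- Change the default for future databases
ALTER DATABASE model SET RECOVERY SIMPLE;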

Full Recovery Model with too few log backups is bad.

You can also get yourself in trouble here by not taking log backups frequently enough. Taking one log backup a day may sound fine, and it makes a restore require fewer restore commands, but keeping the discussion above in mind, that log file will continue to grow and grow until you take log backups. That point should not be a problem with the Maint – DatabaseBackup – ALL_DATABASES – LOG job.

How do I find out what log backup frequency I need? That is a question you need to answer when you are designing your solution.

You need to consider your log backup frequency with two things in mind:

  1. Recovery Needs – This should hopefully be first. In the event that the drive housing your transaction log goes bad or you get serious corruption that affects your log backup, how much data can be lost? If that number is no more than 10-15 minutes, then you need to be taking the log backup every 10-15 minutes, end of discussion.
  2. Log Growth – If your organisation is fine with losing more data because of the ability to recreate that day easily, you may be fine to have a log backup much less frequently than 15 minutes. Maybe your organisation is fine with every 4 hours. But you must look at how many transactions you generate in 4 hours. Will allowing the log to keep growing in those four hours make too large of a log file? Will that mean your log backups take too long?
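One practical way to sanity-check this is to look at how often log backups are actually happening. A small sketch against the msdb backup history tables, with SalesDB as a placeholder database name:

-- Most recent transaction log backups for one database (type 'L' = log)
SELECT TOP (20)
       database_name,
       backup_start_date,
       backup_finish_date
FROM msdb.dbo.backupset
WHERE database_name = N'SalesDB'
  AND type = 'L'
ORDER BY backup_start_date DESC;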

Querying the sys.databases catalog view, you can see information describing why your log file may be waiting on truncate/reuse.

There is a column called log_reuse_wait with a lookup ID of the reason code and a log_reuse_wait_desc column with a description of the wait reason. The referenced books and online articles cover the majority of the reasons (the ones you are likely to see and the ones we can explain). What do these columns mean?

SELECT name, log_reuse_wait_desc, log_reuse_wait FROM sys.databases;
  • 0 = Nothing
    What it sounds like: the log shouldn’t be waiting on anything.
  • 1 = Checkpoint
    Waiting for a checkpoint to occur. This should happen and you should be fine – but there are some cases to look for here for later answers or edits.
  • 2 = Log backup
    You are waiting for a log backup to occur. Either you have them scheduled and it will happen soon, or you have the first problem described here and you now know how to fix it
  • 3 = Active backup or restore
    A backup or restore operation is running on the database
  • 4 = Active transaction
    There is an active transaction that needs to complete (either way – ROLLBACK or COMMIT) before the log can be backed up. This is the second reason described in this answer.
  • 5 = Database mirroring
    Either a mirror is getting behind or under some latency in a high performance mirroring situation or mirroring is paused for some reason
  • 6 = Replication
    There can be issues with replication that would cause this – like a log reader agent not running, a database thinking it is marked for replication that no longer is and various other reasons. You can also see this reason and it is perfectly normal because you are looking at just the right time, just as transactions are being consumed by the log reader
  • 7 = Database snapshot creation
    You are creating a database snapshot, you’ll see this if you look at just the right moment as a snapshot is being created
  • 8 = Log Scan
    I have yet to encounter an issue with this running along forever. If you look long enough and frequently enough you can see this happen, but it shouldn’t be a cause of excessive transaction log growth, that I’ve seen.
  • 9 = An Always On Availability Groups secondary replica is applying transaction log records of this database to a corresponding secondary database. About the clearest description yet..

Some things you don’t want to do:

  • Back up the log with TRUNCATE_ONLY option and then SHRINKFILE. For one, this TRUNCATE_ONLY option has been deprecated and is no longer available in current versions of SQL Server. Second, if you are in a FULL recovery model, this will destroy your log chain and require a new, full backup.
  • Detach the database, delete the log file, and re-attach. I can’t emphasize how dangerous this can be. Your database may not come back up, it may come up as a suspect, you may have to revert to a backup (if you have one), etc. etc.
  • Use the “shrink database” option. DBCC SHRINKDATABASE and the maintenance plan option to do the same are bad ideas, especially if you really only need to resolve a log problem. Target the file you want to adjust and adjust it independently, using DBCC SHRINKFILE or ALTER DATABASE … MODIFY FILE.
  • Shrink the log file to 1 MB. This looks tempting because, hey, SQL Server will let me do it in certain scenarios, and look at all the space it frees! Unless your database is read-only (and if it is, you should mark it as such using ALTER DATABASE), this will absolutely just lead to many unnecessary growth events, as the log has to accommodate current transactions regardless of the recovery model. What is the point of freeing up that space temporarily just so SQL Server can take it back slowly and painfully?
  • Create a second log file. This will provide temporary relief for the drive that has filled your disk, but this is like trying to fix a punctured lung with a band-aid. You should deal with the problematic log file directly instead of just adding another potential problem. Other than redirecting some transaction log activity to a different drive, a second log file really does nothing for you (unlike a second data file), since only one of the files can ever be used at a time. Paul Randal also explains why multiple log files can bite you later.
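If you do need to reclaim space after fixing the root cause, a hedged sketch of the safer route mentioned above, targeting only the log file (SalesDB and its logical log file name are placeholders; look up the real name in sys.database_files first):

USE SalesDB;
GO

-- Find the logical name and current size of each database file (size is in 8 KB pages)
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- After the root cause is resolved and (in FULL recovery) a log backup has run,
-- shrink only the log file back to a sensible working size, e.g. 4 GB
DBCC SHRINKFILE (SalesDB_log, 4096);
GO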

Credits

https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/recovery-models-sql-server?view=sql-server-2017

Paul S. Randal – https://www.sqlskills.com/blogs/paul/sqlskills-sql101-switching-recovery-models/

https://dba.stackexchange.com/users/1192/paul-white

https://www.brentozar.com/archive/2016/03/my-favorite-system-column-log_reuse_wait_desc/