Part 1 of the Replication Monitoring Series
There is an existing MSSQLTips.com blog post that shows how to identify Replication Log Reader latency via Replication Monitor and by querying the Distributor's distribution system tables. In this blog post, I will share another approach: extracting the latency metric from the Publisher's transaction log. The advantage of this approach is that we read the Log Reader latency from the true source instead of waiting for the Distributor to routinely update its monitoring data.
As a reminder, monitoring Log Reader latency is critically important in a highly transactional database environment. Failure to do so could lead to stale replicated data at the subscriber(s) as well as log file growth at the publisher.
In addition to SQL Replication monitoring, identifying Log Reader latency is an important metric for Change Data Capture (CDC) workloads.
As shown below, we will use both the DBCC OPENTRAN command and the fn_dblog() function to extract the Log Reader's position in the Publisher's transaction log.
DECLARE @LSN NVARCHAR(25),
        @cleanLSN NVARCHAR(25);

-- See for more info: https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-opentran-transact-sql?view=sql-server-ver17#b-specify-the-with-tableresults-option
IF OBJECT_ID('tempdb.dbo.#OpenTranStatus') IS NOT NULL
    DROP TABLE #OpenTranStatus;

-- Create the temporary table to accept the results.
CREATE TABLE #OpenTranStatus (
    ActiveTransaction VARCHAR(25),
    Details sql_variant
);

-- Execute the command, putting the results in the table.
INSERT INTO #OpenTranStatus
EXEC ('DBCC OPENTRAN WITH TABLERESULTS, NO_INFOMSGS');

-- Extract the LSN the Log Reader is currently positioned at.
SELECT @LSN = CAST(Details AS NVARCHAR(25))
FROM #OpenTranStatus
WHERE ActiveTransaction = 'REPL_NONDIST_OLD_LSN';

-- Calculate the latency.
IF @LSN <> '(0:0:0)'
BEGIN
    -- Strip the surrounding parentheses so fn_dblog() accepts the LSN.
    SET @cleanLSN = REPLACE(REPLACE(@LSN, '(', ''), ')', '');

    SELECT DATEDIFF(SECOND, [Begin Time], GETDATE()) AS 'Log reader latency (secs)'
    FROM sys.fn_dblog(@cleanLSN, @cleanLSN)
    WHERE [Begin Time] IS NOT NULL;
END
ELSE
BEGIN
    SELECT 0 AS 'Log reader latency (secs)';
END

DROP TABLE #OpenTranStatus;
First, we create a temporary table to hold the output of DBCC OPENTRAN. We then execute the command and extract the REPL_NONDIST_OLD_LSN value, which identifies the LSN where the Log Reader is currently reading.
Next, we use that captured LSN to query fn_dblog(). This returns the [Begin Time]—the transaction’s start time. Finally, we use DATEDIFF to calculate the difference between that start time and the current time. The result is our Log Reader latency in seconds.
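The REPLACE calls exist because DBCC OPENTRAN reports the LSN wrapped in parentheses, while fn_dblog() expects the bare colon-separated form. A quick illustration of the conversion (the LSN value here is a made-up sample, not real output):

```sql
-- Sample value only; in the script above, @LSN comes from DBCC OPENTRAN.
DECLARE @LSN NVARCHAR(25) = N'(137:56:1)';
-- Strip the parentheses before passing the LSN to fn_dblog().
SELECT REPLACE(REPLACE(@LSN, '(', ''), ')', '') AS CleanLSN; -- 137:56:1
```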

Collecting Log Reader Latency at Scale
Let’s take this a step further.
Imagine an environment with multiple Publishers and several replication-enabled databases. Instead of running the script manually against every database, we can streamline the whole process with PowerShell. The goal is to reach every Publisher, check the Log Reader latency for each database, and pull it all back into a single result set.
First, we must define a query to identify all publishers and their replication-enabled databases. We can leverage the system tables within the Distributor’s distribution database to extract this information.
USE distribution
GO
SELECT rs.srvname, pd.publisher_db
FROM MSpublisher_databases pd
JOIN MSreplservers rs on pd.publisher_id = rs.srvid

In a real production environment, you will likely have far more publishers and databases than what I am showing here, but for this demo, let's assume we are dealing with multiple rows of data.
Once we have the list of publishers and their databases, we can use a simple Invoke-DbaQuery call to query each one remotely.
Here is a demo to highlight this data collection effort.
Import-Module dbatools
$Distributor = 'SERVERB'
$DistributorDb = 'distribution'
$PublisherQuery = '
SELECT rs.srvname publisher_srv, pd.publisher_db
FROM MSpublisher_databases pd
JOIN MSreplservers rs on pd.publisher_id = rs.srvid
'
$LogReaderLatencyQuery = '
DECLARE @LSN NVARCHAR(25),
        @cleanLSN NVARCHAR(25);

-- See for more info: https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-opentran-transact-sql?view=sql-server-ver17#b-specify-the-with-tableresults-option
IF OBJECT_ID(''tempdb.dbo.#OpenTranStatus'') IS NOT NULL
    DROP TABLE #OpenTranStatus;

-- Create the temporary table to accept the results.
CREATE TABLE #OpenTranStatus (
    ActiveTransaction VARCHAR(25),
    Details sql_variant
);

-- Execute the command, putting the results in the table.
INSERT INTO #OpenTranStatus
EXEC (''DBCC OPENTRAN WITH TABLERESULTS, NO_INFOMSGS'');

-- Extract the LSN the Log Reader is currently positioned at.
SELECT @LSN = CAST(Details AS NVARCHAR(25))
FROM #OpenTranStatus
WHERE ActiveTransaction = ''REPL_NONDIST_OLD_LSN'';

-- Calculate the latency.
IF @LSN <> ''(0:0:0)''
BEGIN
    -- Strip the surrounding parentheses so fn_dblog() accepts the LSN.
    SET @cleanLSN = REPLACE(REPLACE(@LSN, ''('', ''''), '')'', '''');

    SELECT DATEDIFF(SECOND, [Begin Time], GETDATE()) AS ''LogReaderLatencySecs''
    FROM sys.fn_dblog(@cleanLSN, @cleanLSN)
    WHERE [Begin Time] IS NOT NULL;
END
ELSE
BEGIN
    SELECT 0 AS ''LogReaderLatencySecs'';
END

DROP TABLE #OpenTranStatus;
'
$PublisherDbs = Invoke-DbaQuery -SqlInstance $Distributor -Database $DistributorDb -Query $PublisherQuery
$LogReaderResults = [System.Collections.Generic.List[PSCustomObject]]::new()
foreach ($PublisherDb in $PublisherDbs) {
    $Results = $null
    $Results = Invoke-DbaQuery -SqlInstance $PublisherDb.publisher_srv -Database $PublisherDb.publisher_db -Query $LogReaderLatencyQuery
    $Object = [PSCustomObject]@{
        PublisherServer      = $PublisherDb.publisher_srv
        PublisherDatabase    = $PublisherDb.publisher_db
        LogReaderLatencySecs = $Results.LogReaderLatencySecs
    }
    $LogReaderResults.Add($Object)
}
$LogReaderResults | Format-Table -AutoSize
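From here, the collected results can feed a simple alert. As a minimal sketch (the 300-second threshold is an arbitrary example, not a recommendation):

```powershell
# Flag any publisher database whose Log Reader latency crosses the threshold.
$ThresholdSecs = 300  # example value; tune for your environment
$LogReaderResults |
    Where-Object { $_.LogReaderLatencySecs -gt $ThresholdSecs } |
    Format-Table -AutoSize
```

The same filtered list could just as easily be written to a table or pushed to your monitoring platform.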

Why consider this approach?
In a previous role, we managed SQL Replication at a massive scale, replicating over 10,000 tables across 40 publishers and 100 subscribers. We quickly realized that Replication Monitor was not meeting our monitoring needs as it could not handle the overhead from 10 remote distributors while maintaining real-time refreshes. We ultimately pivoted to a custom Grafana dashboard, using PowerShell to aggregate our own metrics. Tracking Log Reader latency was a critical component of this strategy, allowing us to maintain clear visibility into the health of all publishers.
Acknowledgement
I’ve always found that I learn better by collaborating on hard problems than by reading books or source code. This post is a perfect example of that.
I want to credit the Data Movement team at one of my previous employers. We brainstormed and built this approach together, bouncing ideas off one another until we found a solution that gave us the visibility we needed.
