Distributed Transactions from the .NET SDK
A practical guide to using Couchbase’s distributed ACID transactions, via the .NET API.
This document presents a practical HOWTO on using Couchbase transactions, following on from our transactions documentation.
Below we show you how to create Transactions, step-by-step. You may also want to start with our transactions examples repository, which features useful downloadable examples of using Distributed Transactions.
API Docs are available online.
Requirements
- Couchbase Server 6.6.2 or above.
- Couchbase .NET client 3.2.3 or above. It is recommended you use the package on NuGet.
- NTP should be configured so nodes of the Couchbase cluster are in sync with time.
- The application, if it is using extended attributes (XATTRs), must avoid using the XATTR field txn, which is reserved for Couchbase use.
If using a single node cluster (for example, during development), then note that the default number of replicas for a newly created bucket is 1.
If left at this default, then all Key-Value writes performed with durability will fail with a DurabilityImpossibleException.
In turn this will cause all transactions (which perform all Key-Value writes durably) to fail.
This setting can be changed via GUI or command line.
If the bucket already existed, then the server needs to be rebalanced for the setting to take effect.
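The replica count can also be set programmatically when creating a bucket, through the SDK's bucket management API. Here is a minimal sketch (the bucket name and RAM quota are illustrative, and cluster is assumed to be an already-connected ICluster):
using Couchbase.Management.Buckets;
var settings = new BucketSettings
{
    Name = "default",
    RamQuotaMB = 100,
    // A single-node development cluster cannot satisfy the default of 1 replica,
    // so set the replica count to 0 to allow durable writes.
    NumReplicas = 0
};
await cluster.Buckets.CreateBucketAsync(settings);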
Getting Started
Couchbase transactions require no additional components or services to be configured. Simply add the transactions library into your project. Version 1.1.0 was released on October 29th, 2021. See the Release Notes for the latest version.
With NuGet this can be accomplished by using the NuGet Package Manager in your IDE:
PM > Install-Package Couchbase.Transactions -Version 1.1.0
Or via the CLI
dotnet add package Couchbase.Transactions --version 1.1.0
Or by using PackageReference in your .csproj file:
<PackageReference Include="Couchbase.Transactions" Version="1.1.0" />
A complete, simple NuGet example is available in our transactions examples repository.
Initializing Transactions
Here are all imports used by the following examples:
using System;
using System.Linq;
using System.Threading.Tasks;
using Couchbase;
using Couchbase.KeyValue;
using Couchbase.Query;
using Couchbase.Transactions;
using Couchbase.Transactions.Config;
using Couchbase.Transactions.Error;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json.Linq;
The starting point is the Transactions object.
It is very important that the application ensures that only one of these is created per cluster, as it performs automated background clean-up processes that should not be duplicated.
In a dependency injection context, this instance should be injected as a singleton.
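For example, with Microsoft.Extensions.DependencyInjection, the registration might look like this sketch (it assumes an ICluster has been registered elsewhere):
var services = new ServiceCollection();
services.AddSingleton<Transactions>(serviceProvider =>
{
    // Resolve the already-registered cluster and create the single
    // Transactions instance for the lifetime of the application.
    var cluster = serviceProvider.GetRequiredService<ICluster>();
    return Transactions.Create(cluster, TransactionConfigBuilder.Create());
});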
// Initialize the Couchbase cluster
var options = new ClusterOptions().WithCredentials("Administrator", "password");
var cluster = await Cluster.ConnectAsync("couchbase://localhost", options).ConfigureAwait(false);
var bucket = await cluster.BucketAsync("default").ConfigureAwait(false);
var collection = bucket.DefaultCollection();
// Create the single Transactions object
var transactions = Transactions.Create(cluster, TransactionConfigBuilder.Create());
Multiple Transactions Objects
Generally an application will need just one Transactions object, and in fact the library will usually warn if more are created.
Each Transactions object uses some resources, including a thread-pool.
There is one rare exception where an application may need to create multiple Transactions objects, which is covered in Custom Metadata Collections.
Configuration
Transactions can optionally be configured at the point of creating the Transactions object:
var transactions = Transactions.Create(_cluster,
TransactionConfigBuilder.Create()
.DurabilityLevel(DurabilityLevel.PersistToMajority)
.Build());
The default configuration will perform all writes with the durability setting Majority, ensuring that each write is available in-memory on the majority of replicas before the transaction continues.
There are two higher durability settings available that will additionally wait for all mutations to be written to physical storage on either the active or the majority of replicas, before continuing.
This further increases safety, at a cost of additional latency.
A level of None is present but its use is discouraged and unsupported.
If durability is set to None, then ACID semantics are not guaranteed.
Creating a Transaction
A core idea of Couchbase transactions is that an application supplies the logic for the transaction inside a lambda, including any conditional logic required, and the transaction is then automatically committed. If a transient error occurs, such as a temporary conflict with another transaction, then the transaction will rollback what has been done so far and run the lambda again. The application does not have to do these retries and error handling itself.
Each run of the lambda is called an attempt, inside an overall transaction.
As with the Couchbase .NET Client, you should use the library asynchronously via the async/await keywords (the exceptions will be explained later in Error Handling):
try
{
await _transactions.RunAsync(async (ctx)=>
{
// 'ctx' is an AttemptContext, which permits getting, inserting,
// removing and replacing documents, along with committing and
// rolling back the transaction.
// ... Your transaction logic here ...
// This call is optional - if you leave it off, the transaction
// will be committed anyway.
await ctx.CommitAsync().ConfigureAwait(false);
}).ConfigureAwait(false);
}
catch (TransactionCommitAmbiguousException e)
{
// The application will of course want to use its own logging rather
// than Console.WriteLine
Console.Error.WriteLine("Transaction possibly committed");
Console.Error.WriteLine(e);
}
catch (TransactionFailedException e)
{
Console.Error.WriteLine("Transaction did not reach commit point");
Console.Error.WriteLine(e);
}
The asynchronous API allows you to use the thread pool, which can help you scale with excellent efficiency.
However, operations inside an individual transaction should be kept in-order and each awaited immediately.
Do not use fire-and-forget tasks under any circumstances.
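To illustrate the pattern (a sketch; the document IDs are assumed to exist): operations inside each lambda are awaited sequentially, while separate, independent transactions may safely run concurrently on the thread pool.
var tasks = Enumerable.Range(0, 3).Select(i =>
    _transactions.RunAsync(async ctx =>
    {
        // Inside one attempt, each operation is awaited before the next starts.
        var doc = await ctx.GetAsync(_collection, $"doc-{i}").ConfigureAwait(false);
        await ctx.ReplaceAsync(doc, doc.ContentAs<JObject>()).ConfigureAwait(false);
    }));
// Independent transactions can run concurrently and be awaited together.
await Task.WhenAll(tasks).ConfigureAwait(false);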
The lambda gets passed an AttemptContext object, generally referred to as ctx here.
Since the lambda may be rerun multiple times, it is important that it does not contain any side effects.
In particular, you should never perform regular operations on a Collection, such as collection.InsertAsync(), inside the lambda.
Such operations may be performed multiple times, and will not be performed transactionally.
Instead such operations must be done through the ctx object, e.g. ctx.InsertAsync().
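As a quick sketch of the distinction:
await _transactions.RunAsync(async ctx =>
{
    // WRONG: a regular SDK operation. It is non-transactional, and may
    // run once per attempt if the lambda is retried:
    // await _collection.UpsertAsync("doc-id", new { });

    // Correct: performed through the AttemptContext, so it is staged,
    // committed atomically, and rolled back if the attempt is retried:
    _ = await ctx.InsertAsync(_collection, "doc-id", new { }).ConfigureAwait(false);
}).ConfigureAwait(false);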
Examples
A code example is worth a thousand words, so here is a quick summary of the main transaction operations. They are described in more detail below.
try
{
var result = await _transactions.RunAsync(async (ctx) =>
{
// Inserting a doc:
var insertedDoc = await ctx.InsertAsync(_collection, "doc-a", new {}).ConfigureAwait(false);
// Getting documents:
// Use ctx.GetAsync if the document should exist, and the transaction
// will fail if it does not
var docA = await ctx.GetAsync(_collection, "doc-a").ConfigureAwait(false);
// Replacing a doc:
var docB = await ctx.GetAsync(_collection, "doc-b").ConfigureAwait(false);
var content = docB.ContentAs<JObject>();
content["transactions"] = "are awesome";
var replacedDoc = await ctx.ReplaceAsync(docB, content).ConfigureAwait(false);
// Removing a doc:
var docC = await ctx.GetAsync(_collection, "doc-c").ConfigureAwait(false);
await ctx.RemoveAsync(docC).ConfigureAwait(false);
// This call is optional - if you leave it off, the transaction
// will be committed anyway.
await ctx.CommitAsync().ConfigureAwait(false);
}).ConfigureAwait(false);
}
catch (TransactionCommitAmbiguousException e)
{
Console.WriteLine("Transaction possibly committed");
Console.WriteLine(e);
}
catch (TransactionFailedException e)
{
Console.WriteLine("Transaction did not reach commit point");
Console.WriteLine(e);
}
Transaction Mechanics
While this document is focussed on presenting how transactions are used at the API level, it is useful to have a high-level understanding of the mechanics. Reading this section is completely optional.
Recall that the application-provided lambda (containing the transaction logic) may be run multiple times by Couchbase transactions.
Each such run is called an attempt inside the overall transaction.
Active Transaction Record Entries
The first mechanic is that each of these attempts adds an entry to a metadata document in the Couchbase cluster. These metadata documents:
- Are named Active Transaction Records, or ATRs.
- Are created and maintained automatically.
- Begin with "_txn:atr-".
- Each contain entries for multiple attempts.
- Are viewable, and they should not be modified externally.
Each such ATR entry stores some metadata and, crucially, whether the attempt has committed or not. In this way, the entry acts as the single point of truth for the transaction, which is essential for providing an 'atomic commit' during reads.
Staged Mutations
The second mechanic is that mutating a document inside a transaction does not directly change the body of the document. Instead, the post-transaction version of the document is staged alongside the document (technically in its extended attributes (XATTRs)). In this way, all changes are invisible to all parts of the Couchbase Data Platform until the commit point is reached.
These staged document changes effectively act as a lock against other transactions trying to modify the document, preventing write-write conflicts.
Cleanup
There are safety mechanisms to ensure that leftover staged changes from a failed transaction cannot block live transactions indefinitely.
These include an asynchronous cleanup process that is started with the creation of the Transactions object, and scans for expired transactions created by any application, on all buckets.
Note that if an application is not running, then this cleanup is also not running.
The cleanup process is detailed below in Asynchronous Cleanup.
Committing
Only once the lambda has successfully run to conclusion, will the attempt be committed. This updates the ATR entry, which is used as a signal by transactional actors to use the post-transaction version of a document from its XATTRs. Hence, updating the ATR entry is an 'atomic commit' switch for the transaction.
After this commit point is reached, the individual documents will be committed (or "unstaged"). This provides an eventually consistent commit for non-transactional actors.
Key-Value Mutations
Replacing
Replacing a document requires awaiting a TransactionGetResult returned from ctx.GetAsync(), ctx.InsertAsync(), or another ctx.ReplaceAsync() call first.
This is necessary to ensure that the document is not involved in another transaction.
(If it is, then the transaction will handle this, generally by rolling back what has been done so far, and retrying the lambda.)
await _transactions.RunAsync(async ctx =>
{
var anotherDoc = await ctx.GetAsync(_collection, "anotherDoc").ConfigureAwait(false);
var content = anotherDoc.ContentAs<JObject>();
content["transactions"] = "are awesome";
_ = await ctx.ReplaceAsync(anotherDoc, content).ConfigureAwait(false);
}).ConfigureAwait(false);
Removing
As with replaces, removing a document requires awaiting a TransactionGetResult from a previous transaction operation first.
await _transactions.RunAsync(async ctx =>
{
var anotherDoc = await ctx.GetAsync(_collection, "anotherDoc").ConfigureAwait(false);
await ctx.RemoveAsync(anotherDoc).ConfigureAwait(false);
}).ConfigureAwait(false);
Key-Value Reads
There are two ways to get a document, GetAsync and GetOptionalAsync:
await _transactions.RunAsync(async ctx =>
{
var docId = "a-doc";
var docOpt = await ctx.GetAsync(_collection, docId).ConfigureAwait(false);
}).ConfigureAwait(false);
GetAsync will cause the transaction to fail with TransactionFailedException if the document does not exist (after rolling back any changes, of course).
It is provided as a convenience method so the developer does not have to check for null if the document must exist for the transaction to succeed.
GetOptionalAsync instead returns null in that case, so the application can handle a missing document itself.
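A short sketch of the optional variant (the document ID is illustrative):
await _transactions.RunAsync(async ctx =>
{
    var docOpt = await ctx.GetOptionalAsync(_collection, "maybe-a-doc").ConfigureAwait(false);
    if (docOpt == null)
    {
        // Absence is a valid case for this transaction; create the document.
        _ = await ctx.InsertAsync(_collection, "maybe-a-doc", new { }).ConfigureAwait(false);
    }
}).ConfigureAwait(false);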
Gets will 'read your own writes', e.g. this will succeed:
await _transactions.RunAsync(async ctx =>
{
var docId = "docId";
_ = await ctx.InsertAsync(_collection, docId, new { }).ConfigureAwait(false);
var doc = await ctx.GetAsync(_collection, docId).ConfigureAwait(false);
Console.WriteLine((object) doc.ContentAs<dynamic>());
}).ConfigureAwait(false);
N1QL Queries
As of Couchbase Server 7.0, N1QL queries may be used inside the transaction lambda, freely mixed with Key-Value operations.
BEGIN TRANSACTION
There are two ways to initiate a transaction with Couchbase 7.0: via a transactions library, and via the query service directly using BEGIN TRANSACTION.
The latter is intended for those using query via the REST API, or using the query workbench in the UI, and it is strongly recommended that application writers instead use the transactions library.
This provides these benefits:
- It automatically handles errors and retrying.
- It allows Key-Value operations and N1QL queries to be freely mixed.
- It takes care of issuing BEGIN TRANSACTION, END TRANSACTION, COMMIT and ROLLBACK automatically. These become an implementation detail and you should not use these statements inside the lambda.
Supported N1QL
The majority of N1QL DML statements are permitted within a transaction. Specifically: INSERT, UPSERT, DELETE, UPDATE, MERGE and SELECT are supported.
DDL statements, such as CREATE INDEX, are not.
Using N1QL
If you already use N1QL from the .NET SDK, then its use in transactions is very similar.
It returns the same IQueryResult<T> you are used to, and takes most of the same options.
You must take care to write ctx.QueryAsync() inside the lambda however, rather than cluster.QueryAsync() or scope.QueryAsync().
An example of selecting some rows from the travel-sample bucket:
var st = "SELECT * FROM `travel-sample`.inventory.hotel WHERE country = $1";
var transactionResult = await transactions.RunAsync(async ctx => {
IQueryResult<object> qr = await ctx.QueryAsync<object>(st,
new TransactionQueryOptions().Parameter("United Kingdom"));
await foreach (var result in qr.Rows)
{
Console.Out.WriteLine($"result = {result}", result);
}
});
Rather than specifying the full "`travel-sample`.inventory.hotel" name each time, it is easier to pass a reference to the inventory IScope:
IBucket travelSample = await cluster.BucketAsync("travel-sample");
IScope inventory = travelSample.Scope("inventory");
var transactionResult = await transactions.RunAsync(async ctx =>
{
var st = "SELECT * FROM `travel-sample`.inventory.hotel WHERE country = $1";
IQueryResult<object> qr = await ctx.QueryAsync<object>(st,
options: new TransactionQueryOptions().Parameter("United Kingdom"),
scope: inventory);
});
An example using an IScope for an UPDATE:
var hotelChain = "http://marriot%";
var country = "United States";
await transactions.RunAsync(async ctx => {
var qr = await ctx.QueryAsync<object>(
statement: "UPDATE hotel SET price = $price WHERE url LIKE $url AND country = $country",
configure: options => options.Parameter("price", 99.99m)
.Parameter("url", hotelChain)
.Parameter("country", country),
scope: inventory);
Console.Out.WriteLine($"Records Updated = {qr?.MetaData.Metrics.MutationCount}");
});
And an example combining SELECTs and UPDATEs. It’s possible to call regular C# methods from the lambda, as shown here, permitting complex logic to be performed. Just remember that since the lambda may be called multiple times, the method may be called multiple times too.
await transactions.RunAsync(async ctx => {
// Find all hotels of the chain
IQueryResult<Review> qr = await ctx.QueryAsync<Review>(
statement: "SELECT reviews FROM hotel WHERE url LIKE $1 AND country = $2",
configure: options => options.Parameter(hotelChain).Parameter(country),
scope: inventory);
// This function (not provided here) will use a trained machine learning model to provide a
// suitable price based on recent customer reviews.
var updatedPrice = PriceFromRecentReviews(qr);
// Set the price of all hotels in the chain
await ctx.QueryAsync<object>(
statement: "UPDATE hotel SET price = $1 WHERE url LIKE $2 AND country = $3",
configure: options => options.Parameter(updatedPrice).Parameter(hotelChain).Parameter(country),
scope: inventory);
});
Read Your Own Writes
As with Key-Value operations, N1QL queries support Read Your Own Writes.
This example shows inserting a document and then selecting it again.
await transactions.RunAsync(async ctx => {
await ctx.QueryAsync<object>("INSERT INTO `default` VALUES ('doc', {'hello':'world'})", TransactionQueryConfigBuilder.Create()); (1)
// Performing a 'Read Your Own Write'
var st = "SELECT `default`.* FROM `default` WHERE META().id = 'doc'"; (2)
IQueryResult<object> qr = await ctx.QueryAsync<object>(st, TransactionQueryConfigBuilder.Create());
Console.Out.WriteLine($"ResultCount = {qr?.MetaData.Metrics.ResultCount}");
});
1 | The inserted document is only staged at this point, as the transaction has not yet committed. Other transactions, and other non-transactional actors, will not be able to see this staged insert yet. |
2 | But the SELECT can, as we are reading a mutation staged inside the same transaction. |
Mixing Key-Value and N1QL
Key-Value operations and queries can be freely intermixed, and will interact with each other as you would expect.
In this example we insert a document with Key-Value, and read it with a SELECT.
await transactions.RunAsync(async ctx => {
_ = await ctx.InsertAsync(collection, "doc", new { Hello = "world" }); (1)
// Performing a 'Read Your Own Write'
var st = "SELECT `default`.* FROM `default` WHERE META().id = 'doc'"; (2)
var qr = await ctx.QueryAsync<object>(st);
Console.Out.WriteLine($"ResultCount = {qr?.MetaData.Metrics.ResultCount}");
});
1 | As with the 'Read Your Own Writes' example, here the insert is only staged, and so it is not visible to other transactions or non-transactional actors. |
2 | But the SELECT can view it, as the insert was in the same transaction. |
Query Options
Query options can be provided via TransactionQueryOptions, which provides a subset of the options in the .NET SDK’s QueryOptions.
await transactions.RunAsync(async ctx => {
await ctx.QueryAsync<object>("INSERT INTO `default` VALUES ('doc', {'hello':'world'})",
new TransactionQueryOptions().FlexIndex(true));
});
The supported options are:
- Parameter
- ScanConsistency
- FlexIndex
- Serializer
- ClientContextId
- ScanWait
- ScanCap
- PipelineBatch
- PipelineCap
- Readonly
- AdHoc
- Raw
See the QueryOptions documentation for details on these.
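As a sketch combining a few of them (assuming the fluent setters mirror the option names above):
await transactions.RunAsync(async ctx => {
    var qr = await ctx.QueryAsync<object>("SELECT * FROM `default` LIMIT 10",
        new TransactionQueryOptions()
            .ScanConsistency(QueryScanConsistency.RequestPlus)
            .Readonly(true)
            .ClientContextId("transactions-example"));
});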
Query Concurrency
Only one query statement will be performed by the query service at a time. Non-blocking mechanisms can be used to perform multiple concurrent query statements, but this may result internally in some added network traffic due to retries, and is unlikely to provide any increased performance.
Query Performance Advice
This section is optional reading, and only for those looking to maximize transactions performance.
After the first query statement in a transaction, subsequent Key-Value operations in the lambda are converted into N1QL and executed by the query service rather than the Key-Value data service. The operation will behave identically, and this implementation detail can largely be ignored, except for these two caveats:
- These converted Key-Value operations are likely to be slightly slower, as the query service is optimized for statements involving multiple documents. Those looking for the maximum possible performance are recommended to put Key-Value operations before the first query in the lambda, if possible.
- Those using non-blocking mechanisms to achieve concurrency should be aware that the converted Key-Value operations are subject to the same parallelism restrictions mentioned above, e.g. they will not be executed in parallel by the query service.
Single Query Transactions
This section is mainly of use for those wanting to do large, bulk-loading transactions.
The query service maintains, where required, some in-memory state for each document in a transaction, which is freed on commit or rollback. For most use-cases this presents no issue, but there are some workloads, such as bulk loading many documents, where this could exceed the server resources allocated to the service. Solutions to this include breaking the workload up into smaller batches, and allocating additional memory to the query service. Alternatively, single query transactions, described here, may be used.
Single query transactions have these characteristics:
- They have greatly reduced memory usage inside the query service.
- As the name suggests, they consist of exactly one query, and no Key-Value operations.
You will see reference elsewhere in Couchbase documentation to the tximplicit query parameter.
Single query transactions internally set this parameter.
In addition, they provide automatic error and retry handling.
Single query transactions may be initiated like so:
var bulkLoadStatement = "<a bulk-loading N1QL statement>";
try
{
SingleQueryTransactionResult<object> result = await transactions.QueryAsync<object>(bulkLoadStatement);
IQueryResult<object> queryResult = result.QueryResult;
}
catch (TransactionCommitAmbiguousException e)
{
Console.Error.WriteLine("Transaction possibly committed");
foreach (var log in e.Result.Logs)
{
Console.Error.WriteLine(log);
}
}
catch (TransactionFailedException e)
{
Console.Error.WriteLine("Transaction did not reach commit point");
foreach (var log in e.Result.Logs)
{
Console.Error.WriteLine(log);
}
}
You can also run a single query transaction against a particular IScope (these examples will exclude the full error handling for brevity):
IBucket travelSample = await cluster.BucketAsync("travel-sample");
IScope inventory = travelSample.Scope("inventory");
await transactions.QueryAsync<object>(bulkLoadStatement, scope: inventory);
and configure it:
// with the Builder pattern.
await transactions.QueryAsync<object>(bulkLoadStatement, SingleQueryTransactionConfigBuilder.Create()
// Single query transactions will often want to increase the default timeout
.ExpirationTime(TimeSpan.FromSeconds(360)));
// using the lambda style
await transactions.QueryAsync<object>(bulkLoadStatement, config => config.ExpirationTime(TimeSpan.FromSeconds(360)));
Query with KV Roles
To execute a key-value operation within a transaction, users must have the relevant Administrative or Data RBAC roles, and permissions on the relevant buckets, scopes, and collections.
Similarly, to run a query statement within a transaction, users must have the relevant Administrative or Query & Index RBAC roles, and permissions on the relevant buckets, scopes and collections.
Refer to Roles for details.
Query Mode
When a transaction executes a query statement, the transaction enters query mode, which means that the query is executed with the user’s query permissions.
Any key-value operations which are executed by the transaction after the query statement are also executed with the user’s query permissions.
These may or may not be different to the user’s data permissions; if they are different, you may get unexpected results.
Committing
Committing is automatic: if there is no explicit call to ctx.CommitAsync() at the end of the transaction logic callback, and no exception is thrown, it will be committed.
var result = await _transactions.RunAsync(async (ctx) =>
{
var doc = await ctx.GetAsync(_collection, "anotherDoc").ConfigureAwait(false);
var content = doc.ContentAs<JObject>();
content.Add("transactions", "are awesome");
await ctx.ReplaceAsync(doc, content).ConfigureAwait(false);
}).ConfigureAwait(false);
As described above, as soon as the transaction is committed, all its changes will be atomically visible to reads from other transactions. The changes will also be committed (or "unstaged") so they are visible to non-transactional actors, in an eventually consistent fashion.
Commit is final: after the transaction is committed, it cannot be rolled back, and no further operations are allowed on it.
An asynchronous cleanup process ensures that once the transaction reaches the commit point, it will be fully committed — even if the application crashes.
A Full Transaction Example
Let’s pull together everything so far into a more real-world example of a transaction.
This example simulates a simple Massively Multiplayer Online game, and includes documents representing:
-
Players, with experience points and levels;
-
Monsters, with hitpoints, and the number of experience points a player earns from their death.
In this example, the player is dealing damage to the monster. The player’s client has sent this instruction to a central server, where we’re going to record that action. We’re going to do this in a transaction, as we don’t want a situation where the monster is killed, but we fail to update the player’s document with the earned experience.
(Though this is just a demo - in reality, the game would likely live with the small risk and limited impact of this, rather than pay the performance cost for using a transaction.)
A complete version of this example is available on our GitHub transactions examples page.
try
{
await _transactions.RunAsync(async (ctx) =>
{
_logger.LogInformation(
"Starting transaction, player {playerId} is hitting monster {monsterId} for {damage} points of damage.",
playerId, monsterId, damage);
var monster = await ctx.GetAsync(_collection, monsterId).ConfigureAwait(false);
var player = await ctx.GetAsync(_collection, playerId).ConfigureAwait(false);
var monsterContent = monster.ContentAs<JObject>();
var playerContent = player.ContentAs<JObject>();
var monsterHitPoints = monsterContent.GetValue("hitpoints").ToObject<int>();
var monsterNewHitPoints = monsterHitPoints - damage;
_logger.LogInformation(
"Monster {monsterId} had {monsterHitPoints} hitpoints, took {damage} damage, now has {monsterNewHitPoints} hitpoints.",
monsterId, monsterHitPoints, damage, monsterNewHitPoints);
if (monsterNewHitPoints <= 0)
{
// Monster is killed. The remove is just for demoing, and a more realistic example would set a
// "dead" flag or similar.
await ctx.RemoveAsync(monster).ConfigureAwait(false);
// The player earns experience for killing the monster
var experienceForKillingMonster =
monsterContent.GetValue("experienceWhenKilled").ToObject<int>();
var playerExperience = playerContent.GetValue("experience").ToObject<int>();
var playerNewExperience = playerExperience + experienceForKillingMonster;
var playerNewLevel = CalculateLevelForExperience(playerNewExperience);
_logger.LogInformation(
"Monster {monsterId} was killed. Player {playerId} gains {experienceForKillingMonster} experience, now has level {playerNewLevel}.",
monsterId, playerId, experienceForKillingMonster, playerNewLevel);
playerContent["experience"] = playerNewExperience;
playerContent["level"] = playerNewLevel;
await ctx.ReplaceAsync(player, playerContent).ConfigureAwait(false);
}
else
{
_logger.LogInformation("Monster {monsterId} is damaged but alive.", monsterId);
// Monster is damaged but still alive
monsterContent["hitpoints"] = monsterNewHitPoints;
await ctx.ReplaceAsync(monster, monsterContent).ConfigureAwait(false);
}
_logger.LogInformation("About to commit transaction");
}).ConfigureAwait(false);
}
catch (TransactionCommitAmbiguousException e)
{
_logger.LogWarning("Transaction possibly committed:{0}{1}", Environment.NewLine, e);
}
catch (TransactionFailedException e)
{
// The operation timed out (the default timeout is 15 seconds) despite multiple attempts to commit the
// transaction logic. Both the monster and the player will be untouched.
// This situation should be very rare. It may be reasonable in this situation to ignore this particular
// failure, as the downside is limited to the player experiencing a temporary glitch in a fast-moving MMO.
// So, we will just log the error
_logger.LogWarning("Transaction did not reach commit:{0}{1}", Environment.NewLine, e);
}
Rollback
If an exception is thrown, either by the application from the lambda, or by the transactions library, then that attempt is rolled back. The transaction logic may or may not be retried, depending on the exception.
If the transaction is not retried then it will throw a TransactionFailedException, and its Cause property can be used for more details on the failure.
The application can use this to signal why it triggered a rollback, like so:
try
{
await _transactions.RunAsync(async ctx =>
{
var customer = await ctx.GetAsync(_collection, "customer-name").ConfigureAwait(false);
if (customer.ContentAs<dynamic>().balance < costOfItem) throw new BalanceInsufficientException();
// else continue transaction
}).ConfigureAwait(false);
}
catch (TransactionCommitAmbiguousException e)
{
// This exception can only be thrown at the commit point, after the
// BalanceInsufficient logic has been passed, so there is no need to
// check the Cause property here.
Console.Error.WriteLine("Transaction possibly committed");
Console.Error.WriteLine(e);
}
catch (TransactionFailedException e)
{
Console.Error.WriteLine("Transaction did not reach commit point");
}
The transaction can also be explicitly rolled back:
await _transactions.RunAsync(async (ctx) => {
var customer = await ctx.GetAsync(_collection, "customer-name").ConfigureAwait(false);
if (customer.ContentAs<dynamic>().balance < costOfItem)
{
await ctx.RollbackAsync().ConfigureAwait(false);
}
// else continue transaction
}).ConfigureAwait(false);
In this case, if ctx.RollbackAsync() is reached, then the transaction will be regarded as successfully rolled back and no TransactionFailedException will be thrown.
After a transaction is rolled back, it cannot be committed, no further operations are allowed on it, and the library will not try to automatically commit it at the end of the code block.
Error Handling
As discussed previously, Couchbase transactions will attempt to resolve many errors for you, through a combination of retrying individual operations and the application’s lambda. This includes some transient server errors, and conflicts with other transactions.
But there are situations that cannot be resolved, and total failure is indicated to the application via errors. These errors include:
- Any error thrown by your transaction lambda, either deliberately or through an application logic bug.
- Attempting to insert a document that already exists.
- Attempting to remove or replace a document that does not exist.
- Calling ctx.GetAsync() on a document key that does not exist.
Once one of these errors occurs, the current attempt is irrevocably failed (though the transaction may retry the lambda). It is not possible for the application to catch the failure and continue. Once a failure has occurred, all other operations tried in this attempt (including commit) will instantly fail.
Transactions, as they are multi-stage and multi-document, also have a concept of partial success or failure.
This is signalled to the application through the TransactionResult.UnstagingComplete property, described later.
There are three exceptions that Couchbase transactions can raise to the application: TransactionFailedException, TransactionExpiredException and TransactionCommitAmbiguousException.
All exceptions derive from TransactionFailedException for backwards-compatibility purposes.
TransactionFailedException and TransactionExpiredException
The transaction definitely did not reach the commit point.
TransactionFailedException indicates a fast-failure, whereas TransactionExpiredException indicates that retries were made until the expiration point was reached; but this distinction is not normally important to the application, and generally TransactionExpiredException does not need to be handled individually.
Either way, an attempt will have been made to rollback all changes. This attempt may or may not have been successful, but the results of this will have no impact on the protocol or other actors. No changes from the transaction will be visible (presently with the potential and temporary exception of staged inserts being visible to non-transactional actors, as discussed under Inserting).
Handling: Generally, debugging exactly why a given transaction failed requires review of the logs, so it is suggested that the application log these on failure (see Logging).
The application may want to try the transaction again later.
Alternatively, if transaction completion time is not a priority, then transaction expiration times (which default to 15 seconds) can be extended across the board through TransactionConfigBuilder.
Transactions transactions = Transactions.Create(_cluster, TransactionConfigBuilder.Create()
.ExpirationTime(TimeSpan.FromSeconds(120))
.Build());
This will allow the protocol more time to get past any transient failures (for example, those caused by a cluster rebalance). The tradeoff to consider with longer expiration times is that documents that have been staged by a transaction are effectively locked from modification by other transactions, until the expiration time has been exceeded.
Note that expiration is not guaranteed to be followed precisely. For example, if the application were to do a long blocking operation inside the lambda (which should be avoided), then expiration can only trigger after this finishes. Similarly, if the transaction attempts a key-value operation close to the expiration time, and that key-value operation times out, then the expiration time may be exceeded.
TransactionCommitAmbiguousException
As discussed previously, each transaction has a 'single point of truth' that is updated atomically to reflect whether it is committed.
However, it is not always possible for the protocol to become 100% certain that the operation was successful, before the transaction expires. That is, the operation may have successfully completed on the cluster, or may succeed soon, but the protocol is unable to determine this (whether due to transient network failure or other reason). This is important as the transaction may or may not have reached the commit point, i.e. succeeded or failed.
Couchbase transactions will raise TransactionCommitAmbiguousException to indicate this state.
It should be rare to receive this error.
If the transaction had in fact successfully reached the commit point, then the transaction will be fully completed ("unstaged") by the asynchronous cleanup process at some point in the future.
With default settings this will usually be within a minute, but whatever underlying fault has caused the TransactionCommitAmbiguousException may lead to it taking longer.
If the transaction had not in fact reached the commit point, then the asynchronous cleanup process will instead attempt to roll it back at some point in the future. If unable to, any staged metadata from the transaction will not be visible, and will not cause problems (e.g. there are safety mechanisms to ensure it will not block writes to these documents for long).
Handling: This error can be challenging for an application to handle.
As with TransactionFailedException, it is recommended that it at least writes any logs from the transaction, for future debugging.
It may wish to retry the transaction at a later point, or globally extend transactional expiration times to give the protocol additional time to resolve the ambiguity.
TransactionResult.UnstagingComplete
This boolean flag indicates whether all documents were able to be unstaged (committed).
For most use-cases it is not an issue if it is false. All transactional actors will still see all the changes from this transaction, as though it had committed fully. The cleanup process is asynchronously working to complete the commit, so that it will be fully visible to non-transactional actors.
The flag is provided for those rare use-cases where the application requires the commit to be fully visible to non-transactional actors, before it may continue. In this situation the application can raise an error here, or poll all documents involved until they reflect the mutations.
If you regularly see this flag false, consider increasing the transaction expiration time to reduce the possibility that the transaction times out during the commit.
Similar to TransactionResult, SingleQueryTransactionResult also has an UnstagingComplete property.
Full Error Handling Example
Pulling all of the above together, this is the suggested best practice for error handling:
try
{
var result = await _transactions.RunAsync(async (ctx) => {
// ... transactional code here ...
});
// The transaction definitely reached the commit point. Unstaging
// the individual documents may or may not have completed
if (result.UnstagingComplete)
{
// Operations with non-transactional actors will want
// UnstagingComplete to be true.
await _cluster.QueryAsync<dynamic>(" ... N1QL ... ",
new QueryOptions()).ConfigureAwait(false);
var documentKey = "a document key involved in the transaction";
var getResult = await _collection.GetAsync(documentKey).ConfigureAwait(false);
}
else
{
// This step is completely application-dependent. It may
// need to throw its own exception, if it is crucial that
// result.UnstagingComplete is true at this point.
// (Recall that the asynchronous cleanup process will
// complete the unstaging later on).
}
}
catch (TransactionCommitAmbiguousException err)
{
// The transaction may or may not have reached commit point
Console.Error.WriteLine("Transaction returned TransactionCommitAmbiguous and" +
" may have succeeded, logs:");
// Of course, the application will want to use its own logging rather
// than Console.Error
Console.Error.WriteLine(err);
}
catch (TransactionFailedException err)
{
// The transaction definitely did not reach commit point
Console.Error.WriteLine("Transaction failed with TransactionFailed, logs:");
Console.Error.WriteLine(err);
}
Asynchronous Cleanup
Transactions will try to clean up after themselves in the event of failures. However, there are situations that inevitably create failed, or 'lost', transactions, such as an application crash.
This requires an asynchronous cleanup task, described in this section.
Creating the Transactions object spawns a background cleanup task, whose job it is to periodically scan for expired transactions and clean them up.
It does this by scanning a subset of the Active Transaction Record (ATR) transaction metadata documents, on each bucket.
As you’ll recall from earlier, an entry for each transaction attempt exists in one of these documents.
They are removed during cleanup or at some time after successful completion.
The default settings are tuned to find expired transactions reasonably quickly, while creating negligible impact from the background reads required by the scanning process. To be exact, with default settings it will generally find expired transactions within 60 seconds, and use less than 20 reads per second. This is unlikely to impact performance on any cluster, but the settings may be tuned as desired.
All applications connected to the same cluster and running transactions will share in the cleanup, via a low-touch communication protocol on the "_txn:client-record" metadata document that will be created in each bucket in the cluster. This document is visible and should not be modified externally as it is maintained automatically. All ATRs on a bucket will be distributed between all cleanup clients, so increasing the number of applications will not increase the reads required for scanning.
An application may cleanup transactions created by another application.
It is important to understand that if an application is not running, then cleanup is not running. This is particularly relevant to developers running unit tests or similar.
If this is an issue, then the deployment may want to consider running a simple application at all times that simply creates a Transactions object, to guarantee that cleanup is running.
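A minimal keep-alive process could look like this sketch (connection details are illustrative); creating the Transactions object is enough to join the shared cleanup:
var cluster = await Cluster.ConnectAsync("couchbase://localhost",
    new ClusterOptions().WithCredentials("Administrator", "password"));
var transactions = Transactions.Create(cluster, TransactionConfigBuilder.Create());
// Keep the process alive so the background cleanup keeps running.
await Task.Delay(-1);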
Configuring Cleanup
The cleanup settings can be configured as so:
Setting | Default | Description
---|---|---
CleanupWindow | 60 seconds | This determines how long a cleanup 'run' is; that is, how frequently this client will check its subset of ATR documents. It is perfectly valid for the application to change this setting, which is at a conservative default. Decreasing this will cause expired transactions to be found more swiftly (generally, within this cleanup window), with the tradeoff of increasing the number of reads per second used for the scanning process.
CleanupLostAttempts | true | This enables the thread that takes part in the distributed cleanup process described above, which cleans up expired transactions created by any client. It is strongly recommended that it is left enabled.
CleanupClientAttempts | true | This enables the thread that cleans up transactions created just by this client. The client will preferentially aim to send any transactions it creates to this thread, leaving transactions for the distributed cleanup process only when it is forced to (for example, on an application crash). It is strongly recommended that it is left enabled.
Logging
To aid troubleshooting, each transaction maintains a list of log entries, which can be logged on failure like this:
try
{
var result = await transactions.RunAsync(async ctx => {
// ... transactional code here ...
});
}
catch (TransactionFailedException err)
{
// ... log the error as you normally would
// then include the logs
foreach (var logLine in err.Result.Logs)
{
Console.Error.WriteLine(logLine);
}
}
A failed transaction can involve dozens, even hundreds, of lines of logging, so the application may prefer to write failed transactions into a separate file.
Please see the .NET SDK logging documentation for details.
Here is an example of configuring a Microsoft.Extensions.Logging.ILoggerFactory:
//Logging dependencies
var services = new ServiceCollection();
services.AddLogging(builder =>
{
builder.AddFile(AppContext.BaseDirectory);
builder.AddConsole();
});
await using var provider = services.BuildServiceProvider();
var loggerFactory = provider.GetService<ILoggerFactory>();
var logger = loggerFactory.CreateLogger<Program>();
//create the transactions object and add the ILoggerFactory
var transactions = Transactions.Create(_cluster,
TransactionConfigBuilder.Create().LoggerFactory(loggerFactory));
try
{
var result = await transactions.RunAsync(async ctx => {
// ... transactional code here ...
});
}
catch (TransactionCommitAmbiguousException err)
{
// The transaction may or may not have reached commit point
logger.LogInformation("Transaction returned TransactionCommitAmbiguous and" +
" may have succeeded, logs:");
Console.Error.WriteLine(err);
}
catch (TransactionFailedException err)
{
// The transaction definitely did not reach commit point
logger.LogInformation("Transaction failed with TransactionFailed, logs:");
Console.Error.WriteLine(err);
}
Custom Metadata Collections
As described earlier, transactions automatically create and use metadata documents. By default, these are created in the default collection of the bucket of the first mutated document in the transaction. Optionally, you can instead specify a custom collection in which to store the metadata documents. Most users will not need this functionality, and can continue to use the default behavior. Custom metadata collections are provided for these use-cases:
- The metadata documents contain, for documents involved in each transaction, the document’s key and the name of the bucket, scope and collection it exists on. In some deployments this may be sensitive data.
- You wish to remove the default collections. Before doing this, you should ensure that all existing transactions using metadata documents in the default collections have finished.
Usage
Custom metadata collections are enabled with:
ICouchbaseCollection metadataCollection = null; // this is a Collection opened by your code earlier
Transactions transactionsWithCustomMetadataCollection = Transactions.Create(cluster,
TransactionConfigBuilder.Create().MetadataCollection(metadataCollection));
When specified:
- Any transactions created from this Transactions object will create and use metadata in that collection.
- The asynchronous cleanup started by this Transactions object will be looking for expired transactions only in this collection.
You need to ensure that this application has RBAC data read and write privileges on the collection, and you should not subsequently delete the collection, as doing so can interfere with existing transactions. You can use an existing collection or create a new one.
Further Reading
- There’s plenty of explanation about how Transactions work in Couchbase in our Transactions documentation.
- You can find further code examples on our transactions examples repository.