RavenDB Lessons Learned - Part 2

Published
4 min read

I used to be a .NET developer. Nowadays, I am a DevOps solutions architect with a focus on Azure and Kubernetes.

I also love productivity topics, especially when it comes to doing more with less of my time. I'm a daddy, so time is a limited resource for me.

A continuation of my lessons learned on RavenDB. Some tips on testing and what not to do.

1. Testing Indexes is Really Really Really Important

One of the best features in RavenDB is the ability to use a full in-memory database when running your integration tests. There are so many features tied to Lucene and Map-Reduce indexes that you’ll be thankful you wrote integration tests later. As you learn these features, it is hugely beneficial to be able to show a fellow developer how a feature works without having to seed example documents or sift through polluted data in an integration database somewhere.

Another reason to write unit tests with an embedded Raven database is that Lucene and Map-Reduce will be new concepts for most developers. So if you have to perform a Lucene query to do a full-text search, you can back it up with a Raven unit test to ensure that the feature keeps working. The same can be said for aggregation queries using Map-Reduce.
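To make this concrete, here is a minimal sketch using the RavenDB.TestDriver NuGet package (the modern successor to the older test-helper package). The `Employee` document, the `Employees_ByNotes` index, and the test framework (xUnit) are all my assumptions, not from the original post:

```csharp
using System.Linq;
using Raven.Client.Documents;
using Raven.Client.Documents.Indexes;
using Raven.TestDriver;
using Xunit;

public class Employee
{
    public string Id { get; set; }
    public string Notes { get; set; }
}

// Hypothetical full-text search index: Notes is analyzed by Lucene.
public class Employees_ByNotes : AbstractIndexCreationTask<Employee>
{
    public Employees_ByNotes()
    {
        Map = employees => from e in employees
                           select new { e.Notes };
        Index(x => x.Notes, FieldIndexing.Search);
    }
}

public class FullTextSearchTests : RavenTestDriver
{
    [Fact]
    public void Can_search_employee_notes()
    {
        using var store = GetDocumentStore();     // embedded test server
        new Employees_ByNotes().Execute(store);

        using (var session = store.OpenSession()) // seed data just for this test
        {
            session.Store(new Employee { Notes = "Fluent in French and Italian." });
            session.SaveChanges();
        }

        WaitForIndexing(store); // indexes are eventually consistent

        using (var session = store.OpenSession())
        {
            var hits = session.Query<Employee, Employees_ByNotes>()
                              .Search(x => x.Notes, "french")
                              .ToList();
            Assert.Single(hits);
        }
    }
}
```

Each test gets its own store and its own seed data, so the full-text behavior is locked in by an automated test instead of a one-off check in the studio.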

Use the Raven Test Helper NuGet Package

When developing your indexes and testing them from the studio, you want to make sure that it’s not wasted work. Spend a little extra time creating automated tests with the Raven Test Helper NuGet package. These tests pay huge dividends: you’re running an embedded version of RavenDB, so your tests don’t have to be dependent on some server.

“But isn’t it the same as writing tests against my DEV database instead?”

NOPE. With the database embedded in your tests, you can seed data into the database without affecting your other tests. In short, you can stand up an embedded database, seed data, and destroy it on each test execution.

This is huge! If only we were able to load an RDBMS in memory on each unit test execution so that we could test our views, stored procedures, paging logic, etc.

2. Have Reporting or Business Intelligence Needs? Don’t Use Raven

It’s well documented and talked about by Ayende: NoSQL in general is not a good solution for a system that has to serve enterprise reporting capabilities. Unless, of course, you have a Big Data problem.

Yes, there’s a Raven to SQL Replication Bundle, and it works amazingly well. However, you’ll run into a new set of issues. For example:

  • Your RDBMS most likely won’t be able to keep up with Raven.
  • You’ll still be responsible for a schema, post-development.
  • You’ll have to deploy the SQL replication scripts manually.

However, if your approach to reporting is embedded analytics, then Raven can be a good fit. Features like dashboards and self-service analytics pair very well with what Map-Reduce and Lucene offer.
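As an illustration of the embedded-analytics case, here is a hypothetical Map-Reduce index that rolls up order totals per company — the kind of pre-computed aggregation a dashboard can read cheaply. The `Order` document and all names are illustrative:

```csharp
using System.Linq;
using Raven.Client.Documents.Indexes;

public class Order
{
    public string Company { get; set; }
    public decimal Total { get; set; }
}

public class Orders_TotalByCompany : AbstractIndexCreationTask<Order, Orders_TotalByCompany.Result>
{
    public class Result
    {
        public string Company { get; set; }
        public decimal Total { get; set; }
        public int Count { get; set; }
    }

    public Orders_TotalByCompany()
    {
        // Map: emit one entry per order.
        Map = orders => from o in orders
                        select new Result { Company = o.Company, Total = o.Total, Count = 1 };

        // Reduce: aggregate entries by company.
        Reduce = results => from r in results
                            group r by r.Company into g
                            select new Result
                            {
                                Company = g.Key,
                                Total = g.Sum(x => x.Total),
                                Count = g.Sum(x => x.Count)
                            };
    }
}
```

Because Raven maintains the reduced results incrementally as documents change, a dashboard query like `session.Query<Orders_TotalByCompany.Result, Orders_TotalByCompany>()` reads pre-aggregated data instead of scanning orders at request time.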

3. Think Twice About Using the Repository Pattern

The Repository pattern is great for hiding inline SQL or other types of data access code. However, there’s plenty of material on why you shouldn’t use the Repository pattern on top of an OR/M, and on why it’s often a bad idea in general.

Personally, I’ve been part of a project that chose to use the Repository pattern on top of Raven. Some of the issues we ran into were:

  • We could not control a TransactionScope for operations across document collections.
  • It led us to forget about eventual consistency, with no safety guards against it.
  • For CRUD operations, there wasn’t much “business logic” being decoupled.
  • It was tempting to create a base generic repository, which ran into Raven session limitations.
  • It reduced Raven performance, because we had to call SaveChanges much more frequently.
  • You’ll be tempted to hack around it even more, e.g. by disabling change tracking.

Instead, a simpler solution is to use the Raven session directly and treat it as your unit of work, calling SaveChanges once at the boundary of each request.
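A minimal sketch of that direct-session approach, assuming per-request dependency injection; the `OrderService`, `Customer`, and `Order` types and the `PlaceOrder` logic are hypothetical:

```csharp
using Raven.Client.Documents.Session;

public class Customer
{
    public string Id { get; set; }
    public string Company { get; set; }
}

public class Order
{
    public string Id { get; set; }
    public string Company { get; set; }
}

public class OrderService
{
    private readonly IDocumentSession _session;

    // One session per request, typically registered as scoped in DI.
    public OrderService(IDocumentSession session) => _session = session;

    public void PlaceOrder(string customerId, Order order)
    {
        var customer = _session.Load<Customer>(customerId);
        order.Company = customer.Company;

        _session.Store(order);
        // No SaveChanges here: the request pipeline (or the caller)
        // calls _session.SaveChanges() once, so all writes commit together.
    }
}
```

The session already does change tracking and batching, so there is little left for a repository layer to add.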

Denny Crane:

Was your experience using RavenDB on a desktop application? I ask because my experience contradicts a few of yours:

  • You can control transaction scope with the repository pattern, but you do it by abstracting a UoW as well: basically something similar to DbContext, where you have a single interface containing properties for all of your collection repositories. The UoW takes the session as a constructor argument and passes it into the other repositories as they’re requested. In your repos, you just never call SaveChanges. Then in your service, you request the UoW instead, work across whatever collections you want, and call SaveChanges whenever you want. It has worked very well for me.
  • I also implemented a base repository for CRUD operations and have no issues with sessions. What I do is inherit from it in repos that perform CRUD operations and extend them with the repo’s interface for more bespoke operations. So when you request the repo/collection in your service, you’ll have both.
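The design the commenter describes could be sketched roughly like this; every name here is illustrative, not taken from the commenter’s actual code:

```csharp
using Raven.Client.Documents.Session;

public class Order { public string Id { get; set; } }
public class Customer { public string Id { get; set; } }

public interface IRepository<T>
{
    T Load(string id);
    void Store(T entity);
}

public interface IUnitOfWork
{
    IRepository<Order> Orders { get; }
    IRepository<Customer> Customers { get; }
    void SaveChanges();
}

// Repositories share the UoW's session and never commit on their own.
public class RavenRepository<T> : IRepository<T>
{
    private readonly IDocumentSession _session;
    public RavenRepository(IDocumentSession session) => _session = session;

    public T Load(string id) => _session.Load<T>(id);
    public void Store(T entity) => _session.Store(entity); // no SaveChanges here
}

public class RavenUnitOfWork : IUnitOfWork
{
    private readonly IDocumentSession _session;

    // The UoW takes the session in its constructor and hands it to every repo.
    public RavenUnitOfWork(IDocumentSession session)
    {
        _session = session;
        Orders = new RavenRepository<Order>(_session);
        Customers = new RavenRepository<Customer>(_session);
    }

    public IRepository<Order> Orders { get; }
    public IRepository<Customer> Customers { get; }

    // One commit for everything touched through any repository.
    public void SaveChanges() => _session.SaveChanges();
}
```

A service then depends on `IUnitOfWork`, works across any of its repositories, and calls `SaveChanges()` once, which preserves the session’s single-commit semantics.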

My implementations were for web services and desktop applications. I did have a problem with sessions when I tried to access the database directly from Blazor, because it uses a single session for the entire SPA. There were two solutions for it, though:

  • When requesting the repo via DI, make sure you're getting back a new scope every time.
  • Move all database operations out of the UI and into an API, which is what I ended up doing for other reasons. Since the API is stateless, this problem works itself out.

Denny Crane - Hey there!

I was using RavenDB around the 2015–2016 timeframe, so it’s been a long time since then, and that was my opinion back then. My experience using RavenDB was mostly for web applications. I remember reading Ayende’s blog a lot, and he was a firm believer in no repository or Unit of Work pattern.

Thanks for the comment!