Dependency Hell

RustyElm - 2019-07-30: Dependency Hell [AUDIO]

Hello, my name is Ian Jones, and this is the RustyElm microcast, a short, very ad-hoc journal of my attempt to learn Rust and Elm in order to build a side project.

So, I've finally had some time over the last week or so to get stuck into my Rust and Elm project, starting by trying to get the Rust based server up and running.

Getting a server up and running with a simple "Hello World" is a doddle with Rocket, and I had fun playing with the basics of request guards and the like, easily handling calls to /hello/ and similar playthings to get a grip on how it all hangs together.
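
To give a flavour of what I mean, this is roughly the shape of those playground routes, based on the basic example in the Rocket guide (the /hello/<name>/<age> path is just an illustration, not the real project code):

    #![feature(proc_macro_hygiene, decl_macro)]

    #[macro_use] extern crate rocket;

    // Dynamic path segments such as <name> and <age> act as request guards:
    // the handler only runs if both segments parse into the declared types.
    #[get("/hello/<name>/<age>")]
    fn hello(name: String, age: u8) -> String {
        format!("Hello, {} year old named {}!", age, name)
    }

    fn main() {
        rocket::ignite().mount("/", routes![hello]).launch();
    }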

However, I then made the mistake of jumping straight into how to use the Juniper crate with Rocket to create a GraphQL server. Things very quickly came unstuck as I tried to work through a seemingly endless stream of errors that frankly made no sense to me, having very little experience with the Rust compiler. I tried following the Juniper quick start guide, but even that had problems when implemented in Rocket with slight modifications as per the integration docs and its example on GitHub. The problems seemed to be related to dependencies; I could never get the derive macros to work properly.

Eventually I came to my senses and decided that maybe I should first attempt setting up a couple of simple REST routes that return JSON. That support is built into Rocket, and in the simple case hardly needs any dependencies.

It worked fine; changing the existing play routes to return JSON was easy. Tiny steps for the win!
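
Switching a route over to JSON looked something like this sketch, building on the earlier skeleton (the Message struct is made up for illustration, and it assumes rocket_contrib's json feature plus serde's derive feature are enabled in Cargo.toml):

    use rocket_contrib::json::Json;
    use serde::Serialize;

    // Any struct that derives Serialize can be wrapped in Json and returned.
    #[derive(Serialize)]
    struct Message {
        id: u32,
        body: String,
    }

    #[get("/message/<id>")]
    fn get_message(id: u32) -> Json<Message> {
        Json(Message {
            id,
            body: "Hello from Rocket".into(),
        })
    }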

Then it was time to connect up a database and grab some data from it. I thought it would be relatively simple, as again, I was going to use functionality that is effectively built into Rocket for accessing a database via the Diesel ORM.
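
That built-in support is rocket_contrib's database pooling; a minimal sketch of the wiring looks something like this (the "my_db" pool name is just my example, and it assumes rocket_contrib's diesel_postgres_pool feature is enabled):

    #[macro_use] extern crate rocket_contrib;

    use rocket_contrib::databases::diesel;

    // Ties this connection guard to a "my_db" pool configured in Rocket.toml.
    // Any handler that takes a DbConn argument is handed a pooled PostgreSQL
    // connection automatically.
    #[database("my_db")]
    pub struct DbConn(diesel::PgConnection);

    // The pool itself is wired up by attaching its fairing at launch:
    //     rocket::ignite().attach(DbConn::fairing())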

I have some data I can use from an existing prototype of the application I'm building. The prototype currently uses an SQLite database, and I have a couple of shell scripts with accompanying SQL scripts that can extract the data and import it into a CockroachDB database. I want to use CockroachDB because it speaks the PostgreSQL wire protocol and is mostly SQL compatible with PostgreSQL, so it's easy to use with most languages and frameworks, but it can also scale out easily and have its data geo-partitioned. I don't need that scale-out capability or geo-partitioning just now, but seeing as CockroachDB is a breeze to set up and frankly just cool to play with, I'm going to use it. Have I mentioned that this project is all about learning how to use technologies that I find very interesting and see a great future for?

Anyway, installing the Diesel CLI was dead easy: a simple cargo install diesel_cli and we were done, as I had already made sure I had the required PostgreSQL client bits and bobs installed as per the Diesel Quick Start Guide. The basic setup of its dotenv file, along with similar configuration in Rocket.toml, was a no-brainer too. A quick run of diesel setup and I had the expected new migrations directory and schema.rs file in my project.
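
For the record, that setup amounted to little more than the following (the connection string is an illustrative local CockroachDB-style URL, not my real one, and "my_db" matches the example pool name used above):

    # Install the CLI with only the PostgreSQL backend
    # (CockroachDB speaks the same wire protocol).
    cargo install diesel_cli --no-default-features --features postgres

    # .env - read by the Diesel CLI and the dotenv crate.
    DATABASE_URL=postgres://root@localhost:26257/my_app_db?sslmode=disable

    # Rocket.toml - the matching pool definition for rocket_contrib's databases support.
    [global.databases]
    my_db = { url = "postgres://root@localhost:26257/my_app_db?sslmode=disable" }

    # Creates the database if needed, plus the migrations directory and schema.rs.
    diesel setup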

Before getting stuck into using Diesel with Rocket, I took a detour to tweak my data migration scripts to use UUID columns instead of plain integer-style serial columns. When data can be written to multiple nodes in a cluster, as with CockroachDB, it's highly recommended to use random primary keys, as that stops writes from being clumped together in its range-based data layout. However, using UUIDs turned out to cause more dependency headaches further down the line that required some more quick learning, but we'll come to that in a second.

Diesel's migrations feature allowed me to convert the schema definition bits from my data migration scripts into proper migration steps: a first step that sets up a basic schema with extra "old ID" fields in the tables, a second step used for data import so I can re-create everything easily from the SQLite database, and a follow-on migration that fixes up the data by converting the pre-existing integer-based primary/foreign key pairs to the new UUID setup. That last step also drops the "old ID" fields on completion.

Then came the task of creating structs in the code that could be hooked up to the schema with derive(Queryable) macros and the like. These macros let the Diesel ORM build queries, and with the right use statements the struct fields can easily be used in code to form predicates. This turned out to be another source of Dependency Hell, largely because the UUID and TIMESTAMP columns in the schema required new crates, which exposed the underlying problem I'd had before.
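
To give a flavour of what that looks like, here's a sketch with made-up table and field names (it assumes Diesel's postgres, chrono, and uuid-related features are enabled in Cargo.toml so that Uuid and Timestamp columns map to Rust types):

    // schema.rs - normally generated by Diesel; needs #[macro_use] extern crate diesel;
    table! {
        articles (id) {
            id -> Uuid,
            title -> Varchar,
            published_at -> Timestamp,
        }
    }

    // models.rs - with derive(Queryable), the field order must match the
    // column order in the table! definition above.
    use chrono::NaiveDateTime;
    use uuid::Uuid;

    #[derive(Queryable)]
    pub struct Article {
        pub id: Uuid,
        pub title: String,
        pub published_at: NaiveDateTime,
    }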

There were a lot of errors related to traits not being satisfied one way or another, and infuriatingly the compiler would often show trait signatures that exactly matched what I saw in the docs for the uuid or chrono crates I was using. Every time I searched Stack Overflow or other sources for anything related, the answers said that dependencies hadn't been properly added to Cargo.toml or as use statements, and then showed me exactly what I already had. Eventually I pieced together enough information to realise what was going on: if the same crate is pulled in as a dependency by a couple of other crates, and/or explicitly by your code, and they each end up with slightly different versions of it, even versions that appear to have exactly the same function signatures and are only a patch release apart, Rust will treat the traits from the two versions as completely different and incompatible.

I used the excellent cargo-tree plugin to inspect the dependency tree and saw these tiny differences in crate versions. At first I was at a bit of a loss as to what to do, then on a whim I decided to relax the version numbers I'd used in the dependencies sections of the Cargo.toml file. So instead of pinning Rocket to version 0.4.2, I used 0.4, and so on for all the other dependencies. And boom, just like that, cargo test compiled and ran! I had previously set tight versions to try and match what I'd seen in the dependency tree, gently coaxing the versions I had control over towards the versions I saw being pulled in automatically. That was the wrong way around; a better strategy was to relax the versions a little so that Cargo had wiggle room and could settle on versions that satisfied the slightly looser requirements. Not quite sure where my head was on that one; it's obvious in retrospect. I blame late night programming!
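
In Cargo.toml terms the fix was as simple as this sketch (the crates and feature names are roughly what I was using, so treat the details as illustrative rather than a recipe):

    [dependencies]
    # Before: pinned tightly, e.g. rocket = "0.4.2", leaving Cargo no wiggle room.
    # After: relaxed to minor versions so the resolver can line everything up.
    rocket = "0.4"
    rocket_contrib = { version = "0.4", default-features = false, features = ["json", "diesel_postgres_pool"] }
    diesel = { version = "1.4", features = ["postgres", "uuid", "chrono"] }
    uuid = "0.6"
    chrono = "0.4"

The cargo-tree plugin also has a duplicates flag (cargo tree -d) that lists only the crates pulled in at more than one version, which makes these mismatches much easier to spot.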

With that I was able to get a couple of routes set up to return information in JSON format from the database, based on UUID or other field types I played with. I even set things up so that custom JSON errors were returned when supplied query params weren't as expected, and so on.
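
Those routes ended up with roughly this shape, reusing the DbConn guard and Article struct from the earlier sketches (the error type, path, and messages are purely illustrative, and Article would also need to derive Serialize to be returned as JSON):

    use diesel::prelude::*;
    use rocket::response::status::NotFound;
    use rocket_contrib::json::Json;
    use serde::Serialize;
    use uuid::Uuid;

    // Minimal JSON error body.
    #[derive(Serialize)]
    pub struct ApiError {
        pub error: String,
    }

    // Look up a single row by its UUID primary key, or return a JSON error body.
    // For simplicity both a malformed UUID and a missing row come back as 404 here.
    #[get("/articles/<article_id>")]
    fn get_article(conn: DbConn, article_id: String) -> Result<Json<Article>, NotFound<Json<ApiError>>> {
        use crate::schema::articles::dsl::*;

        let parsed = Uuid::parse_str(&article_id).map_err(|_| {
            NotFound(Json(ApiError { error: format!("'{}' is not a valid UUID", article_id) }))
        })?;

        articles
            .find(parsed)
            .first::<Article>(&*conn)
            .map(Json)
            .map_err(|_| NotFound(Json(ApiError { error: format!("no article with id {}", parsed) })))
    }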

I've seen it mentioned somewhere before that one of the hardest hurdles to get over when starting with languages like Rust is simply getting past the endless stream of compile-time errors you encounter when first trying to find your feet. I very much felt that pain over the last few days, and I expect it's one of the reasons Elm tries so hard to have nice, helpful compiler errors that suggest fixes. To be fair, in a lot of cases there were helpful hints from the Rust compiler too; there were times where I knocked out a big bunch of errors just by following the tips it showed.

So I now had the basics of a REST API server set up, albeit one querying a single database table, but alas my GitLab CI pipeline was failing due to the new database stuff. How I got that fixed up, and cut its run time from 26 minutes to under 3 minutes, is a story for another day, as I've run out of time.

As always, you can find me on...

Micro.blog: @ianmjones

Twitter: @ianmjones

---

"Dependency Hell" was published on July 30, 2019.
