Merge pull request 'switch-to-seaorm' (#1) from switch-to-seaorm into main

Reviewed-on: #1
Gabor Körber 2023-12-21 21:09:37 +01:00
commit d167b2eb03
30 changed files with 2812 additions and 182 deletions

Cargo.lock (generated, 2514 changed lines): diff suppressed because it is too large.

Cargo.toml

@@ -2,20 +2,29 @@
 name = "miniweb"
 version = "0.1.0"
 edition = "2021"
+publish = false
 # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
+
+[workspace]
+members = [".", "entity", "migration"]

 [features]
 # https://github.com/rust-db/barrel/blob/master/guides/diesel-setup.md
 use_barrel = ["barrel", "sqlformat"]
 default = ["use_barrel"]

 [dependencies]
+entity = { path = "./entity" }
+sea-orm = { version = "0.12.10", features = [
+    "runtime-tokio-native-tls",
+    "sqlx-postgres",
+] }
 sqlformat = { version = "0.2.2", optional = true }
 anyhow = "1.0.75"
 axum = "0.6.20"
 barrel = { version = "0.7.0", optional = true, features = ["pg"] }
-diesel = { version = "2.1.3", features = ["serde_json", "postgres"] }
 dotenvy = "0.15.7"
 mime_guess = "2.0.4"
 minijinja = { version = "1.0.8", features = [

justfile

@@ -1,14 +1,19 @@
-default:
-    @just hello
+set dotenv-load := true
+
+default:
+    @echo "# Miniweb Project"
+    @just --list
+
+# Run Service
 run:
     @cargo run --bin miniweb

+# Run Bins
 bin args='':
     @cargo run --bin {{args}}

-hello:
-    @echo "Hello, world!"
+status:
+    sea-orm-cli status

 # Start PostgreSQL
 pg-up:
@@ -18,9 +23,18 @@ pg-up:
 pg-down:
     cd docker && docker-compose down

+# Run Migrations
+migrate:
+    sea-orm-cli up
+
+# Install Developer dependencies
 dev-install:
-    cargo install diesel_cli --no-default-features --features="postgres"
+    cargo install sea-orm-cli
+
+# Reset Database
 dev-reset:
-    diesel migration revert --all
+    sea-orm-cli migrate reset
+
+# Creates Entities from Database
+db-create-entities:
+    sea-orm-cli generate entity -u $DATABASE_URL -o entity_generated/src --lib

NOTES.md (new file)

@@ -0,0 +1,51 @@
# SeaORM
## Some Rant about ORM Implementations
#### Entity, ActiveModel, Model
SeaORM tries to implement (those #@!$) DDD ideas in great detail, creating a multitude of fixed struct names like `user::Model`, `user::Entity`, and `user::ActiveModel`. Each operation still requires importing the trait itself, most code is hidden behind macros, and you still end up calling `insert` on the active model instead of on a repository object.
At least the ActiveModel, unlike implementations such as Django's active-record pattern, seems to support dirty flags, since every field is wrapped in the `Set(...)` / `NotSet` variants. This should make it easier to save models without generating huge SQL statements, micromanaging which fields are touched, or re-saving data that was never changed, allowing more transparent PATCH implementations.
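For illustration, a minimal sketch of such a partial update against the `user` entity from this commit; the helper name and the exact sea-orm 0.12 usage are my own assumption, not part of the change:

```rust
use sea_orm::{ActiveModelTrait, DbConn, DbErr, Set};
use entity::user;

// Hypothetical helper: only `id` and `description` are Set, so the generated
// UPDATE touches just the description column; `username` stays NotSet and is
// neither read nor rewritten.
async fn patch_description(db: &DbConn, id: i32, text: &str) -> Result<user::Model, DbErr> {
    let patch = user::ActiveModel {
        id: Set(id),
        description: Set(Some(text.to_owned())),
        ..Default::default()
    };
    patch.update(db).await
}
```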
When it comes to writing ORM code, SeaORM is rather bulky and complicated, somewhat the opposite of what I expect from an ORM. At least it is consistent: you access all your model parts, such as columns, through the same pattern, and you get nice filter expressions (e.g. `user::Column::Name.contains(...)`) that could be extended by implementing additional functions on them.
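A short sketch of what such a query reads like with the `user` entity from this commit (the column is `Username` here because that is what the entity defines; the lookup function itself is hypothetical):

```rust
use sea_orm::{ColumnTrait, DbConn, DbErr, EntityTrait, QueryFilter, QueryOrder};
use entity::user;

// Hypothetical lookup: filter on a column and order the results, all through
// the generated `user::Column` enum.
async fn find_users_named(db: &DbConn, needle: &str) -> Result<Vec<user::Model>, DbErr> {
    user::Entity::find()
        .filter(user::Column::Username.contains(needle))
        .order_by_asc(user::Column::Username)
        .all(db)
        .await
}
```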
So, once one accepts the concept, it works mostly well, and since the tooling forces you to separate database concerns from your main code base, you are tempted to write your own repositories and value objects for interacting with the storage layer.
I am not sure why functionality implemented on Entity could not have been implemented on Model instead. Access to the Model and ActiveModel types would also be easier if you could just `pub use table::Entity as MyModelName` and then reach `MyModelName::Model` or `MyModelName::ActiveModel`, instead of sometimes using the module name and then alternating between Entity and Model.
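The closest workaround I can think of (a sketch, not something this commit does beyond re-exporting `Entity`) is to alias all three generated types at the crate root:

```rust
// In entity/src/lib.rs: re-export the generated types under one prefix so call
// sites don't have to juggle module path, Entity and Model separately.
pub use user::{ActiveModel as UserActiveModel, Entity as User, Model as UserModel};
```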
It also kind of sucks to have Entity as a name: imho the more prevalent use of that word is in ECS systems, where it means roughly the same thing, but not quite, while in the context of DDD storage mechanisms, pardon my hot take, it is just a waste of good nomenclature.
I have yet to find out how I can move the automatically generated `entity` and `migration` folders to some subdirectory like `crates`, given that putting them at the top of the workspace is rather daunting and, imho, not a nice thing in general if it is not adjustable; but I am fairly sure that works out somehow.
I also don't see why I would separate the migration crate from the entity crate rather than manage the whole thing in one layer, given that I now have to look for sea-orm version changes in three Cargo.toml files for one project. It seems to be an over-abstraction dictated by the tooling, mainly sea-orm-cli.
However, this may be mitigated, or simply "accepted", as it only bothers a certain kind of idealism, and it can easily be defended in a discussion by anyone who thinks that layers are things you cannot have enough of, armed with tons of Uncle Bob quotes.
#### Source of Truth
In Diesel my main issue was that the source of truth for models was never clear: the tools rewrote schema.rs, while a migration could also be generated by changing the schema itself.
SeaORM clearly offers two approaches: migration first or entity first. However, where Diesel seems to be growing into a better workflow and has started to auto-detect migration changes, generating even a migration (rather than an entity) from a current table seems unsupported in SeaORM. That means you either accept writing migrations manually and using them as your source of truth, or you lose all migration support.
The entity-first approach is, as expected, not the main mindset.
#### Ease of Use and the ORM Idealism
Most ORMs come with some idealism behind them.
For example, if you look at SQLAlchemy in the Python world, you sometimes get the feeling the authors never really wanted to write an ORM, as you still need at least some SQL statements, usually to set up databases.
I would say Diesel is very similar in that mindset, as it only later introduced a programmatic migration language and instead expects you to write SQL in its tutorial.
Django may have its flaws, but Django's ORM clearly has a source of truth: the Model. Changing the Model leads to auto-detected migrations, which means you can automate checking for model changes in CI/CD. The model language of Django also covers nearly all aspects of SQL, except the DEFAULT value, which seems to be hard to implement, as most ORMs don't support it.
SeaORM seems like a weird cross-over. I would have said Diesel tries to go in the direction of full automation, and might one day finally solve its source-of-truth issue if it introduces more options to define models and throws away the idea of a common generated schema file; SeaORM seems not to bother, as probably most of its users work with the migration workflow in mind, and there is a documented way to create entities in the database from code, even if that never really ties into the migration syntax.
Both ORMs force you to put your models in certain places (SeaORM is more flexible if you just don't use sea-orm-cli, while Diesel simply won't run otherwise) and therefore expect the database layer to be a global service layer, which is fine in a microservice world but kind of sucks in a modular monolith. So putting database models into various applications, as you might be used to from Django, is not really a thing.
It is better to just see your storage layer as one global database layer and implement local value objects that implement `From<T>` for the storage-layer objects, plus some repositories that do all the ORM work behind the curtain (a sketch of this follows below).
It's not like that is a bad thing, given this is also one of the downsides I witness in Django projects, where models start to become Swiss Army knives around a domain topic.
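As a rough illustration of that repository/value-object split, here is a minimal sketch against the `user` entity from this commit; `UserProfile`, `UserRepository`, and their methods are hypothetical names, not part of the change:

```rust
use sea_orm::{ActiveModelTrait, DbConn, DbErr, EntityTrait, Set};
use entity::user;

// Domain-side value object, decoupled from the storage layer.
pub struct UserProfile {
    pub id: i32,
    pub username: String,
}

impl From<user::Model> for UserProfile {
    fn from(m: user::Model) -> Self {
        Self { id: m.id, username: m.username }
    }
}

// Repository that hides the ORM behind a small, intention-revealing API.
pub struct UserRepository<'a> {
    db: &'a DbConn,
}

impl<'a> UserRepository<'a> {
    pub fn new(db: &'a DbConn) -> Self {
        Self { db }
    }

    pub async fn get(&self, id: i32) -> Result<Option<UserProfile>, DbErr> {
        Ok(user::Entity::find_by_id(id)
            .one(self.db)
            .await?
            .map(UserProfile::from))
    }

    pub async fn create(&self, username: &str) -> Result<UserProfile, DbErr> {
        let created = user::ActiveModel {
            username: Set(username.to_owned()),
            ..Default::default()
        }
        .insert(self.db)
        .await?;
        Ok(created.into())
    }
}
```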
## Accepting Fate
#### Generating Entities from a Database
`sea-orm-cli generate entity -u postgresql://miniweb:miniweb@localhost:54321/miniweb -o entity/src --lib`

README.md

@@ -17,7 +17,7 @@ So this is not thought of being a framework.
 - `axum` as webserver framework
 - `minijinja` as template renderer
-- `diesel` as database framework
+- `sea-orm` as database framework
 - `rust_embed` to embed vital static files

 ### On the Frontend
@@ -45,12 +45,6 @@ So this is not thought of being a framework.
 - Event-Bus link to RabbitMQ
 - Logging

-## Development Installation
-
-For Dev with SQLite
-
-- `cargo install diesel_cli --no-default-features --features postgres`

 ### Windows 10
 Env if using MINGW64;

diesel.toml (deleted)

@@ -1,9 +0,0 @@
# For documentation on how to configure this file,
# see https://diesel.rs/guides/configuring-diesel-cli
[print_schema]
file = "src/schema.rs"
custom_type_derives = ["diesel::query_builder::QueryId"]
[migrations_directory]
dir = "migrations"

docker/docker-compose.yml

@@ -10,7 +10,7 @@ services:
       POSTGRES_USER: miniweb
       POSTGRES_PASSWORD: miniweb
     ports:
-      - "5432:5432"
+      - "54321:5432"

 volumes:
   postgres_data:

entity/Cargo.toml (new file)

@@ -0,0 +1,20 @@
[package]
name = "entity"
version = "0.1.0"
edition = "2021"
publish = false
[lib]
name = "entity"
path = "src/lib.rs"
[dependencies]
serde = { version = "1", features = ["derive"] }
tokio = { version = "1.32.0", features = ["full"] }
[dependencies.sea-orm]
version = "0.12.10" # sea-orm version
features = [
"runtime-tokio-native-tls",
"sqlx-postgres",
]

entity/src/lib.rs (new file)

@@ -0,0 +1,5 @@
pub mod user;
pub mod permission;
pub use user::Entity as User;
pub use permission::Entity as Permission;

entity/src/main.rs (new file)

@@ -0,0 +1,37 @@
use sea_orm::{DbConn, EntityTrait, Schema, DatabaseConnection, Database, ConnectionTrait};
mod user;
mod permission;
async fn create_table<E>(db: &DbConn, entity: E)
where
E: EntityTrait,
{
let backend = db.get_database_backend();
let schema = Schema::new(backend);
let mut table_create_statement = schema.create_table_from_entity(entity);
// we need to shadow the mutable instance X, because if_not_exists() returns &mut X
let table_create_statement = table_create_statement.if_not_exists();
// we need to reborrow after dereferencing, which transforms our &mut X into &X
let stmt = backend.build(&*table_create_statement);
match db.execute(stmt).await {
Ok(_) => println!("Migrated {}", entity.table_name()),
Err(e) => println!("Error: {}", e),
}
}
pub async fn create_tables(db: &DbConn) {
create_table(db, user::Entity).await;
create_table(db, permission::Entity).await;
}
#[tokio::main]
async fn main() {
// Running Entities manually creates the tables from the entities in their latest incarnation.
println!("Connecting to database...");
let db: DatabaseConnection = Database::connect("postgresql://miniweb:miniweb@localhost:54321/miniweb").await.unwrap();
println!("Creating tables for entities...");
create_tables(&db).await;
}

entity/src/permission.rs (new file)

@@ -0,0 +1,18 @@
use sea_orm::entity::prelude::*;
use serde::{Deserialize, Serialize};
#[derive(Clone, Debug, PartialEq, Eq, DeriveEntityModel, Serialize, Deserialize)]
#[sea_orm(table_name = "permissions")]
pub struct Model {
#[sea_orm(primary_key)]
#[serde(skip_deserializing)]
pub id: i32,
#[sea_orm(index = "permission_names")]
pub name: String,
pub level: i32,
}
#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}
impl ActiveModelBehavior for ActiveModel {}

entity/src/user.rs (new file)

@@ -0,0 +1,18 @@
use sea_orm::entity::prelude::*;
use serde::{Deserialize, Serialize};
#[derive(Clone, Debug, PartialEq, Eq, DeriveEntityModel, Serialize, Deserialize)]
#[sea_orm(table_name = "users")]
pub struct Model {
#[sea_orm(primary_key)]
#[serde(skip_deserializing)]
pub id: i32,
pub username: String,
#[sea_orm(column_type = "Text")]
pub description: Option<String>,
}
#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}
impl ActiveModelBehavior for ActiveModel {}

migration/Cargo.toml (new file)

@@ -0,0 +1,19 @@
[package]
name = "migration"
version = "0.1.0"
edition = "2021"
publish = false
[lib]
name = "migration"
path = "src/lib.rs"
[dependencies]
async-std = { version = "1", features = ["attributes", "tokio1"] }
[dependencies.sea-orm-migration]
version = "0.12.10"
features = [
"runtime-tokio-native-tls",
"sqlx-postgres",
]

migration/README.md (new file)

@@ -0,0 +1,41 @@
# Running Migrator CLI
- Generate a new migration file
```sh
cargo run -- generate MIGRATION_NAME
```
- Apply all pending migrations
```sh
cargo run
```
```sh
cargo run -- up
```
- Apply first 10 pending migrations
```sh
cargo run -- up -n 10
```
- Rollback last applied migrations
```sh
cargo run -- down
```
- Rollback last 10 applied migrations
```sh
cargo run -- down -n 10
```
- Drop all tables from the database, then reapply all migrations
```sh
cargo run -- fresh
```
- Rollback all applied migrations, then reapply all migrations
```sh
cargo run -- refresh
```
- Rollback all applied migrations
```sh
cargo run -- reset
```
- Check the status of all migrations
```sh
cargo run -- status
```

migration/src/lib.rs (new file)

@@ -0,0 +1,12 @@
pub use sea_orm_migration::prelude::*;
mod m20220101_000001_create_table;
pub struct Migrator;
#[async_trait::async_trait]
impl MigratorTrait for Migrator {
fn migrations() -> Vec<Box<dyn MigrationTrait>> {
vec![Box::new(m20220101_000001_create_table::Migration)]
}
}

migration/src/m20220101_000001_create_table.rs (new file)

@@ -0,0 +1,41 @@
use sea_orm_migration::prelude::*;
#[derive(DeriveMigrationName)]
pub struct Migration;
#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.create_table(
Table::create()
.table(User::Table)
.if_not_exists()
.col(
ColumnDef::new(User::Id)
.integer()
.not_null()
.auto_increment()
.primary_key(),
)
.col(ColumnDef::new(User::Username).string().not_null())
.col(ColumnDef::new(User::Description).string().not_null())
.to_owned(),
)
.await
}
async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.drop_table(Table::drop().table(User::Table).to_owned())
.await
}
}
#[derive(DeriveIden)]
enum User {
Table,
Id,
Username,
Description,
}

migration/src/main.rs (new file)

@@ -0,0 +1,6 @@
use sea_orm_migration::prelude::*;
#[async_std::main]
async fn main() {
cli::run_cli(migration::Migrator).await;
}

migrations/00000000000000_diesel_initial_setup/down.sql (deleted)

@@ -1,6 +0,0 @@
-- This file was automatically created by Diesel to setup helper functions
-- and other internal bookkeeping. This file is safe to edit, any future
-- changes will be added to existing projects as new migrations.
DROP FUNCTION IF EXISTS diesel_manage_updated_at(_tbl regclass);
DROP FUNCTION IF EXISTS diesel_set_updated_at();

migrations/00000000000000_diesel_initial_setup/up.sql (deleted)

@@ -1,36 +0,0 @@
-- This file was automatically created by Diesel to setup helper functions
-- and other internal bookkeeping. This file is safe to edit, any future
-- changes will be added to existing projects as new migrations.
-- Sets up a trigger for the given table to automatically set a column called
-- `updated_at` whenever the row is modified (unless `updated_at` was included
-- in the modified columns)
--
-- # Example
--
-- ```sql
-- CREATE TABLE users (id SERIAL PRIMARY KEY, updated_at TIMESTAMP NOT NULL DEFAULT NOW());
--
-- SELECT diesel_manage_updated_at('users');
-- ```
CREATE OR REPLACE FUNCTION diesel_manage_updated_at(_tbl regclass) RETURNS VOID AS $$
BEGIN
EXECUTE format('CREATE TRIGGER set_updated_at BEFORE UPDATE ON %s
FOR EACH ROW EXECUTE PROCEDURE diesel_set_updated_at()', _tbl);
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE FUNCTION diesel_set_updated_at() RETURNS trigger AS $$
BEGIN
IF (
NEW IS DISTINCT FROM OLD AND
NEW.updated_at IS NOT DISTINCT FROM OLD.updated_at
) THEN
NEW.updated_at := current_timestamp;
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;

@@ -1,2 +0,0 @@
-- This file should undo anything in `up.sql`
DROP TABLE IF EXISTS "users";

@@ -1,6 +0,0 @@
-- Your SQL goes here
CREATE TABLE "users"(
"id" SERIAL PRIMARY KEY,
"username" VARCHAR NOT NULL
);

src/auth/models.rs (new file)

@@ -0,0 +1,8 @@
pub struct User {
pub id: i32,
pub username: String,
}
pub struct NewUser<'a> {
pub username: &'a str,
}

@@ -1,32 +1,29 @@
-use diesel::pg::PgConnection;
-use diesel::prelude::*;
 use dotenvy::dotenv;
-use miniweb::users::models::{NewUser, User};
+use entity;
+use sea_orm::{DatabaseConnection, Database, DbConn, ActiveModelTrait, DbErr, Set};
 use std::env;
 use std::io::stdin;

-pub fn establish_connection() -> PgConnection {
+pub async fn establish_connection() -> DatabaseConnection {
     dotenv().ok();
     let database_url = env::var("DATABASE_URL").expect("DATABASE_URL must be set");
-    PgConnection::establish(&database_url)
-        .unwrap_or_else(|_| panic!("Error connecting to {}", database_url))
+    let db = Database::connect(&database_url).await;
+    db.unwrap_or_else(|_| panic!("Error connecting to {}", database_url))
 }

-pub fn create_user(conn: &mut PgConnection, username: &str) -> User {
-    use miniweb::schema::users;
-    let new_user = NewUser { username };
-    diesel::insert_into(users::table)
-        .values(&new_user)
-        .returning(User::as_returning())
-        .get_result(conn)
-        .expect("Error saving new user")
+pub async fn create_user(conn: &DbConn, username: &str) -> Result<entity::user::Model, DbErr> {
+    let user = entity::user::ActiveModel {
+        username: Set(username.to_owned()),
+        ..Default::default()
+    };
+    let user = user.insert(conn).await?;
+    Ok(user)
 }

-fn main() {
-    let connection = &mut establish_connection();
+#[tokio::main]
+async fn main() {
+    let connection = establish_connection().await;

     let mut username = String::new();
@@ -34,6 +31,6 @@ fn main() {
     stdin().read_line(&mut username).unwrap();
     let username = username.trim_end(); // Remove the trailing newline
-    let user = create_user(connection, username);
+    let user = create_user(&connection, username).await.expect("Error creating user");
     println!("\nSaved user {} with id {}", username, user.id);
 }

@@ -1,25 +1,22 @@
-use diesel::pg::PgConnection;
-use diesel::prelude::*;
 use dotenvy::dotenv;
-use miniweb::schema::users::dsl::*;
-use miniweb::users::models::User;
+use entity;
+use sea_orm::{DatabaseConnection, Database, EntityTrait};
 use std::env;

-pub fn establish_connection() -> PgConnection {
+pub async fn establish_connection() -> DatabaseConnection {
     dotenv().ok();
     let database_url = env::var("DATABASE_URL").expect("DATABASE_URL must be set");
-    PgConnection::establish(&database_url)
-        .unwrap_or_else(|_| panic!("Error connecting to {}", database_url))
+    let db = Database::connect(&database_url).await;
+    db.unwrap_or_else(|_| panic!("Error connecting to {}", database_url))
 }

-fn main() {
-    let connection = &mut establish_connection();
-    let results = users
-        .select(User::as_select())
-        .load(connection)
-        .expect("Error loading posts");
+#[tokio::main]
+async fn main() {
+    let connection = establish_connection().await;
+
+    let results = entity::User::find()
+        .all(&connection).await.expect("Error loading users.");

     println!("Displaying {} users", results.len());
     for user in results {

src/lib.rs

@@ -1,5 +1,4 @@
 pub mod admin;
-pub mod schema;
+pub mod auth;
 pub mod service;
 pub mod state;
-pub mod users;

src/schema.rs (deleted)

@@ -1,8 +0,0 @@
// @generated automatically by Diesel CLI.
diesel::table! {
users (id) {
id -> Int4,
username -> Varchar,
}
}

src/users/models.rs (deleted)

@@ -1,15 +0,0 @@
use diesel::prelude::*;
#[derive(Queryable, Selectable)]
#[diesel(table_name = crate::schema::users)]
#[diesel(check_for_backend(diesel::pg::Pg))]
pub struct User {
pub id: i32,
pub username: String,
}
#[derive(Insertable)]
#[diesel(table_name = crate::schema::users)]
pub struct NewUser<'a> {
pub username: &'a str,
}

@@ -1,6 +0,0 @@
# Users
The most basic requirement of most services is authentication or identifying users.
This module should aim to provide that.