# Storm Framework - Complete Documentation

> Storm is an AI-first ORM framework for Kotlin 2.0+ and Java 21+, the gold
> standard for AI-assisted database development.
>
> It uses immutable data classes and records instead of proxied entities,
> providing type-safe queries, predictable performance, and zero hidden magic.
> Storm works perfectly standalone, but its design and tooling make it uniquely
> suited for AI-assisted development: immutable entities produce stable code,
> the CLI installs per-tool skills, and a locally running MCP server exposes
> only schema metadata (table definitions, column types, constraints) while
> shielding your database credentials and data from the LLM. Built-in
> verification (validateSchema(), SqlCapture) lets the AI verify that its own
> work is correct before anything is committed.
>
> Get started: `npx @storm-orm/cli`
> Website: https://orm.st
> GitHub: https://github.com/storm-orm/storm-framework
> License: Apache 2.0

# Generated: 2026-04-09T21:01:04Z

========================================
## Source: index.md
========================================

# ST/ORM

> **Tip:** Storm includes a schema-aware MCP server that exposes your table definitions, column types, and foreign keys to AI coding tools like Claude Code, Cursor, Copilot and Codex. Run `npx @storm-orm/cli` for full Storm ORM support including AI skills, conventions, and schema access. Using Python, Go, Ruby, or another language? Run `npx @storm-orm/cli mcp init` to set up the MCP server standalone.

**Storm** is a modern, high-performance ORM for Kotlin 2.0+ and Java 21+, built around a powerful SQL template engine. It focuses on simplicity, type safety, and predictable performance through immutable models and compile-time metadata.

**Key benefits:**

- **Minimal code**: Define entities with simple records/data classes and query with concise, readable syntax, no boilerplate.
- **Parameterized by default**: String interpolations are automatically converted to bind variables, making queries SQL injection safe by design.
- **Close to SQL**: Storm embraces SQL rather than abstracting it away, keeping you in control of your database operations.
- **Type-safe**: Storm's DSL mirrors SQL, providing a type-safe, intuitive experience that makes queries easy to write and read while reducing the risk of runtime errors.
- **Direct Database Interaction**: Storm translates method calls directly into database operations, offering a transparent and straightforward experience. It eliminates inefficiencies like the N+1 query problem for predictable and efficient interactions.
- **Stateless**: Avoids hidden complexities and "magic" with stateless, record-based entities, ensuring simplicity and eliminating lazy initialization and transaction issues downstream.
- **Performance**: Template caching, transaction-scoped entity caching, and zero-overhead dirty checking (thanks to immutability) ensure efficient database interactions. Batch processing, lazy streams, and upserts are built in.
- **Universal Database Compatibility**: Fully compatible with all SQL databases, it offers flexibility and broad applicability across various database systems.

## Built for the AI Era

Storm is the ORM that AI coding assistants get right. Its stateless, immutable entities mean what you see in the source code is exactly what exists at runtime: no hidden proxies, no lazy loading surprises, no persistence context rules that trip up AI-generated code. When you ask your AI tool to write a query, define an entity, or build a repository, the output is straightforward data classes and explicit SQL, the same code a senior developer would write by hand.

Traditional ORMs carry invisible complexity (managed entity state, implicit flushes, bytecode-enhanced proxies) that AI tools have no reliable way to reason about. Storm eliminates these failure modes entirely.
Combined with its compile-time metamodel that catches errors before runtime, Storm and AI coding tools form a natural partnership.

**Get started in seconds:**

```bash
npx @storm-orm/cli
```

This configures your AI tool (Claude Code, Cursor, Copilot, Windsurf, or Codex) with Storm's patterns, conventions, and slash commands. See [AI-Assisted Development](ai.md) for details.

## Why Storm?

Storm draws inspiration from established ORMs such as Hibernate, but is built from scratch around a clear design philosophy: capture intent using the minimum amount of code, optimized for Kotlin and modern Java.

**Storm's mission:** Make database development productive and enjoyable, with full developer control and high performance. Storm embraces SQL rather than abstracting it away. It simplifies database interactions while remaining transparent, and scales from prototypes to enterprise systems.

| Traditional ORM Pain | Storm Solution |
|----------------------|----------------|
| N+1 queries from lazy loading | Entity graphs load in a single query |
| Hidden magic (proxies, implicit flush, cascades) | Stateless records; explicit, predictable behavior |
| Entity state confusion (managed/detached/transient) | Immutable records; no state to manage |
| Entities tied to session/context | Stateless records easily cached and shared across layers |
| Dirty checking via bytecode manipulation | Lightning-fast dirty checking thanks to immutability |
| Complex mapping configuration | Convention over configuration |
| Runtime query errors | Compile-time type-safe DSL |
| SQL hidden behind abstraction layers | SQL-first design; stay close to the database |

**Storm is ideal for** developers who understand that the best solutions emerge when object model and database model work in harmony. If you value a database-first approach where records naturally mirror your schema, Storm is built for you. Custom mappings are supported when needed, but the real elegance comes from alignment, not abstraction.
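The dirty-checking row in the table above follows directly from value semantics. A minimal sketch in plain Kotlin (illustrative only, not Storm's internal implementation; the `User` class here is a stand-in, not a Storm entity) shows why comparing immutable snapshots is enough to detect changes:

```kotlin
// Illustrative only: immutability turns dirty checking into a value comparison.
data class User(val id: Int, val email: String, val name: String)

fun main() {
    // Snapshot the state as it was loaded within the transaction...
    val loaded = User(1, "alice@example.com", "Alice")

    // ...and compare it to whatever the application hands back on update.
    val modified = loaded.copy(name = "Alice Johnson")

    // Structural equality replaces proxies and bytecode enhancement:
    check(loaded != modified)        // dirty -> an UPDATE is warranted
    check(loaded == loaded.copy())   // unchanged -> the update can be skipped
}
```

Because the loaded snapshot can never be mutated behind the framework's back, no tracking machinery is needed between load and update.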
## Choose Your Language

Both Kotlin and Java support SQL Templates for powerful query composition. Kotlin additionally provides a type-safe DSL with infix operators for a more idiomatic experience.

[Kotlin]

```kotlin
// Define an entity
data class User(
    @PK val id: Int = 0,
    val email: String,
    val name: String,
    @FK val city: City
) : Entity

// Type-safe predicates — query nested properties like city.name in one go
val users = orm.findAll(User_.city.name eq "Sunnyvale")

// Custom repository — inherits all CRUD operations, add your own queries
interface UserRepository : EntityRepository {
    fun findByCityName(name: String) = findAll(User_.city.name eq name)
}
val users = userRepository.findByCityName("Sunnyvale")

// Block DSL — build queries with where, orderBy, joins, pagination
val users = userRepository.select {
    where(User_.city.name eq "Sunnyvale")
    orderBy(User_.name)
}.resultList

// SQL Template for full control; parameterized by default, SQL injection safe
val users = orm.query {
    """
    SELECT ${User::class}
    FROM ${User::class}
    WHERE ${User_.city.name} = $cityName"""
}.resultList()
```

Full coroutine support with `Flow` for streaming and programmatic transactions:

```kotlin
// Streaming with Flow
val users: Flow<User> = orm.entity(User::class).selectAll()
users.collect { user -> println(user.name) }

// Programmatic transactions
transaction {
    val city = orm insert City(name = "Sunnyvale", population = 155_000)
    val user = orm insert User(email = "bob@example.com", name = "Bob", city = city)
}
```

[Java]

```java
// Define an entity
record User(@PK Integer id,
            String email,
            String name,
            @FK City city
) implements Entity {}

// Custom repository—inherits all CRUD operations, add your own queries
interface UserRepository extends EntityRepository {
    default List<User> findByCityName(String name) {
        return select().where(User_.city.name, EQUALS, name).getResultList();
    }
}
List<User> users = userRepository.findByCityName("Sunnyvale");

// Query Builder for more complex operations
List<User> users =
    orm.entity(User.class)
       .select()
       .where(User_.city.name, EQUALS, "Sunnyvale")
       .orderBy(User_.name)
       .getResultList();

// SQL Template for full control; parameterized by default, SQL injection safe
List<User> users = orm.query(RAW."""
        SELECT \{User.class}
        FROM \{User.class}
        WHERE \{User_.city.name} = \{cityName}
        """).getResultList(User.class);
```

## Quick Start

Storm provides a Bill of Materials (BOM) for centralized version management. Import the BOM once and omit version numbers from individual Storm dependencies.

[Kotlin (Gradle)]

```kotlin
dependencies {
    implementation(platform("st.orm:storm-bom:@@STORM_VERSION@@"))
    implementation("st.orm:storm-kotlin")
    runtimeOnly("st.orm:storm-core")
    // Use storm-compiler-plugin-2.0 for Kotlin 2.0.x, -2.1 for 2.1.x, etc.
    kotlinCompilerPluginClasspath("st.orm:storm-compiler-plugin-2.0")
}
```

[Java (Maven)]

```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>st.orm</groupId>
            <artifactId>storm-bom</artifactId>
            <version>@@STORM_VERSION@@</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>st.orm</groupId>
        <artifactId>storm-java21</artifactId>
    </dependency>
    <dependency>
        <groupId>st.orm</groupId>
        <artifactId>storm-core</artifactId>
        <scope>runtime</scope>
    </dependency>
</dependencies>
```

Ready to get started? Head to the [Getting Started](getting-started.md) guide.

## Learning Paths

Not sure where to begin? Pick the path that fits your situation.

### New to Storm

If you are new to Storm, follow these guides in order to build a solid foundation:

1. [Installation](installation.md) -- add Storm to your project
2. [First Entity](first-entity.md) -- define entities, insert and fetch records
3. [First Query](first-query.md) -- filtering, repositories, and streaming
4. [Entities](entities.md) -- annotations, nullability, naming conventions
5. [Queries](queries.md) -- the full query DSL and builder reference
6. [Repositories](repositories.md) -- the repository pattern and custom query methods
7. [Relationships](relationships.md) -- foreign keys, entity graphs, and many-to-many

### Migrating from JPA

If you are coming from JPA or Hibernate, these pages explain the key differences and how to transition:

1. [Migration from JPA](migration-from-jpa.md) -- annotation mapping, concept translation, coexistence strategy
2. [Storm vs Other Frameworks](comparison.md) -- feature comparison with JPA, jOOQ, MyBatis, and others
3. [Entities](entities.md) -- how Storm entities differ from JPA entities
4. [Repositories](repositories.md) -- Storm repositories vs. Spring Data repositories
5. [Transactions](transactions.md) -- transaction management without an EntityManager
6. [Spring Integration](spring-integration.md) -- Spring Boot Starter and auto-configuration

### Evaluating for Production

If you are a tech lead or architect evaluating Storm for a production system, these pages cover the areas that matter most:

1. [Storm vs Other Frameworks](comparison.md) -- feature-level comparison across frameworks
2. [Spring Integration](spring-integration.md) -- Spring Boot auto-configuration, repository scanning, DI
3. [Ktor Integration](ktor-integration.md) -- Ktor plugin, HOCON configuration, coroutine-native transactions
4. [Batch Processing and Streaming](batch-streaming.md) -- bulk operations and large dataset handling
5. [Testing](testing.md) -- JUnit 5 integration, statement capture, and test isolation
6. [Configuration](configuration.md) -- runtime tuning, dirty checking modes, cache retention
7. [Database Dialects](dialects.md) -- database-specific optimizations

## What Storm Does Not Do

Storm is focused on being a great ORM and SQL template engine. It intentionally does not include:

- **Schema migration or DDL generation.** Storm does not automatically create, alter, or drop tables at runtime. With Storm's [AI integration](/ai), your coding assistant can read your database schema and generate Flyway or Liquibase migration scripts on demand. For schema versioning, use [Flyway](https://flywaydb.org/) or [Liquibase](https://www.liquibase.com/).
- **Second-level cache.** Storm's entity cache is transaction-scoped and cleared on commit. For cross-transaction caching, use Spring's `@Cacheable` or a dedicated cache layer like Caffeine or Redis.
- **Lazy loading proxies.** Entities are plain records with no proxies. Related entities are loaded eagerly in a single query via JOINs. For deferred loading, use [Refs](refs.md) to explicitly control when related data is fetched.

## Database Support

Storm works with any JDBC-compatible database. Dialect packages provide optimized support for:

![Oracle](https://img.shields.io/badge/Oracle-F80000?logo=oracle&logoColor=white) ![SQL Server](https://img.shields.io/badge/SQL_Server-CC2927?logo=microsoftsqlserver&logoColor=white) ![PostgreSQL](https://img.shields.io/badge/PostgreSQL-4169E1?logo=postgresql&logoColor=white) ![MySQL](https://img.shields.io/badge/MySQL-4479A1?logo=mysql&logoColor=white) ![MariaDB](https://img.shields.io/badge/MariaDB-003545?logo=mariadb&logoColor=white) ![SQLite](https://img.shields.io/badge/SQLite-003B57?logo=sqlite&logoColor=white) ![H2](https://img.shields.io/badge/H2-0000bb?logoColor=white)

See [Database Dialects](dialects.md) for installation and configuration details.

## Requirements

- Kotlin 2.0+ or Java 21+
- Maven 3.9+ or Gradle 8+

## Glossary

New to Storm's terminology? See the [Glossary](glossary.md) for definitions of key terms like Entity, Projection, Metamodel, Ref, Hydration, and more.

## License

Storm is released under the [Apache 2.0 License](https://github.com/storm-repo/storm-framework/blob/main/LICENSE).

========================================
## Source: getting-started.md
========================================

# Getting Started

Storm is a modern SQL Template and ORM framework for Kotlin 2.0+ and Java 21+. It uses immutable data classes and records instead of proxied entities, giving you predictable behavior, type-safe queries, and high performance.

## Design Philosophy

Storm is built around a simple idea: your data model should be a plain value, not a framework-managed object. In Storm, entities are Kotlin data classes or Java records. They carry no hidden state, no change-tracking proxies, and no lazy-loading hooks.
You can create them, pass them across layers, serialize them, compare them by value, and store them in collections without worrying about session scope, detachment, or side effects. What you see in the source code is exactly what exists at runtime.

This stateless design is a deliberate trade-off. Traditional ORMs like JPA/Hibernate give you transparent lazy loading and proxy-based dirty checking, but at the cost of complexity: you must reason about managed vs. detached state, proxy initialization, persistence context boundaries, and cascading rules that interact in subtle ways. Storm avoids all of this. It still performs [dirty checking](dirty-checking.md), but by comparing entity state within a transaction rather than through proxies or bytecode manipulation. When you query a relationship, you get the result in the same query. There are no surprises.

Storm is also SQL-first. Rather than abstracting SQL away behind a query language (like JPQL) or a verbose criteria builder, Storm embraces SQL directly. Its SQL Template API lets you write real SQL with type-safe parameter interpolation and automatic result mapping. For common CRUD patterns, the type-safe DSL and repository interfaces provide concise, compiler-checked alternatives, but the full power of SQL is always available when you need it.

The framework is organized around three core abstractions:

- **Entity** is your data model. A Kotlin data class or Java record with a few annotations (`@PK`, `@FK`) that describe its mapping to the database. Storm derives table and column names automatically, so annotations are only needed for primary keys, foreign keys, and cases where the naming convention does not match.
- **Repository** provides CRUD operations and type-safe queries for a specific entity. You define an interface, write query methods with explicit bodies using the DSL, and Storm handles the rest. No magic method-name parsing, no hidden query generation.
- **SQL Template** gives you direct access to SQL with type-safe parameter binding and result mapping. You write real SQL, embed parameters and entity types directly in the query string, and get back typed results. This is the escape hatch when the DSL is not enough, and it is a first-class citizen in Storm, not an afterthought.

These abstractions share a common principle: explicit behavior over implicit magic. Every query is visible in the source code. Every relationship is loaded when you ask for it. Every transaction boundary is declared, not inferred. This makes Storm applications straightforward to debug, profile, and reason about.

## Choose Your Path

Storm supports two ways to get started. Pick the one that fits your workflow.

[AI-Assisted]

### AI-Assisted Setup

If you use an AI coding tool (Claude Code, Cursor, GitHub Copilot, Windsurf, or Codex), Storm provides rules, skills, and an optional database-aware MCP server that give the AI deep knowledge of Storm's conventions. The AI can generate entities from your schema, write queries, and verify its own work against a real database.

**1. Install the Storm CLI and run it in your project:**

```bash
npx @storm-orm/cli init
```

The interactive setup configures your AI tool with Storm's rules and skills, and optionally connects it to your development database for schema-aware code generation.

**2. Ask your AI tool to set up Storm:**

Once `storm init` has configured your tool, you can ask it to add the right dependencies, create entities from your database tables, and write queries. The AI has access to Storm's full documentation and your database schema. For example:

- "Add Storm to this project with Spring Boot and PostgreSQL"
- "Set up Storm with Ktor and PostgreSQL"
- "Create entities for the users and orders tables"
- "Write a repository method that finds orders by status with pagination"

**3. Verify:**

Storm's AI workflow includes built-in verification.
The AI can run `ORMTemplate.validateSchema()` to prove entities match the database and `SqlCapture` to inspect generated SQL, all in an isolated H2 test database before anything touches production.

See [AI-Assisted Development](ai.md) for the full setup guide, available skills, and MCP server configuration.

[Manual]

### Manual Setup

Follow these three steps in order for the fastest path from zero to a working application.

**1. Installation**

Set up your project with the right dependencies, build flags, and optional modules.

**[Go to Installation](installation.md)**

**2. First Entity**

Define your first entity, create an ORM template, and perform insert, read, update, and remove operations.

**[Go to First Entity](first-entity.md)**

**3. First Query**

Write custom queries, build repositories, stream results, and use the type-safe metamodel.

**[Go to First Query](first-query.md)**

---

## What's Next

Once you have completed the getting-started guides, explore the features that match your needs:

**Core Concepts:**

- [Entities](entities.md) -- annotations, nullability, naming conventions
- [Queries](queries.md) -- query DSL, filtering, joins, aggregation
- [Relationships](relationships.md) -- one-to-one, many-to-one, many-to-many
- [Repositories](repositories.md) -- custom repository pattern

**Operations:**

- [Transactions](transactions.md) -- transaction management and propagation
- [Upserts](upserts.md) -- insert-or-update operations
- [Batch Processing & Streaming](batch-streaming.md) -- bulk operations and large datasets
- [Dirty Checking](dirty-checking.md) -- automatic change detection on update

**Integration:**

- [Spring Integration](spring-integration.md) -- Spring Boot Starter, auto-configuration, and DI
- [Testing](testing.md) -- JUnit 5 integration and statement capture
- [Database Dialects](dialects.md) -- database-specific features

**Advanced:**

- [Refs](refs.md) -- lightweight entity references for deferred loading
- [Projections](projections.md) -- read-only views of entities
- [SQL Templates](sql-templates.md) -- raw SQL with type safety
- [Metamodel](metamodel.md) -- compile-time type-safe field references
- [JSON Support](json.md) -- JSON columns and aggregation
- [Entity Serialization](serialization.md) -- JSON serialization with Ref support

**Migration:**

- [Migration from JPA](migration-from-jpa.md) -- step-by-step guide
- [Storm vs Other Frameworks](comparison.md) -- feature comparison

========================================
## Source: installation.md
========================================

# Installation

This page covers everything you need to add Storm to your project: prerequisites, dependency setup, and optional modules.

## Prerequisites

| Requirement | Version |
|-------------|---------|
| JDK | 21 or later |
| Kotlin (if using Kotlin) | 2.0 or later |
| Build tool | Maven 3.9+ or Gradle 8+ |
| Database | Any JDBC-compatible database |

Kotlin users do not need any preview flags. Java users must enable `--enable-preview` in their compiler configuration because the Java API uses String Templates (JEP 430).

## Add the BOM

Storm provides a Bill of Materials (BOM) for centralized version management. Import the BOM once, then omit version numbers from individual Storm dependencies. This prevents version mismatches between modules.
[Kotlin]

```kotlin
dependencies {
    implementation(platform("st.orm:storm-bom:@@STORM_VERSION@@"))
}
```

[Java]

**Maven:**

```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>st.orm</groupId>
            <artifactId>storm-bom</artifactId>
            <version>@@STORM_VERSION@@</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
```

**Gradle (Kotlin DSL):**

```kotlin
dependencies {
    implementation(platform("st.orm:storm-bom:@@STORM_VERSION@@"))
}
```

## Add the Core Dependencies

[Kotlin]

```kotlin
plugins {
    id("com.google.devtools.ksp") version "2.0.21-1.0.28"
}

dependencies {
    implementation(platform("st.orm:storm-bom:@@STORM_VERSION@@"))
    implementation("st.orm:storm-kotlin")
    runtimeOnly("st.orm:storm-core")
    ksp("st.orm:storm-metamodel-ksp")
    kotlinCompilerPluginClasspath("st.orm:storm-compiler-plugin-2.0")
}
```

The `storm-metamodel-ksp` dependency generates type-safe metamodel classes (e.g., `User_`, `City_`) at compile time. See [Metamodel](metamodel.md) for details.

The `storm-compiler-plugin` automatically wraps string interpolations inside SQL template lambdas, making queries injection-safe by default. The `2.0` suffix matches the Kotlin major.minor version used in your project (e.g., `storm-compiler-plugin-2.1` for Kotlin 2.1.x). See [String Templates](string-templates.md) for details.

[Java]

**Gradle (Kotlin DSL):**

```kotlin
dependencies {
    implementation(platform("st.orm:storm-bom:@@STORM_VERSION@@"))
    implementation("st.orm:storm-java21")
    runtimeOnly("st.orm:storm-core")
    annotationProcessor("st.orm:storm-metamodel-processor")
}

tasks.withType<JavaCompile> {
    options.compilerArgs.add("--enable-preview")
}

tasks.withType<Test> {
    jvmArgs("--enable-preview")
}
```

**Maven:**

```xml
<dependencies>
    <dependency>
        <groupId>st.orm</groupId>
        <artifactId>storm-java21</artifactId>
    </dependency>
    <dependency>
        <groupId>st.orm</groupId>
        <artifactId>storm-core</artifactId>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>st.orm</groupId>
        <artifactId>storm-metamodel-processor</artifactId>
        <scope>provided</scope>
    </dependency>
</dependencies>
```

Enable preview features for String Templates (JEP 430):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <release>21</release>
        <compilerArgs>
            <arg>--enable-preview</arg>
        </compilerArgs>
    </configuration>
</plugin>
```

The metamodel processor generates type-safe metamodel classes (e.g., `User_`, `City_`) at compile time. See [Metamodel](metamodel.md) for details.

## Optional Modules

Storm is modular. Add only what you need.
### Database Dialects

Storm works with any JDBC-compatible database out of the box. Dialect modules provide database-specific optimizations (e.g., native upsert syntax, tuple comparisons). Add the one that matches your database as a runtime dependency:

| Module | Database |
|--------|----------|
| `storm-oracle` | Oracle |
| `storm-mssqlserver` | SQL Server |
| `storm-postgresql` | PostgreSQL |
| `storm-mysql` | MySQL |
| `storm-mariadb` | MariaDB |
| `storm-sqlite` | SQLite |
| `storm-h2` | H2 |

```kotlin
runtimeOnly("st.orm:storm-postgresql")
```

See [Database Dialects](dialects.md) for what each dialect provides.

### Spring Boot Integration

For Spring Boot applications, use the starter modules instead of the base modules. The starters auto-configure the `ORMTemplate` bean, enable repository scanning, and integrate with Spring's transaction management. See [Spring Integration](spring-integration.md) for full setup details.

[Kotlin]

```kotlin
implementation("st.orm:storm-kotlin-spring-boot-starter")
```

[Java]

```xml
<dependency>
    <groupId>st.orm</groupId>
    <artifactId>storm-spring-boot-starter</artifactId>
</dependency>
```

### Ktor Integration

For Ktor applications, add the Ktor plugin module. It provides a `Storm` plugin that manages the DataSource lifecycle, reads HOCON configuration, and exposes the `ORMTemplate` through extension properties on `Application`, `ApplicationCall`, and `RoutingContext`. See [Ktor Integration](ktor-integration.md) for full setup details.

```kotlin
implementation("st.orm:storm-ktor")
```

For testing:

```kotlin
testImplementation("st.orm:storm-ktor-test")
```

### JSON Support

Storm supports storing and reading JSON-typed columns. Pick the module that matches your serialization library:

| Module | Library |
|--------|---------|
| `storm-jackson2` | Jackson 2.17+ (Spring Boot 3.x) |
| `storm-jackson3` | Jackson 3.0+ (Spring Boot 4+) |
| `storm-kotlinx-serialization` | Kotlinx Serialization |

See [JSON Support](json.md) for usage details.
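Putting the optional modules together: the Gradle (Kotlin DSL) sketch below combines the BOM, the Kotlin core modules, a dialect, and a JSON module, using only artifact names that appear on this page. The PostgreSQL and Jackson 2 choices are examples; swap in the modules that match your stack, and note that the `ksp(...)` line assumes the KSP plugin is applied as shown in the core-dependencies section.

```kotlin
dependencies {
    // Version management via the BOM (see "Add the BOM" above)
    implementation(platform("st.orm:storm-bom:@@STORM_VERSION@@"))

    // Core Kotlin modules
    implementation("st.orm:storm-kotlin")
    runtimeOnly("st.orm:storm-core")
    ksp("st.orm:storm-metamodel-ksp")
    kotlinCompilerPluginClasspath("st.orm:storm-compiler-plugin-2.0")

    // Optional modules: dialect and JSON support (example choices)
    runtimeOnly("st.orm:storm-postgresql")
    implementation("st.orm:storm-jackson2")
}
```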
## Module Overview

The following diagram shows how Storm's modules relate to each other. You only need the modules relevant to your language and integration choices.

```
storm-foundation (base interfaces)
└── storm-kotlin / storm-java21 (your primary dependency)
    ├── storm-kotlin-spring / storm-spring (Spring Framework)
    │   └── storm-kotlin-spring-boot-starter / storm-spring-boot-starter
    ├── storm-ktor (Ktor)
    │   └── storm-ktor-test (testing support)
    ├── dialect modules (postgresql, mysql, mariadb, oracle, mssqlserver, sqlite, h2)
    └── JSON modules (jackson2, jackson3, kotlinx-serialization)
```

## Next Steps

With Storm installed, you are ready to define your first entity and run your first query:

- [First Entity](first-entity.md) -- define an entity, create an ORM template, insert and fetch a record
- [First Query](first-query.md) -- custom queries, repositories, and type-safe filtering

========================================
## Source: first-entity.md
========================================

# First Entity

This guide walks you through defining your first Storm entity, creating an ORM template, and performing basic CRUD operations. By the end, you will have inserted a record into the database and read it back.

## Define an Entity

Storm entities are plain data classes (Kotlin) or records (Java) that implement the `Entity` interface. Annotate the primary key with `@PK` and foreign keys with `@FK`. Storm maps field names to column names automatically using camelCase-to-snake_case conversion, so no XML or additional configuration is needed.

[Kotlin]

```kotlin
data class City(
    @PK val id: Int = 0,
    val name: String,
    val population: Long
) : Entity

data class User(
    @PK val id: Int = 0,
    val email: String,
    val name: String,
    @FK val city: City
) : Entity
```

Non-nullable fields (like `city: City`) produce `INNER JOIN` queries. Nullable fields (like `city: City?`) produce `LEFT JOIN` queries. Kotlin's type system maps directly to Storm's null handling.
[Java]

```java
@Builder(toBuilder = true)
record City(@PK Integer id,
            String name,
            long population
) implements Entity {}

@Builder(toBuilder = true)
record User(@PK Integer id,
            String email,
            String name,
            @FK City city
) implements Entity {}
```

In Java, record components are nullable by default. Use `@Nonnull` on fields that must always have a value. Primitive types (`int`, `long`, etc.) are inherently non-nullable.

The `@Builder` annotation is from [Lombok](https://projectlombok.org/) and is optional. It generates a builder that lets you construct entities without specifying the primary key, and creates modified copies via `toBuilder()`. Without Lombok, you can pass `null` as the primary key (e.g., `new City(null, "Sunnyvale", 155_000)`) or define a convenience constructor that omits it. See [Modifying Entities](entities.md#modifying-entities) for details.

These entities map to the following database tables:

| Table | Columns |
|-------|---------|
| `city` | `id`, `name`, `population` |
| `user` | `id`, `email`, `name`, `city_id` |

Storm automatically appends `_id` to foreign key column names. See [Entities](entities.md) for the full set of annotations, naming conventions, and customization options.

## Create the ORM Template

The `ORMTemplate` is the central entry point for all database operations. It is thread-safe and typically created once at application startup (or provided as a Spring bean). You can create one from a JDBC `DataSource`, `Connection`, or JPA `EntityManager`.
[Kotlin]

Kotlin provides extension properties for concise creation:

```kotlin
// From a DataSource (most common)
val orm = dataSource.orm

// From a Connection
val orm = connection.orm

// From a JPA EntityManager
val orm = entityManager.orm
```

[Java]

Use the `ORMTemplate.of(...)` factory methods:

```java
// From a DataSource (most common)
var orm = ORMTemplate.of(dataSource);

// From a Connection
var orm = ORMTemplate.of(connection);

// From a JPA EntityManager
var orm = ORMTemplate.of(entityManager);
```

If you are using Spring Boot with one of the starter modules, the `ORMTemplate` bean is created automatically. See [Spring Integration](spring-integration.md) for details.

## Insert a Record

[Kotlin]

Storm's Kotlin API provides infix operators for a concise syntax:

```kotlin
// Insert a city -- the returned object has the database-generated ID
val city = orm insert City(name = "Sunnyvale", population = 155_000)

// Insert a user that references the city
val user = orm insert User(
    email = "alice@example.com",
    name = "Alice",
    city = city
)
```

The `insert` operator sends an INSERT statement, retrieves the auto-generated primary key, and returns a new instance with the key populated. You do not need to set the `id` field yourself when using `IDENTITY` generation (the default).

[Java]

```java
var cities = orm.entity(City.class);
var users = orm.entity(User.class);

// Insert a city -- the returned object has the database-generated ID
City city = cities.insertAndFetch(City.builder()
        .name("Sunnyvale")
        .population(155_000)
        .build());

// Insert a user that references the city
User user = users.insertAndFetch(User.builder()
        .email("alice@example.com")
        .name("Alice")
        .city(city)
        .build());
```

The `insertAndFetch` method sends an INSERT statement, retrieves the auto-generated primary key, and returns a new record with the key populated.

## Read a Record

[Kotlin]

```kotlin
// Find by ID
val user: User? = orm.entity().findById(userId)

// Find by field value using the metamodel (requires storm-metamodel-processor)
val user: User? = orm.find(User_.email eq "alice@example.com")
```

[Java]

```java
// Find by ID
Optional<User> user = orm.entity(User.class).findById(userId);

// Find by field value using the metamodel (requires storm-metamodel-processor)
Optional<User> user = orm.entity(User.class)
        .select()
        .where(User_.email, EQUALS, "alice@example.com")
        .getOptionalResult();
```

When Storm loads a `User`, it automatically joins the `City` table (because `city` is marked with `@FK`) and populates the full `City` object in a single query. There is no N+1 problem.

## Update a Record

Since entities are immutable, you create a new instance with the changed fields and pass it to the update operation.

[Kotlin]

```kotlin
val updatedUser = orm update user.copy(name = "Alice Johnson")
```

[Java]

```java
users.update(new User(user.id(), user.email(), "Alice Johnson", user.city()));
```

## Remove a Record

[Kotlin]

```kotlin
orm remove user
```

[Java]

```java
users.remove(user);
```

## Transactions

Wrap multiple operations in a transaction to ensure they succeed or fail together.

[Kotlin]

Storm provides a `transaction` block that commits on success and rolls back on exception:

```kotlin
transaction {
    val city = orm insert City(name = "Sunnyvale", population = 155_000)
    val user = orm insert User(email = "bob@example.com", name = "Bob", city = city)
}
```

[Java]

With Spring's `@Transactional`:

```java
@Transactional
public User createUser(String email, String name, City city) {
    return orm.entity(User.class)
            .insertAndFetch(User.builder()
                    .email(email)
                    .name(name)
                    .city(city)
                    .build());
}
```

See [Transactions](transactions.md) for programmatic transaction control, propagation modes, and savepoints.

## Summary

You have now seen the core workflow:

1. Define entities as data classes or records with `@PK` and `@FK` annotations
2. Create an `ORMTemplate` from a `DataSource`
3. Use `insert`, `findById`, `update`, and `remove` for basic CRUD

## Next Steps

- [First Query](first-query.md) -- custom queries, repositories, filtering, and streaming
- [Entities](entities.md) -- enumerations, versioning, composite keys, and naming conventions
- [Spring Integration](spring-integration.md) -- auto-configuration and dependency injection

========================================
## Source: first-query.md
========================================

# First Query

Once you can insert and fetch records (see [First Entity](first-entity.md)), the next step is querying. This page covers the query patterns you will use most often: filtering with predicates, using repositories, streaming results, and writing type-safe queries with the metamodel.

## Filtering with Predicates

The simplest way to query is with predicate methods directly on the ORM template or entity repository.

[Kotlin]

```kotlin
val users = orm.entity(User::class)

// Find all users in a city
val usersInCity: List<User> = users.findAll(User_.city eq city)

// Find a single user by email
val user: User? = users.find(User_.email eq "alice@example.com")

// Combine conditions with and / or
val results: List<User> = users.findAll(
    (User_.city eq city) and (User_.name like "A%")
)

// Check existence
val exists: Boolean = users.existsById(userId)

// Count
val count: Long = users.count()
```

[Java]

```java
var users = orm.entity(User.class);

// Find all users in a city
List<User> usersInCity = users.select()
        .where(User_.city, EQUALS, city)
        .getResultList();

// Find a single user by email
Optional<User> user = users.select()
        .where(User_.email, EQUALS, "alice@example.com")
        .getOptionalResult();

// Combine conditions with and / or
List<User> results = users.select()
        .where(it -> it.where(User_.city, EQUALS, city)
                .and(it.where(User_.name, LIKE, "A%")))
        .getResultList();

// Check existence
boolean exists = users.existsById(userId);

// Count
long count = users.count();
```

These predicate methods use the [Static Metamodel](metamodel.md) (`User_`, `City_`), which is generated at compile time. The compiler catches typos and type mismatches in field references before your code runs.

## Custom Repositories

For domain-specific queries that you will reuse, define a custom repository interface. This keeps query logic in a single place and makes it testable through interface substitution.

[Kotlin]

```kotlin
interface UserRepository : EntityRepository {
    fun findByEmail(email: String): User? =
        find(User_.email eq email)

    fun findByNameInCity(name: String, city: City): List<User> =
        findAll((User_.city eq city) and (User_.name eq name))

    fun streamByCity(city: City): Flow<User> =
        select { User_.city eq city }
}

// Get the repository from the ORM template
val userRepository = orm.repository()

// Use it
val user = userRepository.findByEmail("alice@example.com")
val usersInCity = userRepository.findByNameInCity("Alice", city)
```

Custom repositories inherit all built-in CRUD operations (`insert`, `findById`, `update`, `remove`, etc.) from `EntityRepository`. You only add methods for domain-specific queries.
[Java]
```java
interface UserRepository extends EntityRepository<User, Integer> {
    default Optional<User> findByEmail(String email) {
        return select()
            .where(User_.email, EQUALS, email)
            .getOptionalResult();
    }

    default List<User> findByNameInCity(String name, City city) {
        return select()
            .where(it -> it.where(User_.city, EQUALS, city)
                .and(it.where(User_.name, EQUALS, name)))
            .getResultList();
    }
}

// Get the repository from the ORM template
UserRepository userRepository = orm.repository(UserRepository.class);

// Use it
Optional<User> user = userRepository.findByEmail("alice@example.com");
```

Custom repositories inherit all built-in CRUD operations from `EntityRepository`. You only add `default` methods for domain-specific queries.

See [Repositories](repositories.md) for the full repository pattern, Spring integration, and scrolling.

## Query Builder

For queries that need ordering, pagination, joins, or aggregation, use the fluent query builder.

[Kotlin]
```kotlin
val users = orm.entity(User::class)

// Ordering and pagination
val page = users.select()
    .where(User_.city eq city)
    .orderBy(User_.name)
    .limit(10)
    .resultList

// Joins (for entities not directly referenced by @FK)
val roles = orm.entity(Role::class)
    .select()
    .innerJoin(UserRole::class).on(Role::class)
    .whereAny(UserRole_.user eq user)
    .resultList

// Aggregation
data class CityCount(val city: City, val count: Long)

val counts = users.select(CityCount::class) { "${City::class}, COUNT(*)" }
    .groupBy(User_.city)
    .resultList
```

[Java]
```java
var users = orm.entity(User.class);

// Ordering and pagination
List<User> page = users.select()
    .where(User_.city, EQUALS, city)
    .orderBy(User_.name)
    .limit(10)
    .getResultList();

// Joins (for entities not directly referenced by @FK)
List<Role> roles = orm.entity(Role.class)
    .select()
    .innerJoin(UserRole.class).on(Role.class)
    .where(UserRole_.user, EQUALS, user)
    .getResultList();

// Aggregation
record CityCount(City city, long count) {}

List<CityCount> counts = users
    .select(CityCount.class, RAW."\{City.class}, COUNT(*)")
    .groupBy(User_.city)
    .getResultList();
```

See [Queries](queries.md) for the full query reference, including scrolling, distinct results, and compound field handling.

## Streaming

For large result sets, streaming avoids loading all rows into memory at once. Rows are fetched lazily from the database as you consume them.

[Kotlin]
Kotlin uses `Flow`, which provides automatic resource management through structured concurrency:

```kotlin
val users: Flow<User> = orm.entity(User::class).selectAll()

// Process each row
users.collect { user -> println(user.name) }

// Transform and collect
val emails: List<String> = users.map { it.email }.toList()
```

[Java]
Java uses `Stream`, which holds an open database cursor. Always close streams to release resources:

```java
try (Stream<User> users = orm.entity(User.class).selectAll()) {
    List<String> emails = users.map(User::email).toList();
}
```

See [Batch Processing and Streaming](batch-streaming.md) for bulk operations and advanced streaming patterns.

## SQL Templates

When the query builder does not cover your use case (for example, CTEs, window functions, or database-specific syntax), SQL Templates give you full control over the SQL while retaining type safety and parameterized values.

[Kotlin]
```kotlin
val users = orm.query {
    """SELECT ${User::class}
       FROM ${User::class}
       WHERE ${User_.city} = $city
       ORDER BY ${User_.name}"""
}.resultList()
```

With the [Storm compiler plugin](string-templates.md), interpolated expressions are automatically processed by the template engine: entity types expand to column lists, metamodel fields resolve to column names, and values become parameterized placeholders.

[Java]
```java
List<User> users = orm.query(RAW."""
    SELECT \{User.class}
    FROM \{User.class}
    WHERE \{User_.city} = \{city}
    ORDER BY \{User_.name}""")
    .getResultList(User.class);
```

Java uses String Templates (JEP 430) with the `RAW` processor. Entity types expand to column lists, metamodel fields to column names, and values to parameterized placeholders.
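The claim that interpolated values never reach the SQL string directly can be illustrated with a tiny standalone sketch. This is not Storm's actual template engine, just the general mechanism: each interpolated value becomes a `?` placeholder plus an entry in an ordered bind-parameter list.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal illustration of value parameterization (not Storm's engine):
// values are never concatenated into the SQL text; each becomes a '?'
// placeholder plus an entry in the ordered bind-parameter list.
public class ParameterizedSql {
    record Query(String sql, List<Object> params) {}

    // fragments: the literal SQL pieces; values: the interpolated values
    static Query bind(List<String> fragments, List<Object> values) {
        StringBuilder sql = new StringBuilder();
        List<Object> params = new ArrayList<>();
        for (int i = 0; i < fragments.size(); i++) {
            sql.append(fragments.get(i));
            if (i < values.size()) {
                sql.append("?");           // placeholder instead of the raw value
                params.add(values.get(i)); // the value travels as a bind parameter
            }
        }
        return new Query(sql.toString(), params);
    }

    public static void main(String[] args) {
        String email = "alice'; DROP TABLE users; --"; // hostile input stays inert
        Query q = bind(
            List.of("SELECT id, email FROM users WHERE email = ", ""),
            List.of(email));
        System.out.println(q.sql());    // SELECT id, email FROM users WHERE email = ?
        System.out.println(q.params()); // [alice'; DROP TABLE users; --]
    }
}
```

Because the hostile string only ever travels as a bind parameter, it can never change the shape of the statement, which is why interpolation in Storm templates is SQL-injection safe by design.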
See [SQL Templates](sql-templates.md) for the full template reference.

## Summary

Storm provides multiple query styles that you can mix freely:

| Style | Best for |
|-------|----------|
| Predicate methods (`find`, `findAll`) | Simple single-entity lookups |
| Custom repositories | Reusable domain-specific queries |
| Query builder | Ordering, pagination, joins, aggregation |
| SQL Templates | Complex SQL, CTEs, window functions |

Start with the simplest approach that fits your query. Move to a more powerful style only when needed.

## Next Steps

- [Queries](queries.md) -- full query reference
- [Repositories](repositories.md) -- repository pattern and Spring integration
- [Entities](entities.md) -- annotations, nullability, and naming conventions
- [Relationships](relationships.md) -- one-to-one, many-to-one, many-to-many
- [Metamodel](metamodel.md) -- compile-time type-safe field references

========================================
## Source: entities.md
========================================

# Entities

Storm entities are simple data classes that map to database tables. By default, Storm applies sensible naming conventions to map entity fields to database columns automatically.

---

## Defining Entities

[Kotlin]
Use Kotlin data classes with the `Entity` interface:

```kotlin
data class City(
    @PK val id: Int = 0,
    val name: String,
    val population: Long
) : Entity<Int>

data class User(
    @PK val id: Int = 0,
    val email: String,
    val birthDate: LocalDate,
    val street: String,
    val postalCode: String?,
    @FK val city: City
) : Entity<Int>
```

[Java]
Use Java records with the `Entity` interface:

```java
record City(@PK Integer id,
            String name,
            long population
) implements Entity<Integer> {}

record User(@PK Integer id,
            String email,
            LocalDate birthDate,
            String street,
            String postalCode,
            @FK City city
) implements Entity<Integer> {}
```

---

## Entity Interface

Implementing the `Entity` interface is optional but required for using `EntityRepository` with built-in CRUD operations.
The type parameter specifies the primary key type. Without this interface, you can still use Storm's SQL template features and query builder, but you lose the convenience methods like `findById`, `insert`, `update`, and `remove`. If you only need read access, consider using `Projection` instead (see [Projections](projections.md)).

Storm also supports polymorphic entity hierarchies using sealed interfaces. A sealed interface extending `Entity` can define multiple record subtypes, enabling Single-Table or Joined Table inheritance with compile-time exhaustive pattern matching. See [Polymorphism](polymorphism.md) for details.

---

## Nullability

[Kotlin]
Kotlin's type system maps directly to Storm's null handling. A non-nullable field produces an `INNER JOIN` for foreign keys and a `NOT NULL` expectation for columns. A nullable field produces a `LEFT JOIN` for foreign keys and allows `NULL` values from the database. This means your entity definition fully describes the expected schema constraints.

Use nullable types (`?`) to indicate nullable fields:

```kotlin
data class User(
    @PK val id: Int = 0,
    val email: String,        // Non-nullable
    val birthDate: LocalDate, // Non-nullable
    val postalCode: String?,  // Nullable
    @FK val city: City?       // Nullable (results in LEFT JOIN)
) : Entity<Int>
```

[Java]
In Java, record components are nullable by default. Use `@Nonnull` to mark fields that must always have a value. Primitive types (`int`, `long`, etc.) are inherently non-nullable. As with Kotlin, nullability determines JOIN behavior: a non-nullable `@FK` field produces an `INNER JOIN`, while a `@Nullable` one produces a `LEFT JOIN`.
```java
record User(@PK Integer id,
            @Nonnull String email,        // Non-nullable
            @Nonnull LocalDate birthDate, // Non-nullable
            String postalCode,            // Nullable (default)
            @Nullable @FK City city       // Nullable (results in LEFT JOIN)
) implements Entity<Integer> {}
```

---

## Primary Key Generation

The `@PK` annotation supports a `generation` parameter that controls how primary key values are generated:

| Strategy | Description |
|----------|-------------|
| `IDENTITY` | Database generates the key using an identity/auto-increment column (default) |
| `SEQUENCE` | Database generates the key using a named sequence |
| `NONE` | No generation; the caller must provide the key value |

[Kotlin]
**IDENTITY (default):**

```kotlin
data class User(
    @PK val id: Int = 0, // Database generates via auto-increment
    val name: String
) : Entity<Int>
```

When inserting, Storm omits the PK column and retrieves the generated value:

```kotlin
val user = User(name = "Alice")
val inserted = orm.insert(user) // Returns User with generated id
```

**SEQUENCE:**

```kotlin
data class Order(
    @PK(generation = SEQUENCE, sequence = "order_seq") val id: Long = 0,
    val total: BigDecimal
) : Entity<Long>
```

Storm fetches the next value from the sequence before inserting.
**NONE:**

```kotlin
data class Country(
    @PK(generation = NONE) val code: String, // Caller provides the value
    val name: String
) : Entity<String>
```

Use `NONE` when:

- The key is a natural key (like country codes or UUIDs)
- The key comes from an external source
- The primary key is also a foreign key (see [Primary Key as Foreign Key](relationships.md#primary-key-as-foreign-key))

[Java]
**IDENTITY (default):**

```java
record User(@PK Integer id, // Database generates via auto-increment
            @Nonnull String name
) implements Entity<Integer> {}
```

When inserting, Storm omits the PK column and retrieves the generated value:

```java
var user = new User(null, "Alice");
var inserted = orm.entity(User.class).insert(user); // Returns User with generated id
```

**SEQUENCE:**

```java
record Order(@PK(generation = SEQUENCE, sequence = "order_seq") Long id,
             @Nonnull BigDecimal total
) implements Entity<Long> {}
```

Storm fetches the next value from the sequence before inserting.

**NONE:**

```java
record Country(@PK(generation = NONE) String code, // Caller provides the value
               @Nonnull String name
) implements Entity<String> {}
```

Use `NONE` when:

- The key is a natural key (like country codes or UUIDs)
- The key comes from an external source
- The primary key is also a foreign key (see [Primary Key as Foreign Key](relationships.md#primary-key-as-foreign-key))

---

## Composite Primary Keys

For join tables or entities whose identity is defined by a combination of columns, wrap the key fields in a separate data class and annotate it with `@PK`. Storm treats all fields in the composite key class as part of the primary key.
[Kotlin]
```kotlin
data class UserRolePk(
    val userId: Int,
    val roleId: Int
)

data class UserRole(
    @PK val userRolePk: UserRolePk,
    @FK val user: User,
    @FK val role: Role
) : Entity<UserRolePk>
```

[Java]
```java
record UserRolePk(int userId, int roleId) {}

record UserRole(@PK UserRolePk userRolePk,
                @Nonnull @FK User user,
                @Nonnull @FK Role role
) implements Entity<UserRolePk> {}
```

---

## Foreign Keys

The `@FK` annotation marks a field as a foreign key reference to another table-backed type (entity, projection, or data class with a `@PK`). Storm uses these annotations to automatically generate JOINs when querying and to derive column names (by default, appending `_id` to the field name).

[Kotlin]
```kotlin
data class User(
    @PK val id: Int = 0,
    val email: String,
    @FK val city: City // Always loaded via INNER JOIN
) : Entity<Int>
```

[Java]
```java
record User(@PK Integer id,
            String email,
            @FK City city // Always loaded via INNER JOIN
) implements Entity<Integer> {}
```

> **Tip:** Use the full entity type (e.g., `@FK val city: City`) when you always want the related entity loaded. Use `Ref` (e.g., `@FK val city: Ref<City>`) when you only sometimes need the related entity, when the relationship is optional, or to prevent circular dependencies. See [Refs](refs.md) for details.

---

## Unique Keys

Use `@UK` on fields that have a unique constraint in the database. The `@PK` annotation implies `@UK`, so primary key fields are automatically unique.

Annotating a field with `@UK` tells Storm that the column contains unique values, which enables several framework features:

1. **Type-safe lookups.** `findBy(Key, value)` and `getBy(Key, value)` return a single result without requiring a predicate. The metamodel processor generates `Metamodel.Key` instances for `@UK` fields. See [Metamodel](metamodel.md#unique-keys-uk-and-metamodelkey) for details.
2. **Scrolling.** `@UK` fields can serve as cursor columns for `scroll(Scrollable)`. Because the values are unique, the cursor position is always unambiguous.
   See [Scrolling](pagination-and-scrolling.md#scrolling).

3. **Schema validation.** When [schema validation](validation.md) is enabled, Storm checks that the database actually has a matching unique constraint for each `@UK` field and reports a warning if it is missing.

[Kotlin]
```kotlin
data class User(
    @PK val id: Int = 0,
    @UK val email: String,
    val name: String
) : Entity<Int>
```

[Java]
```java
record User(@PK Integer id,
            @UK String email,
            String name
) implements Entity<Integer> {}
```

### Compound Unique Keys

For compound unique constraints that need a metamodel key (e.g., for keyset pagination or type-safe lookups), use an inline record annotated with `@UK`. When the compound key columns overlap with other fields on the entity, use `@Persist(insertable = false, updatable = false)` to prevent duplicate persistence:

[Kotlin]
```kotlin
data class UserEmailUk(val userId: Int, val email: String)

data class SomeEntity(
    @PK val id: Int = 0,
    @FK val user: User,
    val email: String,
    @UK @Persist(insertable = false, updatable = false)
    val uniqueKey: UserEmailUk
) : Entity<Int>
```

[Java]
```java
record UserEmailUk(int userId, String email) {}

record SomeEntity(@PK Integer id,
                  @Nonnull @FK User user,
                  @Nonnull String email,
                  @UK @Persist(insertable = false, updatable = false)
                  UserEmailUk uniqueKey
) implements Entity<Integer> {}
```

Compound unique constraints that do not require a metamodel key do not need to be modeled in the entity. Schema validation does not warn about unmodeled compound constraints.

Use `@UK(constraint = false)` when the unique constraint does not exist in the database, for example when uniqueness is enforced at the application level.

When a column is not annotated with `@UK` but becomes unique in a specific query context (for example, a GROUP BY column produces unique values in the result set), wrap the metamodel with `.key()` (Kotlin) or `Metamodel.key()` (Java) to indicate it can serve as a scrolling cursor.
See [Manual Key Wrapping](metamodel.md#manual-key-wrapping) for details.

---

## Embedded Components

Embedded components group related fields into a reusable data class without creating a separate database table. The component's fields are stored as columns in the parent entity's table. This is useful for value objects like addresses, coordinates, or monetary amounts that appear in multiple entities.

[Kotlin]
Use data classes for embedded components:

```kotlin
data class Address(
    val street: String? = null,
    @FK val city: City? = null
)

data class Owner(
    @PK val id: Int = 0,
    val firstName: String,
    val lastName: String,
    val address: Address,
    val telephone: String?
) : Entity<Int>
```

[Java]
Use records for embedded components:

```java
record Address(String street, @FK City city) {}

record Owner(@PK Integer id,
             @Nonnull String firstName,
             @Nonnull String lastName,
             @Nonnull Address address,
             @Nullable String telephone
) implements Entity<Integer> {}
```

### `@Persist` Propagation on Embedded Components

When `@Persist` is placed on an embedded component field, it propagates to all child fields within that component. This is useful when the embedded component's columns overlap with other fields on the entity and should not be persisted separately. Child fields can override the inherited `@Persist` with their own annotation.

[Kotlin]
```kotlin
data class OwnerCityKey(val ownerId: Int, val cityId: Int)

data class Pet(
    @PK val id: Int = 0,
    val name: String,
    @FK val owner: Owner,
    @FK val city: City,
    @Persist(insertable = false, updatable = false)
    val ownerCityKey: OwnerCityKey
) : Entity<Int>
```

In this example, the `owner` and `city` foreign keys define the actual persisted columns. The `ownerCityKey` inline record maps to the same underlying columns but is excluded from INSERT and UPDATE statements because its child fields inherit `@Persist(insertable = false, updatable = false)` from the parent field.
[Java]
```java
record OwnerCityKey(int ownerId, int cityId) {}

record Pet(@PK Integer id,
           @Nonnull String name,
           @Nonnull @FK Owner owner,
           @Nonnull @FK City city,
           @Persist(insertable = false, updatable = false)
           OwnerCityKey ownerCityKey
) implements Entity<Integer> {}
```

In this example, the `owner` and `city` foreign keys define the actual persisted columns. The `ownerCityKey` inline record maps to the same underlying columns but is excluded from INSERT and UPDATE statements because its child fields inherit `@Persist(insertable = false, updatable = false)` from the parent field.

---

## Enumerations

Storm persists enum values as their `name()` string by default, which is readable and resilient to reordering. If storage efficiency is a priority or your schema uses integer columns for enums, you can switch to ordinal storage with `@DbEnum(ORDINAL)`. Be aware that ordinal storage is sensitive to the order of enum constants: adding or reordering values will break existing data.

[Kotlin]
Enums are stored by their name by default:

```kotlin
enum class RoleType { USER, ADMIN }

data class Role(
    @PK val id: Int = 0,
    val name: String,
    val type: RoleType // Stored as "USER" or "ADMIN"
) : Entity<Int>
```

To store by ordinal:

```kotlin
data class Role(
    @PK val id: Int = 0,
    val name: String,
    @DbEnum(ORDINAL) val type: RoleType // Stored as 0 or 1
) : Entity<Int>
```

[Java]
Enums are stored by their name by default:

```java
enum RoleType { USER, ADMIN }

record Role(@PK Integer id,
            @Nonnull String name,
            @Nonnull RoleType type // Stored as "USER" or "ADMIN"
) implements Entity<Integer> {}
```

To store by ordinal:

```java
record Role(@PK Integer id,
            @Nonnull String name,
            @Nonnull @DbEnum(ORDINAL) RoleType type // Stored as 0 or 1
) implements Entity<Integer> {}
```

---

## Converters

When an entity field uses a type that is not directly supported by the JDBC driver, use `@Convert` to specify a converter that transforms between your domain type and a JDBC-compatible column type.
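Conceptually, a converter is just a pair of functions between the domain type and a JDBC-supported type. The `Converter` interface below is a simplified stand-in to show the shape of the idea; Storm's actual contract is documented in [Converters](converters.md):

```java
import java.math.BigDecimal;

// Illustrative only: a simplified stand-in for a converter contract.
// Storm's real Converter interface is documented in converters.md.
public class MoneyConverterSketch {
    record Money(BigDecimal amount) {}

    interface Converter<T, C> {
        C toColumn(T value);   // domain type -> JDBC-compatible type
        T fromColumn(C value); // JDBC-compatible type -> domain type
    }

    static class MoneyConverter implements Converter<Money, BigDecimal> {
        public BigDecimal toColumn(Money value) { return value.amount(); }
        public Money fromColumn(BigDecimal value) { return new Money(value); }
    }

    public static void main(String[] args) {
        var converter = new MoneyConverter();
        BigDecimal column = converter.toColumn(new Money(new BigDecimal("19.99")));
        Money roundTrip = converter.fromColumn(column);
        System.out.println(roundTrip); // value round-trips without loss
    }
}
```

Because both directions are pure functions, a converter can be unit-tested in isolation by asserting that `fromColumn(toColumn(x))` equals `x`.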
Storm also supports auto-apply converters via `@DefaultConverter`, which automatically apply to all matching field types without requiring explicit annotations.

[Kotlin]
```kotlin
data class Money(val amount: BigDecimal)

@DbTable("product")
data class Product(
    @PK val id: Int = 0,
    val name: String,
    @Convert(converter = MoneyConverter::class) val price: Money
) : Entity<Int>
```

[Java]
```java
record Money(BigDecimal amount) {}

@DbTable("product")
record Product(@PK Integer id,
               @Nonnull String name,
               @Convert(converter = MoneyConverter.class) Money price
) implements Entity<Integer> {}
```

See [Converters](converters.md) for the full `Converter` interface, auto-apply with `@DefaultConverter`, resolution order, and practical examples.

---

## Versioning (Optimistic Locking)

Optimistic locking prevents lost updates when multiple users or threads modify the same record concurrently. Storm checks the version value during updates: if another transaction has already changed the row, the update fails with an exception rather than silently overwriting the other change. You can use either an integer counter or a timestamp.

[Kotlin]
Use `@Version` for optimistic locking:

```kotlin
data class Owner(
    @PK val id: Int = 0,
    val firstName: String,
    val lastName: String,
    @Version val version: Int
) : Entity<Int>
```

Timestamps are also supported:

```kotlin
data class Visit(
    @PK val id: Int = 0,
    val visitDate: LocalDate,
    val description: String? = null,
    @FK val pet: Pet,
    @Version val timestamp: Instant?
) : Entity<Int>
```

[Java]
Use `@Version` for optimistic locking:

```java
record Owner(@PK Integer id,
             @Nonnull String firstName,
             @Nonnull String lastName,
             @Version int version
) implements Entity<Integer> {}
```

Timestamps are also supported:

```java
record Visit(@PK Integer id,
             @Nonnull LocalDate visitDate,
             @Nullable String description,
             @Nonnull @FK Pet pet,
             @Version Instant timestamp
) implements Entity<Integer> {}
```

---

## Non-Updatable Fields

Some fields should be set once at creation and never changed by the application, such as creation timestamps, entity types, or references that define an object's identity. Marking a field with `@Persist(updatable = false)` tells Storm to include it in INSERT statements but exclude it from UPDATE statements.

[Kotlin]
Use `@Persist(updatable = false)` for fields that should only be set on insert:

```kotlin
data class Pet(
    @PK val id: Int = 0,
    val name: String,
    @Persist(updatable = false) val birthDate: LocalDate,
    @FK @Persist(updatable = false) val type: PetType,
    @FK val owner: Owner? = null
) : Entity<Int>
```

[Java]
Use `@Persist(updatable = false)` for fields that should only be set on insert:

```java
record Pet(@PK Integer id,
           @Nonnull String name,
           @Nonnull @Persist(updatable = false) LocalDate birthDate,
           @Nonnull @FK @Persist(updatable = false) PetType type,
           @Nullable @FK Owner owner
) implements Entity<Integer> {}
```

---

## Modifying Entities

Since Storm entities are immutable, updating a field means creating a new instance with the changed value. Kotlin data classes have a built-in `copy()` method for this. Java records do not provide an equivalent, but Lombok's `@Builder(toBuilder = true)` annotation generates a builder that copies all fields from an existing instance:

```java
@Builder(toBuilder = true)
record User(@PK Integer id,
            @Nonnull String email,
            @Nonnull String name,
            @FK City city
) implements Entity<Integer> {}
```

This enables `user.toBuilder().email("new@example.com").build()` to create a modified copy.
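If you prefer not to use Lombok, a hand-written "wither" method achieves the same immutable-copy pattern with plain Java. The `withEmail` helper below is illustrative, not a Storm API:

```java
// Plain-Java alternative to Lombok's toBuilder(): a hand-written "wither"
// that returns a modified copy while leaving the original untouched.
// The withEmail helper is illustrative, not part of Storm.
public class WitherExample {
    record User(Integer id, String email, String name) {
        User withEmail(String newEmail) {
            return new User(id, newEmail, name); // copy all fields, replace one
        }
    }

    public static void main(String[] args) {
        User alice = new User(1, "alice@example.com", "Alice");
        User updated = alice.withEmail("alice@new.example");
        System.out.println(alice.email());   // original instance is unchanged
        System.out.println(updated.email()); // copy carries the new value
    }
}
```

Withers scale poorly for records with many components, which is where the builder approach pays off; for small entities they keep the code dependency-free.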
See the [FAQ](faq.md#how-do-i-modify-a-java-record-entity) for alternative approaches and upcoming Java language features.

---

## Naming Conventions

Storm uses pluggable name resolvers to convert Kotlin/Java names to database identifiers. By default, camelCase names are converted to snake_case, and foreign key fields append `_id`.

### Default Conversion: CamelCase to Snake_Case

The default resolver converts camelCase to snake_case:

1. Convert the first character to lowercase
2. Insert an underscore before each uppercase letter and convert it to lowercase

| Field/Class | Resolved Name |
|-------------|---------------|
| `id` | `id` |
| `email` | `email` |
| `birthDate` | `birth_date` |
| `postalCode` | `postal_code` |
| `firstName` | `first_name` |
| `UserRole` | `user_role` |

For foreign keys, `_id` is appended after the conversion:

| FK Field | Resolved Column |
|----------|-----------------|
| `city` | `city_id` |
| `petType` | `pet_type_id` |
| `homeAddress` | `home_address_id` |

For details on customizing name resolution (uppercase conversion, custom resolvers, composable wrappers), see [Naming Conventions](configuration.md#naming-conventions).

### Per-Entity and Per-Field Overrides

Annotation overrides (`@DbTable`, `@DbColumn`, and the string parameters on `@PK` and `@FK`) always take precedence over configured resolvers. See [Custom Table and Column Names](#custom-table-and-column-names) for details and examples.

### Identifier Escaping

Storm automatically escapes identifiers that are SQL reserved words or contain special characters.
Force escaping with the `escape` parameter:

[Kotlin]
```kotlin
@DbTable("order", escape = true) // "order" is a reserved word
data class Order(
    @PK val id: Int = 0,
    @DbColumn("select", escape = true) val select: String // "select" is reserved
) : Entity<Int>
```

[Java]
```java
@DbTable(value = "order", escape = true) // "order" is a reserved word
record Order(@PK Integer id,
             @DbColumn(value = "select", escape = true) String select // "select" is reserved
) implements Entity<Integer> {}
```

---

## Custom Table and Column Names

When the database schema does not follow Storm's default camelCase-to-snake_case convention, use annotations to specify the exact names. `@DbTable` overrides the table name, `@DbColumn` overrides a column name, and the string parameter on `@PK` or `@FK` overrides their respective column names. These annotations take precedence over any configured name resolver.

[Kotlin]
```kotlin
@DbTable("app_users")
data class User(
    @PK("user_id") val id: Int = 0,
    @DbColumn("email_address") val email: String,
    @FK("home_city_id") val city: City
) : Entity<Int>
```

[Java]
```java
@DbTable("app_users")
record User(@PK("user_id") Integer id,
            @DbColumn("email_address") String email,
            @FK("home_city_id") City city
) implements Entity<Integer> {}
```

---

## Column Mapping

Storm automatically maps fields to columns using these conventions:

| Entity Field | Database Column |
|--------------|-----------------|
| `id` | `id` |
| `email` | `email` |
| `birthDate` | `birth_date` |
| `postalCode` | `postal_code` |
| `city` (FK) | `city_id` |

CamelCase field names are converted to snake_case column names. Foreign keys automatically append `_id` and reference the primary key of the related entity.
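The default conversion is simple enough to sketch in a few lines. This standalone helper mirrors the documented rules (lowercase the first character, prefix each further uppercase letter with an underscore, append `_id` for foreign keys); it is an illustration, not Storm's actual resolver class:

```java
// Standalone sketch of the documented naming rules; not Storm's resolver.
public class NameResolverSketch {
    // camelCase / PascalCase -> snake_case
    static String toSnakeCase(String name) {
        StringBuilder sb = new StringBuilder();
        for (char c : name.toCharArray()) {
            if (Character.isUpperCase(c)) {
                if (sb.length() > 0) sb.append('_'); // underscore before each uppercase
                sb.append(Character.toLowerCase(c));
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    // Foreign key columns get "_id" appended after the conversion
    static String toFkColumn(String fieldName) {
        return toSnakeCase(fieldName) + "_id";
    }

    public static void main(String[] args) {
        System.out.println(toSnakeCase("birthDate")); // birth_date
        System.out.println(toSnakeCase("UserRole"));  // user_role
        System.out.println(toFkColumn("petType"));    // pet_type_id
    }
}
```

Running the sketch against the tables above reproduces every row, which is a handy sanity check when predicting what column name Storm will derive for a new field.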
---

## Join Behavior

Nullability affects how relationships are loaded:

- **Non-nullable FK:** INNER JOIN (referenced entity must exist)
- **Nullable FK:** LEFT JOIN (referenced entity may be null)

---

## Suppressing Schema Validation

To suppress constraint-specific warnings (missing primary key, foreign key, or unique constraint), use the `constraint` attribute on `@PK`, `@FK`, or `@UK`. This is more targeted than `@DbIgnore` because it only suppresses the constraint check while preserving all other validation (column existence, type compatibility, nullability). See [Constraint Validation](validation.md#constraint-validation) for details and examples.

Use `@DbIgnore` to suppress [schema validation](configuration.md#schema-validation) for an entity or a specific field entirely. This is useful for legacy tables, columns handled by [custom converters](converters.md), or known type mismatches that are safe at runtime.

[Kotlin]
```kotlin
// Suppress all schema validation for a legacy entity.
@DbIgnore
data class LegacyUser(
    @PK val id: Int = 0,
    val name: String
) : Entity<Int>

// Suppress schema validation for a specific field.
data class User(
    @PK val id: Int = 0,
    val name: String,
    @DbIgnore("DB uses FLOAT, but column only stores whole numbers")
    val age: Int
) : Entity<Int>
```

[Java]
```java
// Suppress all schema validation for a legacy entity.
@DbIgnore
record LegacyUser(@PK Integer id,
                  @Nonnull String name
) implements Entity<Integer> {}

// Suppress schema validation for a specific field.
record User(@PK Integer id,
            @Nonnull String name,
            @DbIgnore("DB uses FLOAT, but column only stores whole numbers")
            @Nonnull Integer age
) implements Entity<Integer> {}
```

The optional `value` parameter documents why the mismatch is acceptable. When placed on an embedded component field, `@DbIgnore` suppresses validation for all columns within that component.

========================================
## Source: projections.md
========================================

# Projections

## What Are Projections?
Projections are **read-only** data structures that represent database views or complex queries defined via `@ProjectionQuery`. Like entities, they are plain Kotlin data classes or Java records with no proxies and no bytecode manipulation. Unlike entities, projections support only read operations: no insert, update, or remove.

| Entity | Projection |
|--------|------------|
| Full CRUD operations | Read-only operations |
| Represents a database table | Represents a query result |
| Primary key required | Primary key optional |
| Dirty checking supported | No dirty checking needed |

## When to Use Projections

**Database views:** Represent database views or materialized views as first-class types in your application.

**Complex reusable queries:** Use `@ProjectionQuery` to define projections backed by complex SQL involving joins, aggregations, or subqueries that you want to reuse across your application.

For simple ad-hoc queries or one-off aggregations, prefer using a plain data class. Projections are best suited for reusable, view-like structures. See [SQL Templates](sql-templates.md) for details.

---

## Defining a Projection

A projection is a data class (Kotlin) or record (Java) that implements `Projection<ID>`, where `ID` is the type of the primary key. Use `Projection` when the projection has no primary key.

### Basic Projection with Primary Key

[Kotlin]
```kotlin
data class OwnerView(
    @PK val id: Int,
    val firstName: String,
    val lastName: String,
    val telephone: String?
) : Projection<Int>
```

[Java]
```java
record OwnerView(
    @PK Integer id,
    @Nonnull String firstName,
    @Nonnull String lastName,
    @Nullable String telephone
) implements Projection<Integer> {}
```

Storm maps this projection to the `owner` table (derived from the class name) and selects only the specified columns.

### Projection Without Primary Key

When a projection doesn't need a primary key (e.g., aggregation results), use `Projection`:

[Kotlin]
```kotlin
data class VisitSummary(
    val visitDate: LocalDate,
    val description: String?,
    val petName: String
) : Projection
```

[Java]
```java
record VisitSummary(
    @Nonnull LocalDate visitDate,
    @Nullable String description,
    @Nonnull String petName
) implements Projection {}
```

### Projection with Foreign Keys

Projections can reference entities or other projections using `@FK`:

[Kotlin]
```kotlin
data class PetView(
    @PK val id: Int,
    val name: String,
    @FK val owner: OwnerView // References another projection
) : Projection<Int>
```

[Java]
```java
record PetView(@PK Integer id,
               @Nonnull String name,
               @FK OwnerView owner // References another projection
) implements Projection<Integer> {}
```

Storm automatically joins the related table and populates the nested projection.
### Projection with Custom SQL

Use `@ProjectionQuery` to define a projection backed by custom SQL:

[Kotlin]
```kotlin
@ProjectionQuery("""
    SELECT b.id, COUNT(*) AS item_count, SUM(i.price) AS total_price
    FROM basket b
    JOIN basket_item bi ON bi.basket_id = b.id
    JOIN item i ON i.id = bi.item_id
    GROUP BY b.id
""")
data class BasketSummary(
    @PK val id: Int,
    val itemCount: Int,
    val totalPrice: BigDecimal
) : Projection<Int>
```

[Java]
```java
@ProjectionQuery("""
    SELECT b.id, COUNT(*) AS item_count, SUM(i.price) AS total_price
    FROM basket b
    JOIN basket_item bi ON bi.basket_id = b.id
    JOIN item i ON i.id = bi.item_id
    GROUP BY b.id
""")
record BasketSummary(@PK Integer id,
                     int itemCount,
                     BigDecimal totalPrice
) implements Projection<Integer> {}
```

This is useful for aggregations, complex joins, or mapping database views.

---

## Querying Projections

### Getting a ProjectionRepository

Obtain a `ProjectionRepository` from the ORM template. This is the read-only counterpart to `EntityRepository`. It provides find, select, count, and existence-check operations, but no insert, update, or remove.

[Kotlin]
```kotlin
val ownerViews = orm.projection(OwnerView::class)
```

[Java]
```java
ProjectionRepository<OwnerView, Integer> ownerViews = orm.projection(OwnerView.class);
```

### Basic Operations

The `ProjectionRepository` supports the same query patterns as `EntityRepository`, minus write operations. Results are plain data objects with no proxy behavior or session attachment.
[Kotlin]
```kotlin
// Count all
val count = ownerViews.count()

// Find by primary key (returns null if not found)
val owner = ownerViews.findById(1)

// Get by primary key (throws if not found)
val owner = ownerViews.getById(1)

// Check existence
val exists = ownerViews.existsById(1)

// Fetch all as a list
val allOwners = ownerViews.findAll()

// Fetch all as a lazy stream
ownerViews.selectAll().forEach { owner ->
    println(owner.firstName)
}
```

[Java]
```java
// Count all
long count = ownerViews.count();

// Find by primary key
Optional<OwnerView> owner = ownerViews.findById(1);

// Get by primary key (throws if not found)
OwnerView owner = ownerViews.getById(1);

// Check existence
boolean exists = ownerViews.existsById(1);

// Fetch all as a list
List<OwnerView> allOwners = ownerViews.findAll();

// Fetch all as a stream (must close)
try (Stream<OwnerView> owners = ownerViews.selectAll()) {
    owners.forEach(o -> System.out.println(o.firstName()));
}
```

### Query Builder

Use the `select()` method for type-safe queries with the generated metamodel:

[Kotlin]
```kotlin
// Filter by field value
val owners = ownerViews.select()
    .where(OwnerView_.lastName, EQUALS, "Smith")
    .getResultList()

// Filter with comparison operators
val recentVisits = orm.projection(VisitView::class).select()
    .where(VisitView_.visitDate, GREATER_THAN, LocalDate.of(2024, 1, 1))
    .getResultList()

// Filter by nested foreign key
val ownerPets = orm.projection(PetView::class).select()
    .where(PetView_.owner.id, EQUALS, 1)
    .getResultList()

// Count with filter
val count = ownerViews.selectCount()
    .where(OwnerView_.lastName, EQUALS, "Smith")
    .getSingleResult()
```

[Java]
```java
// Filter by field value
List<OwnerView> owners = ownerViews.select()
    .where(OwnerView_.lastName, EQUALS, "Smith")
    .getResultList();

// Filter with comparison operators
List<VisitView> recentVisits = orm.projection(VisitView.class).select()
    .where(VisitView_.visitDate, GREATER_THAN, LocalDate.of(2024, 1, 1))
    .getResultList();

// Filter by nested foreign key
List<PetView> ownerPets = orm.projection(PetView.class).select()
    .where(PetView_.owner.id, EQUALS, 1)
    .getResultList();
```

### Batch Operations

Efficiently fetch multiple projections by ID:

[Kotlin]
```kotlin
// Fetch multiple by IDs
val ids = listOf(1, 2, 3)
val owners = ownerViews.findAllById(ids)

// Flow-based batch fetching (lazy evaluation)
val idFlow = flowOf(1, 2, 3, 4, 5)
ownerViews.selectById(idFlow).collect { owner ->
    // Process each owner
}
```

[Java]
```java
// Fetch multiple by IDs
List<Integer> ids = List.of(1, 2, 3);
List<OwnerView> owners = ownerViews.findAllById(ids);

// Stream-based batch fetching (must close)
try (Stream<OwnerView> stream = ownerViews.selectById(ids.stream())) {
    stream.forEach(owner -> {
        // Process each owner
    });
}
```

---

## Projections vs Entities: Choosing the Right Tool

```
┌─────────────────────────────────────────────────────────────────────┐
│                          When to Use What                           │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│   Use Entity when you need to:                                      │
│   • Create, update, or delete records                               │
│   • Work with the full row including all columns                    │
│   • Leverage dirty checking and optimistic locking                  │
│   • Maintain referential integrity through the ORM                  │
│                                                                     │
│   Use Projection when you need to:                                  │
│   • Map database views or materialized views                        │
│   • Define reusable complex queries via @ProjectionQuery            │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```

### Example: Same Table, Different Views

[Kotlin]
```kotlin
// Full entity for writes
data class Owner(
    @PK val id: Int = 0,
    val firstName: String,
    val lastName: String,
    val address: String,
    val city: String,
    val telephone: String?,
    @Version val version: Int = 0
) : Entity<Int>

// Lightweight projection for list views
data class OwnerListItem(
    @PK val id: Int,
    val firstName: String,
    val lastName: String
) : Projection<Int>

// Detailed projection for detail views
data class OwnerDetail(
    @PK val id: Int,
    val firstName: String,
    val lastName: String,
    val address: String,
    val city: String,
    val telephone: String?
) : Projection<Int>
```

[Java]
```java
// Full entity for writes
record Owner(@PK Integer id,
             @Nonnull String firstName,
             @Nonnull String lastName,
             @Nonnull String address,
             @Nonnull String city,
             @Nullable String telephone,
             @Version int version
) implements Entity<Integer> {}

// Lightweight projection for list views
record OwnerListItem(@PK Integer id,
                     @Nonnull String firstName,
                     @Nonnull String lastName
) implements Projection<Integer> {}

// Detailed projection for detail views
record OwnerDetail(@PK Integer id,
                   @Nonnull String firstName,
                   @Nonnull String lastName,
                   @Nonnull String address,
                   @Nonnull String city,
                   @Nullable String telephone
) implements Projection<Integer> {}
```

Use `Owner` when creating or updating owners. Use `OwnerListItem` for displaying a list (fewer columns, faster queries). Use `OwnerDetail` for read-only detail views.

---

## Working with Refs

When a projection references another entity or projection but you do not need the full related object in every query, use `Ref` to store only the foreign key value. This avoids the cost of an additional JOIN when you only need the key. You can resolve the reference later by fetching the full object on demand.

```kotlin
data class PetListItem(
    @PK val id: Int,
    val name: String,
    @FK val owner: Ref<Owner>  // Lightweight reference
) : Projection<Int>
```

The `Ref` contains only the foreign key value. You can resolve it later if needed:

```kotlin
val pet = orm.projection(PetListItem::class).getById(1)

// Access the foreign key without loading the owner
val ownerId = pet.owner.id()

// Load the full owner when needed
val owner = orm.projection(OwnerView::class).getById(ownerId)
```

---

## Mapping to Custom Tables

By default, Storm derives the table name from the projection class name. Override this with `@DbTable`:

```kotlin
@DbTable("owner")
data class OwnerSummary(
    @PK val id: Int,
    @DbColumn("first_name") val name: String
) : Projection<Int>
```

Use `@DbColumn` to map fields to columns with different names.
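The mechanics behind such a lightweight reference can be pictured with a small, self-contained sketch. The `SimpleRef` class and its loader function below are illustrative assumptions, not Storm's actual `Ref` implementation: the reference stores only the key, and the full object is fetched explicitly, on demand.

```java
import java.util.function.IntFunction;

// Illustrative stand-in for a Ref-style reference: holds only the
// foreign key, resolving the full object via a supplied loader.
class SimpleRef<T> {
    private final int id;
    private final IntFunction<T> loader;

    SimpleRef(int id, IntFunction<T> loader) {
        this.id = id;
        this.loader = loader;
    }

    int id() {
        return id; // cheap: no database access, no JOIN needed
    }

    T fetch() {
        return loader.apply(id); // explicit, on-demand load
    }
}
```

The key point is that constructing the reference never touches the related table; only `fetch()` does, and only when the caller decides it is worth the extra query.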
---

## ProjectionRepository Methods

| Method | Description |
|--------|-------------|
| `count()` | Count all projections |
| `findById(id)` | Find by primary key, returns null if not found |
| `getById(id)` | Get by primary key, throws if not found |
| `existsById(id)` | Check if projection exists |
| `findAll()` | Fetch all as a list |
| `findAllById(ids)` | Fetch multiple by IDs |
| `selectAll()` | Lazy Flow of all projections |
| `selectById(ids)` | Lazy Flow by IDs |
| `select()` | Query builder for filtering |
| `selectCount()` | Query builder for counting |

Note: Unlike `EntityRepository`, there are no `insert`, `update`, `remove`, or `upsert` methods. Projections are read-only.

---

## Best Practices

### 1. Keep Projections Focused

Design projections for specific use cases rather than trying to reuse one projection everywhere:

```kotlin
// Good: Purpose-built projections
data class OwnerDropdownItem(
    @PK val id: Int,
    val displayName: String  // Computed: firstName + lastName
) : Projection<Int>

data class OwnerSearchResult(
    @PK val id: Int,
    val firstName: String,
    val lastName: String,
    val city: String
) : Projection<Int>

// Avoid: One projection trying to serve all purposes
data class OwnerProjection(
    @PK val id: Int,
    val firstName: String,
    val lastName: String,
    val address: String?,   // Sometimes null, sometimes not
    val city: String?,
    val telephone: String?,
    val petCount: Int?      // Only populated in some queries
) : Projection<Int>
```

### 2. Use @ProjectionQuery for Complex Queries

When your projection involves joins, aggregations, or subqueries, define the SQL explicitly:

```kotlin
@ProjectionQuery("""
    SELECT o.id, o.first_name, o.last_name, COUNT(p.id) AS pet_count
    FROM owner o
    LEFT JOIN pet p ON p.owner_id = o.id
    GROUP BY o.id, o.first_name, o.last_name
""")
data class OwnerWithPetCount(
    @PK val id: Int,
    val firstName: String,
    val lastName: String,
    val petCount: Int
) : Projection<Int>
```

### 3. Prefer Projections for Read-Heavy Paths

In read-heavy scenarios (dashboards, lists, search results), projections reduce database load:

```kotlin
// Instead of loading full entities
val owners = orm.entity(Owner::class).findAll()  // Loads all columns

// Load only what you need
val owners = orm.projection(OwnerListItem::class).findAll()  // Loads 3 columns
```

### 4. Use Void for Keyless Results

Aggregations and analytics often don't have a natural primary key:

```kotlin
@ProjectionQuery("""
    SELECT DATE_TRUNC('month', visit_date) AS month,
           COUNT(*) AS visit_count,
           COUNT(DISTINCT pet_id) AS unique_pets
    FROM visit
    GROUP BY DATE_TRUNC('month', visit_date)
""")
data class MonthlyVisitStats(
    val month: LocalDate,
    val visitCount: Int,
    val uniquePets: Int
) : Projection<Void>  // No primary key
```

### 5. Combine with Entity Graphs

For complex object graphs, you can mix projections with entity relationships:

```kotlin
data class PetWithOwnerSummary(
    @PK val id: Int,
    val name: String,
    val birthDate: LocalDate?,
    @FK val owner: OwnerListItem  // Projection, not full entity
) : Projection<Int>
```

This fetches pet details with a lightweight owner summary in a single query.

========================================
## Source: relationships.md
========================================

# Relationships

Automatic relationship loading is a core part of Storm's design. Your data model is fully captured by immutable entity classes. When you define a foreign key, Storm automatically joins the related entity and returns complete, fully populated records in a single query.

This design enables:

- **Single-query loading.** No N+1 problems. One query returns the complete entity graph.
- **Type-safe path expressions.** Filter on joined fields with full IDE support, including auto-completion across relationships: `User_.city.name eq "Sunnyvale"`
- **Concise syntax.** No manual joins, no fetch configuration, no lazy loading surprises.
- **Predictable behavior.** What you define is what you get.
The entity structure *is* the query structure.

```kotlin
// Define the relationships once
data class Country(
    @PK val code: String,
    val name: String
) : Entity<String>

data class City(
    @PK val id: Int = 0,
    val name: String,
    @FK val country: Country
) : Entity<Int>

data class User(
    @PK val id: Int = 0,
    val name: String,
    @FK val city: City  // Auto-joins City, Country, and all nested relationships
) : Entity<Int>

// Query with type-safe access to nested fields throughout the entire entity graph
val users = orm.findAll(User_.city.country.code eq "US")

// Result: fully populated User with City and Country included
users.forEach {
    println("${it.name} lives in ${it.city.name}, ${it.city.country.name}")
}
```

All relationship types are supported through the `@FK` annotation.

---

## One-to-One / Many-to-One

The most common relationship type. A foreign key field on one entity points to the primary key of another. Storm automatically generates a JOIN when querying and populates the referenced entity in the result.

[Kotlin]
Use `@FK` to reference another entity:

```kotlin
data class City(
    @PK val id: Int = 0,
    val name: String,
    val population: Long
) : Entity<Int>

data class User(
    @PK val id: Int = 0,
    val email: String,
    @FK val city: City  // Many users belong to one city
) : Entity<Int>
```

When you query a `User`, the related `City` is automatically loaded:

```kotlin
val user = orm.find(User_.id eq userId)
println(user?.city?.name)  // City is already loaded
```

[Java]
Use `@FK` to reference another entity:

```java
record City(@PK Integer id,
            String name,
            long population
) implements Entity<Integer> {}

record User(@PK Integer id,
            String email,
            @FK City city  // Many users belong to one city
) implements Entity<Integer> {}
```

When you query a `User`, the related `City` is automatically loaded:

```java
Optional<User> user = orm.entity(User.class)
    .select()
    .where(User_.id, EQUALS, userId)
    .getOptionalResult();
user.ifPresent(u -> System.out.println(u.city().name()));  // City is already loaded
```

---

## Nullable Relationships
[Kotlin]
When a foreign key can be null (the referenced entity is optional), Storm uses a LEFT JOIN instead of an INNER JOIN. This ensures that parent rows are still returned even when the referenced entity does not exist.

```kotlin
data class User(
    @PK val id: Int = 0,
    val email: String,
    @FK val city: City?  // Nullable = LEFT JOIN
) : Entity<Int>
```

[Java]
In Java, use `@Nullable` on foreign key fields to indicate that the referenced entity is optional. Storm switches from INNER JOIN to LEFT JOIN for nullable foreign keys.

```java
record User(@PK Integer id,
            String email,
            @Nullable @FK City city  // Nullable = LEFT JOIN
) implements Entity<Integer> {}
```

---

## One-to-Many

Storm does not store collections on the "one" side of a relationship. Instead, query the "many" side and filter by the parent entity. This keeps entities stateless and avoids the lazy-loading pitfalls found in traditional ORMs.

[Kotlin]
```kotlin
// Find all users in a city
val usersInCity: List<User> = orm.findAll(User_.city eq city)
```

[Java]
```java
// Find all users in a city
List<User> usersInCity = orm.entity(User.class)
    .select()
    .where(User_.city, EQUALS, city)
    .getResultList();
```

---

## Many-to-Many

Use a join entity with composite primary key:

[Kotlin]
```kotlin
data class UserRolePk(
    val userId: Int,
    val roleId: Int
)

data class UserRole(
    @PK val userRolePk: UserRolePk,
    @FK @Persist(insertable = false, updatable = false) val user: User,
    @FK @Persist(insertable = false, updatable = false) val role: Role
) : Entity<UserRolePk>
```

The `@Persist(insertable = false, updatable = false)` annotation indicates that the FK columns overlap with the composite PK columns. The FK fields are used to load the related entities, but the column values come from the PK during insert/update operations.

Query through the join entity:

```kotlin
// Find all roles for a user
val userRoles: List<UserRole> = orm.findAll(UserRole_.user eq user)
val roles: List<Role> = userRoles.map { it.role }

// Find all users with a specific role
val userRoles: List<UserRole> = orm.findAll(UserRole_.role eq role)
val users: List<User> = userRoles.map { it.user }
```

For more control, use explicit join queries:

```kotlin
val roles: List<Role> = orm.entity(Role::class)
    .select()
    .innerJoin(UserRole::class).on(Role::class)
    .whereAny(UserRole_.user eq user)
    .resultList
```

[Java]
```java
record UserRolePk(int userId, int roleId) {}

record UserRole(@PK UserRolePk userRolePk,
                @Nonnull @FK @Persist(insertable = false, updatable = false) User user,
                @Nonnull @FK @Persist(insertable = false, updatable = false) Role role
) implements Entity<UserRolePk> {}
```

The `@Persist(insertable = false, updatable = false)` annotation indicates that the FK columns overlap with the composite PK columns. The FK fields are used to load the related entities, but the column values come from the PK during insert/update operations.
Query through the join entity:

```java
// Find all roles for a user
List<UserRole> userRoles = orm.entity(UserRole.class)
    .select()
    .where(UserRole_.user, EQUALS, user)
    .getResultList();
List<Role> roles = userRoles.stream()
    .map(UserRole::role)
    .toList();
```

For more control, use explicit join queries:

```java
List<Role> roles = orm.entity(Role.class)
    .select()
    .innerJoin(UserRole.class).on(Role.class)
    .where(UserRole_.user, EQUALS, user)
    .getResultList();
```

---

## Composite Foreign Keys

When referencing an entity with a composite primary key, Storm automatically generates multi-column join conditions:

[Kotlin]
```kotlin
// Entity with composite PK
data class UserRolePk(
    val userId: Int,
    val roleId: Int
)

data class UserRole(
    @PK val pk: UserRolePk,
    @FK val user: User,
    @FK val role: Role,
    val grantedAt: Instant
) : Entity<UserRolePk>

// Entity referencing the composite PK entity
data class AuditLog(
    @PK val id: Int = 0,
    val action: String,
    @FK val userRole: UserRole?  // References entity with composite PK
) : Entity<Int>
```

Storm generates a multi-column join condition:

```sql
LEFT JOIN user_role ur
    ON al.user_id = ur.user_id
    AND al.role_id = ur.role_id
```

**Custom column names:** Use `@DbColumn` annotations to specify custom FK column names:

```kotlin
data class AuditLog(
    @PK val id: Int = 0,
    val action: String,
    @FK @DbColumn("audit_user_id") @DbColumn("audit_role_id") val userRole: UserRole?
) : Entity<Int>
```

[Java]
```java
// Entity with composite PK
record UserRolePk(int userId, int roleId) {}

record UserRole(@PK UserRolePk pk,
                @Nonnull @FK User user,
                @Nonnull @FK Role role,
                Instant grantedAt
) implements Entity<UserRolePk> {}

// Entity referencing the composite PK entity
record AuditLog(@PK Integer id,
                String action,
                @Nullable @FK UserRole userRole  // References entity with composite PK
) implements Entity<Integer> {}
```

Storm generates a multi-column join condition:

```sql
LEFT JOIN user_role ur
    ON al.user_id = ur.user_id
    AND al.role_id = ur.role_id
```

**Custom column names:** Use `@DbColumn` annotations to specify custom FK column names:

```java
record AuditLog(@PK Integer id,
                String action,
                @Nullable @FK @DbColumn("audit_user_id") @DbColumn("audit_role_id") UserRole userRole
) implements Entity<Integer> {}
```

---

## Self-Referential Relationships

When an entity references itself (e.g., employees with managers, categories with parents), eager loading would recurse infinitely. Use `Ref` to break the cycle. `Ref` stores only the foreign key value without loading the referenced entity, so Storm stops the JOIN chain at that point.

[Kotlin]
```kotlin
data class Employee(
    @PK val id: Int = 0,
    val name: String,
    @FK val manager: Ref<Employee>?  // Self-reference with Ref
) : Entity<Int>
```

[Java]
```java
record Employee(@PK Integer id,
                String name,
                @Nullable @FK Ref<Employee> manager  // Self-reference with Ref
) implements Entity<Integer> {}
```

---

## Primary Key as Foreign Key

Sometimes a table's primary key is also a foreign key to another entity. This is common for:

- **Dependent one-to-one relationships** where a child entity cannot exist without its parent
- **Extension tables** that add optional data to an existing entity
- **Specialized subtypes** in a table-per-subtype inheritance strategy (see [Polymorphism](polymorphism.md))

Use both `@PK` and `@FK` annotations on the same field, with `generation = NONE` since the key value comes from the related entity rather than being auto-generated:

[Kotlin]
```kotlin
data class UserProfile(
    @PK(generation = NONE) @FK val user: User,  // PK is also FK to User
    val bio: String?,
    val avatarUrl: String?,
    val theme: Theme?
) : Entity<User>
```

The `generation = NONE` tells Storm that the primary key is not auto-generated; the value must be provided when inserting. This is necessary because the key comes from the related `User` entity.

**Column name resolution:** When both `@PK` and `@FK` are present, Storm resolves the column name in this order:

1. Explicit name in `@PK` (e.g., `@PK("user_profile_id")`)
2. Explicit name in `@DbColumn`
3. Foreign key naming convention (default)

For a field named `user`, the FK convention produces `user_id`. To override this, specify the name explicitly:

```kotlin
@PK("user_profile_id", generation = NONE) @FK val user: User  // Uses "user_profile_id"
```

The entity's type parameter is the related entity type (`User`), not a primitive key type. This reflects that the `UserProfile` is uniquely identified by its associated `User`.

When inserting, provide the related entity:

```kotlin
val profile = UserProfile(
    user = existingUser,
    bio = "Software developer",
    avatarUrl = null,
    theme = Theme.DARK
)
orm.insert(profile)
```

Storm extracts the primary key from the `User` entity and uses it as the value for the `user_id` column.
[Java]
```java
record UserProfile(@PK(generation = NONE) @FK User user,  // PK is also FK to User
                   @Nullable String bio,
                   @Nullable String avatarUrl,
                   @Nullable Theme theme
) implements Entity<User> {}
```

The `generation = NONE` tells Storm that the primary key is not auto-generated; the value must be provided when inserting. This is necessary because the key comes from the related `User` entity.

**Column name resolution:** When both `@PK` and `@FK` are present, Storm resolves the column name in this order:

1. Explicit name in `@PK` (e.g., `@PK("user_profile_id")`)
2. Explicit name in `@DbColumn`
3. Foreign key naming convention (default)

For a field named `user`, the FK convention produces `user_id`. To override this, specify the name explicitly:

```java
@PK(value = "user_profile_id", generation = NONE) @FK User user  // Uses "user_profile_id"
```

The entity's type parameter is the related entity type (`User`), not a primitive key type. This reflects that the `UserProfile` is uniquely identified by its associated `User`.

When inserting, provide the related entity:

```java
var profile = new UserProfile(existingUser, "Software developer", null, Theme.DARK);
orm.entity(UserProfile.class).insert(profile);
```

Storm extracts the primary key from the `User` entity and uses it as the value for the `user_id` column.

---

## Relationship Loading Behavior

Storm loads the complete reachable entity graph in a single query using JOINs, unless a relationship is explicitly broken with `Ref`:

```kotlin
data class Order(
    @PK val id: Int = 0,
    @FK val customer: Customer,
    @FK val shippingAddress: Address
) : Entity<Int>

data class Customer(
    @PK val id: Int = 0,
    val name: String,
    @FK val defaultAddress: Address
) : Entity<Int>
```

When you query `Order`:

1. `Order` is loaded
2. `Customer` is loaded (via JOIN)
3. `Address` for shipping is loaded (via JOIN)
4. `Address` for customer default is loaded (via JOIN)

All in **one SQL query**. No lazy loading surprises, no N+1 problems.
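The traversal that produces those JOINs can be sketched in a few lines of framework-free code. The `Fk` descriptor and `JoinPlanner` below are simplified assumptions, not Storm internals: each reachable foreign key contributes one JOIN, with nullability selecting INNER vs LEFT (here `shippingAddress` is modeled as nullable to show both join types).

```java
import java.util.List;

// Simplified FK descriptor: target table, alias, join column, nullability,
// and the target's own foreign keys (illustrative only).
record Fk(String table, String alias, String fkColumn, boolean nullable, List<Fk> nested) {}

class JoinPlanner {
    // Recursively emit one JOIN per reachable foreign key.
    static void emitJoins(String parentAlias, List<Fk> fks, List<String> out) {
        for (Fk fk : fks) {
            String type = fk.nullable() ? "LEFT JOIN" : "INNER JOIN";
            out.add(type + " " + fk.table() + " " + fk.alias()
                    + " ON " + parentAlias + "." + fk.fkColumn() + " = " + fk.alias() + ".id");
            emitJoins(fk.alias(), fk.nested(), out); // nested FKs join transitively
        }
    }
}
```

Running the planner over the `Order` graph above yields one JOIN clause per `@FK`, which is why the whole graph arrives in a single SELECT.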
### How It Works

Storm generates a single SELECT with all necessary JOINs:

```
┌─────────────────────────────────────────────────────────────────────┐
│  SELECT o.id, o.customer_id, o.shipping_address_id,                 │
│         c.id, c.name, c.default_address_id,                         │
│         a1.id, a1.street, a1.city,                                  │
│         a2.id, a2.street, a2.city                                   │
│  FROM order o                                                       │
│  INNER JOIN customer c ON o.customer_id = c.id                      │
│  INNER JOIN address a1 ON o.shipping_address_id = a1.id             │
│  INNER JOIN address a2 ON c.default_address_id = a2.id              │
│  WHERE o.id = ?                                                     │
└─────────────────────────────────────────────────────────────────────┘
                               │
                               ▼
┌─────────────────────────────────────────────────────────────────────┐
│  Result: Single row with all columns from all joined tables         │
│                                                                     │
│  Storm automatically:                                               │
│  1. Parses columns back into their respective entity types          │
│  2. Constructs the complete object graph                            │
│  3. Returns a fully populated Order with nested entities            │
└─────────────────────────────────────────────────────────────────────┘
```

Storm always uses explicit column names (never `SELECT *`), ensuring predictable results even when table schemas change.

### Entity Graph to JOIN Mapping

Storm traverses the entity graph and generates JOINs based on FK nullability (in this diagram, `shippingAddress` is shown as nullable to illustrate both join types):

```
Entity Graph                            Generated JOINs
─────────────                           ───────────────
┌─────────┐                             FROM order o
│  Order  │
└────┬────┘
     │
     ├──── @FK customer ──────────────► INNER JOIN customer c
     │     (non-null)                     ON o.customer_id = c.id
     │          │
     │          └─ @FK defaultAddress ► INNER JOIN address a2
     │             (non-null)             ON c.default_address_id = a2.id
     │
     └──── @FK shippingAddress? ──────► LEFT JOIN address a1
           (nullable)                     ON o.shipping_address_id = a1.id
```

**Join type is determined by nullability:**

- Non-nullable FK -> INNER JOIN (referenced entity must exist)
- Nullable FK -> LEFT JOIN (referenced entity may be null)

**Nested FKs are joined transitively.** Storm follows the entire entity graph, joining each FK it encounters.

### Why Eager Loading?

Traditional ORMs use lazy loading, which causes:

| Problem | Description |
|---------|-------------|
| **N+1 queries** | Accessing a collection triggers N additional queries |
| **LazyInitializationException** | Accessing data outside transaction scope fails |
| **Unpredictable performance** | Same code has different DB load depending on access patterns |
| **Hidden complexity** | Proxied entities mask when database access occurs |

Storm's approach:

| Benefit | Description |
|---------|-------------|
| **Predictable queries** | One query per `find`/`select` operation |
| **No session required** | Entities work anywhere, no transaction scope needed |
| **Transparent behavior** | What you query is what you get |
| **Simple debugging** | Easy to trace and optimize SQL |

### Managing Graph Depth

For deep or circular relationships, use `Ref` to break the loading chain:

```kotlin
data class Category(
    @PK val id: Int = 0,
    val name: String,
    @FK val parent: Ref<Category>?  // Stops here, loads only the ID
) : Entity<Int>
```

See [Refs](refs.md) for details on lightweight references.

## Tips

1. **Keep entity graphs shallow.** Deep graphs mean large JOINs. Use `Ref` for optional or deep relationships.
2. **Query the "many" side.** For one-to-many, query the child entity with a filter on the parent.
3. **Use join entities for many-to-many.** Explicit join tables give you control over the relationship.
4. **Match nullability to your schema.** Use nullable FKs only when the database column allows NULL.
5. **Use Ref for circular references.** Prevents infinite recursion in self-referential entities.

========================================
## Source: repositories.md
========================================

# Repositories

Entity repositories provide a high-level abstraction for managing entities in the database. They offer methods for creating, reading, updating, and deleting entities, as well as querying and filtering based on specific criteria.
---

## Getting a Repository

[Kotlin]
Storm provides two ways to obtain a repository. The generic `entity()` method returns a built-in repository with standard CRUD operations. For custom query methods, define your own interface extending `EntityRepository` and retrieve it with `repository()` (covered below in Custom Repositories).

```kotlin
val orm = ORMTemplate.of(dataSource)

// Generic entity repository
val userRepository = orm.entity(User::class)

// Or using extension function
val userRepository = orm.entity<User>()
```

[Java]
The Java API follows the same pattern as Kotlin. The generic `entity()` method provides standard CRUD operations; custom interfaces use `repository()`.

```java
var orm = ORMTemplate.of(dataSource);

// Generic entity repository
EntityRepository<User, Integer> userRepository = orm.entity(User.class);
```

---

## Basic CRUD Operations

[Kotlin]
All CRUD operations use the entity's primary key (marked with `@PK`) for identity. Insert returns the entity with any database-generated fields populated (such as auto-increment IDs). Update and remove match by primary key. Query methods accept metamodel-based filter expressions that compile to parameterized WHERE clauses.

```kotlin
// Create
val user = orm insert User(
    email = "alice@example.com",
    name = "Alice",
    birthDate = LocalDate.of(1990, 5, 15)
)

// Read
val found: User? = orm.entity<User>().findById(user.id)
val alice: User? = orm.find(User_.name eq "Alice")
val all: List<User> = orm.findAll(User_.city eq city)

// Update
orm update user.copy(name = "Alice Johnson")

// Remove
orm remove user

// Remove by condition
orm.removeBy(User_.city, city)

// Remove by predicate
orm.removeAll(User_.active eq false)

// Remove all
orm.removeAll()

// Delete all (builder approach, requires unsafe() to confirm intent)
orm.entity(User::class).delete().unsafe().executeUpdate()
```

[Java]
Java CRUD operations use the fluent builder pattern. Since Java records are immutable, updates require constructing a new record instance with the changed field values.

```java
// Insert
User user = userRepository.insertAndFetch(new User(
    null, "alice@example.com", "Alice", LocalDate.of(1990, 5, 15), city
));

// Read
Optional<User> found = userRepository.select()
    .where(User_.id, EQUALS, user.id())
    .getOptionalResult();
List<User> all = userRepository.select()
    .where(User_.city, EQUALS, city)
    .getResultList();

// Update
userRepository.update(new User(
    user.id(), "alice@example.com", "Alice Johnson", user.birthDate(), user.city()
));

// Remove
userRepository.remove(user);

// Remove all
userRepository.removeAll();

// Delete all (builder approach, requires unsafe() to confirm intent)
userRepository.delete().unsafe().executeUpdate();
```

> **Warning:** Storm rejects DELETE and UPDATE queries that have no WHERE clause, throwing a `PersistenceException`. This prevents accidental bulk deletions, which is especially important because `QueryBuilder` is immutable and a lost `where()` return value would silently drop the filter. Call `unsafe()` to opt out of this check when you intentionally want to affect all rows. The `removeAll()` convenience method calls `unsafe()` internally.

Storm uses dirty checking to determine which columns to include in the UPDATE statement. See [Dirty Checking](dirty-checking.md) for configuration details.

---

## Streaming

[Kotlin]
For result sets that may be large, streaming avoids loading all rows into memory at once. Kotlin's `Flow` provides automatic resource management through structured concurrency: the underlying database cursor and connection are released when the flow completes or is cancelled, without requiring explicit cleanup.

```kotlin
val users: Flow<User> = userRepository.selectAll()
val count = users.count()

// Collect to list
val userList: List<User> = users.toList()
```

[Java]
Java streams over database results hold open a database cursor and connection. You must close the stream explicitly, either with try-with-resources or by calling `close()`. Failing to close the stream leaks database connections.

```java
try (Stream<User> users = userRepository.selectAll()) {
    List<Integer> userIds = users.map(User::id).toList();
}
```

---

## Unique Key Lookups

When a field is annotated with `@UK`, the metamodel generates a `Metamodel.Key` instance that enables type-safe single-result lookups:

[Kotlin]
```kotlin
val user: User? = userRepository.findBy(User_.email, "alice@example.com")
val user: User = userRepository.getBy(User_.email, "alice@example.com")  // throws if not found
```

[Java]
```java
Optional<User> user = userRepository.findBy(User_.email, "alice@example.com");
User user = userRepository.getBy(User_.email, "alice@example.com");  // throws if not found
```

Since `@PK` implies `@UK`, primary key fields also work with `findBy` and `getBy`. Entities loaded within a transaction are cached. See [Entity Cache](entity-cache.md) for details.

---

## Offset-Based Pagination

Storm provides built-in `Page` and `Pageable` types for offset-based pagination. These eliminate the need to write manual `LIMIT`/`OFFSET` queries or define your own page wrapper. The repository handles the count query and result slicing automatically.

For query-builder-level pagination (manual offset/limit, `Page` with query builder), see [Pagination and Scrolling: Pagination](pagination-and-scrolling.md#pagination).

### Page and Pageable

A `Pageable` describes a pagination request: which page to fetch, how many results per page, and an optional sort order. A `Page` holds the results along with metadata such as the total number of matching results, the total number of pages, and navigation helpers.
| `Page` field / method | Description |
|---|---|
| `content` | The list of results for this page |
| `totalCount` | Total number of matching rows across all pages |
| `pageNumber()` | Zero-based index of the current page |
| `pageSize()` | Maximum number of elements per page |
| `totalPages()` | Total number of pages |
| `hasNext()` | Whether a next page exists |
| `hasPrevious()` | Whether a previous page exists |
| `nextPageable()` | Returns a `Pageable` for the next page (preserves sort orders) |
| `previousPageable()` | Returns a `Pageable` for the previous page (preserves sort orders) |

Create a `Pageable` using one of the factory methods:

- `Pageable.ofSize(pageSize)` creates a request for the first page (page 0) with the given size.
- `Pageable.of(pageNumber, pageSize)` creates a request for a specific page.
- Chain `.sortBy(field)` or `.sortByDescending(field)` to add sort orders.

### Basic Usage

The simplest way to paginate is to call `page(pageNumber, pageSize)` on a repository. For more control over sorting, construct a `Pageable` and pass it to `page(pageable)`.

[Kotlin]

```kotlin
// First page of 20 users
val page1: Page<User> = userRepository.page(0, 20)

// Using Pageable with sort order
val pageable = Pageable.ofSize(20).sortBy(User_.name)
val page: Page<User> = userRepository.page(pageable)

// Navigate to next page
if (page.hasNext()) {
    val nextPage = userRepository.page(page.nextPageable())
}
```

[Java]

```java
// First page of 20 users
Page<User> page1 = userRepository.page(0, 20);

// Using Pageable with sort order
Pageable pageable = Pageable.ofSize(20).sortBy(User_.name);
Page<User> page = userRepository.page(pageable);

// Navigate to next page
if (page.hasNext()) {
    Page<User> nextPage = userRepository.page(page.nextPageable());
}
```

### Ref Variants

Use `pageRef` to load only primary keys instead of full entities, returning a `Page` of `Ref` values. This is useful when you need identifiers for a subsequent batch operation without the overhead of fetching full entity data.
[Kotlin]

```kotlin
val refPage: Page<Ref<User>> = userRepository.pageRef(0, 20)
```

[Java]

```java
Page<Ref<User>> refPage = userRepository.pageRef(0, 20);
```

---

## Scrolling

Repositories provide convenience methods for scrolling through result sets, where a unique column value (typically the primary key) acts as a cursor. This approach avoids the performance issues of `OFFSET` on large tables, because the database can seek directly to the cursor position using an index rather than scanning and discarding skipped rows. The key parameter must be a `Metamodel.Key`, which is generated for fields annotated with `@UK` or `@PK`. See [Metamodel](metamodel.md#unique-keys-uk-and-metamodelkey) for details.

The `scroll` method accepts a `Scrollable` that captures the cursor state (key, page size, direction, and cursor values) and returns a `Window` containing the page content, informational `hasNext`/`hasPrevious` flags, and `Scrollable` navigation tokens for fetching the adjacent window. Navigation tokens (`next()`, `previous()`) are always present when the window has content; they are only `null` when the window is empty. The `hasNext` and `hasPrevious` flags indicate whether more results existed at query time, but they do not gate access to the navigation tokens. Since new data may appear after the query, the developer decides whether to follow a cursor.

Create a `Scrollable` using the factory methods, then use the navigation tokens on the returned `Window` to move forward or backward:

[Kotlin]

```kotlin
// First page of 20 users ordered by ID
val window: Window<User> = userRepository.scroll(Scrollable.of(User_.id, 20))

// Next page (next() is non-null whenever the window has content)
val next: Window<User> = userRepository.scroll(window.next())

// Previous page
val previous: Window<User> = userRepository.scroll(window.previous())

// Optionally check hasNext/hasPrevious to decide whether to follow the cursor.
// These flags reflect a snapshot at query time; new data may appear afterward.
if (window.hasNext()) {
    // more results existed when the query ran
}
```

To scroll through a filtered subset, use the query builder with `scroll` as a terminal operation. The filter and cursor conditions are combined with AND.

```kotlin
val activeWindow = userRepository.select()
    .where(User_.active, EQUALS, true)
    .scroll(Scrollable.of(User_.id, 20))

val nextActive = userRepository.select()
    .where(User_.active, EQUALS, true)
    .scroll(activeWindow.next())
```

For backward scrolling (starting from the end of the result set), use `.backward()`:

```kotlin
val lastWindow: Window<User> = userRepository.scroll(Scrollable.of(User_.id, 20).backward())
```

The scroll methods handle ordering internally and reject explicit `orderBy()` calls. Backward scrolling returns results in descending key order; reverse the list if you need ascending order for display. See [Pagination and Scrolling: Scrolling](pagination-and-scrolling.md#scrolling) for full details on ordering constraints.

[Java]

The same scrolling methods described in the Kotlin section are available on Java repositories. The `scroll` method accepts a `Scrollable` and returns a `Window` containing the page `content()`, informational `hasNext()`/`hasPrevious()` flags, and `Scrollable` navigation tokens (`next()`, `previous()`) that are always present when the window has content.

```java
// First page of 20 users ordered by ID
Window<User> window = userRepository.scroll(Scrollable.of(User_.id, 20));

// Next page (next() is non-null whenever the window has content)
Window<User> next = userRepository.scroll(window.next());

// Previous page
Window<User> previous = userRepository.scroll(window.previous());

// Optionally check hasNext/hasPrevious to decide whether to follow the cursor.
// These flags reflect a snapshot at query time; new data may appear afterward.
if (window.hasNext()) {
    // more results existed when the query ran
}
```

For filtered results, use the query builder and call `scroll` as a terminal operation.
The filter and cursor conditions are combined with AND.

```java
Window<User> activeWindow = userRepository.select()
    .where(User_.active, EQUALS, true)
    .scroll(Scrollable.of(User_.id, 20));
```

For backward scrolling (starting from the end of the result set), use `.backward()`:

```java
Window<User> lastWindow = userRepository.scroll(Scrollable.of(User_.id, 20).backward());
```

As with Kotlin, the scroll methods handle ordering internally and reject explicit `orderBy()` calls. Backward scrolling returns results in descending key order. See [Pagination and Scrolling: Scrolling](pagination-and-scrolling.md#scrolling) for full details.

### Scrolling with Sort

When you need to sort by a non-unique column (for example, a date or status), use the `Scrollable.of` overload that accepts a separate sort column. It takes a `key` column (typically the primary key) as a unique tiebreaker and a `sort` column for the primary sort order, guaranteeing deterministic paging even when `sort` values repeat.

[Kotlin]

```kotlin
// First page sorted by creation date, with ID as tiebreaker
val window: Window<Post> = postRepository.scroll(Scrollable.of(Post_.id, Post_.createdAt, 20))

// Next page
val next: Window<Post> = postRepository.scroll(window.next())

// With filter (use query builder)
val activeWindow = postRepository.select()
    .where(Post_.active, EQUALS, true)
    .scroll(Scrollable.of(Post_.id, Post_.createdAt, 20))
```

[Java]

```java
// First page sorted by creation date, with ID as tiebreaker
Window<Post> window = postRepository.scroll(Scrollable.of(Post_.id, Post_.createdAt, 20));

// Next page
Window<Post> next = postRepository.scroll(window.next());
```

The `Window` carries navigation tokens (`next()`, `previous()`) that encode the cursor values internally, so the client does not need to extract cursor values manually. These tokens are always non-null when the window contains content.
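The deterministic paging this overload guarantees boils down to lexicographic row comparison: the sort column decides first, and the unique key breaks ties. A minimal plain-Java sketch of that cursor predicate, independent of Storm (the `Row` record and `afterCursor` helper are hypothetical stand-ins, not Storm API):

```java
import java.time.LocalDate;
import java.util.Comparator;
import java.util.List;

// Hypothetical stand-in for a Post row: createdAt is the sort column,
// id is the unique tiebreaker that makes paging deterministic.
record Row(long id, LocalDate createdAt) {}

public class KeysetSketch {
    // Cursor predicate equivalent to SQL: (created_at, id) > (:lastCreatedAt, :lastId)
    static boolean afterCursor(Row row, LocalDate lastCreatedAt, long lastId) {
        int cmp = row.createdAt().compareTo(lastCreatedAt);
        return cmp > 0 || (cmp == 0 && row.id() > lastId);
    }

    public static void main(String[] args) {
        List<Row> rows = List.of(
            new Row(1, LocalDate.of(2024, 1, 1)),
            new Row(2, LocalDate.of(2024, 1, 1)),   // same date: tiebreaker decides
            new Row(3, LocalDate.of(2024, 1, 2)));

        // "Next window" after cursor (2024-01-01, id=1): rows 2 and 3 qualify.
        List<Row> next = rows.stream()
            .filter(r -> afterCursor(r, LocalDate.of(2024, 1, 1), 1))
            .sorted(Comparator.comparing(Row::createdAt).thenComparing(Row::id))
            .toList();
        System.out.println(next.stream().map(Row::id).toList()); // [2, 3]
    }
}
```

Because the filter is a pure comparison against indexed columns, the database can seek straight to the cursor position instead of skipping rows, which is what gives scrolling its constant cost at any depth.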
For REST APIs, `nextCursor()` and `previousCursor()` provide a convenient serialized form: `nextCursor()` returns `null` when `hasNext` is false, and `previousCursor()` returns `null` when `hasPrevious` is false.

For queries that need joins, projections, or more complex filtering, use the query builder and call `scroll` as a terminal operation. See [Pagination and Scrolling: Scrolling](pagination-and-scrolling.md#scrolling) for full details on how scrolling composes with WHERE and ORDER BY clauses, including indexing recommendations.

## Pagination vs. Scrolling

Storm supports two strategies for traversing large result sets. The table below summarizes the trade-offs to help you choose.

| Factor | Pagination (`page`) | Scrolling (`scroll`) |
|---|---|---|
| Request type | `Pageable` | `Scrollable` |
| Result type | `Page` | `Window` |
| Navigation | page number | cursor |
| Count query | yes | no |
| Random access | yes | no |
| Performance at page 1 | Good | Good |
| Performance at page 1,000 | Degrades (database must skip rows) | Consistent (index seek) |
| Handles concurrent inserts | Rows may shift between pages | Stable cursor |
| Navigate forward | `page.nextPageable()` | `window.next()` |
| Navigate backward | `page.previousPageable()` | `window.previous()` |

Use pagination when you need random page access or a total count (for example, displaying "Page 3 of 12" in a UI). Use scrolling when you need consistent performance over deep result sets or when the data changes frequently between requests.

---

## Refs

Refs are lightweight identifiers that carry only the record type and primary key. Selecting refs instead of full entities reduces memory usage and network bandwidth when you only need IDs for subsequent operations, such as batch lookups or filtering. See [Refs](refs.md) for a detailed discussion.
[Kotlin]

```kotlin
// Select refs (lightweight identifiers)
val refs: Flow<Ref<User>> = userRepository.selectAllRef()

// Select by refs
val users: Flow<User> = userRepository.selectByRef(refs)
```

[Java]

Ref operations in Java return `Stream` objects that must be closed. Refs carry only the primary key and record type, making them suitable for batch operations where loading full records would be wasteful.

```java
// Select refs (lightweight identifiers)
try (Stream<Ref<User>> refs = userRepository.selectAllRef()) {
    // Process refs
}

// Select by refs
List<Ref<User>> refList = ...;
try (Stream<User> users = userRepository.selectByRef(refList.stream())) {
    // Process users
}
```

---

## Custom Repositories

[Kotlin]

Custom repositories let you encapsulate domain-specific queries behind a typed interface. Define an interface that extends `EntityRepository`, add methods with default implementations that use the inherited query API, and retrieve it from `orm.repository()`. This keeps query logic in a single place and makes it testable through interface substitution. The advantage over using the generic `entity()` repository is that custom methods express domain intent (e.g., `findByEmail`) rather than exposing raw query construction to callers.

```kotlin
interface UserRepository : EntityRepository {

    // Custom query method
    fun findByEmail(email: String): User? =
        find(User_.email eq email)

    // Custom query with multiple conditions
    fun findByNameInCity(name: String, city: City): List<User> =
        findAll((User_.city eq city) and (User_.name eq name))
}
```

Get the repository:

```kotlin
val userRepository: UserRepository = orm.repository()
```

[Java]

Java custom repositories follow the same pattern as Kotlin, using `default` methods to provide implementations. The fluent builder API chains `where`, `and`, and `or` calls to construct type-safe filter expressions.
```java
interface UserRepository extends EntityRepository {

    // Custom query method
    default Optional<User> findByEmail(String email) {
        return select()
            .where(User_.email, EQUALS, email)
            .getOptionalResult();
    }

    // Custom query with multiple conditions
    default List<User> findByNameInCity(String name, City city) {
        return select()
            .where(it -> it.where(User_.city, EQUALS, city)
                .and(it.where(User_.name, EQUALS, name)))
            .getResultList();
    }
}
```

Get the repository:

```java
UserRepository userRepository = orm.repository(UserRepository.class);
```

---

## Repository with Spring

[Kotlin]

Repositories can be injected using Spring's dependency injection:

```kotlin
@Service
class UserService(
    private val userRepository: UserRepository
) {
    fun findUser(email: String): User? =
        userRepository.findByEmail(email)
}
```

[Java]

Repositories can be injected using Spring's dependency injection:

```java
@Service
public class UserService {
    private final UserRepository userRepository;

    public UserService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    public Optional<User> findUser(String email) {
        return userRepository.findByEmail(email);
    }
}
```

---

## Spring Configuration

Storm repositories are plain interfaces, so Spring cannot discover them through component scanning. The `RepositoryBeanFactoryPostProcessor` bridges this gap by scanning specified packages for interfaces that extend `EntityRepository` or `ProjectionRepository` and registering proxy implementations as Spring beans. Once registered, you can inject repositories through standard constructor injection. See [Spring Integration](spring-integration.md) for full configuration details.
[Kotlin]

```kotlin
@Configuration
class AcmeRepositoryBeanFactoryPostProcessor : RepositoryBeanFactoryPostProcessor() {
    override val repositoryBasePackages: Array<String>
        get() = arrayOf("com.acme.repository")
}
```

[Java]

```java
@Configuration
public class AcmeRepositoryBeanFactoryPostProcessor extends RepositoryBeanFactoryPostProcessor {
    @Override
    public String[] getRepositoryBasePackages() {
        return new String[] { "com.acme.repository" };
    }
}
```

## Tips

1. **Use custom repositories.** Encapsulate domain-specific queries in repository interfaces.
2. **Close streams.** Always close `Stream` results to release database resources.
3. **Prefer Kotlin Flow.** Kotlin's Flow automatically handles resource cleanup.
4. **Use Spring injection.** Let Spring manage repository lifecycle for cleaner code.

========================================
## Source: queries.md
========================================

# Queries

Storm provides a powerful and flexible query API. All queries are type-safe; the generated metamodel (`User_`, `City_`, etc.) catches errors at compile time rather than at runtime.
Key features:

- **Compile-time checked** -- field references are validated by the metamodel
- **No string-based queries** -- no risk of typos in column names
- **Single-query loading** -- related entities load in JOINs, not N+1 queries
- **Two styles** -- quick methods for simple cases, fluent builder for complex queries

---

## Choosing a Query Approach

Storm offers three ways to query data, each suited to different complexity levels:

| Approach | Best for | Type safety | Flexibility |
|----------|----------|-------------|-------------|
| **Repository `findBy`** | Simple key lookups by primary key or unique key | Full compile-time | Low (single-field equality only) |
| **Query DSL** | Filtering, ordering, pagination with type-safe conditions | Full compile-time | Medium (AND/OR predicates, joins, ordering) |
| **SQL Templates** | Complex joins, subqueries, CTEs, window functions, database-specific SQL | Column references checked at compile time, SQL structure at runtime | High (full SQL control) |

Start with the simplest approach that meets your needs. Use `findBy` or `findAll` for straightforward lookups. Move to the query builder when you need compound filters or pagination. Use SQL templates when you need SQL features the DSL does not cover.

---

## Quick Queries

[Kotlin]

Storm for Kotlin offers two complementary query styles; use whichever fits best. For simple queries, use methods directly on the ORM template:

```kotlin
// Find single entity with predicate
val user: User? = orm.find(User_.email eq email)

// Find all matching
val users: List<User> = orm.findAll(User_.city eq city)

// Find by field value
val user: User? = orm.findBy(User_.email, email)

// Check existence
val exists: Boolean = orm.existsBy(User_.email, email)
```

[Java]

The Java DSL uses the same `EntityRepository` interface as Kotlin. Obtain a repository with `orm.entity(Class)` and use its fluent query builder. Return types use `Optional` for single results and `List` for collections.
```java
var users = orm.entity(User.class);

// Find by ID
Optional<User> user = users.findById(userId);

// Find all matching
List<User> usersInCity = users.select()
    .where(User_.city, EQUALS, city)
    .getResultList();

// Find first matching
Optional<User> user = users.select()
    .where(User_.email, EQUALS, email)
    .getOptionalResult();

// Count
long count = users.count();
```

---

## Repository Queries

[Kotlin]

For more complex operations, use the repository:

```kotlin
val users = orm.entity(User::class)

// Find by ID
val user: User? = users.findById(userId)

// Find with predicate
val user: User? = users.find(User_.email eq email)

// Find all matching
val usersInCity: List<User> = users.findAll(User_.city eq city)

// Count
val count: Long = users.count()

// Exists
val exists: Boolean = users.existsById(userId)
```

[Java]

For more complex operations, use the repository:

```java
var users = orm.entity(User.class);

// Find by ID
Optional<User> user = users.findById(userId);

// Find with predicate
Optional<User> user = users.select()
    .where(User_.email, EQUALS, email)
    .getOptionalResult();

// Find all matching
List<User> usersInCity = users.select()
    .where(User_.city, EQUALS, city)
    .getResultList();

// Count
long count = users.count();

// Exists
boolean exists = users.existsById(userId);
```

---

## Filtering with Predicates

[Kotlin]

Combine conditions with `and` and `or`:

```kotlin
// AND condition
val users = orm.findAll(
    (User_.city eq city) and (User_.birthDate less LocalDate.of(2000, 1, 1))
)

// OR condition
val users = orm.findAll(
    (User_.role eq adminRole) or (User_.role eq superUserRole)
)

// Complex conditions
val users = orm.entity(User::class)
    .select()
    .where(
        (User_.city eq city) and (
            (User_.role eq adminRole) or (User_.birthDate greaterEq LocalDate.of(1990, 1, 1))
        )
    )
    .resultList
```

### Operators

| Operator | Description |
|----------|-------------|
| `eq` | Equals |
| `neq` | Not equals |
| `less` | Less than |
| `lessEq` | Less than or equals |
| `greater` | Greater than |
| `greaterEq` | Greater than or equals |
| `like` | LIKE pattern match |
| `notLike` | NOT LIKE |
| `isNull` | IS NULL |
| `isNotNull` | IS NOT NULL |
| `inList` | IN (list) |
| `notInList` | NOT IN (list) |

```kotlin
val users = orm.findAll(User_.email like "%@example.com")
val users = orm.findAll(User_.deletedAt.isNull())
val users = orm.findAll(User_.role inList listOf(adminRole, userRole))
```

[Java]

Combine conditions using the lambda-based `where` builder. The `it` parameter provides access to the condition factory, which you chain with `.and()` or `.or()` calls to compose compound predicates.

```java
// AND condition
List<User> users = orm.entity(User.class)
    .select()
    .where(it -> it.where(User_.city, EQUALS, city)
        .and(it.where(User_.birthDate, LESS_THAN, LocalDate.of(2000, 1, 1))))
    .getResultList();

// OR condition
List<User> users = orm.entity(User.class)
    .select()
    .where(it -> it.where(User_.role, EQUALS, adminRole)
        .or(it.where(User_.role, EQUALS, superUserRole)))
    .getResultList();
```

### Filtering (SQL Templates)

SQL Templates let you write SQL directly while retaining type safety. Entity references and metamodel fields are interpolated into the template, and parameter values are bound safely. This approach is well suited for queries that use database-specific syntax, CTEs, or window functions that the DSL does not cover.
```java
List<User> users = orm.query(RAW."""
        SELECT \{User.class}
        FROM \{User.class}
        WHERE \{city}
          AND \{User_.birthDate} < \{LocalDate.of(2000, 1, 1)}""")
    .getResultList(User.class);
```

### Operators

| Operator | Description |
|----------|-------------|
| `EQUALS` | Equals |
| `NOT_EQUALS` | Not equals |
| `LESS_THAN` | Less than |
| `LESS_THAN_OR_EQUAL` | Less than or equals |
| `GREATER_THAN` | Greater than |
| `GREATER_THAN_OR_EQUAL` | Greater than or equals |
| `LIKE` | LIKE pattern match |
| `NOT_LIKE` | NOT LIKE |
| `IS_NULL` | IS NULL |
| `IS_NOT_NULL` | IS NOT NULL |
| `IN` | IN (list) |
| `NOT_IN` | NOT IN (list) |

```java
List<User> users = orm.entity(User.class)
    .select()
    .where(User_.email, LIKE, "%@example.com")
    .getResultList();
```

### Composing Multiple Filters

Multiple `where()` calls on the same query builder are combined with AND. This lets you build up filters incrementally, which is useful when conditions are added conditionally in application code.

[Kotlin]

```kotlin
val results = orm.entity(User::class)
    .select()
    .where(User_.active, EQUALS, true)
    .where(User_.city eq city) // AND-combined with previous where
    .resultList
```

Builder-style `where()` calls (with `and`/`or` predicates) compose with other `where()` calls in the same way:

```kotlin
val results = orm.entity(User::class)
    .select()
    .where(User_.active, EQUALS, true)
    .where( // AND-combined with the active filter above
        (User_.role eq adminRole) or (User_.role eq superUserRole)
    )
    .resultList
```

[Java]

```java
List<User> results = orm.entity(User.class)
    .select()
    .where(User_.active, EQUALS, true)
    .where(User_.city, EQUALS, city) // AND-combined with previous where
    .getResultList();
```

Builder-style `where()` calls (with `and`/`or` predicates) compose with other `where()` calls in the same way:

```java
List<User> results = orm.entity(User.class)
    .select()
    .where(User_.active, EQUALS, true)
    .where(it -> it.where(User_.role, EQUALS, adminRole) // AND-combined with active filter
        .or(it.where(User_.role, EQUALS, superUserRole)))
    .getResultList();
```

---

## Ordering

[Kotlin]

Use `orderBy` to control result ordering. Pass multiple fields as arguments to sort by more than one column. Use `orderByDescending` for descending order on a single field.

```kotlin
val users = orm.entity(User::class)
    .select()
    .orderBy(User_.name)
    .resultList

// Descending
val users = orm.entity(User::class)
    .select()
    .orderByDescending(User_.createdAt)
    .resultList

// Multiple fields (all ascending)
val users = orm.entity(User::class)
    .select()
    .orderBy(User_.lastName, User_.firstName)
    .resultList
```

Multiple `orderBy` and `orderByDescending` calls can be chained to build multi-column sort clauses with mixed directions. Each call appends to the existing ORDER BY clause rather than replacing it, so you can mix ascending and descending columns freely.

```kotlin
// Mixed sort directions: last name ascending, first name descending
val users = orm.entity(User::class)
    .select()
    .orderBy(User_.lastName)
    .orderByDescending(User_.firstName)
    .resultList
```

When an inline record (embedded component) is passed to `orderBy` or `orderByDescending`, Storm automatically expands it into its individual leaf columns using `flatten()`. For example, if `User_.fullName` is an inline record with `lastName` and `firstName` fields, `orderBy(User_.fullName)` produces `ORDER BY last_name, first_name`. The same expansion applies to `groupBy`.

For full control over the ORDER BY clause (for example, to use SQL expressions or database-specific syntax), use the template overload. Metamodel fields are resolved to their column names automatically.

```kotlin
// Mixed sort directions (template)
val users = orm.entity(User::class)
    .select()
    .orderBy { "${User_.lastName}, ${User_.firstName} DESC" }
    .resultList
```

[Java]

Use `orderBy` to sort results by one or more columns. Pass multiple fields as arguments for multi-column sorting. Use `orderByDescending` for descending order on a single field.
```java
// Ascending (default)
List<User> users = orm.entity(User.class)
    .select()
    .orderBy(User_.name)
    .getResultList();

// Descending
List<User> users = orm.entity(User.class)
    .select()
    .orderByDescending(User_.createdAt)
    .getResultList();

// Multiple fields (all ascending)
List<User> users = orm.entity(User.class)
    .select()
    .orderBy(User_.lastName, User_.firstName)
    .getResultList();
```

Chain `orderBy` and `orderByDescending` calls to mix ascending and descending columns. Each call appends to the ORDER BY clause.

```java
// Mixed sort directions: last name ascending, first name descending
List<User> users = orm.entity(User.class)
    .select()
    .orderBy(User_.lastName)
    .orderByDescending(User_.firstName)
    .getResultList();
```

When an inline record (embedded component) is passed to `orderBy` or `orderByDescending`, Storm automatically expands it into its individual leaf columns using `flatten()`. The same expansion applies to `groupBy`.

For full control over the ORDER BY clause, use the template overload:

```java
// Mixed sort directions (template)
List<User> users = orm.entity(User.class)
    .select()
    .orderBy(RAW."\{User_.lastName}, \{User_.firstName} DESC")
    .getResultList();
```

## Aggregation

[Kotlin]

To perform GROUP BY queries with aggregate functions like COUNT, SUM, or AVG, define a result data class with the desired columns and pass a custom SELECT expression. Interpolating an entity or projection type generates the column list automatically, so you do not have to enumerate columns manually.

```kotlin
data class CityCount(val city: City, val count: Long)

val counts: List<CityCount> = orm.entity(User::class)
    .select(CityCount::class) { "${City::class}, COUNT(*)" }
    .groupBy(User_.city)
    .resultList
```

[Java]

Define a result record with the desired columns and pass a custom SELECT expression. The DSL approach uses `select(Class, template)` with `groupBy` to build the query.
```java
record CityCount(City city, long count) {}

List<CityCount> counts = orm.entity(User.class)
    .select(CityCount.class, RAW."\{City.class}, COUNT(*)")
    .groupBy(User_.city)
    .getResultList();
```

### Aggregation (SQL Templates)

For aggregation queries that involve multiple tables, CTEs, or HAVING clauses, SQL Templates give you full control over the query structure while still mapping results to typed records.

```java
List<CityCount> counts = orm.query(RAW."""
        SELECT \{City.class}, COUNT(*)
        FROM \{User.class}
        GROUP BY \{User_.city}""")
    .getResultList(CityCount.class);
```

## Data Retrieval Strategies

When working with large result sets, Storm supports three strategies for retrieving subsets: manual offset/limit, offset-based pagination, and cursor-based scrolling.

| Strategy | Navigation | Result type | Typical use |
|----------|------------|-------------|-------------|
| **Offset and Limit** | manual | `List` | simple queries with known bounds |
| **Pagination** | page number | `Page` | UI lists, reports |
| **Scrolling** | sequential cursor | `Window` | infinite scroll, batch processing |

**Pagination** navigates by page number and includes a total count. It uses SQL `OFFSET` under the hood, which degrades on large tables. **Scrolling** uses keyset pagination for constant-time performance regardless of depth, but only supports sequential forward/backward navigation.

For detailed usage, sorting, composite scrolling, `Window` type parameters, GROUP BY with scrolling, and REST cursor support, see [Pagination and Scrolling](pagination-and-scrolling.md).

### Quick examples

[Kotlin]

```kotlin
// Offset and limit
val results = orm.entity(User::class).select()
    .orderBy(User_.createdAt)
    .offset(20).limit(10)
    .resultList

// Pagination
val page: Page<User> = orm.entity(User::class).select()
    .where(User_.active, EQUALS, true)
    .page(Pageable.ofSize(10))

// Scrolling
val window: Window<User> = userRepository.scroll(Scrollable.of(User_.id, 20))
// next() is non-null when the window has content.
// hasNext is informational; the developer decides whether to follow the cursor.
val next = userRepository.scroll(window.next())
```

[Java]

```java
// Offset and limit
var results = orm.entity(User.class).select()
    .orderBy(User_.createdAt)
    .offset(20).limit(10)
    .getResultList();

// Pagination
Page<User> page = orm.entity(User.class).select()
    .where(User_.active, EQUALS, true)
    .page(Pageable.ofSize(10));

// Scrolling
Window<User> window = userRepository.scroll(Scrollable.of(User_.id, 20));
// next() is non-null when the window has content.
// hasNext() is informational; the developer decides whether to follow the cursor.
var next = userRepository.scroll(window.next());
```

## Distinct Results

Add `.distinct()` to eliminate duplicate rows from the result set. This is useful when selecting a related entity type from a query that could produce duplicates due to one-to-many relationships.

[Kotlin]

```kotlin
val cities = orm.entity(User::class)
    .select(City::class)
    .distinct()
    .resultList
```

[Java]

```java
List<City> cities = orm.entity(User.class)
    .select(City.class)
    .distinct()
    .getResultList();
```

---

## Streaming

[Kotlin]

For large result sets, use `selectAll()` or `select()`, which return a Kotlin `Flow`. Rows are fetched lazily from the database as you collect, so memory usage stays constant regardless of result set size. Flow also handles resource cleanup automatically when collection completes or is cancelled.

```kotlin
val users: Flow<User> = orm.entity(User::class).selectAll()

// Process each
users.collect { user -> process(user) }

// Transform and collect
val emails: List<String> = users.map { it.email }.toList()

// Count
val count: Int = users.count()
```

[Java]

Java streams hold an open database cursor and JDBC resources. Unlike Kotlin's `Flow` (which handles cleanup automatically), Java `Stream` results must be explicitly closed. Always wrap them in a try-with-resources block to prevent connection leaks.
```java
try (Stream<User> users = orm.entity(User.class).selectAll()) {
    List<String> emails = users.map(User::email).toList();
}
```

---

## Joins

[Kotlin]

Storm automatically joins entities referenced by `@FK` fields. When you need to join entities that are not directly referenced in the result type (for example, filtering through a many-to-many join table), use explicit `innerJoin` or `leftJoin` calls. The `on` clause specifies which existing entity in the query the joined table relates to.

```kotlin
val roles = orm.entity(Role::class)
    .select()
    .innerJoin(UserRole::class).on(Role::class)
    .whereAny(UserRole_.user eq user)
    .resultList
```

[Java]

Storm automatically joins entities referenced by `@FK` fields. For entities not directly referenced in the result type, such as join tables in many-to-many relationships, use explicit `innerJoin` or `leftJoin` calls. The `on` clause specifies which existing entity in the query the joined table relates to.

```java
List<Role> roles = orm.entity(Role.class)
    .select()
    .innerJoin(UserRole.class).on(Role.class)
    .where(UserRole_.user, EQUALS, user)
    .getResultList();
```

### Joins (SQL Templates)

SQL Templates let you write JOIN clauses directly, which is useful when the join condition is not a simple foreign key match or when you need to join on computed expressions.

```java
List<Role> roles = orm.query(RAW."""
        SELECT \{Role.class}
        FROM \{Role.class}
        INNER JOIN \{UserRole.class} ON \{UserRole_.role} = \{Role_.id}
        WHERE \{UserRole_.user} = \{user.id()}""")
    .getResultList(Role.class);
```

---

## Result Classes

Query result classes can be:

- **Plain records** -- Storm maps columns to fields (you write all SQL)
- **`Data` implementations** -- enable SQL template helpers like `${Class::class}`
- **`Entity`/`Projection`** -- full repository support with CRUD operations

Choose the simplest option that meets your needs. See [SQL Templates](sql-templates.md) for details.
---

## Compound Fields in Queries

When an inline record (embedded component) is used in a query clause, Storm automatically expands it into its constituent columns. This applies to WHERE, ORDER BY, and GROUP BY clauses.

### WHERE Clauses

Inline records expand differently depending on the operator:

**EQUALS / NOT_EQUALS** generate per-column AND conditions:

[Kotlin]

```kotlin
val owner = orm.entity(Owner::class)
    .select()
    .where(Owner_.address, EQUALS, address)
    .singleResult
```

[Java]

```java
Owner owner = orm.entity(Owner.class)
    .select()
    .where(Owner_.address, EQUALS, address)
    .getSingleResult();
```

```sql
WHERE o.address = ? AND o.city_id = ?
```

For NOT_EQUALS, the condition is wrapped in NOT:

```sql
WHERE NOT (o.address = ? AND o.city_id = ?)
```

**Comparison operators** (GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL) generate lexicographic comparisons using nested OR/AND. This preserves the natural multi-column ordering:

[Kotlin]

```kotlin
val owners = orm.entity(Owner::class)
    .select()
    .where(Owner_.address, GREATER_THAN, address)
    .resultList
```

[Java]

```java
List<Owner> owners = orm.entity(Owner.class)
    .select()
    .where(Owner_.address, GREATER_THAN, address)
    .getResultList();
```

```sql
WHERE (o.address > ? OR (o.address = ? AND o.city_id > ?))
```

For GREATER_THAN_OR_EQUAL, only the last column uses the inclusive operator:

```sql
WHERE (o.address > ? OR (o.address = ? AND o.city_id >= ?))
```

Some databases (PostgreSQL, MySQL, MariaDB, Oracle) support native tuple comparison syntax, which Storm uses automatically when available:

```sql
WHERE (o.address, o.city_id) > (?, ?)
```

**Unsupported operators.** LIKE, NOT_LIKE, IN, and NOT_IN do not have a meaningful multi-column interpretation and throw a `PersistenceException` when used with inline records.
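The nested OR/AND form and the native tuple syntax are equivalent: `(a, b) > (x, y)` holds exactly when `a > x OR (a = x AND b > y)`. A small plain-Java check (no Storm types involved) verifies that equivalence against a reference lexicographic comparator over a small integer domain:

```java
import java.util.Comparator;

public class TupleCompareCheck {
    // Expanded form generated for GREATER_THAN on a two-column compound value:
    //   (a > x) OR (a = x AND b > y)
    static boolean expandedGreater(int a, int b, int x, int y) {
        return a > x || (a == x && b > y);
    }

    public static void main(String[] args) {
        // Reference: lexicographic comparison of (a, b) vs (x, y)
        Comparator<int[]> lex = Comparator.<int[]>comparingInt(p -> p[0])
                                          .thenComparingInt(p -> p[1]);
        for (int a = 0; a < 4; a++)
            for (int b = 0; b < 4; b++)
                for (int x = 0; x < 4; x++)
                    for (int y = 0; y < 4; y++) {
                        boolean reference = lex.compare(new int[]{a, b}, new int[]{x, y}) > 0;
                        if (expandedGreater(a, b, x, y) != reference)
                            throw new AssertionError("mismatch");
                    }
        System.out.println("OK"); // prints "OK"
    }
}
```

The same pattern extends to more columns: each additional column adds one more `(a = x AND ...)` nesting level, with the inclusive operator appearing only on the last column for the `_OR_EQUAL` variants.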
To filter on a sub-field, reference it directly:

[Kotlin]

```kotlin
val owners = orm.entity(Owner::class)
    .select()
    .where(Owner_.address.address, LIKE, "%Main%")
    .resultList
```

[Java]

```java
List<Owner> owners = orm.entity(Owner.class)
    .select()
    .where(Owner_.address.address, LIKE, "%Main%")
    .getResultList();
```

### ORDER BY

Passing an inline record to `orderBy` or `orderByDescending` expands it into its leaf columns. For example, if `Owner_.address` is an inline record with `address` and `city` fields:

```kotlin
val owners = orm.entity(Owner::class)
    .select()
    .orderBy(Owner_.address)
    .resultList
```

```sql
ORDER BY o.address, o.city_id
```

Using `orderByDescending` applies DESC to each expanded column:

```sql
ORDER BY o.address DESC, o.city_id DESC
```

### GROUP BY

Inline records expand in GROUP BY the same way. This is particularly useful in combination with scrolling, where grouping by a column makes it unique in the result set. Wrap the metamodel with `.key()` to indicate it can serve as a cursor:

```kotlin
data class CityOrderCount(val city: City, val count: Long)

val orders = orm.entity(Order::class)
val window = orders.select(CityOrderCount::class) { "${City::class}, COUNT(*)" }
    .groupBy(Order_.city)
    .scroll(Scrollable.of(Order_.city.key(), 20))
```

See [Scrolling: GROUP BY](#group-by) for details.

---

## Common Patterns

### Checking Existence

Use `existsBy` (Kotlin) or `.exists()` on the query builder (Java) to check whether a matching row exists without loading the full entity.

[Kotlin]

```kotlin
val exists: Boolean = orm.existsBy(User_.email, email)
```

[Java]

```java
boolean exists = orm.entity(User.class)
    .select()
    .where(User_.email, EQUALS, email)
    .exists();
```

### Count with Filter

Combine `where` with `count` to count rows matching a condition without loading the entities themselves. Storm translates this to a `SELECT COUNT(*)` query.
[Kotlin]

```kotlin
val count: Long = orm.entity(User::class)
    .select()
    .where(User_.city eq city)
    .count
```

[Java]

```java
long count = orm.entity(User.class)
    .select()
    .where(User_.city, EQUALS, city)
    .getCount();
```

### Finding a Single Result

When you expect at most one matching row, use `find` (Kotlin, returns `null` if not found) or `getOptionalResult` (Java, returns `Optional`). These methods throw if more than one row matches.

[Kotlin]

```kotlin
val user: User? = orm.find(User_.email eq email)
```

[Java]

```java
Optional<User> user = orm.entity(User.class)
    .select()
    .where(User_.email, EQUALS, email)
    .getOptionalResult();
```

---

## Tips

1. **Use the metamodel** -- `User_.email` catches typos at compile time; see [Metamodel](metamodel.md)
2. **Kotlin: choose your style** -- quick queries (`orm.find`, `orm.findAll`) for simple cases, query builder for complex operations
3. **Java: DSL or Templates** -- DSL for type-safe conditions, SQL Templates for complex SQL like CTEs, window functions, or database-specific features
4. **Entity graphs load in one query** -- related entities marked with `@FK` are JOINed automatically, no N+1 problems
5. **Close Java streams** -- always use try-with-resources with `Stream` results
6. **Combine conditions freely** -- use `and` / `or` in Kotlin, `it.where().and()` / `.or()` in Java to build complex predicates
7. **Always use the returned builder** -- `QueryBuilder` is immutable; methods like `where()`, `orderBy()`, and `limit()` return a new instance. Ignoring the return value silently loses the change. Chain calls or reassign the variable.

========================================
## Source: pagination-and-scrolling.md
========================================

# Pagination and Scrolling

Storm supports three strategies for retrieving subsets of a result set: manual offset/limit, offset-based pagination, and cursor-based scrolling. This page covers each in detail, including their trade-offs, type signatures, and advanced usage.
For a quick overview, see [Queries: Data Retrieval Strategies](queries.md#data-retrieval-strategies). ## Choosing a Strategy Storm provides three ways to retrieve a subset of query results. The right choice depends on how your application navigates the data and how large the result set is. | Feature | Offset and Limit | Pagination | Scrolling | |---------|-----------------|------------|-----------| | Navigation | manual | page number | cursor | | Result type | `List` | `Page` | `Window` | | Count query | no | yes | no | | Random access | yes | yes | no | | Navigation tokens | no | `nextPageable()` / `previousPageable()` | `next()` / `previous()` | | Performance on large datasets | degrades with offset | degrades with offset | constant | **Offset and Limit** gives raw control with `offset()` and `limit()` on the query builder. Both pagination and offset/limit use SQL `OFFSET` under the hood, which degrades on large tables because the database must scan and discard all skipped rows. **Pagination** wraps offset/limit with a `Page` container that includes total counts and page metadata. This is useful for UIs that display "Page 3 of 12" or need random page access. **Scrolling** uses keyset pagination: it remembers the last value seen and asks the database for rows after (or before) that value. The database seeks directly to the cursor position using an index, so performance stays constant regardless of depth. The trade-off is that you can only move forward or backward from the current position. ## Offset and Limit For direct offset/limit control, use `offset` and `limit` on the query builder. Always combine these with `orderBy` to ensure deterministic ordering. 
[Kotlin]

```kotlin
val results = orm.entity(User::class)
    .select()
    .orderBy(User_.createdAt)
    .offset(20)
    .limit(10)
    .resultList
```

[Java]

```java
List<User> results = orm.entity(User.class)
    .select()
    .orderBy(User_.createdAt)
    .offset(20)
    .limit(10)
    .getResultList();
```

## Pagination

Pagination navigates by page number and returns a `Page`. Each request typically requires two queries: a `SELECT COUNT(*)` to determine the total number of results, and a data query with `OFFSET`/`LIMIT` for the content.

Use the `page` terminal method on the query builder. Pass a `Pageable` to specify the page number and page size. The result is a `Page` containing the content, total count, and navigation methods.

[Kotlin]

```kotlin
val pageable = Pageable.ofSize(10)
val page: Page<User> = orm.entity(User::class)
    .select()
    .where(User_.active, EQUALS, true)
    .page(pageable)

// Navigate
if (page.hasNext()) {
    val nextPage = orm.entity(User::class)
        .select()
        .where(User_.active, EQUALS, true)
        .page(page.nextPageable())
}
```

[Java]

```java
Pageable pageable = Pageable.ofSize(10);
Page<User> page = orm.entity(User.class)
    .select()
    .where(User_.active, EQUALS, true)
    .page(pageable);

// Navigate
if (page.hasNext()) {
    Page<User> nextPage = orm.entity(User.class)
        .select()
        .where(User_.active, EQUALS, true)
        .page(page.nextPageable());
}
```

The `Page` record contains everything needed to build pagination controls:

| Field / Method | Description |
|---|---|
| `content` | The list of results for the current page |
| `totalCount` | Total number of matching rows across all pages |
| `pageNumber()` | Zero-based index of the current page |
| `pageSize()` | Maximum number of elements per page |
| `totalPages()` | Computed total number of pages |
| `hasNext()` / `hasPrevious()` | Whether adjacent pages exist |
| `nextPageable()` / `previousPageable()` | Returns a `Pageable` for the adjacent page |

### Sorting

Sort orders are specified on the `Pageable` using `sortBy` (ascending) and `sortByDescending` (descending).
Multiple calls append columns to build a multi-column sort, and the orders carry over automatically when navigating with `nextPageable()` or `previousPageable()`. You do not need to call `orderBy` separately on the query builder. [Kotlin] ```kotlin // Single column, ascending val pageable = Pageable.ofSize(10).sortBy(User_.createdAt) // Single column, descending val pageable = Pageable.ofSize(10).sortByDescending(User_.createdAt) // Multi-column: last name ascending, then first name descending val pageable = Pageable.ofSize(10) .sortBy(User_.lastName) .sortByDescending(User_.firstName) ``` [Java] ```java // Single column, ascending Pageable pageable = Pageable.ofSize(10).sortBy(User_.createdAt); // Single column, descending Pageable pageable = Pageable.ofSize(10).sortByDescending(User_.createdAt); // Multi-column: last name ascending, then first name descending Pageable pageable = Pageable.ofSize(10) .sortBy(User_.lastName) .sortByDescending(User_.firstName); ``` For the full `Page` and `Pageable` API reference, see [Repositories: Offset-Based Pagination](repositories.md#offset-based-pagination). ## Scrolling Scrolling navigates sequentially using a cursor and returns a `Window`. A `Window` represents a portion of the result set: it contains the data, informational flags (`hasNext`, `hasPrevious`) that indicate whether adjacent results existed at query time, and navigation tokens for sequential traversal, but no total count or page number. The typed navigation methods `next()` and `previous()` are always available when the window has content, regardless of whether `hasNext` or `hasPrevious` is `true`. This allows the developer to decide whether to follow a cursor, since new data may appear after the query was executed. Under the hood, scrolling uses keyset pagination: it remembers the last value seen on the current page and asks the database for rows after (or before) that value. 
This avoids the performance cliff of `OFFSET` on large tables, because the database can seek directly to the cursor position using an index. > **Info:** Scrolling requires a stable sort order. The final sort column must be unique (typically the primary key). Using a non-unique sort column like `createdAt` without a tiebreaker will produce duplicate or missing rows at page boundaries. Use the [sort overload](#sorting-by-non-unique-columns) (`Scrollable.of(key, sort, size)`) when sorting by a non-unique column. The `scroll` method is available directly on repositories and on the query builder. It accepts a `Scrollable` that captures the cursor state and returns a `Window` containing: | Field / Method | Description | |-------|-------------| | `content()` | The list of results for this window. | | `hasNext()` | `true` if more results existed beyond this window at query time. | | `hasPrevious()` | `true` if this window was fetched with a cursor position (i.e., not the first page). | | `next()` | Returns a typed `Scrollable` for the next window, or `null` if the window is empty. | | `previous()` | Returns a typed `Scrollable` for the previous window, or `null` if the window is empty. | The `nextScrollable()` and `previousScrollable()` raw record component accessors also exist, returning `Scrollable`. The typed `next()` and `previous()` methods are preferred for programmatic navigation. Create a `Scrollable` using the factory methods, or obtain one from a `Window`: | Method | Purpose | SQL effect | |--------|---------|------------| | `Scrollable.of(key, size)` | Request for the first page (ascending). | `ORDER BY key ASC LIMIT size+1` | | `Scrollable.of(key, size).backward()` | Request for the first page (descending). | `ORDER BY key DESC LIMIT size+1` | | `window.next()` | Request for the next page after the current window. | `WHERE key > cursor ORDER BY key ASC LIMIT size+1` | | `window.previous()` | Request for the previous page before the current window. 
| `WHERE key < cursor ORDER BY key DESC LIMIT size+1` | The extra row (`size+1`) is used internally to determine the value of `hasNext`, then discarded from the returned content. **Result ordering.** Forward scrolling returns results in ascending key order. Backward scrolling (via `.backward()`) returns results in **descending** key order. If you need ascending order for display after navigating backward, reverse the list. **No total count.** Unlike pagination, scrolling does not include a total element count. A separate `COUNT(*)` query must execute the same joins, filters, and conditions as the main query, which can be expensive on large or complex result sets. Total counts are also inherently unstable: rows may be inserted or deleted while a user navigates through pages, so the count can become stale between requests. Scrolling is designed for sequential "load more" or infinite-scroll patterns where a total is rarely needed. If you do need a total count (for example, for a UI label like "showing 10 of 4,827 results"), call the `count` (Kotlin) or `getCount()` (Java) method on the query builder separately, keeping in mind that the value is a snapshot that may drift as the underlying data changes. **REST cursor support.** For REST APIs that need to pass scroll state as an opaque string (for example, as a query parameter), `Window` provides `nextCursor()` and `previousCursor()` methods that serialize the scroll position to a cursor string. These convenience methods are gated by the informational flags: `nextCursor()` returns `null` when `hasNext()` is `false`, and `previousCursor()` returns `null` when `hasPrevious()` is `false`. This makes them safe to use directly in REST responses without additional checks. The underlying `next()` and `previous()` methods remain available whenever the window has content, so server-side code can still follow a cursor even when the flags indicate no more results were seen at query time. 
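The fetch-one-extra-row technique and the flag-gated cursor accessors described above can be sketched in a few lines of plain Java. `WindowSketch` and `fromRows` are hypothetical names for illustration, not Storm's actual `Window` implementation:

```java
import java.util.List;

// Sketch of the size+1 technique: query size+1 rows, use the extra row
// only to compute hasNext, and trim it from the returned content.
public class WindowSketch {

    record Window<R>(List<R> content, boolean hasNext) {
        // Gated accessor: an opaque cursor is only offered when more
        // results were known to exist at query time.
        String nextCursor() {
            return hasNext ? String.valueOf(content.getLast()) : null;
        }
    }

    static <R> Window<R> fromRows(List<R> fetched, int size) {
        boolean hasNext = fetched.size() > size;              // extra row present?
        List<R> content = hasNext ? fetched.subList(0, size)  // discard the extra row
                                  : fetched;
        return new Window<>(content, hasNext);
    }

    public static void main(String[] args) {
        // Asked for 3 rows; the database returned 4 (LIMIT 4 = size + 1).
        Window<Integer> w = fromRows(List.of(1, 2, 3, 4), 3);
        System.out.println(w.content() + " hasNext=" + w.hasNext()); // [1, 2, 3] hasNext=true
    }
}
```

This is why keyset scrolling can report `hasNext` without a separate count query: the presence or absence of the extra row answers the question for free.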
To reconstruct a `Scrollable` from a cursor string, use `Scrollable.fromCursor(key, cursor)`. For details on supported cursor types, security considerations, and custom codec registration, see [Cursor Serialization](cursors.md). [Kotlin] ```kotlin // Serialize cursor for REST response val cursor: String? = window.nextCursor() // Client sends cursor back in next request val scrollable = Scrollable.fromCursor(User_.id, cursor) val next = userRepository.scroll(scrollable) ``` [Java] ```java // Serialize cursor for REST response String cursor = window.nextCursor(); // Client sends cursor back in next request var scrollable = Scrollable.fromCursor(User_.id, cursor); var next = userRepository.scroll(scrollable); ``` **Basic usage.** Pass a `Metamodel.Key` that identifies a unique, indexed column (typically the primary key) and the desired page size. The key determines both ordering and the cursor column. Fields annotated with `@UK` or `@PK` automatically generate `Metamodel.Key` instances in the metamodel. See [Metamodel](metamodel.md#unique-keys-uk-and-metamodelkey) for details. > **Nullable keys.** If a `@UK` field is nullable and the default `nullsDistinct = true` applies, scroll methods throw a `PersistenceException` at runtime. Either use a non-nullable type, or set `@UK(nullsDistinct = false)` if the database constraint prevents duplicate NULLs. See [Nullable Unique Keys](metamodel.md#nullable-unique-keys) for details. For repository convenience methods, see [Repositories: Scrolling](repositories.md#scrolling). 
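One plausible shape for such an opaque cursor is the last-seen key value wrapped in URL-safe Base64. The sketch below is an illustrative assumption about how any codec of this kind works, not Storm's actual wire format (see Cursor Serialization for that):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch: round-trip a cursor value through an opaque, URL-safe string.
// This is NOT Storm's actual encoding; it only illustrates the idea of
// serializing the last-seen key so a client can echo it back.
public class CursorCodecSketch {

    static String encode(long lastSeenId) {
        byte[] raw = Long.toString(lastSeenId).getBytes(StandardCharsets.UTF_8);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
    }

    static long decode(String cursor) {
        byte[] raw = Base64.getUrlDecoder().decode(cursor);
        return Long.parseLong(new String(raw, StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        String cursor = encode(4827L);      // safe to embed in a query parameter
        System.out.println(decode(cursor)); // round-trips to the original id
    }
}
```

Treating the cursor as opaque on the client side is the important property: the server is free to change the encoding without breaking API consumers.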
Use `scroll` as a terminal operation on the query builder for filtering, joins, or projections: [Kotlin] ```kotlin val window = userRepository.select() .where(User_.active, EQUALS, true) .scroll(Scrollable.of(User_.id, 10)) ``` [Java] ```java var window = userRepository.select() .where(User_.active, EQUALS, true) .scroll(Scrollable.of(User_.id, 10)); ``` > **Warning:** The `scroll` method generates the ORDER BY clause from the key provided in the `Scrollable` (ascending for forward scrolling, descending for backward scrolling). Adding your own `orderBy()` call conflicts with the ordering that scrolling depends on, so Storm rejects the combination at runtime with a `PersistenceException`. [Kotlin] ```kotlin // Wrong: orderBy conflicts with scroll userRepository.select() .orderBy(User_.name) // PersistenceException at runtime .scroll(Scrollable.of(User_.id, 10)) // Correct: scroll handles ordering via the key userRepository.select() .scroll(Scrollable.of(User_.id, 10)) ``` [Java] ```java // Wrong: orderBy conflicts with scroll userRepository.select() .orderBy(User_.name) // PersistenceException at runtime .scroll(Scrollable.of(User_.id, 10)); // Correct: scroll handles ordering via the key userRepository.select() .scroll(Scrollable.of(User_.id, 10)); ``` ### Sorting by Non-Unique Columns The single-key `Scrollable.of(key, size)` uses the cursor column as both the sort column and the tiebreaker, which means the column must contain unique values. When you want to sort by a non-unique column (for example, a timestamp or status), use the overload that accepts a separate sort column: `Scrollable.of(key, sort, size)`. This accepts a unique `key` column (typically the primary key) as a tiebreaker for deterministic paging, and a `sort` column for the primary sort order. 
[Kotlin] ```kotlin // First page sorted by creation date ascending, with ID as tiebreaker val window = postRepository.select() .scroll(Scrollable.of(Post_.id, Post_.createdAt, 20)) // Next page (cursor values are captured in the Scrollable automatically). // next() is non-null whenever the window has content. // hasNext() is informational; the developer decides whether to follow the cursor. val next = postRepository.select() .scroll(window.next()) // First page sorted by creation date descending (most recent first) val latest = postRepository.select() .scroll(Scrollable.of(Post_.id, Post_.createdAt, 20).backward()) // Previous page val prev = postRepository.select() .scroll(window.previous()) ``` [Java] ```java // First page sorted by creation date, with ID as tiebreaker var window = postRepository.select() .scroll(Scrollable.of(Post_.id, Post_.createdAt, 20)); // Next page (cursor values are captured in the Scrollable automatically). // next() is non-null whenever the window has content. // You can check hasNext() if you only want to proceed when more results // were known to exist at query time, or follow the cursor unconditionally // to pick up data that may have arrived after the query. var next = window.next(); if (next != null) { var nextWindow = postRepository.select() .scroll(next); } // Previous page var previous = window.previous(); if (previous != null) { var prev = postRepository.select() .scroll(previous); } ``` The `Window` carries navigation tokens (`next()`, `previous()`) that encode the cursor values internally, so the client does not need to extract cursor values manually. The generated SQL uses a composite WHERE condition that maintains correct ordering even when `sort` values repeat: ```sql WHERE (created_at > ? OR (created_at = ? AND id > ?)) ORDER BY created_at ASC, id ASC LIMIT 21 ``` As with the single-key variant, scrolling manages ORDER BY internally and rejects any explicit `orderBy()` call. 
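The composite keyset condition above generalizes to any (sort, tiebreaker) pair. The following sketch shows how such a WHERE/ORDER BY fragment could be assembled; it is an illustrative helper with hypothetical names, not Storm's SQL generator:

```java
// Sketch: build the keyset WHERE/ORDER BY fragment for a non-unique sort
// column plus a unique tiebreaker. Values stay as ? placeholders,
// mirroring Storm's parameterized-by-default style.
public class KeysetSqlSketch {

    static String keysetClause(String sort, String key, boolean forward) {
        String op = forward ? ">" : "<";
        String dir = forward ? "ASC" : "DESC";
        return "WHERE (" + sort + " " + op + " ? OR (" + sort + " = ? AND "
                + key + " " + op + " ?)) ORDER BY " + sort + " " + dir
                + ", " + key + " " + dir;
    }

    public static void main(String[] args) {
        System.out.println(keysetClause("created_at", "id", true));
        // → WHERE (created_at > ? OR (created_at = ? AND id > ?)) ORDER BY created_at ASC, id ASC
    }
}
```

Note that backward scrolling flips both the comparison operator and the sort direction together; flipping only one would break the correspondence between cursor position and row order.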
**Indexing.** For scrolling with sort to perform well, create a composite index that covers both columns in the correct order: ```sql CREATE INDEX idx_post_created_id ON post (created_at, id); ``` This allows the database to seek directly to the cursor position and scan forward, giving consistent performance regardless of page depth. ### GROUP BY and Aggregated Projections When a query uses GROUP BY, the grouped column produces unique values in the result set even if the column itself is not annotated with `@UK`. In this case, wrap the metamodel with `.key()` (Kotlin) or `Metamodel.key()` (Java) to indicate it can serve as a scrolling cursor: [Kotlin] ```kotlin val window = orm.query(Order::class) .select(Order_.city, "COUNT(*)") .groupBy(Order_.city) .scroll(Scrollable.of(Order_.city.key(), 20)) ``` [Java] ```java var window = orm.query(Order.class) .select(Order_.city, "COUNT(*)") .groupBy(Order_.city) .scroll(Scrollable.of(Metamodel.key(Order_.city), 20)); ``` See [Manual Key Wrapping](metamodel.md#manual-key-wrapping) for more details. ### Window Type Parameters `Window` is a record with a single type parameter: `R` is the result type. It provides result content, cursor-based string navigation (`nextCursor()`, `previousCursor()`), and typed `Scrollable` navigation via the generic `next()` and `previous()` convenience methods for programmatic traversal. The raw record component accessors `nextScrollable()` and `previousScrollable()` return `Scrollable`. The repository convenience method `scroll()` returns `Window`. The query builder `scroll()` also returns `Window`. For entity queries, `Window` carries `Scrollable` navigation tokens and the typed `next()` / `previous()` methods provide typed access. 
For queries where the result type differs from the entity type (for example, selecting into a data class that combines columns from multiple sources), `Window` does not carry navigation tokens because Storm cannot extract cursor values from a result type it does not know how to navigate. In this case, `next()` and `previous()` return `null` (even when the window has content), and `hasNext()` still works correctly as an informational flag. To continue scrolling, check `hasNext()` and construct the next `Scrollable` manually using cursor values from your result:

[Kotlin]

```kotlin
data class OrderSummary(val city: Ref<City>, val orderCount: Long) : Data

val window: Window<OrderSummary> = orm.selectFrom(Order::class, OrderSummary::class) {
        """${Order_.city.id}, COUNT(*)"""
    }
    .groupBy(Order_.city)
    .scroll(Scrollable.of(Order_.city.key(), 20))

// Navigation tokens are null because OrderSummary != Order.
// Construct the next scrollable manually from the last result.
// hasNext() is informational; the developer decides whether to follow the cursor.
val lastCity = window.content.last().city.id()
val next: Window<OrderSummary> = orm.selectFrom(Order::class, OrderSummary::class) { ... }
    .groupBy(Order_.city)
    .scroll(Scrollable.of(Order_.city.key(), lastCity, 20))
```

[Java]

```java
record OrderSummary(Ref<City> city, long orderCount) implements Data {}

Window<OrderSummary> window = orm.selectFrom(Order.class, OrderSummary.class,
        RAW."""SELECT \{Order_.city.id}, COUNT(*)""")
    .groupBy(Order_.city)
    .scroll(Scrollable.of(Metamodel.key(Order_.city), 20));

// Navigation tokens are null because OrderSummary != Order.
// Construct the next scrollable manually from the last result.
// hasNext() is informational; the developer decides whether to follow the cursor.
var lastCity = window.content().getLast().city().id();
Window<OrderSummary> next = orm.selectFrom(Order.class, OrderSummary.class, ...)
.groupBy(Order_.city) .scroll(Scrollable.of(Metamodel.key(Order_.city), lastCity, 20)); ``` ## Pagination vs Scrolling Summary | | Pagination | Scrolling | |---|---|---| | Request | `Pageable` | `Scrollable` | | Result | `Page` | `Window` | | Method | `page(pageable)` | `scroll(scrollable)` | | Navigate forward | `page.nextPageable()` | `window.next()` | | Navigate backward | `page.previousPageable()` | `window.previous()` | ======================================== ## Source: metamodel.md ======================================== # Static Metamodel The static metamodel is a code generation feature that creates companion classes for your entities at compile time. These generated classes provide type-safe references to entity fields, enabling the compiler to catch errors that would otherwise surface only at runtime. Using the metamodel is optional. Storm works without it using SQL Templates or string-based field references. However, for projects that want to leverage Storm's full capabilities, the metamodel provides significant benefits in terms of type safety, IDE support, and maintainability. ## Why Use a Metamodel? Storm uses Kotlin data classes and Java records as entities. While this stateless approach simplifies the programming model, it presents a challenge: how do you reference entity fields in a type-safe way without using reflection at runtime? The metamodel solves this by generating accessor classes during compilation. These classes provide direct access to record components without reflection, which offers two advantages: - **Performance.** No reflection overhead when accessing field metadata or values. - **Type safety.** The compiler verifies field references, catching typos and type mismatches before your code runs. ## What is the Metamodel? 
For each entity class, Storm generates a corresponding metamodel class with a `_` suffix (following the JPA naming convention):

```
┌─────────────────┐                     ┌─────────────────┐
│     Entity      │  KSP / Annotation   │    Metamodel    │
│                 │     Processor       │                 │
│    User.kt      │  ───────────────►   │   User_.java    │
│    City.kt      │                     │   City_.java    │
│   Country.kt    │                     │  Country_.java  │
└─────────────────┘                     └─────────────────┘
```

The metamodel contains typed references to each field that can be used in queries.

## Installation

The metamodel is **optional**. Storm works without it using SQL Templates or string-based field references. However, if you want compile-time type safety for your queries, you need to configure a code generator that creates the metamodel classes during compilation.

- **Kotlin projects** use KSP (Kotlin Symbol Processing)
- **Java projects** use an annotation processor

The generator scans your entity classes and creates corresponding metamodel classes (e.g., `User_` for `User`) in the same package.

### Gradle (Kotlin with KSP)

```kotlin
plugins {
    id("com.google.devtools.ksp") version "2.0.21-1.0.28"
}

dependencies {
    ksp("st.orm:storm-metamodel-processor:@@STORM_VERSION@@")
}
```

### Gradle (Java)

```kotlin
annotationProcessor("st.orm:storm-metamodel-processor:@@STORM_VERSION@@")
```

### Maven (Java)

```xml
<dependency>
    <groupId>st.orm</groupId>
    <artifactId>storm-metamodel-processor</artifactId>
    <version>@@STORM_VERSION@@</version>
    <scope>provided</scope>
</dependency>
```

> **Important:** Metamodel classes are generated at compile time. When you create or modify an entity, you must rebuild your project (or run the KSP/annotation processor task) before the corresponding metamodel class becomes available. Until then, your IDE will show errors for references like `User_`.

## Usage

Once the metamodel is generated, you use the `_` suffixed classes in place of string-based field references throughout your queries. The metamodel provides type-safe field accessors that the compiler can verify, so a renamed or removed field produces a compile error rather than a runtime exception.
The following examples demonstrate the metamodel in queries for both Kotlin and Java.

[Kotlin]

```kotlin
// Type-safe field reference
val users = orm.findAll(User_.email eq email)

// Type-safe access to nested fields throughout the entire entity graph
val users = orm.findAll(User_.city.country.code eq "US")

// Multiple conditions
val users = orm.entity(User::class)
    .select()
    .where(
        (User_.city eq city) and
        (User_.birthDate less LocalDate.of(2000, 1, 1))
    )
    .resultList
```

[Java]

```java
// Type-safe field reference
List<User> users = orm.entity(User.class)
    .select()
    .where(User_.email, EQUALS, email)
    .getResultList();

// Type-safe access to nested fields throughout the entire entity graph
List<User> users = orm.entity(User.class)
    .select()
    .where(User_.city.country.code, EQUALS, "US")
    .getResultList();
```

### SQL Templates (Java)

```java
Optional<User> user = orm.query(RAW."""
        SELECT \{User.class}
        FROM \{User.class}
        WHERE \{User_.email} = \{email}""")
    .getOptionalResult(User.class);
```

## Path Resolution

Storm supports two forms of metamodel references: **nested paths** and **short form**. Understanding when to use each is important for writing correct queries.

Consider an entity graph where `User` has a `city` field pointing to `City`, which has a `country` field pointing to `Country`:

```
┌────────┐       ┌────────┐       ┌──────────┐
│  User  │──────►│  City  │──────►│ Country  │
└────────┘ city  └────────┘country└──────────┘
```

### Nested Paths (Fully Qualified)

Nested paths traverse from the root entity through relationships by chaining field accessors:

```kotlin
// Start from User, traverse city → country → name
User_.city.country.name eq "United States"
```

Each step in the path corresponds to a foreign key relationship in your entity model:

```
User_.city      →  User has FK to City
    .country    →  City has FK to Country
    .name       →  Country.name column
```

**Why nested paths are always unambiguous:** When you write `User_.city.country.name`, Storm knows exactly which tables to join and in what order.
Even if `Country` appears multiple times in your entity graph (e.g., via different relationships), the nested path explicitly identifies which occurrence you mean. Storm automatically generates the necessary JOINs based on the path. For the example above: ```sql SELECT ... FROM user u INNER JOIN city c ON u.city_id = c.id INNER JOIN country co ON c.country_id = co.id WHERE co.name = 'United States' ``` Each segment of the path gets its own table alias, and Storm tracks the mapping between paths and aliases internally. ### Short Form Short form uses the target table's metamodel directly: ```kotlin // Reference Country directly Country_.name eq "United States" ``` Short form works **only when the table appears exactly once** in the entity graph. If `Country` is referenced in multiple places, Storm cannot determine which one you mean. **Example where short form works:** ```kotlin data class User( @PK val id: Int = 0, val name: String, @FK val city: City // City → Country (only path to Country) ) : Entity // Short form works - Country appears only once in User's entity graph val users = orm.entity(User::class) .select() .whereAny(Country_.name eq "United States") // Resolves to User → City → Country .resultList ``` The short form `Country_.name` works here because Storm first establishes `User` as the root entity, then looks up `Country` in User's entity graph. Since there's only one path to `Country` (via `city.country`), it's unambiguous. Note the use of `whereAny` instead of `where`. The `where` method requires predicates typed to the root entity (`User`), while `whereAny` accepts predicates for any table in the entity graph. Since `Country_.name` produces a `Country`-typed predicate, `whereAny` is required. **Type safety considerations:** - **`where`** is fully type-safe. The predicate must be rooted at the query's entity type, so column lookup is guaranteed to succeed at runtime. 
- **`whereAny`** is type-safe for the values you pass (e.g., comparing a `String` field to a `String` value), but the column lookup may fail at runtime if the referenced table doesn't exist in the entity graph or appears multiple times (ambiguity). Use nested paths or ensure uniqueness to avoid runtime exceptions. **Example where short form fails:** When `Country` appears multiple times in the entity graph, Storm cannot determine which one you mean: ``` ┌────────┐ ┌──────────┐ ┌────►│ City │──────►│ Country │ (path 1: city.country) ┌────────┐ │ └────────┘country└──────────┘ │ User │────┤ └────────┘ │ ┌──────────┐ └─────────────────────►│ Country │ (path 2: birthCountry) birthCountry └──────────┘ ``` ```kotlin data class User( @PK val id: Int = 0, val name: String, @FK val city: City, // City → Country (path 1) @FK val birthCountry: Country // Direct reference (path 2) ) : Entity // ERROR: Multiple paths to Country in User's entity graph val users = orm.entity(User::class) .select() .whereAny(Country_.name eq "United States") .resultList // OK: Nested paths are unambiguous (and can use where since they're rooted at User_) val users = orm.entity(User::class) .select() .where(User_.city.country.name eq "United States") .resultList val users = orm.entity(User::class) .select() .where(User_.birthCountry.name eq "United States") .resultList ``` When Storm detects ambiguity, it throws an exception with a message indicating which paths are available. ### Custom Joins Sometimes you need to join a table that has no `@FK` relationship defined in your entity model. For example, you might query users and filter by their orders without adding an `orders` field to the `User` entity. Custom joins add these tables to the query at runtime, making them available for filtering and projection. 
Custom joins add tables that are not part of the entity graph: ``` Entity Graph Custom Join ───────────── ─────────── ┌────────┐ ┌────────┐ │ User │──────►│ City │ ┌─────────┐ └────────┘ └────────┘ ┌───►│ Order │ (added via innerJoin) │ │ └─────────┘ └───────────────────────────┘ (manual join) ``` When you add custom joins to a query, those joined tables can **only** be referenced using short form: ```kotlin val users = orm.entity(User::class) .select() .innerJoin(Order::class).on(User::class) // Custom join .whereAny(Order_.total greater BigDecimal(100)) // Short form required, use whereAny .resultList ``` Custom joins are not part of the entity graph traversal, so nested paths cannot reach them. The short form works here because Storm registers the custom join's alias. Use `whereAny` since the predicate references `Order`, not the root entity `User`. **Uniqueness still applies:** If you join the same table multiple times, you must use the `join` method with explicit aliases to disambiguate: ```kotlin val users = orm.entity(User::class) .select() .join(JoinType.inner(), Order::class, "recent").on(User::class) .join(JoinType.inner(), Order::class, "first").on(User::class) .where(/* use SQL template with explicit aliases */) .resultList ``` ### Resolution Order When resolving a metamodel reference, Storm follows this order: 1. **Nested path.** If a path is specified (e.g., `User_.city.country`), use the alias for that specific traversal. 2. **Unique table lookup.** If short form (e.g., `Country_`), check if the table appears exactly once in the entity graph or registered joins. 3. **Error.** If multiple paths exist, throw an exception indicating the ambiguity. ### Best Practices 1. **Prefer nested paths** for clarity and to avoid ambiguity issues 2. **Use short form** for custom joins (required) or when you're certain the table is unique 3. **Check error messages.** Storm tells you which paths are available when ambiguity is detected. 
## Generated Code Understanding the generated code helps when debugging or reading compiler errors. The metamodel mirrors your entity structure, creating a static field for each entity field. Each field carries generic type parameters that encode both the root entity type and the field's value type, which is how the compiler enforces type safety in queries. ``` Entity Metamodel ────── ───────── User User_ ├── id: Int (PK) ├── id → Metamodel.Key ├── email: String ├── email → Metamodel ├── name: String ├── name → Metamodel └── city: City (FK) └── city → CityMetamodel ├── id ├── name └── country → CountryMetamodel ``` For an entity like: ```kotlin data class User( @PK val id: Int = 0, val email: String, val name: String, @FK val city: City ) : Entity ``` The metamodel generates an interface with typed field accessors: ```java @Generated("st.orm.metamodel.MetamodelProcessor") public interface User_ extends Metamodel { /** Represents the {@link User#id} field. */ AbstractKeyMetamodel id = ...; /** Represents the {@link User#email} field. */ AbstractMetamodel email = ...; /** Represents the {@link User#name} field. */ AbstractMetamodel name = ...; /** Represents the {@link User#city} foreign key. */ CityMetamodel city = ...; } ``` Foreign key fields like `city` generate their own metamodel classes, enabling navigation through relationships with full type safety. ## Unique Keys (`@UK`) and `Metamodel.Key` Use `@UK` on fields that have a unique constraint in the database. Fields annotated with `@UK` indicate that the corresponding column contains unique values. The metamodel processor generates `Metamodel.Key` instances for these fields, enabling type-safe single-result lookups and scrolling. The `@PK` annotation is meta-annotated with `@UK`, so primary key fields are automatically recognized as unique keys without needing an explicit `@UK` annotation. 
### Defining Unique Keys [Kotlin] ```kotlin data class User( @PK val id: Int = 0, @UK val email: String, val name: String ) : Entity ``` [Java] ```java record User(@PK Integer id, @UK String email, String name ) implements Entity {} ``` The metamodel processor generates `Metamodel.Key` fields for `id` (via `@PK`) and `email` (via `@UK`): ``` User_ ├── id → Metamodel.Key (via @PK, which implies @UK) ├── email → Metamodel.Key (via @UK) └── name → Metamodel ``` ### Compound Unique Keys For compound unique constraints spanning multiple columns, use an inline record annotated with `@UK`. When the compound key columns overlap with other fields on the entity, combine `@UK` with `@Persist(insertable = false, updatable = false)` to prevent duplicate persistence: [Kotlin] ```kotlin data class UserEmailUk(val userId: Int, val email: String) data class SomeEntity( @PK val id: Int = 0, @FK val user: User, val email: String, @UK @Persist(insertable = false, updatable = false) val uniqueKey: UserEmailUk ) : Entity ``` [Java] ```java record UserEmailUk(int userId, String email) {} record SomeEntity(@PK Integer id, @Nonnull @FK User user, @Nonnull String email, @UK @Persist(insertable = false, updatable = false) UserEmailUk uniqueKey ) implements Entity {} ``` The metamodel processor generates a `Metamodel.Key` for the compound field, which can be used for lookups and scrolling just like a single-column key. ### Using Keys for Lookups `Metamodel.Key` enables type-safe single-result lookups through the repository: [Kotlin] ```kotlin val user: User? 
    = userRepository.findBy(User_.email, "alice@example.com")
val user: User = userRepository.getBy(User_.email, "alice@example.com") // throws if not found
```

[Java]

```java
Optional<User> user = userRepository.findBy(User_.email, "alice@example.com");
User user = userRepository.getBy(User_.email, "alice@example.com"); // throws if not found
```

### Using Keys for Scrolling

`Metamodel.Key` is also required for scrolling, where the cursor column must be unique:

[Kotlin]

```kotlin
val window: Window<User> = userRepository.scroll(Scrollable.of(User_.id, 20))

// next() is non-null when the window has content.
// hasNext() is informational; the developer decides whether to follow the cursor.
val nextWindow: Window<User> = userRepository.scroll(window.next())
```

Compound unique keys work the same way. The inline record is used as the cursor value:

```kotlin
val window: Window<SomeEntity> = repository.scroll(Scrollable.of(SomeEntity_.uniqueKey, 20))
val nextWindow: Window<SomeEntity> = repository.scroll(window.next())
```

[Java]

```java
Window<User> window = userRepository.scroll(Scrollable.of(User_.id, 20));

// next() is non-null when the window has content.
// hasNext() is informational; the developer decides whether to follow the cursor.
Window<User> next = userRepository.scroll(window.next());
```

Compound unique keys work the same way:

```java
Window<SomeEntity> window = repository.scroll(Scrollable.of(SomeEntity_.uniqueKey, 20));
Window<SomeEntity> next = repository.scroll(window.next());
```

See [Pagination and Scrolling: Scrolling](pagination-and-scrolling.md#scrolling) for full details.
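Scrolling is keyset pagination: each window is produced by filtering on the last key of the previous window, the SQL equivalent of `WHERE id > :cursor ORDER BY id LIMIT :n`. The following plain-Kotlin sketch shows the underlying pattern over an in-memory list; the `Window` and `scroll` names below are simplified stand-ins, not Storm's actual API:

```kotlin
// Keyset pagination over an in-memory list: each call fetches the next
// window strictly after the cursor, ordered by the unique key.
data class Window(val rows: List<Int>, val nextCursor: Int?) {
    fun hasNext(): Boolean = nextCursor != null
}

fun scroll(ids: List<Int>, after: Int?, size: Int): Window {
    // Equivalent of: SELECT id FROM t WHERE id > :after ORDER BY id LIMIT :size
    val rows = ids.sorted().filter { after == null || it > after }.take(size)
    return Window(rows, rows.lastOrNull())
}

fun main() {
    val ids = listOf(5, 3, 1, 4, 7, 2, 6)
    var window = scroll(ids, after = null, size = 3)
    println(window.rows)                       // [1, 2, 3]
    window = scroll(ids, window.nextCursor, 3)
    println(window.rows)                       // [4, 5, 6]
}
```

Because each window is filtered by a comparison against the cursor rather than skipped with OFFSET, performance stays constant regardless of how deep you scroll — and this comparison is also why the cursor column must be unique and, as discussed under Nullable Unique Keys, non-null.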
### Manual Key Wrapping

For dynamically constructed metamodels or composite keys where the processor does not generate a `Key` instance, use `Metamodel.key()` (or the `.key()` extension in Kotlin) to wrap an existing metamodel:

[Kotlin]

```kotlin
val key: Metamodel.Key<User, String> = Metamodel.key(Metamodel.of(User::class.java, "email"))
```

[Java]

```java
Metamodel.Key<User, String> key = Metamodel.key(Metamodel.of(User.class, "email"));
```

This is also useful when a column that is not annotated with `@UK` becomes unique in the context of a query, for example because of a GROUP BY clause. In that case, the column can serve as a scrolling cursor even though the metamodel processor did not generate a `Key` for it:

[Kotlin]

```kotlin
val ordersByCity = orm.query(Order::class)
    .select(Order_.city, "COUNT(*)")
    .groupBy(Order_.city)
    .scroll(Scrollable.of(Order_.city.key(), 20))
```

[Java]

```java
var ordersByCity = orm.query(Order.class)
    .select(Order_.city, "COUNT(*)")
    .groupBy(Order_.city)
    .scroll(Scrollable.of(Metamodel.key(Order_.city), 20));
```

Callers are responsible for ensuring that the column contains unique values in the result set.

### Nullable Unique Keys

In standard SQL, `NULL != NULL`. This means a `UNIQUE` constraint typically allows multiple rows with `NULL` in the unique column, because each `NULL` is considered distinct from every other `NULL`. While this behavior is well-defined in the SQL standard, it has practical implications for two Storm features: single-result lookups and scrolling.

**Single-result lookups (`findBy`, `getBy`) are safe.** These methods throw if the query returns more than one row. Even if multiple `NULL` rows exist, the lookup either finds zero or one match (when searching for a non-null value) or throws an exception (when multiple rows match). There is no risk of silently returning the wrong result.

**Scrolling is not safe with nullable keys.** Scrolling works by adding a `WHERE key > cursor` (or `WHERE key < cursor`) condition.
In SQL, any comparison with `NULL` evaluates to `UNKNOWN`, which means rows with `NULL` in the key column are silently excluded from the result set. This can cause missing data without any error or indication that rows were skipped. Because of this, Storm validates nullable unique keys at two levels: 1. **Compile-time warning.** The metamodel processor emits a warning when a `@UK` field is nullable (a nullable type in Kotlin, or a reference type without `@Nonnull` in Java) and the default `nullsDistinct = true` applies. 2. **Runtime check.** The `scroll` method throws a `PersistenceException` if the key's metamodel indicates that nulls are distinct for a nullable field, preventing silent data loss. Database behavior varies. Some databases offer stricter NULL handling for unique constraints: - **PostgreSQL 15+** supports `NULLS NOT DISTINCT` on unique indexes, which rejects duplicate `NULL` values. - **SQL Server** allows only one `NULL` by default in a unique index (unless a filtered index is used). - **Most other databases** (MySQL, MariaDB, Oracle, H2) follow the SQL standard and allow multiple `NULL` values. The `@UK` annotation provides a `nullsDistinct` attribute to control this behavior: | Field | `nullsDistinct` | Effect | |-------|-----------------|--------| | `@UK @Nonnull String email` | (irrelevant) | Safe. No warning, no runtime check. | | `@UK int count` | (irrelevant) | Safe. Primitive is never null. | | `@UK String email` | `true` (default) | Compile-time warning. `scroll` throws `PersistenceException`. | | `@UK(nullsDistinct = false) String email` | `false` | No warning. `scroll` works (user asserts DB prevents duplicate NULLs). | When `nullsDistinct` is set to `false`, you are telling Storm that your database constraint prevents duplicate `NULL` values in the column. Storm trusts this assertion and skips both the compile-time warning and the runtime check. 
Use this only when your database actually enforces this guarantee (for example, with a `NULLS NOT DISTINCT` unique index in PostgreSQL 15+, or on SQL Server where unique indexes allow at most one `NULL` by default). The following examples show how to define unique keys that are safe for scrolling. [Kotlin] ```kotlin // Safe (non-nullable) data class User( @PK val id: Int = 0, @UK val email: String, // Non-nullable, safe for scrolling val name: String ) : Entity // Opt-in for nullable keys data class User( @PK val id: Int = 0, @UK(nullsDistinct = false) val email: String?, // DB prevents duplicate NULLs val name: String ) : Entity ``` [Java] ```java // Safe (non-nullable) record User(@PK Integer id, @UK @Nonnull String email, // Non-nullable, safe for scrolling String name ) implements Entity {} // Opt-in for nullable keys record User(@PK Integer id, @UK(nullsDistinct = false) String email, // DB prevents duplicate NULLs String name ) implements Entity {} ``` In most cases, the simplest approach is to ensure your unique key fields are non-nullable. If nullability is required, verify that your database constraint actually prevents duplicate `NULL` values before setting `nullsDistinct = false`. ## Working with Metamodel Programmatically Beyond compile-time query construction, the `Metamodel` interface provides several runtime methods for working with entity metadata and values programmatically. ### Extracting Field Values `Metamodel.getValue(record)` extracts the value of the field represented by a metamodel from a given record instance. This works for any metamodel, including nested paths. If any intermediate record in the path is `null`, the method returns `null`. 
[Kotlin]

```kotlin
val user = User(id = 1, email = "alice@example.com", name = "Alice", city = someCity)

// Extract the email value from the user record
val email = User_.email.getValue(user) // "alice@example.com"

// Extract a nested value through the entity graph
val countryName = User_.city.country.name.getValue(user) // "United States"
```

[Java]

```java
var user = new User(1, "alice@example.com", "Alice", someCity);

// Extract the email value from the user record
Object email = User_.email.getValue(user); // "alice@example.com"

// Extract a nested value through the entity graph
Object countryName = User_.city.country.name.getValue(user); // "United States"
```

### Flattening Inline Records

`Metamodel.flatten()` expands an inline record (embedded component) into its individual leaf column metamodels. If the metamodel already represents a leaf column, it returns a singleton list containing itself. This is the same expansion Storm performs internally for ORDER BY and GROUP BY clauses.

[Kotlin]

```kotlin
// If Address is an inline record with (street, cityId) fields:
val leafColumns = Owner_.address.flatten()
// Returns: [Owner_.address.street, Owner_.address.city]
```

[Java]

```java
// If Address is an inline record with (street, cityId) fields:
List<Metamodel<Owner, ?>> leafColumns = Owner_.address.flatten();
// Returns: [Owner_.address.street, Owner_.address.city]
```

### Canonical Form for Equality Checks

`Metamodel.canonical()` returns a path-independent form of a metamodel that captures only the table type and field name. Two metamodels that refer to the same underlying field (but are reached through different paths in the entity graph) will have equal canonical forms. This is useful for programmatic comparison of metamodels.
[Kotlin] ```kotlin // These two metamodels reach the same Country.name field through different paths val path1 = User_.city.country.name val path2 = Order_.shippingAddress.country.name // Their canonical forms are equal path1.canonical() == path2.canonical() // true ``` [Java] ```java // These two metamodels reach the same Country.name field through different paths var path1 = User_.city.country.name; var path2 = Order_.shippingAddress.country.name; // Their canonical forms are equal path1.canonical().equals(path2.canonical()); // true ``` ### Wrapping as a Key `Metamodel.key(metamodel)` wraps any metamodel as a `Metamodel.Key`, indicating that the column can serve as a unique cursor for scrolling. If the metamodel already implements `Key`, it is returned as-is. See [Manual Key Wrapping](#manual-key-wrapping) for usage examples. ## `@GenerateMetamodel` Annotation By default, the metamodel processor generates metamodel classes for all records that implement `Entity` or `Projection`. If you have a plain record (or data class) that does not implement either interface but you still want a metamodel generated for it, annotate it with `@GenerateMetamodel`. 
This is useful for: - Inline records (embedded components) that you want to reference in queries via the metamodel - `Data` implementations used in custom SQL templates - Any non-entity record where you want compile-time type-safe field references [Kotlin] ```kotlin @GenerateMetamodel data class Address( val street: String, @FK val city: City ) // Now Address_ is available for type-safe references val addresses = orm.query { """ SELECT ${Address::class} FROM ${Address::class} WHERE ${Address_.street} LIKE ${"%Main%"} """ } ``` [Java] ```java @GenerateMetamodel record Address(String street, @FK City city) {} // Now Address_ is available for type-safe references var addresses = orm.query(RAW.""" SELECT \{Address.class} FROM \{Address.class} WHERE \{Address_.street} LIKE \{"%Main%"}"""); ``` The `@GenerateMetamodel` annotation is located in `st.orm.core.template` and requires the `storm-core` dependency at compile time (provided scope is sufficient). ## Benefits 1. **Compile-time safety.** Typos caught at compile time, not runtime. 2. **IDE support.** Auto-completion for field names. 3. **Refactoring.** Rename fields safely; the compiler catches all usages. 4. **Type checking.** Can't compare a String field to an Integer. ## Without the Metamodel The metamodel is not required. You can use Storm with SQL Templates (Java) or raw query methods and string-based field references. This approach works well for prototyping, small projects, or queries that are too dynamic to express through the DSL. The trade-off is that field references become strings, which the compiler cannot verify. Typos and type mismatches will surface as runtime exceptions rather than compile errors. ## Tips 1. **Rebuild after changes.** Run `./gradlew build` or `mvn compile` after adding or modifying entity fields. 2. **Check your IDE setup.** Ensure KSP (Kotlin) or annotation processing (Java) is enabled in your IDE settings. 3. 
**Use for all queries.** Consistent use of metamodel prevents runtime errors.

========================================
## Source: refs.md
========================================

# Refs

Refs are lightweight identifiers for entities, projections, and other data types that defer fetching until explicitly required. They optimize performance by avoiding unnecessary data retrieval and are useful for managing large object graphs.

---

## Using Refs in Entities

To declare a relationship as a Ref, replace the direct type with `Ref` in the field declaration. Storm stores only the foreign key column value and does not generate a JOIN for the referenced table. This reduces the width of SELECT queries and avoids loading data you may never access.

[Kotlin]

```kotlin
data class User(
    @PK val id: Int = 0,
    val email: String,
    @FK val city: Ref<City> // Lightweight reference
) : Entity
```

The `city` field contains only the foreign key ID, not the full `City` entity. Compare this with declaring `@FK val city: City`, which would load the full `City` (and its transitive `@FK` relationships) via auto-generated JOINs on every query.

[Java]

The Java API uses `Ref` in the same way as Kotlin. Declare the record component with `Ref<City>` instead of `City` to store only the foreign key.

```java
record User(@PK Integer id,
            String email,
            @FK Ref<City> city // Lightweight reference
) implements Entity {}
```

The `city` field contains only the foreign key ID, not the full `City` entity.

---

## Fetching

When you need the full referenced entity, call `fetch()`. This triggers a database lookup (or cache hit) on demand, loading only the data you actually need at the point you need it.

[Kotlin]

```kotlin
val user = orm.get(User_.id eq userId)
val city: City = user.city.fetch() // Loads from database
```

[Java]

Call `fetch()` to load the referenced entity on demand.
```java
Optional<User> user = orm.entity(User.class)
    .select()
    .where(User_.id, EQUALS, userId)
    .getOptionalResult();
City city = user.map(u -> u.city().fetch()).orElse(null); // Loads from database
```

---

## Preventing Circular Dependencies

Without Refs, an entity that references its own type would cause infinite recursion during auto-join generation: `User` joins `User`, which joins `User`, and so on. Declaring the self-referential field as `Ref` breaks the cycle. Storm stores only the foreign key and does not attempt to join the table to itself. This pattern applies to any recursive or hierarchical data model, such as organizational trees, threaded comments, or referral chains.

[Kotlin]

```kotlin
data class User(
    @PK val id: Int = 0,
    val email: String,
    @FK val city: City,
    @FK val invitedBy: Ref<User>? // Self-reference
) : Entity
```

[Java]

```java
record User(@PK Integer id,
            String email,
            @FK City city,
            @Nullable @FK Ref<User> invitedBy // Self-reference
) implements Entity {}
```

---

## Selecting Refs

When you need to collect entity identifiers without loading full rows, select refs directly. This is useful for building ID lists to pass into subsequent queries (e.g., batch lookups or IN clauses) without the memory overhead of full entity hydration.

[Kotlin]

```kotlin
val role: Role = ...
val userRefs: Flow<Ref<User>> = orm.entity(UserRole::class)
    .selectRef(User::class)
    .where(UserRole_.role eq role)
    .resultFlow
```

[Java]

Selecting refs in Java returns a `List` of `Ref` objects. You can also use SQL templates to achieve the same result with more control over the query structure.
```java
Role role = ...;
List<Ref<User>> users = orm.entity(UserRole.class)
    .selectRef(User.class)
    .where(UserRole_.role, EQUALS, role)
    .getResultList();
```

Using SQL Templates:

```java
List<Ref<User>> users = orm.query(RAW."""
        SELECT \{select(User.class, SelectMode.PK)}
        FROM \{UserRole.class}
        WHERE \{role}""")
    .getRefList(User.class, Integer.class);
```

---

## Using Refs in Queries

[Kotlin]

Refs integrate directly into query filter expressions. You can pass a collection of Refs to an `inRefs` clause, which generates an `IN (...)` SQL expression using only the primary key values. This lets you chain queries efficiently: select refs from one query, then use them as filters in the next.

```kotlin
val userRefs: List<Ref<User>> = ...
val roles: List<Role> = orm.entity(UserRole::class)
    .select(Role::class)
    .distinct()
    .where(UserRole_.user inRefs userRefs)
    .resultList
```

[Java]

Refs can be used directly in where clauses:

```java
List<Ref<User>> users = ...;
List<Role> roles = orm.entity(UserRole.class)
    .select(Role.class)
    .distinct()
    .whereRef(UserRole_.user, users)
    .getResultList();
```

Using SQL Templates:

```java
List<Ref<User>> users = ...;
List<Role> roles = orm.query(RAW."""
        SELECT DISTINCT \{Role.class}
        FROM \{UserRole.class}
        WHERE \{users}""")
    .getResultList(Role.class);
```

---

## Creating Refs

You can create Refs programmatically from a type and ID, or extract one from an existing entity.

[Kotlin]

```kotlin
// From type and ID
val userRef: Ref<User> = Ref.of(User::class.java, 42)

// From existing entity
val user: User = ...
val ref: Ref<User> = Ref.of(user)
```

[Java]

```java
// From type and ID
Ref<User> userRef = Ref.of(User.class, 42);

// From existing entity
User user = ...;
Ref<User> ref = Ref.of(user);
```

---

## Detached Ref Behavior

Refs created with `Ref.of(type, primaryKey)` are **detached**: they carry the entity type and primary key but have no connection to a database context. This has important implications for fetching behavior.
- Calling `fetch()` on a detached ref throws a `PersistenceException` because there is no database connection available to retrieve the record. - Calling `fetchOrNull()` returns `null` for the same reason. - The `isFetchable()` method returns `false` for detached refs. By contrast, refs created with `Ref.of(entity)` wrap an already-loaded entity instance. Calling `fetch()` or `fetchOrNull()` on such a ref returns the wrapped entity without any database access. The `isFetchable()` method also returns `false` (since it does not need to fetch), but `isLoaded()` returns `true`. | Factory method | Holds data? | `fetch()` behavior | `isFetchable()` | |----------------|-------------|-------------------|------------------| | `Ref.of(type, primaryKey)` | No (ID only) | Throws `PersistenceException` | `false` | | `Ref.of(entity)` | Yes (full entity) | Returns the wrapped entity | `false` | | Loaded by Storm (from query) | Yes (after fetch) | Returns entity or fetches from DB/cache | `true` | Use `Ref.of(entity)` when you already have the entity in memory and want to wrap it as a ref (for example, to pass into a method that expects `Ref`). Use `Ref.of(type, primaryKey)` when you only have the ID and want a lightweight identifier for equality checks, map keys, or later resolution within a transaction context. --- ## Aggregation with Refs [Kotlin] Refs are particularly useful in aggregation queries where you group by a foreign key. Instead of loading the full related entity for each group, you can select only the primary key as a Ref. This keeps the query lightweight while still giving you a typed identifier to use in subsequent lookups if needed. 
```kotlin
data class GroupedByCity(
    val city: Ref<City>,
    val count: Long
)

val counts: Map<Ref<City>, Long> = orm.entity(User::class)
    .select(GroupedByCity::class) { "${select(City::class, SelectMode.PK)}, COUNT(*)" }
    .groupBy(User_.city)
    .resultList
    .associate { it.city to it.count }
```

[Java]

```java
record GroupedByCity(Ref<City> city, long count) {}

Map<Ref<City>, Long> counts = orm.entity(User.class)
    .select(GroupedByCity.class, RAW."\{select(City.class, SelectMode.PK)}, COUNT(*)")
    .groupBy(User_.city)
    .getResultList().stream()
    .collect(toMap(GroupedByCity::city, GroupedByCity::count));
```

Using SQL Templates:

```java
Map<Ref<City>, Long> counts = orm.query(RAW."""
        SELECT \{select(City.class, SelectMode.PK)}, COUNT(*)
        FROM \{User.class}
        GROUP BY \{User_.city}""")
    .getResultList(GroupedByCity.class).stream()
    .collect(toMap(GroupedByCity::city, GroupedByCity::count));
```

---

## Use Cases

The following patterns illustrate the main scenarios where Refs provide concrete benefits over loading full entities. The common thread is reducing the amount of data loaded from the database until the moment it is actually needed.

### Optimizing Memory

When processing large collections of entities, loading full object graphs for each row can exhaust available memory. Refs store only the entity type and primary key (typically 16-32 bytes per reference, versus hundreds of bytes or more for a fully hydrated entity with nested relationships).

```kotlin
// Instead of loading full User objects
val users: List<User> = ... // Each User has all fields loaded

// Load only IDs
val userRefs: List<Ref<User>> = ... // Only IDs in memory
```

### Efficient Collections

Refs implement `equals()` and `hashCode()` based on their entity type and primary key, making them reliable keys in maps and sets. This lets you build lookup structures keyed by entity identity without loading the full entity data.

```kotlin
val userScores: Map<Ref<User>, Int> = ...
// Access by ref without loading full entity
val score = userScores[Ref.of(User::class.java, userId)]
```

### Deferred Loading

Refs enable a controlled form of lazy loading without proxies or bytecode manipulation. The entity field is declared as a Ref, and the calling code decides if and when to call `fetch()`. This makes the loading decision explicit in the code rather than hidden behind an ORM proxy.

```kotlin
data class Report(
    @PK val id: Int = 0,
    @FK val author: Ref<User>, // Don't load user automatically
    val content: String
) : Entity

// Later, when you need the author
val report = orm.find(Report_.id eq reportId)
if (needsAuthorInfo) {
    val author = report?.author?.fetch()
}
```

## Fetching Behavior

Understanding how `fetch()` resolves its target helps you predict performance and avoid runtime errors.

- `fetch()` checks the [entity cache](entity-cache.md) before querying the database. If the entity was already loaded in the current transaction, no additional query is issued.
- Multiple Refs pointing to the same entity share the cached instance within a transaction, preserving object identity.
- Calling `fetch()` on a detached Ref created with `Ref.of(type, id)` will fail unless an active transaction context is available.

## Tips

1. **Use Refs for optional relationships.** Avoid loading data you might not need.
2. **Use Refs for self-references.** Prevent circular loading in hierarchical data.
3. **Use Refs in aggregations.** Get counts by FK without loading full entities.
4. **Refs are reliable map keys.** They provide lightweight, identity-based comparison.

========================================
## Source: transactions.md
========================================

# Transactions

Transaction management is fundamental to database programming. Storm takes a practical approach: rather than inventing new abstractions, it provides first-class support for standard transaction semantics while integrating seamlessly with your existing infrastructure.
Storm works directly with JDBC transactions and supports both programmatic and declarative transaction management. For Kotlin, Storm provides a coroutine-friendly API inspired by Exposed. For Java, Storm integrates with Spring's transaction management or works directly with JDBC connections. --- [Kotlin] Storm for Kotlin provides a fully programmatic transaction solution (following the style popularized by [Exposed](https://github.com/JetBrains/Exposed)) that is **completely coroutine-friendly**. It supports **all isolation levels and propagation modes** found in traditional transaction management systems. You can freely switch coroutine dispatchers within a transaction (offload CPU-bound work to `Dispatchers.Default` or IO work to `Dispatchers.IO`) and still remain in the **same active transaction**. While Storm's `transaction { }` blocks look similar to Exposed's, Storm goes further by supporting all seven standard propagation modes (`REQUIRED`, `REQUIRES_NEW`, `NESTED`, `MANDATORY`, `SUPPORTS`, `NOT_SUPPORTED`, `NEVER`). Exposed's native transaction API only supports basic nesting (shared transaction) and savepoint-based nesting (`useNestedTransactions = true`), without the ability to suspend an outer transaction, enforce transactional context, or run non-transactionally. See [Storm vs Exposed](comparison.md#storm-vs-exposed) for a detailed comparison. The API is designed around Kotlin's type system and coroutine model. 
Import the transaction functions and enums from `st.orm.template`: ```kotlin import st.orm.template.transaction import st.orm.template.transactionBlocking import st.orm.template.TransactionPropagation.* import st.orm.template.TransactionIsolation.* ``` ### Suspend Transactions Use `transaction` for coroutine code: ```kotlin transaction { orm.removeAll() orm insert User(email = "alice@example.com", name = "Alice") // Commits automatically on success, rolls back on exception } ``` Suspend transactions allow **context switching** without losing the active transaction: ```kotlin transaction { val orders = orderRepository.findPendingOrders() withContext(Dispatchers.Default) { // CPU-bound work on another dispatcher heavyComputation(orders) } // Still in the same transaction orderRepository.update(order.copy(pending = false)) } ``` ### Blocking Transactions Use `transactionBlocking` for synchronous code: ```kotlin transactionBlocking { orm.removeAll() orm insert User(email = "alice@example.com", name = "Alice") // Commits automatically on success, rolls back on exception } ``` ### Transaction Propagation Propagation modes are one of the most powerful features of enterprise transaction management, yet they're often misunderstood. They control how transactions interact when code calls another transactional method. This is essential for building composable services where each method can define its transactional requirements independently. Storm supports all seven standard propagation modes. Understanding when to use each mode helps you build robust, maintainable applications where components work correctly both standalone and when composed together. #### REQUIRED (Default) Joins an existing transaction if one is active, otherwise creates a new one. This is the most common mode: it allows methods to participate in a larger transactional context while still working standalone. 
When called without an existing transaction, a new transaction is started: ``` [BEGIN] → insert(user) → insert(order) → [COMMIT] ``` When called within an existing transaction, the operations join that transaction. All operations commit or rollback together: ``` [BEGIN] ↓ insert(user) ↓ ┌─ transaction(REQUIRED) ─┐ │ insert(order) │ ← joins outer transaction └─────────────────────────┘ ↓ insert(payment) ↓ [COMMIT] ← all three inserts committed together ``` In this example, `orderService.createOrder()` participates in the same transaction. If either operation fails, both are rolled back: ```kotlin transaction(propagation = REQUIRED) { userRepository.insert(user) orderService.createOrder(order) // Joins this transaction } ``` **Use cases:** The default for most operations. Use when operations should be atomic with their caller. #### REQUIRES_NEW Always creates a new, independent transaction. If an outer transaction exists, it is suspended until the inner transaction completes. The inner transaction commits or rolls back independently of the outer one. The following diagram shows the outer transaction being suspended while the inner transaction runs. Notice that the inner transaction commits before the outer transaction fails, so the audit log persists even though the outer transaction rolls back: ``` [BEGIN outer] ↓ insert(user) ↓ ~~~ outer suspended ~~~ ↓ [BEGIN inner] ↓ insert(audit_log) ↓ [COMMIT inner] ← committed independently ↓ ~~~ outer resumed ~~~ ↓ insert(order) ↓ [ROLLBACK outer] ← audit_log survives! ``` This pattern is useful for audit logging. 
The audit record is preserved regardless of whether the business operation succeeds: ```kotlin transaction { userRepository.insert(user) // Audit log commits even if outer transaction fails transaction(propagation = REQUIRES_NEW) { auditRepository.insert(AuditLog("User creation attempted")) } orderRepository.insert(order) // If this fails, audit log is preserved } ``` **Use cases:** Audit logging, error tracking, metrics recording, or any operation that must persist regardless of the outer transaction's outcome. #### NESTED Creates a savepoint within the current transaction. If the nested block fails, only changes since the savepoint are rolled back, and the outer transaction can continue. Unlike `REQUIRES_NEW`, nested transactions share the same database connection and only fully commit when the outer transaction commits. If no transaction exists, behaves like `REQUIRED`. When the nested block succeeds, the savepoint is released and all changes commit together with the outer transaction: ``` [BEGIN] ↓ insert(order) ↓ [SAVEPOINT] ↓ insert(discount) ↓ [RELEASE SAVEPOINT] ↓ insert(payment) ↓ [COMMIT] ← all three inserts committed ``` When the nested block fails or calls `setRollbackOnly()`, only changes within the savepoint are discarded. The outer transaction continues with its prior work intact: ``` [BEGIN] ↓ insert(order) ✓ kept ↓ [SAVEPOINT] ↓ insert(discount) ✗ discarded insert(bonus) ✗ discarded ↓ [ROLLBACK TO SAVEPOINT] ↓ insert(payment) ✓ kept ↓ [COMMIT] ← order + payment committed, discount + bonus discarded ``` This pattern is useful for optional operations that shouldn't abort the main flow. 
Here, the discount is applied if a valid promo code exists, but the order proceeds either way: ```kotlin transaction { val order = orderRepository.insert(newOrder) transaction(propagation = NESTED) { val promo = promoRepository.findByCode(promoCode) ?: return@transaction discountRepository.insert(Discount(order.id, promo.amount)) if (promo.expired) { setRollbackOnly() // Rolls back the discount insert } } // Continues regardless of whether discount was applied paymentRepository.insert(Payment(order.id, calculateTotal(order))) } ``` **Use cases:** Optional features that shouldn't abort the main flow, retry logic within a transaction, or "best effort" operations. #### MANDATORY Requires an active transaction; throws `PersistenceException` if none exists. Use this to enforce that a method is never called outside a transactional context. This is a defensive programming technique to catch integration errors early. ``` No transaction active: transaction(MANDATORY) → ✗ PersistenceException Transaction active: [BEGIN] ↓ transaction(MANDATORY) → ✓ joins outer ↓ [COMMIT] ``` This pattern is useful for operations that must never run standalone. A fund transfer should always be part of a larger transactional context: ```kotlin // In a repository or service that must run within a transaction fun transferFunds(from: Account, to: Account, amount: BigDecimal) { transaction(propagation = MANDATORY) { // Guaranteed to be in a transaction. Fails fast if not. accountRepository.debit(from, amount) accountRepository.credit(to, amount) } } ``` **Use cases:** Critical operations that must be part of a larger transaction, enforcing transactional boundaries in service layers. #### SUPPORTS Uses an existing transaction if available, otherwise runs without one. The code adapts to its calling context: transactional when called from a transaction, non-transactional otherwise. 
``` No transaction active: transaction(SUPPORTS) → runs without transaction Transaction active: [BEGIN] ↓ transaction(SUPPORTS) → joins outer transaction ↓ [COMMIT] ``` This pattern is useful for read operations that don't require transactional guarantees but benefit from them when available: ```kotlin fun findUserById(id: Long): User? { return transaction(propagation = SUPPORTS) { // Benefits from transactional consistency if caller has a transaction, // but works fine standalone for simple lookups userRepository.findById(id) } } ``` **Use cases:** Read-only operations, caching layers, or queries that benefit from transactional consistency when available but don't require it. #### NOT_SUPPORTED Suspends any active transaction and runs non-transactionally. The outer transaction resumes after the block completes. The suspended transaction's locks are retained, but this block won't see uncommitted changes from it. ``` [BEGIN outer] ↓ insert(order) ↓ ~~~ outer suspended ~~~ ↓ callExternalApi() ← runs without transaction ↓ ~~~ outer resumed ~~~ ↓ insert(confirmation) ↓ [COMMIT outer] ``` This pattern is useful for operations that shouldn't hold database resources or need to see committed data: ```kotlin transaction { orderRepository.insert(order) // External API call shouldn't hold database locks transaction(propagation = NOT_SUPPORTED) { paymentGateway.processPayment(order.total) // May take time } orderRepository.markAsPaid(order.id) } ``` **Use cases:** External API calls, long-running computations, operations that must see committed data from other transactions, or reducing lock contention. #### NEVER Fails with `PersistenceException` if a transaction is active. Use this to enforce that code runs outside any transactional context. This is the opposite of `MANDATORY`, serving as a defensive check to prevent accidental transactional execution. 
``` No transaction active: transaction(NEVER) → ✓ runs without transaction Transaction active: [BEGIN] ↓ transaction(NEVER) → ✗ PersistenceException ``` This pattern is useful for operations that should never participate in a transaction, such as batch jobs that manage their own transaction boundaries: ```kotlin fun runBatchJob() { transaction(propagation = NEVER) { // Ensures this is never accidentally called within another transaction // Each batch item will manage its own transaction items.forEach { item -> transaction { processItem(item) } } } } ``` **Use cases:** Batch operations with custom transaction boundaries, operations that must see real-time committed data, or enforcing architectural boundaries. #### Propagation Summary | Mode | No Active Tx | Active Tx Exists | |------|--------------|------------------| | `REQUIRED` | Create new | Join existing | | `REQUIRES_NEW` | Create new | Suspend outer, create new | | `NESTED` | Create new | Create savepoint | | `MANDATORY` | **Error** | Join existing | | `SUPPORTS` | Run without tx | Join existing | | `NOT_SUPPORTED` | Run without tx | Suspend outer, run without tx | | `NEVER` | Run without tx | **Error** | ### Isolation Levels Isolation levels are the database's answer to concurrency. When multiple transactions run simultaneously, they can interfere with each other in various ways. The SQL standard defines four isolation levels, each preventing different types of concurrency anomalies. Storm exposes all four standard isolation levels through its API, giving you full control over the consistency-performance trade-off. Most applications work fine with the database's default isolation level (typically `READ_COMMITTED`), but understanding when to use higher levels is crucial for building correct applications. #### Concurrency Phenomena Before diving into isolation levels, it's important to understand the three phenomena they prevent. 
Each represents a different way concurrent transactions can produce unexpected results: | Phenomenon | Description | |------------|-------------| | **Dirty Read** | Reading uncommitted changes from another transaction that might roll back | | **Non-Repeatable Read** | Reading the same row twice yields different values because another transaction modified it | | **Phantom Read** | Re-executing a query returns new rows that another transaction inserted | #### READ_UNCOMMITTED The lowest isolation level. Transactions can see uncommitted changes from other transactions, which means you might read data that will never actually be committed (dirty reads). This offers the highest concurrency but the weakest consistency guarantees. The following timeline shows two concurrent transactions. Transaction A reads a user that Transaction B inserted but hasn't committed yet. When Transaction B rolls back, the data Transaction A read effectively never existed: ``` Time Transaction A Transaction B ───────────────────────────────────────────────────────────────────── t1 [BEGIN] t2 [BEGIN] t3 INSERT user ('Alice') t4 SELECT → sees 'Alice' (not committed yet) ↑ dirty read! t5 [ROLLBACK] t6 SELECT → empty ↑ data disappeared! t7 [COMMIT] ``` This level is rarely used in practice, but can be useful when you need approximate results and maximum performance: ```kotlin transaction(isolation = READ_UNCOMMITTED) { // Can see uncommitted changes - use with caution val count = userRepository.count() // May include uncommitted rows } ``` **Use cases:** Approximate counts for dashboards, monitoring queries, or any scenario where "close enough" is acceptable and performance matters more than accuracy. > **Note:** At `READ_UNCOMMITTED` and `READ_COMMITTED` isolation levels, Storm returns fresh data from the database on every read rather than cached instances. This ensures repeated reads see the latest database state. Dirty checking remains available at all isolation levels. 
Storm stores observed state for detecting changes even when not returning cached instances. See [dirty checking](dirty-checking.md) for details. #### READ_COMMITTED Transactions only see data that has been committed. This prevents dirty reads: you will never see data that might be rolled back. However, if you read the same row twice, you might get different values if another transaction modified and committed it in between (non-repeatable read). In this timeline, Transaction A reads a balance of 1000. While it's still running, Transaction B updates and commits a new balance. When Transaction A reads again, it sees the new value: ``` Time Transaction A Transaction B ───────────────────────────────────────────────────────────────────── t1 [BEGIN] t2 SELECT balance → 1000 t3 [BEGIN] t4 UPDATE balance = 500 t5 [COMMIT] t6 SELECT balance → 500 ↑ non-repeatable read! t7 [COMMIT] ``` This is the default isolation level for most databases and applications. It provides a good balance between consistency and concurrency: ```kotlin transaction(isolation = READ_COMMITTED) { val user = userRepository.findById(id) // Another transaction might modify the user here val sameUser = userRepository.findById(id) // sameUser might have different values than user } ``` **Use cases:** The default choice for most applications. Suitable for operations where seeing the latest committed data is more important than having a consistent snapshot throughout the transaction. > **Note:** Storm's [entity cache](entity-cache.md) behavior varies by isolation level. At `READ_COMMITTED`, fresh data is fetched on each read. At `REPEATABLE_READ` and above, cached instances are returned for consistent entity identity. #### REPEATABLE_READ Guarantees that if you read a row once, subsequent reads return the same data, even if other transactions modify and commit changes to that row. The transaction works with a consistent snapshot taken at the start. 
However, phantom reads may still occur: new rows inserted by other transactions can appear in range queries. This timeline shows Transaction A getting consistent results for the same row, even though Transaction B modified it. The snapshot isolation ensures Transaction A sees the value as of when it started: ``` Time Transaction A Transaction B ───────────────────────────────────────────────────────────────────── t1 [BEGIN] t2 SELECT balance → 1000 t3 [BEGIN] t4 UPDATE balance = 500 t5 [COMMIT] t6 SELECT balance → 1000 ↑ same value (snapshot) t7 [COMMIT] ``` However, phantom reads can still occur with range queries. New rows that match the query criteria can appear between executions: ``` Time Transaction A Transaction B ───────────────────────────────────────────────────────────────────── t1 [BEGIN] t2 SELECT pending orders → 3 rows t3 [BEGIN] t4 INSERT new pending order t5 [COMMIT] t6 SELECT pending orders → 4 rows ↑ phantom row! t7 [COMMIT] ``` This level is useful when you need consistent reads throughout a transaction, such as generating reports or performing calculations that must be internally consistent: ```kotlin transaction(isolation = REPEATABLE_READ) { val user = userRepository.findById(id) // Even if another transaction modifies this user and commits, // we'll keep seeing the original values processUser(user) val sameUser = userRepository.findById(id) // Guaranteed: user == sameUser } ``` **Use cases:** Financial calculations, generating reports, audit trails, or any scenario where you need a stable view of the data throughout the transaction. #### SERIALIZABLE The highest isolation level. Transactions execute as if they were run one after another (serially), even though they may actually run concurrently. This prevents all concurrency phenomena, including phantom reads. The database achieves this through locking or optimistic concurrency control, which may cause transactions to block or fail and retry. 
In this timeline, Transaction B's insert is blocked (or will fail on commit) because Transaction A has read the range of pending orders. This ensures Transaction A sees a consistent set of rows throughout: ``` Time Transaction A Transaction B ───────────────────────────────────────────────────────────────────── t1 [BEGIN] t2 SELECT pending orders → 3 rows t3 [BEGIN] t4 INSERT new pending order ↑ BLOCKED (or fails on commit) t5 SELECT pending orders → 3 rows ↑ no phantoms t6 [COMMIT] t7 ↑ now proceeds (or retries) t8 [COMMIT] ``` Use this level when correctness is critical and you cannot tolerate any anomalies. Be prepared for lower throughput and potential retry logic for failed transactions: ```kotlin transaction(isolation = SERIALIZABLE) { // Check seat availability and book atomically val availableSeats = seatRepository.findAvailable(flightId) if (availableSeats.isNotEmpty()) { // No other transaction can insert/modify seats for this flight // until we commit, which prevents double-booking seatRepository.book(availableSeats.first(), passengerId) } } ``` **Use cases:** Booking systems, inventory management, financial transfers, or any operation where race conditions could cause serious problems like double-booking or overselling. #### Isolation Level Summary | Level | Dirty Read | Non-Repeatable Read | Phantom Read | Performance | |-------|------------|---------------------|--------------|-------------| | `READ_UNCOMMITTED` | Possible | Possible | Possible | Highest | | `READ_COMMITTED` | Prevented | Possible | Possible | High | | `REPEATABLE_READ` | Prevented | Prevented | Possible* | Medium | | `SERIALIZABLE` | Prevented | Prevented | Prevented | Lowest | *Some databases (e.g., PostgreSQL, MySQL/InnoDB) also prevent phantom reads at `REPEATABLE_READ` using snapshot isolation. #### Choosing an Isolation Level Start with `READ_COMMITTED` (often the database default) and only increase isolation when you have a specific consistency requirement. 
Here's a guide for common scenarios: **Simple CRUD operations:** Use `READ_COMMITTED`. Seeing the latest committed data is usually what you want: ```kotlin transaction(isolation = READ_COMMITTED) { userRepository.update(user) } ``` **Reports and calculations:** Use `REPEATABLE_READ` when you need multiple queries to see a consistent snapshot. This ensures totals, counts, and details all reflect the same point in time: ```kotlin transaction(isolation = REPEATABLE_READ) { val total = orderRepository.sumByUser(userId) val count = orderRepository.countByUser(userId) val average = total / count // Safe: total and count are consistent } ``` **Critical operations with race conditions:** Use `SERIALIZABLE` when concurrent transactions could cause problems like double-booking or overselling. The performance cost is worth the correctness guarantee: ```kotlin transaction(isolation = SERIALIZABLE) { val inventory = inventoryRepository.findByProduct(productId) if (inventory.quantity >= requestedQuantity) { // Without SERIALIZABLE, two concurrent transactions could both // pass this check and oversell inventoryRepository.decrease(productId, requestedQuantity) orderRepository.create(order) } } ``` ### Transaction Timeout Long-running transactions hold database locks and consume connection pool resources. Setting a timeout ensures that a stuck or unexpectedly slow transaction is automatically rolled back rather than blocking indefinitely. The timeout is measured from the start of the transaction block. ```kotlin transaction(timeoutSeconds = 30) { orm.removeAll() delay(35_000) // Will cause timeout } ``` ### Read-Only Transactions Marking a transaction as read-only allows the database to apply optimizations such as skipping write-ahead logging or acquiring lighter locks. This is a hint, not an enforcement mechanism; the database may or may not reject writes depending on the driver and database engine. 
```kotlin transaction(readOnly = true) { // Hints to the database that no modifications will occur val users = orm.findAll() } ``` ### Manual Rollback Sometimes you need to abort a transaction based on a runtime condition rather than an exception. Calling `setRollbackOnly()` marks the transaction for rollback without throwing. The block continues executing, but the transaction rolls back when it completes instead of committing. ```kotlin transaction { orm.removeAll() if (someCondition) { setRollbackOnly() // Mark for rollback } // Transaction will roll back instead of commit } ``` ### Transaction Callbacks Database transactions often need to trigger side effects, but only when the outcome is certain. Sending a confirmation email before the order is committed risks notifying a customer about an order that never persisted. Conversely, cleanup logic (releasing external locks, closing temporary resources) should run after a rollback, not during regular flow where it might mask the real failure. Storm's `onCommit` and `onRollback` callbacks solve this by letting you register logic that fires **after** the physical transaction completes. Callbacks are registered inside the transaction block but execute outside it, once the outcome is final. #### Basic Usage Register callbacks anywhere inside a `transaction` or `transactionBlocking` block: ```kotlin transaction { val order = orderRepository.insert(newOrder) inventoryRepository.decrease(order.productId, order.quantity) onCommit { // Only runs after the transaction has successfully committed. // The order and inventory changes are durable at this point. emailService.sendOrderConfirmation(order) eventBus.publish(OrderCreatedEvent(order.id)) } onRollback { // Only runs after the transaction has rolled back. // No changes were persisted. 
metrics.increment("orders.failed") } } ``` Both variants work identically with `transactionBlocking`: ```kotlin transactionBlocking { cacheRepository.update(entry) onCommit { cache.invalidate(entry.key) // Evict stale cache entry only after new data is durable } } ``` #### When Callbacks Fire Callbacks are deferred until the transaction outcome is determined. The following table summarizes the trigger conditions: | Scenario | `onCommit` | `onRollback` | |----------|------------|--------------| | Block completes normally | Fires | Does not fire | | Block throws an exception | Does not fire | Fires | | `setRollbackOnly()` called, block completes | Does not fire | Fires | | Transaction timeout expires | Does not fire | Fires | | Commit itself throws (e.g., constraint violation during flush) | Does not fire | Fires | The key guarantee is that `onCommit` callbacks only execute when data is actually durable. If the commit itself fails for any reason, `onCommit` callbacks are skipped and `onRollback` callbacks run instead. This timeline shows the execution order for a successful transaction: ``` [BEGIN] ↓ insert(order) onCommit { sendEmail() } ← registered, not yet executed onRollback { logFailure() } ← registered, not yet executed ↓ [COMMIT] ← transaction commits successfully ↓ sendEmail() ← onCommit fires now (onRollback is discarded) ``` And for a failed transaction: ``` [BEGIN] ↓ insert(order) onCommit { sendEmail() } ← registered, not yet executed onRollback { logFailure() } ← registered, not yet executed ↓ decreaseInventory() ↓ ✗ exception thrown ↓ [ROLLBACK] ← transaction rolls back ↓ logFailure() ← onRollback fires now (onCommit is discarded) ``` #### Multiple Callbacks and Ordering You can register any number of callbacks. 
They execute in registration order, which makes it straightforward to reason about sequencing when multiple components register their own callbacks: ```kotlin transaction { val user = userRepository.insert(newUser) val profile = profileRepository.insert(Profile(userId = user.id)) onCommit { searchIndex.addUser(user) } // 1st onCommit { cache.warm(user.id) } // 2nd onCommit { eventBus.publish(UserCreated(user)) } // 3rd } // After commit: searchIndex → cache → eventBus, in that order ``` #### Exception Handling in Callbacks If a callback throws, the remaining callbacks still execute. This prevents one failing callback from silently skipping others. The first exception is surfaced to the caller; any subsequent exceptions are attached as suppressed: ```kotlin transaction { orderRepository.insert(order) onCommit { throw RuntimeException("email failed") } // throws, but... onCommit { cache.invalidate(order.productId) } // ...still executes } // Caller sees RuntimeException("email failed") // cache.invalidate() ran successfully ``` When the transaction itself fails and a rollback callback also throws, the callback exception is added as suppressed to the original transaction exception: ```kotlin try { transaction { onRollback { throw RuntimeException("cleanup failed") } throw IllegalStateException("business error") } } catch (e: IllegalStateException) { // e.message == "business error" ← primary exception // e.suppressed[0].message == "cleanup failed" ← callback exception } ``` This design ensures that the root cause of a failure is never masked by callback errors. #### Propagation Interaction Callbacks are tied to the **physical** transaction, not the logical scope. This distinction matters when nesting transactions with different propagation modes. **Joining propagations (`REQUIRED`, `NESTED`, `SUPPORTS`, `MANDATORY`):** Callbacks registered in an inner scope are deferred to the outer physical transaction. They fire when the outermost transaction commits or rolls back. 
This is the correct behavior, because in a joined transaction, the inner scope's changes are not durable until the outer transaction commits. ``` [BEGIN outer] ↓ insert(user) ↓ ┌─ transaction(REQUIRED) ──────────────────────┐ │ insert(order) │ │ onCommit { notify(order) } ← deferred │ └──────────────────────────────────────────────┘ ↓ insert(payment) onCommit { sendReceipt() } ← also deferred ↓ [COMMIT outer] ↓ notify(order) ← inner callback fires now sendReceipt() ← outer callback fires now ``` A practical example: the inner service registers a callback, but it only fires when the outer transaction actually commits. If the outer transaction rolls back, the inner callback is discarded along with it: ```kotlin // Outer transaction transaction { userRepository.insert(user) // Inner REQUIRED: joins the outer transaction transaction(propagation = REQUIRED) { orderRepository.insert(order) onCommit { eventBus.publish(OrderCreated(order.id)) } } // At this point, the inner onCommit has NOT fired yet. // The order is not yet durable. paymentRepository.insert(payment) } // NOW the outer commits, and the inner's onCommit fires. ``` If the outer transaction rolls back (explicitly or via exception), the inner callback never fires: ```kotlin transaction { transaction(propagation = REQUIRED) { orderRepository.insert(order) onCommit { eventBus.publish(OrderCreated(order.id)) } } setRollbackOnly() // Outer rolls back everything } // onCommit never fires. The order was never durable. ``` **`REQUIRES_NEW`:** Creates an independent physical transaction. 
Callbacks registered in the inner scope fire when the **inner** transaction completes, regardless of the outer transaction's outcome: ``` [BEGIN outer] ↓ insert(user) ↓ ~~~ outer suspended ~~~ ↓ [BEGIN inner] ↓ insert(audit_log) onCommit { notify() } ↓ [COMMIT inner] ↓ notify() ← fires immediately, inner is committed ↓ ~~~ outer resumed ~~~ ↓ [ROLLBACK outer] ← does not affect inner's callbacks ``` This is especially useful for audit logging or event publishing that must survive regardless of the outer outcome: ```kotlin transaction { userRepository.insert(user) transaction(propagation = REQUIRES_NEW) { auditRepository.insert(AuditLog("User creation attempted")) onCommit { auditMetrics.increment("audit.committed") } } // Inner onCommit has already fired here. setRollbackOnly() // Outer rolls back, but audit is committed and notified } ``` **`NESTED` (savepoint):** Shares the outer physical transaction. Even though the nested scope can roll back independently (to the savepoint), callbacks are deferred to the outer transaction. 
This is because savepoint changes only become durable when the outer transaction commits: ``` [BEGIN outer] ↓ insert(order) ↓ [SAVEPOINT] ↓ insert(discount) onCommit { notify() } ← deferred to outer ↓ [RELEASE SAVEPOINT] ↓ [COMMIT outer] ↓ notify() ← fires now ``` The following table summarizes callback behavior across propagation modes: | Propagation | Callback scope | When callbacks fire | |-------------|---------------|---------------------| | `REQUIRED` | Deferred to outer | When outermost transaction commits/rolls back | | `REQUIRES_NEW` | Own scope | When inner transaction commits/rolls back | | `NESTED` | Deferred to outer | When outermost transaction commits/rolls back | | `SUPPORTS` | Deferred to outer (if tx exists) | When outermost transaction commits/rolls back | | `MANDATORY` | Deferred to outer | When outermost transaction commits/rolls back | | `NOT_SUPPORTED` | Own scope | When inner block completes/throws | | `NEVER` | Own scope | When inner block completes/throws | #### Common Patterns **Cache invalidation after write:** ```kotlin transaction { val updatedProduct = productRepository.update(product) onCommit { // Only evict after the update is durable. // Evicting before commit risks serving stale data from the database // while the cache is empty and the transaction hasn't committed yet. productCache.evict(updatedProduct.id) } } ``` **Event publishing:** ```kotlin transaction { val savedOrder = orderRepository.insert(order) paymentRepository.insert(Payment(orderId = savedOrder.id, amount = total)) onCommit { // Publish domain events only after all writes are durable. // Subscribers can safely query the database for the new data. 
eventBus.publish(OrderPlacedEvent(savedOrder.id, total)) } onRollback { // Track failed order attempts for monitoring metrics.increment("orders.failed") logger.warn("Order placement rolled back for customer ${order.customerId}") } } ``` **Releasing external resources:** ```kotlin transaction { val lockToken = distributedLock.acquire("import-job") onCommit { distributedLock.release(lockToken) } onRollback { distributedLock.release(lockToken) cleanupPartialImport() } importService.runImport(data) } ``` ### Global Transaction Options Set defaults for all transactions: ```kotlin setGlobalTransactionOptions( propagation = REQUIRED, isolation = null, // Use database default timeoutSeconds = null, readOnly = false ) ``` ### Scoped Transaction Options When you need different transaction settings for a specific section of code without changing global defaults, use scoped options. All transactions created within the scope inherit the overridden settings. This is useful for test harnesses, batch processing regions, or any bounded context that needs distinct transaction behavior. ```kotlin withTransactionOptions(timeoutSeconds = 60) { transaction { // Uses 60 second timeout orm.removeAll() } } withTransactionOptionsBlocking(isolation = SERIALIZABLE) { transactionBlocking { // Uses SERIALIZABLE isolation orm.removeAll() } } ``` ### Spring-Managed Transactions While Storm's programmatic transaction API works standalone, many applications use Spring's transaction management for its declarative `@Transactional` support and integration with other Spring components. Storm integrates seamlessly with Spring's transaction management. When `@EnableTransactionIntegration` is configured, Storm's programmatic `transaction` blocks automatically detect and participate in Spring-managed transactions. This gives you the best of both worlds: Spring's declarative transaction boundaries with Storm's coroutine-friendly transaction blocks. 
#### Configuration Enable Spring integration in your configuration class: ```kotlin @EnableTransactionIntegration @Configuration class ORMConfiguration(private val dataSource: DataSource) { @Bean fun ormTemplate() = ORMTemplate.of(dataSource) } ``` #### Combining Declarative and Programmatic Transactions You can use Spring's `@Transactional` annotation alongside Storm's programmatic `transaction` blocks. Storm will join the existing Spring transaction: ```kotlin @Service class UserService(private val orm: ORMTemplate) { @Transactional suspend fun createUserWithOrders(user: User, orders: List<Order>) { // Spring starts the transaction transaction { // Storm joins the Spring transaction (REQUIRED propagation by default) orm insert user } transaction { // Still in the same Spring transaction orders.forEach { orm insert it } } // Spring commits when the method returns successfully } } ``` #### Propagation Interaction Storm's propagation modes work with Spring transactions: ```kotlin @Transactional suspend fun processWithAudit(user: User) { transaction { orm insert user } // REQUIRES_NEW creates an independent transaction, even within Spring's transaction transaction(propagation = REQUIRES_NEW) { auditRepository.log("User created: ${user.id}") // Commits independently - audit survives even if outer transaction rolls back } } ``` #### Suspend Functions with @Transactional For suspend functions, use Spring's `@Transactional` with a standard `PlatformTransactionManager`; Storm keeps the transaction context bound across suspension points: ```kotlin @Configuration @EnableTransactionManagement class TransactionConfig { @Bean fun transactionManager(dataSource: DataSource): PlatformTransactionManager { return DataSourceTransactionManager(dataSource) } } @Service class OrderService(private val orm: ORMTemplate) { @Transactional suspend fun placeOrder(order: Order): Order { val savedOrder = orm insert order // Can switch dispatchers while staying in the same transaction withContext(Dispatchers.Default) { calculateLoyaltyPoints(savedOrder) } return 
savedOrder } } ``` #### Using Storm Without @Transactional You can also use Storm's programmatic transactions without Spring's `@Transactional`. Storm manages the transaction lifecycle directly: ```kotlin @Service class UserService(private val orm: ORMTemplate) { // No @Transactional needed - Storm handles it suspend fun createUser(user: User): User { return transaction { orm insert user } } // Explicit propagation and isolation suspend fun transferFunds(from: Account, to: Account, amount: BigDecimal) { transaction( propagation = REQUIRED, isolation = SERIALIZABLE ) { accountRepository.debit(from, amount) accountRepository.credit(to, amount) } } } ``` [Java] Storm for Java follows the principle of integration over invention. Rather than providing its own transaction API, Storm works with your existing transaction infrastructure. Whether you use Spring's `@Transactional` annotation, programmatic `TransactionTemplate`, or direct JDBC connection management, Storm participates correctly in the active transaction. This approach has several benefits: no new APIs to learn, full compatibility with existing code, and consistent behavior across your application. Storm simply uses the JDBC connection associated with the current transaction. ### Spring-Managed Transactions Spring's transaction management is the most common approach for Java enterprise applications. Storm integrates naturally with Spring's `@Transactional` annotation, participating in the same transaction as other Spring-managed components like JPA repositories, JDBC templates, or other data access code. 
#### Configuration Configure Storm with Spring's transaction management: ```java @Configuration @EnableTransactionManagement public class ORMConfiguration { @Bean public ORMTemplate ormTemplate(DataSource dataSource) { return ORMTemplate.of(dataSource); } @Bean public PlatformTransactionManager transactionManager(DataSource dataSource) { return new DataSourceTransactionManager(dataSource); } } ``` #### Declarative Transactions with @Transactional Use Spring's `@Transactional` annotation on service methods. Storm automatically participates in the active transaction: ```java @Service public class UserService { private final ORMTemplate orm; public UserService(ORMTemplate orm) { this.orm = orm; } @Transactional public void createUserWithOrders(User user, List<Order> orders) { // Storm uses the Spring-managed transaction orm.entity(User.class).insert(user); for (Order order : orders) { orm.entity(Order.class).insert(order); } // Spring commits when the method returns successfully // Rolls back automatically on unchecked exceptions } @Transactional(readOnly = true) public List<User> findUsersByName(String name) { return orm.entity(User.class) .select() .where(User_.name, EQUALS, name) .getResultList(); } @Transactional(isolation = Isolation.SERIALIZABLE) public void transferFunds(Account from, Account to, BigDecimal amount) { orm.entity(Account.class).update(from.debit(amount)); orm.entity(Account.class).update(to.credit(amount)); } } ``` #### Propagation with @Transactional Spring's propagation modes control how transactions interact: ```java @Service public class OrderService { @Transactional public void placeOrder(Order order) { orm.entity(Order.class).insert(order); // Audit log commits independently - survives even if outer transaction rolls back auditService.logOrderCreated(order); inventoryService.decreaseStock(order.getItems()); } } @Service public class AuditService { @Transactional(propagation = Propagation.REQUIRES_NEW) public void logOrderCreated(Order order) { 
        orm.entity(AuditLog.class).insert(new AuditLog("Order created: " + order.getId()));
        // Commits in its own transaction
    }
}
```

#### Programmatic Transactions

While `@Transactional` works well for most cases, sometimes you need finer control over transaction boundaries. For example, processing a batch where each item should be in its own transaction, or conditionally rolling back based on runtime conditions. Spring's `TransactionTemplate` provides this control while still integrating with Spring's transaction infrastructure.

```java
@Service
public class BatchService {

    private final TransactionTemplate transactionTemplate;
    private final ORMTemplate orm;

    public BatchService(PlatformTransactionManager transactionManager, ORMTemplate orm) {
        this.transactionTemplate = new TransactionTemplate(transactionManager);
        this.orm = orm;
    }

    public void processBatch(List<Item> items) {
        for (Item item : items) {
            // Each item processed in its own transaction
            transactionTemplate.execute(status -> {
                orm.entity(Item.class).update(item.markProcessed());
                return null;
            });
        }
    }

    public User createUserOrRollback(User user, boolean shouldRollback) {
        return transactionTemplate.execute(status -> {
            User saved = orm.entity(User.class).insert(user);
            if (shouldRollback) {
                status.setRollbackOnly(); // Mark for rollback
            }
            return saved;
        });
    }
}
```

Configure `TransactionTemplate` with specific settings:

```java
TransactionTemplate template = new TransactionTemplate(transactionManager);
template.setIsolationLevel(TransactionDefinition.ISOLATION_SERIALIZABLE);
template.setTimeout(30); // 30 seconds
template.setReadOnly(true);

List<User> users = template.execute(status -> {
    return orm.entity(User.class).selectAll().getResultList();
});
```

### JDBC Transactions

For applications not using Spring, or for maximum control, you can manage transactions directly through JDBC. Storm works with any JDBC connection. Create an `ORMTemplate` from the connection and use it within your transaction scope.
```java
try (Connection connection = dataSource.getConnection()) {
    connection.setAutoCommit(false);
    try {
        var orm = ORMTemplate.of(connection);
        orm.entity(User.class).insert(user);
        orm.entity(Order.class).insert(order);
        connection.commit();
    } catch (Exception e) {
        connection.rollback();
        throw e;
    }
}
```

### JPA EntityManager

Storm can coexist with JPA in the same application. This is useful when migrating from JPA to Storm gradually, or when you want to use Storm for specific operations (like bulk inserts or complex queries) while keeping JPA for others. Storm can create an `ORMTemplate` directly from a JPA `EntityManager`, sharing the same underlying connection and transaction.

```java
@Service
public class HybridService {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    public void processWithBothOrms(User user) {
        // Use Storm for efficient bulk operations
        var orm = ORMTemplate.of(entityManager);
        orm.entity(User.class).insert(user);
        // JPA and Storm share the same transaction
        entityManager.flush();
    }
}
```

---

## Important Notes

Understanding these nuances helps avoid common pitfalls when working with transactions.

### Concurrency

Launching concurrent work inside a transaction using `async`, `launch`, or other parallel coroutine builders is **not supported**. Database transactions are bound to the calling thread/coroutine. Use sequential operations or split work into separate transactions if parallelism is required.

### RollbackOnly Semantics

- In `NESTED` propagation: rolls back to the savepoint, preserving the outer transaction's work
- In `REQUIRED` or `REQUIRES_NEW`: affects the entire transaction scope

### Context Switching (Kotlin)

Within any transactional scope, you can switch dispatchers (e.g., `withContext(Dispatchers.Default)`) and still access the **same active transaction**. This allows offloading CPU-bound work without breaking transactional context.
========================================
## Source: spring-integration.md
========================================

# Spring Integration

Storm integrates seamlessly with Spring Framework and Spring Boot for dependency injection, transaction management, and repository auto-wiring. This guide covers setup for both languages.

## Installation

Storm provides Spring Boot Starter modules that auto-configure everything you need. If you use the starter, you do not need to add `storm-kotlin-spring` or `storm-spring` separately; the starter includes them.

### Spring Boot Starter (Recommended)

The starter modules provide zero-configuration setup: an `ORMTemplate` bean is created automatically from the `DataSource`, repositories are discovered from the application's base package, and (for Kotlin) transaction integration is enabled. See [Spring Boot Starter](#spring-boot-starter) for full details.

[Kotlin]

```kotlin
// Gradle (Kotlin DSL)
implementation("st.orm:storm-kotlin-spring-boot-starter:@@STORM_VERSION@@")
```

```xml
<dependency>
    <groupId>st.orm</groupId>
    <artifactId>storm-kotlin-spring-boot-starter</artifactId>
    <version>@@STORM_VERSION@@</version>
</dependency>
```

[Java]

```xml
<dependency>
    <groupId>st.orm</groupId>
    <artifactId>storm-spring-boot-starter</artifactId>
    <version>@@STORM_VERSION@@</version>
</dependency>
```

```kotlin
// Gradle (Kotlin DSL)
implementation("st.orm:storm-spring-boot-starter:@@STORM_VERSION@@")
```

### Spring Integration Without Auto-Configuration

If you prefer manual configuration, or need to customize the setup beyond what the starter provides, use the integration modules directly:

[Kotlin]

```kotlin
// Gradle (Kotlin DSL)
implementation("st.orm:storm-kotlin-spring:@@STORM_VERSION@@")
```

```xml
<dependency>
    <groupId>st.orm</groupId>
    <artifactId>storm-kotlin-spring</artifactId>
    <version>@@STORM_VERSION@@</version>
</dependency>
```

[Java]

```xml
<dependency>
    <groupId>st.orm</groupId>
    <artifactId>storm-spring</artifactId>
    <version>@@STORM_VERSION@@</version>
</dependency>
```

```kotlin
// Gradle (Kotlin DSL)
implementation("st.orm:storm-spring:@@STORM_VERSION@@")
```

The Spring integration modules provide transaction integration and repository auto-discovery. They are in addition to the base `storm-kotlin` or `storm-java21` dependency.
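For builds on the Gradle Groovy DSL, the equivalent starter declarations would look as follows (a sketch reusing the same coordinates shown above; pick the line matching your language):

```groovy
// Kotlin projects
implementation 'st.orm:storm-kotlin-spring-boot-starter:@@STORM_VERSION@@'

// Java projects
implementation 'st.orm:storm-spring-boot-starter:@@STORM_VERSION@@'
```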
---

## Configuration

[Kotlin]

The minimum setup requires a single `ORMTemplate` bean. This bean is the entry point for all Storm operations and takes a standard `DataSource` as its only dependency. Spring Boot applications typically have a `DataSource` already configured through `application.properties`, so the `ORMTemplate` bean is the only Storm-specific configuration you need to add.

```kotlin
@Configuration
@EnableTransactionManagement
class ORMConfiguration(private val dataSource: DataSource) {
    @Bean
    fun ormTemplate(): ORMTemplate = dataSource.orm
}
```

### Transaction Integration

By default, Storm manages its own transactions independently of Spring's transaction context. The `@EnableTransactionIntegration` annotation bridges the two systems so that Storm's programmatic `transaction` and `transactionBlocking` blocks participate in Spring-managed transactions. Without this annotation, a transaction block inside a `@Transactional` method would open a separate database connection and transaction.

```kotlin
@EnableTransactionIntegration
@Configuration
class ORMConfiguration(private val dataSource: DataSource) {
    @Bean
    fun ormTemplate(): ORMTemplate = dataSource.orm
}
```

This allows combining Spring's `@Transactional` with Storm's programmatic `transaction` blocks:

```kotlin
@Transactional
fun processUsers() {
    // Spring manages outer transaction
    transactionBlocking {
        // Participates in Spring transaction
        orm.removeAll()
    }
}
```

### Repository Injection

Storm repositories are interfaces with default method implementations. Spring cannot discover them automatically because they are not annotated with `@Component` or `@Repository`. The `RepositoryBeanFactoryPostProcessor` scans specified packages for interfaces that extend `EntityRepository` or `ProjectionRepository` and registers them as Spring beans. This makes them available for constructor injection like any other Spring-managed dependency.
```kotlin
@Configuration
class AcmeRepositoryBeanFactoryPostProcessor : RepositoryBeanFactoryPostProcessor() {
    override val repositoryBasePackages: Array<String>
        get() = arrayOf("com.acme.repository")
}
```

Define repositories:

```kotlin
interface UserRepository : EntityRepository {
    fun findByEmail(email: String): User? = find(User_.email eq email)
}
```

Inject into services:

```kotlin
@Service
class UserService(
    private val userRepository: UserRepository
) {
    fun findUser(email: String): User? = userRepository.findByEmail(email)
}
```

### Using @Transactional

```kotlin
@Service
class UserService(
    private val orm: ORMTemplate
) {
    @Transactional
    fun createUser(email: String, name: String): User {
        return orm insert User(email = email, name = name)
    }

    @Transactional(readOnly = true)
    fun findUsers(): List<User> {
        return orm.findAll()
    }
}
```

[Java]

The configuration mirrors the Kotlin setup. Define a single `ORMTemplate` bean that wraps the Spring-managed `DataSource`.

```java
@Configuration
public class ORMConfiguration {

    private final DataSource dataSource;

    public ORMConfiguration(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Bean
    public ORMTemplate ormTemplate() {
        return ORMTemplate.of(dataSource);
    }
}
```

### Repository Injection

Register a `RepositoryBeanFactoryPostProcessor` that scans your repository packages. This works identically to the Kotlin version: Storm discovers interfaces extending `EntityRepository` or `ProjectionRepository` and registers them as Spring beans.
```java
@Configuration
public class AcmeRepositoryBeanFactoryPostProcessor extends RepositoryBeanFactoryPostProcessor {
    @Override
    public String[] getRepositoryBasePackages() {
        return new String[] { "com.acme.repository" };
    }
}
```

Define repositories:

```java
public interface UserRepository extends EntityRepository {
    default Optional<User> findByEmail(String email) {
        return select()
                .where(User_.email, EQUALS, email)
                .getOptionalResult();
    }
}
```

Inject into services:

```java
@Service
public class UserService {

    private final UserRepository userRepository;

    public UserService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    public Optional<User> findUser(String email) {
        return userRepository.findByEmail(email);
    }
}
```

### Using @Transactional

```java
@Service
public class UserService {

    private final ORMTemplate orm;

    public UserService(ORMTemplate orm) {
        this.orm = orm;
    }

    @Transactional
    public User createUser(String email, String name) {
        return orm.entity(User.class)
                .insertAndFetch(new User(null, email, name, null, null));
    }

    @Transactional(readOnly = true)
    public List<User> findUsers() {
        return orm.entity(User.class)
                .select()
                .getResultList();
    }
}
```

---

## Production DataSource Configuration

Storm works with any JDBC `DataSource` and does not manage connections itself. In production, you should configure a connection pool to handle connection lifecycle, validation, and recycling. HikariCP is the default connection pool in Spring Boot and a good choice for most applications.

### Adding HikariCP

Spring Boot includes HikariCP by default when you add a `spring-boot-starter-jdbc` or `spring-boot-starter-data-jpa` dependency. If you are not using a starter that includes it, add HikariCP explicitly:

```xml
<dependency>
    <groupId>com.zaxxer</groupId>
    <artifactId>HikariCP</artifactId>
</dependency>
```

### Pool Configuration

Configure the pool in `application.yml`. A good starting point for pool size is `CPU cores * 2 + number of disk spindles`. For most cloud deployments with SSDs, this simplifies to roughly `CPU cores * 2`.
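The sizing rule above can be sketched as a small calculation. The helper name below is ours, purely illustrative, and not part of Storm or HikariCP:

```java
public class PoolSizing {

    // connections = CPU cores * 2 + effective disk spindles,
    // the starting-point formula described above
    static int suggestedPoolSize(int cpuCores, int diskSpindles) {
        return cpuCores * 2 + diskSpindles;
    }

    public static void main(String[] args) {
        // A 4-core instance with a single effective spindle (e.g., one SSD volume)
        System.out.println(suggestedPoolSize(4, 1)); // prints 9
    }
}
```

Treat the result as a baseline to refine through load testing, not a hard rule.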
A 4-core server would start with a pool of about 10 connections.

```yaml
spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/mydb
    username: myuser
    password: mypassword
    hikari:
      maximum-pool-size: 10
      minimum-idle: 5
      connection-timeout: 30000   # 30 seconds
      idle-timeout: 600000        # 10 minutes
      max-lifetime: 1800000       # 30 minutes
      validation-timeout: 5000    # 5 seconds
      connection-test-query: SELECT 1  # Only needed for drivers that don't support JDBC4 isValid()
```

| Property | Description |
|----------|-------------|
| `maximum-pool-size` | Upper bound on connections. Start with CPU cores * 2 and adjust based on load testing. |
| `minimum-idle` | Minimum idle connections to maintain. Set equal to `maximum-pool-size` for consistent latency. |
| `connection-timeout` | Maximum time (ms) to wait for a connection from the pool before throwing an exception. |
| `idle-timeout` | Maximum time (ms) a connection can sit idle before being retired. |
| `max-lifetime` | Maximum lifetime (ms) of a connection. Set slightly shorter than your database's connection timeout. |
| `connection-test-query` | Validation query for drivers that do not support JDBC4's `isValid()`. Most modern drivers do not need this. |

Storm obtains connections from the `DataSource` for each operation (or transaction) and returns them to the pool immediately afterward. This means connection pool tuning directly affects Storm's throughput and latency characteristics.

---

## Template Decorator

The `TemplateDecorator` interface lets you customize how Storm resolves table names, column names, and foreign key column names. This is useful when your database uses a naming convention that differs from Storm's default camelCase-to-snake_case conversion, or when you need to add a schema prefix or other transformation globally.

The decorator is passed as a `UnaryOperator` to the `ORMTemplate.of()` factory method. It receives the default decorator and returns a modified version.
### Available Resolvers

| Method | Default Behavior | Use Case |
|--------|------------------|----------|
| `withTableNameResolver` | `CamelCase` to `snake_case` (e.g., `UserProfile` to `user_profile`) | Schema prefix, uppercase tables, custom naming |
| `withColumnNameResolver` | `camelCase` to `snake_case` (e.g., `firstName` to `first_name`) | Uppercase columns, custom naming |
| `withForeignKeyResolver` | `camelCase` to `snake_case` + `_id` suffix (e.g., `city` to `city_id`) | Custom FK naming conventions |

### Example: Uppercase Table and Column Names

[Kotlin]

```kotlin
val orm = dataSource.orm { decorator ->
    decorator
        .withTableNameResolver(TableNameResolver.toUpperCase(TableNameResolver.DEFAULT))
        .withColumnNameResolver(ColumnNameResolver.toUpperCase(ColumnNameResolver.DEFAULT))
}
```

[Java]

```java
var orm = ORMTemplate.of(dataSource, decorator -> decorator
    .withTableNameResolver(TableNameResolver.toUpperCase(TableNameResolver.DEFAULT))
    .withColumnNameResolver(ColumnNameResolver.toUpperCase(ColumnNameResolver.DEFAULT))
);
```

### Example: Schema Prefix

[Kotlin]

```kotlin
val orm = dataSource.orm { decorator ->
    decorator.withTableNameResolver { type ->
        "myschema." + TableNameResolver.DEFAULT.resolveTableName(type)
    }
}
```

[Java]

```java
var orm = ORMTemplate.of(dataSource, decorator -> decorator
    .withTableNameResolver(type -> "myschema." + TableNameResolver.DEFAULT.resolveTableName(type))
);
```

In Spring Boot, apply the decorator when defining your `ORMTemplate` bean.
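The default conversions in the resolver table above amount to camelCase-to-snake_case splitting, plus an `_id` suffix for foreign keys. A standalone sketch of that behavior (our own illustration, not Storm's actual resolver code):

```java
public class NameConventions {

    // Insert an underscore before each interior upper-case letter, then lower-case everything.
    static String toSnakeCase(String name) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < name.length(); i++) {
            char c = name.charAt(i);
            if (Character.isUpperCase(c) && i > 0) {
                sb.append('_');
            }
            sb.append(Character.toLowerCase(c));
        }
        return sb.toString();
    }

    // Foreign key columns additionally take the _id suffix.
    static String toForeignKey(String fieldName) {
        return toSnakeCase(fieldName) + "_id";
    }

    public static void main(String[] args) {
        System.out.println(toSnakeCase("UserProfile")); // prints user_profile
        System.out.println(toSnakeCase("firstName"));   // prints first_name
        System.out.println(toForeignKey("city"));       // prints city_id
    }
}
```

If your schema follows these defaults, no decorator is needed; the resolvers exist for schemas that deviate from them.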
If you use the starter and want to customize the auto-configured template, define your own `ORMTemplate` bean and the starter's auto-configured one will back off:

[Kotlin]

```kotlin
@Configuration
class StormConfig(private val dataSource: DataSource) {
    @Bean
    fun ormTemplate(): ORMTemplate = dataSource.orm { decorator ->
        decorator.withTableNameResolver(TableNameResolver.toUpperCase(TableNameResolver.DEFAULT))
    }
}
```

[Java]

```java
@Configuration
public class StormConfig {

    private final DataSource dataSource;

    public StormConfig(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Bean
    public ORMTemplate ormTemplate() {
        return ORMTemplate.of(dataSource, decorator -> decorator
            .withTableNameResolver(TableNameResolver.toUpperCase(TableNameResolver.DEFAULT)));
    }
}
```

---

## Spring Boot Starter

The Spring Boot Starter modules provide zero-configuration setup for Storm. Add the starter dependency and Storm auto-configures itself from the Spring Boot `DataSource`.

### What the Starter Provides

The starter auto-configures:

1. **`ORMTemplate` bean** created from the auto-configured `DataSource`. If you define your own `ORMTemplate` bean, the auto-configured one backs off.
2. **Repository scanning** via `AutoConfiguredRepositoryBeanFactoryPostProcessor`, which discovers repository interfaces in the `@SpringBootApplication` base package (and its sub-packages). If you define your own `RepositoryBeanFactoryPostProcessor` bean, the auto-configured one backs off.
3. **Transaction integration** (Kotlin only) by automatically activating `SpringTransactionConfiguration`, removing the need for `@EnableTransactionIntegration`.
4. **Configuration properties** bound from `storm.*` in `application.yml`/`application.properties`, passed to the `ORMTemplate` via `StormConfig`.
### Minimal Spring Boot Setup (with Starter)

With the starter, a complete Spring Boot application requires no Storm-specific configuration classes:

```kotlin
@SpringBootApplication
class Application

fun main(args: Array<String>) {
    runApplication<Application>(*args)
}
```

```properties
spring.datasource.url=jdbc:postgresql://localhost:5432/mydb
spring.datasource.username=myuser
spring.datasource.password=mypassword
```

That is it. The starter creates the `ORMTemplate`, discovers repositories, and enables transaction integration automatically.

### Configuration via application.yml

The starter binds Spring properties and builds a `StormConfig` that is passed to the `ORMTemplate` factory. Values not set in YAML fall back to system properties and then to built-in defaults:

```yaml
storm:
  ansi-escaping: false
  update:
    default-mode: ENTITY
    dirty-check: INSTANCE
    max-shapes: 5
  entity-cache:
    retention: default
  template-cache:
    size: 2048
  validation:
    skip: false
    warnings-only: false
    schema-mode: none
    strict: false
```

The `schema-mode` property controls startup schema validation: `none` (default) skips validation, `warn` logs mismatches without blocking startup, and `fail` blocks startup if any entity definitions do not match the database schema. The `strict` property controls whether warnings (type narrowing, nullability mismatches) are treated as errors. See the [Configuration](configuration.md#schema-validation) guide for details.

See the [Configuration](configuration.md) guide for a description of each property and the full precedence rules.

### Overriding Auto-Configuration

Each auto-configured bean backs off when you provide your own. This lets you customize behavior incrementally.
**Custom ORMTemplate:**

```kotlin
@Configuration
class StormConfig(private val dataSource: DataSource) {
    @Bean
    fun ormTemplate(): ORMTemplate = dataSource.orm { decorator ->
        decorator /* customize */
    }
}
```

**Custom repository scanning:**

```kotlin
@Configuration
class MyRepositoryPostProcessor : RepositoryBeanFactoryPostProcessor() {
    override val repositoryBasePackages: Array<String>
        get() = arrayOf("com.myapp.repository", "com.myapp.other")
}
```

### Minimal Spring Boot Setup (without Starter)

If you use the integration module directly (without the starter), you need to configure Storm manually:

```kotlin
@SpringBootApplication
@EnableTransactionManagement
class Application

@Configuration
@EnableTransactionIntegration
class StormConfig(private val dataSource: DataSource) {
    @Bean
    fun ormTemplate() = dataSource.orm
}

@Configuration
class MyRepositoryBeanFactoryPostProcessor : RepositoryBeanFactoryPostProcessor() {
    override val repositoryBasePackages: Array<String>
        get() = arrayOf("com.myapp.repository")
}
```

This gives you:

- Automatic DataSource from Spring Boot
- Transaction integration between Spring and Storm
- Repository auto-discovery and injection

## JPA Entity Manager

Storm can create an `ORMTemplate` from a JPA `EntityManager`, which lets you use Storm queries within existing JPA transactions and services. This is particularly useful during incremental [migration from JPA](migration-from-jpa.md), where you can convert one repository or query at a time without changing your transaction management strategy.

```java
@PersistenceContext
private EntityManager entityManager;

@Transactional
public void doWork() {
    var orm = ORMTemplate.of(entityManager);
    // Use orm alongside existing JPA code
}
```

## Transaction Propagation

When `@EnableTransactionIntegration` is active, Storm's programmatic transactions participate in Spring's transaction propagation.
This means a `transaction` or `transactionBlocking` block checks for an existing Spring-managed transaction before starting a new one. If a transaction already exists, the block joins it. If not, it creates a new independent transaction.

Understanding this behavior is important for controlling atomicity. When multiple operations must commit or roll back as a unit, they need to share the same transaction. When operations should be independent (for example, logging that should persist even if the main operation fails), they need separate transactions.

### Joining Existing Transactions

```kotlin
@Transactional
fun outerMethod() {
    // Spring starts a transaction
    transactionBlocking {
        // This block joins the Spring transaction
        orm.insert(user1)
    }
    transactionBlocking {
        // This block also joins the same transaction
        orm.insert(user2)
    }
    // Both inserts commit or rollback together
}
```

### Starting New Transactions

Without an outer `@Transactional`, each `transactionBlocking` block starts and commits its own transaction independently. A failure in one block does not affect previously committed blocks.

```kotlin
fun methodWithoutTransactional() {
    transactionBlocking {
        // Starts new transaction
        orm.insert(user1)
    } // Commits here
    transactionBlocking {
        // Starts another new transaction
        orm.insert(user2)
    } // Commits here
}
```

### Key Benefits of Programmatic Transactions

1. **Explicit boundaries.** See exactly where transactions start and end.
2. **Compile-time safety.** No risk of forgetting `@Transactional` on a method.
3. **Flexible composition.** Easily combine with Spring's declarative model.
4. **Reduced proxy overhead.** No need for Spring's transaction proxies in pure Storm code.
### Mixing Approaches

You can use both styles in the same application:

```kotlin
@Service
class OrderService(
    private val orm: ORMTemplate,
    private val paymentService: PaymentService // Uses @Transactional
) {
    @Transactional
    fun processOrder(order: Order) {
        // Spring transaction
        transactionBlocking {
            // Participates in Spring transaction
            orm.insert(order)
        }
        // Other @Transactional services also participate
        paymentService.processPayment(order)
    }
}
```

## Tips

1. **Use the Spring Boot Starter.** It eliminates boilerplate configuration and auto-discovers your repositories.
2. **Use `@Transactional` for declarative transactions.** Simple and familiar for Spring developers.
3. **Use programmatic transactions for complex flows.** Nested transactions, savepoints, and explicit propagation are easier to express in code.
4. **Configure Storm via `application.yml`.** The starter builds a `StormConfig` from Spring properties and passes it to the `ORMTemplate`.
5. **One `ORMTemplate` bean is enough.** Inject it into services or let repositories use it automatically.
6. **Works with any DataSource.** HikariCP, Tomcat pool, or any other connection pool that Spring Boot configures.

========================================
## Source: dialects.md
========================================

# Database Dialects

Storm works with any JDBC-compatible database using standard SQL. However, databases diverge on features like upserts, pagination, JSON handling, and native data types. Dialect packages let Storm take advantage of these database-specific capabilities while keeping your application code portable.

Your entities, repositories, and queries stay the same regardless of which database you use; only the dialect dependency changes.
## Supported Databases

| | Database | Dialect Package | Key Features |
|---|----------|-----------------|--------------|
| ![Oracle](https://img.shields.io/badge/Oracle-F80000?logo=oracle&logoColor=white) | Oracle | `storm-oracle` | Merge (`MERGE INTO`), sequences |
| ![SQL Server](https://img.shields.io/badge/SQL_Server-CC2927?logo=microsoftsqlserver&logoColor=white) | MS SQL Server | `storm-mssqlserver` | Merge (`MERGE INTO`), identity columns |
| ![PostgreSQL](https://img.shields.io/badge/PostgreSQL-4169E1?logo=postgresql&logoColor=white) | PostgreSQL | `storm-postgresql` | Upsert (`ON CONFLICT`), JSONB, arrays |
| ![MySQL](https://img.shields.io/badge/MySQL-4479A1?logo=mysql&logoColor=white) | MySQL | `storm-mysql` | Upsert (`ON DUPLICATE KEY`), JSON |
| ![MariaDB](https://img.shields.io/badge/MariaDB-003545?logo=mariadb&logoColor=white) | MariaDB | `storm-mariadb` | Upsert (`ON DUPLICATE KEY`), JSON |
| ![SQLite](https://img.shields.io/badge/SQLite-003B57?logo=sqlite&logoColor=white) | SQLite | `storm-sqlite` | Upsert (`ON CONFLICT`), file-based storage |
| ![H2](https://img.shields.io/badge/H2-0000bb?logoColor=white) | H2 | `storm-h2` | Merge (`MERGE INTO`), sequences, native UUID |

## Installation

Add the dialect dependency for your database. Dialects are runtime-only dependencies: they do not affect your compile-time code or entity definitions. Your entity classes, repositories, and queries are written against Storm's core API, not against any specific dialect. This means you can switch databases by changing a single dependency without modifying application code.
### Maven

```xml
<!-- Oracle -->
<dependency>
    <groupId>st.orm</groupId>
    <artifactId>storm-oracle</artifactId>
    <version>@@STORM_VERSION@@</version>
    <scope>runtime</scope>
</dependency>

<!-- MS SQL Server -->
<dependency>
    <groupId>st.orm</groupId>
    <artifactId>storm-mssqlserver</artifactId>
    <version>@@STORM_VERSION@@</version>
    <scope>runtime</scope>
</dependency>

<!-- PostgreSQL -->
<dependency>
    <groupId>st.orm</groupId>
    <artifactId>storm-postgresql</artifactId>
    <version>@@STORM_VERSION@@</version>
    <scope>runtime</scope>
</dependency>

<!-- MySQL -->
<dependency>
    <groupId>st.orm</groupId>
    <artifactId>storm-mysql</artifactId>
    <version>@@STORM_VERSION@@</version>
    <scope>runtime</scope>
</dependency>

<!-- MariaDB -->
<dependency>
    <groupId>st.orm</groupId>
    <artifactId>storm-mariadb</artifactId>
    <version>@@STORM_VERSION@@</version>
    <scope>runtime</scope>
</dependency>

<!-- SQLite -->
<dependency>
    <groupId>st.orm</groupId>
    <artifactId>storm-sqlite</artifactId>
    <version>@@STORM_VERSION@@</version>
    <scope>runtime</scope>
</dependency>

<!-- H2 -->
<dependency>
    <groupId>st.orm</groupId>
    <artifactId>storm-h2</artifactId>
    <version>@@STORM_VERSION@@</version>
    <scope>runtime</scope>
</dependency>
```

### Gradle (Groovy DSL)

```groovy
// Oracle
runtimeOnly 'st.orm:storm-oracle:@@STORM_VERSION@@'
// MS SQL Server
runtimeOnly 'st.orm:storm-mssqlserver:@@STORM_VERSION@@'
// PostgreSQL
runtimeOnly 'st.orm:storm-postgresql:@@STORM_VERSION@@'
// MySQL
runtimeOnly 'st.orm:storm-mysql:@@STORM_VERSION@@'
// MariaDB
runtimeOnly 'st.orm:storm-mariadb:@@STORM_VERSION@@'
// SQLite
runtimeOnly 'st.orm:storm-sqlite:@@STORM_VERSION@@'
// H2
runtimeOnly 'st.orm:storm-h2:@@STORM_VERSION@@'
```

### Gradle (Kotlin DSL)

```kotlin
// Oracle
runtimeOnly("st.orm:storm-oracle:@@STORM_VERSION@@")
// MS SQL Server
runtimeOnly("st.orm:storm-mssqlserver:@@STORM_VERSION@@")
// PostgreSQL
runtimeOnly("st.orm:storm-postgresql:@@STORM_VERSION@@")
// MySQL
runtimeOnly("st.orm:storm-mysql:@@STORM_VERSION@@")
// MariaDB
runtimeOnly("st.orm:storm-mariadb:@@STORM_VERSION@@")
// SQLite
runtimeOnly("st.orm:storm-sqlite:@@STORM_VERSION@@")
// H2
runtimeOnly("st.orm:storm-h2:@@STORM_VERSION@@")
```

## Automatic Detection

Storm automatically detects the appropriate dialect based on the JDBC connection URL. No additional configuration is required. When your application starts, Storm queries the `ServiceLoader` for available dialect implementations, inspects the JDBC URL, and selects the matching dialect. This means adding or switching a dialect is purely a dependency change with no code or configuration modifications.

For example, with the connection URL `jdbc:postgresql://localhost:5432/mydb`, Storm will automatically use the PostgreSQL dialect.
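For reference, these are the typical JDBC URL shapes for each supported database (host, port, and database names are illustrative; detection is based on the URL's driver prefix):

```text
jdbc:oracle:thin:@//localhost:1521/mydb           -> storm-oracle
jdbc:sqlserver://localhost:1433;databaseName=mydb -> storm-mssqlserver
jdbc:postgresql://localhost:5432/mydb             -> storm-postgresql
jdbc:mysql://localhost:3306/mydb                  -> storm-mysql
jdbc:mariadb://localhost:3306/mydb                -> storm-mariadb
jdbc:sqlite:mydb.sqlite                           -> storm-sqlite
jdbc:h2:mem:mydb                                  -> storm-h2
```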
## Database-Specific Features

### Upsert Support

Upsert operations are the primary reason most applications need a dialect. Without a dialect, Storm cannot generate the database-specific `INSERT ... ON CONFLICT` or `MERGE` syntax required for atomic upsert operations. Each database uses its own native syntax:

| Database | SQL Strategy | Conflict Detection |
|----------|--------------|--------------------|
| Oracle | `MERGE INTO ...` | Explicit match conditions |
| MS SQL Server | `MERGE INTO ...` | Explicit match conditions |
| PostgreSQL | `INSERT ... ON CONFLICT DO UPDATE` | Targets a specific unique constraint or index |
| MySQL | `INSERT ... ON DUPLICATE KEY UPDATE` | Primary key or any unique constraint |
| MariaDB | `INSERT ... ON DUPLICATE KEY UPDATE` | Primary key or any unique constraint |
| SQLite | `INSERT ... ON CONFLICT DO UPDATE` | Targets a specific unique constraint |
| H2 | `MERGE INTO ...` | Explicit match conditions |

See [Upserts](upserts.md) for usage examples.

### JSON Support

PostgreSQL's JSONB and MySQL/MariaDB's JSON types are fully supported when using the corresponding dialect with a JSON serialization library (`storm-jackson2`/`storm-jackson3` or `storm-kotlinx-serialization`). See [JSON Support](json.md) for details.

### Database-Specific Data Types

Beyond SQL syntax differences, databases support different native data types. Dialects handle the mapping between Kotlin/Java types and database-specific types automatically, so you can use idiomatic types in your entities without worrying about the underlying storage format.

- **Oracle:** NUMBER, CLOB, sequences for ID generation
- **MS SQL Server:** NVARCHAR, UNIQUEIDENTIFIER, IDENTITY
- **PostgreSQL:** JSONB, UUID, arrays, INET, CIDR
- **MySQL/MariaDB:** JSON, TINYINT for booleans, ENUM
- **SQLite:** Dynamic typing, AUTOINCREMENT, file-based storage
- **H2:** Native UUID, sequences, ARRAY types

## Without a Dialect

Storm works without a specific dialect package by generating standard SQL.
The core framework handles entity mapping, queries, joins, transactions, streaming, dirty checking, and caching using only standard SQL. However, a few features depend on database-specific syntax and are unavailable without a dialect:

- **Upsert operations**, which need the database's native `INSERT ... ON CONFLICT` or `MERGE` syntax
- **Database-specific optimizations** such as native pagination strategies

All other features work identically regardless of dialect.

## Testing with SQLite

SQLite is a lightweight option for testing. It stores data in a single file (or in memory) and requires no server process. Add the `storm-sqlite` dialect dependency to enable SQLite-specific features like upsert support.

[Kotlin]

```kotlin
val dataSource = SQLiteDataSource().apply {
    url = "jdbc:sqlite::memory:"
}
val orm = ORMTemplate.of(dataSource)
```

[Java]

```java
var dataSource = new SQLiteDataSource();
dataSource.setUrl("jdbc:sqlite::memory:");
var orm = ORMTemplate.of(dataSource);
```

Note that SQLite does not support sequences, row-level locking, or `INFORMATION_SCHEMA`. Constraint discovery uses JDBC metadata, and locking relies on SQLite's file-level locking mechanism.

## Testing with H2

H2 is an in-memory Java SQL database that starts instantly and requires no external processes, making it the default choice for unit tests. Because H2 runs in-process, tests start in milliseconds and do not require Docker, network access, or database installation.

[Kotlin]

```kotlin
val dataSource = JdbcDataSource().apply {
    setUrl("jdbc:h2:mem:test;DB_CLOSE_DELAY=-1")
}
val orm = ORMTemplate.of(dataSource)
```

[Java]

```java
var dataSource = new JdbcDataSource();
dataSource.setUrl("jdbc:h2:mem:test;DB_CLOSE_DELAY=-1");
var orm = ORMTemplate.of(dataSource);
```

For basic testing without upsert support, H2 works without any dialect dependency.
To enable upsert support and other H2-specific optimizations (native UUID handling, tuple comparisons), add the `storm-h2` dialect dependency.

## Integration Testing with Real Databases

While H2 is excellent for fast unit tests, it does not support all database-specific features (JSONB, arrays, database-specific functions). For thorough testing, you should also run integration tests against your production database. Each dialect module includes a `docker-compose.yml` file that starts the corresponding database in a container, making integration testing straightforward.

For example, to test with PostgreSQL:

```bash
cd storm-postgresql
docker-compose up -d
mvn test -pl storm-postgresql
```

## Tips

1. **Always include the dialect** for production databases to unlock all features
2. **Use H2 or SQLite** for unit tests; add `storm-h2` or `storm-sqlite` for upsert support
3. **Dialect is runtime-only**; it doesn't affect your compile-time code or entity definitions
4. **One dialect per application**; Storm auto-detects the right dialect from your connection URL
5. **Test with both**: Use H2/SQLite for fast unit tests and the production dialect for integration tests

---

## See Also

- [Upserts](upserts.md) for dialect-specific upsert strategies and usage examples
- [JSON](json.md) for database-specific JSON column support

========================================
## Source: testing.md
========================================

# Testing

Writing tests for database code can involve repetitive setup: creating a `DataSource`, running schema scripts, obtaining an `ORMTemplate`, and wiring everything together before the first assertion. Storm's test support module reduces this to a single annotation, letting you focus on the behavior you are testing rather than infrastructure.

The module provides two categories of functionality:

1. **JUnit 5 integration** (`@StormTest`) for automatic database setup, script execution, and parameter injection.
2. 
**Statement capture** (`SqlCapture`) for recording and inspecting SQL statements generated during test execution. This component is framework-agnostic and works independently of JUnit. --- ## Installation Add `storm-test` as a test dependency. **Maven:** ```xml <dependency> <groupId>st.orm</groupId> <artifactId>storm-test</artifactId> <scope>test</scope> </dependency> ``` **Gradle (Kotlin DSL):** ```kotlin testImplementation("st.orm:storm-test") ``` The module uses H2 as its default in-memory database. To use H2, add it as a test dependency if it is not already present: ```xml <dependency> <groupId>com.h2database</groupId> <artifactId>h2</artifactId> <scope>test</scope> </dependency> ``` ### JUnit 5 is Optional JUnit 5 (`junit-jupiter-api`) is an optional dependency of `storm-test`. It is not pulled in transitively, so it does not appear on your classpath unless you add it yourself. Most projects already have JUnit Jupiter as a test dependency, in which case the `@StormTest` annotation and `StormExtension` are available automatically with no extra configuration. If you only need `SqlCapture` and `CapturedSql` (for example, in a project that uses TestNG, or for development-time debugging outside of any test framework), `storm-test` works without JUnit on the classpath. The JUnit-specific classes simply remain unused. --- ## JUnit 5 Integration ### @StormTest The `@StormTest` annotation activates the Storm JUnit 5 extension on a test class. It creates an in-memory H2 database, optionally executes SQL scripts, and injects test method parameters automatically.
A minimal example: [Kotlin] ```kotlin @StormTest(scripts = ["/schema.sql", "/data.sql"]) class UserRepositoryTest { @Test fun `should find all users`(orm: ORMTemplate) { val users = orm.entity(User::class).findAll() users.size shouldBe 3 } } ``` [Java] ```java @StormTest(scripts = {"/schema.sql", "/data.sql"}) class UserRepositoryTest { @Test void shouldFindAllUsers(ORMTemplate orm) { var users = orm.entity(User.class).findAll(); assertEquals(3, users.size()); } } ``` The annotation accepts the following attributes: | Attribute | Default | Description | |------------|---------------------------------|-------------------------------------------------------------------------------------------| | `scripts` | `{}` | Classpath SQL scripts to execute before tests run. Executed once per test class. | | `url` | `""` | JDBC URL. Defaults to an H2 in-memory database with a unique name derived from the class. Ignored when a static `dataSource()` factory method is present (see [DataSource Factory Method](#datasource-factory-method)). | | `username` | `"sa"` | Database username. Ignored when a static `dataSource()` factory method is present. | | `password` | `""` | Database password. Ignored when a static `dataSource()` factory method is present. | ### Parameter Injection Test methods can declare parameters of the following types, and Storm will resolve them automatically: | Parameter type | What is injected | |--------------------|---------------------------------------------------------------------------------| | `DataSource` | The test database connection. | | `SqlCapture` | A fresh capture instance for recording SQL statements (see below). | | Any type with a static `of(DataSource)` factory method | An instance created via that factory method. This covers `ORMTemplate` and custom types that follow the same pattern. | The factory method resolution also supports Kotlin companion objects. If a class has a `Companion` field with an `of(DataSource)` method, Storm will use it. 
This means `ORMTemplate` works seamlessly in both Kotlin and Java tests without any additional configuration. ### Example: Full Test Class [Kotlin] ```kotlin @StormTest(scripts = ["/schema.sql", "/data.sql"]) class ItemRepositoryTest { @Test fun `should insert and retrieve`(orm: ORMTemplate) { orm.entity(Item::class).insert(Item(name = "NewItem")) val items = orm.entity(Item::class).findAll() items.size shouldBe 4 } @Test fun `should inject data source`(dataSource: DataSource) { dataSource.connection.use { conn -> conn.createStatement().use { stmt -> stmt.executeQuery("SELECT COUNT(*) FROM item").use { rs -> rs.next() shouldBe true rs.getInt(1) shouldBe 3 } } } } } ``` [Java] ```java record Item(@PK Integer id, String name) implements Entity {} @StormTest(scripts = {"/schema.sql", "/data.sql"}) class ItemRepositoryTest { @Test void shouldInsertAndRetrieve(ORMTemplate orm) { orm.entity(Item.class).insert(new Item(0, "NewItem")); var items = orm.entity(Item.class).findAll(); assertEquals(4, items.size()); } @Test void shouldInjectDataSource(DataSource dataSource) throws Exception { try (var conn = dataSource.getConnection(); var stmt = conn.createStatement(); var rs = stmt.executeQuery("SELECT COUNT(*) FROM item")) { assertTrue(rs.next()); assertTrue(rs.getInt(1) >= 3); } } } ``` ### Using a Custom Database By default, `@StormTest` creates an H2 in-memory database. This works well for dialect-agnostic logic, but H2 has its own SQL dialect. If your schema scripts or queries use database-specific syntax (for example, PostgreSQL's `SERIAL` type, MySQL's `AUTO_INCREMENT`, or Oracle's sequence syntax), they will not run against H2. In these cases, you need to test against the actual target database. To point `@StormTest` at a different database, specify a JDBC URL. 
Storm auto-detects the correct `SqlDialect` from the URL: ```java @StormTest( url = "jdbc:postgresql://localhost:5432/testdb", username = "testuser", password = "testpass", scripts = {"/schema.sql", "/data.sql"} ) class PostgresTest { // ... } ``` This requires a running database instance at the given URL. For local development you can start one manually (the dialect modules include `docker-compose.yml` files as a reference), but for automated and CI testing, [Testcontainers](https://testcontainers.com/) is the recommended approach. Testcontainers starts a disposable Docker container before the test and tears it down afterwards, so tests remain self-contained and reproducible. ### DataSource Factory Method Since `@StormTest` takes its URL as a compile-time annotation attribute, it cannot receive the dynamic URL that Testcontainers assigns at runtime. To solve this, define a static `dataSource()` method on the test class. When `StormExtension` finds this method, it uses the returned `DataSource` instead of creating one from the annotation's `url`, `username`, and `password` attributes. SQL scripts still execute against the returned `DataSource`, and all parameter injection (including `ORMTemplate`, `SqlCapture`, and `DataSource`) works as usual. [Kotlin] ```kotlin @StormTest(scripts = ["/schema-postgres.sql", "/data.sql"]) @Testcontainers class PostgresTest { companion object { @Container val postgres = PostgreSQLContainer("postgres:latest") .withDatabaseName("test") .withUsername("test") .withPassword("test") @JvmStatic fun dataSource(): DataSource { val dataSource = PGSimpleDataSource() dataSource.setUrl(postgres.jdbcUrl) dataSource.user = postgres.username dataSource.password = postgres.password return dataSource } } @Test fun `should use PostgreSQL dialect`(orm: ORMTemplate) { // orm is connected to the Testcontainers PostgreSQL instance, // scripts have been executed, and parameter injection works as usual. 
} } ``` [Java] ```java @StormTest(scripts = {"/schema-postgres.sql", "/data.sql"}) @Testcontainers class PostgresTest { @Container static PostgreSQLContainer postgres = new PostgreSQLContainer<>("postgres:latest") .withDatabaseName("test") .withUsername("test") .withPassword("test"); static DataSource dataSource() { var dataSource = new PGSimpleDataSource(); dataSource.setUrl(postgres.getJdbcUrl()); dataSource.setUser(postgres.getUsername()); dataSource.setPassword(postgres.getPassword()); return dataSource; } @Test void shouldUsePostgreSQLDialect(ORMTemplate orm) { // orm is connected to the Testcontainers PostgreSQL instance, // scripts have been executed, and parameter injection works as usual. } } ``` The factory method must be static, take no arguments, and return a `DataSource`. Kotlin companion object methods are also supported. --- ## Statement Capture When testing database code, knowing _what_ SQL is executed is often as important as knowing _whether_ the operation succeeded. A test might pass because the correct rows were returned, but the underlying query could be inefficient, missing a filter, or using unexpected parameters. `SqlCapture` gives you visibility into the SQL that Storm generates, so you can write assertions not just on results, but on the queries themselves. `SqlCapture` records every SQL statement generated during a block of code, along with its operation type (`SELECT`, `INSERT`, `UPDATE`, `DELETE`) and bound parameter values. It provides a high-level API designed for test assertions: count statements, filter by operation type, and inspect individual queries. `SqlCapture` is framework-agnostic. It does not depend on JUnit and can be used with any test framework, or even outside of tests entirely (for example, in development-time debugging or diagnostics). 
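The scoped, thread-bound capture design described above can be sketched in plain Java. The class below is an illustrative stand-in, not Storm's actual `SqlCapture` implementation: a `ThreadLocal` sink means only statements emitted on the calling thread, inside the block, are recorded.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Minimal sketch of a scoped, thread-local statement capture (illustrative only).
public class CaptureSketch {
    private static final ThreadLocal<List<String>> ACTIVE = new ThreadLocal<>();
    private final List<String> statements = new ArrayList<>();

    // Called by the "framework" whenever it is about to execute SQL.
    public static void emit(String sql) {
        List<String> sink = ACTIVE.get();
        if (sink != null) sink.add(sql);
    }

    public <T> T execute(Supplier<T> action) {
        ACTIVE.set(statements);          // start capturing on this thread
        try {
            return action.get();
        } finally {
            ACTIVE.remove();             // stop capturing when the block exits
        }
    }

    public int count() { return statements.size(); }
    public List<String> statements() { return List.copyOf(statements); }

    public static void main(String[] args) {
        CaptureSketch capture = new CaptureSketch();
        CaptureSketch.emit("SELECT 1");  // outside the block: not recorded
        String result = capture.execute(() -> {
            CaptureSketch.emit("SELECT * FROM users");
            return "ok";
        });
        System.out.println(result + " " + capture.count()); // ok 1
    }
}
```

The thread-local scope is also why multi-threaded code inside the block is not recorded (see the tips at the end of this page).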
### Use Cases **Verifying query counts.** After refactoring a repository method or changing entity relationships, you want to confirm that the number of SQL statements has not changed unexpectedly. A simple count assertion catches regressions early. **Asserting operation types.** When testing a service method that should only read data, you can assert that no `INSERT`, `UPDATE`, or `DELETE` statements were generated. This is a lightweight way to verify that read-only operations remain read-only. **Inspecting SQL structure.** For custom queries or complex filter logic, you may want to verify that the generated SQL contains specific clauses (such as a `WHERE` condition or a `JOIN`) or that the correct parameters were bound. This is especially useful when testing query builder logic that constructs dynamic predicates. **Debugging during development.** When a query does not return the expected results, wrapping the operation in a `SqlCapture` block lets you print the exact SQL and parameters without configuring logging or attaching a debugger. 
### Basic Usage Wrap any Storm operation in a `run`, `execute`, or `executeThrowing` call to capture the SQL statements it generates: [Kotlin] ```kotlin val capture = SqlCapture() capture.run { orm.entity(User::class).findAll() } capture.count(Operation.SELECT) shouldBe 1 ``` [Java] ```java var capture = new SqlCapture(); capture.run(() -> orm.entity(User.class).findAll()); assertEquals(1, capture.count(Operation.SELECT)); ``` The `execute` variant returns the result of the captured operation, so you can combine capture with normal test assertions in a single step: [Kotlin] ```kotlin val capture = SqlCapture() val users = capture.execute { orm.entity(User::class).findAll() } users.size shouldBe 3 capture.count(Operation.SELECT) shouldBe 1 ``` [Java] ```java var capture = new SqlCapture(); List<User> users = capture.execute(() -> orm.entity(User.class).findAll()); assertEquals(3, users.size()); assertEquals(1, capture.count(Operation.SELECT)); ``` ### Capture Methods | Method | Description | |---------------------|-------------------------------------------------------------------------| | `run(Runnable)` | Captures SQL during the action. Returns nothing. | | `execute(Supplier)` | Captures SQL during the action. Returns the action's result. | | `executeThrowing(Callable)` | Same as `execute`, but allows checked exceptions. | All three methods are scoped: only SQL statements generated within the block are recorded. Code running before or after the block, or on other threads, is not affected. ### Inspecting Captured Statements Each captured statement is represented as a `CapturedSql` record with three fields: | Field | Type | Description | |--------------|-------------------|---------------------------------------------------------------------------------| | `operation` | `Operation` | The SQL operation type: `SELECT`, `INSERT`, `UPDATE`, `DELETE`, or `UNDEFINED`. | | `statement` | `String` | The SQL text with `?` placeholders for bind variables.
| | `parameters` | `List<Object>` | The bound parameter values in order. | Query the capture results using `count()`, `statements()`, or their filtered variants: ```java // Total statement count int total = capture.count(); // Count by operation type int selects = capture.count(Operation.SELECT); int inserts = capture.count(Operation.INSERT); // Get all captured statements List<CapturedSql> all = capture.statements(); // Filter by operation type List<CapturedSql> selectStmts = capture.statements(Operation.SELECT); // Inspect a specific statement CapturedSql stmt = selectStmts.getFirst(); String sql = stmt.statement(); // SQL with ? placeholders List<Object> params = stmt.parameters(); // Bound parameter values Operation op = stmt.operation(); // SELECT, INSERT, UPDATE, DELETE, or UNDEFINED ``` ### Accumulation and Clearing Statements accumulate across multiple `run`/`execute` calls on the same `SqlCapture` instance. This is useful when you want to measure the total SQL activity of a sequence of operations. Use `clear()` to reset between captures when you need to measure operations independently: ```java capture.run(() -> orm.entity(User.class).findAll()); capture.run(() -> orm.entity(User.class).findAll()); assertEquals(2, capture.count(Operation.SELECT)); capture.clear(); assertEquals(0, capture.count()); ``` ### Verifying Query Counts A count assertion is the simplest and most common use of `SqlCapture`.
It protects against regressions where a code change inadvertently introduces extra queries: [Kotlin] ```kotlin @Test fun `bulk insert should use single statement`(orm: ORMTemplate, capture: SqlCapture) { val items = listOf(Item(name = "A"), Item(name = "B"), Item(name = "C")) capture.run { orm.entity(Item::class).insertAll(items) } capture.count(Operation.INSERT) shouldBe 1 } ``` [Java] ```java @Test void bulkInsertShouldUseSingleStatement(ORMTemplate orm, SqlCapture capture) { var items = List.of(new Item(0, "A"), new Item(0, "B"), new Item(0, "C")); capture.run(() -> orm.entity(Item.class).insertAll(items)); assertEquals(1, capture.count(Operation.INSERT)); } ``` ### Verifying Statement Content For finer-grained assertions, inspect the SQL text and bound parameters of individual statements. This is useful when testing custom query logic to ensure the correct filters and parameters are applied: ```java @Test void findByIdShouldUseWhereClause(ORMTemplate orm, SqlCapture capture) { capture.run(() -> orm.entity(User.class).findById(42)); var stmts = capture.statements(Operation.SELECT); assertEquals(1, stmts.size()); assertTrue(stmts.getFirst().statement().toUpperCase().contains("WHERE")); assertEquals(List.of(42), stmts.getFirst().parameters()); } ``` ### Asserting Read-Only Behavior When a service method should only read data, you can verify that no write operations were generated: ```java @Test void reportGenerationShouldBeReadOnly(ORMTemplate orm, SqlCapture capture) { capture.run(() -> generateReport(orm)); assertEquals(0, capture.count(Operation.INSERT)); assertEquals(0, capture.count(Operation.UPDATE)); assertEquals(0, capture.count(Operation.DELETE)); } ``` --- ## With JUnit 5 Parameter Injection When using `@StormTest`, a fresh `SqlCapture` instance is automatically injected into each test method that declares it as a parameter. 
This means you do not need to create one manually, and each test starts with a clean slate: ```java @StormTest(scripts = {"/schema.sql", "/data.sql"}) class QueryCountTest { @Test void insertShouldGenerateOneStatement(ORMTemplate orm, SqlCapture capture) { capture.run(() -> orm.entity(Item.class).insert(new Item(0, "Test"))); assertEquals(1, capture.count(Operation.INSERT)); } @Test void eachTestGetsAFreshCapture(SqlCapture capture) { // No statements from previous tests assertEquals(0, capture.count()); } } ``` --- ## Ktor Testing The `storm-ktor-test` module provides a `testStormApplication` function that combines Storm's H2 setup with Ktor's `testApplication` builder. It creates an in-memory database, executes SQL scripts, and exposes a `StormTestScope` with `stormDataSource`, `stormOrm`, and `stormSqlCapture`. ```kotlin @Test fun `GET users returns list`() = testStormApplication( scripts = listOf("/schema.sql", "/data.sql"), ) { scope -> application { install(Storm) { dataSource = scope.stormDataSource } routing { userRoutes() } } client.get("/users").apply { assertEquals(HttpStatusCode.OK, status) } } ``` You can also combine the existing `@StormTest` annotation with Ktor's `testApplication` for a more concise setup: ```kotlin @StormTest(scripts = ["/schema.sql", "/data.sql"]) class UserRouteTest { @Test fun `users endpoint returns data`(dataSource: DataSource) = testApplication { application { install(Storm) { this.dataSource = dataSource } routing { userRoutes() } } client.get("/users").apply { assertEquals(HttpStatusCode.OK, status) } } } ``` See [Ktor Integration](ktor-integration.md#testing) for more details. --- ## Tips 1. **Keep SQL scripts small and focused.** Each test class should set up only the tables and data it needs. This keeps tests fast and independent. 2. **Use `SqlCapture` to verify query counts.** Asserting the number of statements an operation produces is an effective way to catch unintended query changes during refactoring. 3. 
**Clear between captures** when a single test method needs to measure multiple operations independently. 4. **Prefer `@StormTest` over manual setup.** It eliminates boilerplate and ensures consistent database lifecycle management across test classes. 5. **`SqlCapture` is thread-local.** Captures are bound to the calling thread, so multi-threaded tests will only record statements from the thread that called `run`/`execute`. ======================================== ## Source: converters.md ======================================== # Converters Storm maps record components to database columns using built-in type support for standard Java and JDBC types. When your entity contains a type that is not directly supported by the JDBC driver, or when you want a custom mapping between your domain model and the database, you need a converter. A converter is a bidirectional transformer that translates between a JDBC-compatible type (the "database type") and your entity's field type (the "entity type"). Storm's converter system is designed around a simple interface with clear lifecycle semantics, and it supports both explicit and automatic application. --- ## The Converter Interface The `Converter` interface defines two methods: ```java public interface Converter<D, E> { /** * Converts an entity value to a database column value. */ D toDatabase(@Nullable E value); /** * Converts a database column value to an entity value. */ E fromDatabase(@Nullable D dbValue); } ``` The type parameters are: | Parameter | Role | Constraint | |---|---|---| | `D` | The database-visible type | Must be a type that JDBC can handle natively (e.g., `String`, `Integer`, `BigDecimal`, `Timestamp`). | | `E` | The entity value type | The type of the record component in your entity. | Both methods receive a possibly-null value and may return null. This allows converters to handle nullable columns naturally. ### Requirements Every converter class must provide a **public no-argument constructor**.
Storm instantiates converters via classpath scanning and cannot inject dependencies. If your converter needs external state, use a static configuration pattern or a lookup in the constructor. --- ## Applying Converters Storm provides three ways to control conversion: ### 1. Explicit Converter Use the `@Convert` annotation on a record component to specify exactly which converter to use: [Kotlin] ```kotlin @DbTable("product") data class Product( @PK val id: Int, val name: String, @Convert(converter = MoneyConverter::class) val price: Money ) : Entity ``` [Java] ```java @DbTable("product") public record Product( @PK int id, String name, @Convert(converter = MoneyConverter.class) Money price ) implements Entity {} ``` When `@Convert` specifies a converter, that converter is always used, regardless of any auto-apply converters that might match. ### 2. Auto-Apply (Default Converter) Annotate a converter class with `@DefaultConverter` to make it automatically apply whenever its entity type (`E`) matches a record component and no explicit `@Convert` is present: [Kotlin] ```kotlin @DefaultConverter class MoneyConverter : Converter<BigDecimal, Money> { override fun toDatabase(value: Money?): BigDecimal? = value?.amount override fun fromDatabase(dbValue: BigDecimal?): Money? = dbValue?.let { Money(it) } } ``` With this converter registered, any `Money` component in any entity will automatically use `MoneyConverter` without needing `@Convert`: ```kotlin @DbTable("product") data class Product( @PK val id: Int, val name: String, val price: Money // Automatically uses MoneyConverter. ) : Entity ``` [Java] ```java @DefaultConverter public class MoneyConverter implements Converter<BigDecimal, Money> { @Override public BigDecimal toDatabase(Money value) { return value != null ? value.amount() : null; } @Override public Money fromDatabase(BigDecimal dbValue) { return dbValue != null ?
new Money(dbValue) : null; } } ``` With this converter registered, any `Money` component in any entity will automatically use `MoneyConverter` without needing `@Convert`: ```java @DbTable("product") public record Product( @PK int id, String name, Money price // Automatically uses MoneyConverter. ) implements Entity {} ``` ### 3. Disabling Conversion If an auto-apply converter would match a component but you want the built-in mapping instead, disable it explicitly: [Kotlin] ```kotlin @DbTable("product") data class Product( @PK val id: Int, val name: String, @Convert(disableConversion = true) val rawPrice: BigDecimal ) : Entity ``` [Java] ```java @DbTable("product") public record Product( @PK int id, String name, @Convert(disableConversion = true) BigDecimal rawPrice ) implements Entity {} ``` --- ## Resolution Order When Storm encounters a record component during mapping, it resolves the converter in this order: ``` 1. Is there an explicit @Convert(converter = ...) annotation? └── YES → Use that converter. └── NO → Continue. 2. Is there an @Convert(disableConversion = true) annotation? └── YES → Use built-in mapping (no converter). └── NO → Continue. 3. Is there exactly one @DefaultConverter that matches type E? └── YES → Use that auto-apply converter. └── NO (zero matches) → Use built-in mapping. └── NO (multiple matches) → ERROR: ambiguous converters. ``` When multiple `@DefaultConverter` classes match the same entity type and no explicit `@Convert` is present, Storm fails with a clear error message identifying the conflicting converters. Resolve the conflict by adding an explicit `@Convert` annotation on the component. 
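The resolution order above can be sketched as a small decision function. The types below are simplified stand-ins for Storm's annotation metadata, for illustration only — not the framework's real API:

```java
import java.util.List;

// Simplified stand-in for the @Convert annotation's attributes (illustrative only).
record ConvertInfo(String converter, boolean disableConversion) {}

public class ResolutionSketch {
    // Mirrors the documented precedence: explicit @Convert wins, then
    // disableConversion, then a single matching @DefaultConverter, else built-in.
    // Multiple matching default converters are an error.
    static String resolve(ConvertInfo convert, List<String> defaultConverters) {
        if (convert != null && convert.converter() != null) return convert.converter();
        if (convert != null && convert.disableConversion()) return "built-in";
        if (defaultConverters.size() == 1) return defaultConverters.get(0);
        if (defaultConverters.isEmpty()) return "built-in";
        throw new IllegalStateException("Ambiguous converters: " + defaultConverters);
    }
}
```

Note how an explicit `@Convert` short-circuits everything else, which is why adding one is the documented fix for ambiguous default converters.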
--- ## Built-In Type Support Storm handles the following types natively without any converter: | Category | Types | |---|---| | **Primitives and wrappers** | `boolean`, `byte`, `short`, `int`, `long`, `float`, `double`, `char` and their boxed equivalents | | **Strings** | `String` | | **Numeric** | `BigDecimal`, `BigInteger` | | **Date/Time** | `LocalDate`, `LocalTime`, `LocalDateTime`, `Instant`, `OffsetDateTime`, `ZonedDateTime` | | **Binary** | `ByteBuffer` (read-only) | | **Enums** | `Enum` types (by name or ordinal via `@DbEnum`) | | **Other** | `UUID` | If your entity field is one of these types, you do not need a converter. Custom converters are only needed for types not in this list. --- ## Practical Examples ### Money Type A domain-specific value type for monetary amounts: [Kotlin] ```kotlin data class Money(val amount: BigDecimal) @DefaultConverter class MoneyConverter : Converter<BigDecimal, Money> { override fun toDatabase(value: Money?): BigDecimal? = value?.amount override fun fromDatabase(dbValue: BigDecimal?): Money? = dbValue?.let { Money(it) } } ``` [Java] ```java public record Money(BigDecimal amount) {} @DefaultConverter public class MoneyConverter implements Converter<BigDecimal, Money> { @Override public BigDecimal toDatabase(Money value) { return value != null ? value.amount() : null; } @Override public Money fromDatabase(BigDecimal dbValue) { return dbValue != null ? new Money(dbValue) : null; } } ``` ### Encrypted Field Transparent encryption for sensitive columns. The database stores the encrypted text, and the application sees the plaintext: [Kotlin] ```kotlin class EncryptedStringConverter : Converter<String, String> { private val cipher = EncryptionService.instance() override fun toDatabase(value: String?): String? = value?.let { cipher.encrypt(it) } override fun fromDatabase(dbValue: String?): String?
= dbValue?.let { cipher.decrypt(it) } } ``` Apply it explicitly on sensitive fields: ```kotlin @DbTable("user") data class User( @PK val id: Int, val name: String, @Convert(converter = EncryptedStringConverter::class) val socialSecurityNumber: String ) : Entity ``` [Java] ```java public class EncryptedStringConverter implements Converter<String, String> { private final EncryptionService cipher = EncryptionService.instance(); @Override public String toDatabase(String value) { return value != null ? cipher.encrypt(value) : null; } @Override public String fromDatabase(String dbValue) { return dbValue != null ? cipher.decrypt(dbValue) : null; } } ``` Apply it explicitly on sensitive fields: ```java @DbTable("user") public record User( @PK int id, String name, @Convert(converter = EncryptedStringConverter.class) String socialSecurityNumber ) implements Entity {} ``` --- ## See Also To understand how Storm maps database columns to constructor parameters, see [Hydration](hydration.md). ======================================== ## Source: json.md ======================================== # JSON Support Storm provides first-class support for JSON columns, allowing you to store and query JSON data directly in your entities. Annotate a field with `@Json` and Storm handles serialization/deserialization automatically. ## Installation Storm supports two JSON serialization libraries. Choose the one that fits your project: ### Jackson (Kotlin & Java) Works with both Kotlin and Java projects. Two variants are available, matching the two major Jackson versions. Both modules require Jackson to be present on the classpath; they do not bring Jackson as a transitive dependency. If you are using Spring Boot, Jackson is already included and the version is managed for you: Spring Boot 3.x ships with Jackson 2, while Spring Boot 4+ ships with Jackson 3. Choose the Storm module that matches your Jackson version.
If you are not using Spring Boot, add Jackson to your project directly alongside the corresponding Storm module. **Jackson 2** (requires Jackson 2.17+): ```xml <dependency> <groupId>st.orm</groupId> <artifactId>storm-jackson2</artifactId> <version>@@STORM_VERSION@@</version> </dependency> ``` ```groovy implementation 'st.orm:storm-jackson2:@@STORM_VERSION@@' ``` **Jackson 3** (requires Jackson 3.0+): ```xml <dependency> <groupId>st.orm</groupId> <artifactId>storm-jackson3</artifactId> <version>@@STORM_VERSION@@</version> </dependency> ``` ```groovy implementation 'st.orm:storm-jackson3:@@STORM_VERSION@@' ``` The two modules are mutually exclusive on the classpath. Both provide the same public API (`st.orm.jackson` package), so switching between them requires only changing the Maven dependency. ### Kotlinx Serialization (Kotlin) A Kotlin-native option with compile-time safety. Requires the `kotlinx-serialization` Gradle plugin. ```kotlin plugins { kotlin("plugin.serialization") version "2.0.0" } dependencies { implementation("st.orm:storm-kotlinx-serialization:@@STORM_VERSION@@") } ``` Storm auto-detects the serialization library at runtime. Just add the dependency and it works. --- ## JSON Columns Use `@Json` to map a field to a JSON column. [Kotlin] ```kotlin data class User( @PK val id: Int = 0, val email: String, @Json val preferences: Map<String, Any> ) : Entity ``` The `preferences` field is automatically serialized to JSON when writing and deserialized when reading. [Java] ```java record User(@PK Integer id, String email, @Json Map<String, Object> preferences ) implements Entity {} ``` The `preferences` field is automatically serialized to JSON when writing and deserialized when reading. ## Complex Types JSON columns are not limited to maps and primitive collections. You can store structured domain objects directly, preserving their full type hierarchy during serialization and deserialization. This is useful when the nested object has a well-defined shape but does not need its own database table. [Kotlin] When using kotlinx.serialization, annotate the nested type with `@Serializable`.
Jackson discovers types automatically through reflection, so no additional annotation is needed. ```kotlin @Serializable // For kotlinx.serialization data class Address( val street: String, val city: String, val postalCode: String ) data class User( @PK val id: Int = 0, val email: String, @Json val address: Address ) : Entity ``` [Java] Structured domain objects work the same way in Java. Jackson handles serialization automatically for Java records without additional annotations. ```java record Address(String street, String city, String postalCode) {} record User(@PK Integer id, String email, @Json Address address ) implements Entity {} ``` ## JSON Aggregation JSON aggregation solves the problem of loading one-to-many or many-to-many relationships in a single query. Instead of issuing separate queries or relying on lazy loading, you can use SQL aggregation functions like `JSON_OBJECTAGG` to collect related rows into a JSON array within the main query result. Storm then deserializes that array back into a typed collection on the result object. This approach eliminates N+1 query problems for relationship loading at the cost of shifting serialization work to both the database and the application layer. It works best when the aggregated collection is moderate in size (see the performance section below). [Kotlin] ```kotlin data class RolesByUser( val user: User, @Json val roles: List<Role> ) interface UserRepository : EntityRepository<User> { fun getUserRoles(): List<RolesByUser> = select(RolesByUser::class) { "${User::class}, JSON_OBJECTAGG(${Role::class})" } .innerJoin(UserRole::class).on(User::class) .groupBy(User_.id) .resultList } ``` [Java] The same aggregation pattern applies in Java using string templates. The `JSON_OBJECTAGG` function collects related entities into a JSON object that Storm deserializes into the annotated `@Json` field.
```java record RolesByUser(User user, @Json List<Role> roles) {} interface UserRepository extends EntityRepository<User> { default List<RolesByUser> getUserRoles() { return select(RolesByUser.class, RAW."\{User.class}, JSON_OBJECTAGG(\{Role.class})") .innerJoin(UserRole.class).on(User.class) .groupBy(User_.id) .getResultList(); } } ``` --- ## Database Support JSON storage works differently across databases: | Database | JSON Type | Notes | |----------|-----------|-------| | PostgreSQL | `JSONB` | Binary format, indexable | | MySQL | `JSON` | Native JSON type | | MariaDB | `JSON` | Alias for LONGTEXT with validation | | Oracle | `JSON` | Native JSON (21c+) | | MS SQL Server | `NVARCHAR(MAX)` | Stored as text | | H2 | `CLOB` | Stored as text | ## Use Cases JSON columns are most valuable when relational normalization would add complexity without proportional benefit. The following patterns illustrate the three main scenarios where JSON storage is the right choice. ### Flexible Schema When different rows need different sets of attributes, a JSON column avoids the overhead of schema migrations and sparse nullable columns. This is common in product catalogs, configuration storage, and user-defined fields. ```kotlin data class Product( @PK val id: Int = 0, val name: String, @Json val attributes: Map<String, Any> // Size, color, weight, etc. ) : Entity ``` ### Denormalized Data Storing a snapshot of related data directly in the parent row avoids joins at read time and preserves the exact state at the moment of creation. This is useful for data that should not change retroactively, such as a shipping address on an order or the line items at the time of purchase. ```kotlin data class Order( @PK val id: Int = 0, val orderDate: LocalDate, @Json val shippingAddress: Address, // Snapshot at order time @Json val items: List<OrderItem> // Denormalized for fast access ) : Entity ``` ### Aggregation Results Fetch one-to-many or many-to-many relationships in a single query using JSON aggregation.
This is the primary alternative to issuing multiple queries or using lazy loading. The trade-off is that the aggregated data arrives as a serialized blob rather than discrete rows, so it works best when the client consumes the collection as a whole rather than filtering or paging within it.

## Tips

1. **Use for truly dynamic data.** Don't use JSON to avoid proper schema design.
2. **Consider query patterns.** JSON columns are harder to filter and index than normalized columns.
3. **Size limits.** Be aware of column size limits for large JSON documents.

## JSON Aggregation Performance

JSON aggregation (`JSON_OBJECTAGG`, `JSON_ARRAYAGG`) is suitable for mappings with a **moderate size**. For larger datasets or extensive mappings, split queries into separate parts to avoid:

- Memory pressure from large JSON documents
- Slow serialization/deserialization

### When to Split Queries

**Use JSON aggregation when:**

- The aggregated collection typically has < 100 items
- The JSON payload is under ~1MB
- The data is read-heavy and benefits from single-query loading

**Split into separate queries when:**

- Collections can grow unbounded (e.g., all orders for a customer)
- You need pagination on the related data
- The aggregated data is rarely accessed

### Example: Splitting Large Relationships

Instead of aggregating all roles per user:

```kotlin
// Might be slow for users with many roles
data class RolesByUser(val user: User, @Json val roles: List<Role>)
```

Query separately:

```kotlin
// Fetch users
val users = orm.findAll<User>()

// Batch fetch roles and group by user
val rolesByUser = orm.findAll(UserRole_.user inList users)
    .groupBy({ it.user }, { it.role })
```

This approach gives you control over pagination, caching, and memory usage.
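The in-memory grouping step can be sketched in plain Kotlin, independent of Storm. The `User`, `Role`, and `UserRole` classes below are hypothetical stand-ins (not Storm entities); the sketch only illustrates how the second query's link rows are grouped into a per-user role list in application code.

```kotlin
// Hypothetical in-memory stand-ins (not Storm entities); names are illustrative.
data class User(val id: Int, val email: String)
data class Role(val id: Int, val name: String)
data class UserRole(val userId: Int, val roleId: Int)

// Group the link rows per user, mirroring the
// `.groupBy({ it.user }, { it.role })` step from the example above.
fun groupRolesByUser(
    users: List<User>,
    links: List<UserRole>,
    roles: List<Role>,
): Map<User, List<Role>> {
    val usersById = users.associateBy { it.id }
    val rolesById = roles.associateBy { it.id }
    return links.groupBy(
        keySelector = { usersById.getValue(it.userId) },
        valueTransform = { rolesById.getValue(it.roleId) },
    )
}

fun main() {
    val users = listOf(User(1, "a@example.com"), User(2, "b@example.com"))
    val roles = listOf(Role(10, "admin"), Role(11, "editor"))
    val links = listOf(UserRole(1, 10), UserRole(1, 11), UserRole(2, 11))
    val grouped = groupRolesByUser(users, links, roles)
    println(grouped.getValue(users[0]).map { it.name }) // [admin, editor]
}
```

Because the second query fetches plain rows, pagination and caching can be applied to either query independently, which is the point of splitting.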
--- ## See Also - [Queries](queries.md) - aggregation with JSON - [Dialects](dialects.md) - database-specific JSON support - [Entities](entities.md) - `@Json` annotation - [Entity Serialization](serialization.md) - serializing entities with `Ref` fields to JSON (for REST APIs, not database columns) ======================================== ## Source: polymorphism.md ======================================== # Polymorphism Storm supports polymorphic entity hierarchies using sealed types. Instead of the proxy-based inheritance strategies found in traditional ORMs, Storm leverages sealed interfaces and data classes (Kotlin) or records (Java) to provide compile-time type safety with exhaustive pattern matching. The sealed type hierarchy tells the compiler exactly which subtypes exist, so a `when` (Kotlin) or `switch` (Java) expression over a polymorphic result is guaranteed to cover all cases. Storm provides three inheritance strategies: **Single-Table**, **Joined Table**, and **Polymorphic FK**. The strategy is detected automatically from how you structure the sealed type hierarchy. Single-Table stores a discriminator value in the entity's table and requires `@Discriminator` on the sealed interface. Joined Table supports an optional `@Discriminator`: when present, a physical discriminator column is stored in the base table; when absent, Storm resolves the concrete type at query time by checking which extension table has a matching row. Polymorphic FK stores discriminator values in the *referencing* entity instead, so the sealed interface itself needs no discriminator annotation. 
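The exhaustiveness guarantee described above can be seen in plain Kotlin, with no Storm involved. A minimal sketch assuming nothing beyond the language itself: the compiler knows `Cat` and `Dog` are the only `Pet` subtypes, so the `when` needs no `else` branch.

```kotlin
// Plain Kotlin, no Storm: the sealed hierarchy is closed, so `when` is exhaustive.
sealed interface Pet { val name: String }
data class Cat(override val name: String, val indoor: Boolean) : Pet
data class Dog(override val name: String, val weight: Int) : Pet

// No `else` branch needed: adding a third Pet subtype makes this a
// compile error until the new case is handled.
fun describe(pet: Pet): String = when (pet) {
    is Cat -> "Cat ${pet.name}, indoor=${pet.indoor}"
    is Dog -> "Dog ${pet.name}, ${pet.weight}kg"
}

fun main() {
    println(describe(Cat("Whiskers", indoor = true))) // Cat Whiskers, indoor=true
}
```

This is exactly the property Storm relies on when it returns polymorphic results: application code that switches over a query result cannot silently miss a subtype.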
## Decision Guide

Before diving into the details, use this summary to choose the right strategy for your use case:

| Strategy | Best For | Trade-offs |
|----------|----------|------------|
| [Single-Table](#single-table-inheritance) | Simple hierarchies, few fields per subtype | Fast queries, sparse columns |
| [Joined Table](#joined-table-inheritance) | Complex hierarchies, many fields per subtype | Normalized storage, JOIN cost |
| [Polymorphic FK](#polymorphic-foreign-keys) | References to different entity types | Flexible, requires type column |

**When to use which:** Start with Single-Table when your subtypes share most of their fields and you want the simplest, fastest queries. Switch to Joined Table when subtypes carry many distinct fields and you prefer a clean, normalized schema without NULL columns. Choose Polymorphic FK when the subtypes are independent entities (like posts and photos) that share a common trait (like being commentable), and you need a foreign key that can point to any of them.

---

## Overview

Each strategy maps a sealed type hierarchy to the database in a different way. The choice depends on how many subtype-specific fields you have, how normalized you want the schema, and whether the subtypes are logically "the same entity" or independent entities that share a common trait. See the [Decision Guide](#decision-guide) above for a summary.
``` Strategy Tables FK Columns Use Case ──────── ────── ────────── ──────── Single-Table 1 shared table 1 column Simple hierarchies, fast queries ┌────────────┐ (regular FK) │ pet │ └────────────┘ Joined Table 1 base + 1 column Normalized schemas, many N extension (FK to base) subtype-specific fields ┌────────────┐ │ pet │ ├────────────┤ │ cat │ │ dog │ └────────────┘ Polymorphic FK N independent 2 columns Comment-on-anything, tables (type + id) tagging, auditing ┌────────┐ │ post │ │ photo │ └────────┘ ``` Single-Table puts everything in one table and is the fastest for queries (no JOINs), but subtype-specific columns are NULL for rows that belong to other subtypes. Joined Table eliminates the NULL columns by splitting subtype-specific fields into their own extension tables, at the cost of LEFT JOINs on every query. Polymorphic FK is fundamentally different: the subtypes are independent entities with separate tables, and the polymorphism lives in the foreign key that references them. ### Strategy Comparison The following table summarizes the key differences between the three strategies. Each trade-off matters in different situations: query performance favors Single-Table, schema cleanliness favors Joined Table, and flexibility across unrelated entity types favors Polymorphic FK. 
| Aspect | Single-Table | Joined Table | Polymorphic FK |
|--------|-------------|-------------|----------------|
| **Tables** | One shared table | Base table + extension tables | Separate independent tables |
| **Discriminator** | In the shared table | In the base table (optional¹) | In the *referencing* entity |
| **Unused columns** | NULL for other subtypes | None (normalized) | None |
| **Query performance** | Fast (no JOINs) | Moderate (LEFT JOINs) | Variable (per-type lookup) |
| **Schema normalization** | Low | High | High |
| **FK from other entities** | Single column | Single column (to base) | Two columns (type + id)² |
| **Adding subtypes** | Add columns to shared table | Add new extension table | Add new table |

¹ When `@Discriminator` is omitted, Storm resolves the concrete type at query time by generating an expression that checks which extension table has a matching row. See [The `@Discriminator` Annotation](#the-discriminator-annotation) for details.
² Because the subtypes are independent tables with no shared base table, a single FK column cannot identify both the target table and the target row. The discriminator column identifies the table, and the ID column identifies the row. See [Polymorphic Foreign Keys](#polymorphic-foreign-keys) for details.

Each strategy has strengths that make it the natural choice in certain scenarios. The sections below cover each one in detail.

---

## Strategy Detection

Storm detects the inheritance strategy by inspecting the sealed type hierarchy. You do not specify the strategy as a string or enum; it is inferred from the type structure and annotations. This keeps the entity definitions declarative: the class hierarchy itself tells Storm everything it needs to know.

| Sealed interface extends | Annotations | Detected Strategy |
|--------------------------|-------------|-------------------|
| `Entity` | `@Discriminator` | **Single-Table** |
| `Entity` | `@Polymorphic(JOINED)` (with or without `@Discriminator`) | **Joined Table** |
| `Data` (not Entity) | (none required) | **Polymorphic FK** |

The key distinction is whether the sealed interface extends `Entity` (making it a table-backed entity) or `Data` (making it a pure type constraint for polymorphic foreign keys). Detection happens once per type and is cached, so the cost of inspecting the hierarchy is paid only on first access.

For Joined Table, `@Polymorphic(JOINED)` is the deciding factor. Neither `@DbTable` nor `@Discriminator` influences strategy detection for this strategy. This means you can freely add or remove `@Discriminator` on a Joined Table hierarchy to switch between explicit and implicit type resolution without changing the inheritance strategy itself.

### Validation Rules

Storm validates sealed hierarchies when the model is first accessed. If any rule is violated, a clear error message describes the problem.
This catches configuration mistakes at startup rather than at query time, so you find out about structural issues immediately rather than when a specific query happens to trigger the wrong code path. The following rules are enforced. Some apply universally, while others are specific to a particular strategy. | Rule | Applies to | |------|-----------| | All permitted subclasses must be data classes (Kotlin) or records (Java) | All strategies | | All subtypes must have the same `@PK` field type and generation strategy | All strategies | | Discriminator values must be unique across all subtypes | All strategies | | `@Discriminator` on subtypes must not specify a `column` attribute | All strategies | | The sealed interface must be annotated with `@Discriminator` | Single-Table | | `@Discriminator` on the sealed interface must not specify a `value` attribute | Single-Table, Joined Table (when `@Discriminator` is present) | | Subtypes must not have `@DbTable` | Single-Table | | Must have at least one common field across all subtypes | Joined Table | | All subtypes must independently implement `Entity` | Polymorphic FK | | The sealed interface must not have `@Discriminator` | Polymorphic FK | | The sealed interface must not have `@Polymorphic` | Polymorphic FK | For example, if two subtypes in a Single-Table hierarchy both declare `@Discriminator("animal")`, Storm will report a duplicate discriminator value error on first use. Similarly, if a Joined Table hierarchy has no fields in common across all subtypes, Storm will reject the hierarchy because there is nothing to put in the base table. --- ## The `@Discriminator` Annotation The `@Discriminator` annotation configures how Storm maps between types and database discriminator values. It serves a different purpose depending on where it is placed. On a **sealed entity interface** using Single-Table inheritance, `@Discriminator` is **required** and declares which column in the database table holds the discriminator. 
If you omit the `column` attribute, the default column name is `"dtype"`, which is consistent with JPA's `@DiscriminatorColumn` convention.

For **Joined Table** inheritance, `@Discriminator` is **optional**. When present, a physical discriminator column is stored in the base table, just like Single-Table. When absent, Storm resolves the concrete type at query time by generating a `CASE` expression that checks which extension table has a matching row (via `LEFT JOIN` and `IS NOT NULL` on the extension table's primary key). This aligns with Hibernate's behavior for `@Inheritance(strategy = JOINED)` without `@DiscriminatorColumn`. When no `@Discriminator` is present, every subtype always gets an extension table (even if it has no subtype-specific fields), because the extension table row serves as the type marker.

On a **concrete subtype**, `@Discriminator` is optional and sets the value stored in the discriminator column for that subtype. Without it, Storm uses the simple class name (e.g., `"Cat"`, `"Dog"`) for Single-Table and Joined Table, or the resolved table name (e.g., `"post"`, `"photo"`) for Polymorphic FK.

On an **FK field** pointing to a sealed `Data` type (Polymorphic FK), `@Discriminator` is optional and customizes the discriminator column name in the referencing entity's table. Without it, Storm derives the column name from the field name (e.g., a field named `target` produces a column `target_type`).

### Usage Contexts

The table below summarizes where `@Discriminator` can be placed, whether it is required, and what it controls. The `Target` column refers to the annotation target type in Java.

| Context | Target | Required? | Purpose | Default |
|---------|--------|-----------|---------|---------|
| Sealed interface | `TYPE` | **Yes** (Single-Table), Optional (Joined) | Set discriminator column name | `"dtype"` |
| Concrete subtype | `TYPE` | No | Set discriminator value | Simple class name |
| FK field (Polymorphic FK) | `FIELD` | No | Set discriminator column in referencing table | `"{fieldName}_type"` |

The following examples show how to apply the annotation in each context.

[Kotlin]
```kotlin
// On the sealed interface: required for Single-Table, optional for Joined Table
@Discriminator // uses default column name "dtype"
sealed interface Pet : Entity {
    val name: String
}

// Or with a custom column name
@Discriminator(column = "pet_type")
sealed interface Pet : Entity {
    val name: String
}

// Joined Table without @Discriminator: type is resolved via extension table PKs
@Polymorphic(JOINED)
sealed interface Pet : Entity {
    val name: String
}

// On a subtype: customize the discriminator value (optional)
@Discriminator("LARGE_DOG")
data class Dog(
    @PK override val id: Int = 0,
    override val name: String,
    val weight: Int
) : Pet
```

[Java]
```java
// On the sealed interface: required for Single-Table, optional for Joined Table
@Discriminator // uses default column name "dtype"
sealed interface Pet extends Entity permits Cat, Dog {
    String name();
}

// Or with a custom column name
@Discriminator(column = "pet_type")
sealed interface Pet extends Entity permits Cat, Dog {
    String name();
}

// Joined Table without @Discriminator: type is resolved via extension table PKs
@Polymorphic(JOINED)
sealed interface Pet extends Entity permits Cat, Dog {
    String name();
}

// On a subtype: customize the discriminator value (optional)
@Discriminator("LARGE_DOG")
record Dog(@PK Integer id, String name, int weight) implements Pet {}
```

Discriminator values default to the simple class name (e.g., `"Cat"`, `"Dog"`) for Single-Table and Joined Table, or the resolved table name for Polymorphic FK.
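The documented defaults can be mimicked in a few lines of plain Kotlin. This is an illustrative sketch of the naming rules only, not Storm's implementation; `Cat` is a hypothetical subtype used for the demonstration.

```kotlin
// Hypothetical subtype used only to demonstrate the defaults.
data class Cat(val id: Int, val name: String)

// Default subtype discriminator value: the simple class name.
fun defaultDiscriminatorValue(subtype: Class<*>): String = subtype.simpleName

// Default discriminator column for a polymorphic FK field: "{fieldName}_type".
fun defaultFkDiscriminatorColumn(fieldName: String): String = "${fieldName}_type"

fun main() {
    println(defaultDiscriminatorValue(Cat::class.java)) // Cat
    println(defaultFkDiscriminatorColumn("target"))     // target_type
}
```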
### Discriminator Types The `@Discriminator` annotation supports a `type()` attribute that controls the SQL column type used for the discriminator. This attribute is only meaningful on the sealed interface (where it defines the column type); on subtypes and FK fields it is ignored. Storm supports three discriminator types: | Type | SQL Column | Value Format | Example | |------|-----------|-------------|---------| | `STRING` (default) | `VARCHAR` | Class name or custom string | `"Cat"`, `"LARGE_DOG"` | | `INTEGER` | `INTEGER` | Integer parsed from `value()` | `"1"`, `"2"` | | `CHAR` | `CHAR(1)` | Single character from `value()` | `"C"`, `"D"` | `STRING` is the default and works well for most cases: the discriminator column stores human-readable values like the class name. `INTEGER` is useful when your schema already uses numeric type codes, or when you want a compact discriminator that matches an existing integer column. `CHAR` provides a middle ground: a single character is more compact than a full string but still readable, and maps to a fixed-width `CHAR(1)` column. When using `INTEGER` or `CHAR`, every subtype must declare an explicit `@Discriminator` value, since numeric and character values cannot be derived automatically from the class name. #### STRING (default) The default type. The discriminator column is `VARCHAR`, and values are either the simple class name or a custom string. [Kotlin] ```kotlin @Discriminator sealed interface Pet : Entity data class Cat(@PK val id: Int = 0, val name: String) : Pet data class Dog(@PK val id: Int = 0, val name: String) : Pet // Discriminator values: "Cat", "Dog" ``` [Java] ```java @Discriminator sealed interface Pet extends Entity permits Cat, Dog {} record Cat(@PK Integer id, String name) implements Pet {} record Dog(@PK Integer id, String name) implements Pet {} // Discriminator values: "Cat", "Dog" ``` #### INTEGER The discriminator column is `INTEGER`. 
Each subtype must specify a numeric value via `@Discriminator("...")`. [Kotlin] ```kotlin @Discriminator(type = DiscriminatorType.INTEGER) @DbTable("vehicle") sealed interface Vehicle : Entity @Discriminator("1") data class Car(@PK val id: Int = 0, val model: String) : Vehicle @Discriminator("2") data class Truck(@PK val id: Int = 0, val payload: Int) : Vehicle ``` [Java] ```java @Discriminator(type = DiscriminatorType.INTEGER) @DbTable("vehicle") sealed interface Vehicle extends Entity permits Car, Truck {} @Discriminator("1") record Car(@PK Integer id, String model) implements Vehicle {} @Discriminator("2") record Truck(@PK Integer id, int payload) implements Vehicle {} ``` #### CHAR The discriminator column is `CHAR(1)`. Each subtype must specify a single-character value via `@Discriminator("...")`. [Kotlin] ```kotlin @Discriminator(type = DiscriminatorType.CHAR) sealed interface Status : Entity @Discriminator("A") data class Active(@PK val id: Int = 0, val since: LocalDate) : Status @Discriminator("I") data class Inactive(@PK val id: Int = 0, val reason: String) : Status ``` [Java] ```java @Discriminator(type = DiscriminatorType.CHAR) sealed interface Status extends Entity permits Active, Inactive {} @Discriminator("A") record Active(@PK Integer id, LocalDate since) implements Status {} @Discriminator("I") record Inactive(@PK Integer id, String reason) implements Status {} ``` The `type()` attribute works with all three inheritance strategies that use a discriminator: Single-Table, Joined Table (with `@Discriminator`), and Polymorphic FK. --- ## Single-Table Inheritance All subtypes share a single database table. A discriminator column distinguishes between subtypes, and subtype-specific columns are NULL for rows belonging to other subtypes. Because all data lives in one table, queries require no JOINs, which keeps them fast and straightforward. 
The trade-off is that the table accumulates columns from all subtypes, which can become unwieldy if subtypes have many distinct fields. This strategy maps naturally to the common pattern of a single table with a type column. ### Database Schema ```sql CREATE TABLE pet ( id INTEGER AUTO_INCREMENT PRIMARY KEY, dtype VARCHAR(50) NOT NULL, -- discriminator column name VARCHAR(255), -- shared by all subtypes indoor BOOLEAN, -- Cat-specific (NULL for Dogs) weight INTEGER -- Dog-specific (NULL for Cats) ); ``` The discriminator column (`dtype`) stores the subtype name and is automatically populated by Storm during inserts. Subtype-specific columns use NULL as their zero-value for rows that belong to a different subtype: ``` pet table ┌────┬───────┬──────────┬────────┬────────┐ │ id │ dtype │ name │ indoor │ weight │ ├────┼───────┼──────────┼────────┼────────┤ │ 1 │ Cat │ Whiskers │ true │ NULL │ │ 2 │ Cat │ Luna │ false │ NULL │ │ 3 │ Dog │ Rex │ NULL │ 30 │ │ 4 │ Dog │ Max │ NULL │ 15 │ └────┴───────┴──────────┴────────┴────────┘ ``` ### Defining Entities The sealed interface is the entity. Any sealed interface extending `Entity` without `@Polymorphic(JOINED)` is detected as Single-Table. The sealed interface must be annotated with `@Discriminator` to declare the discriminator column. Subtypes are data classes (Kotlin) or records (Java) that implement the sealed interface. Each subtype defines its own fields; fields shared across all subtypes (like `id` and `name` above) go into the shared table alongside subtype-specific fields. 
[Kotlin]
```kotlin
@Discriminator
sealed interface Pet : Entity

data class Cat(
    @PK val id: Int = 0,
    val name: String,
    val indoor: Boolean
) : Pet

data class Dog(
    @PK val id: Int = 0,
    val name: String,
    val weight: Int
) : Pet
```

[Java]
```java
@Discriminator
sealed interface Pet extends Entity permits Cat, Dog {}

record Cat(@PK Integer id, String name, boolean indoor) implements Pet {}
record Dog(@PK Integer id, String name, int weight) implements Pet {}
```

The table name (`pet`) is derived automatically from the class name. Use `@DbTable` only if the table name differs from the default (e.g., `@DbTable("animals")`).

### CRUD Operations

All CRUD operations go through the sealed interface type. Storm determines the concrete subtype at runtime: on SELECT, it reads the discriminator value from the result set; on INSERT and UPDATE, it inspects the record's runtime class.

[Kotlin]
```kotlin
val pets = orm.entity(Pet::class)

// Select all pets - returns Cat and Dog instances
val all: List<Pet> = pets.select().resultList
for (pet in all) {
    when (pet) {
        is Cat -> println("Cat: ${pet.name}, indoor=${pet.indoor}")
        is Dog -> println("Dog: ${pet.name}, ${pet.weight}kg")
    }
}

// Insert a new Cat
pets.insert(Cat(name = "Bella", indoor = true))

// Update
pets.update(Cat(id = 1, name = "Sir Whiskers", indoor = true))

// Remove
pets.remove(somePet)
```

[Java]
```java
var pets = orm.entity(Pet.class);

// Select all pets - returns Cat and Dog instances
var all = pets.select().getResultList();
for (var pet : all) {
    switch (pet) {
        case Cat c -> System.out.println("Cat: " + c.name() + ", indoor=" + c.indoor());
        case Dog d -> System.out.println("Dog: " + d.name() + ", " + d.weight() + "kg");
    }
}

// Insert a new Cat
pets.insert(new Cat(null, "Bella", true));

// Update
pets.update(new Cat(1, "Sir Whiskers", true));

// Remove
pets.remove(somePet);
```

### Generated SQL

Storm automatically includes the discriminator column in SELECT queries and populates it during inserts.
The discriminator value is derived from the record's class name (or from `@Discriminator` if customized). On UPDATE and DELETE, the discriminator is not included in the SET or WHERE clause because the primary key is sufficient to identify the row. The table below shows the SQL generated for each operation. Because all subtypes share one table, every operation is a single SQL statement. ``` Operation Generated SQL ───────── ───────────── SELECT all SELECT p.id, p.dtype, p.name, p.indoor, p.weight FROM pet p INSERT Cat INSERT INTO pet (dtype, name, indoor) VALUES ('Cat', 'Bella', true) INSERT Dog INSERT INTO pet (dtype, name, weight) VALUES ('Dog', 'Buddy', 25) UPDATE UPDATE pet SET name = 'Sir Whiskers', indoor = true WHERE id = 1 DELETE DELETE FROM pet WHERE id = 1 ``` Notice that INSERT only includes the columns relevant to the concrete subtype. Columns belonging to other subtypes are omitted entirely (they default to NULL in the database). The SELECT, by contrast, always includes all columns from all subtypes, because the query does not know in advance which subtypes will appear in the result set. ### Foreign Keys to Single-Table Entities Other entities reference the shared table with a regular single-column foreign key. Since all subtypes live in the same table, the FK column always points to one table regardless of which concrete subtype the row represents. This is one of the advantages of Single-Table: foreign key relationships are simple and standard. 
[Kotlin]
```kotlin
data class Visit(
    @PK val id: Int = 0,
    @FK val pet: Ref<Pet> // FK to pet.id
) : Entity
```

[Java]
```java
record Visit(
    @PK Integer id,
    @FK Ref<Pet> pet // FK to pet.id
) implements Entity {}
```

```
visit table                     pet table
┌────┬────────┐                 ┌────┬───────┬──────────┬────────┬────────┐
│ id │ pet_id │                 │ id │ dtype │ name     │ indoor │ weight │
├────┼────────┤                 ├────┼───────┼──────────┼────────┼────────┤
│ 1  │ 1      │────────────────▶│ 1  │ Cat   │ Whiskers │ true   │ NULL   │
│ 2  │ 3      │────────┐        │ 2  │ Cat   │ Luna     │ false  │ NULL   │
└────┴────────┘        └───────▶│ 3  │ Dog   │ Rex      │ NULL   │ 30     │
                                │ 4  │ Dog   │ Max      │ NULL   │ 15     │
                                └────┴───────┴──────────┴────────┴────────┘
```

### Hydration

When Storm reads a result set for a sealed entity type, it uses the discriminator value to determine which concrete subtype to construct. The result set contains the union of all subtype columns, but each row only has meaningful values for the columns that belong to its subtype. Storm reads the discriminator first, resolves it to the corresponding record class, and then extracts only the fields that class declares. Fields belonging to other subtypes are ignored.

```
Result Set Row
┌────┬───────┬──────────┬────────┬────────┐
│ id │ dtype │ name     │ indoor │ weight │
├────┼───────┼──────────┼────────┼────────┤
│ 1  │ Cat   │ Whiskers │ true   │ NULL   │
└────┴───┬───┴──────────┴────────┴────────┘
         │
         ▼
┌─────────────────────────────┐
│ Discriminator: "Cat"        │
│          │                  │
│          ▼                  │
│ Resolve to Cat.class        │
│          │                  │
│          ▼                  │
│ Construct:                  │
│   Cat(id=1,                 │
│       name="Whiskers",      │
│       indoor=true)          │
└─────────────────────────────┘
```

This means adding a new subtype with new fields only requires adding columns to the existing table and a new record class. No changes to existing subtypes or queries are needed. The sealed type hierarchy guarantees that Storm will use the correct record class for each discriminator value, and pattern matching ensures that application code handles the new subtype at every relevant point.
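The hydration flow can be sketched in plain Kotlin. This is a simplified stand-in that represents a result-set row as a `Map` rather than a JDBC `ResultSet`; it is not Storm's internal code, but it follows the same order: read the discriminator, resolve the subtype, extract only that subtype's fields.

```kotlin
sealed interface Pet { val id: Int; val name: String }
data class Cat(override val id: Int, override val name: String, val indoor: Boolean) : Pet
data class Dog(override val id: Int, override val name: String, val weight: Int) : Pet

// Read the discriminator first, resolve the subtype, then extract only the
// columns that subtype declares. Columns of other subtypes are ignored.
fun hydrate(row: Map<String, Any?>): Pet = when (val dtype = row["dtype"]) {
    "Cat" -> Cat(row["id"] as Int, row["name"] as String, row["indoor"] as Boolean)
    "Dog" -> Dog(row["id"] as Int, row["name"] as String, row["weight"] as Int)
    else -> error("Unknown discriminator: $dtype")
}

fun main() {
    val row = mapOf("id" to 1, "dtype" to "Cat", "name" to "Whiskers",
                    "indoor" to true, "weight" to null)
    println(hydrate(row)) // Cat(id=1, name=Whiskers, indoor=true)
}
```

Note how the `weight` column is present in the row but never touched when the discriminator resolves to `Cat`.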
--- ## Joined Table Inheritance Joined Table inheritance splits the data across multiple tables: a base table holds fields shared by all subtypes plus a discriminator column, and each subtype has its own extension table with subtype-specific fields. The extension table's primary key is also a foreign key to the base table, establishing a one-to-one relationship. This strategy works well when subtypes have many distinct fields and you want a normalized schema without NULL columns. The trade-off is that every query requires LEFT JOINs to the extension tables, and DML operations touch multiple tables within a single logical operation. In return, the schema stays clean: each table contains only the columns that are meaningful for its rows. ### With and Without `@Discriminator` Joined Table supports two modes of type resolution: **With `@Discriminator`** (explicit discriminator column): The base table includes a discriminator column (e.g., `dtype`) that stores the subtype name. This is the same approach as Single-Table. Extension tables only need rows for subtypes that have subtype-specific fields. **Without `@Discriminator`** (implicit type resolution): The base table has no discriminator column. Instead, Storm generates a `CASE` expression at query time that checks which extension table has a matching row. Every subtype must have an extension table, even if it has no subtype-specific fields, because the extension table row serves as the type marker. This aligns with Hibernate's default behavior for `@Inheritance(strategy = JOINED)` without `@DiscriminatorColumn`. 
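The extension-table rule for the two modes reduces to a one-line predicate. A plain-Kotlin sketch of the rule as documented (not Storm code):

```kotlin
// The rule from the section above as a one-line predicate:
// with a discriminator column, only subtypes that carry their own fields need
// an extension table; without one, every subtype needs an extension row as
// its type marker.
fun needsExtensionTable(hasOwnFields: Boolean, hasDiscriminator: Boolean): Boolean =
    hasOwnFields || !hasDiscriminator

fun main() {
    // A subtype with no subtype-specific fields (like Bird below):
    println(needsExtensionTable(hasOwnFields = false, hasDiscriminator = true))  // false
    println(needsExtensionTable(hasOwnFields = false, hasDiscriminator = false)) // true
}
```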
### Database Schema With `@Discriminator`: ```sql -- Base table: shared fields + discriminator CREATE TABLE pet ( id INTEGER AUTO_INCREMENT PRIMARY KEY, dtype VARCHAR(50) NOT NULL, name VARCHAR(255) ); -- Extension tables: subtype-specific fields CREATE TABLE cat ( id INTEGER PRIMARY KEY REFERENCES pet(id), indoor BOOLEAN ); CREATE TABLE dog ( id INTEGER PRIMARY KEY REFERENCES pet(id), weight INTEGER ); ``` Without `@Discriminator`: ```sql -- Base table: shared fields only, no discriminator column CREATE TABLE pet ( id INTEGER AUTO_INCREMENT PRIMARY KEY, name VARCHAR(255) ); -- Extension tables: subtype-specific fields CREATE TABLE cat ( id INTEGER PRIMARY KEY REFERENCES pet(id), indoor BOOLEAN ); CREATE TABLE dog ( id INTEGER PRIMARY KEY REFERENCES pet(id), weight INTEGER ); -- PK-only extension table for subtypes without extra fields CREATE TABLE bird ( id INTEGER PRIMARY KEY REFERENCES pet(id) ); ``` Note that `Bird` has no subtype-specific fields, but still needs an extension table when no discriminator is present. The extension table row acts as the type marker. Each extension table's primary key references the base table. This foreign key constraint ensures referential integrity: an extension row cannot exist without a corresponding base row, and the same ID is used across all tables for a given entity. ``` pet (base) cat (extension) ┌────┬───────┬──────────┐ ┌────┬────────┐ │ id │ dtype │ name │ │ id │ indoor │ ├────┼───────┼──────────┤ ├────┼────────┤ │ 1 │ Cat │ Whiskers │◀────────────▶│ 1 │ true │ │ 2 │ Cat │ Luna │◀────────────▶│ 2 │ false │ │ 3 │ Dog │ Rex │ └────┴────────┘ └────┴───────┴──────────┘ │ dog (extension) │ ┌────┬────────┐ │ │ id │ weight │ │ ├────┼────────┤ └─────────────────────────────▶│ 3 │ 30 │ └────┴────────┘ ``` ### Field Partitioning Storm automatically determines which fields belong to the base table and which belong to extension tables by comparing the fields across all subtypes. 
The rule is straightforward: fields that appear with the same name and type in every subtype go to the base table, while fields unique to a single subtype go to that subtype's extension table. The primary key is always in the base table. | Field | Cat | Dog | Location | |-------|-----|-----|----------| | `id` (Integer) | Yes | Yes | Base table | | `name` (String) | Yes | Yes | Base table | | `indoor` (boolean) | Yes | No | `cat` extension | | `weight` (int) | No | Yes | `dog` extension | This partitioning is computed once per sealed type and cached. You do not need to annotate fields to indicate which table they belong to; Storm infers it from the type structure. If a subtype has no extension-specific fields (all its fields are shared) and a `@Discriminator` is present, no extension table is needed for that subtype. Without `@Discriminator`, every subtype always requires an extension table (even if it only contains the primary key), because the extension table row serves as the type marker. ### Defining Entities Add `@Polymorphic(JOINED)` to the sealed interface to opt into this strategy. `@Discriminator` is optional: include it for a discriminator column in the base table, or omit it for implicit type resolution via extension table PKs. Table names for the base table and extension tables are derived automatically from the class names (`Pet` resolves to `pet`, `Cat` to `cat`, `Dog` to `dog`). Use `@DbTable` on the sealed interface or subtypes to override these names. 
[Kotlin] With `@Discriminator`: ```kotlin @Discriminator @Polymorphic(JOINED) sealed interface Pet : Entity { val name: String } data class Cat( @PK override val id: Int = 0, override val name: String, val indoor: Boolean ) : Pet data class Dog( @PK override val id: Int = 0, override val name: String, val weight: Int ) : Pet ``` Without `@Discriminator`: ```kotlin @Polymorphic(JOINED) sealed interface Pet : Entity { val name: String } data class Cat( @PK override val id: Int = 0, override val name: String, val indoor: Boolean ) : Pet data class Dog( @PK override val id: Int = 0, override val name: String, val weight: Int ) : Pet // Bird has no extension fields, but still gets an extension table data class Bird( @PK override val id: Int = 0, override val name: String ) : Pet ``` [Java] With `@Discriminator`: ```java @Discriminator @Polymorphic(JOINED) sealed interface Pet extends Entity permits Cat, Dog { String name(); } record Cat(@PK Integer id, String name, boolean indoor) implements Pet {} record Dog(@PK Integer id, String name, int weight) implements Pet {} ``` Without `@Discriminator`: ```java @Polymorphic(JOINED) sealed interface Pet extends Entity permits Cat, Dog, Bird { String name(); } record Cat(@PK Integer id, String name, boolean indoor) implements Pet {} record Dog(@PK Integer id, String name, int weight) implements Pet {} // Bird has no extension fields, but still gets an extension table record Bird(@PK Integer id, String name) implements Pet {} ``` ### CRUD Operations CRUD operations work through the sealed interface type, just like Single-Table. The API is identical. However, under the hood Storm generates multi-table SQL: inserts and updates touch both the base and extension tables, and deletes remove from extension tables first (to satisfy foreign key constraints) before removing the base row. 
> **Transactional context required.** All multi-table DML operations (insert, update, delete) for Joined Table entities execute within the current transaction. Because these operations touch multiple tables, they require a transactional context to guarantee atomicity. If any step fails, the entire operation rolls back. Make sure your code runs inside a `transaction {}` block (Kotlin), a Spring `@Transactional` method, or equivalent transactional scope. [Kotlin] ```kotlin val pets = orm.entity(Pet::class) // Select all - Storm auto-joins extension tables val all: List<Pet> = pets.select().resultList // Insert a Cat - inserts into base table, then extension table pets.insert(Cat(name = "Bella", indoor = true)) // Update a Cat - updates both base and extension tables pets.update(Cat(id = 1, name = "Sir Whiskers", indoor = true)) // Remove - deletes from extension table first, then base table pets.remove(somePet) ``` [Java] ```java var pets = orm.entity(Pet.class); // Select all - Storm auto-joins extension tables var all = pets.select().getResultList(); // Insert a Cat - inserts into base table, then extension table pets.insert(new Cat(null, "Bella", true)); // Update a Cat - updates both base and extension tables pets.update(new Cat(1, "Sir Whiskers", true)); // Remove - deletes from extension table first, then base table pets.remove(somePet); ``` ### Generated SQL SELECT queries use LEFT JOINs to bring together the base and extension table columns. LEFT JOIN (rather than INNER JOIN) is used because each row matches only one extension table; the non-matching extension tables produce NULLs. 
With `@Discriminator`, the discriminator column is read directly from the base table: ```sql SELECT p.id, p.dtype, p.name, c.indoor, d.weight FROM pet p LEFT JOIN cat c ON p.id = c.id LEFT JOIN dog d ON p.id = d.id ``` Without `@Discriminator`, Storm generates a `CASE` expression that resolves the concrete type by checking which extension table has a matching row: ```sql SELECT p.id, CASE WHEN c.id IS NOT NULL THEN 'Cat' WHEN d.id IS NOT NULL THEN 'Dog' WHEN b.id IS NOT NULL THEN 'Bird' END, p.name, c.indoor, d.weight FROM pet p LEFT JOIN cat c ON p.id = c.id LEFT JOIN dog d ON p.id = d.id LEFT JOIN bird b ON p.id = b.id ``` Unlike Single-Table, DML operations for Joined Table entities are multi-statement: they involve more than one table. Storm executes all statements within the current transaction to ensure atomicity. Each operation follows a specific order to respect foreign key constraints between the base and extension tables. **INSERT** first writes to the base table (which owns the auto-generated primary key), then uses the generated key to insert into the extension table. The base table must come first because the extension table's primary key references it: ``` INSERT Cat(null, "Whiskers", true) ───────────────────────────────────────────────────────────────── Step 1: INSERT INTO pet (dtype, name) VALUES ('Cat', 'Whiskers') │ ▼ generated id = 5 Step 2: INSERT INTO cat (id, indoor) VALUES (5, true) ``` **UPDATE** follows the same order: shared fields are written to the base table first, then subtype-specific fields are written to the extension table. If a subtype has no extension-specific fields, the second statement is skipped entirely. 
``` UPDATE Cat(1, "Sir Whiskers", true) ───────────────────────────────────────────────────────────────── Step 1: UPDATE pet SET name = 'Sir Whiskers' WHERE id = 1 Step 2: UPDATE cat SET indoor = true WHERE id = 1 ``` **DELETE** reverses the order: extension tables are deleted first to satisfy the foreign key constraint, then the base table row is removed. When deleting by ID without knowing the concrete type, Storm attempts to delete from all extension tables (at most one will have a matching row). ``` DELETE Pet(1) ───────────────────────────────────────────────────────────────── Step 1: DELETE FROM cat WHERE id = 1 (extension first) DELETE FROM dog WHERE id = 1 (all extensions) Step 2: DELETE FROM pet WHERE id = 1 (base last) ``` Note that SQL-level upsert operations (`INSERT ... ON CONFLICT`, `MERGE`, etc.) are not supported for Joined Table entities, because these SQL constructs are fundamentally single-table operations. Storm will throw a clear error if you attempt an upsert on a joined sealed entity. You can still use `insert()` and `update()` separately, which correctly handle the multi-table logic. ### Foreign Keys to Joined Table Entities Foreign keys reference the base table, just like Single-Table. From the referencing entity's perspective, there is no difference between pointing to a Single-Table or Joined Table entity. When Storm joins to a Joined Table entity (e.g., loading a `Visit` with its `Pet`), it automatically chains the extension table LEFT JOINs. 
[Kotlin] ```kotlin data class Visit( @PK val id: Int = 0, @FK val pet: Ref<Pet> // FK to pet.id ) : Entity ``` [Java] ```java record Visit(@PK Integer id, @FK Ref<Pet> pet // FK to pet.id ) implements Entity {} ``` When querying Visit with a join to Pet, Storm generates: ```sql SELECT v.*, p.id, p.dtype, p.name, c.indoor, d.weight FROM visit v INNER JOIN pet p ON v.pet_id = p.id LEFT JOIN cat c ON p.id = c.id LEFT JOIN dog d ON p.id = d.id ``` ### Hydration Hydration works the same way as Single-Table: the discriminator value determines the concrete subtype. The only difference is that subtype-specific field values come from different tables in the result set (via the LEFT JOINs), rather than from NULL columns in a shared table. ``` Result Set (after JOINs) ┌────┬───────┬──────────┬────────┬────────┐ │ id │ dtype │ name │ indoor │ weight │ ├────┼───────┼──────────┼────────┼────────┤ │ 1 │ Cat │ Whiskers │ true │ NULL │ ← indoor from cat table │ 3 │ Dog │ Rex │ NULL │ 30 │ ← weight from dog table └────┴───────┴──────────┴────────┴────────┘ │ ▼ ┌──────────────────────────────────────────────┐ │ Row 1: dtype = "Cat" │ │ → Cat(id=1, name="Whiskers", indoor=true) │ │ │ │ Row 3: dtype = "Dog" │ │ → Dog(id=3, name="Rex", weight=30) │ └──────────────────────────────────────────────┘ ``` Adding a new subtype means creating a new extension table and a new record class. The base table gains no new columns, and existing subtypes are not affected. This makes Joined Table a good fit for hierarchies that evolve over time, since adding a subtype does not alter the schema of any existing table. ### Type Changes Storm supports changing an entity's subtype via update. 
For example, if a `Cat` needs to become a `Dog`, you can update it by passing a `Dog` instance with the same primary key: [Kotlin] ```kotlin // Convert a Cat to a Dog (same ID, different subtype) pets.update(Dog(id = existingCatId, name = "Rex", weight = 30)) ``` [Java] ```java // Convert a Cat to a Dog (same ID, different subtype) pets.update(new Dog(existingCatId, "Rex", 30)); ``` Under the hood, Storm executes three operations: 1. **UPDATE** the base table with the new shared field values (and the new discriminator value, if present). 2. **DELETE** the old extension table row (e.g., remove the row from `cat`). 3. **INSERT** a new extension table row (e.g., insert a row into `dog`). This sequence ensures that the base table row is preserved (keeping all foreign key references intact), while the subtype-specific data is swapped. Foreign key references from other entities should always target the base table, so the type change is transparent to referencing entities. Type changes require a transactional context for atomicity, since the operation spans multiple tables. This works for both discriminated and discriminator-less Joined Table inheritance. ### Batch Operations Storm supports batch operations with mixed subtypes. You can pass a list containing different concrete subtypes to `insert()`, `update()`, or `remove()`, and Storm handles them correctly. 
[Kotlin] ```kotlin // Insert a mix of Cats and Dogs in one call pets.insert(listOf( Cat(name = "Whiskers", indoor = true), Dog(name = "Rex", weight = 30), Cat(name = "Luna", indoor = false) )) // Update mixed subtypes pets.update(listOf(updatedCat, updatedDog)) // Remove mixed subtypes pets.remove(listOf(someCat, someDog)) ``` [Java] ```java // Insert a mix of Cats and Dogs in one call pets.insert(List.of( new Cat(null, "Whiskers", true), new Dog(null, "Rex", 30), new Cat(null, "Luna", false) )); // Update mixed subtypes pets.update(List.of(updatedCat, updatedDog)); // Remove mixed subtypes pets.remove(List.of(someCat, someDog)); ``` For the base table, Storm issues a single batch statement covering all entities regardless of subtype. For extension tables, Storm partitions the entities by subtype and issues a separate batch statement per extension table. This means a batch insert of 2 Cats and 1 Dog results in one batch INSERT into the `pet` base table (3 rows), one batch INSERT into the `cat` extension table (2 rows), and one batch INSERT into the `dog` extension table (1 row). --- ## Polymorphic Foreign Keys Sometimes a foreign key needs to point to different tables depending on context. A comment might reference a post, a photo, or any other commentable entity. Each target type has its own independent table with its own schema. The sealed interface is NOT an entity itself; it serves purely as a type constraint for the FK relationship. This strategy differs fundamentally from Single-Table and Joined Table. In those strategies, the sealed interface represents a single logical table (or table group) in the database. With Polymorphic FK, the sealed interface represents a set of unrelated tables, and the polymorphism is expressed through a two-column foreign key: one column identifies which table, and the other identifies which row. 
This strategy is best for cross-cutting concerns like comments, tags, likes, or audit logs that apply to multiple unrelated entity types. ### Database Schema The target entities live in their own independent tables with no shared base table. The referencing entity stores two columns: a discriminator that identifies the target table, and an ID that identifies the row within that table. ```sql -- Independent tables (no shared base table) CREATE TABLE post (id INTEGER AUTO_INCREMENT PRIMARY KEY, title VARCHAR(255)); CREATE TABLE photo (id INTEGER AUTO_INCREMENT PRIMARY KEY, url VARCHAR(255)); -- Referencing table with discriminator + FK columns CREATE TABLE comment ( id INTEGER AUTO_INCREMENT PRIMARY KEY, text VARCHAR(255), target_type VARCHAR(50), -- discriminator: "post" or "photo" target_id INTEGER -- FK value (points to post.id or photo.id) ); ``` Note that `target_id` cannot have a database-level foreign key constraint, because it may point to different tables depending on the value of `target_type`. Referential integrity must be maintained at the application level. ``` comment table ┌────┬──────────────┬─────────────┬───────────┐ │ id │ text │ target_type │ target_id │ ├────┼──────────────┼─────────────┼───────────┤ │ 1 │ Nice post! │ post │ 1 │──────────▶ post.id = 1 │ 2 │ Great photo! │ photo │ 1 │──────────▶ photo.id = 1 │ 3 │ Love it! │ post │ 2 │──────────▶ post.id = 2 └────┴──────────────┴─────────────┴───────────┘ post table photo table ┌────┬──────────────┐ ┌────┬────────────┐ │ id │ title │ │ id │ url │ ├────┼──────────────┤ ├────┼────────────┤ │ 1 │ Hello World │ │ 1 │ photo1.jpg │ │ 2 │ Second Post │ │ 2 │ photo2.jpg │ └────┴──────────────┘ └────┴────────────┘ ``` ### Defining Entities The sealed interface extends `Data` (not `Entity`) and does NOT have `@DbTable`. This is what distinguishes Polymorphic FK from the other two strategies: the sealed interface is not table-backed. 
Each subtype is an independent entity with its own `@PK` and its own table. Table names are derived from the class name by the table name resolver (e.g., `Post` resolves to `post`). > **Why `Data` and not `Entity`?** In Storm, `Entity` represents a type that maps to a specific database table. For Polymorphic FK, the sealed interface does not correspond to any table; it is a pure type-level grouping of unrelated entities. `Data` is the correct marker because it tells Storm "this type participates in SQL generation (column resolution, type mapping) but has no table of its own." Each subtype independently implements `Entity` because each one *does* map to its own table. This separation is what makes the two-column foreign key possible: the discriminator identifies which subtype (and therefore which table), and the ID identifies the row within that table. The referencing entity uses `@FK Ref` to declare the polymorphic foreign key. `Ref` is required here because the target spans multiple independent tables, so it cannot be eagerly loaded via a JOIN. The `Ref` acts as a lightweight handle that stores the concrete type and ID, and can be fetched on demand. When Storm encounters an `@FK Ref` targeting a sealed `Data` type, it automatically generates two columns (discriminator + ID) instead of the usual single FK column. 
[Kotlin] ```kotlin // Sealed Data interface - NOT an entity, just a type constraint sealed interface Commentable : Data data class Post( @PK val id: Int = 0, val title: String ) : Commentable, Entity data class Photo( @PK val id: Int = 0, val url: String ) : Commentable, Entity // Entity with polymorphic FK data class Comment( @PK val id: Int = 0, val text: String, @FK val target: Ref<Commentable> // produces target_type + target_id columns ) : Entity ``` [Java] ```java // Sealed Data interface - NOT an entity, just a type constraint sealed interface Commentable extends Data permits Post, Photo {} record Post(@PK Integer id, String title) implements Commentable, Entity {} record Photo(@PK Integer id, String url) implements Commentable, Entity {} // Entity with polymorphic FK record Comment(@PK Integer id, String text, @FK Ref<Commentable> target // produces target_type + target_id columns ) implements Entity {} ``` ### Column Generation A regular `@FK` field produces a single column (e.g., `pet_id`). A polymorphic `@FK` targeting a sealed `Data` interface is different: Storm needs two pieces of information to resolve the reference (which table and which row), so it generates two columns instead of one. | FK Field | Generated Columns | Column Types | |----------|-------------------|-------------| | `target: Ref<Commentable>` | `target_type` (VARCHAR) + `target_id` (INTEGER) | Discriminator + PK type | The discriminator column name defaults to `{fieldName}_type`, and the FK column name defaults to `{fieldName}_id`. Both can be customized with `@Discriminator` and `@DbColumn` if your schema uses different naming conventions. ### Customizing Column Names Use `@Discriminator` on the FK field to customize the discriminator column name. Unlike sealed entity interfaces where `@Discriminator` is required, on FK fields it is purely optional, because the default naming convention (`{fieldName}_type`) derives from the field name and is predictable. 
[Kotlin] ```kotlin data class Comment( @PK val id: Int = 0, val text: String, @FK @Discriminator(column = "content_type") val target: Ref<Commentable> ) : Entity ``` [Java] ```java record Comment(@PK Integer id, String text, @FK @Discriminator(column = "content_type") Ref<Commentable> target ) implements Entity {} ``` This produces `content_type` and `target_id` columns instead of `target_type` and `target_id`. ### CRUD Operations Each subtype is an independent entity with its own repository. You insert, update, and delete subtypes using their own entity type, not through the sealed interface. The polymorphic FK only appears in the referencing entity (e.g., `Comment`). When creating a `Comment`, you obtain a `Ref` from an existing entity to establish the relationship. [Kotlin] ```kotlin // CRUD on subtypes - standard entity operations val posts = orm.entity(Post::class) val post = posts.insertAndFetch(Post(title = "New Post")) // Insert a comment referencing the post val comments = orm.entity(Comment::class) comments.insert(Comment( text = "Great post!", target = post.ref() )) ``` [Java] ```java // CRUD on subtypes - standard entity operations var posts = orm.entity(Post.class); var post = posts.insertAndFetch(new Post(null, "New Post")); // Insert a comment referencing the post var comments = orm.entity(Comment.class); comments.insert(new Comment(null, "Great post!", Ref.of(post))); ``` ### Generated SQL Storm derives the discriminator value from the `Ref`'s target type. By default, the resolved table name of the concrete subtype is used as the discriminator value (e.g., `Post` resolves to `"post"`). This means the discriminator value in the database directly corresponds to the target table name, making it easy to reason about the data. 
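That derivation can be sketched in a few lines. The resolver below assumes the simplest convention (lower-cased simple class name); Storm's actual table name resolver may apply additional rules:

```java
public class DiscriminatorValue {

    record Post(Integer id, String title) {}
    record Photo(Integer id, String url) {}

    /** Illustrative default: discriminator value = resolved table name. */
    static String discriminatorValue(Class<?> targetType) {
        // e.g. Post → "post", Photo → "photo"
        return targetType.getSimpleName().toLowerCase(java.util.Locale.ROOT);
    }

    public static void main(String[] args) {
        System.out.println(discriminatorValue(Post.class));  // → post
        System.out.println(discriminatorValue(Photo.class)); // → photo
    }
}
```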
**INSERT Comment:** ```sql INSERT INTO comment (text, target_type, target_id) VALUES ('Great post!', 'post', 1) ``` **SELECT Comment:** ```sql SELECT c.id, c.text, c.target_type, c.target_id FROM comment c ``` ### Loading the Target Polymorphic FK targets cannot be auto-joined. With Single-Table and Joined Table, Storm can always generate a JOIN because there is one known base table. With Polymorphic FK, the target could be in any of several independent tables, and a single JOIN cannot span multiple unrelated tables conditionally. Instead, use `Ref.fetch()` to load the referenced entity on demand. The `Ref` already knows the concrete target type (from the discriminator value), so `fetch()` queries the correct table automatically. [Kotlin] ```kotlin val comments = orm.entity(Comment::class).select().resultList for (comment in comments) { val target: Commentable = comment.target.fetch() when (target) { is Post -> println("Comment on post: ${target.title}") is Photo -> println("Comment on photo: ${target.url}") } } ``` [Java] ```java var comments = orm.entity(Comment.class).select().getResultList(); for (var comment : comments) { Commentable target = comment.target().fetch(); switch (target) { case Post p -> System.out.println("Comment on post: " + p.title()); case Photo p -> System.out.println("Comment on photo: " + p.url()); } } ``` ### Hydration Polymorphic FK fields consume two columns from the result set. Storm reads the discriminator to determine the target type, then wraps the FK value in a `Ref` of the correct concrete type. No actual entity is loaded at this point; the `Ref` is a lightweight handle that can be used to fetch the full entity later. ``` Result Set Row ┌────┬──────────────┬─────────────┬───────────┐ │ id │ text │ target_type │ target_id │ ├────┼──────────────┼─────────────┼───────────┤ │ 1 │ Nice post! 
│ post │ 1 │ └────┴──────────────┴──────┬──────┴─────┬─────┘ │ │ ▼ ▼ ┌─────────────────────────────┐ │ target_type = "post" │ │ → resolve to Post.class │ │ │ │ target_id = 1 │ │ → Ref.of(Post.class, 1) │ └─────────────────────────────┘ ``` The resulting `Ref` knows its concrete type is `Post` and holds ID 1. Calling `fetch()` queries the `post` table for that ID. This two-phase approach (hydrate a lightweight `Ref`, then fetch the full entity on demand) keeps the initial query simple and avoids the complexity of conditional multi-table JOINs. --- ## Choosing a Strategy The right strategy depends on the relationship between your subtypes and how you query them. Use the following decision tree as a starting point: ``` Do all subtypes share the same table? │ ├── Yes ──▶ Are there many subtype-specific columns? │ │ │ ├── No ──▶ Single-Table (simple, fast queries) │ │ │ └── Yes ──▶ Joined Table (normalized, no NULLs) │ └── No ──▶ Are the subtypes independent entities that happen to share a common trait? │ └── Yes ──▶ Polymorphic FK (cross-cutting references) ``` Single-Table works well when subtypes share most of their fields and the number of subtype-specific columns is small. Joined Table is a natural fit when subtypes carry many distinct fields and you prefer a normalized schema without NULL columns. Polymorphic FK suits situations where the subtypes are conceptually independent entities that happen to be referenced by a shared concern (comments, tags, audit logs). ### When to Use Each Strategy The table below offers guidance on when each strategy is a good fit and when it might introduce unnecessary complexity. 
| Strategy | Good For | Avoid When | |----------|---------|------------| | **Single-Table** | Few subtype-specific fields, high query volume, simple hierarchies | Many subtype-specific fields (too many NULL columns) | | **Joined Table** | Many subtype-specific fields, normalized schema, data integrity | Simple hierarchies with few distinct fields (unnecessary JOINs) | | **Polymorphic FK** | Cross-cutting concerns (comments, tags, audit logs), references to unrelated entity types | Frequent joins across the polymorphic boundary | There is no universally "best" strategy. The choice depends on your schema design goals, query patterns, and the nature of the relationship between your subtypes. --- ## Pattern Matching One of the key benefits of using sealed types for polymorphism is exhaustive pattern matching. The compiler verifies that all subtypes are handled in every `when` (Kotlin) or `switch` (Java) expression. This means adding a new subtype to the hierarchy produces compile errors at every unhandled location, making it impossible to forget to handle the new case. This is a significant advantage over string-based discriminators or open class hierarchies. With a string discriminator, forgetting to handle a new type silently falls through to a default branch (or worse, throws an unexpected exception at runtime). With sealed types, the compiler catches the omission before the code even compiles. 
[Kotlin] ```kotlin fun describe(pet: Pet): String = when (pet) { is Cat -> "${pet.name}: indoor=${pet.indoor}" is Dog -> "${pet.name}: ${pet.weight}kg" // No else needed - compiler knows all subtypes } ``` [Java] ```java String describe(Pet pet) { return switch (pet) { case Cat c -> c.name() + ": indoor=" + c.indoor(); case Dog d -> d.name() + ": " + d.weight() + "kg"; // No default needed - compiler knows all subtypes }; } ``` If you later add a `Bird` subtype to the `Pet` hierarchy, the compiler flags every incomplete `when`/`switch` as an error, guiding you to handle the new case everywhere. This applies to all three inheritance strategies equally, since they all use sealed types as the basis for the polymorphic hierarchy. --- ## Tips 1. **Choose the strategy that matches your schema.** Single-Table suits compact hierarchies with few subtype-specific fields. Joined Table suits hierarchies with many distinct fields and a preference for normalization. Polymorphic FK suits cross-cutting concerns like comments, tags, and audit logs. 2. **Leverage pattern matching.** Sealed types guarantee exhaustive handling. Prefer `when`/`switch` over `is`/`instanceof` chains. 3. **Keep hierarchies shallow.** Storm supports one level of sealed subtyping (interface + records). Deep inheritance chains are not supported and rarely needed with records. 4. **`@Discriminator` is required for Single-Table, optional for Joined Table.** For Single-Table, the default column name `"dtype"` (consistent with JPA) is used when no column name is specified. For Joined Table, omitting `@Discriminator` enables implicit type resolution via extension table PKs. 5. **Polymorphic FK targets cannot be auto-joined.** Use `Ref.fetch()` to load the target entity. This is by design: the target spans multiple tables, so a single JOIN is not possible. 6. **All subtypes must share the same PK type.** Mixing `Integer` and `Long` primary keys within a sealed hierarchy is not supported. 
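To make tip 5 concrete, here is a minimal sketch of the two-phase pattern behind Polymorphic FK hydration: first resolve the discriminator column value to a concrete class and wrap it in a lightweight handle, then fetch on demand. All names here (`Ref`, `hydrate`) are illustrative, not Storm's actual API:

```java
public class RefSketch {

    sealed interface Commentable permits Post, Photo {}
    record Post(int id, String title) implements Commentable {}
    record Photo(int id, String url) implements Commentable {}

    /** Lightweight handle: knows which table and which row, loads nothing. */
    record Ref<T>(Class<? extends T> type, int id) {}

    /** Phase 1: map the (target_type, target_id) columns to a typed handle. */
    static Ref<Commentable> hydrate(String targetType, int targetId) {
        Class<? extends Commentable> type = switch (targetType) {
            case "post" -> Post.class;
            case "photo" -> Photo.class;
            default -> throw new IllegalArgumentException("unknown discriminator: " + targetType);
        };
        return new Ref<>(type, targetId);
    }

    public static void main(String[] args) {
        Ref<Commentable> ref = hydrate("post", 1);
        // Phase 2 (a real fetch()) would now query the post table for id 1
        System.out.println(ref.type().getSimpleName() + "#" + ref.id()); // → Post#1
    }
}
```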
======================================== ## Source: entity-lifecycle.md ======================================== # Entity Lifecycle Storm provides a typed `EntityCallback` interface that lets you hook into entity lifecycle events. Callbacks are a general-purpose building block for cross-cutting concerns like auditing, validation, and logging, while keeping Storm unopinionated about how those concerns are implemented. Rather than baking opinionated annotations like `@CreatedAt` or `@UpdatedBy` into the framework, Storm gives you the hooks and lets you decide how to use them. This keeps the framework lean and avoids hidden "magic" that can be difficult to debug or customize. --- ## The EntityCallback Interface `EntityCallback` is parameterized by the entity type it applies to. The framework resolves the type parameter at runtime and only invokes the callback for matching entity types. All methods have default no-op implementations, so you only override the hooks you need. | Method | Description | |---|---| | `beforeInsert(entity)` | Called before inserting. Returns the (potentially transformed) entity to persist. | | `beforeUpdate(entity)` | Called before updating. Returns the (potentially transformed) entity to persist. | | `beforeUpsert(entity)` | Called before a SQL-level upsert. Returns the (potentially transformed) entity to persist. Delegates to `beforeInsert` by default. | | `afterInsert(entity)` | Called after a successful insert. | | `afterUpdate(entity)` | Called after a successful update. | | `afterUpsert(entity)` | Called after a successful SQL-level upsert. Delegates to `afterInsert` by default. | | `beforeDelete(entity)` | Called before deleting. | | `afterDelete(entity)` | Called after a successful delete. | > **Warning:** **Important:** The entity passed to `afterInsert`, `afterUpdate`, and `afterUpsert` is the **pre-persist entity**. 
It does not include database-generated values such as auto-incremented IDs, server defaults, or trigger-applied changes. To access the generated ID, use the return value of `insertAndFetch` instead. Every mutation operation follows the same three-phase lifecycle: the "before" callback runs first and can transform the entity, then the SQL executes, and finally the "after" callback fires to observe the result. The following diagram illustrates this flow for an insert operation. Update, upsert, and delete follow the same pattern with their respective callback methods: ``` insert(entity) │ ▼ ┌───────────────────┐ │ beforeInsert() │ ← returns (potentially transformed) entity └────────┬──────────┘ │ ▼ ┌───────────────────┐ │ INSERT INTO … │ ← SQL executes with transformed entity └────────┬──────────┘ │ ▼ ┌───────────────────┐ │ afterInsert() │ ← observes the pre-persist entity └───────────────────┘ ``` ### Immutable Entity Transformation Storm entities are immutable records and data classes, so they cannot be mutated in place. To accommodate this, the "before" callbacks for insert, update, and upsert **return the entity** that will actually be persisted. Implementations can return a new instance with modified fields (e.g., audit timestamps set) or the original entity unchanged. The "after" callbacks and `beforeDelete` are purely observational and return `void`. This design works naturally with both Kotlin's `copy()` and Java's builder pattern, keeping callback implementations concise and idiomatic in both languages. ### Typed vs. Global Callbacks A callback can target a single entity type or apply globally to all entities. Use a specific type parameter to limit a callback to one entity: ```java EntityCallback<Article>
callback = new EntityCallback<>() { ... }; ``` Use `Entity` as the type parameter to create a global callback that fires for every entity type. This is useful for cross-cutting concerns like logging or security checks that apply uniformly: ```java EntityCallback<Entity> globalCallback = new EntityCallback<>() { ... }; ``` The framework resolves the type parameter at runtime, so a typed callback is never invoked for entity types it does not match. When multiple callbacks are registered, they fire in registration order, and each callback in the chain receives the entity returned by the previous one. --- ## Registering a Callback There are two ways to register callbacks: programmatically via `withEntityCallback`, or automatically through Spring Boot auto-configuration. ### Programmatic Registration Call `withEntityCallback` on any `ORMTemplate` to create a new template instance with the callback applied. The original template is unchanged; this follows Storm's immutable configuration pattern. Multiple callbacks can be registered by chaining calls, and they fire in registration order. [Kotlin] ```kotlin val callback = object : EntityCallback<Article>
{ override fun beforeInsert(entity: Article): Article { return entity.copy(createdAt = Instant.now()) } } val orm = dataSource.orm.withEntityCallback(callback) ``` [Java] ```java EntityCallback<Article>
callback = new EntityCallback<>() { @Override public Article beforeInsert(Article entity) { return entity.toBuilder().createdAt(Instant.now()).build(); } }; ORMTemplate orm = ORMTemplate.of(dataSource).withEntityCallback(callback); ``` ### Spring Boot Auto-Configuration When using the Storm Spring Boot Starter, any `EntityCallback` beans in your application context are automatically detected and wired to the `ORMTemplate`. No additional configuration is needed. Each callback is registered individually and only fires for entities matching its type parameter. [Kotlin] ```kotlin @Configuration class AuditConfig { @Bean fun auditCallback(): EntityCallback<Article>
= object : EntityCallback<Article>
{ override fun beforeInsert(entity: Article): Article { return entity.copy(createdAt = Instant.now()) } } } ``` [Java] ```java @Configuration public class AuditConfig { @Bean public EntityCallback<Article>
auditCallback() { return new EntityCallback<>() { @Override public Article beforeInsert(Article entity) { return entity.toBuilder().createdAt(Instant.now()).build(); } }; } } ``` --- ## Callback Behavior ### Upsert Routing An upsert operation does not always result in a SQL-level upsert statement. Depending on the entity's primary key state and the database dialect, the framework may route the operation to a plain insert or update instead. The callbacks that fire depend on which path is taken: ``` upsert(entity) │ ┌──────────────────┼──────────────────┐ ▼ ▼ ▼ ┌─────────────┐ ┌─────────────┐ ┌──────────────────┐ │ Route to │ │ Route to │ │ SQL-level upsert │ │ update │ │ insert │ │ │ └──────┬──────┘ └──────┬──────┘ └────────┬─────────┘ │ │ │ ▼ ▼ ▼ beforeUpdate / beforeInsert / beforeUpsert / afterUpdate afterInsert afterUpsert ``` Exactly one pair of callbacks fires per entity; they are never combined. The following table summarizes when each routing path is taken: | Routing path | When | Callbacks fired | |---|---|---| | **Update** | The entity has an auto-generated primary key with a non-default value (it was previously inserted). | `beforeUpdate` / `afterUpdate` | | **Insert** | The entity has an auto-generated primary key with a default value, and the dialect cannot perform a SQL-level upsert with generated keys (e.g., Oracle, SQL Server). | `beforeInsert` / `afterInsert` | | **SQL-level upsert** | All other cases (non-auto-generated primary keys, or dialects that support SQL-level upsert with generated keys such as PostgreSQL and MySQL). | `beforeUpsert` / `afterUpsert` | The practical consequence is that you do not need to override all three pairs. If you only override `beforeInsert` and `beforeUpdate`, you already cover the routed upsert paths. For the SQL-level upsert path, `beforeUpsert` delegates to `beforeInsert` by default, so insert callbacks cover all three paths out of the box. 
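The routing rules in the table above can be sketched as a small decision function. The boolean inputs model the three conditions the table names (auto-generated PK, default PK value, dialect support for SQL-level upsert with generated keys); this is an illustration of the rules, not Storm's internal code:

```java
public class UpsertRouting {

    enum Route { UPDATE, INSERT, SQL_UPSERT }

    static Route route(boolean autoGeneratedPk, boolean pkIsDefault,
                       boolean dialectSupportsUpsertWithGeneratedKeys) {
        if (autoGeneratedPk && !pkIsDefault) {
            return Route.UPDATE;      // previously inserted → plain update
        }
        if (autoGeneratedPk && !dialectSupportsUpsertWithGeneratedKeys) {
            return Route.INSERT;      // e.g. Oracle, SQL Server
        }
        return Route.SQL_UPSERT;      // e.g. PostgreSQL, MySQL, or non-auto PKs
    }

    public static void main(String[] args) {
        System.out.println(route(true, false, true));  // → UPDATE
        System.out.println(route(true, true, false));  // → INSERT
        System.out.println(route(false, true, true));  // → SQL_UPSERT
    }
}
```

Exactly one route (and therefore one callback pair) is chosen per entity, matching the "never combined" guarantee above.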
Override `beforeUpsert` only when you need different behavior for the SQL-level upsert case. ### "After" Callback Entity State The "after" callbacks (`afterInsert`, `afterUpdate`, `afterUpsert`, `afterDelete`) always receive the entity as it was sent to the database, after the corresponding "before" transformation. They do **not** reflect database-generated values such as auto-incremented primary keys, version column increments, default column values, or trigger-applied modifications. This applies to all repository methods, including the `*AndFetch` variants. For example, when `insertAndFetch` is called, `afterInsert` still receives the pre-persist entity; the fetched entity (with the generated ID, defaults, etc.) is only returned to the caller. This keeps the callback contract consistent and predictable regardless of which repository method was used. ### Database Operations Inside Callbacks Callbacks execute in the same thread and transaction as the repository operation that triggered them. This means a callback can safely perform additional database work, such as inserting related entities, querying for validation data, or updating audit logs, and that work will participate in the same transaction. If the transaction rolls back, all changes made by callbacks roll back as well. In Spring Boot, callbacks are regular beans and can have repositories or other services injected through standard dependency injection. Outside Spring, a callback can capture a reference to the `ORMTemplate` or a repository at construction time. ```java public class ArticleHistoryCallback implements EntityCallback<Article>
{

    private final ORMTemplate orm;

    public ArticleHistoryCallback(ORMTemplate orm) {
        this.orm = orm;
    }

    @Override
    public void afterUpdate(Article entity) {
        orm.insert(new ArticleHistory(entity.id(), Instant.now(), "updated"));
    }
}
```

A natural concern with database-calling callbacks is infinite recursion: if an `afterUpdate` callback inserts an entity, and that insert triggers its own callbacks, which insert more entities, and so on. Storm prevents this with a re-entrancy guard. Callbacks never fire recursively. If a callback performs a database operation that would normally trigger callbacks, that nested operation executes normally but its callbacks are suppressed. The following diagram illustrates this:

```
Application      ArticleRepository       Callback      HistoryRepository    Database
     │                   │                  │                  │               │
     │ update(article)   │                  │                  │               │
     │──────────────────▶│                  │                  │               │
     │                   │ beforeUpdate()   │                  │               │
     │                   │─────────────────▶│                  │               │
     │                   │◀─────────────────│                  │               │
     │                   │                  │                  │               │
     │                   │ UPDATE articles …│                  │               │
     │                   │────────────────────────────────────────────────────▶│
     │                   │◀────────────────────────────────────────────────────│
     │                   │                  │                  │               │
     │                   │ afterUpdate()    │                  │               │
     │                   │─────────────────▶│                  │               │
     │                   │                  │ insert(history)  │               │
     │                   │                  │─────────────────▶│               │
     │                   │                  │                  │ callbacks     │
     │                   │                  │                  │ suppressed    │
     │                   │                  │                  │               │
     │                   │                  │                  │ INSERT INTO … │
     │                   │                  │                  │──────────────▶│
     │                   │                  │                  │◀──────────────│
     │                   │                  │◀─────────────────│               │
     │                   │◀─────────────────│                  │               │
     │◀──────────────────│                  │                  │               │
     │                   │                  │                  │               │
```

This makes it safe to perform arbitrary database work inside a callback without needing manual guards or worrying about stack overflows.

### Batch Operations

Callbacks work with both single and batch operations. For batch operations, the "before" callbacks (`beforeInsert`, `beforeUpdate`, `beforeUpsert`) are called per entity during the mapping phase, before the batch is sent to the database. The "after" callbacks (`afterInsert`, `afterUpdate`, `afterUpsert`, `afterDelete`) are called per entity after the batch executes successfully.
This means the "before" callback can transform each entity individually, and all transformations are applied before the batch SQL is executed.

---

## Examples

### Auditing

A common use case is automatically populating audit fields. A practical approach is to define a shared interface for auditable entities, then use a single callback to fill in the timestamps. The `beforeInsert` callback sets both `createdAt` and `updatedAt`, while `beforeUpdate` only refreshes `updatedAt`.

[Kotlin]
```kotlin
interface Auditable {
    fun withAudit(createdAt: Instant, updatedAt: Instant): Auditable
}

data class Article(
    @PK val id: Int = 0,
    val title: String,
    val createdAt: Instant? = null,
    val updatedAt: Instant? = null
) : Entity<Int>, Auditable {
    override fun withAudit(createdAt: Instant, updatedAt: Instant) =
        copy(createdAt = createdAt, updatedAt = updatedAt)
}

class AuditCallback : EntityCallback<Article>
{
    override fun beforeInsert(entity: Article): Article {
        val now = Instant.now()
        return entity.withAudit(createdAt = now, updatedAt = now)
    }

    override fun beforeUpdate(entity: Article): Article {
        return entity.copy(updatedAt = Instant.now())
    }
}
```

[Java]
```java
public class AuditCallback implements EntityCallback<Article>
{

    @Override
    public Article beforeInsert(Article entity) {
        Instant now = Instant.now();
        return entity.toBuilder().createdAt(now).updatedAt(now).build();
    }

    @Override
    public Article beforeUpdate(Article entity) {
        return entity.toBuilder().updatedAt(Instant.now()).build();
    }
}
```

To apply auditing across multiple entity types without writing a separate callback for each, use a global callback with a runtime type check. Any entity that implements the `Auditable` interface gets its timestamps set; other entities pass through unchanged:

```java
public class GlobalAuditCallback implements EntityCallback<Entity<?>> {

    @Override
    public Entity<?> beforeInsert(Entity<?> entity) {
        if (entity instanceof Auditable a) {
            return (Entity<?>) a.withAudit(Instant.now(), Instant.now());
        }
        return entity;
    }
}
```

### Validation

Callbacks can enforce business rules before data reaches the database. Unlike database constraints, callback-level validation can produce domain-specific error messages and catch problems before the SQL round-trip. Both `beforeInsert` and `beforeUpdate` must return the entity, so a validation callback simply returns the original entity unchanged after checking the invariants:

```java
public class ArticleValidationCallback implements EntityCallback<Article>
{

    @Override
    public Article beforeInsert(Article entity) {
        validate(entity);
        return entity;
    }

    @Override
    public Article beforeUpdate(Article entity) {
        validate(entity);
        return entity;
    }

    private void validate(Article entity) {
        if (entity.title() == null || entity.title().isBlank()) {
            throw new IllegalArgumentException("Article title must not be blank.");
        }
    }
}
```

### Logging

The "after" callbacks are well-suited for logging, since they fire only after the database operation succeeds. This avoids logging mutations that were rolled back. The entity passed to the callback is the pre-persist version (see [After Callback Entity State](#after-callback-entity-state)), so the logged values reflect what your application sent to the database:

```java
public class ArticleLoggingCallback implements EntityCallback<Article>
{

    private static final Logger log = LoggerFactory.getLogger(ArticleLoggingCallback.class);

    @Override
    public void afterInsert(Article entity) {
        log.info("Inserted article: {}", entity);
    }

    @Override
    public void afterUpdate(Article entity) {
        log.info("Updated article: {}", entity);
    }

    @Override
    public void afterDelete(Article entity) {
        log.info("Deleted article: {}", entity);
    }
}
```

========================================
## Source: serialization.md
========================================

# Entity Serialization

Storm entities are plain records and data classes. Because they carry no proxies, no hidden state, and no framework-managed lifecycle, they serialize naturally with standard JSON libraries. An entity that contains only primitive fields, standard types like `LocalDate`, and inline `@FK` relationships will work out of the box with Jackson or kotlinx.serialization, with no additional configuration required.

The challenge arises when entities contain `Ref` fields. A `Ref` is Storm's abstraction for a deferred reference to another entity (see [Refs](refs.md)). Unlike a plain foreign key or an eagerly loaded relationship, a ref can exist in two states: **unloaded** (carrying only the primary key) or **loaded** (holding the full referenced entity in memory). Standard serialization libraries do not understand this distinction, so they cannot serialize or deserialize `Ref` instances without help.

The Storm serialization modules solve this by registering custom serializers and deserializers that handle both ref states. Once registered, entities with refs serialize and deserialize correctly, preserving the loaded/unloaded distinction across the JSON round-trip.

---

## Setup

### Jackson (Kotlin & Java)

For Jackson-based projects, register `StormModule` on your `ObjectMapper`.
This single registration covers all `Ref` fields across all entity types:

```java
ObjectMapper mapper = new ObjectMapper();
mapper.registerModule(new StormModule());
```

The `StormModule` class lives in the `st.orm.jackson` package and is available in both `storm-jackson2` (Jackson 2.17+) and `storm-jackson3` (Jackson 3.0+). Choose the module that matches your Jackson version. For installation details and guidance on choosing between the two, see [JSON Support](json.md).

**Spring Boot:** Spring Boot auto-detects any Jackson `Module` bean and registers it on the application's `ObjectMapper`. Declaring `StormModule` as a bean is all that is needed:

[Kotlin]
```kotlin
@Configuration
class JacksonConfig {
    @Bean
    fun stormModule(): StormModule = StormModule()
}
```

[Java]
```java
@Configuration
public class JacksonConfig {
    @Bean
    public StormModule stormModule() {
        return new StormModule();
    }
}
```

With this in place, every `@RestController` response that returns an entity with `Ref` fields will serialize correctly without any per-endpoint configuration.

### Kotlinx Serialization (Kotlin)

For Kotlin projects using kotlinx.serialization, configure the `Json` instance with `StormSerializersModule`. This registers contextual serializers for the `Ref` type:

```kotlin
val json = Json {
    serializersModule = StormSerializersModule()
}
```

If you do not need any customization, a pre-built convenience instance is available:

```kotlin
val json = Json {
    serializersModule = StormSerializers
}
```

Both `StormSerializersModule` and `StormSerializers` are in the `st.orm.serialization` package, provided by the `storm-kotlinx-serialization` module.

#### The `@Contextual` Requirement

Kotlinx.serialization uses compile-time code generation for serializers. It only delegates to the `SerializersModule` at runtime for fields explicitly annotated with `@Contextual`.
Because `Ref` is a Storm type (not a kotlinx-serializable class), every `Ref` field in a `@Serializable` class must carry this annotation. Without it, kotlinx.serialization will fail at compile time because it cannot generate a serializer for `Ref` on its own.

```kotlin
@Serializable
data class Order(
    @PK val id: Int = 0,
    @FK @Contextual val customer: Ref<Customer>,
) : Entity<Int>
```

The same applies to collections of refs. Both the field itself and the type argument need the annotation so that the contextual serializer is used at both the collection level and the element level:

```kotlin
@Serializable
data class TeamMembers(
    @Contextual val members: List<@Contextual Ref<Member>>,
)
```

This requirement does not apply to Jackson, which resolves serializers at runtime through reflection and does not need compile-time annotations for `Ref`.

---

## Serialization Format

The serialization module uses a compact, self-describing JSON format that preserves the ref's state. The format varies depending on whether the ref is unloaded (only the foreign key is known), loaded with an entity, or loaded with a projection.

| Ref state | JSON output | Example |
|-----------|-------------|---------|
| Unloaded | Raw primary key value | `1` or `"abc-123"` |
| Loaded entity | `{"@entity": {...}}` | `{"@entity": {"id": 1, "name": "Betty"}}` |
| Loaded projection | `{"@id": ..., "@projection": {...}}` | `{"@id": 1, "@projection": {"id": 1, "name": "Betty"}}` |
| Null | `null` | `null` |

An unloaded ref serializes as a bare value because there is nothing more to convey than the primary key. This keeps the JSON minimal, which is convenient for API responses where the client only needs the ID and can fetch the full object separately if needed.

A loaded entity ref wraps the full entity data in an `@entity` object. This tells the deserializer that the enclosed data is a complete entity, from which it can reconstruct a loaded ref with `getOrNull()` returning the entity instance.
A loaded projection ref uses a different wrapper (`@projection`) and includes a separate `@id` field. The explicit ID is necessary because projections are partial views of an entity and may not expose an `id()` accessor. Without the separate `@id` field, the deserializer would have no reliable way to recover the primary key.

Both Jackson and kotlinx.serialization produce identical JSON for the same ref state, so output from one library can be consumed by the other.

---

## Examples

The following examples walk through the common serialization scenarios, starting with the simplest case and building up to loaded refs and round-trip deserialization.

### Entities Without Refs

Entities that contain only standard field types serialize with plain Jackson or kotlinx.serialization. No Storm module registration is needed, and no special annotations are required beyond what the serialization library itself expects.

[Kotlin]
```kotlin
@Serializable
data class PetType(
    @PK val id: Int = 0,
    val name: String,
) : Entity<Int>

val petType = PetType(id = 1, name = "cat")
val json = Json.encodeToString(petType)
// {"id":1,"name":"cat"}
```

Because `PetType` has no `Ref` fields, the default kotlinx.serialization behavior handles everything. The `@Serializable` annotation generates the serializer at compile time.

[Java]
```java
record PetType(@PK Integer id, String name) implements Entity<Integer> {}

PetType petType = new PetType(1, "cat");
ObjectMapper mapper = new ObjectMapper();
String json = mapper.writeValueAsString(petType);
// {"id":1,"name":"cat"}
```

Java records are natively supported by Jackson. No module registration is needed when the entity has no `Ref` fields.

### Unloaded Ref

The most common scenario in REST APIs is returning entities where the ref has not been fetched. Storm loads only the foreign key ID into the ref, and the serializer writes that ID as a bare value. This produces compact JSON and avoids unnecessary database lookups during serialization.
[Kotlin]
```kotlin
@Serializable
data class Pet(
    @PK val id: Int = 0,
    val name: String,
    @FK @Contextual val owner: Ref<Owner>?,
) : Entity<Int>

val pet = orm.get(Pet_.id eq 1)
val json = Json { serializersModule = StormSerializers }
    .encodeToString(pet)
// {"id":1,"name":"Leo","owner":1}
```

The `owner` field serializes as `1`, the owner's primary key. No `Owner` data was loaded from the database; only the foreign key column value was available, and that is exactly what appears in the JSON.

[Java]
```java
record Pet(@PK Integer id,
           String name,
           @FK Ref<Owner> owner
) implements Entity<Integer> {}

Pet pet = orm.entity(Pet.class).getById(1);
ObjectMapper mapper = new ObjectMapper();
mapper.registerModule(new StormModule());
String json = mapper.writeValueAsString(pet);
// {"id":1,"name":"Leo","owner":1}
```

The `owner` field serializes as `1`, the owner's primary key. No `Owner` data was loaded from the database; only the foreign key column value was available, and that is exactly what appears in the JSON.

### Loaded Entity Ref

When the application calls `fetch()` on a ref before serialization, the referenced entity is loaded into memory. The serializer detects this and writes the full entity data inside an `@entity` wrapper. This is useful when the API consumer needs the related object inline without making a separate request.

[Kotlin]
```kotlin
val pet = orm.get(Pet_.id eq 1)
pet.owner?.fetch() // Load the owner into the ref

val json = Json { serializersModule = StormSerializers }
    .encodeToString(pet)
// {"id":1,"name":"Leo","owner":{"@entity":{"id":1,"firstName":"Betty","lastName":"Davis"}}}
```

After `fetch()`, calling `pet.owner?.getOrNull()` returns the `Owner` instance. The serializer sees that the ref holds data and emits the `@entity` wrapper instead of the bare ID.
[Java]
```java
Pet pet = orm.entity(Pet.class).getById(1);
pet.owner().fetch(); // Load the owner into the ref

ObjectMapper mapper = new ObjectMapper();
mapper.registerModule(new StormModule());
String json = mapper.writeValueAsString(pet);
// {"id":1,"name":"Leo","owner":{"@entity":{"id":1,"firstName":"Betty","lastName":"Davis"}}}
```

After `fetch()`, calling `pet.owner().getOrNull()` returns the `Owner` instance. The serializer sees that the ref holds data and emits the `@entity` wrapper instead of the bare ID.

### Loaded Projection Ref

When the ref target is a [Projection](projections.md) rather than an `Entity`, the loaded format includes both `@id` and `@projection` fields. The separate `@id` is necessary because projections are partial views and may not include a field that maps to the primary key.

[Kotlin]
```kotlin
@Serializable
data class OwnerSummary(
    @PK val id: Int = 0,
    val firstName: String,
) : Projection<Int>

@Serializable
data class PetWithProjectionOwner(
    @PK val id: Int = 0,
    val name: String,
    @FK @Contextual val owner: Ref<OwnerSummary>?,
) : Entity<Int>

val pet = orm.get(PetWithProjectionOwner_.id eq 1)
pet.owner?.fetch()

val json = Json { serializersModule = StormSerializers }
    .encodeToString(pet)
// {"id":1,"name":"Leo","owner":{"@id":1,"@projection":{"id":1,"firstName":"Betty"}}}
```

[Java]
```java
record OwnerSummary(@PK Integer id, String firstName) implements Projection<Integer> {}

record PetWithProjectionOwner(@PK Integer id,
                              String name,
                              @FK Ref<OwnerSummary> owner
) implements Entity<Integer> {}

var pet = orm.entity(PetWithProjectionOwner.class).getById(1);
pet.owner().fetch();

ObjectMapper mapper = new ObjectMapper();
mapper.registerModule(new StormModule());
String json = mapper.writeValueAsString(pet);
// {"id":1,"name":"Leo","owner":{"@id":1,"@projection":{"id":1,"firstName":"Betty"}}}
```

### Round-Trip Deserialization

The serialization format is fully round-trippable. Both Jackson and kotlinx.serialization can reconstruct entities with refs from the JSON produced by the serializer.
The ref's state is preserved: an unloaded ref (bare ID) deserializes back to an unloaded ref, and a loaded ref (`@entity` or `@projection` wrapper) deserializes back to a loaded ref with the data accessible via `getOrNull()`.

[Kotlin]

Deserializing a bare ID produces an unloaded ref. The ID is available, but `getOrNull()` returns `null` because no entity data was present in the JSON.

```kotlin
val jsonString = """{"id":1,"name":"Leo","owner":1}"""
val pet = Json { serializersModule = StormSerializers }
    .decodeFromString<Pet>(jsonString)

pet.name               // "Leo"
pet.owner?.id()        // 1
pet.owner?.getOrNull() // null (unloaded)
```

Deserializing an `@entity` wrapper produces a loaded ref. The full entity is reconstructed and available immediately.

```kotlin
val jsonString = """{"id":1,"name":"Leo","owner":{"@entity":{"id":1,"firstName":"Betty","lastName":"Davis"}}}"""
val pet = Json { serializersModule = StormSerializers }
    .decodeFromString<Pet>(jsonString)

pet.owner?.getOrNull() // Owner(id=1, firstName="Betty", lastName="Davis")
```

[Java]

Deserializing a bare ID produces an unloaded ref. The ID is available, but `getOrNull()` returns `null` because no entity data was present in the JSON.

```java
String jsonString = """
    {"id":1,"name":"Leo","owner":1}""";

ObjectMapper mapper = new ObjectMapper();
mapper.registerModule(new StormModule());
Pet pet = mapper.readValue(jsonString, Pet.class);

pet.name();              // "Leo"
pet.owner().id();        // 1
pet.owner().getOrNull(); // null (unloaded)
```

Deserializing an `@entity` wrapper produces a loaded ref. The full entity is reconstructed and available immediately.

```java
String jsonString = """
    {"id":1,"name":"Leo","owner":{"@entity":{"id":1,"firstName":"Betty","lastName":"Davis"}}}""";

Pet pet = mapper.readValue(jsonString, Pet.class);
pet.owner().getOrNull(); // Owner(id=1, firstName="Betty", lastName="Davis")
```

Note that refs deserialized from JSON are **detached**: they carry the type and primary key but have no connection to a database context.
Calling `fetch()` on a deserialized ref will throw a `PersistenceException`. If you need to fetch the referenced entity, use the deserialized ID to query the database directly. See [Detached Ref Behavior](refs.md#detached-ref-behavior) for more details.

---

## See Also

- [JSON Support](json.md) -- JSON columns and aggregation with `@Json`
- [Refs](refs.md) -- lightweight entity references and deferred loading
- [Entities](entities.md) -- entity definition and annotations

========================================
## Source: validation.md
========================================

# Validation

Storm validates your entity and projection definitions at two levels: structural validation ensures your records follow the ORM's rules (valid primary key types, correct use of annotations, no circular dependencies), while schema validation compares your definitions against the actual database to catch mismatches before they surface as runtime errors.

Both levels are optional and configurable. Structural validation runs automatically on first use; schema validation must be explicitly enabled.

---

## Record Validation

When Storm first encounters an entity or projection type, it inspects the record structure and validates that the definition is well-formed. This catches common modeling mistakes early, at startup rather than at query time.

### What Gets Checked

**Primary key rules:**

- The `@PK` type must be one of: `boolean`, `int`, `long`, `short`, `String`, `UUID`, `BigInteger`, `Enum`, or `Ref`. Floating-point types (`float`, `double`, `BigDecimal`) are rejected because they cannot reliably serve as identity values.
- Compound keys (inline records annotated with `@PK`) follow the same type restrictions for each component.

**Foreign key rules:**

- Fields annotated with `@FK` must be a `Data` type (entity, projection, or data class with a `@PK`) or a `Ref` wrapping such a type. Scalars like `String` or `Integer` cannot be foreign keys.
- Auto-generated foreign keys (`@FK(generation = ...)`) cannot be inlined.

**Inline component rules:**

- Fields annotated with `@Inline` must be record types. Scalars cannot be inlined.
- Inline records must not declare their own `@PK`, since they are embedded within a parent entity.

**Version fields:**

- At most one field per entity can be annotated with `@Version`. Multiple version fields are rejected.

**Structural integrity:**

- Records must be immutable. Mutable fields (Kotlin `var`) are rejected.
- Entities or projections that contain other entities or projections must annotate them as `@FK` or `@Inline`. Storm needs to know the relationship type to generate correct SQL.
- The record graph is checked for cycles. If entity A inlines entity B, which inlines entity A, the circular dependency is reported.

### Configuration

Record validation runs by default and causes startup to fail on the first error. The `record-mode` property controls this behavior:

| Value | Behavior |
|-------|----------|
| `fail` | Validation errors cause startup to fail (default). |
| `warn` | Errors are logged as warnings; startup continues. |
| `none` | Record validation is skipped entirely. |

This can be set as a system property, via `StormConfig`, or in Spring Boot's `application.yml`:

```yaml
storm:
  validation:
    record-mode: fail  # or "warn" or "none" (default: fail)
```

---

## Schema Validation

Schema validation compares your entity and projection definitions against the actual database schema. It catches mismatches before they surface as runtime errors, similar to Hibernate's `ddl-auto=validate`. Storm never modifies the schema; it only reports mismatches.
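The `@PK` type rule from record validation above can be made concrete with a short, self-contained sketch. This is not Storm's implementation: the `ALLOWED` set is copied from the documented whitelist (minus `Ref`, which has no plain class literal here), and `allowedPk` is an invented helper for illustration only.

```java
import java.math.BigDecimal;
import java.math.BigInteger;
import java.util.Set;
import java.util.UUID;

public class PkTypeRule {

    // Types the record validator accepts for @PK fields (enums handled below).
    static final Set<Class<?>> ALLOWED = Set.of(
            boolean.class, int.class, long.class, short.class,
            String.class, UUID.class, BigInteger.class);

    // Floating-point types are absent on purpose: they cannot reliably
    // serve as identity values.
    static boolean allowedPk(Class<?> type) {
        return ALLOWED.contains(type) || type.isEnum();
    }

    public static void main(String[] args) {
        System.out.println(allowedPk(long.class));       // true
        System.out.println(allowedPk(double.class));     // false
        System.out.println(allowedPk(BigDecimal.class)); // false
    }
}
```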
### What Gets Checked

| Check | Error Kind | Severity |
|-------|-----------|----------|
| Table exists in the database | `TABLE_NOT_FOUND` | Error |
| Each mapped column exists in the table | `COLUMN_NOT_FOUND` | Error |
| Kotlin/Java type is compatible with the SQL column type | `TYPE_INCOMPATIBLE` | Error |
| Entity primary key columns match the database primary key | `PRIMARY_KEY_MISMATCH` | Error |
| `@FK` constraint references the correct target table | `FOREIGN_KEY_MISMATCH` | Error |
| Sequences referenced by `@PK(generation = SEQUENCE)` exist | `SEQUENCE_NOT_FOUND` | Error |
| | | |
| Numeric cross-category conversions (e.g., `Integer` mapped to `DECIMAL`) | `TYPE_NARROWING` | Warning |
| Non-nullable entity field mapped to a nullable database column | `NULLABILITY_MISMATCH` | Warning |
| Entity declares `@PK` but the database has no primary key constraint | `PRIMARY_KEY_MISSING` | Warning |
| `@UK` field has a matching unique constraint in the database | `UNIQUE_KEY_MISSING` | Warning |
| `@FK` field has a matching foreign key constraint in the database | `FOREIGN_KEY_MISSING` | Warning |

**Errors** indicate definitive mismatches that will cause runtime failures, such as missing tables or columns. **Warnings** indicate situations where the mapping works at runtime but may involve subtle differences, such as precision loss when mapping a Kotlin `Int` to an Oracle `NUMBER` column. Warnings are logged but do not cause validation to fail (unless strict mode is enabled).

#### Constraint Validation

Schema validation checks that the database has the constraints your entity model declares. There are two categories of constraint findings:

**Mismatches (errors)** occur when a constraint exists in the database but contradicts the entity definition. For example, if `@FK val city: City` expects a foreign key referencing the `city` table, but the database has a foreign key on that column referencing the `account` table, that is a `FOREIGN_KEY_MISMATCH`.
Similarly, if the entity declares `@PK` with columns `(id)` but the database primary key is `(user_id, role_id)`, that is a `PRIMARY_KEY_MISMATCH`. Mismatches are always hard errors because they indicate a bug in the entity definition.

**Missing constraints (warnings)** occur when the database has no constraint at all for a declared `@PK`, `@FK`, or `@UK` field. These are warnings rather than errors because the ORM functions correctly without database-level enforcement: queries return the same results, inserts and updates succeed, and scrolling works as expected. However, database constraints serve as a safety net that the application layer cannot replace:

- **Primary key constraints** ensure row uniqueness at the database level. Without one, duplicate primary key values could be inserted by other applications or direct SQL.
- **Unique constraints** protect against application bugs and concurrent modifications that could insert duplicate values. Without a database-level unique constraint, a `@UK` field might contain duplicates that go undetected until a `findBy` call unexpectedly returns multiple results.
- **Foreign key constraints** protect referential integrity. Without a database-level foreign key constraint, orphaned rows can accumulate when referenced rows are deleted.

##### Suppressing Constraint Warnings

When the database intentionally omits a constraint (for performance, for views, or because integrity is enforced at the application level), use the `constraint` attribute to suppress the warning for that specific field:

[Kotlin]
```kotlin
// No FK constraint for performance reasons.
data class Order(
    @PK val id: Int = 0,
    @FK(constraint = false) val customer: Customer
) : Entity<Int>

// No unique index in the database.
data class User(
    @PK val id: Int = 0,
    @UK(constraint = false) val email: String
) : Entity<Int>
```

[Java]
```java
// No FK constraint for performance reasons.
record Order(@PK Integer id,
             @FK(constraint = false) Customer customer
) implements Entity<Integer> {}

// No unique index in the database.
record User(@PK Integer id,
            @UK(constraint = false) String email
) implements Entity<Integer> {}
```

Setting `constraint = false` only suppresses the "missing" warning. If the database *does* have a constraint that contradicts the entity definition (a mismatch), it is always reported as a hard error regardless of this flag.

In [strict mode](#strict-mode), missing constraint warnings are promoted to errors, causing validation to fail. The `constraint = false` flag takes precedence: fields marked with it are excluded from validation even in strict mode.

### Programmatic API

Any `ORMTemplate` created from a `DataSource` supports schema validation:

[Kotlin]
```kotlin
val orm = dataSource.orm

// Inspect errors programmatically
val errors = orm.validateSchema()

// Or validate and throw on failure
orm.validateSchemaOrThrow()
```

[Java]
```java
var orm = ORMTemplate.of(dataSource);

// Inspect errors programmatically
var errors = orm.validateSchema();

// Or validate and throw on failure
orm.validateSchemaOrThrow();
```

Both methods have overloads that accept specific types to validate:

[Kotlin]
```kotlin
orm.validateSchema(User::class, Order::class)
```

[Java]
```java
orm.validateSchema(List.of(User.class, Order.class));
```

The no-argument variants discover all entity and projection types on the classpath automatically. On success, a confirmation message is logged at INFO level. On failure, each error is logged at ERROR level, and `validateSchemaOrThrow()` throws a `PersistenceException` with a summary of all errors. Warnings are always logged at WARN level regardless of the outcome.

Templates created from a raw `Connection` or JPA `EntityManager` do not support schema validation, since they lack the `DataSource` needed to query database metadata.
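The error/warning split described above can be illustrated with a small self-contained sketch. The `Finding` record and `check` method are invented for illustration and are not Storm API: warnings pass by default but are promoted to failures when a strict flag is set, mirroring how `validateSchemaOrThrow()` behaves with and without strict mode.

```java
import java.util.List;

public class ValidationSemantics {

    record Finding(String kind, boolean warning) {}

    // Warnings are only counted as failures when strict mode is on;
    // errors always fail.
    static void check(List<Finding> findings, boolean strict) {
        long failures = findings.stream()
                .filter(f -> strict || !f.warning())
                .count();
        if (failures > 0) {
            throw new IllegalStateException(failures + " schema error(s)");
        }
    }

    public static void main(String[] args) {
        var findings = List.of(
                new Finding("TYPE_NARROWING", true),        // warning
                new Finding("NULLABILITY_MISMATCH", true)); // warning

        check(findings, false); // passes: warnings only

        try {
            check(findings, true); // strict mode: warnings promoted to errors
        } catch (IllegalStateException e) {
            System.out.println("strict failed: " + e.getMessage());
        }
    }
}
```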
### Strict Mode

By default, warnings (type narrowing and nullability mismatches) do not cause validation to fail. In strict mode, all findings are treated as errors:

[Kotlin]
```kotlin
val config = StormConfig.of(mapOf(VALIDATION_STRICT to "true"))
val orm = ORMTemplate.of(dataSource, config)
orm.validateSchemaOrThrow() // Warnings now cause failure
```

[Java]
```java
var config = StormConfig.of(Map.of(VALIDATION_STRICT, "true"));
var orm = ORMTemplate.of(dataSource, config);
orm.validateSchemaOrThrow(); // Warnings now cause failure
```

### Suppressing Validation with @DbIgnore

Use `@DbIgnore` to suppress schema validation for specific entities or fields. This is useful for legacy tables, columns handled by custom converters, or known mismatches that are safe to ignore.

**Suppress validation for an entire entity:**

[Kotlin]
```kotlin
@DbIgnore
data class LegacyUser(
    @PK val id: Int = 0,
    val name: String
) : Entity<Int>
```

[Java]
```java
@DbIgnore
record LegacyUser(@PK Integer id,
                  @Nonnull String name
) implements Entity<Integer> {}
```

**Suppress validation for a specific field:**

[Kotlin]
```kotlin
data class User(
    @PK val id: Int = 0,
    val name: String,
    @DbIgnore("DB uses FLOAT, but column only stores whole numbers")
    val age: Int
) : Entity<Int>
```

[Java]
```java
record User(@PK Integer id,
            @Nonnull String name,
            @DbIgnore("DB uses FLOAT, but column only stores whole numbers")
            @Nonnull Integer age
) implements Entity<Integer> {}
```

The optional `value` parameter documents why the mismatch is acceptable. When `@DbIgnore` is placed on an inline component field, validation is suppressed for all columns within that component.

### Custom Schemas

Schema validation respects `@DbTable(schema = "...")`. Each entity is validated against the schema specified in its annotation, or the connection's default schema if none is specified.
[Kotlin]
```kotlin
@DbTable(schema = "reporting")
data class Report(
    @PK val id: Int = 0,
    val name: String
) : Entity<Int>
```

[Java]
```java
@DbTable(schema = "reporting")
record Report(@PK Integer id,
              @Nonnull String name
) implements Entity<Integer> {}
```

### Spring Boot Configuration

When using the Spring Boot Starter, both record and schema validation can be configured through `application.yml`:

```yaml
storm:
  validation:
    record-mode: fail  # or "warn" or "none" (default: fail)
    schema-mode: none  # or "warn" or "fail" (default: none)
    strict: false      # treat schema warnings as errors (default: false)
```

The `schema-mode` values:

| Value | Behavior |
|-------|----------|
| `none` | Schema validation is skipped (default). |
| `warn` | Mismatches are logged at WARN level; startup continues. |
| `fail` | Mismatches cause startup to fail with a `PersistenceException`. |

### Configuration Properties

| Property | Default | Description |
|----------|---------|-------------|
| `storm.validation.record_mode` | `fail` | Record validation mode: `fail`, `warn`, or `none` |
| `storm.validation.schema_mode` | `none` | Schema validation mode: `none`, `warn`, or `fail` (Spring Boot only) |
| `storm.validation.strict` | `false` | When `true`, schema validation warnings are treated as errors |

========================================
## Source: batch-streaming.md
========================================

# Batch Processing & Streaming

Database performance often degrades when applications issue many individual SQL statements in a loop. Each statement incurs network latency, server-side parsing, and transaction log overhead. Batch processing and streaming solve two sides of this problem: batch processing reduces the cost of writing many rows, and streaming reduces the memory cost of reading many rows.

- **Batch processing** groups multiple insert/update/delete operations into a single database round-trip, reducing network overhead.
  JDBC batching sends a prepared statement once and supplies multiple parameter sets, which the database can execute as a unit. This is significantly faster than issuing individual statements.
- **Streaming** processes query results row by row without loading the entire result set into memory. This is essential when result sets are too large to fit in memory, or when you want to begin processing before the query has finished returning all rows.

---

## Batch Processing

When you pass a list of entities to Storm's insert, update, remove, or upsert methods, Storm automatically uses JDBC batch statements. The framework groups rows together and sends them to the database in a single round-trip, rather than issuing one statement per entity.

### Batch Insert

[Kotlin]
```kotlin
val users = listOf(
    User(email = "alice@example.com", name = "Alice", city = city),
    User(email = "bob@example.com", name = "Bob", city = city),
    User(email = "charlie@example.com", name = "Charlie", city = city)
)
orm insert users
```

[Java]
```java
List<User> users = List.of(
    new User(null, "alice@example.com", "Alice", null, city),
    new User(null, "bob@example.com", "Bob", null, city),
    new User(null, "charlie@example.com", "Charlie", null, city)
);
orm.entity(User.class).insert(users);
```

### Batch Update

Pass a list of modified entities and Storm generates a batched UPDATE statement. Each entity in the list produces one row in the batch. This is especially useful when you need to apply a transformation to many rows at once.

[Kotlin]
```kotlin
val updatedUsers = users.map { it.copy(active = true) }
orm update updatedUsers
```

[Java]

Since Java records are immutable, you create new record instances with the modified values. Storm batches the resulting UPDATE statements.
```java List<User> updatedUsers = users.stream() .map(u -> new User(u.id(), u.email(), u.name(), true, u.city())) .toList(); orm.entity(User.class).update(updatedUsers); ``` ### Batch Remove Batch removes delete multiple entities in a single round-trip. Storm generates a batched DELETE using each entity's primary key. [Kotlin] ```kotlin orm remove users // Or remove all entities of a type orm.removeAll() ``` [Java] ```java orm.entity(User.class).remove(users); ``` ### Batch Upsert Batch upserts combine insert and update semantics for a list of entities. Each entity is either inserted (if no matching row exists) or updated (if a row with the same unique constraint already exists). This is useful for data synchronization scenarios where you receive a batch of records from an external source and need to merge them into your database. See [Upserts](upserts.md) for details on how conflict detection works per database. [Kotlin] ```kotlin val users = listOf( User(email = "alice@example.com", name = "Alice Updated", city = city), User(email = "dave@example.com", name = "Dave", city = city) ) orm upsert users // Inserts new, updates existing ``` [Java] ```java List<User> users = List.of( new User(null, "alice@example.com", "Alice Updated", null, city), new User(null, "dave@example.com", "Dave", null, city) ); orm.entity(User.class).upsert(users); // Inserts new, updates existing ``` ### Batch Size Storm automatically groups batch operations for optimal performance. Batch operations have overloaded methods that accept a batch size parameter, giving you control over how many rows are grouped together before being sent to the database. Smaller batches reduce memory usage, while larger batches reduce network round-trips. The default batch size works well for most cases.
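Independent of Storm, the round-trip arithmetic behind batch sizing is easy to sketch in plain Java. The `chunk` helper below is illustrative only, not part of the Storm API; it shows how a row list maps onto `executeBatch()` round-trips:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSizeDemo {
    // Split rows into batches of at most `size` elements. Each batch
    // corresponds to one JDBC executeBatch() round-trip.
    static <T> List<List<T>> chunk(List<T> rows, int size) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += size) {
            batches.add(rows.subList(i, Math.min(i + size, rows.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        // 1200 rows with a batch size of 500 -> 3 round-trips (500 + 500 + 200)
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 1200; i++) rows.add(i);
        System.out.println(chunk(rows, 500).size()); // 3
    }
}
```

With 1,200 rows and a batch size of 500, the grouping yields three round-trips: two full batches and one partial batch of 200 rows. Halving the batch size doubles the round-trips but also halves the peak number of parameter sets held in memory per statement.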
[Kotlin] ```kotlin // Insert in batches of 500 orm.entity(User::class).insert(users, 500) ``` [Java] ```java // Insert in batches of 500 orm.entity(User.class).insert(users, 500); ``` --- ## Streaming When a query returns thousands or millions of rows, loading them all into a `List` can exhaust memory. Streaming processes rows one at a time as they arrive from the database, keeping memory usage constant regardless of result set size. > **Warning:** Streams returned by Storm must be closed after use. Use `.use {}` (Kotlin) or try-with-resources (Java) to ensure proper cleanup. Failing to close a stream will leak database resources (cursors, connections). [Kotlin] Kotlin uses `Flow` for streaming, which provides automatic resource cleanup through structured concurrency. When the Flow completes or the coroutine is cancelled, database cursors and connections are released without explicit cleanup code. ```kotlin val users: Flow<User> = orm.entity(User::class).selectAll() // Process one at a time -- only one row in memory users.collect { user -> processUser(user) } // Transform and collect val emails: List<String> = users .map { it.email } .toList() // Count without loading all entities val count: Int = users.count() ``` [Java] Java uses `Stream` for streaming. Unlike Kotlin's Flow, Java streams do not have automatic resource management through structured concurrency. You must explicitly close streams to release database resources (cursors, connections). **Always use try-with-resources** to ensure cleanup happens even if an exception occurs.
```java // Process one at a time try (Stream<User> users = orm.entity(User.class).selectAll()) { users.forEach(user -> processUser(user)); } // Transform and collect try (Stream<User> users = orm.entity(User.class).selectAll()) { List<String> emails = users .map(User::email) .toList(); } // Count without loading all entities try (Stream<User> users = orm.entity(User.class).selectAll()) { long count = users.count(); } ``` ### Filtered Streaming You can combine streaming with query filters to process only rows that match your criteria. This pushes the filtering to the database rather than loading all rows and filtering in application code. [Kotlin] ```kotlin val filteredUsers: Flow<User> = orm.entity(User::class) .select() .where(User_.name like "A%") .resultFlow ``` [Java] ```java try (Stream<User> users = orm.entity(User.class) .select() .where(User_.name, LIKE, "A%") .getResultStream()) { users.forEach(this::processUser); } ``` ### Streaming with Transactions When you need to read and update rows as part of a single atomic operation, wrap the streaming operation in a transaction. This ensures that the data you read and the updates you write are consistent, and that the entire operation either succeeds or is rolled back. ```kotlin transaction { val users: Flow<User> = orm.selectAll() users.collect { user -> // Process within the same transaction orm update user.copy(processed = true) } } ``` --- ## Tips 1. **Always close Java streams** - use try-with-resources to prevent resource leaks (database cursors, connections) 2. **Kotlin Flow is safer** - automatic resource management through structured concurrency 3. **Use streaming for large datasets** - avoid loading millions of rows into memory 4. **Batch operations are automatic** - Storm handles JDBC batching internally for bulk inserts/updates/deletes 5. **Wrap in transactions** - batch operations within a transaction commit atomically and perform better 6.
**Tune batch size for large imports** - use the batch size parameter for datasets with thousands of rows ======================================== ## Source: upserts.md ======================================== # Upserts Many applications need to create a record if it does not exist, or update it if it does. A naive approach using separate SELECT-then-INSERT-or-UPDATE logic introduces race conditions: two concurrent requests can both see that a row is missing and both attempt to insert, causing a constraint violation. Even with application-level locking, this approach adds complexity and reduces throughput. Storm provides first-class support for upsert (insert-or-update) operations across all major databases. By delegating conflict resolution to the database engine itself, upserts behave predictably and handle race conditions atomically in a single SQL statement. No application-level locking or retry logic is needed. Use upsert when you need idempotent write operations, data synchronization from external sources, or any scenario where the same logical record may arrive multiple times. --- [Kotlin] ### Single Upsert The simplest form of upsert operates on a single entity. Storm determines whether to insert or update based on the table's unique constraints. The returned entity includes any database-generated values, such as an auto-incremented primary key. ```kotlin val user = orm upsert User( email = "alice@example.com", name = "Alice", birthDate = LocalDate.of(1990, 5, 15), city = city ) // user.id is now populated with the database-generated ID ``` If a user with matching unique constraints exists, it will be updated. Otherwise, a new user is inserted.
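The race condition described at the top of this page can be illustrated with an in-memory analogy. The sketch below is plain Java, not Storm code: `ConcurrentHashMap.merge` stands in for the database's native upsert, collapsing check-then-insert into a single atomic operation so two concurrent writers can never both take the insert path.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class UpsertAnalogy {
    // Key = the unique constraint (email); value = the row's name column.
    static final ConcurrentMap<String, String> table = new ConcurrentHashMap<>();

    // merge() is atomic: it inserts when the key is absent and runs the
    // update branch otherwise. That is the guarantee a database-native
    // upsert provides for rows, with no application-level locking.
    static String upsert(String email, String name) {
        return table.merge(email, name, (oldName, newName) -> newName);
    }

    public static void main(String[] args) {
        upsert("alice@example.com", "Alice");         // insert path
        upsert("alice@example.com", "Alice Updated"); // update path
        System.out.println(table.get("alice@example.com")); // Alice Updated
    }
}
```

A real upsert also updates non-key columns and returns database-generated values; the analogy only captures the atomic insert-or-update decision that makes SELECT-then-INSERT races impossible.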
### Batch Upsert Upsert multiple entities in a single batch operation: ```kotlin val users = listOf( User(email = "alice@example.com", name = "Alice Updated", city = city), User(email = "bob@example.com", name = "Bob", city = city), User(email = "charlie@example.com", name = "Charlie", city = city) ) orm upsert users ``` ### Upsert within a Transaction Upserts participate in transactions like any other Storm operation. When you need to upsert an entity that depends on another entity (for example, a user that references a city), wrap both operations in a transaction to ensure atomicity. ```kotlin transaction { val city = orm insert City(name = "Sunnyvale", population = 155_000) val user = orm upsert User( email = "alice@example.com", name = "Alice", city = city ) } ``` [Java] ### Single Upsert The Java API provides upsert through the `entity()` method. Pass `null` for the primary key field to indicate that the database should generate the value on insert. ```java orm.entity(User.class).upsert(new User( null, // null ID triggers insert logic "alice@example.com", "Alice", LocalDate.of(1990, 5, 15), city )); ``` If a user with matching unique constraints exists, it will be updated. Otherwise, a new user is inserted. ### Upsert and Fetch When you need the resulting entity with all database-generated values (such as the assigned primary key or default column values), use `upsertAndFetch`. This performs the upsert and returns the complete entity as it exists in the database after the operation. In Kotlin, `orm upsert` returns the entity with generated values by default, but the Java API separates `upsert` (void) from `upsertAndFetch` (returns entity) for clarity. 
```java User user = orm.entity(User.class).upsertAndFetch(new User( null, "alice@example.com", "Alice", LocalDate.of(1990, 5, 15), city )); // user.id() is now populated with the database-generated ID ``` ### With Lombok Builder If your entity uses Lombok's `@Builder`, you can construct upsert arguments using the builder pattern. This avoids positional constructor arguments and makes the code more readable when entities have many fields. ```java User user = orm.entity(User.class).upsertAndFetch(User.builder() .email("alice@example.com") .name("Alice") .birthDate(LocalDate.of(1990, 5, 15)) .city(city) .build() ); ``` ### Batch Upsert Batch upserts process a list of entities in a single batched operation, combining JDBC batching with the database's native upsert syntax. This is significantly faster than upserting entities one at a time in a loop. ```java List<User> users = List.of( new User(null, "alice@example.com", "Alice Updated", null, city), new User(null, "bob@example.com", "Bob", null, city) ); orm.entity(User.class).upsert(users); ``` --- ## How Upsert Works Storm does not implement upsert logic in application code. Instead, it delegates to each database platform's native upsert syntax. This ensures atomicity at the database level and avoids race conditions that would occur with application-level check-then-insert logic. The specific SQL syntax varies by database: | Database | SQL Strategy | Conflict Detection | |----------|--------------|--------------------| | Oracle | `MERGE INTO ...` | Explicit match conditions | | MS SQL Server | `MERGE INTO ...` | Explicit match conditions | | PostgreSQL | `INSERT ... ON CONFLICT DO UPDATE` | Targets a specific unique constraint or index | | MySQL/MariaDB | `INSERT ... ON DUPLICATE KEY UPDATE` | Primary key or any unique constraint | | SQLite | `INSERT ...
ON CONFLICT DO UPDATE` | Targets a specific unique constraint | | H2 | `MERGE INTO ...` | Explicit match conditions | ### Database-Specific Behavior - **Oracle**, **MS SQL Server**, and **H2** define upsert behavior through explicit match conditions in the `MERGE` statement, giving you control over how conflicts are detected. - **PostgreSQL** upserts target a specific conflict source (a unique constraint or index), making conflict resolution explicit and predictable. This is the most granular approach. - **MySQL/MariaDB** upserts trigger the update branch when an insert would violate the primary key **or any unique constraint**. When multiple unique constraints exist, the database decides which conflict applies. Be aware of this if your table has multiple unique constraints. - **SQLite** uses the same `ON CONFLICT` syntax as PostgreSQL, targeting a specific unique constraint (available since SQLite 3.24). ## Failure Modes Understanding how upserts fail helps you diagnose issues quickly and design your schema correctly. **Missing dialect dependency:** Upsert requires a database-specific dialect module (e.g., `storm-postgresql`, `storm-mysql`). If no dialect is on the classpath, Storm throws an `UnsupportedOperationException` at runtime when you call `upsert()`. The error message indicates that the current dialect does not support upsert operations. Add the appropriate dialect dependency to resolve this. See [Dialects](dialects.md) for the full list. **Missing unique constraint:** Upsert relies on database-level unique constraints to detect conflicts. If the table has no unique constraint (or the constraint does not cover the fields you expect), the behavior depends on the database: - **Oracle/MS SQL Server/H2:** The `MERGE` statement's match condition determines conflict detection. If the match condition references columns without a unique constraint, concurrent upserts may produce duplicates. 
- **PostgreSQL:** The `ON CONFLICT` clause references a specific constraint. If the constraint does not exist, the database returns a SQL error. - **MySQL/MariaDB:** Without any unique constraint, every row is treated as a new insert. No update branch is triggered, and duplicates accumulate silently. - **SQLite:** Behaves similarly to PostgreSQL. The `ON CONFLICT` clause references a specific constraint. In all cases, Storm does **not** fall back to a plain insert. It always generates the upsert SQL for the configured dialect. If the SQL fails at the database level, the exception propagates to the caller. **Joined sealed entities:** Upsert is not supported for [Joined Table](polymorphism.md#joined-table-inheritance) polymorphic entities. SQL-level upsert constructs (`ON CONFLICT`, `MERGE`) are fundamentally single-table operations. Attempting an upsert on a joined sealed entity throws an `UnsupportedOperationException`. Use `insert()` and `update()` separately instead. ## Requirements 1. **Database dialect** - include the appropriate dialect dependency for your database (see [Dialects](dialects.md)) 2. **Unique constraints** - the table must have a primary key or unique constraint for conflict detection 3. **Null ID for new inserts** - pass default `0` (Kotlin) or `null` (Java) for the primary key field to allow the database to generate a value 4. **Not a joined sealed entity** - upsert is not supported for [Joined Table](polymorphism.md#joined-table-inheritance) polymorphic entities, because SQL-level upsert constructs (`ON CONFLICT`, `MERGE`, etc.) are fundamentally single-table operations. Use `insert()` and `update()` separately instead ## Common Use Cases ### Idempotent API Endpoints REST APIs should be idempotent whenever possible: calling the same endpoint multiple times should produce the same result. Upserts make this straightforward. 
If a client retries a request (due to a timeout or network error), the second call updates the existing row instead of failing with a duplicate key violation. ```kotlin fun syncUser(email: String, name: String, city: City): User { return orm upsert User(email = email, name = name, city = city) } ``` ### Data Synchronization Import data from an external source, creating new records and updating existing ones: ```kotlin fun syncUsersFromExternalSource(externalUsers: List<ExternalUser>) { val users = externalUsers.map { ext -> User(email = ext.email, name = ext.name, city = resolveCity(ext.city)) } orm upsert users } ``` ### Configuration or Settings Tables Key-value configuration tables are a natural fit for upserts. You want to store the latest value for a given key, regardless of whether the key already exists. Using upsert eliminates the need to check for existence before writing. ```kotlin data class Setting( @PK val key: String, val value: String ) : Entity orm upsert Setting(key = "theme", value = "dark") ``` ## Entity Definition for Upserts For Java records, you can define a convenience constructor that omits the primary key for cleaner upsert calls: ```java record User(@PK Integer id, String email, String name, LocalDate birthDate, @FK City city) implements Entity { // Convenience constructor for inserts/upserts public User(String email, String name, LocalDate birthDate, City city) { this(null, email, name, birthDate, city); } } ``` This allows you to write: ```java orm.entity(User.class).upsert(new User("alice@example.com", "Alice", birthDate, city)); ``` ## Tips 1. **Use upsert for idempotent operations** - safe to retry without creating duplicates 2. **Check your constraints** - upsert relies on unique constraints to detect conflicts 3. **Use upsertAndFetch for generated IDs** (Java) - get the actual ID assigned by the database; Kotlin's `orm upsert` returns the entity with the ID populated 4.
**Include the dialect dependency** - upsert requires database-specific SQL syntax; see [Dialects](dialects.md) 5. **Be mindful of multiple unique constraints** - especially on MySQL/MariaDB, where any unique constraint can trigger the update branch ======================================== ## Source: sql-templates.md ======================================== # SQL Templates SQL templates are the foundation of Storm. The `EntityRepository` and `ProjectionRepository` APIs are built entirely on top of SQL templates. Everything those repositories do, such as generating SELECT columns, deriving joins from `@FK` relationships, and resolving table aliases, uses the same template engine available to you directly. Most users will interact with Storm through repositories and only use templates when they need custom queries. This page covers the template features you're most likely to use: referencing tables and columns with automatic alias resolution, and understanding how joins are derived. For details on how query results are mapped to records, see [Hydration](hydration.md). --- ## Template Syntax Storm uses string interpolation to inject template elements into SQL. Rather than concatenating strings or using positional placeholders, you embed type references, metamodel fields, and parameter values directly in the SQL text. Storm resolves these at compilation time into proper column lists, table aliases, and parameterized placeholders. The syntax differs between Kotlin and Java due to language-level string interpolation support. [Kotlin] Kotlin uses `${}` interpolation inside a lambda. 
With the [Storm compiler plugin](string-templates.md), interpolated expressions are automatically wrapped in `t()` calls at compile time, so you can write natural Kotlin string interpolation: ```kotlin orm.query { """ SELECT ${User::class} FROM ${User::class} WHERE ${User_.email} = $email """ } ``` The compiler plugin wraps each interpolated expression in `t()`, which is the single entry point for all template elements: types expand to column lists, metamodel references resolve to column names, and values become parameterized placeholders. Without the plugin, you can wrap expressions in `t()` manually. See [String Templates](string-templates.md) for setup instructions. [Java] Java uses string templates with `\{}` syntax: ```java orm.query(RAW.""" SELECT \{User.class} FROM \{User.class} WHERE \{User_.email} = \{email}""") ``` > **Note:** Java string templates are a preview feature. Storm for Java requires Java 21+ with preview mode enabled (`--enable-preview`). Storm will adapt to the final string template specification once it's released. --- ## Data Interface The `Data` interface marks a record or data class as eligible for Storm's SQL generation. Without this marker, Storm treats the type as a plain container and expects you to write all SQL manually. With it, template expressions like `${MyType::class}` in a SELECT clause expand into the full column list, and the same expression in a FROM clause generates the table name with appropriate joins for `@FK` fields. Use `Data` for query-specific result types that do not need full repository support (insert, update, remove). If you need CRUD operations, use `Entity` or `Projection` instead, which extend `Data`. 
[Kotlin] ```kotlin data class PetWithOwner( val name: String, val birthDate: LocalDate?, @FK val owner: Owner ) : Data // SQL template generates SELECT columns and joins val pets = orm.query { """ SELECT ${PetWithOwner::class} FROM ${PetWithOwner::class} WHERE ${Owner_.city} = $city """ }.getResultList(PetWithOwner::class) ``` [Java] ```java record PetWithOwner( @Nonnull String name, @Nullable LocalDate birthDate, @FK Owner owner ) implements Data {} // SQL template generates SELECT columns and joins List<PetWithOwner> pets = orm.query(RAW.""" SELECT \{PetWithOwner.class} FROM \{PetWithOwner.class} WHERE \{Owner_.city} = \{city}""") .getResultList(PetWithOwner.class); ``` **When to use:** Single-use queries where you want Storm's SQL generation, automatic joins via `@FK`, and type-safe column references. --- ## Entity and Projection For reusable types with repository support (`findById`, `insert`, `update`, etc.), use `Entity` or `Projection`. These extend `Data` and provide full repository operations. See [Entities](entities.md) and [Projections](projections.md) for details. | Type | Template Support | Repository Support | |------|------------------|-------------------| | Plain record | No | No | | `Data` | Yes | No | | `Entity`/`Projection` | Yes | Yes | For plain records with manual SQL, see [Hydration](hydration.md). --- ## Auto-Join Generation When you use a type in both SELECT and FROM expressions, Storm automatically generates joins for `@FK` relationships. This eliminates the need to write join clauses manually.
### How Auto-Joins Work Given these entities: [Kotlin] ```kotlin data class Country( @PK val id: Int, val name: String, val code: String ) : Entity data class City( @PK val id: Int, val name: String, @FK val country: Country ) : Entity data class User( @PK val id: Int, val email: String, @FK val city: City ) : Entity ``` This query: ```kotlin orm.query { """ SELECT ${User::class} FROM ${User::class} """ } ``` [Java] ```java record Country(@PK Integer id, @Nonnull String name, @Nonnull String code ) implements Entity {} record City(@PK Integer id, @Nonnull String name, @FK Country country ) implements Entity {} record User(@PK Integer id, @Nonnull String email, @FK City city ) implements Entity {} ``` This query: ```java orm.query(RAW.""" SELECT \{User.class} FROM \{User.class}""") ``` Generates: ```sql SELECT u.id, u.email, c.id, c.name, co.id, co.name, co.code FROM user u INNER JOIN city c ON u.city_id = c.id INNER JOIN country co ON c.country_id = co.id ``` Storm traverses the record type graph, following `@FK` annotations to generate the necessary joins. The ON clauses are derived automatically from the foreign key relationships. ### Nullable FKs Become LEFT JOINs When an `@FK` field is nullable, Storm generates a LEFT JOIN instead of an INNER JOIN: ```kotlin data class User( @PK val id: Int, val email: String, @FK val city: City? // Nullable FK ) : Entity ``` Generates: ```sql SELECT u.id, u.email, c.id, c.name, co.id, co.name, co.code FROM user u LEFT JOIN city c ON u.city_id = c.id LEFT JOIN country co ON c.country_id = co.id ``` Nullability propagates through the relationship chain. If `city` is nullable, all joins that depend on it (like `country` through `city`) also become LEFT JOINs. ### Join Ordering Storm automatically orders joins so that LEFT JOINs appear after INNER JOINs. This prevents unintended filtering effects that can occur when outer joins precede inner joins. 
``` FROM user u INNER JOIN department d ON u.department_id = d.id -- INNER joins first INNER JOIN company co ON d.company_id = co.id LEFT JOIN city c ON u.city_id = c.id -- LEFT joins last LEFT JOIN country cn ON c.country_id = cn.id ``` ### Disabling Auto-Joins Use `from(Class, autoJoin = false)` to disable automatic join generation: ```kotlin orm.query { """ SELECT ${User::class} FROM ${from(User::class, autoJoin = false)} JOIN ${table(City::class)} ON ${User_.city} = ${City_.id} """ } ``` --- ## Column References with Metamodel Hardcoding column names as strings in SQL is error-prone: a renamed field silently breaks at runtime. Storm's compile-time metamodel eliminates this risk. For each entity or data class, the code generator (KSP for Kotlin, annotation processor for Java) generates a companion class (e.g., `User_`) with a static field for every column. These fields resolve to the correct column name and table alias at template compilation time, so a renamed field causes a compile error instead of a runtime failure. ### Basic Column Reference For an entity `User`, Storm generates `User_` with fields for each column. Use these fields anywhere you would write a column name in SQL. [Kotlin] ```kotlin // Reference a column in WHERE clause orm.query { """ SELECT ${User::class} FROM ${User::class} WHERE ${User_.email} = $email """ } ``` [Java] ```java orm.query(RAW.""" SELECT \{User.class} FROM \{User.class} WHERE \{User_.email} = \{email}""") ``` ### Nested Column References Metamodel fields support path navigation for `@FK` relationships. This lets you reference columns on joined tables without writing the join alias yourself. Storm resolves the path to the correct alias based on the auto-generated joins. 
[Kotlin] ```kotlin // Reference a column through a relationship orm.query { """ SELECT ${User::class} FROM ${User::class} WHERE ${User_.city.country.code} = ${"US"} """ } ``` [Java] ```java orm.query(RAW.""" SELECT \{User.class} FROM \{User.class} WHERE \{User_.city.country.code} = \{"US"}""") ``` This generates: ```sql WHERE co.code = ? ``` The alias (`co`) is resolved from the auto-generated joins. ### Column in Different Contexts Use `column()` to explicitly reference a column with alias resolution: ```kotlin orm.query { """ SELECT ${User::class} FROM ${User::class} ORDER BY ${column(User_.email)} """ } ``` --- ## ResolveScope When working with subqueries or nested template expressions, you may need to control how Storm resolves table aliases. The `ResolveScope` enum determines where Storm looks for aliases when resolving a column or table reference. | Scope | Behavior | |-------|----------| | `CASCADE` | Enforce unambiguity by requiring the alias to be resolved uniquely. This is the default. | | `INNER` | Resolve only within the current (innermost) scope. Fails if the alias is not defined locally. | | `OUTER` | Resolve only from outer scope(s), ignoring locally defined aliases. | The `alias()` and `column()` template functions accept an optional `ResolveScope` parameter. This is most useful in correlated subqueries where the same entity appears in both the outer and inner query. For example, selecting all pets that have at least one visit: [Kotlin] ```kotlin val pets = orm.entity(Pet::class) .select() .whereExists { subquery(Visit::class) .where { "${column(Visit_.pet, INNER)} = ${column(Pet_.id, OUTER)}" } } .resultList ``` [Java] ```java var pets = orm.entity(Pet.class).select() .where(wb -> wb.exists( wb.subquery(Visit.class) .where(RAW."\{column(Visit_.pet, INNER)} = \{column(Pet_.id, OUTER)}"))) .getResultList(); ``` The `column()` function with a metamodel reference resolves to the fully qualified column name (e.g., `v.pet_id` and `p.id`). 
`INNER` tells Storm to resolve `Visit_.pet` from the subquery, while `OUTER` resolves `Pet_.id` from the main query. In most cases the default `CASCADE` scope is correct, because it ensures that each alias resolves to exactly one table. Use `INNER` or `OUTER` when writing correlated subqueries where you need to control whether a reference resolves to the inner query's tables or the outer query's tables. --- ## Common Template Elements Most queries only need a few template elements. Here are the ones you'll use most often: | Element | Description | |---------|-------------| | `${Class}` | Type reference for SELECT columns or FROM clause | | `${Metamodel_}` (e.g., `${User_.email}`) | Column reference with automatic alias resolution | | `${column(Metamodel)}` | Explicit column reference | | `${table(Class)}` | Table reference without auto-join | | `${from(Class, autoJoin)}` | FROM clause with auto-join control | | `${unsafe(String)}` | Raw SQL (use with caution) | For advanced use cases like batch operations, subqueries, or custom insert/update statements, Storm provides additional elements. See the `Templates` class for the full API. --- ## Examples The following examples demonstrate common query patterns using SQL templates. Each combines multiple template features (type references, metamodel columns, parameter binding) into a complete query. ### Filtering with Metamodel [Kotlin] ```kotlin val users = orm.query { """ SELECT ${User::class} FROM ${User::class} WHERE ${User_.city.country.code} = ${"US"} AND ${User_.email} LIKE ${"%@example.com"} """ }.getResultList(User::class) ``` [Java] ```java List<User> users = orm.query(RAW.""" SELECT \{User.class} FROM \{User.class} WHERE \{User_.city.country.code} = \{"US"} AND \{User_.email} LIKE \{"%@example.com"}""") .getResultList(User.class); ``` ### Custom Joins When auto-join does not produce the join type or condition you need, disable it with `from(Class, autoJoin = false)` and write explicit join clauses.
This is common for LEFT JOINs with aggregation or joins on non-FK conditions. ```kotlin orm.query { """ SELECT ${User::class}, COUNT(${Order_.id}) FROM ${from(User::class, autoJoin = false)} LEFT JOIN ${table(Order::class)} ON ${Order_.userId} = ${User_.id} GROUP BY ${User_.id} """ } ``` ### Subquery Subqueries use `column()` and `table()` to reference columns and tables without triggering auto-join generation. This keeps the subquery self-contained, with its own FROM clause and alias scope. ```kotlin orm.query { """ SELECT ${User::class} FROM ${User::class} WHERE ${User_.id} IN ( SELECT ${column(Order_.userId)} FROM ${table(Order::class)} WHERE ${Order_.total} > ${1000} ) """ } ``` --- ## Template Processing Since all Storm operations are built on the SQL template engine, understanding how templates are processed helps explain Storm's performance characteristics. Whether you use repository methods like `findById()` or write custom queries, the same template engine powers every database interaction. Storm processes templates in two distinct steps: 1. **Compilation.** The template is parsed and analyzed. Storm resolves table aliases, traverses record type graphs to determine `@FK` relationships, generates the appropriate joins, and produces a reusable SQL shape with parameter placeholders. This step involves type introspection, alias management, and SQL construction. 2. **Binding.** Parameter values are substituted into the compiled template. This step is lightweight: it simply fills in the placeholders with actual values and prepares the statement for execution. The compilation step does the heavy lifting. It analyzes your record types, walks through nested relationships, determines which joins are needed and in what order, and assembles the final SQL structure. The binding step, by contrast, is a straightforward value substitution. Because the template model closely mirrors SQL structure, compilation is already fast. 
Storm doesn't need to translate between paradigms or build complex query plans. The template essentially describes the SQL you want, and Storm fills in the details like column lists, aliases, and join conditions. This direct mapping keeps compilation overhead low even without caching. ### Compilation Caching Storm caches compiled templates to eliminate even this small overhead on repeated queries. The cache key is based on the template structure, not the parameter values. When you execute the same query pattern with different parameter values, Storm retrieves the compiled template from the cache and only performs the binding step. ```kotlin // First execution: full compilation + binding userRepository.find(User_.email eq "alice@example.com") // Subsequent executions: cache hit, binding only userRepository.find(User_.email eq "bob@example.com") userRepository.find(User_.email eq "charlie@example.com") ``` This applies to all Storm operations. Repository methods like `findAll()`, `insert()`, and `update()` benefit from the same caching mechanism. Once a query pattern has been compiled, repeated use across your application reuses the cached compilation. The performance improvement from caching is significant, typically 10-20x faster for cached queries compared to full compilation. For most applications, templates are compiled once during the initial requests and then served from cache for the lifetime of the application. ### Why This Matters Traditional database latency from network round-trips and query execution is handled efficiently by modern runtimes through non-blocking IO and asynchronous operations. This means IO-bound work scales well without consuming threads or CPU cycles while waiting. At high scale, CPU time becomes the limiting factor. A server handling thousands of requests per second needs to minimize per-request overhead. 
Compilation caching ensures that Storm contributes minimal CPU overhead after the initial warmup period, leaving cycles available for your application logic and allowing better utilization of your hardware. ======================================== ## Source: string-templates.md ======================================== # String Templates String templates are the mechanism that makes Storm's SQL template engine injection-safe by design. Rather than concatenating SQL strings (which invites SQL injection), Storm uses language-level string interpolation that separates SQL fragments from parameter values at compile time. This page explains how string templates work in both Kotlin and Java, their current status, and how to set them up. --- ## Overview Storm's SQL template engine accepts a template consisting of **fragments** (the literal SQL parts) and **values** (the interpolated expressions). The engine never concatenates values into SQL text. Instead, values are processed by the template engine: types expand into column lists, metamodel fields resolve to column names, and plain values become parameterized placeholders (`?`). This design makes SQL injection structurally impossible. Both Kotlin and Java provide language-level string interpolation that Storm leverages for this purpose, but each language takes a different approach. | | Kotlin | Java | |---|---|---| | **Syntax** | `$variable` or `${expression}` | `\{expression}` | | **Mechanism** | Compiler plugin (auto-wraps interpolations) | String Templates (preview feature) | | **Status** | Stable (Kotlin 2.0+) | Preview (Java 21+, evolving) | | **Module** | `storm-kotlin` | `storm-java21` | --- ## Kotlin ### How It Works Kotlin's string interpolation (`${}`) is a stable language feature. Storm provides a compiler plugin that transforms interpolated expressions inside template lambdas at compile time. 
When you write: ```kotlin orm.query { "SELECT ${User::class} FROM ${User::class} WHERE id = $id" } ``` The compiler plugin detects that the lambda has a `TemplateContext` receiver and automatically wraps each interpolated expression in a `t()` call: ```kotlin orm.query { "SELECT ${t(User::class)} FROM ${t(User::class)} WHERE id = ${t(id)}" } ``` The `t()` function is the single entry point for all template elements. It handles types (expanding to column lists), metamodel fields (resolving to column names with aliases), and plain values (becoming parameterized placeholders). The compiler plugin inserts these calls so you don't have to. This transformation happens at compile time and produces identical bytecode to writing `t()` manually. The resulting template is then processed by Storm's SQL template engine, which splits the string on the `t()` boundaries to obtain fragments and values. ### Setup Add the Storm compiler plugin to your Kotlin compiler configuration. The plugin is published as a separate artifact per Kotlin major.minor version, so that each artifact is compiled against the matching Kotlin compiler API. Choose the artifact that matches the Kotlin version in your project: | Kotlin version | Artifact ID | |---|---| | 2.0.x | `storm-compiler-plugin-2.0` | | 2.1.x | `storm-compiler-plugin-2.1` | | 2.2.x | `storm-compiler-plugin-2.2` | | 2.3.x | `storm-compiler-plugin-2.3` | The artifact version matches the Storm version (e.g., `@@STORM_VERSION@@`). [Gradle (Kotlin DSL)] ```kotlin dependencies { kotlinCompilerPluginClasspath("st.orm:storm-compiler-plugin-2.0:@@STORM_VERSION@@") } ``` [Maven] Add the plugin jar as a dependency of `kotlin-maven-plugin`:
```xml
<plugin>
    <groupId>org.jetbrains.kotlin</groupId>
    <artifactId>kotlin-maven-plugin</artifactId>
    <version>${kotlin.version}</version>
    <dependencies>
        <dependency>
            <groupId>st.orm</groupId>
            <artifactId>storm-compiler-plugin-2.0</artifactId>
            <version>${storm.version}</version>
        </dependency>
    </dependencies>
</plugin>
```
The plugin activates automatically via service loader once it is on the Kotlin compiler classpath. No additional configuration flags are needed. ### Without the Compiler Plugin The compiler plugin is optional.
Without it, you can still use Storm's template engine by wrapping interpolations in `t()` manually: ```kotlin orm.query { "SELECT ${t(User::class)} FROM ${t(User::class)} WHERE id = ${t(id)}" } ``` This produces identical behavior. The `t()` function is always available inside template lambdas. The compiler plugin simply automates the wrapping. ### Interpolation Safety When a `TemplateBuilder` lambda runs without the compiler plugin and without any explicit `t()` or `interpolate()` calls, Storm cannot distinguish a pure SQL literal from a string with accidentally concatenated interpolations. The `storm.validation.interpolation_mode` system property controls how Storm handles this situation: | Value | Behavior | |-------|----------| | `warn` | Logs a warning (default). Suitable for development. | | `fail` | Throws an `IllegalStateException`. Recommended for production. | | `none` | Disables the check entirely. | In `warn` mode (the default), Storm logs the following message: ``` WARNING: TemplateBuilder lambda executed without the Storm compiler plugin and without explicit t() or interpolate() calls. If this template uses string interpolations, values may have been concatenated directly into the SQL, risking SQL injection. See https://orm.st/string-templates for setup instructions. To change this behavior, set -Dstorm.validation.interpolation_mode=warn|fail|none. ``` This helps catch cases where the compiler plugin is missing from the build configuration, causing interpolated values to be concatenated directly into the SQL string instead of being parameterized. **Configuring the mode:** ```bash # Production: fail on missing compiler plugin java -Dstorm.validation.interpolation_mode=fail -jar myapp.jar # Disable the check entirely java -Dstorm.validation.interpolation_mode=none -jar myapp.jar ``` See [Configuration](configuration.md#interpolation-safety) for details and recommended production settings. 
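The failure mode this check guards against can be demonstrated without Storm at all. The sketch below is plain JDK code with hypothetical helper names: concatenation folds a hostile value into the SQL text itself, while the fragment/value split that the template engine relies on keeps the SQL shape fixed and carries the value as a bind parameter.

```java
import java.util.List;

// Illustrative contrast between concatenation and fragment/value separation.
// Hypothetical helpers for demonstration; not a Storm API.
public class InterpolationDemo {

    // Unsafe: the value is merged into the SQL text, so a crafted input
    // can change the meaning of the statement.
    public static String concatenated(String name) {
        return "SELECT * FROM user WHERE name = '" + name + "'";
    }

    // Safe shape: the SQL fragments stay fixed; the value travels
    // separately and is bound as a parameter at execution time.
    public record ParsedQuery(String sql, List<Object> params) {}

    public static ParsedQuery parameterized(String name) {
        return new ParsedQuery("SELECT * FROM user WHERE name = ?", List.of(name));
    }
}
```

With a hostile input such as `' OR '1'='1`, the concatenated variant produces a statement whose WHERE clause always matches, while the parameterized variant leaves the SQL shape untouched.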
### Template Functions Inside a template lambda, the `TemplateContext` receiver provides several functions for controlling how expressions are interpreted. With the compiler plugin, these functions are passed through `t()` automatically: ```kotlin // Type reference (expands to column list in SELECT, table with joins in FROM) orm.query { "SELECT ${User::class} FROM ${User::class}" } // Metamodel column reference (resolves to column name with alias) orm.query { "SELECT ${User::class} FROM ${User::class} WHERE ${User_.email} = $email" } // Explicit column reference orm.query { "SELECT ${User::class} FROM ${User::class} ORDER BY ${column(User_.email)}" } // Table reference without auto-join orm.query { "FROM ${from(User::class, autoJoin = false)} JOIN ${table(City::class)} ON ..." } // Raw SQL (use with caution, bypasses parameterization) orm.query { "SELECT ${User::class} FROM ${User::class} WHERE ${unsafe("name = 'Alice'")}" } ``` ### Fallback: Manual t() Wrapping If the compiler plugin is not available, you can wrap interpolations in `t()` manually. The compiler plugin detects existing `t()` and `interpolate()` calls and leaves them unchanged, so mixing both styles in the same project is safe: ```kotlin orm.query { "SELECT ${t(User::class)} FROM ${t(User::class)} WHERE id = ${t(id)}" } ``` When using `t()` manually, the interpolation safety check is automatically suppressed because Storm detects the explicit calls. If you use pure literal templates without any interpolations, you can disable the check with the JVM system property: ```bash -Dstorm.validation.interpolation_mode=none ``` --- ## Java ### How It Works Java's String Templates (preview feature since Java 21) provide a `StringTemplate` processor mechanism. Storm's `RAW` processor receives the template fragments and values directly from the language runtime, giving Storm the same structural separation as the Kotlin approach. 
```java orm.query(RAW.""" SELECT \{User.class} FROM \{User.class} WHERE \{User_.email} = \{email}""") ``` The `\{expression}` syntax is Java's string template interpolation. The `RAW` processor passes fragments and values to Storm's template engine without any string concatenation. ### Status Java String Templates are a **preview feature** that is still evolving in the JDK. Storm is a forward-looking framework, and String Templates are the best way to write SQL in Java that is both readable and injection-safe by design. Rather than wait for the feature to stabilize, Storm ships with String Template support today. The Java API is production-ready from a quality perspective, but its API surface will adapt as String Templates move toward a stable release. Only `storm-java21` depends on this preview feature. The core framework and the Kotlin API are unaffected. ### Setup Enable preview features in your Java compiler configuration: [Maven]
```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <compilerArgs>
            <arg>--enable-preview</arg>
        </compilerArgs>
    </configuration>
</plugin>
```
[Gradle (Kotlin DSL)] ```kotlin tasks.withType<JavaCompile> { options.compilerArgs.add("--enable-preview") } ``` ### Template Elements Java uses the same template elements as Kotlin, but without the `t()` wrapper (the `RAW` processor handles the separation directly): ```java // Type reference orm.query(RAW."SELECT \{User.class} FROM \{User.class}") // Metamodel column reference orm.query(RAW."SELECT \{User.class} FROM \{User.class} WHERE \{User_.email} = \{email}") // Explicit column and table references orm.query(RAW."FROM \{from(User.class, false)} JOIN \{table(City.class)} ON ...") // Raw SQL orm.query(RAW."SELECT \{User.class} FROM \{User.class} WHERE \{unsafe("name = 'Alice'")}") ``` --- ## Comparison Both approaches achieve the same goal: structurally safe SQL templates with compile-time separation of fragments and values. The difference is in how they get there.
| Aspect | Kotlin (Compiler Plugin) | Java (String Templates) | |--------|--------------------------|-------------------------| | **Interpolation** | `${expression}` (auto-wrapped by plugin) | `\{expression}` (processed by `RAW`) | | **Plugin/flag required** | Storm compiler plugin | `--enable-preview` | | **Multiline** | Triple-quoted strings (`"""..."""`) | Text blocks (`"""..."""`) | | **Template functions** | `column()`, `table()`, `from()`, `unsafe()` | Same functions available | | **Explicit wrapping** | `t()` available but optional with plugin | Not needed (`RAW` handles it) | Both languages support all Storm template features: type expansion, metamodel column references, auto-join generation, subqueries, and deliberate raw SQL via `unsafe()`. ======================================== ## Source: hydration.md ======================================== # Hydration Hydration is the process of transforming flat database rows into structured Kotlin data classes and Java records. These types are ideal for result mapping because they have a **canonical constructor** with a deterministic parameter order. This order matches the declaration order of the record components, providing a predictable and stable mapping target. Combined with their immutability, records eliminate the need for reflection-based field injection or setter calls during hydration. Storm leverages this by mapping SELECT columns directly to constructor parameters by position.
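The canonical-constructor guarantee can be observed with plain JDK reflection. The standalone sketch below (illustrative names, not Storm code) shows that a record reports its components in declaration order, which is exactly what makes positional hydration into the canonical constructor well-defined:

```java
import java.lang.reflect.RecordComponent;
import java.util.Arrays;

// Demonstrates the deterministic component order of records and a
// positional "hydration" of a flat row through the canonical constructor.
// Illustrative only; not Storm's mapping code.
public class CanonicalOrder {

    public record User(int id, String email, String name) {}

    // getRecordComponents() returns components in declaration order,
    // matching the canonical constructor's parameter order.
    public static String[] componentNames() {
        return Arrays.stream(User.class.getRecordComponents())
                .map(RecordComponent::getName)
                .toArray(String[]::new);
    }

    // Positional mapping: column i of the row feeds constructor parameter i.
    public static User hydrate(Object[] row) {
        return new User((Integer) row[0], (String) row[1], (String) row[2]);
    }
}
```

Because the order is fixed by the language, no column-name lookup is needed at hydration time.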
Several optimizations ensure high performance and low memory usage: - **Positional mapping**: No runtime reflection on column names - **Compiled mapping plans**: Plans are computed once per type and reused - **Early cache lookup**: Entities are looked up by primary key before construction, skipping redundant object creation - **Query-level interning**: Duplicate entities within a result set share the same instance - **Memory-safe streaming**: Supports efficient iteration over large result sets Storm natively supports a wide range of field types beyond basic JDBC types: - **Primitives and wrappers**: `boolean`, `byte`, `short`, `int`, `long`, `float`, `double` - **Common types**: `String`, `BigDecimal`, `byte[]`, enums - **Legacy date/time**: `java.util.Date`, `Calendar`, `Timestamp`, `java.sql.Date`, `Time` - **java.time**: `LocalDate`, `LocalTime`, `LocalDateTime`, `Instant`, `OffsetDateTime`, `ZonedDateTime` **Timezone handling**: Storm uses UTC for reading and writing timestamp values. For types not in this list, use a [custom converter](#custom-type-converters). --- ## How Column Mapping Works Storm maps columns to record fields **by position**, matching the order of columns in the result set to the order of constructor parameters in your record. This positional mapping is fast and predictable, with no runtime reflection on column names. 
### Basic Example Given a query that returns three columns: ```sql SELECT id, email, name FROM user ``` You can map the results to a plain data class: ```kotlin data class User( val id: Int, val email: String, val name: String ) ``` Storm maps columns to constructor parameters in order: ``` ┌───────────────────────────────────────────────────────────────────────┐ │ Result Set Row │ │ ┌──────────┬────────────────────┬─────────────┐ │ │ │ col 1 │ col 2 │ col 3 │ │ │ │ 42 │ "alice@test.com" │ "Alice" │ │ │ └──────────┴────────────────────┴─────────────┘ │ └───────────────────────────────────────────────────────────────────────┘ │ ▼ ┌───────────────────────────────────────────────────────────────────────┐ │ Record Constructor │ │ User(id = 42, email = "alice@test.com", name = "Alice") │ └───────────────────────────────────────────────────────────────────────┘ ``` The three columns from the result set are passed directly to the `User` constructor in order. Column 1 becomes `id`, column 2 becomes `email`, and column 3 becomes `name`. --- ## Plain Records Not every query result maps to a full entity. Aggregate queries, reports, and ad-hoc projections return custom column sets that do not correspond to any database table. Storm handles these cases without requiring special interfaces or annotations. You can define a plain Kotlin data class or Java record whose constructor parameters match the query's columns by position and type, and Storm will hydrate it directly. 
[Kotlin] ```kotlin data class MonthlySales( val month: YearMonth, val orderCount: Long, val revenue: BigDecimal ) val sales = orm.query(""" SELECT DATE_TRUNC('month', order_date), COUNT(*), SUM(amount) FROM orders GROUP BY DATE_TRUNC('month', order_date) """).getResultList(MonthlySales::class) ``` [Java] ```java record MonthlySales( YearMonth month, long orderCount, BigDecimal revenue ) {} List<MonthlySales> sales = orm.query(RAW.""" SELECT DATE_TRUNC('month', order_date), COUNT(*), SUM(amount) FROM orders GROUP BY DATE_TRUNC('month', order_date)""") .getResultList(MonthlySales.class); ``` This works for any query. The only requirement is that the number and order of columns matches the constructor parameters. For SQL generation features (template expressions, automatic joins via `@FK`), implement `Data`, `Entity`, or `Projection`. See [SQL Templates](sql-templates.md) for details.
### Column Flattening ```kotlin data class Address( val street: String, val postalCode: String ) data class User( val id: Int, val name: String, val address: Address, // Embedded record val active: Boolean ) ``` Storm flattens nested records into consecutive columns: ``` Record Structure Flattened Columns ──────────────── ───────────────── ┌─────────────────────┐ ┌───────┬─────────┬─────────────┬─────────────┬────────┐ │ User │ │ col 1 │ col 2 │ col 3 │ col 4 │ col 5 │ │ ├─ id: Int │ ──────────────────▶ │ id │ name │ street │ postalCode │ active │ │ ├─ name: String │ ├───────┼─────────┼─────────────┼─────────────┼────────┤ │ ├─ address ────────┼──┐ │ 42 │ "Alice" │ "Main St 1" │ "94086" │ true │ │ │ ┌───────────────┼──┘ └───────┴─────────┴─────────────┴─────────────┴────────┘ │ │ │ Address │ │ │ │ │ │ │ │ │ ├─ street │ │ │ └──────┬──────┘ │ │ │ │ └─ postalCode│ │ │ │ │ │ │ └───────────────┘ └────┬────┘ │ │ │ └─ active: Boolean │ │ │ │ └─────────────────────┘ │ │ │ ▼ ▼ ▼ User fields Address fields User fields [1..2] [3..4] [5] ``` The nested `Address` record is expanded inline between `User` fields. Columns 1-2 map to `User.id` and `User.name`, columns 3-4 map to the nested `Address`, and column 5 maps to `User.active`. ### Hydration: Reconstructing Nested Records During hydration, Storm reconstructs the nested hierarchy from the flat columns. 
It processes nested records first, then returns to the parent level: ``` Step 1: Build Address Step 2: Build User ────────────────────── ────────────────── cols [3..4] cols [1..2] + Address + col [5] │ │ ▼ ▼ ┌───────────────────────┐ ┌─────────────────────────────────────────────────────┐ │ Address( │ │ User( │ │ street = "Main St 1"│ ───────▶ │ id = 42, │ │ postalCode = "94086"│ │ name = "Alice", │ │ ) │ │ address = Address("Main St 1", "94086"), │ └───────────────────────┘ │ active = true │ │ ) │ └─────────────────────────────────────────────────────┘ ``` Storm first constructs the nested `Address` from columns 3-4, then constructs `User` using columns 1-2, the `Address` instance, and column 5. ### Deep Nesting Nesting works recursively to any depth: ```kotlin data class Country( val name: String, val code: String ) data class City( val name: String, @FK val country: Country ) data class User( val id: Int, @FK val city: City ) ``` The nested structure flattens to 4 columns, with innermost records at the end: ``` Record Structure Flattened Columns ──────────────── ───────────────── ┌────────────────────────┐ ┌──────┬───────────┬───────────────┬──────┐ │ User │ │col 1 │ col 2 │ col 3 │col 4 │ │ ├─ id: Int │────────────▶│ id │ city.name │ country.name │ code │ │ └─ city ──────────────┼──┐ ├──────┼───────────┼───────────────┼──────┤ │ ┌──────────────────┼──┘ │ 42 │"Sunnyvale"│"United States"│ "US" │ │ │ City │ └──────┴───────────┴───────────────┴──────┘ │ │ ├─ name: String │ │ │ │ │ │ │ └─ country ─────┼──┐ │ │ └────┬─────┘ │ │ ┌────────────┼──┘ │ │ │ │ │ │ Country │ │ └───────┬────────┘ │ │ │ ├─ name │ │ │ │ │ │ └─ code │ │ │ │ │ └────────────┘ ▼ ▼ │ └──────────────────┘ User [1] City [2..4] └────────────────────────┘ Country [3..4] ``` With deeply nested records, the innermost record (`Country`) appears last in the column sequence. Column ranges overlap: `City` spans columns 2-4 because it includes `Country`. 
Hydration reconstructs from the **innermost** level outward: ``` Step 1: Build Country Step 2: Build City Step 3: Build User ───────────────────── ────────────────── ────────────────── cols [3..4] col [2] + Country col [1] + City │ │ │ ▼ ▼ ▼ ┌──────────────────┐ ┌────────────────┐ ┌──────────────────┐ │ Country( │ │ City( │ │ User( │ │ "United States",│ ───────▶ │ "Sunnyvale", │ ────────▶ │ id = 42, │ │ "US" │ │ country ─────┼───┐ │ city ──────────┼─┐ │ ) │ │ ) │ │ │ ) │ │ └──────────────────┘ └────────────────┘ │ └──────────────────┘ │ ▲ │ ▲ │ └─────────────┘ └─────────────┘ ``` `Country` is constructed first from columns 3-4. Then `City` is constructed using column 2 plus the `Country` instance. Finally, `User` is constructed using column 1 plus the `City` instance. --- ## Foreign Keys (@FK) The `@FK` annotation marks a field as a foreign key relationship. When the result set includes a joined table, Storm hydrates all its columns into the nested record. See [SQL Templates](sql-templates.md) for how `@FK` affects query generation. ### FK Column Layout ```kotlin data class City( @PK val id: Int, val name: String, val population: Long ) : Entity data class User( @PK val id: Int, val email: String, @FK val city: City // Foreign key relationship ) : Entity ``` When the result set includes both `User` and `City` columns, the layout is: ``` ┌───────────────────────────────────────────────────────────────────────┐ │ Column: 1 2 3 4 5 │ │ ┌────┬──────────┬─────────┬───────────┬─────────────┐ │ │ │ id │ email │ city.id │ city.name │ city.popul. │ │ │ └────┴──────────┴─────────┴───────────┴─────────────┘ │ │ │ │ User fields: [1..2] │ │ City fields: [3..5] │ └───────────────────────────────────────────────────────────────────────┘ ``` Columns 1-2 contain `User` fields, while columns 3-5 contain all fields from the joined `City` entity. The foreign key column (`city_id`) is **not** included in the result. 
Storm reconstructs the relationship from the joined entity's primary key. ### Nullable FK ```kotlin data class User( @PK val id: Int, val email: String, @FK val city: City? // Nullable FK ) : Entity ``` When `city` is nullable and all city columns are NULL in a row, the hydrated `city` field is `null`. --- ## Refs (Lazy References) Eagerly loading every related entity is not always desirable. When a `User` references a `City`, which references a `Country`, a simple user query can cascade into loading the entire object graph. In many cases, the calling code only needs the foreign key value, not the full related entity. A `Ref` is a lightweight reference that stores only the foreign key value, not the full record. This gives you control over how much data is loaded during hydration. Use `Ref` when: - You need to break circular dependencies (self-referential entities like a tree structure) - You want to defer entity loading until the related data is actually needed - You are processing large result sets and want to minimize memory consumption ### Ref Column Layout ```kotlin data class User( @PK val id: Int, val email: String, @FK val city: Ref<City> // Only stores city_id, not full City ) : Entity ``` With `Ref`, Storm hydrates only the foreign key value (not the full entity): Column layout: ``` ┌───────────────────────────────────────────────────────────────────────┐ │ Column: 1 2 3 │ │ ┌────┬──────────┬─────────┐ │ │ │ id │ email │ city_id │ │ │ └────┴──────────┴─────────┘ │ │ │ │ User fields: [1..2] │ │ Ref: [3] (PK only) │ └───────────────────────────────────────────────────────────────────────┘ ``` Only three columns are hydrated. Column 3 contains just the foreign key value, which is wrapped in a `Ref`.
Call `fetch()` later to load the full entity: ```kotlin val user = userRepository.findById(42) val city: City = user.city.fetch() // Loads City from database ``` ### FK vs Ref Comparison | Aspect | `@FK val city: City` | `@FK val city: Ref<City>` | |------------------------|---------------------------|-------------------------------| | Columns hydrated | All City columns | Only FK column (city_id) | | Memory usage | Higher (full entity) | Lower (just PK) | | Access pattern | Immediate | Deferred (call `fetch()`) | | Circular dependencies | Not allowed | Allowed | --- ## Query-Level Identity (Interning) When the same entity appears multiple times in a query result (e.g., through joins), Storm ensures all occurrences share the same object instance within that query. This is called **interning**. ``` ┌─────────────────────────────────────────────────────────────────────────┐ │ Query Result Set │ │ │ │ SELECT u.*, c.* FROM user u JOIN city c ON u.city_id = c.id │ │ │ │ ┌───────────────────────────────────────────────────────────────────┐ │ │ │ Row 1: User(id=1, city_id=42) │ City(id=42, name="Sunnyvale") │ │ │ │ Row 2: User(id=2, city_id=42) │ City(id=42, name="Sunnyvale") │ │ │ │ Row 3: User(id=3, city_id=99) │ City(id=99, name="Austin") │ │ │ └───────────────────────────────────────────────────────────────────┘ │ │ │ │ │ ▼ │ │ ┌───────────────────────┐ │ │ │ Interner │ │ │ │ ┌────────┬────────┐ │ │ │ │ │ PK │ Entity │ │ │ │ │ ├────────┼────────┤ │ │ │ │ │ 42 │ ──────────▶ City(42) │ │ │ │ 99 │ ──────────▶ City(99) │ │ │ └────────┴────────┘ │ │ │ └───────────────────────┘ │ │ │ │ │ ▼ │ │ ┌───────────────────────────────────────────────────────────────────┐ │ │ │ Result: │ │ │ │ User(1) ──▶ City(42) ◀── same instance │ │ │ │ User(2) ──▶ City(42) ◀──┘ │ │ │ │ User(3) ──▶ City(99) │ │ │ └───────────────────────────────────────────────────────────────────┘ │ └─────────────────────────────────────────────────────────────────────────┘ ``` Two of the three rows carry the same `City(42)` data, but the
interner ensures only one instance is created. Both `User(1)` and `User(2)` reference the same `City` object in memory. ### How Interning Works As Storm processes each row: 1. **Extract primary key**: Before constructing an entity, Storm extracts its PK from the flat column array 2. **Check interner**: If an entity with that PK was already constructed in this query, return the existing instance 3. **Construct and store**: Otherwise, construct the entity and store it in the interner This happens automatically during hydration. The interner is scoped to a single query execution. Once you're done iterating results, the interner is discarded. ### Early Cache Lookup: Skipping Construction A key optimization in Storm's hydration is **early primary key extraction**. Before constructing any nested objects, Storm extracts the primary key directly from the flat column array and checks if that entity already exists in the cache or interner. When a cache hit occurs, Storm skips the entire construction process for that entity, including all its nested records. This is particularly valuable for queries with joins where the same entity appears in multiple rows. **How it works:** 1. Storm knows the PK column offset for each entity in the flattened structure 2. Before recursing into nested construction, it reads the PK value at that offset 3. It checks the entity cache first (if applicable), then falls back to the interner 4. On cache hit: skip construction entirely, advance the column cursor, use cached instance 5. On cache miss: proceed with normal construction, then store for later lookup **Example with joins:** ```kotlin // Query returns 1000 users, but only 50 unique cities val users = userRepository.findAll() ``` Without early lookup, Storm would construct 1000 `City` objects and then deduplicate.
With early lookup: - Row 1: City PK=42 not in cache -> construct City, store in interner - Row 2: City PK=42 found in interner -> skip construction, reuse instance - Row 3: City PK=42 found in interner -> skip construction, reuse instance - ... Result: Only 50 City objects are ever constructed, not 1000. **This optimization applies to:** - Top-level entities (checked against entity cache first, then interner) - Nested entities via `@FK` (checked at each nesting level) - Both simple and composite primary keys The benefit compounds with deep nesting. If a parent entity is cached, none of its nested children need to be constructed either. ### Memory Safety The interner only retains entities while your code uses them. Once released, they are cleaned up and don't accumulate in memory. This makes streaming and flow-based processing safe: ```kotlin // Safe for large result sets - processed entities don't accumulate orderRepository.selectAll().collect { order -> process(order) // order can be cleaned up after this iteration } ``` ### Relationship with Entity Cache Query-level interning and the [entity cache](entity-cache.md) serve different purposes: | Aspect | Query Interner | Entity Cache | |--------------------|----------------------------------|---------------------------------------| | Scope | Single query | Transaction | | Purpose | Deduplicate within result set | Identity + dirty checking | | Isolation level | Any | `REPEATABLE_READ`+ or read-only | | Memory management | Cleaned up when no longer used | Configurable retention | At `REPEATABLE_READ` and above, the entity cache extends query-level identity to the full transaction. The interner ensures correctness within each query regardless of cache settings. --- ## Composite Primary Keys Some tables use multiple columns as their primary key rather than a single auto-incremented ID. Junction tables (many-to-many relationships) are a common example: the combination of two foreign keys forms the primary key. 
Storm supports composite primary keys by modeling the key as a separate record type that contains each key column. ```kotlin data class UserRolePk( val userId: Int, // PK column 1 val role: String // PK column 2 ) data class UserRole( @PK val pk: UserRolePk, val grantedAt: Instant, @FK val grantedBy: Ref<User> ) : Entity ``` This maps to a `user_role` table where `user_id` and `role` together form the primary key: ``` ┌────────────────────────────────────────────────────────────────────────┐ │ Column: 1 2 3 4 │ │ ┌───────────┬────────────┬──────────┬───────────┐ │ │ │ user_id │ role │granted_at│granted_by │ │ │ └───────────┴────────────┴──────────┴───────────┘ │ │ \___________ __________/ │ │ v │ │ composite primary key │ │ │ │ UserRolePk: [1..2] │ │ UserRole: [1..4] (includes nested PK) │ └────────────────────────────────────────────────────────────────────────┘ ``` Storm first constructs `UserRolePk` from the primary key columns (1-2), then uses it along with columns 3-4 to construct the full `UserRole` entity. --- ## Custom Type Converters Storm's built-in type support covers standard JDBC types, but applications often use domain-specific value types that do not map directly to any JDBC type. Examples include durations stored as seconds, monetary amounts stored as cents, or encoded identifiers stored as strings. Custom type converters bridge this gap by defining a bidirectional mapping between a database column type and your domain type. For types not natively supported by Storm, use `@Convert` to specify a custom converter: ```kotlin // Value object for type-safe duration handling data class DurationSeconds(val value: Duration) // Converter transforms between database Long and DurationSeconds class DurationConverter : Converter<DurationSeconds, Long> { override fun toDatabase(value: DurationSeconds?): Long? = value?.value?.toSeconds() override fun fromDatabase(dbValue: Long?): DurationSeconds?
= dbValue?.let { DurationSeconds(Duration.ofSeconds(it)) } } data class Task( @PK val id: Int, @Convert(DurationConverter::class) val timeout: DurationSeconds ) : Entity ``` Converters map a single column to a custom type. For composite types spanning multiple columns, use nested records instead (see [Nested Records](#nested-records)). --- ## Nullability Handling Database columns can contain NULL values, but not every field in your data model should accept null. Storm enforces nullability constraints during hydration, catching data integrity issues at the application boundary rather than letting null values propagate silently through your code. [Kotlin] Kotlin's type system indicates nullability: ```kotlin data class User( val id: Int, // Non-nullable val email: String, // Non-nullable val nickname: String? // Nullable ) ``` If a non-nullable field receives NULL from the database, Storm throws an exception. [Java] Use `@Nonnull` and `@Nullable` annotations: ```java record User( int id, // Primitive = non-nullable @Nonnull String email, // Non-nullable @Nullable String nickname // Nullable ) {} ``` ### Nullable Nested Records When a nested record field is nullable, Storm checks if **all** its columns are NULL: ```kotlin data class User( val id: Int, val address: Address? // Nullable nested record ) ``` If all columns for `address` are NULL, the field is set to `null`. If some columns are NULL but others aren't, Storm validates each field individually and may throw if non-nullable fields are NULL. 
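The all-or-nothing NULL rule can be sketched in a few lines. This is a conceptual model with hypothetical names, not Storm's actual hydration code: if every column in the nested record's range is NULL, the field hydrates to `null`; otherwise the record is constructed and each non-nullable field is validated individually.

```java
// Conceptual sketch of nullable nested-record hydration (not Storm internals).
public class NullableNested {

    public record Address(String street, String postalCode) {}

    // Hydrates the Address spanning row[from..to].
    // All columns NULL -> null; otherwise construct from the column range.
    public static Address hydrateAddress(Object[] row, int from, int to) {
        boolean allNull = true;
        for (int i = from; i <= to; i++) {
            if (row[i] != null) {
                allNull = false;
                break;
            }
        }
        if (allNull) {
            return null;
        }
        return new Address((String) row[from], (String) row[from + 1]);
    }
}
```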
--- ## Summary | Concept | Column Behavior | |---------|-----------------| | **Simple field** | 1 column per field | | **Nested record** | Flattened: all nested fields become consecutive columns | | **`@FK` record** | All record columns hydrated | | **`@FK Ref`** | Only FK column hydrated (record PK) | | **Composite PK** | Multiple columns for PK fields | | **Converter** | 1 column mapped to custom type | **Key principles:** - Columns map by **position**, not name - Nested records are **flattened** into consecutive columns - `@FK` hydrates all columns from the related record - `Ref` hydrates only the foreign key value - The interner ensures identity within a query result --- ## See Also - [Entity Cache](entity-cache.md) - identity interning across a transaction - [Refs](refs.md) - Ref column layout and lazy loading - [Projections](projections.md) - projection mapping for partial entity views - [Entities](entities.md) - entity definitions and annotations ======================================== ## Source: dirty-checking.md ======================================== # Dirty Checking ## What Is Dirty Checking? Dirty checking is the process of determining which fields of an entity have changed since it was loaded from the database. When you update an entity, the ORM needs to decide: 1. **Whether** to execute an UPDATE statement at all 2. **Which columns** to include in the UPDATE statement Storm's entities are **stateless** and **immutable** by design: plain Kotlin data classes or Java records with no proxies, no bytecode manipulation, and no hidden state. This design simplifies the dirty checking logic and allows for high performance. Instead of tracking changes implicitly, Storm: 1. **Observes** entity state when you read from the database 2. **Compares** entity state when you call `update()` within the same transaction 3. 
**Generates** the appropriate UPDATE statement based on the configured mode Observed state is stored in the transaction context, not on the entity itself. This keeps entities simple and predictable while still providing intelligent update behavior. ``` ┌─────────────────────────────────────────────────────────────────┐ │ Transaction Scope │ │ │ │ ┌─────────┐ ┌──────────────┐ ┌─────────┐ │ │ │ READ │────────▶│ Observed │────────▶│ UPDATE │ │ │ │ Entity │ │ State │ │ Called │ │ │ └─────────┘ │ (cached) │ └────┬────┘ │ │ └──────────────┘ │ │ │ │ │ │ │ ▼ ▼ │ │ ┌──────────────────────────────┐ │ │ │ Compare current entity │ │ │ │ with observed state │ │ │ └──────────────┬───────────────┘ │ │ │ │ │ ┌─────────────────┼─────────────────┐ │ │ ▼ ▼ ▼ │ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ │ │ │ No change│ │ Some │ │ Some │ │ │ │ detected │ │ changed │ │ changed │ │ │ └────┬─────┘ │ (ENTITY) │ │ (FIELD) │ │ │ │ └────┬─────┘ └────┬─────┘ │ │ ▼ ▼ ▼ │ │ Skip UPDATE Full-row UPDATE Partial UPDATE │ │ │ └─────────────────────────────────────────────────────────────────┘ ``` **Key insight:** Dirty checking in Storm is scoped to a single transaction. Once the transaction commits, all observed state is discarded. This keeps memory usage predictable and avoids the complexity of managing detached entities. Entity cache misses can affect dirty checking behavior. When an entity is not found in the cache, Storm falls back to a full-row update. See [Entity Cache](entity-cache.md) for cache retention configuration. 
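The observe/compare cycle above can be sketched in plain Kotlin. This is a conceptual model only, not Storm's internals; `TransactionContext`, `onRead`, and `dirtyFields` are hypothetical names:

```kotlin
data class User(val id: Int, val email: String, val name: String)

// Conceptual sketch: observed state lives in the transaction, not on the entity
class TransactionContext {
    private val observed = mutableMapOf<Int, User>()

    // Called when an entity is read from the database
    fun onRead(user: User) { observed[user.id] = user }

    // Called on update(): compare against observed state.
    // Returns null on a cache miss (which falls back to a full-row update).
    fun dirtyFields(user: User): Set<String>? {
        val before = observed[user.id] ?: return null
        val dirty = mutableSetOf<String>()
        // Reference comparison: copy() reuses references for unchanged fields
        if (before.email !== user.email) dirty += "email"
        if (before.name !== user.name) dirty += "name"
        return dirty
    }
}
```

An empty set means the UPDATE is skipped entirely; a non-empty set drives either a full-row UPDATE (`ENTITY` mode) or a partial one (`FIELD` mode).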
--- ## Update Modes Storm supports three update modes, each representing a different trade-off between SQL efficiency, batching potential, and write amplification: | Mode | Dirty Check | UPDATE Behavior | SQL Variability | |------|-------------|-----------------|-----------------| | `OFF` | None | Always update all columns | Single shape | | `ENTITY` | Entity-level | Skip if unchanged; full row if any changed | Single shape | | `FIELD` | Field-level | Update only changed columns | Multiple shapes | The selected update mode controls: - **Whether** an UPDATE is executed (can be skipped if nothing changed) - **What** gets updated (all columns vs. only changed columns) - **How predictable** the generated SQL is (affects batching and caching) ### Choosing the Right Mode ``` ┌─────────────────────────────────────┐ │ What kind of workload do you have? │ └─────────────────┬───────────────────┘ │ ┌───────────────────────┼───────────────────────┐ ▼ ▼ ▼ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │ Batch/ETL │ │ Typical CRUD │ │ Wide tables │ │ processing │ │ application │ │ or hot rows │ └────────┬────────┘ └────────┬────────┘ └────────┬────────┘ │ │ │ ▼ ▼ ▼ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │ Use: OFF │ │ Use: ENTITY │ │ Use: FIELD │ │ │ │ (default) │ │ │ │ Maximum batch │ │ Good balance of │ │ Minimal write │ │ efficiency │ │ efficiency and │ │ amplification │ │ │ │ simplicity │ │ │ └─────────────────┘ └─────────────────┘ └─────────────────┘ ``` --- ## UpdateMode.OFF In `OFF` mode, Storm bypasses dirty checking entirely. Every call to `update()` generates a full UPDATE statement that writes all columns, regardless of whether values have actually changed. ```kotlin val user = orm.find(User_.id eq 1) val updatedUser = user.copy(name = "New Name") // Always generates: UPDATE user SET email=?, name=?, city_id=? WHERE id=? 
// All columns are included, even though only 'name' changed orm update updatedUser ``` No comparison is performed; every update is unconditional. ### When to Use OFF Mode **Batch processing and ETL:** When importing or transforming large datasets, you often want predictable, unconditional writes. OFF mode gives you maximum batching efficiency because every UPDATE has the same shape. ```kotlin // Processing 10,000 records - all UPDATEs have identical structure // JDBC can batch them efficiently userRepository.update(users.map { processUser(it) }) ``` **Simple applications:** If your entities are small and updates are infrequent, the overhead of dirty checking may not be worth the complexity. OFF mode keeps things straightforward. **Characteristics** - Single, stable SQL shape (enables efficient JDBC batching) - Zero CPU overhead (no comparisons to perform) - Maximum predictability **Trade-offs** - Updates may write unchanged values to the database - Cannot skip unnecessary UPDATEs - May cause more database trigger activity if triggers fire on any UPDATE --- ## UpdateMode.ENTITY (Default) `ENTITY` mode is Storm's default and provides a balanced approach. Storm checks entities against the observed state from when the entity was read. Based on this comparison: - If **same instance**: No UPDATE is executed - If **no field changed**: No UPDATE is executed (individual fields are checked when needed) - If **any field changed**: A full-row UPDATE is executed (all columns are written) ```kotlin val user = orm.get(User_.id eq 1) // Storm observes: {id=1, email="a@b.com", name="Alice"} // Scenario 1: No changes orm update user // No SQL executed - entity unchanged // Scenario 2: Any field changed val updated = user.copy(name = "Bob") orm update updated // UPDATE user SET email=?, name=?, city_id=? WHERE id=? // Full row update, even though only 'name' changed ``` ### Why Full-Row Updates? You might wonder: if Storm knows only `name` changed, why update all columns? 
The answer is **batching efficiency**. When multiple entities of the same type are updated in a transaction, JDBC can batch them together only if they have the same SQL shape. With ENTITY mode, all UPDATEs for a given entity type look identical, enabling efficient batching: ```kotlin // All updates have identical SQL shape - JDBC batches them val users = userRepository.findAll(User_.city eq city) userRepository.update(users.map { it.copy(lastLogin = now()) }) ``` ### When to Use ENTITY Mode **Most CRUD applications:** ENTITY mode provides the right balance for typical web applications. It avoids unnecessary database round-trips when nothing changed, while maintaining predictable SQL patterns. **Read-modify-write patterns:** When you load an entity and pass it back to update without modifications, ENTITY mode skips the UPDATE entirely. ```kotlin val user = orm.get(User_.id eq userId) // No changes made - UPDATE is skipped orm update user // Conditional modification - UPDATE only if actually changed val updated = if (shouldUpdate) user.copy(name = "New Name") else user orm update updated ``` **Characteristics** - UPDATE suppression when nothing changed - Stable SQL shape per entity (enables batching) - Low memory overhead (stores one copy of observed state per entity) - Minimal CPU overhead (single comparison per update) **Trade-offs** - Writes unchanged columns when any field is dirty - Requires storing observed state in memory during transaction --- ## UpdateMode.FIELD `FIELD` mode provides the most granular control. Storm compares each field individually and generates UPDATE statements that include only the columns that actually changed. Like ENTITY mode, if no fields changed, Storm skips the UPDATE entirely. ```kotlin val user = orm.get(User_.id eq 1) // {id=1, email="a@b.com", name="Alice", bio="...", settings="..."} // Only name changed val updated = user.copy(name = "Bob") orm update updated // UPDATE user SET name=? WHERE id=? 
// Multiple fields changed val updated2 = user.copy(name = "Bob", email = "bob@example.com") orm update updated2 // UPDATE user SET name=?, email=? WHERE id=? ``` ### Why Use Field-Level Updates? **Reduced write amplification:** When you have wide tables (many columns) but typically only change a few fields, FIELD mode avoids writing unchanged data. This can significantly reduce I/O, especially for tables with large TEXT or BLOB columns. ```kotlin // Article has 20 columns including large 'content' field // But we're only updating the view count val article = orm.find(Article_.id eq articleId) orm update article.copy(viewCount = article.viewCount + 1) // UPDATE article SET view_count=? WHERE id=? // The large 'content' column is NOT written ``` **Reduced database overhead:** Updating fewer columns reduces redo/undo log volume, replication payload size, and avoids rewriting large column values unnecessarily. **Reduced trigger activity:** If your database has column-specific triggers, FIELD mode ensures they only fire when their columns actually change. ### Understanding SQL Shape Variability The trade-off with FIELD mode is that it generates different SQL statements depending on which fields changed: ``` ┌──────────────────────────────────────────────────────────────────┐ │ FIELD Mode SQL Shapes │ ├──────────────────────────────────────────────────────────────────┤ │ │ │ Change: name only │ │ SQL: UPDATE user SET name=? WHERE id=? │ │ │ │ Change: email only │ │ SQL: UPDATE user SET email=? WHERE id=? │ │ │ │ Change: name + email │ │ SQL: UPDATE user SET name=?, email=? WHERE id=? │ │ │ │ Change: name + email + city_id │ │ SQL: UPDATE user SET name=?, email=?, city_id=? WHERE id=? │ │ │ │ ... potentially many more combinations ... │ │ │ └──────────────────────────────────────────────────────────────────┘ ``` This variability has two consequences: 1. **Reduced batching:** JDBC can only batch statements with identical SQL. 
Different update patterns cannot be batched together. 2. **Statement cache pressure:** Databases cache prepared statements for reuse. Many distinct SQL shapes consume more cache memory and reduce cache hit rates. Storm mitigates this with a [max shapes limit](#max-shapes-limit) that automatically falls back to full-row updates when too many shapes are generated. ### When to Use FIELD Mode **Wide tables:** Tables with many columns where updates typically touch only a few fields. **High-contention rows:** When multiple transactions frequently update the same rows (e.g., counters, status fields), updating fewer columns reduces conflict potential. **Large column values:** Tables with TEXT, BLOB, or JSON columns where rewriting unchanged large values is wasteful. **Characteristics** - Skips UPDATE entirely if nothing changed - Minimal write amplification - Reduced redo/undo and replication overhead - Efficient for wide tables with sparse updates **Trade-offs** - Multiple SQL shapes reduce batching efficiency - Higher statement cache usage - More CPU overhead for field-level comparison --- ## Configuring Update Mode Per Entity Use the `@DynamicUpdate` annotation to specify the update mode for individual entity classes. This allows you to use different strategies for different entities based on their characteristics. [Kotlin] ```kotlin @DynamicUpdate(FIELD) data class User( @PK val id: Int = 0, val email: String, val name: String, @FK val city: City ) : Entity ``` [Java] ```java @DynamicUpdate(FIELD) record User(@PK Integer id, @Nonnull String email, @Nonnull String name, @FK City city ) implements Entity {} ``` ### How It Works The `@DynamicUpdate` annotation is processed at compile time by Storm's KSP processor (Kotlin) or annotation processor (Java). The update mode is encoded in the generated metamodel class (`User_`), so there's no runtime reflection cost. 
``` ┌─────────────────────┐ Compile Time ┌─────────────────────┐ │ │ │ │ │ @DynamicUpdate │ ───────────────────▶ │ User_ metamodel │ │ data class User │ Annotation │ updateMode=FIELD │ │ │ Processor │ │ └─────────────────────┘ └─────────────────────┘ │ │ Runtime ▼ ┌─────────────────────┐ │ Storm reads mode │ │ from metamodel │ │ (no reflection) │ └─────────────────────┘ ``` ### Mixing Modes in an Application Different entities can use different update modes based on their characteristics: ```kotlin // Wide table with large content - use FIELD mode @DynamicUpdate(FIELD) data class Article( @PK val id: Int = 0, val title: String, val content: String, // Large TEXT column val metadata: String // JSON blob ) : Entity // Simple entity with frequent batch updates - use ENTITY mode (default) data class AuditLog( @PK val id: Int = 0, val action: String, val timestamp: Instant ) : Entity // High-throughput batch processing - use OFF mode @DynamicUpdate(OFF) data class MetricSample( @PK val id: Int = 0, val value: Double, val timestamp: Instant ) : Entity ``` --- ## Dirty Checking Strategy When comparing an entity to its observed state, Storm needs to determine whether each field has changed. Storm supports two strategies for this comparison. Both strategies are correct; this is purely a performance tuning choice. ### Instance-Based (Default) Instance-based checking treats a field as changed when the object reference differs (`!=` identity comparison). This is the fastest option because reference comparison is a single pointer check with no method dispatch. It works correctly in the vast majority of cases because Kotlin's `copy()` and Java's `with...()` patterns create new instances for modified fields while reusing the same references for unchanged fields. The only scenario where instance-based checking produces a false positive is when you construct a new object with identical content. 
For example, `user.copy(name = nameFromRequest)`, where `nameFromRequest` is a newly constructed `String` that happens to equal the current name, marks the field as changed even though the value is identical. (By contrast, `user.copy(name = user.name)` passes the existing reference through and is correctly detected as unchanged.) In practice, this is rare and the cost of an extra column in the UPDATE is negligible.

### Value-Based

Value-based checking compares field values using `equals()`. This avoids false positives from the scenario described above, at the cost of calling `equals()` on every field during comparison. For entities with simple fields (primitives, strings), the overhead is minimal. For entities with complex nested objects or large collections, the `equals()` calls can become measurable.

Choose value-based checking when your update patterns frequently reconstruct fields with identical values, and you want to minimize unnecessary column writes. In most applications, instance-based checking is sufficient.

**Enabling Value-Based Checking**

Per entity:

```kotlin
@DynamicUpdate(value = FIELD, dirtyCheck = VALUE)
data class User(
    @PK val id: Int = 0,
    val email: String
) : Entity
```

Globally via `StormConfig` or system property:

```kotlin
val config = StormConfig.of(mapOf(UPDATE_DIRTY_CHECK to "VALUE"))
```

```bash
-Dstorm.update.dirty_check=VALUE
```

---

## Max Shapes Limit

When using `FIELD` mode, Storm generates different SQL statements depending on which columns changed. For an entity with N columns, there could theoretically be 2^N different UPDATE shapes (though in practice, it's far fewer).

### The Problem

Databases maintain a **prepared statement cache** to avoid re-parsing SQL. Each distinct SQL shape consumes cache memory. If your application generates too many shapes, you risk:

1. **Exhausting statement cache memory**, causing evictions and re-parsing
2. **Reducing cache hit rates**, degrading performance
3. **Losing batching benefits**, as only identical statements can be batched

### Storm's Solution

Storm enforces a **maximum number of UPDATE shapes per entity**.
Once this limit is reached, Storm automatically falls back to full-row updates for that entity, ensuring bounded resource usage. ``` ┌─────────────────────────────────────────────────────────────────────┐ │ Max Shapes Protection │ ├─────────────────────────────────────────────────────────────────────┤ │ │ │ Shape 1: UPDATE user SET name=? WHERE id=? │ │ Shape 2: UPDATE user SET email=? WHERE id=? │ │ Shape 3: UPDATE user SET name=?, email=? WHERE id=? │ │ Shape 4: UPDATE user SET city_id=? WHERE id=? │ │ Shape 5: UPDATE user SET name=?, city_id=? WHERE id=? │ │ ───────────────────────────────────────────────────────────────── │ │ LIMIT REACHED (small default, e.g., 5) │ │ ───────────────────────────────────────────────────────────────── │ │ Shape 6+: UPDATE user SET name=?, email=?, city_id=? WHERE id=? │ │ (Falls back to full-row update) │ │ │ └─────────────────────────────────────────────────────────────────────┘ ``` ### Configuration **Default:** 5 shapes per entity **Configure via `StormConfig` or system property:** ```kotlin val config = StormConfig.of(mapOf(UPDATE_MAX_SHAPES to "10")) ``` ```bash -Dstorm.update.max_shapes=10 ``` ### Choosing the Right Limit - **Lower values (3-5):** Better for applications with many entity types or limited database memory. Ensures predictable caching behavior. - **Higher values (10-20):** Appropriate when you have few entity types and want maximum write efficiency. Monitor your database's statement cache usage. - **Very high values (50+):** Generally not recommended. If you need this many shapes, consider whether FIELD mode is appropriate for your use case. **Tip:** Monitor your database's prepared statement cache metrics in production. If you see high eviction rates, consider lowering the max shapes limit. --- ## Configuration Reference Storm's dirty checking behavior can be configured at multiple levels: via `StormConfig`, system properties, or per-entity annotations. 
Entity-level configuration always takes precedence over `StormConfig` defaults, and `StormConfig` values take precedence over system properties. ### Properties | Property | Default | Description | |----------|---------|-------------| | `storm.update.default_mode` | `ENTITY` | Default update mode for entities without `@DynamicUpdate` | | `storm.update.dirty_check` | `INSTANCE` | Default dirty check strategy (`INSTANCE` or `VALUE`) | | `storm.update.max_shapes` | `5` | Maximum UPDATE shapes before fallback to full-row | For cache retention settings, see [Entity Cache Configuration](entity-cache.md#configuration-reference). **Example: Setting properties** ```kotlin // Via StormConfig val config = StormConfig.of(mapOf( UPDATE_DEFAULT_MODE to "FIELD", UPDATE_DIRTY_CHECK to "VALUE", UPDATE_MAX_SHAPES to "10" )) val orm = ORMTemplate.of(dataSource, config) ``` ```bash # Or via JVM arguments (used as fallback when not set in StormConfig) java -Dstorm.update.default_mode=FIELD \ -Dstorm.update.dirty_check=VALUE \ -Dstorm.update.max_shapes=10 \ -jar myapp.jar ``` ### Per-Entity Annotation The `@DynamicUpdate` annotation provides fine-grained control per entity: ```kotlin @DynamicUpdate(OFF) // No dirty checking @DynamicUpdate(ENTITY) // Entity-level (default) @DynamicUpdate(FIELD) // Field-level updates @DynamicUpdate(FIELD, dirtyCheck = VALUE) // Field-level with value comparison ``` ### Configuration Precedence ``` ┌─────────────────────────────────────────────────────────────┐ │ Configuration Precedence │ ├─────────────────────────────────────────────────────────────┤ │ │ │ 1. @DynamicUpdate annotation on entity class │ │ ↓ (if not present) │ │ 2. StormConfig property │ │ ↓ (if not set) │ │ 3. System property (-Dstorm.update.default_mode) │ │ ↓ (if not set) │ │ 4. Built-in default (ENTITY mode, INSTANCE checking) │ │ │ └─────────────────────────────────────────────────────────────┘ ``` Entity-level annotations always override global settings. 
This allows you to set a sensible default globally while customizing specific entities that have different requirements. --- ## Entity Cache Integration Dirty checking relies on Storm's [entity cache](entity-cache.md), which stores observed entity state within a transaction. The cache serves multiple purposes beyond dirty checking. See the [Entity Cache](entity-cache.md) documentation for details on: - Cache behavior at different isolation levels - Memory management and retention configuration - Query optimization and cache-first lookups **Key point for dirty checking:** Cache writes for observed state occur at all isolation levels when dirty checking is enabled. This means dirty checking works even at `READ_COMMITTED` and `READ_UNCOMMITTED`, where cached instances are not returned on reads. --- ## Important Notes ### 1. Dirty Checking Is Not Optimistic Locking A common misconception is that dirty checking prevents concurrent modification conflicts. **It does not.** Dirty checking determines *what to update*. Optimistic locking determines *whether the update is safe* based on concurrent modifications by other transactions. ```kotlin // This does NOT prevent lost updates: val user = orm.find(User_.id eq 1) // ... another transaction modifies the same user ... orm update user.copy(name = "New Name") // May overwrite other transaction's changes! // For conflict detection, use @Version: data class User( @PK val id: Int = 0, @Version val version: Int = 0, // Incremented on each update val name: String ) : Entity // Now concurrent modifications are detected: orm update user.copy(name = "New Name") // Throws OptimisticLockException if version changed since read ``` ### 2. Raw SQL Mutations Clear All Observed State Storm tracks which entity types are affected by each mutation so it can selectively invalidate observed state. For template-based updates (using `orm update entity`), Storm knows the entity type and only clears observed state of that type. 
However, when you execute raw SQL mutations without entity type information, Storm cannot determine which entities may have been affected. Rather than risk stale comparisons that could silently skip a necessary UPDATE, Storm clears all observed state in the current transaction: ```kotlin val user = orm.get(User_.id eq 1) // Storm observes User state val city = orm.get(City_.id eq 100) // Storm observes City state // Raw SQL mutation - Storm clears all observed state orm.execute("UPDATE user SET name = 'Changed' WHERE id = 1") // All observed state is now invalidated orm update user.copy(email = "new@example.com") // Falls back to full-row update orm update city.copy(name = "New City") // Also falls back to full-row update ``` This ensures correctness at the cost of losing dirty checking optimization for the remainder of the transaction. ### 3. Nested Records Are Inspected Storm's dirty checking is not limited to top-level fields. When entities contain embedded records or value objects, Storm flattens the nested structure and inspects individual columns. This means that changing a single field inside a nested record produces a targeted UPDATE rather than rewriting the entire nested structure. ```kotlin data class Address(val street: String, val city: String) data class User( @PK val id: Int = 0, val name: String, @Embedded val address: Address ) : Entity val user = orm.find(User_.id eq 1) // With FIELD mode: only changed columns in Address are updated orm update user.copy(address = user.address.copy(city = "New City")) // UPDATE user SET city=? WHERE id=? ``` ### 4. Generated Metamodel Improves Performance Storm uses compile-time generated metamodel classes for dirty checking operations. 
This provides several advantages: - **No reflection:** Field access is direct, not reflective - **No boxing:** Primitive values are compared without boxing overhead - **Type safety:** Comparison operations are type-checked at compile time - **Optimized paths:** The generated code is specialized for each entity Ensure your build is configured to run the KSP (Kotlin) or annotation processor (Java) to generate metamodel classes. If the metamodel is not available, Storm falls back to reflection. --- ## Best Practices ### 1. Start with ENTITY Mode For most applications, the default `ENTITY` mode provides the right balance: - Skips unnecessary updates when nothing changed - Maintains stable SQL shapes for batching - Low memory and CPU overhead Only switch to `FIELD` mode when you have a specific need (wide tables, high contention, large columns). ### 2. Use FIELD Mode Strategically Reserve `FIELD` mode for entities where it provides clear benefits: ```kotlin // Good candidate for FIELD mode: // - 20+ columns // - Large TEXT content column // - Typically only 1-2 fields change per update @DynamicUpdate(FIELD) data class Article( @PK val id: Int, val title: String, val content: String, // Large val metadata: String, // JSON blob val viewCount: Int, // Frequently updated alone // ... many more fields ) : Entity // Poor candidate for FIELD mode: // - Only 4 columns // - All fields typically change together // - Batched frequently data class AuditEntry( // Keep ENTITY mode @PK val id: Int, val action: String, val userId: Int, val timestamp: Instant ) : Entity ``` ### 3. Always Use @Version for Concurrency Control Dirty checking answers "what changed?" but not "did someone else change this?" ```kotlin data class Account( @PK val id: Int = 0, @Version val version: Int = 0, // Always include for concurrent access val balance: BigDecimal ) : Entity ``` Without `@Version`, concurrent updates can silently overwrite each other (lost update problem). ### 4. 
Match Mode to Workload ``` ┌─────────────────────────────────────────────────────────────────────┐ │ Mode Selection Guide │ ├─────────────────────────────────────────────────────────────────────┤ │ │ │ Workload Recommended Mode │ │ ────────────────────────────────── ───────────────── │ │ Typical CRUD application ENTITY (default) │ │ Batch import/export OFF │ │ Wide tables (20+ columns) FIELD │ │ Tables with BLOB/TEXT columns FIELD │ │ High-contention rows FIELD │ │ Event sourcing / audit logs OFF │ │ Mixed workload ENTITY + selective FIELD │ │ │ └─────────────────────────────────────────────────────────────────────┘ ``` ### 5. Monitor in Production If using `FIELD` mode extensively, monitor: - **Database statement cache:** Watch for high eviction rates - **Query execution plans:** Ensure varied SQL shapes don't cause plan instability - **Batch sizes:** Verify batching is still effective for your use case Most databases provide metrics for prepared statement cache usage. If you see degradation, consider: - Lowering `storm.update.max_shapes` - Switching some entities back to `ENTITY` mode - Increasing database statement cache size ======================================== ## Source: entity-cache.md ======================================== # Entity Cache Storm maintains a transaction-scoped entity cache that optimizes database interactions. The cache is a pure performance optimization: it never changes the semantics of your transactions. What you read from the database is exactly what you would read without caching; the cache simply avoids redundant work. ## Design Principles **Semantics-preserving:** The cache is carefully designed to align with your chosen transaction isolation level. At `READ_COMMITTED` or lower, you see fresh data on every read; the cache won't return stale instances. At `REPEATABLE_READ` or higher, returning cached instances is safe and matches what the database guarantees. **Transparent:** You don't need to manage the cache. 
It's automatically scoped to the transaction and cleared on commit or rollback. There's no flush, no detach, no merge. Just predictable behavior aligned with your isolation level. **Multi-purpose:** The cache serves four complementary goals: 1. **Query optimization:** Avoid redundant database round-trips for the same entity 2. **Hydration optimization:** Skip entity construction when a cached instance exists 3. **Identity preservation:** Same database row returns the same object instance 4. **Dirty checking:** Track observed state for efficient updates ## How It Works When you read an entity within a transaction, Storm stores it in a transaction-local cache keyed by primary key: ``` ┌─────────────────────────────────────────────────────────────────────────┐ │ Transaction Scope │ │ │ │ ┌──────────────┐ ┌──────────────────────────┐ │ │ │ Database │ │ Entity Cache │ │ │ └──────┬───────┘ │ ┌────────┬───────────┐ │ │ │ │ │ │ PK │ Entity │ │ │ │ │ SELECT │ ├────────┼───────────┤ │ │ │ ▼ │ │ 1 │ User(1) │ │ │ │ ┌──────────────┐ cache write │ │ 2 │ User(2) │ │ │ │ │ User(id=1) │ ─────────────────────▶ │ 42 │ City(42) │ │ │ │ └──────────────┘ │ └────────┴───────────┘ │ │ │ └──────────────────────────┘ │ │ │ │ │ ┌──────────────┐ cache read │ │ │ │ findById(1) │ ◀───────────────────────────────┘ │ │ └──────────────┘ (no SQL) │ │ │ └─────────────────────────────────────────────────────────────────────────┘ ``` --- ## Cache Behavior Whether Storm returns cached instances depends on the transaction isolation level: | Isolation Level | Cache Write | Cache Read | |-----------------|-------------|------------| | `READ_COMMITTED` or lower | If dirty checking enabled | No | | `REPEATABLE_READ` or higher | Yes | Yes | At `READ_COMMITTED` or lower, Storm fetches fresh data on every read. At `REPEATABLE_READ` or higher, cached instances are returned. This matches what the database guarantees at each isolation level. 
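Restated as code, the table above reduces to a small predicate. This is a sketch of the rule only; `Isolation`, `cacheReadAllowed`, and `cacheWriteAllowed` are illustrative names, not Storm API:

```kotlin
enum class Isolation { READ_UNCOMMITTED, READ_COMMITTED, REPEATABLE_READ, SERIALIZABLE }

// Cached instances are only returned when the database itself guarantees
// repeatable reads; below that, every read fetches fresh data.
fun cacheReadAllowed(isolation: Isolation): Boolean =
    isolation >= Isolation.REPEATABLE_READ

// Observed state is still written at lower levels when dirty checking needs it.
fun cacheWriteAllowed(isolation: Isolation, dirtyCheckingEnabled: Boolean): Boolean =
    cacheReadAllowed(isolation) || dirtyCheckingEnabled
```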
When no isolation level is explicitly set, Storm uses the database default and fetches fresh data on each read. Most databases default to `READ_COMMITTED`. --- ## Query Optimization The primary benefit of the entity cache is avoiding redundant database round-trips. When your code reads the same entity multiple times within a transaction, the cache short-circuits the second read. This matters most in business logic that navigates entity graphs, where the same parent entity may be reached through multiple paths. ### Repository Lookups Repository methods that fetch by primary key check the cache first: ```kotlin transaction { val user = userRepository.findById(1) // Database query, result cached // ... other operations ... val sameUser = userRepository.findById(1) // Cache hit, no query // user === sameUser (same instance at REPEATABLE_READ+) } ``` This applies to: - `findById()` / `getById()` - `findByRef()` / `getByRef()` - `selectById()` / `selectByRef()` ### Ref Resolution When you call `fetch()` on a `Ref`, Storm checks the cache before querying: ```kotlin transaction { val order = orderRepository.findById(orderId) // If the customer was already loaded in this transaction, // this returns the cached instance without a query val customer = order.customer.fetch() } ``` This is particularly useful when navigating entity graphs where the same entity might be referenced multiple times. ### Select Operations Batch select operations benefit from cache-aware splitting. When you request multiple entities by ID, Storm partitions the IDs into cache hits and cache misses, queries the database only for the misses, and merges the results. This is transparent to the caller and reduces query size when some entities have already been loaded. 
```kotlin
transaction {
    // Load some users
    val user1 = userRepository.findById(1)  // Cached
    val user2 = userRepository.findById(2)  // Cached

    // Select users 1, 2, 3, 4, 5
    val users = userRepository.select(listOf(1, 2, 3, 4, 5))
    // Only queries for IDs 3, 4, 5 - returns cached instances for 1, 2
}
```

---

## Entity Identity

At `REPEATABLE_READ` and above, the cache ensures consistent entity identity within a transaction:

```kotlin
transaction(isolation = REPEATABLE_READ) {
    val user1 = userRepository.findById(1)
    val user2 = userRepository.findById(1)

    // Same instance
    check(user1 === user2)

    // Also applies to entities loaded via relationships
    val order = orderRepository.findById(orderId)
    val orderUser = order.user.fetch()
    if (order.userId == 1) {
        check(orderUser === user1)  // Same instance
    }
}
```

This identity guarantee simplifies application logic: you can use reference equality (`===`) to check if two variables refer to the same database row.

---

## Cache Invalidation

The cache must stay consistent with the database. Rather than trying to predict what the database will store after a mutation (which is impossible when triggers, computed columns, or version increments are involved), Storm invalidates the cache entry for any mutated entity. The next read fetches the authoritative state from the database.

### After Mutations

Insert, update, upsert, and delete operations invalidate the cache entry for the affected entity:

```kotlin
transaction {
    val user = userRepository.findById(1)  // Cached

    userRepository.update(user.copy(name = "New Name"))
    // Cache entry invalidated

    val freshUser = userRepository.findById(1)  // Database query
    // freshUser has the database state (including any trigger modifications)
}
```

Why invalidate rather than update? The database may modify data in ways not visible to the application:

- Triggers can change values after INSERT/UPDATE
- Version fields are incremented by the database
- Default values and computed columns
- `ON UPDATE CURRENT_TIMESTAMP` constraints

By invalidating, the next read fetches the actual persisted state.

### Raw SQL Mutations

When you execute raw SQL mutations, Storm cannot determine which entities were affected:

```kotlin
transaction {
    val user = userRepository.findById(1)   // Cached
    val city = cityRepository.findById(42)  // Cached

    // Raw SQL - Storm doesn't know what was affected
    orm.execute("UPDATE user SET status = 'inactive' WHERE last_login < ?", cutoffDate)
    // All caches cleared for safety

    val freshUser = userRepository.findById(1)  // Database query
}
```

To preserve cache efficiency, prefer using repository methods or typed templates that specify the entity type.

---

## Memory Management

The entity cache is scoped to a single transaction and automatically discarded when the transaction commits or rolls back. You don't need to manage cache lifecycle; it's tied to the transaction boundary.

Within a transaction, memory is managed automatically based on what your code is using:

- Entities you're actively using stay cached
- Entities you've moved past can be reclaimed
- Memory usage stays proportional to your working set

### Retention Modes

The retention mode controls how long the JVM retains cached entities within a transaction. The `default` mode retains entities for the duration of the transaction, which provides reliable dirty checking. The JVM may still reclaim entries under memory pressure. Switch to `light` only if you have memory-constrained transactions that load a very large number of entities and you are willing to trade dirty-checking accuracy for lower memory usage.
Configure retention behavior via `StormConfig` or system property:

```kotlin
val config = StormConfig.of(mapOf(ENTITY_CACHE_RETENTION to "light"))
val orm = ORMTemplate.of(dataSource, config)
```

```bash
-Dstorm.entity_cache.retention=default  # Default
-Dstorm.entity_cache.retention=light
```

| Mode | Behavior | Use Case |
|------|----------|----------|
| `default` | Entries retained for the transaction duration (reclaimable under memory pressure) | Most applications |
| `light` | Entries may be cleaned up when entity is no longer referenced | Memory-constrained bulk operations |

### Impact on Dirty Checking

If an entity's cache entry is cleaned up before you call `update()`, Storm falls back to a full-row update. This is correct but less efficient. With the `default` retention mode, this rarely happens. If you use `light` retention and observe frequent fallbacks, consider:

- Switching to `default` retention
- Keeping references to entities until update
- Restructuring code to update sooner after reading

---

## Dirty Checking Integration

The entity cache serves a dual purpose beyond query optimization: it stores the original state of entities at the time they were read. This original state is the baseline for dirty checking. When you update an entity, Storm compares the new values against the cached original to determine which fields changed, producing a minimal UPDATE statement. Without the cache, Storm falls back to updating all columns.

```kotlin
transaction {
    val user = userRepository.findById(1)  // State observed and cached

    // Modify the entity
    val updated = user.copy(name = "New Name")

    // Storm compares against cached state
    userRepository.update(updated)
    // With FIELD mode: UPDATE user SET name = ? WHERE id = ?
    // With ENTITY mode: Full row update, but skipped if unchanged
}
```

Cache writes for dirty checking occur at all isolation levels when dirty checking is enabled for the entity type.
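The FIELD-mode comparison can be sketched as follows. This is an illustrative model only, assuming the `INSTANCE` dirty-check strategy; `User` and `diff` are hypothetical names, not Storm's API:

```java
// Illustrative sketch of FIELD-mode dirty checking: compare the updated
// entity against the cached original and collect only the changed columns.
import java.util.LinkedHashMap;
import java.util.Map;

public class DirtyCheckSketch {

    record User(int id, String name, String email) {}

    // INSTANCE-style comparison: with immutable records, an unchanged field
    // shares the same object reference, so a fast identity check suffices.
    static Map<String, Object> diff(User original, User updated) {
        var changed = new LinkedHashMap<String, Object>();
        if (original.name() != updated.name()) changed.put("name", updated.name());
        if (original.email() != updated.email()) changed.put("email", updated.email());
        return changed;
    }

    public static void main(String[] args) {
        var original = new User(1, "Ada", "ada@example.com");
        // Like Kotlin's copy(), reuse the original reference for unchanged fields
        var updated = new User(1, "Ada L.", original.email());
        System.out.println(diff(original, updated));  // {name=Ada L.}
    }
}
```

Only the `name` column would appear in the UPDATE, because `email` still points at the original reference. This also illustrates why mutable field types need the `VALUE` strategy: a mutated object keeps its reference, so an identity check would miss the change.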
See [Dirty Checking & Update Modes](dirty-checking.md) for details on configuring update behavior.

---

## Transaction Boundaries

The entity cache is scoped to a single transaction. When the transaction commits or rolls back, all cached state is discarded. This ensures that no stale data leaks across transaction boundaries.

### Nested Transactions

Cache behavior with nested transactions follows from the underlying transaction semantics. Propagation modes that share the parent transaction also share the parent cache. Propagation modes that create a new transaction (or suspend the current one) start with a fresh, empty cache.

| Propagation | Cache Behavior |
|-------------|----------------|
| `REQUIRED`, `SUPPORTS`, `MANDATORY` | Shares parent's cache |
| `NESTED` | Shares parent's cache; cleared on savepoint rollback |
| `REQUIRES_NEW` | Fresh cache (separate transaction) |
| `NOT_SUPPORTED`, `NEVER` | Fresh cache (no transaction) |

```kotlin
transaction {
    val user = userRepository.findById(1)  // Cached in outer transaction

    transaction(propagation = NESTED) {
        val sameUser = userRepository.findById(1)  // Cache hit from outer
        // sameUser === user
    }

    transaction(propagation = REQUIRES_NEW) {
        val differentUser = userRepository.findById(1)  // Fresh query, new cache
        // differentUser !== user (different transaction, different instance)
    }
}
```

---

## Configuration Reference

| Property | Default | Description |
|----------|---------|-------------|
| `storm.entity_cache.retention` | `default` | Cache retention: `default` or `light` |

---

## Query-Level Identity

Even without transaction-level caching, Storm preserves entity identity within a single query result. When the same entity appears multiple times in one query (e.g., through joins), Storm interns these to the same object instance. This happens automatically during result set hydration.
```kotlin
// Even at READ_COMMITTED in a read-write transaction:
val orders = orderRepository.findAll(Order_.status eq "pending")

// If order1 and order2 have the same customer, they share the instance
val customer1 = orders[0].customer.fetch()
val customer2 = orders[1].customer.fetch()
// Same instance if same customer
```

**Why this matters:**

- **Memory efficiency:** Duplicate entities in a result set share one instance
- **Consistent identity:** Within a query, `===` works as expected for same-row entities

**Relationship with transaction-level caching:**

- At `READ_COMMITTED` or lower: Identity preserved within each query, but separate queries may return different instances
- At `REPEATABLE_READ` or higher: Query-level identity is extended transaction-wide via the cache

For details on how the query interner works during hydration, see [Hydration](hydration.md#query-level-identity-interning).

---

## Best Practices

### 1. Choose the Right Isolation Level

When no isolation level is explicitly set, Storm uses the database default (typically `READ_COMMITTED` for most databases). Use higher isolation levels only when you have a specific consistency requirement:

```kotlin
// Database default: Fresh data on each read
transaction {
    val user = userRepository.findById(1)
    // ... later ...
    val freshUser = userRepository.findById(1)  // Fresh database query
}

// REPEATABLE_READ: Consistent snapshot, cached instances, more locking
transaction(isolation = REPEATABLE_READ) {
    val user = userRepository.findById(1)
    val sameUser = userRepository.findById(1)  // Cache hit, same instance
}
```

See [Transactions](transactions.md#isolation-levels) for guidance on choosing isolation levels.

### 2. Leverage Ref.fetch() Caching

When navigating relationships, `Ref.fetch()` automatically uses the cache:

```kotlin
transaction {
    // Load orders with their users
    val orders = orderRepository.findAll(Order_.status eq "pending")

    // If multiple orders share the same user, only one query per unique user
    orders.forEach { order ->
        val user = order.user.fetch()  // Cached after first fetch per user
        println("${order.id} belongs to ${user.name}")
    }
}
```

### 3. Batch Lookups for Cache Efficiency

When you need multiple entities, use batch lookups to optimize cache interaction:

```kotlin
transaction {
    // Efficient: single query for cache misses
    val users = userRepository.select(userIds)

    // Less efficient: N queries (though cached results help)
    val usersOneByOne = userIds.map { userRepository.findById(it) }
}
```

### 4. Keep Transactions Focused

Since the cache is transaction-scoped, long-running transactions accumulate cached entities. Keep transactions focused on specific operations to maintain predictable memory usage.

========================================
## Source: cursors.md
========================================

# Cursor Serialization

This page covers the low-level details of cursor serialization for scrolling. For a high-level introduction to scrolling, see [Pagination and Scrolling: Scrolling](pagination-and-scrolling.md#scrolling).

## Overview

When a `Window` is returned from a scroll operation, it carries `Scrollable` navigation tokens that encode the cursor position. These tokens can be serialized to opaque, URL-safe strings using `toCursor()` and deserialized using `Scrollable.fromCursor()`. This allows REST APIs to pass scroll state as a query parameter between requests.

[Kotlin]
```kotlin
// Server: serialize cursor into response
val cursor: String? = window.nextCursor()

// Client sends cursor back in next request
// Server: reconstruct scrollable
cursor?.let {
    val scrollable = Scrollable.fromCursor(User_.id, it)
    val next = userRepository.scroll(scrollable)
}
```

[Java]
```java
// Server: serialize cursor into response
String cursor = window.nextCursor();

// Client sends cursor string back in next request
// Server: reconstruct scrollable
var scrollable = Scrollable.fromCursor(User_.id, cursor);
var next = userRepository.scroll(scrollable);
```

## Cursor format

The serialized cursor is a Base64 URL-safe encoded binary payload. The format is intentionally opaque: clients should treat it as an immutable token and never parse or modify it.

The internal structure includes:

- A version byte for forward compatibility
- Fingerprints of the metamodel key/sort paths and the codec registry, used to detect mismatches on deserialization
- The scroll direction (forward or backward)
- The page size
- The cursor value(s) for the key and optional sort fields

Cursors produced by one application instance can be consumed by another, as long as both use the same entity model and codec registry. A cursor becomes invalid if the metamodel paths change (for example, renaming the key field) or if the codec registry changes (for example, adding or removing a custom codec).

## Security

The cursor format is opaque but **not tamper-proof**. A malicious client can decode the Base64 payload, modify cursor values, and re-encode it. Storm validates structural integrity (version, fingerprints, type tags, trailing bytes), but it does not detect value tampering. If your cursors are exposed to untrusted clients (for example, in a public REST API), consider one of the following mitigations:

- **HMAC wrapping.** Sign the cursor string with a server-side secret and verify the signature before passing it to `fromCursor()`. This prevents modification without detection.
- **Encryption.** Encrypt the cursor string before sending it to the client and decrypt it on the server. This prevents both reading and modification.
- **Server-side storage.** Store the cursor state on the server (for example, in a session or cache) and give the client an opaque session key instead of the actual cursor.

Storm does not provide built-in signing or encryption because the appropriate security mechanism depends on your application's threat model and infrastructure.

## Supported types

The following Java types can be used as cursor values (key or sort fields) out of the box:

| Type | Binary size | Notes |
|------|------------|-------|
| `Integer` / `int` | 4 bytes | |
| `Long` / `long` | 8 bytes | |
| `Short` / `short` | 2 bytes | |
| `Byte` / `byte` | 1 byte | |
| `Boolean` / `boolean` | 1 byte | |
| `String` | 4 + length | UTF-8 encoded |
| `UUID` | 16 bytes | |
| `Instant` | 12 bytes | Epoch seconds + nanos |
| `LocalDate` | 6 bytes | Year (4) + month (1) + day (1) |
| `LocalDateTime` | 11 bytes | Date (6) + hour/min/sec (3) + nanos (4) |
| `OffsetDateTime` | 15 bytes | LocalDateTime (11) + offset seconds (4) |
| `BigDecimal` | 4 + length | Serialized as plain string |

If your key or sort field uses a type not in this list, serialization via `toCursor()` will throw an `IllegalStateException`. You can either use one of the supported types for your key/sort columns, or register a custom codec.

Note that in-memory navigation (using `next()` and `previous()` directly, without serializing to a cursor string) works with any type, including inline records and other composite types. The type restriction only applies to `toCursor()` serialization.

## Custom cursor codecs

To add cursor serialization support for a custom type, implement the `CursorCodecProvider` SPI. Storm discovers providers via `ServiceLoader`.

### Step 1: Implement the codec

Create a class that implements `CursorCodecProvider` and returns codec entries for your custom types.
Each entry binds a unique tag (in the range 64-255), a Java type, and a `CursorCodec` implementation. Tags below 64 are reserved for built-in types and will be rejected at startup.

[Kotlin]
```kotlin
class MyCursorCodecProvider : CursorCodecProvider {
    override fun codecs(): List<CursorCodecEntry<*>> = listOf(
        CursorCodecEntry(64, UserId::class.java, object : CursorCodec<UserId> {
            override fun write(out: DataOutputStream, value: UserId) {
                out.writeLong(value.value)
            }

            override fun read(`in`: DataInputStream): UserId {
                return UserId(`in`.readLong())
            }
        })
    )
}
```

[Java]
```java
public class MyCursorCodecProvider implements CursorCodecProvider {
    @Override
    public List<CursorCodecEntry<?>> codecs() {
        return List.of(
            new CursorCodecEntry<>(64, UserId.class, new CursorCodec<UserId>() {
                @Override
                public void write(DataOutputStream out, UserId value) throws IOException {
                    out.writeLong(value.value());
                }

                @Override
                public UserId read(DataInputStream in) throws IOException {
                    return new UserId(in.readLong());
                }
            })
        );
    }
}
```

### Step 2: Register the provider

Create a service file at `META-INF/services/st.orm.core.spi.CursorCodecProvider` containing the fully qualified class name of your provider:

```
com.example.MyCursorCodecProvider
```

### Constraints

- Custom tags must be in the range **64-255**. Tags 0-63 are reserved for built-in types. Using a reserved tag throws an `IllegalArgumentException` at startup.
- Each tag and each type can only be registered once. Duplicate registrations throw an `IllegalArgumentException` at startup.
- The codec registry is built once at class load time. Adding or removing codecs changes the registry fingerprint, which invalidates all previously serialized cursors.
- The `write` method receives a non-null value; null handling is done by the framework. The `read` method must return a non-null value.

## Size limit

Cursor strings carry a page size that is validated during deserialization. The maximum size defaults to 1000 and can be configured via the `st.orm.scrollable.maxSize` system property.
This limit only applies to cursors deserialized from external input via `fromCursor()`, not to programmatic `Scrollable.of()` calls.

========================================
## Source: configuration.md
========================================

# Configuration

Storm can be configured through `StormConfig`, system properties, Spring Boot's `application.yml`, or Ktor's `application.conf`. These properties control runtime behavior for features like dirty checking and entity caching.

All properties have sensible defaults, so **configuration is optional**. Storm works out of the box without any configuration.

---

## Properties

| Property | Default | Description |
|----------|---------|-------------|
| `storm.update.default_mode` | `ENTITY` | Default update mode for entities without `@DynamicUpdate` |
| `storm.update.dirty_check` | `INSTANCE` | Default dirty check strategy (`INSTANCE` or `VALUE`) |
| `storm.update.max_shapes` | `5` | Maximum UPDATE shapes before fallback to full-row |
| `storm.entity_cache.retention` | `default` | Cache retention mode: `default` or `light` |
| `storm.template_cache.size` | `2048` | Maximum number of compiled templates to cache |
| `storm.validation.record_mode` | `fail` | Record validation mode: `fail`, `warn`, or `none` |
| `storm.validation.schema_mode` | `none` | Schema validation mode: `none`, `warn`, or `fail` (Spring Boot and Ktor) |
| `storm.validation.strict` | `false` | Treat schema validation warnings as errors |
| `storm.validation.interpolation_mode` | `warn` | Interpolation safety mode: `warn`, `fail`, or `none` (see [Interpolation Safety](#interpolation-safety)) |
| `st.orm.scrollable.maxSize` | `1000` | Maximum window size allowed in a serialized cursor (system property only) |

### Setting Properties

**Via JVM arguments:**

```bash
java -Dstorm.update.default_mode=FIELD \
     -Dstorm.update.dirty_check=VALUE \
     -Dstorm.update.max_shapes=10 \
     -Dstorm.entity_cache.retention=light \
     -Dstorm.template_cache.size=4096 \
     -jar myapp.jar
```

**Programmatically via `StormConfig`:**

`StormConfig` holds an immutable set of `String` key-value properties. Pass a `StormConfig` to `ORMTemplate.of()` to apply the configuration. Any property not explicitly set falls back to the system property, then to the built-in default.

[Kotlin]
```kotlin
val config = StormConfig.of(mapOf(
    UPDATE_DEFAULT_MODE to "FIELD",
    ENTITY_CACHE_RETENTION to "light",
    TEMPLATE_CACHE_SIZE to "4096"
))
val orm = ORMTemplate.of(dataSource, config)

// Or using the extension function
val orm = dataSource.orm(config)
```

[Java]
```java
var config = StormConfig.of(Map.of(
    UPDATE_DEFAULT_MODE, "FIELD",
    ENTITY_CACHE_RETENTION, "light",
    TEMPLATE_CACHE_SIZE, "4096"
));
var orm = ORMTemplate.of(dataSource, config);
```

When `StormConfig` is omitted, `ORMTemplate.of(dataSource)` reads system properties and built-in defaults automatically.

**In Spring Boot's `application.yml`** (requires `storm-spring-boot-starter` or `storm-kotlin-spring-boot-starter`):

```yaml
storm:
  ansi-escaping: false
  update:
    default-mode: ENTITY
    dirty-check: INSTANCE
    max-shapes: 5
  entity-cache:
    retention: default
  template-cache:
    size: 2048
  validation:
    record-mode: fail
    schema-mode: none
    strict: false
```

The Spring Boot Starter binds these properties and builds a `StormConfig` that is passed to the `ORMTemplate` factory. Values not set in YAML fall back to system properties and then to built-in defaults. See [Spring Integration](spring-integration.md#configuration-via-applicationyml) for details.

**In Ktor's `application.conf`** (requires `storm-ktor`):

```hocon
storm {
    ansiEscaping = false
    update {
        defaultMode = "ENTITY"
        dirtyCheck = "INSTANCE"
        maxShapes = 5
    }
    entityCache {
        retention = "default"
    }
    templateCache {
        size = 2048
    }
    validation {
        recordMode = "fail"
        schemaMode = "none"
        strict = false
    }
}
```

The Storm Ktor plugin reads these properties and builds a `StormConfig` that is passed to the `ORMTemplate` factory.
HOCON supports environment variable substitution with `${?VAR_NAME}` syntax. See [Ktor Integration](ktor-integration.md#configuration) for details.

---

## ORMTemplate Factory Overloads

The `ORMTemplate.of()` factory method is the main entry point for creating an ORM template outside of Spring. It accepts optional parameters for configuration and template decoration, so you can combine `StormConfig` (for runtime properties) with a `TemplateDecorator` (for name resolution customization) at creation time.

The simplest form takes only a `DataSource` and uses all defaults. From there, you can add a `StormConfig` for property overrides, a decorator for custom naming conventions, or both. The decorator parameter is a `UnaryOperator<TemplateDecorator>` that receives the default decorator and returns a modified version.

[Kotlin]
```kotlin
// Minimal: defaults only
val orm = dataSource.orm

// With configuration
val orm = dataSource.orm(config)

// With decorator (custom name resolution)
val orm = dataSource.orm { decorator ->
    decorator.withTableNameResolver(TableNameResolver.toUpperCase(TableNameResolver.DEFAULT))
}

// With both configuration and decorator
val orm = dataSource.orm(config) { decorator ->
    decorator.withColumnNameResolver(ColumnNameResolver.toUpperCase(ColumnNameResolver.DEFAULT))
}
```

[Java]
```java
// Minimal: defaults only
var orm = ORMTemplate.of(dataSource);

// With configuration
var orm = ORMTemplate.of(dataSource, config);

// With decorator (custom name resolution)
var orm = ORMTemplate.of(dataSource, decorator -> decorator
    .withTableNameResolver(TableNameResolver.toUpperCase(TableNameResolver.DEFAULT)));

// With both configuration and decorator
var orm = ORMTemplate.of(dataSource, config, decorator -> decorator
    .withColumnNameResolver(ColumnNameResolver.toUpperCase(ColumnNameResolver.DEFAULT)));
```

When using Spring Boot, the starter creates the `ORMTemplate` for you and applies configuration from `application.yml`. You can still customize name resolution by defining a `TemplateDecorator` bean. See [Spring Integration: Template Decorator](spring-integration.md#template-decorator) for details.

---

## Naming Conventions

Storm uses pluggable name resolvers to convert Kotlin/Java names to database identifiers. By default, camelCase names are converted to snake_case. You can replace or wrap these resolvers to match any naming convention your database requires, whether that means uppercase identifiers, table prefixes, or entirely custom logic.

This section covers global name resolution configuration. For per-entity annotation overrides (`@DbTable`, `@DbColumn`), see [Entities: Custom Table and Column Names](entities.md#custom-table-and-column-names).

### Name Resolvers

Storm splits name resolution into three independent concerns. Each resolver is a functional interface with a single method, so you can configure them with lambdas or with full class implementations.

| Resolver | Method Signature | Purpose |
|----------|-----------------|---------|
| `TableNameResolver` | `resolveTableName(RecordType)` | Maps an entity or projection class to a table name |
| `ColumnNameResolver` | `resolveColumnName(RecordField)` | Maps a record field to a column name |
| `ForeignKeyResolver` | `resolveColumnName(RecordField, RecordType)` | Maps a foreign key field to its column name, given the target entity type |

The separation means you can, for example, use uppercase table names while keeping lowercase column names, or apply a custom foreign key naming pattern without affecting regular columns.

### Default Conversion: CamelCase to Snake_Case

Out of the box, Storm converts camelCase identifiers to snake_case by inserting underscores before uppercase letters and lowercasing the result. This matches the most common convention in relational databases.
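As an illustration, the default conversion can be re-implemented in a few lines. This is a sketch of the rule just described, not Storm's actual resolver code; `toSnakeCase` is a hypothetical name:

```java
// Illustrative re-implementation of the default camelCase -> snake_case rule:
// insert an underscore before each uppercase letter, then lowercase it.
public class SnakeCase {

    static String toSnakeCase(String name) {
        var sb = new StringBuilder();
        for (char c : name.toCharArray()) {
            if (Character.isUpperCase(c)) {
                if (sb.length() > 0) sb.append('_');  // no leading underscore
                sb.append(Character.toLowerCase(c));
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toSnakeCase("birthDate"));     // birth_date
        System.out.println(toSnakeCase("UserRole"));      // user_role
        System.out.println(toSnakeCase("city") + "_id");  // foreign key: city_id
    }
}
```

The foreign key case simply appends `_id` to the converted name, matching the tables below.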
| Field/Class | Resolved Name |
|-------------|---------------|
| `id` | `id` |
| `email` | `email` |
| `birthDate` | `birth_date` |
| `postalCode` | `postal_code` |
| `firstName` | `first_name` |
| `UserRole` | `user_role` |

For foreign keys, `_id` is appended after the conversion. This convention makes it clear which columns are foreign keys when reading the schema directly.

| FK Field | Resolved Column |
|----------|-----------------|
| `city` | `city_id` |
| `petType` | `pet_type_id` |
| `homeAddress` | `home_address_id` |

### Configuring Name Resolvers

To replace the default resolvers, pass a `TemplateDecorator` when creating the ORM template. The decorator exposes `withTableNameResolver()`, `withColumnNameResolver()`, and `withForeignKeyResolver()` methods. You only need to set the resolvers you want to change; any resolver you leave unset keeps its default behavior.

[Kotlin]
```kotlin
val orm = dataSource.orm { decorator ->
    decorator
        .withTableNameResolver(TableNameResolver.camelCaseToSnakeCase())
        .withColumnNameResolver(ColumnNameResolver.camelCaseToSnakeCase())
        .withForeignKeyResolver(ForeignKeyResolver.camelCaseToSnakeCase())
}
```

[Java]
```java
var orm = ORMTemplate.of(dataSource, decorator -> decorator
    .withTableNameResolver(TableNameResolver.camelCaseToSnakeCase())
    .withColumnNameResolver(ColumnNameResolver.camelCaseToSnakeCase())
    .withForeignKeyResolver(ForeignKeyResolver.camelCaseToSnakeCase()));
```

The example above is equivalent to the defaults and is shown for illustration. In practice, you would only call these methods when you want to override the default behavior.

### Uppercase Conversion

Some databases (notably Oracle) use uppercase identifiers by default. Rather than writing a new resolver from scratch, Storm provides `toUpperCase()` wrappers that decorate any existing resolver and uppercase its output.
[Kotlin]
```kotlin
val orm = dataSource.orm { decorator ->
    decorator
        .withTableNameResolver(TableNameResolver.toUpperCase(TableNameResolver.camelCaseToSnakeCase()))
        .withColumnNameResolver(ColumnNameResolver.toUpperCase(ColumnNameResolver.camelCaseToSnakeCase()))
        .withForeignKeyResolver(ForeignKeyResolver.toUpperCase(ForeignKeyResolver.camelCaseToSnakeCase()))
}
```

[Java]
```java
var orm = ORMTemplate.of(dataSource, decorator -> decorator
    .withTableNameResolver(TableNameResolver.toUpperCase(TableNameResolver.camelCaseToSnakeCase()))
    .withColumnNameResolver(ColumnNameResolver.toUpperCase(ColumnNameResolver.camelCaseToSnakeCase()))
    .withForeignKeyResolver(ForeignKeyResolver.toUpperCase(ForeignKeyResolver.camelCaseToSnakeCase())));
```

This produces:

| Field/Class | Resolved Name |
|-------------|---------------|
| `birthDate` | `BIRTH_DATE` |
| `User` | `USER` |
| `city` (FK) | `CITY_ID` |

### Composing Resolvers

The `toUpperCase()` wrapper demonstrates a general pattern: because each resolver is a functional interface, you can compose wrappers that add behavior to any existing resolver. This is more flexible than subclassing because wrappers are independent of each other and can be combined in any order.

Consider, for example, a wrapper that adds a table name prefix. This is useful when multiple applications share a database and each uses a common prefix to avoid table name collisions.
[Kotlin]
```kotlin
fun withPrefix(prefix: String, resolver: TableNameResolver) = TableNameResolver { type ->
    "$prefix${resolver.resolveTableName(type)}"
}

val orm = dataSource.orm { decorator ->
    decorator.withTableNameResolver(withPrefix("app_", TableNameResolver.camelCaseToSnakeCase()))
}
// User -> app_user, OrderItem -> app_order_item
```

[Java]
```java
static TableNameResolver withPrefix(String prefix, TableNameResolver resolver) {
    return type -> prefix + resolver.resolveTableName(type);
}

var orm = ORMTemplate.of(dataSource, decorator -> decorator
    .withTableNameResolver(withPrefix("app_", TableNameResolver.camelCaseToSnakeCase())));
// User -> app_user, OrderItem -> app_order_item
```

Note that each resolver should return a plain identifier (the table name, column name, or foreign key column name). Do not include schema qualifiers or other SQL syntax in the resolved name.

### RecordType and RecordField Reference

Custom resolvers receive `RecordType` and `RecordField` objects that provide metadata about the entity or field being resolved. These objects give you access to the class, its annotations, and individual field details, so your resolvers can make decisions based on package names, annotation presence, field types, or any other metadata.

**`RecordType`** is passed to `TableNameResolver` and `ForeignKeyResolver`. It represents the entity or projection class being mapped.

| Method | Return Type | Description |
|--------|-------------|-------------|
| `type()` | `Class` | The record class |
| `annotations()` | `List<Annotation>` | All annotations on the record class |
| `fields()` | `List<RecordField>` | Metadata for all record fields, in declaration order |
| `isAnnotationPresent(Class)` | `boolean` | Whether an annotation type is present |
| `getAnnotation(Class)` | `Annotation` | Retrieve a single annotation by type |

**`RecordField`** is passed to `ColumnNameResolver` and `ForeignKeyResolver`. It represents a single field (record component) being mapped to a column.

| Method | Return Type | Description |
|--------|-------------|-------------|
| `name()` | `String` | The field name (e.g., `"birthDate"`) |
| `type()` | `Class` | The raw field type |
| `declaringType()` | `Class` | The class that declares this field |
| `annotations()` | `List<Annotation>` | All annotations on the field |
| `isAnnotationPresent(Class)` | `boolean` | Whether an annotation type is present |
| `nullable()` | `boolean` | Whether the field can be null |

### Custom Resolvers

When the built-in resolvers and wrappers are not enough, you can implement fully custom naming strategies. There are two approaches: lambda expressions for simple, inline logic, and interface implementations for strategies that are complex or shared across projects.

#### Lambda-Based Configuration

Lambdas are convenient for quick, self-contained overrides. Since each resolver is a functional interface, a single lambda replaces the entire resolution strategy for that concern.

[Kotlin]
```kotlin
// Identity resolver: use the field name as-is, without any conversion
val orm = dataSource.orm { decorator ->
    decorator.withColumnNameResolver { field -> field.name() }
}

// Custom prefix for foreign key columns
val orm = dataSource.orm { decorator ->
    decorator.withForeignKeyResolver { field, type ->
        "fk_${ForeignKeyResolver.camelCaseToSnakeCase().resolveColumnName(field, type)}"
    }
}
```

[Java]
```java
// Identity resolver: use the field name as-is, without any conversion
var orm = ORMTemplate.of(dataSource, decorator -> decorator
    .withColumnNameResolver(field -> field.name()));

// Custom prefix for foreign key columns
var orm = ORMTemplate.of(dataSource, decorator -> decorator
    .withForeignKeyResolver((field, type) ->
        "fk_" + ForeignKeyResolver.camelCaseToSnakeCase().resolveColumnName(field, type)));
```

#### Interface-Based Implementation

For more complex or reusable naming strategies, implement the resolver interfaces as standalone classes.
This approach is preferable when the resolver contains non-trivial logic, needs to be tested independently, or is shared across multiple ORM template instances.

The examples below show three resolvers working together: a table name resolver that adds a prefix based on the entity's package, a column name resolver that marks encrypted columns, and a foreign key resolver that uses the target table name instead of the field name.

[Kotlin]
```kotlin
class PrefixedTableNameResolver : TableNameResolver {
    override fun resolveTableName(type: RecordType): String {
        val pkg = type.type().packageName
        val prefix = if (pkg.contains(".admin")) "admin_" else ""
        val tableName = TableNameResolver.camelCaseToSnakeCase().resolveTableName(type)
        return "$prefix$tableName"
    }
}

class EncryptedColumnNameResolver : ColumnNameResolver {
    override fun resolveColumnName(field: RecordField): String {
        val columnName = ColumnNameResolver.camelCaseToSnakeCase().resolveColumnName(field)
        return if (field.isAnnotationPresent(Encrypted::class.java)) "enc_$columnName" else columnName
    }
}

class TargetTableForeignKeyResolver : ForeignKeyResolver {
    override fun resolveColumnName(field: RecordField, type: RecordType): String {
        val targetTable = TableNameResolver.camelCaseToSnakeCase().resolveTableName(type)
        return "${targetTable}_fk"
    }
}
```

Register custom implementations:

```kotlin
val orm = dataSource.orm { decorator ->
    decorator
        .withTableNameResolver(PrefixedTableNameResolver())
        .withColumnNameResolver(EncryptedColumnNameResolver())
        .withForeignKeyResolver(TargetTableForeignKeyResolver())
}
```

[Java]
```java
public class PrefixedTableNameResolver implements TableNameResolver {
    @Override
    public String resolveTableName(RecordType type) {
        String pkg = type.type().getPackageName();
        String prefix = pkg.contains(".admin") ? "admin_" : "";
        String tableName = TableNameResolver.camelCaseToSnakeCase()
            .resolveTableName(type);
        return prefix + tableName;
    }
}

public class EncryptedColumnNameResolver implements ColumnNameResolver {
    @Override
    public String resolveColumnName(RecordField field) {
        String columnName = ColumnNameResolver.camelCaseToSnakeCase()
            .resolveColumnName(field);
        if (field.isAnnotationPresent(Encrypted.class)) {
            return "enc_" + columnName;
        }
        return columnName;
    }
}

public class TargetTableForeignKeyResolver implements ForeignKeyResolver {
    @Override
    public String resolveColumnName(RecordField field, RecordType type) {
        String targetTable = TableNameResolver.camelCaseToSnakeCase()
            .resolveTableName(type);
        return targetTable + "_fk";
    }
}
```

Register custom implementations:

```java
var orm = ORMTemplate.of(dataSource, decorator -> decorator
    .withTableNameResolver(new PrefixedTableNameResolver())
    .withColumnNameResolver(new EncryptedColumnNameResolver())
    .withForeignKeyResolver(new TargetTableForeignKeyResolver()));
```

### Per-Entity and Per-Field Overrides

Annotations on individual entities and fields always take precedence over configured resolvers. This means you can set a global naming convention through resolvers and still override specific tables or columns where the convention does not apply (for example, a legacy table with a non-standard name).

Use `@DbTable` to override a table name, `@DbColumn` to override a column name, and the string parameter on `@PK` or `@FK` to override their respective column names. See [Entities: Custom Table and Column Names](entities.md#custom-table-and-column-names) for details and examples.

### Identifier Escaping

When a table or column name conflicts with a SQL reserved word, the database will reject the query unless the identifier is escaped. Storm automatically detects and escapes common reserved words. For cases that are not caught automatically, you can force escaping with the `escape` parameter on `@DbTable` or `@DbColumn`.
Storm uses the escaping syntax appropriate for the active database dialect (double quotes for most databases, square brackets for SQL Server).

[Kotlin]
```kotlin
@DbTable("order", escape = true) // "order" is a reserved word
data class Order(
    @PK val id: Int = 0,
    @DbColumn("select", escape = true) val select: String // "select" is reserved
) : Entity<Int>
```

[Java]
```java
@DbTable(value = "order", escape = true) // "order" is a reserved word
record Order(
    @PK Integer id,
    @DbColumn(value = "select", escape = true) String select // "select" is reserved
) implements Entity<Integer> {}
```

---

## Scrolling Properties

### st.orm.scrollable.maxSize

Sets the maximum window size that a deserialized cursor (via `Scrollable.fromCursor()`) is allowed to carry. This is a safety limit that prevents untrusted clients from requesting excessively large pages through cursor manipulation. The limit is only enforced when deserializing a cursor string; programmatic usage via `Scrollable.of()` is not restricted.

This property is a JVM system property only; it is not configurable through `StormConfig` or `application.yml`, because it applies at the `Scrollable` record level in storm-foundation, before any ORM template is created.

```bash
java -Dst.orm.scrollable.maxSize=5000 -jar myapp.jar
```

Repository or API layers may choose to enforce stricter per-endpoint limits on top of this framework-level bound.

---

## Dirty Checking Properties

These properties control how Storm detects changes to entities during update operations. Dirty checking determines whether an UPDATE statement is sent to the database and, if so, which columns it includes. Choosing the right mode depends on your entity size, update frequency, and whether you use immutable records or mutable objects.

See [Dirty Checking](dirty-checking.md) for a detailed explanation of each strategy.

### storm.update.default_mode

Controls the default update mode for entities that don't have an explicit `@DynamicUpdate` annotation. This setting applies globally and can be overridden per entity with the `@DynamicUpdate` annotation.

| Value | Behavior |
|-------|----------|
| `OFF` | No dirty checking. Always update all columns. |
| `ENTITY` | Skip UPDATE if entity unchanged; full-row update if any field changed. |
| `FIELD` | Update only the columns that actually changed. |

### storm.update.dirty_check

Controls how Storm compares field values to detect changes. The choice between `INSTANCE` and `VALUE` depends on whether your entities are truly immutable. Immutable records (the recommended pattern) work correctly with `INSTANCE` because unchanged fields share the same object reference. If your entities contain mutable objects that could change without creating a new reference, use `VALUE` to compare by `equals()` instead.

| Value | Behavior |
|-------|----------|
| `INSTANCE` | Compare by reference identity. Fast, works well with immutable records. |
| `VALUE` | Compare using `equals()`. More accurate for mutable objects. |

### storm.update.max_shapes

In `FIELD` mode, each unique combination of changed columns produces a distinct UPDATE statement shape (e.g., updating only `email` is a different shape than updating `email` and `name`). Each shape occupies a slot in the database's prepared statement cache. This property caps the number of shapes per entity type. Once the limit is reached, Storm falls back to full-row updates to prevent statement cache bloat.

Lower values (3-5) are better for applications with many entity types, where the total number of cached statements across all entities matters. Higher values (10-20) allow more granular updates but increase statement cache pressure.

---

## Entity Cache Properties

Storm maintains a transaction-scoped entity cache that ensures the same database row maps to the same object instance within a single transaction. This property controls the cache's memory behavior.
See [Entity Cache](entity-cache.md) for details on how the cache interacts with identity guarantees and garbage collection.

### storm.entity_cache.retention

Controls how long entities are retained in the cache during a transaction. The choice is a trade-off between memory consumption and dirty-checking reliability. With `default`, entities are retained for the duration of the transaction, which provides reliable dirty checking while still allowing the JVM to reclaim entries under memory pressure. With `light`, the JVM can reclaim cached entities as soon as your code no longer holds a reference, which reduces memory usage but may cause dirty-check cache misses.

| Value | Behavior |
|-------|----------|
| `default` | Entities retained for the transaction duration. Reliable dirty checking. The JVM may still reclaim entries under memory pressure. |
| `light` | Entities can be garbage collected when no longer referenced by your code. Memory-efficient but may cause full-row updates due to cache misses. |

---

## Template Cache Properties

Storm compiles SQL templates into reusable prepared statement shapes. This compilation step resolves aliases, derives joins, and expands column lists. Caching the compiled result avoids repeating this work for the same query pattern with different parameter values.

See [SQL Templates](sql-templates.md#compilation-caching) for details on how compilation and caching work.

### storm.template_cache.size

Sets the maximum number of compiled templates to keep in the cache. When the cache is full, the least recently used templates are evicted.

The default of 2048 is sufficient for most applications. A typical application uses a few hundred distinct query patterns. Increase this value if you have many distinct query patterns (for example, from dynamically constructed queries) and observe cache eviction in your metrics. Each cached entry is small (the compiled SQL structure and metadata), so increasing the limit has minimal memory impact.
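To build intuition for the bounded, least-recently-used eviction described above, here is a generic LRU sketch based on an access-ordered `LinkedHashMap` (an illustration of the eviction policy only, not Storm's internal template cache):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache sketch: an access-ordered LinkedHashMap that evicts the
// least recently used entry once maxSize is exceeded. Illustrative only.
public class LruCacheSketch<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public LruCacheSketch(int maxSize) {
        super(16, 0.75f, true); // accessOrder = true: get() refreshes recency
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize; // evict least recently used entry
    }

    public static void main(String[] args) {
        var cache = new LruCacheSketch<String, String>(2);
        cache.put("q1", "compiled-1");
        cache.put("q2", "compiled-2");
        cache.get("q1");               // touch q1, so q2 becomes eldest
        cache.put("q3", "compiled-3"); // evicts q2
        System.out.println(cache.keySet()); // [q1, q3]
    }
}
```

The same trade-off described for `storm.template_cache.size` applies to any LRU: a larger bound means fewer recompilations of recurring query patterns at a modest memory cost.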
---

## Validation Properties

Storm provides two independent validation subsystems, each controlled by a mode property. Record validation checks that your entity and projection definitions are structurally correct (valid primary key types, proper annotation usage, no circular dependencies). Schema validation compares your definitions against the actual database schema to catch mismatches before they surface as runtime errors.

### storm.validation.record_mode

Controls whether record (structural) validation runs when Storm first encounters an entity or projection type.

| Value | Behavior |
|-------|----------|
| `fail` | Validation errors cause startup to fail with a `PersistenceException` (default). |
| `warn` | Errors are logged as warnings; startup continues. |
| `none` | Record validation is skipped entirely. |

### storm.validation.schema_mode

Controls whether schema validation runs at startup (Spring Boot only; for programmatic use, see [Validation](validation.md#programmatic-api)).

| Value | Behavior |
|-------|----------|
| `none` | Schema validation is skipped (default). |
| `warn` | Mismatches are logged at WARN level; startup continues. |
| `fail` | Mismatches cause startup to fail with a `PersistenceException`. |

### storm.validation.strict

When `true`, schema validation warnings (type narrowing, nullability mismatches, missing unique/foreign key constraints) are promoted to errors. This is useful in CI environments where any schema drift should be caught.

See [Validation](validation.md) for a complete list of what each validation level checks.

---

## Interpolation Safety

Storm's Kotlin API uses the Storm compiler plugin to automatically wrap string interpolations inside SQL template lambdas, ensuring all values are parameterized and SQL injection safe. When a `TemplateBuilder` lambda runs without the compiler plugin and without any explicit `t()` or `interpolate()` calls, Storm cannot distinguish a pure SQL literal (safe) from a string with accidentally concatenated interpolations (a SQL injection risk). The `storm.validation.interpolation_mode` property controls how Storm handles this situation.

### storm.validation.interpolation_mode

| Value | Behavior |
|-------|----------|
| `warn` | Logs a warning at `WARNING` level (default). |
| `fail` | Throws an `IllegalStateException`, preventing execution of potentially unsafe templates. |
| `none` | Disables the check entirely. Use only when you are certain the compiler plugin is not needed. |

See [String Templates](string-templates.md) for setup instructions for the compiler plugin.

> **Tip:** Storm exposes runtime metrics for template compilation, dirty checking, and entity cache behavior through JMX MBeans. See [Metrics](metrics.md) for details.

---

## Per-Entity Configuration

System properties set global defaults, but individual entities often have different update characteristics. An entity with a large text column benefits from field-level updates, while a small entity with three columns does not. Per-entity annotations let you tune update behavior where it matters most, without changing the global default.

### @DynamicUpdate

Override the update mode for a specific entity. This is most valuable for entities with large or variable-size columns where sending unchanged data wastes bandwidth.

[Kotlin]
```kotlin
@DynamicUpdate(FIELD)
data class Article(
    @PK val id: Int = 0,
    val title: String,
    val content: String // Large column - benefits from field-level updates
) : Entity<Int>
```

[Java]
```java
@DynamicUpdate(FIELD)
record Article(
    @PK Integer id,
    @Nonnull String title,
    @Nonnull String content // Large column - benefits from field-level updates
) implements Entity<Integer> {}
```

### Dirty Check Strategy Per Entity

You can also override the dirty check strategy on a per-entity basis. This is useful when a specific entity contains mutable objects that require value-based comparison, while the rest of your application uses the default instance-based comparison.

```kotlin
@DynamicUpdate(value = FIELD, dirtyCheck = VALUE)
data class User(
    @PK val id: Int = 0,
    val email: String
) : Entity<Int>
```

---

## Configuration Precedence

Entity-level annotations take the highest precedence, followed by explicit `StormConfig` values, then system properties, and finally built-in defaults:

```
1. @DynamicUpdate annotation on entity class
   ↓ (if not present)
2. StormConfig (explicit value passed to factory)
   ↓ (if not set)
3. System property (-Dstorm.*)
   ↓ (if not set)
4. Built-in default
```

When using the Spring Boot Starter, `StormConfig` is built from `application.yml` properties. Properties not set in YAML fall through to system properties and then to built-in defaults.

---

## Recommended Configurations

The following profiles cover common scenarios. Start with the defaults and adjust only when profiling reveals a specific bottleneck.

### Default (Most Applications)

The built-in defaults work well for most applications. No configuration needed:

- `ENTITY` mode skips the UPDATE when nothing changed, but sends all columns when any field changes
- `INSTANCE` comparison is fast and correct with immutable records/data classes
- `default` cache retention provides reliable dirty checking

### High-Write Applications

For applications with frequent updates to large entities, field-level updates reduce the amount of data sent to the database on each UPDATE. This matters most when entities have large text or binary columns where sending unchanged data wastes network bandwidth and database I/O.

```bash
java -Dstorm.update.default_mode=FIELD \
     -Dstorm.update.max_shapes=10 \
     -jar myapp.jar
```

This reduces network bandwidth by only sending changed columns.
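To make the `max_shapes` budget concrete, here is a small, self-contained sketch (not Storm's internal code) of the idea: each distinct set of changed columns is one UPDATE "shape", and once the per-entity budget is exhausted, the update falls back to all columns:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// Sketch of the max_shapes idea: each distinct changed-column set is one
// UPDATE statement shape. Once the budget is exhausted, fall back to a
// full-row update instead of creating yet another shape. Illustrative only.
public class ShapeBudgetSketch {
    private final int maxShapes;
    private final Set<Set<String>> shapes = new HashSet<>();

    public ShapeBudgetSketch(int maxShapes) {
        this.maxShapes = maxShapes;
    }

    /** Returns the columns to include in the UPDATE, or all columns on fallback. */
    public List<String> columnsToUpdate(List<String> changed, List<String> allColumns) {
        Set<String> shape = new TreeSet<>(changed); // order-insensitive shape key
        if (shapes.contains(shape) || shapes.size() < maxShapes) {
            shapes.add(shape);
            return changed;     // field-level update, shape within budget
        }
        return allColumns;      // budget exhausted: full-row update
    }
}
```

With a budget of 1, the first changed-column set gets a dedicated statement shape and any different combination afterwards updates all columns — the same fallback behavior described for `storm.update.max_shapes`.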
### Memory-Constrained Bulk Operations

For transactions that load a very large number of entities (bulk migrations, large reports), light cache retention allows the JVM to reclaim cached entities sooner. The trade-off is that dirty checking may encounter cache misses, resulting in full-row updates.

```bash
java -Dstorm.entity_cache.retention=light \
     -jar myapp.jar
```

This reduces memory usage at the cost of less efficient dirty checking.

### Production Hardening

For production environments, consider enabling strict validation and interpolation safety checks. These settings catch configuration issues and potential security problems that should not reach production:

```bash
java -Dstorm.validation.schema_mode=fail \
     -Dstorm.validation.interpolation_mode=fail \
     -jar myapp.jar
```

- `storm.validation.schema_mode=fail` catches entity-to-schema mismatches at startup rather than at runtime.
- `storm.validation.interpolation_mode=fail` prevents execution of templates that were not processed by the compiler plugin and do not use explicit `t()` calls, protecting against accidental SQL injection.

During development, the defaults (`schema_mode=none`, `interpolation_mode=warn`) provide a smoother experience: schema validation is skipped (since the schema may be evolving), and missing compiler plugin usage is logged as a warning rather than blocking execution.

========================================
## Source: sql-logging.md
========================================

# SQL Logging

When debugging performance issues or tracing application behavior, you often need visibility into the SQL statements your ORM generates. Standard JDBC logging shows raw statements with `?` placeholders, giving you no context about which repository method triggered the query or what the actual parameter values were.

Storm provides the `@SqlLog` annotation for declarative SQL logging on repositories. Place it on a repository interface or an individual method, and Storm will log every SQL statement that method generates, including which method triggered it. No boilerplate, no manual interceptor setup, no dependency on a specific logging framework.

`@SqlLog` uses the JDK Platform Logging API (`System.Logger`), which automatically bridges to whatever logging backend is on your classpath (SLF4J, Log4j2, java.util.logging). This means it works out of the box in any environment.

---

## Annotating a Repository

The simplest way to enable SQL logging is to annotate the repository interface itself. This logs every SQL statement generated by any method in the repository.

[Kotlin]
```kotlin
@SqlLog
interface UserRepository : EntityRepository<User, Int> {
    fun findByEmail(email: String): User? = find(User_.email eq email)
    fun findActiveUsers(): List<User> = findAll(User_.active eq true)
}
```

[Java]
```java
@SqlLog
public interface UserRepository extends EntityRepository<User, Integer> {
    default Optional<User> findByEmail(String email) {
        return select(User_.email.eq(email)).getOptionalResult();
    }
    default List<User> findActiveUsers() {
        return select(User_.active.eq(true)).getResultList();
    }
}
```

When you call `userRepository.findByEmail("alice@example.com")`, the log output looks like this:

```
INFO com.example.UserRepository - [SQL] (UserRepository.findByEmail(String))
SELECT u.id, u.email, u.name, u.active
FROM user u
WHERE u.email = ?
```

The log message includes the repository type and method name, making it easy to trace which code path triggered the query.

---

## Annotating Individual Methods

When you only need logging for specific operations (for example, a complex query you are developing or debugging), annotate the method instead of the entire interface. This avoids noisy output from methods you are not interested in.

[Kotlin]
```kotlin
interface OrderRepository : EntityRepository<Order, Int> {
    // No logging
    fun findById(id: Int): Order? = find(Order_.id eq id)

    // Logged
    @SqlLog
    fun findExpiredOrders(cutoff: LocalDate): List<Order> = findAll(Order_.expiresAt lt cutoff)
}
```

[Java]
```java
public interface OrderRepository extends EntityRepository<Order, Integer> {
    // No logging
    default Optional<Order> findById(int id) {
        return select(Order_.id.eq(id)).getOptionalResult();
    }

    // Logged
    @SqlLog
    default List<Order> findExpiredOrders(LocalDate cutoff) {
        return select(Order_.expiresAt.lt(cutoff)).getResultList();
    }
}
```

Method-level annotations override type-level annotations. If the interface has `@SqlLog` but a specific method has `@SqlLog(level = Level.DEBUG)`, the method's configuration takes precedence.

---

## Debugging with Inline Parameters

By default, SQL log output shows parameterized queries with `?` placeholders, just as they are sent to the database. This is useful for understanding query structure, but when you are debugging a specific issue, you often want to see the actual values.

Setting `inlineParameters = true` replaces the `?` placeholders with the actual bound values. This produces SQL you can copy directly into a database tool and execute, which makes it invaluable for reproducing issues.

[Kotlin]
```kotlin
@SqlLog(inlineParameters = true)
interface UserRepository : EntityRepository<User, Int> {
    fun findByEmail(email: String): User? = find(User_.email eq email)
}
```

[Java]
```java
@SqlLog(inlineParameters = true)
public interface UserRepository extends EntityRepository<User, Integer> {
    default Optional<User> findByEmail(String email) {
        return select(User_.email.eq(email)).getOptionalResult();
    }
}
```

Compare the output:

| Setting | Output |
|---------|--------|
| `inlineParameters = false` (default) | `SELECT u.id, u.email FROM user u WHERE u.email = ?` |
| `inlineParameters = true` | `SELECT u.id, u.email FROM user u WHERE u.email = 'alice@example.com'` |

With inlined parameters, the logged SQL is a complete, executable statement.
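As a rough illustration only, this kind of parameter inlining can be sketched as follows (a hypothetical helper, not Storm's implementation; remember that the statement actually sent to the database keeps its `?` placeholders):

```java
import java.util.List;

// Sketch: replace each '?' placeholder with a quoted literal, for log
// output only. Strings are single-quoted with embedded quotes doubled;
// other values use their toString() form. Hypothetical helper, not Storm code.
public class InlineParamsSketch {
    public static String inline(String sql, List<Object> params) {
        StringBuilder out = new StringBuilder();
        int p = 0;
        for (char c : sql.toCharArray()) {
            if (c == '?' && p < params.size()) {
                Object v = params.get(p++);
                if (v instanceof String s) {
                    out.append('\'').append(s.replace("'", "''")).append('\'');
                } else {
                    out.append(v);
                }
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }
}
```

For example, `inline("SELECT * FROM user WHERE email = ?", List.of("alice@example.com"))` yields `SELECT * FROM user WHERE email = 'alice@example.com'`, matching the shape of the logged output shown above.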
You can paste it directly into your database client to inspect the result set, check the query plan with `EXPLAIN`, or verify that the WHERE clause matches the rows you expect. This is especially helpful when debugging queries with multiple parameters, date ranges, or complex filter expressions where it is not obvious which `?` corresponds to which value.

> **Important:** `inlineParameters` only affects the log output. The actual SQL sent to the database always uses parameterized queries with `?` placeholders, regardless of this setting. Storm never sends inlined parameter values to the database, so there is no risk of SQL injection or behavioral changes. This is purely a logging convenience.

> **Tip:** Use `inlineParameters = true` during development and debugging. For production logging, prefer the default (`false`) to keep log output concise and avoid accidentally logging sensitive data such as passwords or personal information.

---

## Controlling Log Level

The `level` attribute controls the `System.Logger.Level` used for log output. If the configured logger is not enabled for the specified level, the interceptor is skipped entirely, so there is zero overhead when logging is disabled.

[Kotlin]
```kotlin
@SqlLog(level = System.Logger.Level.DEBUG)
interface UserRepository : EntityRepository<User, Int> { ... }
```

[Java]
```java
@SqlLog(level = System.Logger.Level.DEBUG)
public interface UserRepository extends EntityRepository<User, Integer> { ... }
```

The available levels follow the standard `System.Logger.Level` enum:

| Level | Typical use |
|-------|-------------|
| `TRACE` | Very detailed diagnostics, high volume |
| `DEBUG` | Development-time query inspection |
| `INFO` | Default; visible in standard log output |
| `WARNING` | Highlight potentially problematic queries |

---

## Custom Logger Name

By default, the logger name is the fully qualified name of the repository interface (e.g., `com.example.UserRepository`). This works well with standard logging configuration where you can enable or disable logging per package.

If you need a different logger name, for example, to group all SQL logs under a single category, use the `name` attribute:

[Kotlin]
```kotlin
@SqlLog(name = "sql")
interface UserRepository : EntityRepository<User, Int> { ... }
```

[Java]
```java
@SqlLog(name = "sql")
public interface UserRepository extends EntityRepository<User, Integer> { ... }
```

This logs to a logger named `sql` instead of the repository's class name, so you can configure a single logger to capture (or silence) SQL output from all repositories at once.

---

## Attribute Reference

| Attribute | Type | Default | Description |
|-----------|------|---------|-------------|
| `inlineParameters` | `boolean` | `false` | Replace `?` placeholders with actual parameter values in log output |
| `level` | `System.Logger.Level` | `INFO` | Log level for SQL output; logging is skipped entirely if the level is not enabled |
| `name` | `String` | `""` (repository class name) | Custom logger name; useful for grouping all SQL logging under one category |

---

## Where It Works

`@SqlLog` is processed by the repository proxy, so it works everywhere repositories are used:

- Repositories obtained via `orm.repository()` (standalone usage, no framework required)
- Spring-managed repository beans (auto-configured through the Spring Boot starter)

No additional configuration or dependencies are needed beyond the Storm dependency you already have.

---

## Tips

1. **Start with type-level annotation** during development to see all queries a repository generates, then narrow down to method-level once you know which queries to focus on.
2. **Use `inlineParameters = true` for debugging** to get copy-pasteable SQL. Switch back to `false` before committing to avoid leaking sensitive values in production logs.
3. **Set level to `DEBUG` or `TRACE`** for repositories in production code, so SQL logging is available on demand through log level configuration without code changes.
4. **Combine with a custom logger name** like `@SqlLog(name = "sql")` to create a single switch for all SQL logging across your application.

========================================
## Source: metrics.md
========================================

# Metrics

Storm exposes runtime metrics through JMX (Java Management Extensions) MBeans. These metrics give you visibility into template compilation performance, dirty checking behavior, and entity cache efficiency. All MBeans are registered automatically when Storm initializes and aggregate across all `ORMTemplate` instances in the JVM.

To view these metrics, connect to the JVM with any JMX client (JConsole, VisualVM, or your monitoring platform) and navigate to the `st.orm` domain. If your application uses Spring Boot Actuator, the MBeans are also accessible through Actuator's JMX endpoint.

---

## Template Metrics

**MBean name:** `st.orm:type=TemplateMetrics`

Storm compiles SQL templates into reusable prepared statement shapes. This compilation step resolves aliases, derives joins, and expands column lists. The template cache avoids repeating this work for the same query pattern with different parameter values. These metrics help you understand whether the cache is effective.
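These attributes can also be read programmatically from the platform MBean server. The sketch below uses the MBean name documented on this page; the helper itself and its fallback behavior are illustrative, not part of Storm's API:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Sketch: read a numeric attribute from the st.orm TemplateMetrics MBean
// via the platform MBean server. Returns the given fallback when the MBean
// is not registered (e.g. before Storm has initialized in this JVM).
public class TemplateMetricsReader {
    public static double readOrDefault(String attribute, double fallback) {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            ObjectName name = new ObjectName("st.orm:type=TemplateMetrics");
            return ((Number) server.getAttribute(name, attribute)).doubleValue();
        } catch (Exception e) {
            return fallback; // MBean absent or attribute unreadable
        }
    }
}
```

For example, `readOrDefault("HitRatioPercent", -1)` could feed a health check or a periodic metrics exporter alongside your existing monitoring.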
### Available Attributes

| Attribute | Description |
|-----------|-------------|
| `Requests` | Total number of template requests |
| `Hits` | Number of cache hits |
| `Misses` | Number of cache misses |
| `HitRatioPercent` | Hit ratio as a percentage (0-100) |
| `AvgRequestMicros` | Average request duration in microseconds |
| `MaxRequestMicros` | Maximum request duration in microseconds |
| `AvgHitMicros` | Average cache hit duration in microseconds |
| `MaxHitMicros` | Maximum cache hit duration in microseconds |
| `AvgMissMicros` | Average cache miss duration in microseconds |
| `MaxMissMicros` | Maximum cache miss duration in microseconds |

A high `HitRatioPercent` (above 95%) indicates the cache is working well. If you see frequent misses, your application may have many dynamically constructed query patterns. Consider increasing the cache size via `storm.template_cache.size` (see [Configuration](configuration.md#template-cache-properties)) or reducing the number of distinct query shapes.

### Operations

| Operation | Description |
|-----------|-------------|
| `reset()` | Resets all counters to zero |

---

## Dirty Check Metrics

**MBean name:** `st.orm:type=DirtyCheckMetrics`

Dirty checking determines whether an UPDATE statement is sent to the database and which columns it includes. These metrics aggregate across all dirty checks performed by entity repositories, giving you visibility into how often updates are skipped, which resolution paths are taken, and how your `max_shapes` budget is being used.

For background on how dirty checking works, see [Dirty Checking](dirty-checking.md).

### Entity-Level Counters

| Attribute | Description |
|-----------|-------------|
| `Checks` | Total number of dirty checks performed |
| `Clean` | Number of checks that found the entity unchanged (update skipped) |
| `Dirty` | Number of checks that found the entity changed (update triggered) |
| `CleanRatioPercent` | Percentage of checks where the update was skipped (0-100) |

A high `CleanRatioPercent` indicates that many updates are avoided because the entity has not changed since it was read. If this ratio is low even though your application frequently calls `update()` on unmodified entities, dirty checking is not catching the unchanged entities; consider reviewing your update logic.

### Resolution Path Counters

| Attribute | Description |
|-----------|-------------|
| `IdentityMatches` | Checks resolved by identity comparison (`cached == entity`), the cheapest path |
| `CacheMisses` | Checks where no cached baseline was available, causing a fallback to a full-entity update |

High `CacheMisses` may indicate that the entity cache is being cleared prematurely. Consider switching from `light` to `default` cache retention if cache misses are frequent. See [Entity Cache](entity-cache.md) for details.

### Mode and Strategy Breakdown

| Attribute | Description |
|-----------|-------------|
| `EntityModeChecks` | Checks that used `ENTITY` update mode (full-row UPDATE on any change) |
| `FieldModeChecks` | Checks that used `FIELD` update mode (column-level UPDATE) |
| `InstanceStrategyChecks` | Checks that used `INSTANCE` strategy (identity-based field comparison) |
| `ValueStrategyChecks` | Checks that used `VALUE` strategy (equality-based field comparison) |

### Field-Level Counters

| Attribute | Description |
|-----------|-------------|
| `FieldComparisons` | Total number of individual field comparisons across all dirty checks |
| `FieldClean` | Number of field comparisons where the field was unchanged |
| `FieldDirty` | Number of field comparisons where the field was different |

### Shape Counters

| Attribute | Description |
|-----------|-------------|
| `EntityTypes` | Number of distinct entity types that have generated UPDATE shapes |
| `Shapes` | Total number of distinct UPDATE statement shapes across all entity types |
| `ShapesPerEntity` | Map of entity type name to the number of shapes for that type |

Compare `ShapesPerEntity` values against the configured `storm.update.max_shapes` to determine if any entity type is exhausting its shape budget. When the limit is reached, Storm falls back to full-row updates for that entity type.
### Per-Entity Configuration

| Attribute | Description |
|-----------|-------------|
| `UpdateModePerEntity` | Map of entity type name to effective update mode (`FIELD`, `ENTITY`, `OFF`) |
| `DirtyCheckPerEntity` | Map of entity type name to effective dirty check strategy (`INSTANCE`, `VALUE`) |
| `MaxShapesPerEntity` | Map of entity type name to configured max shapes limit |

### Operations

| Operation | Description |
|-----------|-------------|
| `reset()` | Resets all counters to zero |

---

## Entity Cache Metrics

**MBean name:** `st.orm:type=EntityCacheMetrics`

Storm maintains a transaction-scoped entity cache that ensures the same database row maps to the same object instance within a single transaction. These metrics aggregate across all transaction-scoped entity caches, providing visibility into cache hit rates, eviction patterns, and retention behavior.

For background on how the cache works, see [Entity Cache](entity-cache.md).

### Lookup Counters

| Attribute | Description |
|-----------|-------------|
| `Gets` | Total number of `get()` calls (cache lookups) |
| `GetHits` | Number of lookups that returned a cached entity |
| `GetMisses` | Number of lookups where no cached entity was available |
| `GetHitRatioPercent` | Get hit ratio as a percentage (0-100) |

A low `GetHitRatioPercent` in combination with frequent `update()` calls suggests that entities are being evicted before they can serve as dirty-check baselines. Consider switching to `default` cache retention.
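The `get()`/`intern()` pattern behind these counters can be illustrated with a minimal identity-map sketch (a generic illustration, not Storm's cache implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a transaction-scoped identity map: intern() stores the first
// instance seen for a (type, id) key and returns the canonical one; get()
// returns the cached baseline or null on a miss. Illustrative only.
public class IdentityMapSketch {
    private final Map<String, Object> cache = new HashMap<>();

    private static String key(Class<?> type, Object id) {
        return type.getName() + "#" + id;
    }

    @SuppressWarnings("unchecked")
    public <T> T intern(T entity, Object id) {
        return (T) cache.computeIfAbsent(key(entity.getClass(), id), k -> entity);
    }

    public Object get(Class<?> type, Object id) {
        return cache.get(key(type, id));
    }
}
```

Interning the same row twice returns the same instance, which is what makes the cheap `cached == entity` identity check possible during dirty checking.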
### Intern Counters

| Attribute | Description |
|-----------|-------------|
| `Interns` | Total number of `intern()` calls (cache insertions) |
| `InternHits` | Number of intern calls that reused an existing canonical instance |
| `InternMisses` | Number of intern calls that stored a new or updated instance |
| `InternHitRatioPercent` | Intern hit ratio as a percentage (0-100) |

### Lifecycle Counters

| Attribute | Description |
|-----------|-------------|
| `Removals` | Number of cache entries removed due to entity mutations (insert, update, delete) |
| `Clears` | Number of full cache clears |
| `Evictions` | Number of cache entries cleaned up after garbage collection |

High `Evictions` values indicate that entities are being garbage collected while still in the cache. This is expected with `light` retention but unusual with `default` retention unless the JVM is under memory pressure.

### Per-Entity Configuration

| Attribute | Description |
|-----------|-------------|
| `RetentionPerEntity` | Map of entity type name to effective retention mode (`DEFAULT`, `LIGHT`) |

### Operations

| Operation | Description |
|-----------|-------------|
| `reset()` | Resets all counters to zero |

========================================
## Source: security.md
========================================

# Security

Storm is designed with security as a structural property rather than an afterthought. The framework's template-based query model makes SQL injection difficult by construction, and its stateless entity design reduces the surface area for common ORM-related vulnerabilities.

This page covers how Storm prevents SQL injection, the escape hatches that exist and when to use them, and patterns for building audit trails and access control into your application.

---

## SQL Injection Prevention

### Parameterized by Construction

The most important security property of Storm is that **all values are parameterized by default**.
When you write a query using Storm's template API, values are never concatenated into the SQL string. They are always sent as JDBC parameters:

[Kotlin]
```kotlin
// The 'email' value is sent as a JDBC parameter, not interpolated into SQL.
val user = userRepository.find(User_.email eq email)
```

Generated SQL:

```sql
SELECT ... FROM "user" WHERE "email" = ?
```

[Java]
```java
// The 'email' value is sent as a JDBC parameter, not interpolated into SQL.
User user = userRepository.find(User_.email.eq(email));
```

Generated SQL:

```sql
SELECT ... FROM "user" WHERE "email" = ?
```

This applies to all Storm APIs, including:

- Repository methods (`find`, `findAll`, `select`, `insert`, `update`, `remove`, `delete`)
- Query builder operations (`.where()`, `.set()`, `.values()`)
- SQL templates with embedded expressions

When using SQL templates directly, embedded values are also parameterized:

[Kotlin]
```kotlin
// Both 'status' and 'minAge' become JDBC parameters.
val users = orm.query("SELECT * FROM user WHERE status = $status AND age > $minAge")
    .getResultList(User::class)
```

[Java]
```java
// Both 'status' and 'minAge' become JDBC parameters.
List<User> users = orm.query(RAW."""
        SELECT * FROM user
        WHERE status = \{status} AND age > \{minAge}""")
    .getResultList(User.class);
```

There is no way to accidentally create an injection vulnerability through normal Storm API usage.

### How It Works

Storm's SQL template processor separates the query structure (the SQL text with placeholders) from the values (the parameters). The JDBC driver receives the SQL template and the parameter values independently, so the database never interprets user-supplied data as SQL syntax.

```
Application Code           Storm Template Engine           JDBC Driver
      │                             │                           │
      │  query with values          │                           │
      ├────────────────────────────▶│                           │
      │                             │  SQL with ? placeholders  │
      │                             ├──────────────────────────▶│
      │                             │  Parameter values         │
      │                             ├──────────────────────────▶│
      │                             │                           │
```

---

## The unsafe() Escape Hatch

Storm includes safety checks that prevent potentially dangerous operations. For example, executing a `DELETE` or `UPDATE` without a `WHERE` clause will throw a `PersistenceException` because this would affect every row in the table.

When you intentionally need to perform such an operation, call `unsafe()` on the query:

[Kotlin]
```kotlin
// This would throw: "DELETE without WHERE clause is potentially unsafe."
// userRepository.delete().executeUpdate()

// Explicitly marking as unsafe allows the operation.
orm.entity(User::class).delete().unsafe().executeUpdate()
```

[Java]
```java
// This would throw: "DELETE without WHERE clause is potentially unsafe."
// userRepository.delete().executeUpdate();

// Explicitly marking as unsafe allows the operation.
orm.entity(User.class).delete().unsafe().executeUpdate();
```

### When unsafe() Is Appropriate

- **Test setup and teardown:** Clearing tables between tests.
- **Data migration scripts:** Bulk operations that intentionally affect all rows.
- **Administrative operations:** One-time cleanup or maintenance tasks.

### When unsafe() Is Not Appropriate

- **Any operation involving user-supplied input.** The `unsafe()` marker disables Storm's safety checks for the query shape, but it does not change how parameters are handled. However, using `unsafe()` in a code path that processes user input is a design smell that suggests the operation should be restructured.

---

## Audit Trail Patterns

Storm's `EntityCallback` interface provides lifecycle hooks that execute before and after every mutation. These hooks are ideal for building audit trails because they are invoked consistently regardless of which code path triggers the mutation.
### Timestamped Auditing

[Kotlin]
```kotlin
@DbTable("document")
data class Document(
    @PK val id: Int,
    val title: String,
    val createdAt: Instant?,
    val updatedAt: Instant?
) : Entity

class AuditCallback : EntityCallback<Entity<*>> {
    override fun beforeInsert(entity: Entity<*>): Entity<*> {
        if (entity is Document) {
            val now = Instant.now()
            return entity.copy(createdAt = now, updatedAt = now)
        }
        return entity
    }

    override fun beforeUpdate(entity: Entity<*>): Entity<*> {
        if (entity is Document) {
            return entity.copy(updatedAt = Instant.now())
        }
        return entity
    }
}
```

Register the callback when creating the ORM template:

```kotlin
val orm = ORMTemplate.of(dataSource)
    .withEntityCallback(AuditCallback())
```

Or with Spring Boot, declare it as a bean and it will be auto-registered:

```kotlin
@Bean
fun auditCallback(): EntityCallback<*> = AuditCallback()
```

[Java]
```java
@DbTable("document")
public record Document(
    @PK int id,
    String title,
    Instant createdAt,
    Instant updatedAt
) implements Entity {}

public class AuditCallback implements EntityCallback<Entity<?>> {
    @Override
    public Entity<?> beforeInsert(Entity<?> entity) {
        if (entity instanceof Document document) {
            var now = Instant.now();
            return new Document(document.id(), document.title(), now, now);
        }
        return entity;
    }

    @Override
    public Entity<?> beforeUpdate(Entity<?> entity) {
        if (entity instanceof Document document) {
            return new Document(document.id(), document.title(),
                document.createdAt(), Instant.now());
        }
        return entity;
    }
}
```

Register the callback when creating the ORM template:

```java
ORMTemplate orm = ORMTemplate.of(dataSource)
    .withEntityCallback(new AuditCallback());
```

Or with Spring Boot, declare it as a bean and it will be auto-registered:

```java
@Bean
public EntityCallback<?> auditCallback() {
    return new AuditCallback();
}
```

### Mutation Logging

Use `afterInsert`, `afterUpdate`, and `afterDelete` callbacks to record mutations for compliance or debugging:

[Kotlin]
```kotlin
class MutationLogger : EntityCallback<Entity<*>> {
    private val logger = System.getLogger("audit")

    override fun afterInsert(entity: Entity<*>) {
        logger.log(System.Logger.Level.INFO, "INSERT: ${entity::class.simpleName} id=${entity.id()}")
    }

    override fun afterUpdate(entity: Entity<*>) {
        logger.log(System.Logger.Level.INFO, "UPDATE: ${entity::class.simpleName} id=${entity.id()}")
    }

    override fun afterDelete(entity: Entity<*>) {
        logger.log(System.Logger.Level.INFO, "DELETE: ${entity::class.simpleName} id=${entity.id()}")
    }
}
```

[Java]
```java
public class MutationLogger implements EntityCallback<Entity<?>> {
    private final System.Logger logger = System.getLogger("audit");

    @Override
    public void afterInsert(Entity<?> entity) {
        logger.log(System.Logger.Level.INFO,
            "INSERT: %s id=%s".formatted(entity.getClass().getSimpleName(), entity.id()));
    }

    @Override
    public void afterUpdate(Entity<?> entity) {
        logger.log(System.Logger.Level.INFO,
            "UPDATE: %s id=%s".formatted(entity.getClass().getSimpleName(), entity.id()));
    }

    @Override
    public void afterDelete(Entity<?> entity) {
        logger.log(System.Logger.Level.INFO,
            "DELETE: %s id=%s".formatted(entity.getClass().getSimpleName(), entity.id()));
    }
}
```

---

## DataSource Credentials Management

Storm does not manage database credentials directly. It receives a `DataSource` from your application and uses it for all database operations. This means credential security is your responsibility, and standard Java best practices apply.
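Since Storm only consumes a `DataSource`, credential handling lives entirely in your code. As a minimal, framework-agnostic sketch (the `DbCredentials` helper is hypothetical, not part of Storm), resolve credentials from environment variables and fail fast when one is missing; the resolved values would then feed your pool configuration (e.g., HikariCP's `HikariConfig#setJdbcUrl`, `#setUsername`, `#setPassword`) before the resulting `DataSource` is handed to `ORMTemplate.of(...)`:

```java
import java.util.Optional;

public class DbCredentials {

    /** Reads a required environment variable, failing fast if it is absent. */
    public static String require(String name) {
        return Optional.ofNullable(System.getenv(name))
                .orElseThrow(() -> new IllegalStateException(
                        "Missing required environment variable: " + name));
    }

    public static void main(String[] args) {
        // Fall back to a local development URL when DB_URL is unset;
        // username and password would use require("DB_USERNAME") etc.,
        // so a misconfigured deployment fails at startup, not mid-request.
        String url = Optional.ofNullable(System.getenv("DB_URL"))
                .orElse("jdbc:postgresql://localhost:5432/app");
        System.out.println("Using JDBC URL: " + url);
    }
}
```

Failing fast at startup keeps secrets out of source and configuration files and surfaces misconfiguration before the first query runs.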
### Recommended Practices

**Never hardcode credentials.** Use environment variables, a secrets manager (e.g., AWS Secrets Manager, HashiCorp Vault, Azure Key Vault), or Spring's externalized configuration:

```yaml
# application.yml - reference environment variables
spring:
  datasource:
    url: ${DB_URL}
    username: ${DB_USERNAME}
    password: ${DB_PASSWORD}
```

**Use connection pooling with credential rotation.** When using HikariCP (the default for Spring Boot), configure it to support credential rotation:

```yaml
spring:
  datasource:
    hikari:
      maximum-pool-size: 10
      minimum-idle: 5
      connection-timeout: 30000
```

**Restrict database user permissions.** The database user your application connects with should have only the permissions it needs. For a typical CRUD application, grant `SELECT`, `INSERT`, `UPDATE`, and `DELETE` on application tables, but not `DROP`, `ALTER`, or `GRANT`.

---

## Column-Level Access Control

Storm does not provide built-in column-level access control, but you can implement it using projections and entity callbacks.

### Read Control via Projections

Use projections to expose different views of the same table to different user roles. A projection only reads the columns it declares, so restricted columns are never fetched:

[Kotlin]
```kotlin
// Full entity (for admin users).
@DbTable("user")
data class User(
    @PK val id: Int,
    val name: String,
    val email: String,
    val socialSecurityNumber: String,
    val salary: BigDecimal
) : Entity

// Restricted projection (for regular users).
@DbTable("user")
data class UserPublicView(
    @PK val id: Int,
    val name: String,
    val email: String
) : Projection
```

[Java]
```java
// Full entity (for admin users).
@DbTable("user")
public record User(
    @PK int id,
    String name,
    String email,
    String socialSecurityNumber,
    BigDecimal salary
) implements Entity {}

// Restricted projection (for regular users).
@DbTable("user")
public record UserPublicView(
    @PK int id,
    String name,
    String email
) implements Projection {}
```

### Write Control via Callbacks

Use an entity callback to enforce write-level access control by validating or rejecting mutations:

[Kotlin]
```kotlin
class WriteAccessCallback : EntityCallback<User> {
    override fun beforeUpdate(entity: User): User {
        val currentRole = SecurityContext.currentUserRole()
        if (currentRole != "ADMIN") {
            throw PersistenceException("Only administrators can modify user records.")
        }
        return entity
    }

    override fun beforeDelete(entity: User) {
        val currentRole = SecurityContext.currentUserRole()
        if (currentRole != "ADMIN") {
            throw PersistenceException("Only administrators can delete user records.")
        }
    }
}
```

[Java]
```java
public class WriteAccessCallback implements EntityCallback<User> {
    @Override
    public User beforeUpdate(User entity) {
        String currentRole = SecurityContext.currentUserRole();
        if (!"ADMIN".equals(currentRole)) {
            throw new PersistenceException("Only administrators can modify user records.");
        }
        return entity;
    }

    @Override
    public void beforeDelete(User entity) {
        String currentRole = SecurityContext.currentUserRole();
        if (!"ADMIN".equals(currentRole)) {
            throw new PersistenceException("Only administrators can delete user records.");
        }
    }
}
```

========================================
## Source: error-handling.md
========================================

# Error Handling

When something goes wrong, Storm communicates the problem through a small, well-defined set of exception types. Understanding which exceptions can be thrown and when helps you write robust error handling that distinguishes between recoverable situations (like a missing entity) and programming mistakes (like a schema mismatch).

This page covers Storm's exception hierarchy, the most common error scenarios you will encounter, and strategies for diagnosing problems when they arise.

---

## Exception Hierarchy

Storm uses unchecked exceptions for most error conditions.
The root type is `PersistenceException`, which extends `RuntimeException`. This means you are not forced to catch exceptions at every call site; instead, you can handle them at the appropriate layer of your application.

| Exception | Extends | When It Is Thrown |
|---|---|---|
| `PersistenceException` | `RuntimeException` | General database or SQL errors. This is the root of Storm's exception hierarchy. |
| `NoResultException` | `PersistenceException` | `getSingleResult()` returns no rows. |
| `NonUniqueResultException` | `PersistenceException` | `getSingleResult()` or `getOptionalResult()` returns more than one row. |
| `OptimisticLockException` | `PersistenceException` | An update or delete detects a version conflict (the row was modified by another transaction). |
| `SchemaValidationException` | `PersistenceException` | Schema validation finds mismatches between entity definitions and the database schema. |
| `SqlTemplateException` | `SQLException` | An error occurred during SQL template processing. Often attached as a suppressed exception to provide the generated SQL alongside the original error. |

The hierarchy is intentionally flat. Most code only needs to catch `PersistenceException` and, occasionally, its specific subtypes.

```
RuntimeException
└── PersistenceException
    ├── NoResultException
    ├── NonUniqueResultException
    ├── OptimisticLockException
    └── SchemaValidationException

SQLException
└── SqlTemplateException
```

---

## Common Error Scenarios

### No Result Found

When you call `getSingleResult()` on a query that returns zero rows, Storm throws `NoResultException`.

[Kotlin]
```kotlin
// Throws NoResultException if no user has this email.
val user = orm.entity(User::class)
    .select(User_.email eq "nobody@example.com")
    .getSingleResult()
```

To handle the missing-result case without exceptions, use `getOptionalResult()`:

```kotlin
val user: User? = orm.entity(User::class)
    .select(User_.email eq "nobody@example.com")
    .getOptionalResult(User::class)
```

Or use the repository's `findById` method:

```kotlin
val user: User? = userRepository.findById(42)
```

[Java]
```java
// Throws NoResultException if no user has this email.
User user = orm.entity(User.class)
    .select(User_.email.eq("nobody@example.com"))
    .getSingleResult();
```

To handle the missing-result case without exceptions, use `getOptionalResult()`:

```java
Optional<User> user = orm.entity(User.class)
    .select(User_.email.eq("nobody@example.com"))
    .getOptionalResult();
```

Or use the repository's `findById` method:

```java
Optional<User> user = userRepository.findById(42);
```

### Multiple Results When One Was Expected

`getSingleResult()` and `getOptionalResult()` both throw `NonUniqueResultException` when the query returns more than one row. This typically signals a logical error in your query or data:

```
NonUniqueResultException: Expected single result, but found more than one.
```

If multiple results are valid, use `getResultList()` or `getResultStream()` instead.

### Optimistic Lock Conflicts

When an entity has a `@Version` column and the version in the database no longer matches the version in your entity, the update or delete fails with an `OptimisticLockException`. This happens when another transaction modified the same row between your read and your write.

[Kotlin]
```kotlin
try {
    userRepository.update(outdatedUser)
} catch (exception: OptimisticLockException) {
    // The entity was modified by another transaction.
    // Reload and retry, or inform the user.
    val freshUser = userRepository.getById(outdatedUser.id())
    // ... merge changes and retry
}
```

[Java]
```java
try {
    userRepository.update(outdatedUser);
} catch (OptimisticLockException exception) {
    // The entity was modified by another transaction.
    // Reload and retry, or inform the user.
    User freshUser = userRepository.getById(outdatedUser.id());
    // ... merge changes and retry
}
```

The exception includes a reference to the entity that caused the conflict, accessible via `getEntity()`.

### Constraint Violations

Database constraint violations (unique constraints, foreign key constraints, not-null constraints) surface as `PersistenceException` wrapping the underlying JDBC `SQLException`. The original SQL error message and vendor-specific error code are preserved in the exception chain:

[Kotlin]
```kotlin
try {
    userRepository.insert(duplicateUser)
} catch (exception: PersistenceException) {
    val cause = exception.cause
    if (cause is java.sql.SQLIntegrityConstraintViolationException) {
        // Handle duplicate key, foreign key violation, etc.
    }
}
```

[Java]
```java
try {
    userRepository.insert(duplicateUser);
} catch (PersistenceException exception) {
    Throwable cause = exception.getCause();
    if (cause instanceof java.sql.SQLIntegrityConstraintViolationException) {
        // Handle duplicate key, foreign key violation, etc.
    }
}
```

### Schema Validation Errors

When schema validation is enabled, Storm checks your entity definitions against the actual database schema at startup or first use. If there are mismatches, it throws a `SchemaValidationException` with a detailed list of errors:

```
SchemaValidationException: Schema validation failed with 2 error(s):
- Table 'user': column 'email' not found in database
- Table 'user': column 'name' type mismatch: expected VARCHAR, found INTEGER
```

Each individual error is available programmatically through `getErrors()`, making it possible to build custom reporting or migration tooling.

### Connection and Database Errors

Low-level database problems (connection failures, query timeouts, syntax errors) are wrapped in `PersistenceException`. The original `SQLException` is always available as the cause, preserving the vendor error code and SQL state:

[Kotlin]
```kotlin
try {
    userRepository.findAll()
} catch (exception: PersistenceException) {
    val sqlCause = exception.cause as? java.sql.SQLException
    if (sqlCause != null) {
        println("SQL State: ${sqlCause.sqlState}")
        println("Error Code: ${sqlCause.errorCode}")
    }
}
```

[Java]
```java
try {
    userRepository.findAll();
} catch (PersistenceException exception) {
    if (exception.getCause() instanceof SQLException sqlCause) {
        System.out.println("SQL State: " + sqlCause.getSQLState());
        System.out.println("Error Code: " + sqlCause.getErrorCode());
    }
}
```

---

## Debugging Strategies

### Enable SQL Logging

The fastest way to diagnose a query problem is to see the generated SQL. Use the `@SqlLog` annotation on your repository to log every statement:

[Kotlin]
```kotlin
@SqlLog
interface UserRepository : EntityRepository
```

[Java]
```java
@SqlLog
public interface UserRepository extends EntityRepository {}
```

For more targeted logging, annotate individual methods instead of the entire repository. See the [SQL Logging](sql-logging.md) page for details.

### Use SqlCapture in Tests

The `SqlCapture` class from `storm-test` records all SQL statements generated during a block of code. This is useful for verifying that the correct queries are being generated:

```java
var capture = new SqlCapture();
capture.run(() -> {
    userRepository.findAll();
});

// Inspect the captured SQL.
var statements = capture.statements();
assertEquals(1, statements.size());
assertTrue(statements.get(0).statement().contains("SELECT"));
```

See the [Testing](testing.md) page for full details on `SqlCapture` and the `@StormTest` annotation.

### Read the Suppressed SQL

When a `PersistenceException` is thrown during query execution, Storm attaches the generated SQL as a suppressed `SqlTemplateException`.
This means the full SQL text is available in the exception chain even when the original error is a JDBC-level failure:

```java
try {
    userRepository.findAll();
} catch (PersistenceException exception) {
    for (Throwable suppressed : exception.getSuppressed()) {
        if (suppressed instanceof SqlTemplateException) {
            System.out.println("Generated SQL: " + suppressed.getMessage());
        }
    }
}
```

### Enable Schema Validation

Schema validation catches entity-to-database mismatches early, before they surface as cryptic SQL errors at runtime. Enable it through configuration to get clear, actionable error messages about missing columns, type mismatches, and other structural issues. See the [Validation](validation.md) page for configuration details.

---

## Common Mistakes

### Using getSingleResult() Without a WHERE Clause

Calling `getSingleResult()` on a query that returns all rows will throw `NonUniqueResultException` unless the table contains exactly one row. If you want to check whether results exist, use `getResultCount()` or `getResultStream()`.

### Catching PersistenceException Too Broadly

Catching `PersistenceException` at a high level can hide programming errors like schema mismatches or invalid queries. Prefer catching specific subtypes where possible, and let unexpected exceptions propagate to your application's global error handler.

### Ignoring OptimisticLockException

When using `@Version` columns, always have a strategy for handling `OptimisticLockException`. Common approaches include retrying the operation after reloading the entity, or returning a conflict response to the client and letting them resolve it.

### Not Closing Streams

`getResultStream()` holds a database cursor open. Always close it when done, either with a try-with-resources block (Kotlin's `use`) or by collecting into a list:

[Kotlin]
```kotlin
// Collect into a list (automatically closes the stream).
val users = userRepository.select().getResultList()

// Or use 'use' for lazy processing.
userRepository.select().getResultStream().use { stream ->
    stream.forEach { user -> process(user) }
}
```

[Java]
```java
// Collect into a list (automatically closes the stream).
List<User> users = userRepository.select().getResultList();

// Or use try-with-resources for lazy processing.
try (var stream = userRepository.select().getResultStream()) {
    stream.forEach(user -> process(user));
}
```

---

## Common Beginner Mistakes

### Metamodel Class Does Not Compile (`User_` Not Found)

**Symptom:** Your code references `User_` but the compiler reports that the class does not exist.

**Cause:** The metamodel processor is not configured. Storm generates companion classes like `User_` at compile time using an annotation processor (Java) or KSP plugin (Kotlin).

**Fix:** For **Kotlin with Gradle**, add the KSP plugin and processor dependency:

```kotlin
plugins {
    id("com.google.devtools.ksp")
}

dependencies {
    ksp("st.orm:storm-metamodel-ksp:${stormVersion}")
}
```

For **Java with Maven**, configure the annotation processor in the compiler plugin:

```xml
<annotationProcessorPaths>
    <path>
        <groupId>st.orm</groupId>
        <artifactId>storm-metamodel-processor</artifactId>
        <version>${storm.version}</version>
    </path>
</annotationProcessorPaths>
```

### Using `var` Instead of `val` in Kotlin Data Class Fields

**Symptom:** Storm throws an error or behaves unexpectedly when reading or writing entities.

**Cause:** Storm entities are designed to be immutable. Kotlin data class fields should use `val`, not `var`.

**Fix:** Change all `var` declarations to `val`:

```kotlin
// Wrong
data class User(
    @PK var id: Int = 0,
    var name: String
) : Entity

// Correct
data class User(
    @PK val id: Int = 0,
    val name: String
) : Entity
```

### Using `@Column` Instead of `@DbColumn`

**Symptom:** Your custom column name annotation is ignored. Storm maps the field using its default naming convention instead.

**Cause:** Storm uses `@DbColumn` for column name overrides, not `@Column` (which is a JPA annotation that Storm does not process).
**Fix:** Replace `@Column` with `@DbColumn`:

```kotlin
// Wrong
data class User(
    @PK val id: Int = 0,
    @Column("email_address") val email: String
) : Entity

// Correct
data class User(
    @PK val id: Int = 0,
    @DbColumn("email_address") val email: String
) : Entity
```

### Forgot `@FK` on a Relationship Field

**Symptom:** Storm treats the field as an embedded component or fails with a mapping error instead of generating a JOIN.

**Cause:** Without the `@FK` annotation, Storm does not know that the field represents a foreign key relationship.

**Fix:** Add `@FK` to any field that references another entity:

```kotlin
// Wrong
data class User(
    @PK val id: Int = 0,
    val city: City
) : Entity

// Correct
data class User(
    @PK val id: Int = 0,
    @FK val city: City
) : Entity
```

### Forgot Dialect Dependency When Using Upsert

**Symptom:** Calling `upsert()` throws an `UnsupportedOperationException` or `PersistenceException`.

**Cause:** Upsert requires a database-specific dialect module because the SQL syntax differs between databases (e.g., `ON CONFLICT` for PostgreSQL, `ON DUPLICATE KEY` for MySQL).

**Fix:** Add the dialect dependency for your database:

```xml
<!-- PostgreSQL -->
<dependency>
    <groupId>st.orm</groupId>
    <artifactId>storm-postgresql</artifactId>
</dependency>

<!-- MySQL -->
<dependency>
    <groupId>st.orm</groupId>
    <artifactId>storm-mysql</artifactId>
</dependency>
```

========================================
## Source: performance.md
========================================

# Performance

Storm is designed to add minimal overhead on top of JDBC. In most applications, the bottleneck is the database itself, not the ORM layer. Still, understanding how Storm processes queries, caches compiled templates, and manages entity state helps you make informed decisions about configuration and optimization.

This page covers Storm's internal performance mechanisms, the configuration properties that control them, and the JMX metrics you can use to monitor behavior in production.

---

## Query Execution Model

When you execute a query through Storm, the framework performs these steps:

```
1. Template Compilation
   Parse the query template, resolve entity mappings, and generate
   the SQL string with ? placeholders.

2. Cache Lookup
   Check the template cache for a previously compiled result with
   the same shape.

3. Parameter Binding
   Bind runtime values to the compiled SQL template.

4. JDBC Execution
   Send the PreparedStatement to the database via JDBC.

5. Result Mapping
   Map result set rows to record instances.
```

Steps 1 and 2 are where Storm's compilation cache provides its largest performance benefit. Steps 4 and 5 are dominated by database I/O and are largely outside the framework's control.

---

## Template Compilation Cache

The compilation cache is Storm's most significant performance optimization. SQL template compilation involves parsing the template structure, resolving entity metadata, generating column lists, and building the final SQL string. This work is substantial, and the compilation cache avoids repeating it.

### How It Works

Each unique template shape (the combination of entity types, column selections, and query structure) produces a compiled result that is stored in a bounded LRU cache. When the same template shape is requested again, the cached result is reused and only the runtime parameter binding step is repeated.

The performance difference is significant: a cache hit typically completes in single-digit microseconds, while a cache miss (full compilation) can take tens to hundreds of microseconds depending on entity complexity.

```
First request (cache miss):      ~100-500 us   Full compilation
Subsequent requests (cache hit): ~1-10 us      Reuse compiled result
```

### Configuration

The cache size is configured via the `storm.template_cache.size` property:

| Property | Default | Description |
|---|---|---|
| `storm.template_cache.size` | `2048` | Maximum number of compiled templates to cache. Set to `0` to disable caching. |

With Spring Boot, use the `storm.template-cache.size` property in `application.yml`:

```yaml
storm:
  template-cache:
    size: 4096
```

Or configure programmatically:

```java
StormConfig config = StormConfig.of(Map.of(
    TEMPLATE_CACHE_SIZE, "4096"
));
ORMTemplate orm = ORMTemplate.of(dataSource, config);
```

For most applications, the default of 2048 is sufficient. If you have a large number of distinct query shapes (hundreds of different entity types or complex dynamic queries), consider increasing it. Monitor the hit ratio via JMX to determine if the cache is sized appropriately.

---

## Entity Cache

Storm maintains a transaction-scoped entity cache that serves multiple purposes: avoiding redundant database round-trips, preserving object identity within a transaction, and providing the baseline for dirty checking.

### Transaction Scope

The entity cache is created when a transaction begins and discarded when it commits or rolls back. There is no second-level or cross-transaction cache. This design avoids cache coherency problems and aligns with standard transaction isolation semantics.

### Isolation-Level Awareness

The cache behavior adapts to your transaction isolation level:

| Isolation Level | Cache Behavior |
|---|---|
| `READ_UNCOMMITTED` | Observation is disabled by default. All entities are treated as dirty. |
| `READ_COMMITTED` | Observation is enabled. Cache serves dirty checking. |
| `REPEATABLE_READ` | Full caching. Returning cached instances matches database guarantees. |
| `SERIALIZABLE` | Full caching. Same as `REPEATABLE_READ`. |

### Cache Retention

The `storm.entity_cache.retention` property controls how long cached entity state is retained:

| Value | Description |
|---|---|
| `DEFAULT` | Retained for the duration of the transaction. Higher memory usage, better dirty-check hit rate. |
| `LIGHT` | May be cleaned up when the application no longer holds a reference. Lower memory usage, but may cause dirty-check cache misses. |

```yaml
storm:
  entity-cache:
    retention: LIGHT
```

### Hit and Miss Patterns

A cache **hit** occurs when Storm finds a previously observed entity by primary key. This means the entity was already read in the current transaction and can be returned immediately (or used as the dirty-check baseline) without a database round-trip.

A cache **miss** occurs when the entity is not in the cache. This results in a database query and the entity being stored in the cache for future use. For dirty checking specifically, a miss means no baseline is available and Storm falls back to a full-row update (all columns are included regardless of what changed).

---

## Dirty Checking Costs

When dirty checking is enabled (via `@DynamicUpdate` or the `storm.update.default_mode` property), Storm compares entity state before generating UPDATE statements. The cost of this comparison depends on the strategy used:

### INSTANCE vs VALUE Comparison

| Strategy | How It Works | Performance | Trade-off |
|---|---|---|---|
| `INSTANCE` | Compares field references using `==` (identity). | Very fast; no value inspection. | Treats structurally equal but different instances as dirty. |
| `VALUE` | Compares field values using `equals()`. | Depends on field types and `equals()` cost. | More precise; only truly changed fields are dirty. |

The default strategy is `INSTANCE`, which is fast and sufficient for most applications. If you construct entities by copying with modifications, `INSTANCE` will detect the change because the field references differ, even if the values are the same. Use `VALUE` when precision is more important than speed (for example, when `equals()` is cheap and unnecessary updates are expensive).

### When FIELD Mode Helps

With `UpdateMode.FIELD`, Storm generates UPDATE statements that include only the dirty columns. This reduces write amplification and lock scope in the database.
However, it introduces additional overhead:

- **Shape diversity:** Each unique combination of dirty columns produces a distinct SQL shape. These shapes are cached, but too many shapes can reduce cache effectiveness.
- **Shape limit:** The `storm.update.max_shapes` property (default: `5`) limits the number of shapes per entity type. Beyond this limit, Storm falls back to full-row updates to preserve batching efficiency.

```yaml
storm:
  update:
    default-mode: FIELD
    dirty-check: VALUE
    max-shapes: 10
```

### Configuration Properties

| Property | Default | Description |
|---|---|---|
| `storm.update.default_mode` | `ENTITY` | Default update mode: `OFF`, `ENTITY`, or `FIELD`. |
| `storm.update.dirty_check` | `INSTANCE` | Default dirty check strategy: `INSTANCE` or `VALUE`. |
| `storm.update.max_shapes` | `5` | Maximum distinct UPDATE shapes per entity type before falling back to full-row updates. |

---

## Batch Operations

Batch operations group multiple SQL statements into a single JDBC round-trip. Storm automatically uses JDBC batching when you pass collections to `insert`, `update`, `remove`, or `upsert`.

### Performance Characteristics

Batching reduces the number of network round-trips from N (one per entity) to 1 (or a few, depending on batch size). The performance improvement depends on network latency and database server efficiency:

- **Low-latency connections** (same host or same datacenter): 2-5x improvement.
- **High-latency connections** (cross-region): 10-100x improvement.

### Streaming with Batch Size

For large data sets that do not fit in memory, use the streaming batch methods:

[Kotlin]
```kotlin
// Insert a stream of entities in batches of 1000.
orm.entity(User::class).insert(userStream, batchSize = 1000)
```

[Java]
```java
// Insert a stream of entities in batches of 1000.
orm.entity(User.class).insert(userStream, 1000);
```

The batch size controls the trade-off between memory usage and database efficiency.
Larger batches use more memory but reduce the number of round-trips. A batch size of 100-1000 is a good starting point for most applications.

---

## Connection Management

Storm does not manage connections or connection pools. It receives a `DataSource` from your application and acquires connections through it. This means connection pooling is your responsibility.

### Recommended Setup

HikariCP is the recommended connection pool for Storm applications. It is the default for Spring Boot applications.

```yaml
spring:
  datasource:
    hikari:
      maximum-pool-size: 10
      minimum-idle: 5
      connection-timeout: 30000
      idle-timeout: 600000
      max-lifetime: 1800000
```

Key sizing considerations:

- **`maximum-pool-size`:** Should match your application's concurrency level. A common formula is `connections = (2 * CPU cores) + disk spindles`. For most applications, 10-20 is sufficient.
- **`minimum-idle`:** Set equal to `maximum-pool-size` for fixed-size pools, or lower for variable workloads.
- **`connection-timeout`:** How long a thread waits for a connection before throwing an exception. Set this lower than your application's request timeout.

---

## JMX Metrics

Storm registers three MXBeans that provide runtime visibility into template compilation, entity caching, and dirty checking. These metrics are available through any JMX client (JConsole, VisualVM, Prometheus JMX exporter, etc.).

### Template Metrics

**MBean name:** `st.orm:type=TemplateMetrics`

| Attribute | Type | Description |
|---|---|---|
| `Requests` | `long` | Total number of template requests. |
| `Hits` | `long` | Number of cache hits. |
| `Misses` | `long` | Number of cache misses. |
| `HitRatioPercent` | `long` | Hit ratio as a percentage (0-100). |
| `AvgRequestMicros` | `long` | Average request duration in microseconds. |
| `MaxRequestMicros` | `long` | Maximum request duration in microseconds. |
| `AvgHitMicros` | `long` | Average cache hit duration in microseconds. |
| `MaxHitMicros` | `long` | Maximum cache hit duration in microseconds. |
| `AvgMissMicros` | `long` | Average cache miss duration in microseconds. |
| `MaxMissMicros` | `long` | Maximum cache miss duration in microseconds. |
| `TemplateCacheSize` | `int` | Configured cache size. |

**Operation:** `reset()` clears all counters.

**What to look for:**

- A `HitRatioPercent` below 90% suggests the cache is too small or the application has many distinct query shapes. Consider increasing `storm.template_cache.size`.
- A large gap between `AvgHitMicros` and `AvgMissMicros` confirms that caching is providing a significant benefit.

### Entity Cache Metrics

**MBean name:** `st.orm:type=EntityCacheMetrics`

| Attribute | Type | Description |
|---|---|---|
| `Gets` | `long` | Total number of `get()` calls. |
| `GetHits` | `long` | Cache hits (entity found in cache). |
| `GetMisses` | `long` | Cache misses (entity not cached). |
| `GetHitRatioPercent` | `long` | Get hit ratio as a percentage (0-100). |
| `Interns` | `long` | Total number of `intern()` calls (storing entities). |
| `InternHits` | `long` | Intern hits (reused an existing canonical instance). |
| `InternMisses` | `long` | Intern misses (stored a new instance). |
| `InternHitRatioPercent` | `long` | Intern hit ratio as a percentage (0-100). |
| `Removals` | `long` | Entries removed due to entity mutations. |
| `Clears` | `long` | Full cache clears (transaction commit/rollback). |
| `Evictions` | `long` | Entries cleaned up after garbage collection. |
| `RetentionPerEntity` | `Map` | Effective cache retention mode per entity type. |

**Operation:** `reset()` clears all counters.

**What to look for:**

- High `Evictions` with `LIGHT` retention suggests the JVM is under memory pressure. Consider switching to `DEFAULT` retention or increasing heap size.
- A high `GetHitRatioPercent` indicates the cache is working effectively for identity preservation and query optimization.
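The attributes above can also be read in-process with the standard JMX API, for example to feed a custom health endpoint. A minimal sketch (the `StormMetricsReader` class name is hypothetical; the MBean name and attribute come from the tables above):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class StormMetricsReader {

    /**
     * Reads the template cache hit ratio from the platform MBean server.
     * Returns -1 when the MBean is not registered (Storm not active in this JVM).
     */
    public static long templateHitRatioPercent() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("st.orm:type=TemplateMetrics");
        if (!server.isRegistered(name)) {
            return -1;
        }
        return (Long) server.getAttribute(name, "HitRatioPercent");
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Template cache hit ratio: " + templateHitRatioPercent() + "%");
    }
}
```

The same pattern applies to the other MBeans by substituting `st.orm:type=EntityCacheMetrics` or `st.orm:type=DirtyCheckMetrics` and the attribute names listed in their tables.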
### Dirty Check Metrics

**MBean name:** `st.orm:type=DirtyCheckMetrics`

| Attribute | Type | Description |
|---|---|---|
| `Checks` | `long` | Total number of dirty checks performed. |
| `Clean` | `long` | Checks that found the entity clean (update skipped). |
| `Dirty` | `long` | Checks that found the entity dirty (update triggered). |
| `CleanRatioPercent` | `long` | Percentage of checks where the update was skipped. |
| `IdentityMatches` | `long` | Checks resolved by identity comparison (`cached == entity`). |
| `CacheMisses` | `long` | Checks where no cached baseline was available (fallback to full update). |
| `EntityModeChecks` | `long` | Checks using `ENTITY` update mode. |
| `FieldModeChecks` | `long` | Checks using `FIELD` update mode. |
| `InstanceStrategyChecks` | `long` | Checks using `INSTANCE` dirty check strategy. |
| `ValueStrategyChecks` | `long` | Checks using `VALUE` dirty check strategy. |
| `FieldComparisons` | `long` | Total individual field comparisons across all checks. |
| `FieldClean` | `long` | Field comparisons where the field was clean. |
| `FieldDirty` | `long` | Field comparisons where the field was dirty. |
| `EntityTypes` | `long` | Number of distinct entity types that have generated UPDATE shapes. |
| `Shapes` | `long` | Total number of distinct UPDATE statement shapes. |
| `ShapesPerEntity` | `Map` | Number of shapes per entity type. |
| `UpdateModePerEntity` | `Map` | Effective update mode per entity type. |
| `DirtyCheckPerEntity` | `Map` | Effective dirty check strategy per entity type. |
| `MaxShapesPerEntity` | `Map` | Configured maximum shapes per entity type. |

**Operation:** `reset()` clears all counters.

**What to look for:**

- A high `CleanRatioPercent` means many updates are being skipped because the entity has not changed. This is the primary benefit of dirty checking.
- `CacheMisses` indicates how often a dirty check falls back to a full update because no baseline was available. High values suggest entities are being updated without being read first in the same transaction.
- `ShapesPerEntity` approaching `MaxShapesPerEntity` indicates that `FIELD` mode is generating many distinct column combinations. Consider raising `storm.update.max_shapes` or switching to `ENTITY` mode for that entity type.

========================================
## Source: common-patterns.md
========================================

# Common Patterns

This page collects practical patterns for recurring requirements that are not covered by a dedicated Storm API but are straightforward to implement using the framework's building blocks. Each pattern includes a complete example with the entity definition, supporting code, and usage.

---

## Loading One-to-Many Relationships

Storm does not support collection fields on entities. This is by design: embedding collections inside entities leads to lazy loading, N+1 queries, and unpredictable fetch behavior. Instead, you load the "many" side with an explicit query and assemble the result in your application code.

Unlike JPA's `@OneToMany` collections, Storm loads relationships via explicit queries. This gives you full control over when and how children are loaded, preventing N+1 problems and making the data flow visible in the source code.

### Entity Definitions

[Kotlin]
```kotlin
@DbTable("purchase_order")
data class Order(
    @PK val id: Long = 0,
    val customerId: Long,
    val status: String,
    val createdAt: Instant?
) : Entity

data class LineItem(
    @PK val id: Long = 0,
    @FK val order: Order,
    val productName: String,
    val quantity: Int,
    val unitPrice: BigDecimal
) : Entity
```

[Java]
```java
@DbTable("purchase_order")
public record Order(
    @PK Long id,
    long customerId,
    String status,
    @Nullable Instant createdAt
) implements Entity {}

public record LineItem(
    @PK Long id,
    @FK Order order,
    String productName,
    int quantity,
    BigDecimal unitPrice
) implements Entity {}
```

### Fetching and Assembling

Fetch the parent entity, then query its children using the foreign key. Assemble the result into a response object that your service or controller returns.

[Kotlin]
```kotlin
data class OrderWithItems(
    val order: Order,
    val lineItems: List<LineItem>
)

fun findOrderWithItems(orderId: Long): OrderWithItems? {
    val order = orm.entity(Order::class).findById(orderId) ?: return null
    val lineItems = orm.entity(LineItem::class).findAll(LineItem_.order eq order)
    return OrderWithItems(order, lineItems)
}
```

[Java]
```java
public record OrderWithItems(Order order, List<LineItem> lineItems) {}

public Optional<OrderWithItems> findOrderWithItems(long orderId) {
    return orm.entity(Order.class).findById(orderId)
        .map(order -> {
            List<LineItem> lineItems = orm.entity(LineItem.class)
                .select()
                .where(LineItem_.order, EQUALS, order)
                .getResultList();
            return new OrderWithItems(order, lineItems);
        });
}
```

This pattern generalizes to any one-to-many relationship. Both queries are explicit and visible in the source code, so you can easily add filtering, sorting, or pagination to the child query without affecting the parent fetch.

---

## Auditing

Most applications need to track when records were created and last modified. Storm's `EntityCallback` interface provides the hooks for this without requiring special annotations or framework-specific column types.

### Entity Definition

[Kotlin]
```kotlin
@DbTable("article")
data class Article(
    @PK val id: Int = 0,
    val title: String,
    val content: String,
    val createdAt: Instant?,
    val updatedAt: Instant?
) : Entity
```

[Java]
```java
@DbTable("article")
public record Article(
    @PK Integer id,
    String title,
    String content,
    Instant createdAt,
    Instant updatedAt
) implements Entity {}
```

### Callback Implementation

[Kotlin]
```kotlin
class AuditTimestampCallback : EntityCallback<Article>
{
    override fun beforeInsert(entity: Article): Article {
        val now = Instant.now()
        return entity.copy(createdAt = now, updatedAt = now)
    }

    override fun beforeUpdate(entity: Article): Article =
        entity.copy(updatedAt = Instant.now())
}
```

Register it with the ORM template:

```kotlin
val orm = ORMTemplate.of(dataSource)
    .withEntityCallback(AuditTimestampCallback())
```

Or declare it as a Spring bean for automatic registration:

```kotlin
@Bean
fun auditTimestampCallback(): EntityCallback<*> = AuditTimestampCallback()
```

[Java]
```java
public class AuditTimestampCallback implements EntityCallback<Article>
{
    @Override
    public Article beforeInsert(Article entity) {
        var now = Instant.now();
        return new Article(entity.id(), entity.title(), entity.content(), now, now);
    }

    @Override
    public Article beforeUpdate(Article entity) {
        return new Article(entity.id(), entity.title(), entity.content(),
                entity.createdAt(), Instant.now());
    }
}
```

Register it with the ORM template:

```java
ORMTemplate orm = ORMTemplate.of(dataSource)
    .withEntityCallback(new AuditTimestampCallback());
```

Or declare it as a Spring bean for automatic registration:

```java
@Bean
public EntityCallback<?> auditTimestampCallback() {
    return new AuditTimestampCallback();
}
```

To apply auditing to all entities (not just `Article`), parameterize the callback with `Entity` and use pattern matching to handle each entity type. See the [Entity Lifecycle](entity-lifecycle.md) page for details.

---

## Soft Deletes

Soft deletes mark records as deleted without physically removing them from the database. This preserves data for audit trails, undo operations, or compliance requirements. The pattern uses a boolean or timestamp column to indicate deletion status.

### Entity Definition

[Kotlin]
```kotlin
@DbTable("customer")
data class Customer(
    @PK val id: Int,
    val name: String,
    val email: String,
    val deletedAt: Instant? // null means not deleted
) : Entity
```

[Java]
```java
@DbTable("customer")
public record Customer(
    @PK int id,
    String name,
    String email,
    Instant deletedAt // null means not deleted
) implements Entity {}
```

### Repository with Soft Delete Methods

[Kotlin]
```kotlin
interface CustomerRepository : EntityRepository<Customer> {

    /** Find only non-deleted customers. */
    fun findActive(): List<Customer> =
        findAll(Customer_.deletedAt.isNull())

    /** Find a non-deleted customer by ID. */
    fun findActiveOrNull(customerId: Int): Customer? =
        find((Customer_.id eq customerId) and Customer_.deletedAt.isNull())

    /** Soft-delete a customer by setting the deletedAt timestamp.
*/
    fun softDelete(customer: Customer): Customer {
        val softDeleted = customer.copy(deletedAt = Instant.now())
        update(softDeleted)
        return softDeleted
    }

    /** Restore a soft-deleted customer. */
    fun restore(customer: Customer): Customer {
        val restored = customer.copy(deletedAt = null)
        update(restored)
        return restored
    }
}
```

[Java]
```java
public interface CustomerRepository extends EntityRepository<Customer> {

    /** Find only non-deleted customers. */
    default List<Customer> findActive() {
        return findAll(Customer_.deletedAt.isNull());
    }

    /** Find a non-deleted customer by ID. */
    default Optional<Customer> findActiveById(int customerId) {
        return select()
            .where(Customer_.id.eq(customerId).and(Customer_.deletedAt.isNull()))
            .getOptionalResult();
    }

    /** Soft-delete a customer by setting the deletedAt timestamp. */
    default Customer softDelete(Customer customer) {
        var softDeleted = new Customer(customer.id(), customer.name(), customer.email(), Instant.now());
        update(softDeleted);
        return softDeleted;
    }

    /** Restore a soft-deleted customer. */
    default Customer restore(Customer customer) {
        var restored = new Customer(customer.id(), customer.name(), customer.email(), null);
        update(restored);
        return restored;
    }
}
```

### Enforcing Soft Deletes via Callback

To prevent accidental hard deletes, use an entity callback that rejects `delete()` calls, forcing callers to go through `softDelete()` instead:

[Kotlin]
```kotlin
class SoftDeleteGuard : EntityCallback<Customer> {
    override fun beforeDelete(entity: Customer) {
        throw PersistenceException(
            "Hard deletes are not allowed for Customer. Use softDelete() instead."
        )
    }
}
```

[Java]
```java
public class SoftDeleteGuard implements EntityCallback<Customer> {
    @Override
    public void beforeDelete(Customer entity) {
        throw new PersistenceException(
            "Hard deletes are not allowed for Customer. Use softDelete() instead.");
    }
}
```

---

## Pagination and Scrolling

Storm provides two strategies for traversing large result sets: pagination (by page number) and scrolling (by cursor).
You do not need to define your own page wrappers or write raw `LIMIT`/`OFFSET` queries.

### Offset-Based Pagination

Use the `page()` method on any entity or projection repository. Storm executes the data query and count query automatically, returning a `Page` that includes the result list and total count.

[Kotlin]
```kotlin
// First page of 20 users (page numbers are zero-based)
val page: Page<User> = userRepository.page(0, 20)

// With sort order using Pageable
val pageable = Pageable.ofSize(20).sortBy(User_.name)
val sortedPage: Page<User> = userRepository.page(pageable)

// Navigate forward
if (sortedPage.hasNext()) {
    val nextPage = userRepository.page(sortedPage.nextPageable())
}

// Navigate backward
if (sortedPage.hasPrevious()) {
    val previousPage = userRepository.page(sortedPage.previousPageable())
}
```

[Java]
```java
// First page of 20 users (page numbers are zero-based)
Page<User> page = userRepository.page(0, 20);

// With sort order using Pageable
Pageable pageable = Pageable.ofSize(20).sortBy(User_.name);
Page<User> sortedPage = userRepository.page(pageable);

// Navigate forward
if (sortedPage.hasNext()) {
    Page<User> nextPage = userRepository.page(sortedPage.nextPageable());
}

// Navigate backward
if (sortedPage.hasPrevious()) {
    Page<User> previousPage = userRepository.page(sortedPage.previousPageable());
}
```

The `Page` record carries all the metadata you need for building pagination controls:

| Field / Method | Description |
|---|---|
| `content` | List of results for the current page |
| `totalCount` | Total matching rows across all pages |
| `pageNumber()` | Zero-based index of the current page |
| `pageSize()` | Maximum elements per page |
| `totalPages()` | Computed total number of pages |
| `hasNext()` / `hasPrevious()` | Navigation checks |
| `nextPageable()` / `previousPageable()` | Returns a `Pageable` for the adjacent page |

To load only primary keys instead of full entities, use `pageRef()`:

[Kotlin]
```kotlin
val refPage: Page<Ref<User>> = userRepository.pageRef(0, 20)
```

[Java]
```java
Page<Ref<User>> refPage = userRepository.pageRef(0, 20);
```

### Scrolling

Use the `scroll()` method on any entity repository with a `Scrollable` that captures the cursor state. Scrolling navigates sequentially, using a unique column value (typically the primary key) as a cursor, which lets the database seek directly to the correct position using an index.

[Kotlin]
```kotlin
// First page of 20 users ordered by ID
val window: Window<User> = userRepository.scroll(Scrollable.of(User_.id, 20))

// Navigate forward: next() is non-null whenever the window has content.
// hasNext() is an informational flag indicating whether more rows existed at
// query time, but the developer decides whether to follow the cursor.
val next: Window<User> = userRepository.scroll(window.next())

// Navigate backward
val previous: Window<User> = userRepository.scroll(window.previous())
```

[Java]
```java
// First page of 20 users ordered by ID
Window<User> window = userRepository.scroll(Scrollable.of(User_.id, 20));

// Navigate forward: next() is non-null whenever the window has content.
// hasNext() is an informational flag indicating whether more rows existed at
// query time, but the developer decides whether to follow the cursor.
Window<User> next = userRepository.scroll(window.next());

// Navigate backward
Window<User> previous = userRepository.scroll(window.previous());
```

Each method returns a `Window` containing the page content and navigation cursors for sequential traversal. The `hasNext()` and `hasPrevious()` flags reflect whether additional rows existed at query time, but they are not prerequisites for calling `next()` or `previous()`. Both methods return a non-null `Scrollable` whenever the window contains at least one element, and return `null` only when the window is empty. This means you can always follow the cursor if you choose to; for example, new rows may have been inserted after the original query.
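The scrolling strategy corresponds to keyset (seek) pagination in SQL: instead of skipping rows with `OFFSET`, the query seeks past the last key of the previous window (`WHERE id > :cursor ORDER BY id LIMIT :size`). A framework-independent sketch of that cursor logic, simulated over an in-memory sorted list (names here are illustrative, not Storm API):

```java
import java.util.List;
import java.util.stream.Stream;

/** Keyset (seek) pagination logic, simulated over sorted in-memory ids. */
public class KeysetDemo {

    /** Returns the next window of ids strictly after the cursor (null = start). */
    static List<Long> window(List<Long> sortedIds, Long cursor, int size) {
        Stream<Long> s = sortedIds.stream();
        if (cursor != null) {
            // The seek: filter past the cursor instead of counting/skipping rows.
            s = s.filter(id -> id > cursor);
        }
        return s.limit(size).toList();
    }

    public static void main(String[] args) {
        List<Long> ids = List.of(1L, 2L, 3L, 5L, 8L, 13L);
        List<Long> first = window(ids, null, 3);    // [1, 2, 3]
        Long cursor = first.get(first.size() - 1);  // last key becomes the cursor
        List<Long> second = window(ids, cursor, 3); // [5, 8, 13]
        System.out.println(first + " then " + second);
    }
}
```

Because the cursor is a key value rather than an offset, rows inserted or deleted before the cursor do not shift subsequent windows, which is why scrolling stays stable under concurrent writes.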
For REST APIs, `Window` also provides `nextCursor()` and `previousCursor()` to serialize the scroll position as an opaque string, and `Scrollable.fromCursor(key, cursor)` to reconstruct a `Scrollable` from a cursor string.

See [Repositories: Scrolling](repositories.md#scrolling) for the full API, including sort overloads, filtering, and Ref variants.

### Choosing Between the Two

| Factor | Pagination (`page`) | Scrolling (`scroll`) |
|---|---|---|
| Request type | `Pageable` | `Scrollable` |
| Result type | `Page` | `Window` |
| Navigation | page number | cursor |
| Count query | yes | no |
| Random access | yes | no |
| Performance at page 1 | Good | Good |
| Performance at page 1,000 | Degrades (database must skip rows) | Consistent (index seek) |
| Handles concurrent inserts | Rows may shift between pages | Stable cursor |
| Navigate forward | `page.nextPageable()` | `window.next()` |
| Navigate backward | `page.previousPageable()` | `window.previous()` |

Use pagination when you need random page access or a total count (for example, displaying "Page 3 of 12" in a UI). Use scrolling when you need consistent performance over deep result sets or when the data changes frequently between requests.

---

## Bulk Import

For large-scale data imports, use Storm's streaming batch methods. These process entities from a `Stream` in configurable batch sizes, keeping memory usage constant regardless of the total number of entities.

[Kotlin]
```kotlin
// Read from a CSV file and insert in batches of 500.
val entityStream = csvReader.lines()
    .map { line -> parseUser(line) }
orm.entity(User::class).insert(entityStream, batchSize = 500)
```

For imports where auto-generated primary keys should be ignored (e.g., migrating data with existing IDs):

```kotlin
orm.entity(User::class).insert(entityStream, batchSize = 500, ignoreAutoGenerate = true)
```

[Java]
```java
// Read from a CSV file and insert in batches of 500.
Stream<User> entityStream = csvReader.lines()
    .map(line -> parseUser(line));
orm.entity(User.class).insert(entityStream, 500);
```

For imports where auto-generated primary keys should be ignored (e.g., migrating data with existing IDs):

```java
orm.entity(User.class).insert(entityStream, 500, true);
```

The streaming API processes entities lazily: only one batch is held in memory at a time. This makes it suitable for importing millions of rows without running out of memory.

---

## Row-Level Security

Row-level security restricts which rows a user can access based on their identity or role. Storm does not provide built-in row-level security, but you can implement it using entity callbacks and the SQL interceptor.

### Via Entity Callbacks

Use a callback to enforce write-level security by rejecting mutations that cross a tenant boundary:

[Kotlin]
```kotlin
class TenantIsolationCallback : EntityCallback<TenantEntity<*>> {
    override fun beforeInsert(entity: TenantEntity<*>): TenantEntity<*> {
        val currentTenant = TenantContext.current()
        if (entity.tenantId != currentTenant) {
            throw PersistenceException("Cannot insert entity for tenant ${entity.tenantId}")
        }
        return entity
    }

    override fun beforeUpdate(entity: TenantEntity<*>): TenantEntity<*> {
        val currentTenant = TenantContext.current()
        if (entity.tenantId != currentTenant) {
            throw PersistenceException("Cannot update entity belonging to tenant ${entity.tenantId}")
        }
        return entity
    }

    override fun beforeDelete(entity: TenantEntity<*>) {
        val currentTenant = TenantContext.current()
        if (entity.tenantId != currentTenant) {
            throw PersistenceException("Cannot delete entity belonging to tenant ${entity.tenantId}")
        }
    }
}
```

[Java]
```java
public class TenantIsolationCallback implements EntityCallback<TenantEntity<?>> {
    @Override
    public TenantEntity<?> beforeInsert(TenantEntity<?> entity) {
        String currentTenant = TenantContext.current();
        if (!entity.tenantId().equals(currentTenant)) {
            throw new PersistenceException("Cannot insert entity for tenant " + entity.tenantId());
        }
        return entity;
    }

    @Override
    public TenantEntity<?> beforeUpdate(TenantEntity<?> entity) {
        String currentTenant = TenantContext.current();
        if (!entity.tenantId().equals(currentTenant)) {
            throw new PersistenceException("Cannot update entity belonging to tenant " + entity.tenantId());
        }
        return entity;
    }

    @Override
    public void beforeDelete(TenantEntity<?> entity) {
        String currentTenant = TenantContext.current();
        if (!entity.tenantId().equals(currentTenant)) {
            throw new PersistenceException("Cannot delete entity belonging to tenant " + entity.tenantId());
        }
    }
}
```

========================================
## Source: comparison.md
========================================

# Storm vs Other Frameworks

There is no universally "best" database framework. Each has strengths suited to different situations, team preferences, and project requirements. Teams approach data access differently, including using frameworks at various abstraction levels or even plain SQL.

This page provides a comparison to help you evaluate whether Storm fits your needs, particularly if you value explicit and predictable behavior and fast development. We encourage you to explore the linked documentation for each framework and form your own conclusions.

## Feature Comparison

The following tables provide a side-by-side comparison of concrete features across all frameworks discussed on this page. "Yes" and "No" indicate built-in support; "Manual" means the feature is achievable but requires explicit effort from the developer.
### Entity & Data Modeling

| Feature | Storm | JPA | Spring Data | MyBatis | jOOQ | JDBI | Exposed | Ktorm |
|---------|-------|-----|-------------|---------|------|------|---------|-------|
| Lines per entity | ~5 | ~30¹ | ~30¹ | ~20+ | Generated | ~15 | ~12 | ~15 |
| Immutable entities | Yes | No | No | Yes | Yes | Yes | DSL only | No |
| Polymorphism | Yes² | Yes | Via JPA | No | No | No | No | No |
| Automatic relationships | Yes | Yes³ | Via JPA | No | No | No | DAO only | No |
| Cascade persist | No | Yes | Yes | No | No | No | No | No |
| Lifecycle callbacks | Yes | Yes | Via JPA | No | Yes | No | DAO only | No |

¹ JPA/Spring Data lines without Lombok; ~10 lines with Lombok.
² Storm supports Single-Table, Joined Table, and Polymorphic FK strategies using sealed types. JPA additionally supports Table-per-Class and multi-level inheritance hierarchies.
³ JPA relationships are runtime-managed via proxies.

### Querying & Data Access

| Feature | Storm | JPA | Spring Data | MyBatis | jOOQ | JDBI | Exposed | Ktorm |
|---------|-------|-----|-------------|---------|------|------|---------|-------|
| Type-safe queries | Yes | Criteria | No | No | Yes | No | Yes | Yes |
| SQL Templates | Yes | No | No | XML/Ann | Yes | Yes | No | No |
| N+1 prevention | Yes | No | No | No | Manual | Manual | No | No |
| Lazy loading | Refs | Yes | Yes | No | No | No | Yes | Yes |
| Scrolling | Yes | No | Yes | No | Yes | No | No | No |
| JSON columns | Yes | Yes⁴ | Via JPA | Manual | Yes | Module | Yes | Module |
| JSON aggregation | Yes | No | No | No | Yes | No | No | No |

⁴ JPA requires Hibernate 6.2+ for built-in JSON support; older versions need a third-party library or custom `AttributeConverter`.

### Runtime & Ecosystem

| Feature | Storm | JPA | Spring Data | MyBatis | jOOQ | JDBI | Exposed | Ktorm |
|---------|-------|-----|-------------|---------|------|------|---------|-------|
| Transactions | Both | Both | Declarative | Both | Programmatic | Both | Both⁶ | Required |
| Schema validation | Yes | Yes | Via JPA | No | N/A⁵ | No | Yes | No |
| Java support | Yes | Yes | Yes | Yes | Yes | Yes | No | No |
| Kotlin support | First-class | Good | Good | Good | Good | Good | Native | Native |
| Coroutines | Yes | No | No | No | No | No | Yes | Limited |
| Spring integration | Yes | Yes | Native | Yes | Yes | Yes | Yes | Yes |
| Runtime mechanism | Codegen⁷ | Bytecode | Bytecode | Reflection | Codegen | Reflection | Reflection | Reflection |
| Community | New | Huge | Huge | Large | Medium | Medium | Medium | Small |

⁵ jOOQ generates code from the database schema, so schema validation is inherent in its code generation step.
⁶ Exposed requires `transaction {}` blocks natively, but supports declarative `@Transactional` via its Spring integration module.
⁷ Storm uses codegen with reflection fallback.

---

## Storm vs JPA/Hibernate

JPA (typically implemented by Hibernate) is the most widely used persistence framework in the Java ecosystem. It provides a full object-relational mapping layer with managed entities and second-level caching. Storm takes a fundamentally different approach: entities are plain values with no managed state, and database interactions are explicit rather than implicit. This makes Storm simpler to reason about at the cost of JPA's more automated (but less predictable) features.
| Aspect | Storm | JPA/Hibernate |
|--------|-------|---------------|
| **Entities** | Immutable records/data classes | Mutable classes with getters/setters |
| **Polymorphism** | Sealed types (Single-Table, Joined, Polymorphic FK); STRING, INTEGER, CHAR discriminators | Class hierarchy (Single-Table, Joined, Table-per-Class); STRING, INTEGER, CHAR discriminators |
| **State** | Stateless; no persistence context | Managed entities |
| **Loading** | Eager loading in a single query | Lazy loading common |
| **N+1 Problem** | Prevented by design; requires explicit opt-in | Common pitfall |
| **Queries** | Type-safe DSL, SQL Templates | JPQL, Criteria API |
| **Caching** | Transaction-scoped observation | First/second level cache |
| **Transactions** | Programmatic + `@Transactional` (Spring) | `@Transactional`, JTA, container-managed |
| **Schema Validation** | Programmatic + Spring Boot | `ddl-auto=validate` |
| **Learning Curve** | Gentle; SQL-like | Steep; many concepts |
| **Magic** | What you see is what you get | Proxies, bytecode enhancement |

**Polymorphism differences.** Storm and JPA overlap on Single-Table and Joined Table, but diverge beyond that. Storm adds [Polymorphic FK](polymorphism.md#polymorphic-foreign-keys), a two-column foreign key (type + id) that references independent tables with no shared base. This has no JPA equivalent (Hibernate offers the non-standard `@Any` annotation for a similar purpose). JPA adds Table-per-Class, which duplicates all fields into per-subtype tables and queries the base type via `UNION ALL`, and multi-level inheritance (e.g., `Animal` → `Pet` → `Cat`). Storm intentionally limits hierarchies to a single sealed level, which covers the vast majority of real-world use cases while keeping SQL generation predictable.
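Storm's single-sealed-level restriction maps directly onto Java's sealed types. As a plain-Java illustration of the shape (Storm mapping annotations omitted; type names are hypothetical), a sealed base with a fixed set of record subtypes gives the compiler the full set of permitted implementations, which is what makes discriminator handling exhaustive and predictable:

```java
/** A single-level sealed hierarchy: the permitted subtypes are known at
 *  compile time, which is what lets an ORM map them to a discriminator
 *  column or per-subtype tables. Names here are illustrative only. */
sealed interface Payment permits CardPayment, BankTransfer {}

record CardPayment(long id, String cardNumber) implements Payment {}
record BankTransfer(long id, String iban) implements Payment {}

public class SealedDemo {
    /** Exhaustive pattern matching over the sealed type: no default branch needed. */
    static String discriminator(Payment p) {
        return switch (p) {
            case CardPayment c -> "CARD";
            case BankTransfer b -> "BANK";
        };
    }

    public static void main(String[] args) {
        System.out.println(discriminator(new CardPayment(1, "4111")));
    }
}
```

Adding a third subtype to `permits` turns every non-exhaustive switch into a compile error, so the discriminator mapping cannot silently fall out of sync with the hierarchy.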
### When to Choose Storm

- You want predictable, explicit database behavior
- You want concise entity definitions with minimal boilerplate
- N+1 queries have been a recurring problem
- You prefer immutable data structures
- You value simplicity over complexity
- You want a lightweight, minimal dependency footprint
- You're using Kotlin and want idiomatic APIs

### When to Choose JPA/Hibernate

- You rely on second-level caching
- You have complex multi-level inheritance hierarchies (Storm supports [single-level sealed type polymorphism](polymorphism.md))
- You have an existing JPA codebase to maintain
- You need JPA compliance for vendor reasons
- You want access to a large community and extensive resources

## Storm vs Spring Data JPA

Spring Data JPA wraps JPA with a repository abstraction that derives query implementations from method names. It reduces boilerplate but inherits all of JPA's runtime complexity (proxies, managed state, lazy loading). Storm's Spring integration provides similar repository convenience with explicit query bodies instead of naming conventions.

| Aspect | Storm | Spring Data JPA |
|--------|-------|-----------------|
| **Foundation** | Custom ORM | JPA/Hibernate |
| **Polymorphism** | Sealed types (Single-Table, Joined, Polymorphic FK) | Via JPA |
| **Repositories** | Interface with default methods | Interface with method naming, `@Query` |
| **Query Methods** | Explicit DSL in method body | Derived from method names, `@Query` |
| **Entities** | Records/data classes | JPA entities |
| **State** | Stateless | Managed |
| **Transactions** | Programmatic + `@Transactional` (Spring) | `@Transactional` (Spring-managed) |

### When to Choose Storm

- You want stateless, immutable entities with minimal boilerplate
- You prefer explicit query logic over naming conventions
- You want to avoid JPA's complexity
- You want a lightweight, minimal dependency footprint

### When to Choose Spring Data JPA

- You need JPA features (lazy loading, caching)
- You like query derivation from method names
- You're already invested in the JPA ecosystem

## Storm vs MyBatis

MyBatis is a SQL mapper that gives you full control over every query. You write SQL in XML files or annotations and map results to POJOs manually. Storm sits at a higher abstraction level, inferring SQL from entity definitions while still allowing raw SQL when needed. The trade-off is flexibility vs. automation: MyBatis never generates SQL for you, while Storm handles the common cases and lets you drop to raw SQL for complex queries.
| Aspect | Storm | MyBatis |
|--------|-------|---------|
| **Approach** | Stateless ORM | SQL mapper |
| **Polymorphism** | Sealed types (Single-Table, Joined, Polymorphic FK) | Manual (via SQL) |
| **SQL Definition** | Inferred from entities, SQL Templates (optional) | XML files or annotations |
| **Result Mapping** | Automatic from entity definitions | Manual XML/annotation mapping |
| **Entities** | Records/data classes with annotations | POJOs, manual mapping |
| **Relationships** | Automatic via `@FK` | Manual nested queries/joins |
| **Type Safety** | Compile-time checked | String SQL, typed result mapping |
| **N+1 Problem** | Prevented by design; requires explicit opt-in | Manual optimization |
| **Transactions** | Programmatic + `@Transactional` (Spring) | Manual or Spring `@Transactional` |
| **Dynamic SQL** | Kotlin/Java code | XML tags (`<if>`, `<foreach>`) |
| **Learning Curve** | Gentle; annotation-based | Moderate; XML knowledge helpful |

### When to Choose Storm

- You want automatic entity mapping without XML and minimal boilerplate
- You prefer type-safe queries over string SQL
- You want relationships handled automatically
- You value compile-time safety
- You're starting a new project without legacy SQL

### When to Choose MyBatis

- You have complex SQL that doesn't fit ORM patterns
- You need fine-grained control over every query
- You're working with legacy databases or stored procedures
- You need XML-based SQL externalization

## Storm vs jOOQ

jOOQ generates Java code from your database schema, providing a type-safe SQL DSL that mirrors the structure of your tables. Storm also treats the database schema as the source of truth, but instead of generating code from the schema, you write entity definitions that reflect it, and the metamodel is generated from those entities. Both frameworks provide compile-time type safety, but queries look very different.

jOOQ excels at complex SQL (window functions, CTEs, recursive queries) where its DSL closely follows SQL syntax, but this means every join, column reference, and condition must be spelled out explicitly. Storm queries are more concise: the metamodel and automatic join derivation from `@FK` annotations let you write queries that focus on what you want rather than how to join it. Storm excels at entity-oriented operations where automatic relationship handling and repository patterns reduce boilerplate.

| Aspect | Storm | jOOQ |
|--------|-------|------|
| **Approach** | Schema-reflective ORM | Schema-driven code generation |
| **Polymorphism** | Sealed types (Single-Table, Joined, Polymorphic FK) | Manual (via SQL DSL) |
| **Type Safety** | Metamodel from entities | Generated from schema |
| **Setup** | Define entities, code generation | Schema, code generation |
| **Entities** | Records/data classes with `Entity` | Records or POJOs |
| **Query Style** | Repository + ORM DSL + SQL Templates | SQL-like DSL |
| **Query Verbosity** | Concise; auto joins from `@FK`, metamodel shortcuts | Verbose; explicit joins, columns, and conditions |
| **Relationships** | Automatic from `@FK` | Manual joins |
| **Transactions** | Programmatic + `@Transactional` (Spring) | DSL context, Spring integration |
| **License** | Apache 2.0 | Commercial for some DBs |

### When to Choose Storm

- You prefer writing entity definitions that reflect the schema over generating code from it
- You want concise, type-safe queries with automatic join derivation
- You want automatic relationship handling
- You value convention over configuration
- You need a fully open-source solution

### When to Choose jOOQ

- You prefer pure SQL control
- You want native DSL support for advanced SQL features (window functions, CTEs)
- You want a thin layer over SQL with minimal runtime overhead

## Storm vs JDBI

JDBI is a lightweight SQL convenience library that sits just above JDBC. It handles parameter binding, result mapping, and connection management without imposing an object model. Storm provides more structure with entity definitions, automatic relationship loading, and a repository pattern. Choose JDBI when you want minimal abstraction and full SQL control; choose Storm when you want the framework to handle common patterns while still allowing raw SQL escape hatches.

| Aspect | Storm | JDBI |
|--------|-------|------|
| **Level** | Stateless ORM | Low-level SQL mapping |
| **Polymorphism** | Sealed types (Single-Table, Joined, Polymorphic FK) | Manual |
| **Entities** | Automatic from annotations | Manual mapping |
| **Relationships** | Automatic via `@FK` | Manual |
| **Type Safety** | Metamodel DSL | String SQL |
| **Transactions** | Programmatic + `@Transactional` (Spring) | Manual, `@Transaction` annotation |

### When to Choose Storm

- You want automatic entity mapping with concise entity definitions
- You need relationship handling
- You prefer type-safe queries over raw SQL

### When to Choose JDBI

- You want full SQL control
- You prefer minimal abstraction
- You have mostly complex queries that don't fit ORM patterns

---

## Kotlin-Only Frameworks

The following frameworks are Kotlin-only. Storm supports both Kotlin and Java.

## Storm vs Exposed

Exposed is JetBrains' official Kotlin database framework. It offers two APIs: a DSL that mirrors SQL syntax and a DAO layer for ORM-style access. Exposed defines tables as Kotlin objects rather than annotations on data classes. Storm and Exposed share the goal of idiomatic Kotlin database access but differ in entity design (mutable DAO entities vs. immutable data classes) and relationship loading strategy (lazy references vs. eager single-query loading).
| Aspect | Exposed | Storm |
|--------|---------|-------|
| **Language** | Kotlin only | Kotlin + Java |
| **Polymorphism** | No | Sealed types (Single-Table, Joined, Polymorphic FK) |
| **APIs** | DSL (SQL) + DAO (ORM) | Unified ORM + SQL Templates |
| **Table Definition** | DSL objects (`object Users : Table()`) | Annotations on data classes |
| **Entities (DAO)** | Mutable, extend `Entity` class | Immutable data classes (Kotlin) / records (Java) |
| **Relationships** | Lazy references, manual loading | Loading in single query |
| **N+1 Problem** | Possible with DAO | Prevented by design; requires explicit opt-in |
| **Coroutines** | Supported (added later) | First-class from the start |
| **Type Safety** | Column references | Metamodel DSL |
| **Transactions** | `transaction {}` block, declarative via Spring module | Optional, programmatic + declarative |

#### Transaction Propagation

Both Storm and Exposed use a `transaction { }` block for programmatic transaction management, but they differ significantly in propagation support. Exposed's native API supports two modes: shared nesting (the default, where inner blocks join the outer transaction) and savepoint-based nesting (via `useNestedTransactions = true`). For other propagation behaviors, Exposed relies on Spring's `@Transactional` through its `SpringTransactionManager` integration module.
Storm supports all seven standard propagation modes natively in its `transaction { }` block, without requiring Spring:

| Propagation | Storm | Exposed |
|-------------|-------|---------|
| `REQUIRED` | Yes | Yes (default behavior) |
| `REQUIRES_NEW` | Yes | No (Spring only) |
| `NESTED` | Yes | Yes (`useNestedTransactions`) |
| `MANDATORY` | Yes | No |
| `SUPPORTS` | Yes | No |
| `NOT_SUPPORTED` | Yes | No |
| `NEVER` | Yes | No |

This means Storm's programmatic API can express patterns like audit logging (`REQUIRES_NEW`), defensive boundary enforcement (`MANDATORY`, `NEVER`), and non-transactional operations (`NOT_SUPPORTED`) directly in code, while Exposed requires Spring integration for these use cases. See [Transactions](transactions.md) for details and examples of each propagation mode.

#### Transaction Callbacks

Both frameworks allow running logic after a transaction commits or rolls back, but the APIs differ significantly.

Storm provides `onCommit` and `onRollback` callbacks on the `Transaction` object. Callbacks accept suspend functions, execute in registration order, and are resilient to individual failures (remaining callbacks still run). When a callback is registered inside a joined scope (`REQUIRED`, `NESTED`), it is automatically deferred to the outermost physical transaction's commit or rollback, so it only fires when data is actually durable:

```kotlin
transaction {
    orderRepository.insert(order)
    onCommit { emailService.sendConfirmation(order) } // Fires after physical commit
}
```

Exposed uses a `StatementInterceptor` interface with lifecycle methods (`beforeCommit`, `afterCommit`, `beforeRollback`, `afterRollback`, among others) that is registered on the transaction via `registerInterceptor()`. Global interceptors can be registered via Java `ServiceLoader`.
This approach is well suited for cross-cutting concerns that apply to many transactions:

```kotlin
transaction {
    // Exposed: register an interceptor
    registerInterceptor(object : StatementInterceptor {
        override fun afterCommit(transaction: Transaction) {
            emailService.sendConfirmation(order)
        }
    })
    OrderTable.insert { it[id] = order.id }
}
```

| Aspect | Storm | Exposed |
|--------|-------|---------|
| API style | Lambda (`onCommit { }`) | Interface (`StatementInterceptor`) |
| Suspend support | Yes (JDBC) | R2DBC only (`SuspendStatementInterceptor`) |
| Nested transaction behavior | Deferred to physical commit | Fires after savepoint release (data not yet durable) |
| Callback isolation | Yes (remaining callbacks still run on failure) | No (exception propagates, skipping remaining interceptors) |
| Global interceptors | No | Yes (via `ServiceLoader`) |
| Additional hooks | No | `beforeCommit`, `beforeRollback`, `beforeExecution`, `afterExecution` |

The most significant behavioral difference is with nested transactions. Exposed's `afterCommit` fires on the nested transaction's own "commit," which for savepoint-based nesting is just a savepoint release, not an actual database commit. If the outer transaction subsequently rolls back, the `afterCommit` callback will have already executed despite the data never becoming durable. Storm avoids this by deferring callbacks to the outermost physical transaction.

Storm's callback isolation behavior (remaining callbacks still execute when one fails) follows the same approach as Spring's `TransactionSynchronization`, where post-commit and post-completion callbacks are invoked independently. Since callbacks fire after the transaction outcome is final, there is nothing to undo; silently skipping remaining side effects because of one failure would be worse than running them all and surfacing the first exception.
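The isolation rule described above — run every callback to completion, then surface the first failure — can be sketched in plain Java. This is an illustration of the behavior, not Storm's actual implementation; `CallbackRunner` and its names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

public class CallbackRunner {
    // Runs all callbacks in registration order. A failure in one callback
    // does not prevent the remaining callbacks from running; the first
    // exception is rethrown only after every callback has executed.
    static void runAll(List<Runnable> callbacks) {
        RuntimeException first = null;
        for (Runnable callback : callbacks) {
            try {
                callback.run();
            } catch (RuntimeException e) {
                if (first == null) first = e;
            }
        }
        if (first != null) throw first;
    }

    public static void main(String[] args) {
        List<String> executed = new ArrayList<>();
        List<Runnable> callbacks = List.of(
                () -> executed.add("audit"),
                () -> { throw new IllegalStateException("mail server down"); },
                () -> executed.add("metrics"));
        try {
            runAll(callbacks);
        } catch (RuntimeException e) {
            // Both surviving callbacks ran despite the middle one failing.
            System.out.println(executed + " / " + e.getMessage());
        }
    }
}
```

The key design point is that the exception is held, not swallowed: all side effects run, and the caller still observes the first failure.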
Exposed's `StatementInterceptor` also provides hooks that Storm intentionally does not offer: `beforeCommit`, `beforeRollback`, and statement-level interceptors (`beforeExecution`, `afterExecution`). In Storm's stateless model, pre-commit logic belongs at the end of the `transaction { }` block itself, since there is no persistence context to flush or dirty state to reconcile before the commit. Statement-level observability is covered by Storm's [`@SqlLog`](sql-logging.md) annotation and `SqlCapture` test utility rather than a general interceptor mechanism.

#### Schema Migration

Exposed provides built-in schema management through its `SchemaUtils` utility. You can create tables, add missing columns, and generate migration statements programmatically:

```kotlin
transaction {
    SchemaUtils.create(UsersTable, OrdersTable)           // CREATE TABLE IF NOT EXISTS
    SchemaUtils.createMissingTablesAndColumns(UsersTable) // ALTER TABLE ADD COLUMN ...
    SchemaUtils.statementsRequiredToActualizeScheme()     // Returns DDL statements without executing
}
```

This is convenient for prototyping and simple applications. For production use, JetBrains recommends pairing Exposed with a dedicated migration tool like Flyway or Liquibase, since `SchemaUtils` does not handle column removal, type changes, or data migration.

Storm does not include schema management or migration utilities. Schema management is expected to be handled externally using tools like Flyway, Liquibase, or plain SQL scripts. Storm's [schema validation](validation.md) feature can verify at startup that entity definitions match the database schema, catching mismatches early without modifying the schema itself.
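As a concrete illustration of the external-migration approach, a Flyway versioned migration is just a plain SQL file whose name encodes the version and description. The table and columns below are hypothetical examples, not part of Storm:

```sql
-- src/main/resources/db/migration/V1__create_user.sql
CREATE TABLE "user" (
    id    INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    email VARCHAR(255) NOT NULL UNIQUE,
    name  VARCHAR(255) NOT NULL
);
```

Flyway applies such files in version order and records them in its history table; Storm's schema validation can then confirm at startup that the entities match the migrated schema.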
### When to Choose Storm

- You need Kotlin and Java support
- You want concise, immutable entities without base class inheritance
- You prefer annotation-based entity definitions
- N+1 queries are a concern
- You want relationships loaded automatically
- You need full support for transaction propagation modes

### When to Choose Exposed

- You're building a Kotlin-only project
- You prefer DSL-based table definitions
- You want to switch between SQL DSL and DAO styles
- You like the JetBrains ecosystem integration
- You need fine-grained control over lazy loading
- You need R2DBC support for reactive database access*

\*Storm uses JDBC and relies on JVM virtual threads for non-blocking I/O instead of R2DBC.

## Storm vs Ktorm

Ktorm is a lightweight Kotlin ORM that uses entity interfaces and DSL-based table definitions. It requires no code generation and has minimal dependencies. Storm differs primarily in its use of immutable data classes (instead of mutable interfaces), automatic relationship loading, and optional metamodel generation for compile-time type safety.
| Aspect | Ktorm | Storm |
|--------|-------|-------|
| **Language** | Kotlin only | Kotlin + Java |
| **Polymorphism** | No | Sealed types (Single-Table, Joined, Polymorphic FK) |
| **Entities** | Interfaces extending `Entity` | Data classes with annotations |
| **Table Definition** | DSL objects (`object Users : Table`) | Annotations on data classes |
| **Query Style** | Sequence API, DSL | ORM DSL + SQL Templates |
| **Relationships** | References, manual loading | Automatic loading |
| **N+1 Problem** | Possible | Prevented by design; requires explicit opt-in |
| **Code Generation** | None required | Optional metamodel |
| **Immutability** | Mutable entity interfaces | Immutable data classes |
| **Coroutines** | Limited | First-class support |
| **Transactions** | `useTransaction {}` block | Programmatic + `@Transactional` (Spring) |

### When to Choose Storm

- You need Kotlin and Java support
- You want concise, immutable data classes instead of mutable interfaces
- You prefer annotation-based definitions
- N+1 prevention is important
- You want automatic relationship loading

### When to Choose Ktorm

- You're building a Kotlin-only project
- You prefer no code generation
- You like the Sequence API style
- You prefer DSL-based table definitions

## Summary

Storm is a newer framework, so community resources and third-party tutorials are still growing. However, the API is designed to be intuitive for developers familiar with SQL, Kotlin, and modern Java.

Choose Storm if you value:

- **Simplicity** over complexity
- **Predictability** over magic
- **Immutability** over managed state
- **Explicit** over implicit behavior
- **Kotlin and modern Java** development with first-class support for both

Ready to try it? See the [Getting Started](getting-started.md) guide.
## Framework Links

- [Hibernate ORM](https://hibernate.org/orm/)
- [Spring Data JPA](https://spring.io/projects/spring-data-jpa)
- [MyBatis](https://mybatis.org/mybatis-3/)
- [jOOQ](https://www.jooq.org/)
- [JDBI](https://jdbi.org/)
- [Exposed](https://github.com/JetBrains/Exposed)
- [Ktorm](https://www.ktorm.org/)

========================================
## Source: faq.md
========================================

# Frequently Asked Questions

## General

### What databases does Storm support?

Storm works with any JDBC-compatible database. Dialect packages provide optimized support for PostgreSQL, MySQL, MariaDB, Oracle, and MS SQL Server. See [Database Dialects](dialects.md).

### Does Storm require preview features?

- **Kotlin:** No. The Kotlin API has no preview dependencies.
- **Java:** Yes. The Java API is built on String Templates, a preview feature that is still evolving in the JDK. String Templates are the best way to write SQL that is both readable and injection-safe by design, and Storm ships with support today rather than waiting for the feature to stabilize. If you prefer a stable API right now, the Kotlin API requires no preview features.

Only `storm-java21` depends on this preview feature. The Java API is production-ready from a quality perspective, but its API surface will adapt as String Templates move toward a stable release.

### Can I use Storm with Spring Boot?

Yes. Storm integrates seamlessly with Spring Boot. See [Spring Integration](spring-integration.md).

### Is Storm production-ready?

Yes. Storm is used in production environments and follows semantic versioning for stable releases.

### Does Storm support schema validation?

Yes.
Storm can validate your entity definitions against the actual database schema, catching mismatches like missing tables, missing columns, type incompatibilities, type narrowing (potential precision loss), nullability differences, primary key mismatches, missing sequences, missing unique constraints, and missing foreign key constraints. This works similarly to Hibernate's `ddl-auto=validate`, but Storm never modifies the schema.

Enable it in Spring Boot:

```yaml
storm:
  validation:
    schema-mode: fail # or "warn" or "none"
```

Or call it programmatically:

[Kotlin]
```kotlin
orm.validateSchemaOrThrow()
```

[Java]
```java
orm.validateSchemaOrThrow();
```

See [Configuration: Schema Validation](configuration.md#schema-validation) for full details.

---

## What Storm Does Not Do

Storm is intentionally scoped. The following are conscious design decisions, not missing features. Each reflects a trade-off that keeps the framework simple, predictable, and free of hidden behavior.

### No Schema Generation or Migration

Storm never issues DDL statements (CREATE TABLE, ALTER TABLE, DROP TABLE). It reads and writes data, but never modifies the database structure. For schema management, use dedicated migration tools like [Flyway](https://flywaydb.org/) or [Liquibase](https://www.liquibase.org/). Storm's [schema validation](validation.md) can verify that your entities match the database at startup, serving as a safety net alongside your migration tool.

### No Lazy-Loading Proxies

Storm does not use bytecode manipulation or runtime proxies to intercept field access. This eliminates `LazyInitializationException`, hidden database queries, and session-dependent entity behavior. Relationships declared with `@FK` are loaded eagerly in a single query. When you need deferred loading (for example, a rarely-accessed large sub-graph), use `Ref` to make the database access explicit and intentional. See [Entities: Deferred Loading](entities.md#deferred-loading-with-ref) for details.
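The idea behind `Ref` — store only the foreign key, and make the database access an explicit call — can be illustrated with a minimal stand-in. This is not Storm's `Ref` API; the `DeferredRef` type and its loader are hypothetical, with a `Map` standing in for the database:

```java
import java.util.Map;
import java.util.function.Function;

public class DeferredRefDemo {
    // A reference that holds only the foreign key and loads the target
    // on an explicit fetch() call -- no proxies, no hidden queries.
    record DeferredRef<ID, T>(ID id, Function<ID, T> loader) {
        T fetch() {
            return loader.apply(id); // the database access is visible right here
        }
    }

    public static void main(String[] args) {
        // Stand-in for a database lookup by primary key.
        Map<Integer, String> departments = Map.of(7, "Engineering");
        DeferredRef<Integer, String> department = new DeferredRef<>(7, departments::get);

        System.out.println(department.id());    // available without any lookup
        System.out.println(department.fetch()); // explicit, intentional load
    }
}
```

Because the load is an ordinary method call rather than intercepted field access, there is no session to detach from and no equivalent of `LazyInitializationException`.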
### No Second-Level Cache

Storm maintains only a transaction-scoped entity cache for identity guarantees and dirty checking. There is no cross-transaction or application-wide cache. This avoids cache invalidation complexity, stale data bugs, and the configuration burden of managing cache regions. For caching reference data or frequently-read entities, use Spring's `@Cacheable` annotation or a dedicated caching layer (Redis, Caffeine) at the service level, where cache scope and invalidation strategy are explicit.

### No Bytecode Manipulation

Storm does not enhance, instrument, or proxy your entity classes at build time or runtime. Entities are plain Kotlin data classes or Java records with no hidden behavior. The metamodel is generated at compile time by a KSP plugin (Kotlin) or annotation processor (Java), but this is standard code generation, not bytecode rewriting.

---

## Entities

### Why use records/data classes instead of regular classes?

Storm entities are pure data carriers. They never need to intercept method calls, track dirty fields, or manage lifecycle state. Data classes (Kotlin) and records (Java) are the natural fit because the language enforces immutability and generates `equals`, `hashCode`, and `toString` for free. This eliminates an entire category of bugs related to mutable shared state, identity confusion, and missing boilerplate.

- **Immutability:** Prevents accidental state changes.
- **Simplicity:** No boilerplate getters/setters.
- **Equality:** Value-based equals/hashCode by default.
- **Transparency:** No hidden proxy magic.

### How do I modify a Java record entity?

Since Java records are immutable, you need to create a new instance with the changed values. There are several approaches:

**Lombok `@Builder(toBuilder = true)` (recommended):** Generates a builder that copies all fields from an existing instance. This is the most ergonomic option and is used throughout Storm's own test suite.
See [Modifying Entities](entities.md#modifying-entities) for the annotation setup.

```java
var updated = user.toBuilder().email("new@example.com").build();
orm.entity(User.class).update(updated);
```

**Canonical constructor:** Call the record constructor directly. This works but becomes unwieldy as the number of fields grows.

```java
var updated = new User(user.id(), "new@example.com", user.name(), user.city());
```

**Custom wither methods:** Define `with*` methods on the record that return a new instance with a single field changed. Clean API but requires a method per field.

```java
record User(@PK Integer id,
            @Nonnull String email,
            @Nonnull String name,
            @FK City city) implements Entity {

    User withEmail(String email) {
        return new User(id, email, name, city);
    }
}
```

**Future: JEP 468 (Derived Record Creation):** Java has proposed language-level support through [JEP 468](https://openjdk.org/jeps/468), which would allow concise copy-with-modification syntax without any external tooling:

```java
// Proposed syntax (not yet available)
var updated = user with { email = "new@example.com"; }
```

Until this feature is finalized, `@Builder(toBuilder = true)` remains the recommended approach.

> **Note:** Kotlin data classes have a built-in `copy()` method that handles this naturally: `user.copy(email = "new@example.com")`.

### Can I use inheritance with Storm entities?

Kotlin data classes and Java records cannot extend other classes, but Storm supports polymorphic entity hierarchies using sealed interfaces. A sealed interface defines the type hierarchy, and each permitted subtype is a record or data class. Storm provides three inheritance strategies: **Single-Table** (all subtypes in one table), **Joined Table** (base table plus extension tables), and **Polymorphic FK** (independent tables referenced via a two-column foreign key). See the [Polymorphism](polymorphism.md) guide for details.
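Sealed hierarchies of records also compose well with Java's exhaustive `switch` pattern matching, which is one reason they work well as an inheritance substitute. A plain-Java sketch, independent of Storm (the `Vehicle` hierarchy is a made-up example):

```java
public class SealedDemo {
    // A sealed interface fixes the set of subtypes at compile time.
    sealed interface Vehicle permits Car, Truck {}
    record Car(int seats) implements Vehicle {}
    record Truck(double payloadTons) implements Vehicle {}

    // The compiler verifies this switch covers every permitted subtype,
    // so adding a new subtype is a compile error until it is handled.
    static String describe(Vehicle v) {
        return switch (v) {
            case Car c -> "car with " + c.seats() + " seats";
            case Truck t -> "truck carrying " + t.payloadTons() + " tons";
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(new Car(5)));
        System.out.println(describe(new Truck(7.5)));
    }
}
```

This compile-time exhaustiveness is what lets a framework map each permitted subtype to its own table or discriminator value without runtime surprises.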
To share fields across unrelated entities (without a polymorphic hierarchy), extract them into an embedded record or data class and include it as a field.

### Which discriminator type should I use (STRING, INTEGER, CHAR)?

Storm supports three discriminator column types via the `type()` attribute on `@Discriminator`:

- **STRING** (default): Uses a `VARCHAR` column. Values are human-readable strings like the class name (`"Cat"`, `"Dog"`) or custom labels. This is the best choice for most new schemas because the discriminator values are self-documenting in the database.
- **INTEGER**: Uses an `INTEGER` column. Each subtype must declare an explicit numeric value (e.g., `@Discriminator("1")`). Use this when your schema already has a numeric type code column, or when you need compact discriminator storage on high-volume tables.
- **CHAR**: Uses a `CHAR(1)` column. Each subtype must declare a single-character value (e.g., `@Discriminator("C")`). This provides a compact, fixed-width discriminator that is still somewhat readable.

If you are designing a new schema, `STRING` is the simplest choice. If you are integrating with an existing schema that uses integer or character type codes, use `INTEGER` or `CHAR` to match. See [Polymorphism: Discriminator Types](polymorphism.md#discriminator-types) for code examples.

### Why does the Polymorphic FK sealed interface extend `Data` instead of `Entity`?

In Storm, `Entity` represents a type backed by a specific database table. For Polymorphic FK, the sealed interface does not correspond to any table. It groups unrelated entities under a common type so they can be referenced by a two-column foreign key (discriminator + ID). Because the interface has no table, it extends `Data` (a marker for types that participate in SQL generation without owning a table). Each subtype independently implements `Entity` because each one maps to its own independent table.
This design ensures that Storm treats the sealed interface as a type constraint rather than a table reference. The discriminator column in the referencing entity identifies which subtype (and therefore which table) the foreign key points to, while the ID column identifies the specific row.

[Kotlin]
```kotlin
// Data: no table, just a type grouping
sealed interface Commentable : Data {
    // Entity: has its own table
    data class Post(@PK val id: Int = 0, val title: String) : Commentable, Entity
    data class Photo(@PK val id: Int = 0, val url: String) : Commentable, Entity
}
```

[Java]
```java
// Data: no table, just a type grouping
sealed interface Commentable extends Data permits Post, Photo {}

// Entity: has its own table
record Post(@PK Integer id, String title) implements Commentable, Entity {}
record Photo(@PK Integer id, String url) implements Commentable, Entity {}
```

See [Polymorphism: Polymorphic Foreign Keys](polymorphism.md#polymorphic-foreign-keys) for the full explanation.

### How do I handle auto-generated IDs?

Storm detects auto-generated IDs by checking whether the primary key is set to its default value (Kotlin) or `null` (Java). When inserting an entity with a null or default-valued primary key, Storm omits the ID from the INSERT statement and lets the database assign it. The generated ID is returned and available on the inserted instance.

```kotlin
data class User(@PK val id: Int = 0, val name: String) : Entity

val user = orm insert User(name = "Alice") // id will be populated
```

### Can I use UUID primary keys?

Yes. Storm supports any type as a primary key, including `UUID`. When using UUIDs, you typically generate the ID on the client side rather than relying on database auto-increment. This works well for distributed systems where coordination-free ID generation is important.
```kotlin
data class User(@PK val id: UUID = UUID.randomUUID(), val name: String) : Entity
```

---

## Data Classes

### When should I use Data vs Entity vs Projection vs plain records?

Storm provides multiple data class types to match different use cases. `Entity` is the primary type for tables you read from and write to. `Projection` maps to the same table but exposes a subset of columns, useful for read-heavy queries where you do not need the full row. `Data` is a marker interface for ad-hoc query results that span multiple tables or include computed columns; Storm can still generate SQL fragments for `Data` types. Plain records (with no Storm interface) work when you write the entire SQL yourself and just need result mapping.

| Use Case | Type | Example |
|----------|------|---------|
| Reusable types for CRUD operations | `Entity` | `User`, `Order` |
| Reusable read-only views | `Projection` | `UserSummary`, `OrderView` |
| Single-use query with SQL template support | `Data` | Ad-hoc joins with SQL generation |
| Single-use query with complete manual SQL | Plain record | Complex aggregations, CTEs |

See [SQL Templates](sql-templates.md) for details on using `Data` and plain records.

---

## Queries

### How do I prevent N+1 queries?

You do not need to take any special action. Storm prevents N+1 queries by design. When you define a relationship with `@FK`, Storm generates a single SQL query that joins the related tables and hydrates the entire entity graph from one result set. There is no lazy loading that triggers additional queries behind the scenes. If you need a reference to a related entity without loading its full graph, use `Ref` to defer fetching until you explicitly call `fetch()`.

### Can I write raw SQL?

Yes.
Use SQL templates for raw queries:

[Kotlin]
```kotlin
orm.query { "SELECT * FROM user WHERE email = $email" }
    .resultList
```

[Java]
```java
orm.query(RAW."SELECT * FROM user WHERE email = \{email}")
    .getResultList();
```

Interpolated values like `email` are automatically converted to bind variables (`?`) in the generated SQL, preventing SQL injection.

### How do I handle pagination?

Storm supports two strategies. **Offset-based pagination** uses `offset()` and `limit()` on the query builder, which translates directly to SQL `OFFSET` and `LIMIT`. This works well for small tables or when users need to jump to arbitrary page numbers.

[Kotlin]
```kotlin
val page = orm.entity(User::class)
    .select()
    .orderByDescending(User_.createdAt)
    .offset(20)
    .limit(10)
    .resultList
```

[Java]
```java
var page = orm.entity(User.class)
    .select()
    .orderByDescending(User_.createdAt)
    .offset(20)
    .limit(10)
    .getResultList();
```

For large tables where users scroll through results sequentially, prefer **scrolling** via `scroll()` with a `Scrollable`. This is available directly on repositories and on the query builder, and remains performant regardless of how deep into the result set you are. `Window` intentionally does not include a total element count, since a separate `COUNT(*)` must execute the same joins and filters as the main query, which can be expensive on large or complex result sets. Total counts are also inherently unstable, as rows may be inserted or deleted while a user navigates through pages. If you need a total count separately, use the `count` (Kotlin) or `getCount()` (Java) method on the query builder. See [Pagination and Scrolling: Scrolling](pagination-and-scrolling.md#scrolling) for a full explanation.

[Kotlin]
```kotlin
val window = userRepository.scroll(Scrollable.of(User_.id, 20))
// next() is non-null when the window has content.
// hasNext() is informational; the developer decides whether to follow the cursor.
val next = userRepository.scroll(window.next())
```

[Java]
```java
Window<User> window = userRepository.scroll(Scrollable.of(User_.id, 20));
// next() is non-null when the window has content.
// hasNext() is informational; the developer decides whether to follow the cursor.
Window<User> next = userRepository.scroll(window.next());
```

### Why does my DELETE without a WHERE clause throw an exception?

By default, Storm rejects DELETE and UPDATE queries that have no WHERE clause with a `PersistenceException`. This is a safety mechanism that prevents accidental deletion or modification of every row in a table.

This protection is particularly valuable because `QueryBuilder` is immutable. If you accidentally ignore the return value of `where()` on a delete builder, the WHERE clause is silently lost and the query would affect all rows. The safety check catches this at runtime:

[Kotlin]
```kotlin
// This throws PersistenceException: the where() return value is discarded,
// so the delete has no WHERE clause and Storm blocks it.
val builder = userRepository.delete()
builder.where(User_.city eq city)
builder.executeUpdate()

// Correct: chain the calls so the WHERE clause is included.
userRepository.delete()
    .where(User_.city eq city)
    .executeUpdate()
```

[Java]
```java
// This throws PersistenceException: the where() return value is discarded,
// so the delete has no WHERE clause and Storm blocks it.
var builder = userRepository.delete();
builder.where(User_.city, EQUALS, city);
builder.executeUpdate();

// Correct: chain the calls so the WHERE clause is included.
userRepository.delete()
    .where(User_.city, EQUALS, city)
    .executeUpdate();
```

If you genuinely need to delete all rows from a table, use the `removeAll()` convenience method:

[Kotlin]
```kotlin
userRepository.removeAll()
```

[Java]
```java
userRepository.removeAll();
```

Alternatively, you can use the builder approach and call `unsafe()` to opt out of the safety check:

[Kotlin]
```kotlin
userRepository.delete().unsafe().executeUpdate()
```

[Java]
```java
userRepository.delete().unsafe().executeUpdate();
```

The `unsafe()` method signals that the absence of a WHERE clause is intentional. Without it, Storm assumes the missing WHERE clause is a mistake. The `removeAll()` convenience method calls `unsafe()` internally.

### Can I use database-specific functions?

Yes. Use SQL templates for database-specific SQL:

[Kotlin]
```kotlin
orm.query { "SELECT * FROM user WHERE LOWER(email) = LOWER($email)" }
    .resultList
```

[Java]
```java
orm.query(RAW."SELECT * FROM user WHERE LOWER(email) = LOWER(\{email})")
    .getResultList();
```

---

## Relationships

### How do I model one-to-many relationships?

Storm does not store collections on entities. This is intentional: collection fields on entities are the root cause of lazy loading, N+1 queries, and unpredictable fetch behavior in JPA. Instead, query the "many" side explicitly. This makes the database access visible in your code and gives you full control over filtering, ordering, and pagination of the related records.

```kotlin
// Instead of user.orders (not supported)
val orders = orm.findAll(Order_.user eq user)
```

### Why doesn't Storm support lazy loading?

Lazy loading requires runtime proxies that intercept method calls on entity fields. This introduces hidden database access, makes entity behavior depend on session state, and is the primary source of `LazyInitializationException` in JPA applications. Storm avoids this entirely by loading the full entity graph in one query.
When you genuinely need to defer loading of a relationship (for example, a rarely-accessed large sub-graph), use `Ref`. A `Ref` holds only the foreign key ID until you explicitly call `fetch()`, making the database access visible and intentional.

```kotlin
data class User(@PK val id: Int = 0, @FK val department: Ref<Department>) : Entity
```

### How do I handle circular references?

Circular references (such as an employee who references a manager, who is also an employee) would cause infinite recursion during eager loading. Use `Ref` to break the cycle. The `Ref` stores only the foreign key ID, preventing Storm from recursively loading the full graph. You can fetch the referenced entity on demand when needed.

```kotlin
data class Employee(@PK val id: Int = 0, @FK val manager: Ref<Employee>?) : Entity
```

---

## Transactions

### How do transactions work in Kotlin?

Storm provides a `transaction {}` block that wraps its body in a JDBC transaction. The block commits automatically on successful completion and rolls back on any exception. You can nest transactions with propagation modes (such as `NESTED` for savepoints or `REQUIRES_NEW` for independent transactions). Inside the block, all Storm operations share the same connection and participate in the same transaction.

```kotlin
transaction {
    orm insert User(name = "Alice")
    // Commits on success, rolls back on exception
}
```

### Can I use Spring's @Transactional?

Yes. Storm participates in Spring-managed transactions automatically. Enable transaction integration for Kotlin to mix declarative and programmatic styles.

### How do I do nested transactions?

Use propagation modes:

```kotlin
transaction(propagation = REQUIRED) {
    transaction(propagation = NESTED) {
        // Creates savepoint; can rollback independently
    }
}
```

---

## Performance

### Is Storm fast?

Yes. Storm adds minimal overhead on top of JDBC. There are no runtime proxies, no bytecode enhancement, and no reflection on the hot path (when using the generated metamodel).
The framework generates SQL at query build time and executes it directly through JDBC prepared statements.

Key performance features:

- Single-query entity graph loading
- Batch insert/update/delete
- Streaming for large result sets
- Connection pooling support

### How do I optimize large result sets?

Loading millions of rows into a `List` consumes proportional memory and delays processing until the entire result set is fetched. Streaming processes rows one at a time as the database returns them, keeping memory usage constant regardless of result set size. In Kotlin, Storm exposes streams as `Flow`, which integrates naturally with coroutines.

```kotlin
val users: Flow<User> = orm.entity(User::class).selectAll()
users.collect { processUser(it) }
```

### How does dirty checking work?

When you read an entity within a transaction, Storm stores the original field values in the entity cache. When you later call `update()` with a modified copy, Storm compares the new values against the cached original to determine which fields actually changed. In `FIELD` mode, only the changed columns appear in the UPDATE statement. In `ENTITY` mode, Storm issues a full-row update but can skip the statement entirely if nothing changed. See [Dirty Checking](dirty-checking.md) for configuration details.

---

## Troubleshooting

### My where/orderBy/limit clause has no effect

`QueryBuilder` is immutable. Every builder method returns a *new* instance with the modification applied, leaving the original unchanged. If you call a method like `where()`, `orderBy()`, or `limit()` and ignore the return value, the change is silently lost.
[Kotlin]
```kotlin
// Wrong: the where clause is discarded
val builder = userRepository.select()
builder.where(User_.active, EQUALS, true) // returns a new builder, but it's ignored
builder.resultList // executes without the WHERE clause

// Correct: chain the calls
val results = userRepository.select()
    .where(User_.active, EQUALS, true)
    .resultList
```

[Java]
```java
// Wrong: the where clause is discarded
var builder = userRepository.select();
builder.where(User_.active, EQUALS, true); // returns a new builder, but it's ignored
builder.getResultList(); // executes without the WHERE clause

// Correct: chain the calls
var results = userRepository.select()
    .where(User_.active, EQUALS, true)
    .getResultList();
```

This applies to all builder methods: `where()`, `orderBy()`, `limit()`, `offset()`, `distinct()`, `groupBy()`, `having()`, joins, and locking methods like `forUpdate()`. Always use the returned builder.

For DELETE and UPDATE queries, this mistake is especially dangerous because a lost WHERE clause means the operation applies to every row in the table. Storm guards against this by default: executing a DELETE or UPDATE without a WHERE clause throws a `PersistenceException`. See [Why does my DELETE without a WHERE clause throw an exception?](#why-does-my-delete-without-a-where-clause-throw-an-exception) above for details.

### My entity isn't mapping correctly

Storm maps entity fields to database columns by converting field names from camelCase to snake_case. If your schema uses a different convention, explicit column annotations are required. The most common mapping issues stem from missing annotations or name mismatches.

1. Check that `@PK` is present on the primary key field.
2. Verify field names match database columns (or use `@DbColumn`).
3. Ensure the entity implements `Entity` for repository operations.

### I'm getting "column not found" errors

Storm uses snake_case by default. `birthDate` maps to `birth_date`.
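The default naming conversion is a straightforward camelCase-to-snake_case mapping. A sketch of the rule in plain Java (illustrative only, not Storm's actual implementation):

```java
public class NamingDemo {
    // Inserts an underscore before each uppercase letter and lowercases it,
    // mirroring the camelCase -> snake_case convention described above.
    static String toSnakeCase(String name) {
        StringBuilder sb = new StringBuilder();
        for (char c : name.toCharArray()) {
            if (Character.isUpperCase(c)) {
                sb.append('_').append(Character.toLowerCase(c));
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toSnakeCase("birthDate")); // birth_date
        System.out.println(toSnakeCase("createdAt")); // created_at
    }
}
```

If a column in your schema does not follow this convention, the conversion produces a name the database does not have, which is the usual cause of "column not found" errors.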
Use `@DbColumn` for custom mappings: ```kotlin @DbColumn("dateOfBirth") val birthDate: LocalDate ``` ### Upsert isn't working Upsert (INSERT ... ON CONFLICT) is a database-specific feature. Storm delegates to the dialect module for your database to generate the correct SQL. Without the dialect dependency, Storm cannot produce the upsert syntax. 1. Ensure you have included the dialect dependency for your database. 2. Verify your table has a primary key or unique constraint. 3. Pass the default value `0` (Kotlin) or `null` (Java) for the primary key. ### Refs won't fetch A `Ref` created manually with `Ref.of(Type.class, id)` holds only the foreign key value. It is not connected to a database session and cannot fetch the referenced entity. Only `Ref` instances loaded from the database within an active transaction have the context needed to execute the fetch query. If you need to resolve a reference by ID, use the repository's `findById()` or `getById()` method instead. ### Streams are empty or already closed Storm's Java streams are backed by a JDBC `ResultSet`, which is tied to the database connection. The stream must be consumed within the scope that opened it. Returning an unconsumed stream from a try-with-resources block closes the underlying `ResultSet` before the caller can read any rows. Either consume the stream inside the block or ensure the caller is responsible for closing it. ```java // Wrong: stream closed before consumption Stream<User> getUsers() { try (var users = orm.entity(User.class).selectAll()) { return users; // Stream is closed when method returns } } // Right: consume within the block List<User> getUsers() { try (var users = orm.entity(User.class).selectAll()) { return users.toList(); } } ``` ### How do I see the SQL Storm generates? Annotate your repository with `@SqlLog` to log all generated SQL: ```java @SqlLog public interface UserRepository extends EntityRepository<User, Long> { ... 
} ``` To see executable SQL with actual parameter values instead of `?` placeholders, use `inlineParameters`: ```java @SqlLog(inlineParameters = true) public interface UserRepository extends EntityRepository<User, Long> { ... } ``` See [SQL Logging](sql-logging.md) for the full guide. ### Schema validation reports type narrowing warnings for my Integer columns Some databases (notably Oracle) use a single numeric type for all integer columns. For example, Oracle's `NUMBER` maps to `java.sql.Types.NUMERIC`, which Storm considers a "narrowing" conversion for `Integer` fields. These are logged as warnings because the mapping works at runtime but may involve precision differences. If the warnings are expected, you can suppress them per field with `@DbIgnore`: ```kotlin data class User( @PK val id: Int = 0, @DbIgnore("Oracle NUMBER maps to NUMERIC") val score: Int ) : Entity ``` Alternatively, if you want zero tolerance, enable strict mode to treat these warnings as errors: ```yaml storm: validation: strict: true ``` See [Configuration: Schema Validation](configuration.md#schema-validation) for details. ### Can I use Storm without Spring? Yes. Storm has no dependency on Spring. Create an `ORMTemplate` from any JDBC `DataSource`: ```kotlin val orm = ORMTemplate.of(dataSource) ``` Spring integration is optional via the `storm-spring` or `storm-kotlin-spring` modules. ======================================== ## Source: migration-from-jpa.md ======================================== # Migration from JPA This guide helps you transition from JPA/Hibernate to Storm. The two frameworks can coexist in the same application, allowing you to migrate gradually, one entity or repository at a time. 
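Since the two stacks coexist, the first practical step is adding Storm's build dependency next to the existing JPA starter. A Gradle sketch of what that might look like (the `storm-kotlin-spring` module name comes from the docs above, but the group and version coordinates here are illustrative placeholders, not published artifact names):

```kotlin
dependencies {
    // Existing JPA stack stays in place while entities migrate one by one.
    implementation("org.springframework.boot:spring-boot-starter-data-jpa")

    // Storm alongside it. Group and version are illustrative placeholders;
    // check the Storm setup documentation for the real coordinates.
    implementation("org.storm-orm:storm-kotlin-spring:VERSION")
}
```

Both dependencies can remain in the build until the last JPA entity is migrated, at which point the JPA starter can be removed.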
## Key Differences | JPA/Hibernate | Storm | |---------------|-------| | Mutable entities with proxies | Immutable records/data classes | | Managed persistence context | Stateless operations | | Lazy loading by default | Eager loading in single query | | `@Entity`, `@Id`, `@Column` | `@PK`, `@FK`, `@DbColumn` | | JPQL / Criteria API | Type-safe DSL / SQL Templates | | EntityManager | ORMTemplate | | `@OneToMany`, `@ManyToOne` | `@FK` annotation | ## Entity Migration The biggest conceptual shift from JPA to Storm is the move from mutable, proxy-backed classes to immutable records and data classes. In JPA, entities carry hidden state: change-tracking proxies, managed lifecycle, and lazy-loading hooks injected via bytecode. Storm eliminates all of this. An entity is a plain value object with annotations that describe its mapping. The database interaction happens in repositories and templates, not inside the entity itself. This separation makes entities safe to pass across layers, serialize, and store in collections without worrying about detachment or session scope. ### Complete Before/After Walkthrough The following example demonstrates migrating a complete JPA entity with relationships, a Spring Data repository, and JPQL queries to their Storm equivalents. **JPA Entity:** ```java @Entity @Table(name = "user") public class User { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Long id; @Column(nullable = false, unique = true) private String email; @Column(nullable = false) private String name; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "city_id") private City city; @Column(name = "created_at") private LocalDateTime createdAt; // Getters and setters (15+ lines omitted)... } ``` **Storm Entity:** [Kotlin] ```kotlin data class User( @PK val id: Long = 0, val email: String, val name: String, @FK val city: City?, val createdAt: LocalDateTime? 
) : Entity ``` Storm derives the table name (`user`) from the class name and column names (`email`, `name`, `city_id`, `created_at`) from the field names, both using camelCase-to-snake_case conversion. The `@PK` annotation marks the primary key, and `@FK` marks the foreign key relationship. No `@Column`, `@Table`, `@GeneratedValue`, or `@JoinColumn` annotations are needed because the defaults match. The default value `id: Long = 0` tells Storm that the ID is auto-generated. [Java] ```java record User( @PK Long id, @Nonnull String email, @Nonnull String name, @Nullable @FK City city, @Nullable LocalDateTime createdAt ) implements Entity {} ``` Storm derives the table name (`user`) from the class name and column names (`email`, `name`, `city_id`, `created_at`) from the field names, both using camelCase-to-snake_case conversion. The `@PK` annotation marks the primary key, and `@FK` marks the foreign key relationship. No `@Column`, `@Table`, `@GeneratedValue`, or `@JoinColumn` annotations are needed because the defaults match. Passing `null` for the ID tells Storm that the ID is auto-generated. **JPA Repository (Spring Data):** ```java @Repository public interface UserRepository extends JpaRepository<User, Long> { Optional<User> findByEmail(String email); List<User> findByCityOrderByNameAsc(City city); @Query("SELECT u FROM User u WHERE u.createdAt > :since") List<User> findRecentUsers(@Param("since") LocalDateTime since); } ``` **Storm Repository:** [Kotlin] ```kotlin interface UserRepository : EntityRepository<User, Long> { fun findByEmail(email: String): User? 
= find(User_.email eq email) fun findByCity(city: City): List<User> = select() .where(User_.city eq city) .orderBy(User_.name) .resultList fun findRecentUsers(since: LocalDateTime): List<User> = findAll(User_.createdAt gt since) } ``` [Java] ```java interface UserRepository extends EntityRepository<User, Long> { default Optional<User> findByEmail(String email) { return select() .where(User_.email, EQUALS, email) .getOptionalResult(); } default List<User> findByCity(City city) { return select() .where(User_.city, EQUALS, city) .orderBy(User_.name) .getResultList(); } default List<User> findRecentUsers(LocalDateTime since) { return select() .where(User_.createdAt, GREATER_THAN, since) .getResultList(); } } ``` The key difference is that Storm repository methods have explicit method bodies with the query logic visible in the source code. There is no query derivation from method names. Every query is IDE-navigable and compiler-checked. ## Annotation Mapping Storm uses fewer annotations than JPA because it derives most mapping information from the entity structure itself. Table names follow from the class name (converted to snake_case), and column names follow from field names. You only need annotations for primary keys, foreign keys, and cases where the default naming does not match your schema. | JPA | Storm | |-----|-------| | `@Entity` | Implement `Entity` interface | | `@Table(name = "...")` | `@DbTable("...")` | | `@Id` | `@PK` | | `@Column(name = "...")` | `@DbColumn("...")` | | `@ManyToOne` | `@FK` | | `@JoinColumn` | Column name in `@FK("...")` | | `@Version` | `@Version` | ## Repository Migration JPA repositories (particularly Spring Data JPA) rely on method name conventions or `@Query` annotations to define queries. Storm repositories use explicit method bodies with a type-safe DSL. This means slightly more code per method, but every query is visible in the source, IDE-navigable, and compiler-checked. There are no hidden query derivation rules to memorize. 
### JPA Repository ```java @Repository public interface UserRepository extends JpaRepository<User, Long> { Optional<User> findByEmail(String email); List<User> findByCity(City city); } ``` ### Storm Repository [Kotlin] ```kotlin interface UserRepository : EntityRepository<User, Long> { fun findByEmail(email: String): User? = find(User_.email eq email) fun findByCity(city: City): List<User> = findAll(User_.city eq city) } ``` [Java] ```java interface UserRepository extends EntityRepository<User, Long> { default Optional<User> findByEmail(String email) { return select() .where(User_.email, EQUALS, email) .getOptionalResult(); } default List<User> findByCity(City city) { return select() .where(User_.city, EQUALS, city) .getResultList(); } } ``` ## Query Migration Storm offers two query approaches: the type-safe DSL (using the generated metamodel) and SQL Templates (for raw SQL with type interpolation). The DSL covers common CRUD patterns concisely, while SQL Templates let you write arbitrary SQL without losing type safety on parameters and result mapping. The examples below show how each JPA query style maps to Storm equivalents. ### JPQL [Kotlin] ```java // JPA @Query("SELECT u FROM User u WHERE u.email = :email") Optional<User> findByEmail(@Param("email") String email); ``` ```kotlin // Storm fun findByEmail(email: String): User? 
= find(User_.email eq email) ``` [Java] ```java // JPA @Query("SELECT u FROM User u WHERE u.email = :email") Optional<User> findByEmail(@Param("email") String email); // Storm default Optional<User> findByEmail(String email) { return select() .where(User_.email, EQUALS, email) .getOptionalResult(); } ``` ### Criteria API [Kotlin] ```java // JPA Criteria CriteriaBuilder cb = em.getCriteriaBuilder(); CriteriaQuery<User> cq = cb.createQuery(User.class); Root<User> root = cq.from(User.class); cq.where(cb.equal(root.get("city"), city)); return em.createQuery(cq).getResultList(); ``` ```kotlin // Storm orm.findAll(User_.city eq city) ``` [Java] ```java // JPA Criteria CriteriaBuilder cb = em.getCriteriaBuilder(); CriteriaQuery<User> cq = cb.createQuery(User.class); Root<User> root = cq.from(User.class); cq.where(cb.equal(root.get("city"), city)); return em.createQuery(cq).getResultList(); // Storm orm.entity(User.class) .select() .where(User_.city, EQUALS, city) .getResultList(); ``` ### Native Queries ```java // JPA @Query(value = "SELECT * FROM users WHERE email LIKE %:pattern%", nativeQuery = true) List<User> searchByEmail(@Param("pattern") String pattern); // Storm (Java) orm.query(RAW."SELECT \{User.class} FROM \{User.class} WHERE email LIKE \{pattern}") .getResultList(User.class); ``` ## Relationship Changes JPA models relationships bidirectionally with annotations like `@OneToMany` and `@ManyToOne`, relying on lazy-loading proxies to defer fetching. Storm takes a different approach: relationships are unidirectional, defined by `@FK` on the owning side, and loaded eagerly by default in the same query. When you need deferred loading (for example, to avoid loading a large sub-graph), wrap the field type in `Ref` to make fetching explicit. ### Lazy Loading to Eager/Ref JPA default: lazy loading with proxy ```java // JPA - fetches city on access (N+1 risk) user.getCity().getName(); ``` Storm options: 1. 
**Eager loading** (default with `@FK`): ```kotlin data class User(@PK val id: Long = 0, @FK val city: City) : Entity // City loaded in same query as User ``` 2. **Deferred loading** (with `Ref`): ```kotlin data class User(@PK val id: Long = 0, @FK val city: Ref<City>) : Entity // City loaded explicitly when needed val cityName = user.city.fetch().name ``` ### OneToMany Collections JPA approach: ```java @OneToMany(mappedBy = "user") private List<Order> orders; ``` Storm approach (query the "many" side): ```kotlin val orders = orm.findAll(Order_.user eq user) ``` ## Transaction Migration Storm supports both Spring's `@Transactional` annotation and its own programmatic `transaction {}` block. If you are migrating a Spring application, your existing `@Transactional` annotations continue to work unchanged. Storm participates in the same Spring-managed transaction. The programmatic API is useful when you want explicit control over isolation levels, propagation, or when working outside of Spring entirely. ### JPA @Transactional ```java @Transactional public void createUser(String email) { userRepository.save(new User(email)); } ``` ### Storm (works with Spring @Transactional) [Kotlin] ```kotlin @Transactional fun createUser(email: String) { orm insert User(email = email) } ``` ### Storm Programmatic ```kotlin transaction { orm insert User(email = email) } ``` [Java] ```java @Transactional public void createUser(String email) { orm.entity(User.class).insert(new User(null, email, null, null, null)); } ``` ## Schema Management Storm validates your schema but does not generate or migrate it. Storm never issues DDL statements (`CREATE TABLE`, `ALTER TABLE`, etc.) against your database. For schema migrations, use dedicated tools like [Flyway](https://flywaydb.org/) or [Liquibase](https://www.liquibase.org/) alongside Storm. If you are coming from JPA with `ddl-auto=update` or `ddl-auto=create`, you will need to manage schema changes explicitly. 
This is a deliberate choice: automatic schema generation is convenient for prototyping but dangerous in production, where unreviewed DDL can cause data loss or downtime. Flyway and Liquibase give you version-controlled, reviewable, repeatable migrations. ### Flyway Example Layout A typical project structure places Flyway migrations alongside your Storm entities: ``` src/ ├── main/ │ ├── kotlin/com/example/ │ │ ├── entity/ │ │ │ ├── User.kt │ │ │ └── Order.kt │ │ └── repository/ │ │ ├── UserRepository.kt │ │ └── OrderRepository.kt │ └── resources/ │ ├── application.yml │ └── db/migration/ │ ├── V1__create_user_table.sql │ ├── V2__create_order_table.sql │ └── V3__add_user_email_index.sql ``` Each `V*__.sql` file contains the DDL for that migration step. Flyway runs them in order and tracks which migrations have been applied. Spring Boot auto-configures Flyway when it is on the classpath, so no additional setup is needed beyond adding the dependency. ### Recommended Schema Validation Configuration Storm's schema validation (see [Validation](validation.md)) acts as a safety net that catches drift between your entity definitions and the actual database structure. Use different modes depending on the environment: ```yaml # Development: warn on mismatches but allow startup storm: validation: schema-mode: warn # CI and production: block startup if entities don't match the schema storm: validation: schema-mode: fail ``` The `warn` mode is useful during development when you are iterating on both entities and migrations simultaneously. The `fail` mode is recommended for CI pipelines and production, where a mismatch indicates either a missing migration or an entity definition that is out of sync. See [Validation](validation.md) for details on the checks performed and how to suppress known mismatches with `@DbIgnore`. ## Gradual Migration Strategy A full rewrite is rarely practical. 
Storm and JPA can share the same DataSource, so you can migrate incrementally without a flag day. Start with leaf entities (those with no inbound foreign keys from other JPA entities) and work inward. Each migrated entity reduces your JPA surface area without breaking existing code. 1. **Add Storm dependencies** alongside JPA. 2. **Create Storm entities** for new tables. 3. **Migrate simple entities first** (no complex relationships). 4. **Replace lazy loading with Ref** where needed. 5. **Migrate repositories** one at a time. 6. **Update service layer** to use Storm repositories. 7. **Remove JPA entities and dependencies** when complete. ## Running Storm Alongside JPA Storm and JPA can coexist in the same Spring Boot application. Both frameworks use JDBC under the hood, so they share the same `DataSource`, connection pool, and Spring-managed transactions. This means a single `@Transactional` method can call both a JPA repository and a Storm repository, and both operations will participate in the same database transaction. This works because Spring's `PlatformTransactionManager` manages the underlying JDBC connection. Both JPA (via its `EntityManager`) and Storm (via its `ORMTemplate`) obtain connections from the same `DataSource`, and Spring ensures they share the transaction context. ### Configuration No special configuration is needed beyond making sure Storm's `ORMTemplate` uses the same `DataSource` that JPA uses. Spring Boot's auto-configuration handles this automatically when you include the Storm Spring Boot Starter. ### Example: Mixed JPA and Storm Service [Kotlin] ```kotlin // JPA entity (legacy) @Entity @jakarta.persistence.Table(name = "legacy_customer") class LegacyCustomer { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) var id: Long? 
= null var name: String = "" var email: String = "" } // JPA repository (legacy) interface LegacyCustomerRepository : JpaRepository<LegacyCustomer, Long> // Storm entity (new) data class CustomerProfile( @PK val id: Long = 0, val customerId: Long, val bio: String, val avatarUrl: String? ) : Entity // Storm repository (new) interface CustomerProfileRepository : EntityRepository<CustomerProfile, Long> { fun findByCustomerId(customerId: Long): CustomerProfile? = find(CustomerProfile_.customerId eq customerId) } // Service that uses both @Service class CustomerService( private val legacyCustomerRepository: LegacyCustomerRepository, private val customerProfileRepository: CustomerProfileRepository ) { @Transactional fun createCustomerWithProfile(name: String, email: String, bio: String): CustomerProfile { // JPA insert val customer = LegacyCustomer().apply { this.name = name this.email = email } legacyCustomerRepository.save(customer) // Storm insert in the same transaction val profile = CustomerProfile( customerId = customer.id!!, bio = bio, avatarUrl = null ) customerProfileRepository.insert(profile) return profile } } ``` [Java] ```java // JPA entity (legacy) @jakarta.persistence.Entity @jakarta.persistence.Table(name = "legacy_customer") public class LegacyCustomer { @jakarta.persistence.Id @jakarta.persistence.GeneratedValue(strategy = GenerationType.IDENTITY) private Long id; private String name; private String email; // getters and setters omitted } // JPA repository (legacy) public interface LegacyCustomerRepository extends JpaRepository<LegacyCustomer, Long> {} // Storm entity (new) public record CustomerProfile( @PK Long id, long customerId, String bio, @Nullable String avatarUrl ) implements Entity {} // Storm repository (new) public interface CustomerProfileRepository extends EntityRepository<CustomerProfile, Long> { default Optional<CustomerProfile> findByCustomerId(long customerId) { return select() .where(CustomerProfile_.customerId, EQUALS, customerId) .getOptionalResult(); } } // Service that uses both @Service public class CustomerService { private final 
LegacyCustomerRepository legacyCustomerRepository; private final CustomerProfileRepository customerProfileRepository; public CustomerService(LegacyCustomerRepository legacyCustomerRepository, CustomerProfileRepository customerProfileRepository) { this.legacyCustomerRepository = legacyCustomerRepository; this.customerProfileRepository = customerProfileRepository; } @Transactional public CustomerProfile createCustomerWithProfile(String name, String email, String bio) { // JPA insert var customer = new LegacyCustomer(); customer.setName(name); customer.setEmail(email); legacyCustomerRepository.save(customer); // Storm insert in the same transaction var profile = new CustomerProfile(null, customer.getId(), bio, null); customerProfileRepository.insert(profile); return profile; } } ``` Both the JPA `save()` and the Storm `insert()` execute within the same database transaction. If either operation fails, the entire transaction rolls back. This works because both frameworks delegate to Spring's transaction manager, which coordinates the underlying JDBC connection. ## Common Pitfalls The most frequent issues arise from habits carried over from JPA. The following patterns cover the mistakes that developers encounter most often during migration. ### Missing Eager Fetch In JPA, relationships are lazy-loaded by default, so you can define a foreign key column as a raw ID and still access the related entity through the proxy. Storm has no proxies. If you declare a field as a raw ID (e.g., `val cityId: Long`), Storm treats it as a plain column value with no relationship. To load the related entity, use `@FK` with the entity type. ```kotlin // Wrong - city not available data class User(@PK val id: Long = 0, val cityId: Long) : Entity // Right - city loaded with user data class User(@PK val id: Long = 0, @FK val city: City) : Entity ``` ### Mutable Habits JPA entities are mutable: you call setters, and the persistence context tracks changes automatically. 
Storm entities are immutable values. To modify an entity, create a new instance with the changed fields using Kotlin's `copy()` method or Java's record `with` pattern. The original instance remains unchanged, which makes reasoning about state straightforward. ```kotlin // Wrong (JPA style) user.setName("New Name") // Right (Storm style) val updated = user.copy(name = "New Name") orm update updated ``` ### Collection Expectations Storm deliberately does not support collection fields on entities: collections on entities lead to lazy loading, N+1 queries, and unpredictable behavior. Query relationships explicitly: ```kotlin // Wrong expectation val orders = user.orders // Not supported // Right approach val orders = orm.findAll(Order_.user eq user) ``` ## Schema Validation If you relied on Hibernate's `ddl-auto=validate` to catch entity/schema mismatches, Storm offers the same capability through its schema validation feature. Enable it in `application.yml`: ```yaml storm: validation: schema-mode: fail ``` This validates all entity definitions against the database schema and blocks startup if any mismatches are found. See [Configuration: Schema Validation](configuration.md#schema-validation) for details on the checks performed, warning vs. error severity, strict mode, and `@DbIgnore` for suppressing known mismatches. ## What You Gain After migrating from JPA to Storm, you can expect: - **No more N+1 queries.** Entity graphs load in a single query by default. - **No more LazyInitializationException.** No proxies, no surprise database hits. - **No more detached entity errors.** Entities are stateless and always safe to use. - **Simpler entities.** Records and data classes with a few annotations replace complex JPA mappings. - **Predictable SQL.** What you see is what gets executed, no hidden query generation. - **Fewer lines of code.** Typically ~5 lines per entity vs. ~30 for JPA. 
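The copy-on-write style behind several of these gains needs no framework code at all to demonstrate. A minimal plain-Java sketch (names are illustrative; Java records have no built-in `with`, so a hand-written "wither" method plays the role of Kotlin's `copy()`):

```java
// Plain Java record in the style of a Storm entity (annotations omitted).
record User(Long id, String name, String email) {
    // Hand-written "wither": returns a modified copy, original unchanged.
    User withName(String newName) {
        return new User(id, newName, email);
    }
}

public class ImmutableUpdateDemo {
    public static void main(String[] args) {
        var user = new User(1L, "Alice", "alice@example.com");
        var updated = user.withName("Alice B.");
        // The original value is untouched; only the copy carries the change.
        System.out.println(user.name());     // Alice
        System.out.println(updated.name());  // Alice B.
    }
}
```

Because the original instance can never be mutated behind your back, passing entities between layers or caching them requires no defensive copying.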
======================================== ## Source: glossary.md ======================================== # Glossary This page defines key terms used throughout the Storm documentation. --- **Dirty Checking** The process of determining which fields of an entity have changed since it was last read from the database. Storm compares the current entity state against the observed state stored in the transaction context. Only changed columns are included in the UPDATE statement. Because entities are immutable, dirty checking is fast and requires no bytecode manipulation. See [Dirty Checking](dirty-checking.md). **Entity** A Kotlin data class or Java record that implements the `Entity` interface and maps to a database table. Entities support full CRUD operations (insert, update, remove) through repositories. They are stateless and immutable, with no proxies or hidden state. See [Entities](entities.md). **Entity Cache** A transaction-scoped cache that stores entities by primary key during a transaction. It avoids redundant database round-trips, skips repeated object construction during hydration, preserves object identity within a transaction, and tracks observed state for dirty checking. The cache is automatically cleared on commit or rollback. See [Entity Cache](entity-cache.md). **Entity Graph** The tree of related entities loaded through `@FK` relationships in a single query using JOINs. When Storm loads a `User` that has `@FK val city: City`, it automatically joins the `city` table and returns a fully populated `User` with its `City` object. This eliminates the N+1 query problem. See [Relationships](relationships.md). **Entity Lifecycle** The set of callback hooks (`beforeInsert`, `afterInsert`, `beforeUpdate`, `afterUpdate`, `beforeDelete`, `afterDelete`) that fire around mutation operations. Implemented via the `EntityCallback` interface, these hooks enable cross-cutting concerns like auditing and validation. See [Entity Lifecycle](entity-lifecycle.md). 
**Hydration** The process of transforming flat database rows into structured Kotlin data classes or Java records. Storm maps SELECT columns to constructor parameters by position, with no runtime reflection on column names. Hydration plans are compiled once per type and reused. See [Hydration](hydration.md). **Inline Record** A plain data class or record (without implementing `Entity`) that is embedded within an entity. Inline records group related fields (like an address or compound key) into a reusable structure. Their fields are stored as columns in the parent entity's table, not in a separate table. Also called an "embedded component." See [Entities](entities.md#embedded-components). **Metamodel** A set of companion classes (e.g., `User_`, `City_`) generated at compile time by Storm's KSP processor (Kotlin) or annotation processor (Java). The metamodel provides type-safe references to entity fields for use in queries, predicates, and ordering. See [Metamodel](metamodel.md). **ORM Template** The central entry point for all Storm database operations (`ORMTemplate`). Created from a JDBC `DataSource`, `Connection`, or JPA `EntityManager`, it is thread-safe and typically instantiated once at application startup. It provides access to entity repositories, query builders, and SQL template execution. See [First Entity](first-entity.md#create-the-orm-template). **Projection** A read-only data class or record that implements the `Projection` interface. Projections represent database views or complex query results defined via `@ProjectionQuery`. Unlike entities, projections only support read operations. See [Projections](projections.md). **Ref** A lightweight identifier (`Ref`) that carries only the record type and primary key, deferring the loading of the full record until `fetch()` is called. Using `Ref` instead of `City` in a foreign key field avoids the automatic JOIN, reducing query width when the related data is not always needed. See [Refs](refs.md). 
**Repository** An interface that provides database access methods for an entity or projection type. `EntityRepository` offers built-in CRUD operations; `ProjectionRepository` offers read-only operations. Custom repositories extend these interfaces with domain-specific query methods. See [Repositories](repositories.md). **Scrollable** A scroll request that captures cursor state for fetching a window of results. The scrolling counterpart of `Pageable`. Created via `Scrollable.of(key, size)` or obtained from `Window.next()` / `Window.previous()`, which are always non-null when the window has content. Supports cursor serialization for REST APIs via `toCursor()` / `Scrollable.fromCursor(key, cursor)`. See [Pagination and Scrolling: Scrolling](pagination-and-scrolling.md#scrolling). **SQL Template** Storm's template engine that uses string interpolation to embed entity types, metamodel fields, and parameter values into SQL text. Types expand to column lists, metamodel fields to column names, and values to parameterized placeholders. SQL Templates are the foundation of all Storm queries, including those generated by repositories. See [SQL Templates](sql-templates.md). **Static Metamodel** See [Metamodel](#metamodel) above. **Storm Config** A configuration object (`StormConfig`) that controls runtime behavior for features like dirty checking mode, entity cache retention, and template cache size. All settings have sensible defaults, so configuration is optional. See [Configuration](configuration.md). **Window** A window of query results from a scrolling operation. A `Window` contains the result list (`content`), informational `hasNext` and `hasPrevious` flags (a snapshot at query time), and navigation tokens (`next()`, `previous()`) for sequential traversal. The navigation tokens are always non-null when the window has content; `hasNext` and `hasPrevious` are not prerequisites for accessing them, since new data may appear after the query. 
Also provides `nextCursor()` / `previousCursor()` for REST API cursor strings. See [Pagination and Scrolling: Scrolling](pagination-and-scrolling.md#scrolling). ======================================== ## Source: ai.md ======================================== # AI-Assisted Development Storm is an AI-first ORM. Entities are plain Kotlin data classes or Java records. Queries are explicit SQL. Built-in verification lets AI validate its own work before anything touches production. > **Info:** Storm keeps you in control. `ORMTemplate.validateSchema()` validates that entities match the database. `SqlCapture` validates that queries match the intent. `@StormTest` runs both checks in an isolated in-memory database before anything reaches production. The AI generates code, then Storm verifies it. That is what AI-first means here. --- ## Quick Setup Install the Storm CLI and run it in your project: ```bash npm install -g @storm-orm/cli storm init ``` Or without installing globally: ```bash npx @storm-orm/cli init ``` The interactive setup walks you through three steps: ### 1. Select AI tools Choose which AI coding tools you use. Storm configures each one with rules, skills, and (optionally) a database-aware MCP server. You can select multiple tools if your team uses different editors. Storm currently supports Claude Code, Cursor, GitHub Copilot, Windsurf, and Codex. Each tool stores its configuration in a different location, but the content is the same: Storm's conventions, entity rules, query patterns, and verification guidelines. See [AI Tools Reference](ai-reference.md) for the full list of configuration locations. ### 2. Rules and skills For each selected tool, Storm installs two types of AI context: **Rules** are a project-level configuration file that is always loaded by the AI tool. They contain Storm's key patterns, naming conventions, and critical constraints (immutable QueryBuilder, no collection fields on entities, `Ref` for circular references, etc.). 
The rules ensure the AI follows Storm's conventions in every interaction, without you having to repeat them. **Skills** are per-topic guides that the AI loads on demand when working on a specific task. Each skill contains focused instructions, code examples, and common pitfalls for one area of Storm (entities, queries, repositories, migrations, JSON, serialization, and more). Skills are fetched from orm.st during setup and can be updated automatically on each run without requiring a CLI update. See [AI Tools Reference](ai-reference.md#skills) for the full list. ### 3. Database connection (optional) If you have a local development database running, Storm can set up a schema-aware MCP server. This gives your AI tool access to your actual database structure (table definitions, column types, foreign keys) without exposing credentials or data. The MCP server runs locally on your machine, exposes only schema metadata by default, and stores credentials in `~/.storm/` (outside your project, outside the LLM's reach). It supports PostgreSQL, MySQL, MariaDB, Oracle, SQL Server, SQLite, and H2. You can connect multiple databases to a single project, even across different database types. Optionally, you can enable read-only data access per connection. This lets the AI query individual records to inform type decisions — for example, recognizing that a `VARCHAR` column contains enum-like values, or that a `TEXT` column stores JSON. Data access is disabled by default because it means actual data from your database flows through the AI's context. When enabled, the database connection is read-only (enforced at both the application and database driver level), and the AI cannot write, modify, or delete data. See [Database Connections & MCP — Security](database-and-mcp.md#security) for the full details. With the database connected, three additional skills become available for schema inspection, entity validation against the live schema, and entity generation from database tables. 
See [AI Tools Reference](ai-reference.md#database-skills) for details.

To manage database connections later, use `storm db` for the global connection library and `storm mcp` for project-level configuration. See [Database Connections & MCP](database-and-mcp.md) for the full guide.

> **Tip:** The Storm MCP server works standalone, no Storm ORM required. Run `npx @storm-orm/cli mcp init` to set up schema access and optional read-only data queries without installing Storm rules or skills. See [Using Without Storm ORM](database-and-mcp.md#using-without-storm-orm).

---

## Manual Setup

If you prefer to configure your AI tool manually, Storm publishes two machine-readable documentation files following the [llms.txt standard](https://llmstxt.org/):

| File | URL | Best for |
|------|-----|----------|
| `llms.txt` | [orm.st/llms.txt](https://orm.st/llms.txt) | Quick reference with essential patterns and gotchas |
| `llms-full.txt` | [orm.st/llms-full.txt](https://orm.st/llms-full.txt) | Complete documentation for tools with large context windows |

[Claude Code]

Use `@url` to fetch Storm context in a conversation:

```
@url https://orm.st/llms-full.txt
```

[Cursor]

Add Storm documentation as a doc source in Cursor settings:

1. Open **Settings > Features > Docs**
2. Click **Add new doc**
3. Enter `https://orm.st/llms-full.txt`

[Other Tools]

Most AI coding tools support adding context through URLs or pasted text. Point your tool at `https://orm.st/llms-full.txt` for complete documentation.

---

## Why Storm Works Well With AI

AI works better when framework behavior is explicit and visible in source code. Traditional ORMs rely on mechanisms that are powerful but implicit: proxy objects that intercept field access, lazy loading that triggers queries at unpredictable moments, persistence contexts that track entity state across transaction boundaries, and cascading rules that propagate changes through the object graph.
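To make the failure mode concrete, here is a toy, self-contained sketch of the lazy-loading pattern. The `Session` and `LazyOrders` types are invented for this illustration and are not any real ORM's API; the point is that the call site gives no hint that the result depends on hidden session state.

```kotlin
// Toy model of a lazily loaded relation guarded by hidden session state.
// Illustrative only: Session and LazyOrders are invented for this sketch.
class Session { var open = true }

class LazyOrders(private val session: Session) {
    // The collection is materialized on first access, and only while
    // the session is still open.
    private val loaded: List<String> by lazy {
        check(session.open) { "session is closed" }
        listOf("order-1", "order-2")
    }
    fun get(): List<String> = loaded
}

fun main() {
    val session = Session()
    val orders = LazyOrders(session)
    session.open = false
    // Looks like a plain getter, but fails at runtime because of state
    // that is invisible at the call site.
    println(runCatching { orders.get() }.isFailure) // true
}
```

Storm avoids this category of surprise by having no session or proxy state at all: entities are plain values and every query is explicit in the source.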
These features serve real purposes, but they make AI-assisted development harder. The AI has to account for behavior that does not appear in the code. Code that compiles and looks correct can still break at runtime because of invisible framework state.

Storm eliminates all of that. Entities are plain Kotlin data classes or Java records. There are no proxies, no managed state, no persistence context, and no lazy loading. Queries are explicit, and what you see in the source code is exactly what happens at runtime. This makes Storm's behavior predictable for AI tools: the code is the complete picture.

The design choices that matter most:

- **Immutable entities.** No hidden state transitions for the AI to track or miss.
- **No proxies.** The entity class is the entity. No invisible bytecode transformations to account for.
- **No persistence context.** No session scope, flush ordering, or detachment rules that require deep framework knowledge.
- **Convention over configuration.** Fewer annotations and config files for the AI to keep consistent.
- **Compile-time metamodel.** Type errors caught at build time, not at runtime. The AI gets immediate feedback.
- **Secure schema access.** The MCP server gives AI tools structural database knowledge without exposing credentials. Data access is opt-in, read-only by construction, and enforced at the database driver level.

Beyond the data model, Storm provides dedicated tooling for AI-assisted workflows:

- **Skills** guide AI tools through specific tasks (entity creation, queries, repositories, migrations) with framework-aware conventions and rules.
- **A locally running MCP server** gives AI tools access to your live database schema: table definitions, column types, constraints, and foreign keys. Optionally, the AI can also query individual records (read-only) when sample data would improve type decisions. The AI can inspect your actual database structure to generate entities that match, or validate entities it just created.
- **Built-in verification** through `ORMTemplate.validateSchema()` and `SqlCapture` lets the AI validate its own work. After generating entities, the AI can validate them against the database. After writing queries, it can capture and inspect the actual SQL. Both checks run in an isolated in-memory database through `@StormTest`, so verification happens before anything touches production. For dialect-specific code, `@StormTest` supports a static `dataSource()` factory method on the test class, allowing integration with Testcontainers to test against the actual target database.

---

## Schema-First and Entity-First

Storm fully supports both directions of working: starting from the database schema and generating entities to match, or starting from the entity model and generating the migration scripts to create the schema. Both approaches share the same development cycle; they just enter it at a different point.

```
   Entity-first                        Schema-first
   starts here                         starts here
        │                                   │
        ▼                                   ▼
┌─────────────────┐                 ┌─────────────────┐
│ Define/update   │────────────────▶│ Generate/update │
│ entities        │                 │ migration       │
│                 │                 │                 │
│   [You / AI]    │                 │   [You / AI]    │
└─────────────────┘                 └─────────────────┘
        ▲                                   │
        │                                   ▼
┌─────────────────┐                 ┌─────────────────┐
│ Validate        │◀────────────────│ Apply schema    │
│                 │                 │                 │
│   [Storm]       │                 │   [Flyway / H2] │
└─────────────────┘                 └─────────────────┘
```

The AI generates and updates code (entities, migrations, queries). Storm validates correctness (`ORMTemplate.validateSchema()`, `SqlCapture`). The cycle repeats whenever either side changes: a schema change triggers entity updates; an entity change triggers a new migration. Schema validation closes the loop by proving that entities and schema agree after every change.

### Schema-first

In a schema-first workflow, the database is the source of truth. The schema already exists (or is managed by a DBA), and entities need to match it.
When the MCP server is configured, the AI has access to the live database through `list_tables` and `describe_table`. This gives it full visibility into table definitions, column types, constraints, and foreign key relationships. When data access is enabled, the AI can also use `select_data` to sample individual records, which is useful when the schema alone is ambiguous about intent (e.g., a `VARCHAR` that holds enum values, or a `TEXT` column that stores JSON).

The AI workflow:

1. **Inspect the schema.** The AI calls `list_tables` to discover tables, then `describe_table` for each relevant table.
2. **Sample data (if available).** When `select_data` is enabled and the schema leaves a type decision ambiguous, the AI queries a few rows to inform the choice.
3. **Generate entities.** Based on the schema metadata (and optional sample data) and Storm's entity conventions (naming, `@PK`, `@FK`, `@UK`, nullability, `Ref` for circular or self-references), the AI generates Kotlin data classes or Java records.
4. **Validate.** The AI writes a temporary test that validates the generated entities against the database using `ORMTemplate.validateSchema()`.

When the database schema evolves, the same flow applies: the AI inspects the changed tables, updates the affected entities, and re-validates.

### Entity-first

In an entity-first workflow, the code is the source of truth. You design your domain model as entities, and the database schema is derived from them.

The AI workflow:

1. **Design entities.** The AI creates Kotlin data classes or Java records based on the domain model you describe.
2. **Generate migration.** The AI writes a Flyway or Liquibase migration script that creates the tables, columns, constraints, and indexes to match the entity definitions, following Storm's naming conventions.
3.
**Validate.** The AI writes a temporary test that applies the migration to an H2 in-memory database and validates the entities against the resulting schema using `ORMTemplate.validateSchema()`. This confirms that the entity definitions and the migration script are consistent with each other, before anything touches the real database.

### Verification with Schema Validation

Both approaches converge on the same verification step. `ORMTemplate.validateSchema()` checks entities against the database at the JDBC level, catching mismatches that are difficult to spot by inspection: type incompatibilities, nullability disagreements, missing constraints, unmapped NOT NULL columns, and more.

The AI can validate only the specific entities it created or modified:

[Kotlin]

```kotlin
@StormTest(scripts = ["schema.sql"])
class EntitySchemaTest {
    @Test
    fun validateNewEntities(orm: ORMTemplate) {
        val errors = orm.validateSchema(
            Order::class, OrderLine::class, Product::class
        )
        assertTrue(errors.isEmpty()) { "Schema validation errors: $errors" }
    }
}
```

[Java]

```java
@StormTest(scripts = {"schema.sql"})
class EntitySchemaTest {
    @Test
    void validateNewEntities(ORMTemplate orm) {
        orm.validateSchemaOrThrow(List.of(
            Order.class, OrderLine.class, Product.class
        ));
    }
}
```

In the schema-first case, `schema.sql` is the existing migration or DDL. In the entity-first case, it is the migration the AI just generated. Either way, schema validation confirms that entities and schema agree.

---

## Query Verification With SqlCapture

The same pattern applies to queries. A query that compiles and runs without errors is not necessarily correct: the WHERE clause might filter on the wrong column, a JOIN might be missing, or an ORDER BY might not match the user's intent. After the AI writes a query, it can write a test that captures the actual SQL Storm generates and verifies it matches the intended behavior.
`SqlCapture` records every SQL statement, its operation type, and its bind parameters:

[Kotlin]

```kotlin
@StormTest(scripts = ["schema.sql", "data.sql"])
class OrderQueryTest {
    @Test
    fun findShippedOrders(orm: ORMTemplate, capture: SqlCapture) {
        val orders = capture.execute {
            orm.entity(Order::class).select()
                .where(Order_.status eq "SHIPPED")
                .orderBy(Order_.createdAt)
                .resultList
        }
        // Verify the query structure matches the intent.
        val sql = capture.statements().first().statement()
        assertContains(sql, "WHERE")
        assertContains(sql, "ORDER BY")
    }
}
```

[Java]

```java
@StormTest(scripts = {"schema.sql", "data.sql"})
class OrderQueryTest {
    @Test
    void findShippedOrders(ORMTemplate orm, SqlCapture capture) {
        List<Order> orders = capture.execute(() ->
            orm.entity(Order.class).select()
                .where(Order_.status, EQUALS, "SHIPPED")
                .orderBy(Order_.createdAt)
                .getResultList());
        // Verify the query structure matches the intent.
        String sql = capture.statements().getFirst().statement();
        assertTrue(sql.contains("WHERE"));
        assertTrue(sql.contains("ORDER BY"));
    }
}
```

`SqlCapture` is injected automatically in `@StormTest` methods. The AI can verify:

- **SQL structure**: check that the expected WHERE, JOIN, GROUP BY, and ORDER BY clauses are present.
- **Query count**: `capture.count(SELECT)` confirms the expected number of statements were issued.
- **Operation types**: `capture.count(INSERT)`, `capture.count(UPDATE)`, etc. for mutation tests.
- **Bind parameters**: `capture.statements().first().parameters()` to inspect parameterized values.

If the test fails, the AI has the actual SQL in the failure output and can correct the query immediately.

---

## Temporary Self-Verification Tests

The verification tests the AI writes do not need to become part of your codebase. The AI can write a test, run it, and remove it again, all within a single conversation. This gives the AI a way to validate its own work without leaving behind test artifacts you did not ask for.

The workflow:

1.
**Write.** The AI creates a test file in the project's test source directory (e.g., `src/test/kotlin/StormAIVerificationTest.kt`). For entity-first work, it may also write a temporary schema SQL file to `src/test/resources/`.
2. **Run.** The AI executes only that test using a targeted command:

```bash
# Maven
mvn test -pl your-module -Dtest=StormAIVerificationTest

# Gradle
./gradlew :your-module:test --tests StormAIVerificationTest
```

3. **Fix (if needed).** If the test fails, the error messages tell the AI exactly what is wrong. It fixes the entities, queries, or migration and re-runs the test.
4. **Clean up.** Once the test passes, the AI deletes the temporary test file (and any temporary SQL scripts it created). The verified code stays; the scaffolding goes.

This works because `@StormTest` spins up an H2 in-memory database by default, executes the setup scripts, and tears everything down after the test. No external database, no persistent state, no side effects. When the code under test uses dialect-specific SQL, define a static `dataSource()` factory method on the test class to provide a Testcontainers-backed `DataSource` for the target database instead of H2.

You can also ask the AI to keep the test as a permanent regression test. The choice is yours, and the AI should ask.

---

## The Gold Standard: Verify, Then Trust

This is what makes Storm the gold standard for AI-assisted database development. The AI does not just generate code and hope for the best. It generates code, then validates it through Storm's own verification, before anything is committed.
| Task | AI generates | Storm verifies |
|------|--------------|----------------|
| **Entities (schema-first)** | Data classes/records from live schema | `validateSchema()` checks types, nullability, constraints, unmapped columns |
| **Entities (entity-first)** | Data classes/records + migration script | `validateSchema()` confirms entity and migration agree |
| **Queries** | QueryBuilder or SQL Template code | `SqlCapture` verifies the generated SQL matches the intended structure and parameters |
| **Repositories** | Custom query methods | `SqlCapture` confirms each method produces the expected SQL |

Storm's immutable entities, explicit queries, and convention-based naming make AI-generated code straightforward to verify. The verify-then-trust pattern below closes the gap between "looks right" and "is right":

1. **The AI generates code** using Storm's skills, documentation, and (when configured) live schema metadata from the MCP server.
2. **The AI writes a focused test** that exercises exactly the code it just wrote, using `ORMTemplate.validateSchema()` for entities or `SqlCapture` for queries.
3. **The AI runs the test.** If it passes, the code is correct by construction, verified by the same validation logic that Storm uses internally. If it fails, the error messages tell the AI exactly what to fix.
4. **The test stays or goes.** Keep it as a regression test, or let the AI remove it once verified. Either way, the verification happened.

This is the combination that makes it work: an AI-friendly data model that produces stable code, a schema-aware MCP server that gives the AI structural knowledge, and built-in test tooling that lets the AI verify its own work through the framework rather than around it.

========================================
## Source: api-kotlin.md
========================================

# Kotlin API Reference

Storm's Kotlin API is organized into a set of focused modules.
Each module has a specific role, from the core ORM engine with coroutine support to Spring Boot auto-configuration and validation. This page provides an overview of the module structure and links to detailed documentation for each concept.

## Module Overview

### storm-kotlin

The main Kotlin API module. It provides the `ORMTemplate` interface, extension functions (`DataSource.orm`, `Connection.orm`), repository interfaces, coroutine support, and the type-safe query DSL. This is the primary dependency for Kotlin applications.

```kotlin
// Gradle (Kotlin DSL)
implementation("st.orm:storm-kotlin:@@STORM_VERSION@@")
```

```xml
<dependency>
    <groupId>st.orm</groupId>
    <artifactId>storm-kotlin</artifactId>
    <version>@@STORM_VERSION@@</version>
</dependency>
```

The Kotlin API does not depend on any preview features. All APIs are stable and production-ready.

### storm-kotlin-spring

Spring Framework integration for Kotlin. Provides `RepositoryBeanFactoryPostProcessor` for repository auto-discovery and injection, `@EnableTransactionIntegration` for bridging Storm's programmatic transactions with Spring's `@Transactional`, and transaction-aware coroutine support. Add this module when you use Spring Framework without Spring Boot.

```kotlin
implementation("st.orm:storm-kotlin-spring:@@STORM_VERSION@@")
```

See [Spring Integration](spring-integration.md) for configuration details.

### storm-kotlin-spring-boot-starter

Spring Boot auto-configuration for Kotlin. Automatically creates an `ORMTemplate` bean from the `DataSource`, discovers repositories, enables transaction integration, and binds `storm.*` properties from `application.yml`. This is the recommended dependency for Spring Boot applications.

```kotlin
implementation("st.orm:storm-kotlin-spring-boot-starter:@@STORM_VERSION@@")
```

See [Spring Integration: Spring Boot Starter](spring-integration.md#spring-boot-starter) for what the starter provides and how to override its defaults.
## Key Classes and Functions

| Class/Function | Description | Guide |
|----------------|-------------|-------|
| `ORMTemplate` | The central entry point. Create with `dataSource.orm` or `ORMTemplate.of(dataSource)`. Provides access to entity/projection repositories and the SQL template query engine. | [Getting Started](getting-started.md) |
| `EntityRepository` | Type-safe repository interface for CRUD operations on entities. Extend this interface and add custom query methods with default method bodies. | [Repositories](repositories.md) |
| `ProjectionRepository` | Read-only repository for projections (subset of entity columns). | [Projections](projections.md) |
| `Entity` | Marker interface for entity data classes. Implement this on your Kotlin data classes to enable repository operations. | [Entities](entities.md) |
| `Projection` | Marker interface for projection data classes. | [Projections](projections.md) |
| `DataSource.orm` | Extension property that creates an `ORMTemplate` from a `DataSource`. | [Getting Started](getting-started.md) |
| `transaction { }` | Coroutine-aware programmatic transaction block. | [Transactions](transactions.md) |
| `transactionBlocking { }` | Blocking variant of the programmatic transaction block. | [Transactions](transactions.md) |
| `StormConfig` | Immutable configuration holder. Pass to `dataSource.orm(config)` to override defaults. | [Configuration](configuration.md) |

## Coroutine Support

Storm's Kotlin API provides first-class coroutine support. Query results can be consumed as `Flow` for streaming, and the `transaction { }` block is a suspending function that integrates with structured concurrency. Storm leverages JVM virtual threads under the hood, so database operations do not block platform threads even when using JDBC (which is inherently synchronous).
```kotlin
// Streaming with Flow
val users: Flow<User> = orm.entity(User::class).selectAll()
users.collect { processUser(it) }

// Suspending transaction
transaction {
    orm insert User(name = "Alice")
}
```

## Metamodel Generation

The metamodel generates type-safe companion classes (e.g., `User_`) at compile time. These classes provide static references to entity fields for use in the query DSL, enabling compile-time checked queries.

There are two ways to configure metamodel generation for Kotlin projects, depending on your build tool:

- **Gradle with KSP:** Use `storm-metamodel-ksp`, which is a Kotlin Symbol Processing plugin.
- **Maven with kapt:** Use `storm-metamodel-processor`, which is a standard Java annotation processor invoked through kapt.

Both generate the same metamodel classes; they are different build tool integrations.

**Gradle (Kotlin DSL) with KSP:**

```kotlin
plugins {
    id("com.google.devtools.ksp") version "2.0.21-1.0.28"
}

dependencies {
    ksp("st.orm:storm-metamodel-ksp:@@STORM_VERSION@@")
}
```

**Maven with kapt:**

```xml
<plugin>
    <groupId>org.jetbrains.kotlin</groupId>
    <artifactId>kotlin-maven-plugin</artifactId>
    <executions>
        <execution>
            <id>kapt</id>
            <goals>
                <goal>kapt</goal>
            </goals>
            <configuration>
                <annotationProcessorPaths>
                    <annotationProcessorPath>
                        <groupId>st.orm</groupId>
                        <artifactId>storm-metamodel-processor</artifactId>
                        <version>@@STORM_VERSION@@</version>
                    </annotationProcessorPath>
                </annotationProcessorPaths>
            </configuration>
        </execution>
    </executions>
</plugin>
```

See [Metamodel](metamodel.md) for setup and usage.

## KDoc

KDoc is generated per module using Dokka. Select a module below to browse its API documentation.
| Module | Description |
|--------|-------------|
| [storm-kotlin](../api/kotlin/storm-kotlin/index.html) | Kotlin API with coroutine support |
| [storm-kotlin-spring](../api/kotlin/storm-kotlin-spring/index.html) | Spring Framework integration for Kotlin |
| [storm-kotlin-spring-boot-starter](../api/kotlin/storm-kotlin-spring-boot-starter/index.html) | Spring Boot auto-configuration for Kotlin |
| [storm-metamodel-ksp](../api/kotlin/storm-metamodel-ksp/index.html) | Kotlin Symbol Processing for metamodel generation |
| [storm-kotlinx-serialization](../api/kotlin/storm-kotlinx-serialization/index.html) | Kotlinx Serialization support |

========================================
## Source: api-java.md
========================================

# Java API Reference

Storm's Java API is organized into a set of focused modules. Each module has a specific role, from the core ORM engine to Spring Boot auto-configuration. This page provides an overview of the module structure and links to detailed documentation for each concept.

## Module Overview

### storm-java21

The main Java API module. It provides the `ORMTemplate` entry point, repository interfaces, SQL Templates using Java's String Templates (preview feature), and the type-safe query DSL. This is the primary dependency for Java applications.

```xml
<dependency>
    <groupId>st.orm</groupId>
    <artifactId>storm-java21</artifactId>
    <version>@@STORM_VERSION@@</version>
</dependency>
```

**String Templates (Preview Feature):** The Java API uses JDK String Templates for SQL construction. String Templates are a preview feature in Java 21+, which means you must compile with `--enable-preview` and run with `--enable-preview`. The preview status means the syntax may change in future JDK releases, and Storm's Java API surface will adapt accordingly. The Kotlin API does not depend on any preview features and is fully stable.

To enable preview features in Maven:

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <compilerArgs>
            <arg>--enable-preview</arg>
        </compilerArgs>
    </configuration>
</plugin>
```

### storm-spring

Spring Framework integration for Java.
Provides `RepositoryBeanFactoryPostProcessor` for repository auto-discovery and injection, plus transaction integration. Add this module when you use Spring Framework without Spring Boot.

```xml
<dependency>
    <groupId>st.orm</groupId>
    <artifactId>storm-spring</artifactId>
    <version>@@STORM_VERSION@@</version>
</dependency>
```

See [Spring Integration](spring-integration.md) for configuration details.

### storm-spring-boot-starter

Spring Boot auto-configuration for Java. Automatically creates an `ORMTemplate` bean from the `DataSource`, discovers repositories, and binds `storm.*` properties from `application.yml`. This is the recommended dependency for Spring Boot applications.

```xml
<dependency>
    <groupId>st.orm</groupId>
    <artifactId>storm-spring-boot-starter</artifactId>
    <version>@@STORM_VERSION@@</version>
</dependency>
```

See [Spring Integration: Spring Boot Starter](spring-integration.md#spring-boot-starter) for what the starter provides and how to override its defaults.

## Key Classes

| Class | Description | Guide |
|-------|-------------|-------|
| `ORMTemplate` | The central entry point. Create with `ORMTemplate.of(dataSource)`. Provides access to entity/projection repositories and the SQL template query engine. | [Getting Started](getting-started.md) |
| `EntityRepository` | Type-safe repository interface for CRUD operations on entities. Extend this interface and add custom query methods with default method bodies. | [Repositories](repositories.md) |
| `ProjectionRepository` | Read-only repository for projections (subset of entity columns). | [Projections](projections.md) |
| `Entity` | Marker interface for entity records. Implement this on your Java records to enable repository operations. | [Entities](entities.md) |
| `Projection` | Marker interface for projection records. | [Projections](projections.md) |
| `StormConfig` | Immutable configuration holder. Pass to `ORMTemplate.of()` to override defaults. | [Configuration](configuration.md) |

## Metamodel Generation

The `storm-metamodel-processor` annotation processor generates type-safe metamodel classes (e.g., `User_`) at compile time.
These classes provide static references to entity fields for use in the query DSL, enabling compile-time checked queries.

```xml
<dependency>
    <groupId>st.orm</groupId>
    <artifactId>storm-metamodel-processor</artifactId>
    <version>@@STORM_VERSION@@</version>
    <scope>provided</scope>
</dependency>
```

See [Metamodel](metamodel.md) for setup and usage.

## Javadoc

The aggregated Javadoc covers all Java modules in the Storm framework:

[Browse the Javadoc](../api/java/index.html)
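To close, the compile-time safety that the generated metamodel provides can be approximated with a small hand-written stand-in. The `FieldRef` and `UserFields` types below are invented for this sketch; Storm's actual generated classes (such as `User_`) will differ, but the principle is the same: a typed field reference makes a mismatched value a compile error instead of a runtime error.

```kotlin
// Hand-written stand-in for a generated metamodel class (illustrative only;
// FieldRef and UserFields are invented names, not Storm's generated code).
data class User(val id: Long, val name: String)

// A typed reference to an entity field: column name plus Kotlin type.
class FieldRef<T>(val column: String)

object UserFields {
    val id = FieldRef<Long>("id")
    val name = FieldRef<String>("name")
}

// A toy predicate builder: `eq` only compiles for values of the field's type.
infix fun <T> FieldRef<T>.eq(value: T): String = "$column = ?"

fun main() {
    println(UserFields.name eq "Alice") // name = ?
    // UserFields.name eq 42  // does not compile: type mismatch
}
```

Because the field reference carries the Kotlin type, a wrongly typed comparison fails at build time, which is exactly the immediate-feedback loop the metamodel gives both developers and AI tools.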