Rust macro framework for building GraphQL APIs on top of sea-orm and async-graphql — automatic CRUD resolvers, nested filtering, sorting, pagination, relationships, and soft-delete.
- Quick start
- Model
- CRUD resolvers
- Custom resolvers
- Resolver bodies
- Context
- Transactions
- Relationships
- Filtering and sorting
- Active model helpers
- Error handling
- Authentication
- Authorization
- Debug macro outputs
use grand_line::prelude::*;
#[model]
pub struct Todo {
pub content: String,
pub done: bool,
}
#[search(Todo)]
fn resolver() {
(None, None)
}
#[gql_input]
pub struct TodoCreate {
pub content: String,
}
#[create(Todo)]
fn resolver() {
am_create!(Todo { content: data.content })
}

That produces a todoSearch query with filter/sort/pagination, and a todoCreate mutation — all type-safe, all wired to the database.
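For example, a client could call the generated operations like this — a sketch only: the argument names follow the injected locals described later (filter, orderBy, page), and the page shape mirrors the Pagination input:

```graphql
query {
  todoSearch(
    filter: { done: false }
    orderBy: [ContentAsc]
    page: { offset: 0, limit: 10 }
  ) {
    id
    content
    done
  }
}

mutation {
  todoCreate(data: { content: "Write docs" }) {
    id
    content
  }
}
```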
#[model] turns a plain struct into a complete sea-orm entity with a paired GraphQL type. For struct Todo it will generate:
| Type | Description |
|---|---|
| Todo | sea-orm Entity |
| TodoSql | sea-orm Model |
| TodoColumn | sea-orm Column |
| TodoActiveModel | sea-orm ActiveModel |
| TodoGql | async-graphql output object, named Todo in the schema |
| TodoFilter | async-graphql filter input |
| TodoOrderBy | async-graphql order-by enum |
These fields are added to every model automatically:
| Field | Type | Set on |
|---|---|---|
| id | String (26-char ULID) | insert |
| created_at | DateTimeUtc | insert |
| updated_at | DateTimeUtc | every update |
| deleted_at | Option<DateTimeUtc> | soft-delete |
| created_by_id | Option<String> | manually |
| updated_by_id | Option<String> | manually |
| deleted_by_id | Option<String> | manually |
They can be configured through model macro attributes as follows:
#[model(no_created_at)] // no created_at / created_by_id
#[model(no_updated_at)] // no updated_at / updated_by_id
#[model(no_deleted_at)] // no deleted_at / deleted_by_id (also disables soft-delete on this model)
#[model(no_by_id)] // no *_by_id

#[default(...)] — value applied at insert when the field is omitted from am_create!; can be any valid Rust expression:
#[model]
pub struct Todo {
pub content: String,
#[default(false)]
pub done: bool,
// Alternatively, we can pass any other valid Rust expression, such as a function call.
// A function call is useful when the value must be computed at runtime.
// The function can be defined below or imported from elsewhere.
#[default(days_from_now(7))]
pub due_at: DateTimeUtc,
}
fn days_from_now(n: i64) -> DateTimeUtc {
    Utc::now() + Duration::days(n)
}
let t = am_create!(Todo { content: "Update documentation" }).insert(tx).await?;
// t.done == false, t.due_at == now + 7 days

#[graphql(skip)] — hides a field from the GraphQL schema. Still stored in the database, accessible on UserSql, but invisible to clients:
#[model]
pub struct User {
pub email: String,
#[graphql(skip)]
pub password_hashed: String,
}

#[sql_expr(...)] — marks the field as GraphQL-only, with no actual sea-orm column. It is resolved as a computed column from a sea-query expression, evaluated by the database at query time:
#[model]
pub struct Product {
pub price: f64,
pub discount_percentage: f64,
// Not stored in DB — computed as price * (1 - discount_percentage / 100).
// We can use Column:: here as it is in the same scope as the model definition.
#[sql_expr(Expr::col(Column::Price).mul(
Expr::val(1.0).sub(Expr::col(Column::DiscountPercentage).div(100.0))
))]
pub discounted_price: f64,
// Alternatively, we can pass any other valid Rust expression, such as a function call.
// A function call is useful for building the expression programmatically.
// The function can be defined below or imported from elsewhere.
#[sql_expr(expr_discounted_price())]
pub discounted_price2: f64,
}
// Here we will need to use ProductColumn:: alias instead
// because the function is outside of the model definition scope.
fn expr_discounted_price() -> SimpleExpr {
Expr::col(ProductColumn::Price).mul(
Expr::val(1.0).sub(Expr::col(ProductColumn::DiscountPercentage).div(100.0))
)
}
// insert price=200.0, discount_percentage=25.0 → query discounted_price returns 150.0

#[resolver(sql_dep = "col1, col2")] — marks the field as GraphQL-only, with no actual sea-orm column. It requires a function in the same scope named resolve_{field_name}. sql_dep lists the columns that must be fetched from the DB to compute it:
#[model]
pub struct User {
pub first_name: String,
#[graphql(skip)]
pub last_name: String,
#[resolver(sql_dep = "first_name, last_name")]
pub full_name: String,
}
async fn resolve_full_name(u: &UserGql, _: &Context<'_>) -> Res<String> {
let first = u.first_name.clone().ok_or(CoreDbErr::GqlResolverNone)?;
let last = u.last_name.clone().ok_or(CoreDbErr::GqlResolverNone)?;
Ok(format!("{first} {last}"))
}

sql_dep can reference #[sql_expr] fields too:
#[model]
pub struct User {
pub a: i64,
#[sql_expr(Expr::col(Column::A).add(1000))]
pub b: i64,
#[resolver(sql_dep = "a, b")]
pub c: i64,
}
async fn resolve_c(u: &UserGql, _: &Context<'_>) -> Res<i64> {
Ok(u.a.ok_or(CoreDbErr::GqlResolverNone)? + u.b.ok_or(CoreDbErr::GqlResolverNone)?)
}
// a=1 → b=1001 → c=1002

#[gql_input] — defines a GraphQL input object. Use this for any mutation input, not just CRUD inputs:
#[gql_input]
pub struct TodoCreate {
pub content: String,
pub done: bool,
}

#[gql_enum] — shortcut to create a GraphQL-only enum, not stored in the database:
#[gql_enum]
pub enum Direction { Asc, Desc }

#[sql_enum] — shortcut that combines a sea-orm enum and an async-graphql enum. The value is stored in the database as VARCHAR(255) in snake_case, and also exposed as a GraphQL enum:
#[sql_enum]
pub enum Status {
Active, // stored as "active"
Inactive, // stored as "inactive"
}
#[model]
pub struct Todo {
// now this enum can also be used as a db model column
pub status: Status,
}

When the function is named resolver, the GraphQL field name defaults to {Model}{Operation} in camelCase (e.g. todoSearch, todoCreate). Use any other name to override:
#[search(Todo)]
fn resolver() { ... } // → todoSearch
#[search(Todo)]
fn todo_search_2024() { ... } // → todoSearch2024

The input type for #[create] and #[update] is the PascalCase of the GraphQL field name:
| Function | GraphQL field | Input type |
|---|---|---|
| #[create(Todo)] fn resolver() | todoCreate | TodoCreate |
| #[create(Todo)] fn todo_upsert() | todoUpsert | TodoUpsert |
It also generates an async-graphql object whose name follows the same pattern: todoCreate → TodoCreateMutation.
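Both rules are plain snake_case-to-camelCase/PascalCase conversions. A standalone Rust sketch of the renaming — illustrative only, not the framework's actual implementation:

```rust
// Convert a snake_case resolver name to the camelCase GraphQL field name.
fn to_camel_case(snake: &str) -> String {
    let mut out = String::new();
    let mut upper_next = false;
    for c in snake.chars() {
        if c == '_' {
            upper_next = true;
        } else if upper_next {
            out.extend(c.to_uppercase());
            upper_next = false;
        } else {
            out.push(c);
        }
    }
    out
}

// PascalCase (the input type name) is camelCase with the first letter uppercased.
fn to_pascal_case(snake: &str) -> String {
    let camel = to_camel_case(snake);
    let mut chars = camel.chars();
    match chars.next() {
        Some(first) => first.to_uppercase().collect::<String>() + chars.as_str(),
        None => camel,
    }
}

fn main() {
    assert_eq!(to_camel_case("todo_upsert"), "todoUpsert"); // GraphQL field
    assert_eq!(to_pascal_case("todo_upsert"), "TodoUpsert"); // input type
}
```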
Returns a paginated list. The body returns (extra_filter, default_order_by) — both are combined with the values sent by the client.
#[search(Todo)]
fn resolver() {
(None, None)
}
// With server-side defaults:
#[search(Todo)]
fn todo_search_2024() {
let extra = filter!(Todo { content_starts_with: "2024" });
let sort = order_by!(Todo [DoneAsc, ContentAsc]);
(Some(extra), Some(sort))
}

Auto-injected locals:
| Variable | Type |
|---|---|
| filter | Option<TodoFilter> |
| order_by | Option<Vec<TodoOrderBy>> |
| page | Option<Pagination> |
| include_deleted | Option<bool> |
page:
pub struct Pagination {
pub offset: Option<u64>,
pub limit: Option<u64>,
}

include_deleted: when omitted, the framework auto-detects from the filter — if the filter already references any deletedAt condition, the default exclude-deleted clause is skipped. Pass true to explicitly include all soft-deleted rows.
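A hedged sketch of both behaviors, assuming the injected locals surface as camelCase arguments and using the deletedAt_ne operator shown in the relationships section:

```graphql
query {
  # filter references deletedAt, so the default exclude-deleted clause is skipped
  onlyDeleted: todoSearch(filter: { deletedAt_ne: null }) { id content }

  # explicitly include soft-deleted rows alongside live ones
  everything: todoSearch(includeDeleted: true) { id content }
}
```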
Output: Vec<TodoGql>
Returns the number of matching records. The body returns an optional extra filter.
#[count(Todo)]
fn resolver() {
None
}

Auto-injected locals:
| Variable | Type |
|---|---|
| filter | Option<TodoFilter> |
| include_deleted | Option<bool> |
Output: u64
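Assuming the default naming convention, the generated field is todoCount; a minimal client query:

```graphql
query {
  todoCount(filter: { done: true })
}
```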
Returns a single record by ID. The body runs before the fetch — use it for logging or pre-checks. No return value needed.
#[detail(Todo)]
fn resolver() {
println!("todoDetail id={id}");
}

Auto-injected locals:
| Variable | Type |
|---|---|
| id | String |
| include_deleted | Option<bool> |
Output: Option<TodoGql>
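A minimal client query against the generated field (todoDetail under the default naming):

```graphql
query {
  todoDetail(id: "...") {
    id
    content
    done
  }
}
```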
Creates a record. The body must evaluate to a TodoActiveModel.
#[gql_input]
pub struct TodoCreate {
pub content: String,
}
#[create(Todo)]
fn resolver() {
am_create!(Todo { content: data.content })
}

Auto-injected locals:
| Variable | Type |
|---|---|
| data | PascalCase of the GraphQL field name |
Output: TodoGql
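A sketch of the corresponding client mutation — the system-field name createdAt is an assumption (camelCase of created_at):

```graphql
mutation {
  todoCreate(data: { content: "Write the changelog" }) {
    id
    content
    createdAt
  }
}
```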
Updates a record. The body must evaluate to a TodoActiveModel.
#[gql_input]
pub struct TodoUpdate {
pub content: String,
}
#[update(Todo)]
fn resolver() {
Todo::find_by_id(&id).exists_or_404(tx).await?;
am_update!(Todo {
id: id.clone(),
content: data.content,
})
}

Auto-injected locals:
| Variable | Type |
|---|---|
| id | String |
| data | PascalCase of the GraphQL field name |
Output: TodoGql
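A sketch of the corresponding client mutation — the system-field name updatedAt is an assumption (camelCase of updated_at):

```graphql
mutation {
  todoUpdate(id: "...", data: { content: "new content" }) {
    id
    content
    updatedAt
  }
}
```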
Deletes a record. The body runs before deletion — use it for pre-delete validation. No return value needed.
#[delete(Todo)]
fn resolver() {
Todo::find_by_id(&id).exists_or_404(tx).await?;
}

Auto-injected locals:
| Variable | Type |
|---|---|
| id | String |
| permanent | Option<bool> |
- permanent: false (default) — soft-delete: sets deleted_at, row stays in DB
- permanent: true — hard-delete: row is removed from DB
Output: TodoGql with only id populated.
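A sketch of both delete modes from the client side:

```graphql
# soft-delete (default): sets deleted_at, row stays in DB
mutation {
  todoDelete(id: "...") { id }
}

# hard-delete: row is removed from DB
mutation {
  todoDelete(id: "...", permanent: true) { id }
}
```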
Configuration through macro attributes:
#[delete(Todo, no_permanent_delete)] // remove the permanent option entirely

Use #[query] and #[mutation] for anything not covered by the CRUD macros. ctx and tx are injected automatically.
#[query]
fn todo_count_done() -> u64 {
let f = filter!(Todo { done: true });
f.into_select().count(tx).await?
}
#[mutation]
fn todo_delete_done() -> Vec<TodoGql> {
let f = filter!(Todo { done: true });
Todo::soft_delete_many()?
.filter(f.clone().into_condition())
.exec(tx)
.await?;
f.gql_select_id().all(tx).await?
}

These generate TodoCountDoneQuery / TodoDeleteDoneMutation structs for later use in an async-graphql MergedObject.
Resolver bodies are blocks, not functions. Every macro body is copied into a generated let r = { ... } expression. return does not work — use ? to exit early:
#[query]
fn my_query() -> String {
if some_condition {
Err(MyErr::NotFound)?; // early exit — NOT return
}
"ok".to_string()
}

ctx and tx are injected automatically. Every resolver receives:
- ctx — a &Context<'_> async-graphql context with enhanced traits included through the imported prelude (see Context)
- tx — a &DatabaseTransaction shared across the entire request (see Transactions)
Use resolver_inputs to define fully custom inputs:
#[update(Todo, resolver_inputs)]
fn todo_toggle_done(id: String) {
let todo = Todo::find_by_id(&id).one_or_404(tx).await?;
am_update!(Todo {
id: id.clone(),
done: !todo.done,
})
}

ctx is a &Context<'_> injected into every resolver. Several helper traits extend it with framework-specific methods.
ctx.tx().await? // Arc<DatabaseTransaction> — the request transaction (also available as injected `tx`)
ctx.cache(|| async { ... }).await? // Arc<T> — per-request cache keyed by type T; closure runs only on first call

| Method | Returns | Description |
|---|---|---|
| ctx.auth().await? | String | Current user's id; errors with Unauthenticated if no valid session |
| ctx.auth_with_cache().await? | Arc<Option<LoginSessionMinimal>> | Current session, or None if unauthenticated; cached per request |
| ctx.auth_ensure_authenticated().await? | () | Errors if the request has no valid session |
| ctx.auth_ensure_not_authenticated().await? | () | Errors if the request already has a valid session |
The auth_ensure_* methods are called automatically by the #[query(auth)] / #[mutation(auth(unauthenticated))] attributes. Call them manually only when you need conditional logic.
| Method | Returns | Description |
|---|---|---|
| ctx.authz().await? | String | Verified org_id from the X-Org-Id header; only valid inside org-scoped authz(realm = "...") resolvers |
| ctx.authz_role().await? | RoleSql | The matched Role row; valid inside any authz(...) resolver |
| ctx.org_unauthorized().await? | Arc<OrgMinimal> | Resolves the org from the X-Org-Id header without checking user auth; cached per request |
GrandLineExtension manages a single lazy database transaction per GraphQL request:
- Commit — if the request finishes with no errors.
- Rollback — if any resolver returns an error; all DB writes in the request are undone.
Register it when building the schema:
Schema::build(Query::default(), Mutation::default(), EmptySubscription)
.extension(GrandLineExtension)
.data(Arc::new(db.clone()))
.finish()

Both .extension(GrandLineExtension) and .data(Arc::new(db)) are required.
Declare relationships as field attributes on #[model]. The framework resolves them with look-ahead — only the fields the client requests are fetched.
#[has_one] — the related model holds a {owner}_id foreign key:
#[model]
pub struct User {
#[has_one]
pub person: Person,
}
#[model]
pub struct Person {
pub gender: String,
pub user_id: String, // foreign key
}

#[has_many] — same as #[has_one] but returns a list:
#[model]
pub struct User {
#[has_many]
pub aliases: Alias,
}

#[belongs_to] — the current model holds the foreign key:
#[model]
pub struct Alias {
pub name: String,
pub user_id: String,
#[belongs_to]
pub user: User,
}

#[many_to_many] — requires a join model with both foreign keys. The join model must be named {A}In{B} or {B}In{A}:
#[model]
pub struct User {
#[many_to_many]
pub orgs: Org,
}
#[model]
pub struct Org {
pub name: String,
}
#[model]
pub struct UserInOrg {
pub user_id: String,
pub org_id: String,
}

Related records with deleted_at set are excluded by default. Per-field includeDeleted overrides this in the GraphQL query:
query {
userDetail(id: "...") {
# has_one / belongs_to: soft-deleted record is null by default
person { gender }
person(includeDeleted: true) { gender }
# has_many / many_to_many: can also use filter directly
orgs(filter: { deletedAt_ne: null }) { name }
orgs(
filter: { OR: [{ deletedAt: null }, { deletedAt_ne: null }] },
orderBy: [NameAsc],
) { name }
}
}

filter! builds a model filter. String literals are auto-converted to String, and each field is wrapped in Some(...):
let f = filter!(Todo { done: true });
let f = filter!(Todo { content_starts_with: "2024", done: false });
// Combine two filters with AND
let f = TodoFilter::combine_and(f1, f2);

Expands to TodoFilter { done: Some(true), ..Default::default() }.
Filter operators generated per column (e.g. for content: String):
content content_eq content_ne
content_in content_not_in
content_gt content_gte content_lt content_lte
content_like content_starts_with content_ends_with
TodoFilter also has top-level and, or, and not for composing nested conditions.
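A hedged GraphQL sketch composing operators with OR — the uppercase OR key appears in the relationships example, and the content_like operator name is assumed to carry over to the schema unchanged:

```graphql
query {
  todoSearch(filter: {
    done: false
    OR: [{ content_like: "%urgent%" }, { content_like: "%today%" }]
  }) {
    id
    content
  }
}
```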
order_by! builds a sort list:
let sort = order_by!(Todo [DoneAsc, ContentAsc]);
// → vec![TodoOrderBy::DoneAsc, TodoOrderBy::ContentAsc]

Every column generates {Field}Asc and {Field}Desc variants.
The am_create!, am_update!, and am_soft_delete! macros build a sea-orm ActiveModel and apply system-field defaults. String literals are auto-converted to String, and each field is wrapped in Set(...).
// Generates id (ULID), sets created_at and updated_at
am_create!(Todo { content: "hello", done: false })
// Sets updated_at, requires id
am_update!(Todo { id: id.clone(), content: "new content" })
// Sets deleted_at and updated_at
am_soft_delete!(Todo { id: id.clone() })

// Soft-delete one row by id
Todo::soft_delete_by_id(&id)?.exec(tx).await?;
// Soft-delete many rows with a custom filter
Todo::soft_delete_many()?
.filter(condition)
.exec(tx)
.await?;

// Fetch one row or return a 404 error
let todo: TodoSql = Todo::find_by_id(&id).one_or_404(tx).await?;
// Assert a row exists or return a 404 error
Todo::find_by_id(&id).exists_or_404(tx).await?;
// Select only id (used internally by delete responses)
filter.gql_select_id().all(tx).await?

#[grand_line_err] derives all required traits for a custom error enum. Variants marked #[client] are forwarded to the GraphQL response as-is. All others — including standard library errors — are replaced with a generic internal server error so implementation details are never leaked to clients.
#[grand_line_err]
enum MyErr {
#[error("record not found")]
#[client]
NotFound,
#[error("something went wrong internally")]
InternalProblem, // client only sees "internal server error"
}

Use ? to raise errors from any resolver body:
#[query]
fn my_query() -> String {
if missing {
Err(MyErr::NotFound)?;
}
"ok".to_string()
}

Downcast from a GraphQL response error's source field to read the error code:
let code = error.source
.as_deref()
.and_then(|e| e.downcast_ref::<GrandLineErr>())
.map(|e| e.0.code()); // e.g. "NotFound"

The grand_line_auth package provides email + password authentication with OTP (one-time password) verification for register and forgot-password flows.
Register the built-in queries and mutations by merging AuthMergedQuery and AuthMergedMutation into your schema, and provide an AuthConfig:
use grand_line::prelude::*;
#[derive(Default, MergedObject)]
pub struct Query(AuthMergedQuery, /* your own queries */);
#[derive(Default, MergedObject)]
pub struct Mutation(AuthMergedMutation, /* your own mutations */);
let schema = Schema::build(Query::default(), Mutation::default(), EmptySubscription)
.extension(GrandLineExtension)
.data(Arc::new(db.clone()))
.data(AuthConfig::default())
.finish();

The following tables must be created in the database:
tmp_db!(User, AuthOtp, LoginSession)

Note: User only has basic fields (email, password_hashed). It is not extendable — add a second table (e.g. UserProfile) with a user_id foreign key to store extra fields.
Registration is a two-step OTP flow:
Step 1 — call register, which creates a pending OTP record and triggers on_otp_create (where you send the OTP code by email):
mutation {
register(data: { email: "user@example.com", password: "Str0ngP@ssw0rd?" }) {
secret # save this — needed in step 2
}
}

Step 2 — call registerResolve with the OTP code the user received, plus the id and secret from step 1:
mutation {
registerResolve(data: { id: "...", secret: "...", otp: "123456" }) {
secret # session token — pass as Authorization: Bearer {secret}
inner { userId }
}
}

On success: the User is created, a LoginSession is opened, and the session token is returned.
Single-step: verify email + password and open a session:
mutation {
login(data: { email: "user@example.com", password: "123123" }) {
secret # session token
inner { userId }
}
}

The session token must be sent on subsequent requests:
Authorization: Bearer {secret}
Same two-step OTP flow as register:
Step 1:
mutation {
forgot(data: { email: "user@example.com" }) {
secret
}
}Step 2 — provide the OTP + new password:
mutation {
forgotResolve(data: { id: "...", secret: "...", otp: "123456" }, password: "NewP@ssw0rd!") {
secret
inner { userId }
}
}

On success: the password is updated and a new LoginSession is opened.
# current session (requires auth)
query { loginSessionCurrent { userId ip } }
# all sessions for current user (requires auth)
query { loginSessionSearch { userId ip ua } }
query { loginSessionCount }
# delete a specific session by id (requires auth)
mutation { loginSessionDelete(id: "...") { id } }
# delete all sessions for current user (requires auth)
mutation { loginSessionDeleteAll }
# delete current session (requires auth)
mutation { logout { id } }

Add auth to any resolver macro to enforce authentication. Use ctx.auth().await? (see Context) to read the current user's ID inside the resolver:
// Requires a valid session token
#[query(auth)]
fn my_profile() -> UserGql {
let user_id = ctx.auth().await?;
User::find_by_id(&user_id).gql_select(ctx)?.one_or_404(tx).await?
}
// Requires the user to NOT be authenticated (for login/register endpoints)
#[mutation(auth(unauthenticated))]
fn register() -> AuthOtpWithSecret { ... }
// Works on all CRUD macros too
#[search(Todo, auth)]
fn resolver() { (None, None) }
#[create(Todo, auth)]
fn resolver() { am_create!(Todo { ... }) }

Implement AuthHandlers to hook into the auth lifecycle:
struct MyHandlers;
#[async_trait]
impl AuthHandlers for MyHandlers {
// See AuthHandlers for reference
}
let config = AuthConfig {
handlers: Arc::new(MyHandlers),
..Default::default()
// See AuthConfig for reference
};

The grand_line_authz package provides role-based access control with organization scoping and fine-grained policy checks on GraphQL inputs and outputs.
Add the required tables and provide AuthzConfig:
let schema = Schema::build(Query::default(), Mutation::default(), EmptySubscription)
.extension(GrandLineExtension)
.data(Arc::new(db.clone()))
.data(AuthConfig::default()) // auth is required alongside authz
.data(AuthzConfig::default())
.finish();

Note: Org only has basic fields (name). It is not extendable — add a second table (e.g. OrgProfile) with an org_id foreign key to store extra fields.
Roles are stored in the Role table. The realm field categorizes roles by the scope of access they govern. Access is enforced via the UserInRole table, which links users to roles using a user_id and an optional org_id. The skip_user and skip_org attributes control which of these fields are checked at query time:
- skip_user — user_id is not checked (anonymous access allowed)
- skip_org — org_id is not checked (not org-scoped)
The three most common realms are:
- system — requires a valid user, not org-scoped: #[authz(realm = "system", skip_org)]
- org — requires a valid user and org membership: #[authz(realm = "org")]
- public — no user or org required (e.g. a store front in an e-commerce app): #[authz(realm = "public", skip_user, skip_org)]
realm is plain string data — you can define any name and enforce any logic. The above are just examples of common realms.
// Org-scoped role: belongs to a specific org
am_create!(Role {
name: "Org Admin",
realm: "org",
org_id: Some(org_id.clone()),
operations: operations.to_json()?,
}).insert(tx).await?;
// System-wide role: no org
am_create!(Role {
name: "System Admin",
realm: "system",
operations: operations.to_json()?,
}).insert(tx).await?;
// Assign to a user
am_create!(UserInRole {
user_id: user_id.clone(),
role_id: role_id.clone(),
org_id: Some(org_id.clone()), // must match the role's org_id
}).insert(tx).await?;

Add authz to any resolver macro. Two modes:
Org-realm — checks that the current user has a role with the given realm inside the org from the X-Org-Id request header. Use ctx.authz().await? to get the verified org ID:
// Request must include: Authorization: Bearer {token} and X-Org-Id: {org_id}
#[query(authz(realm = "org"))]
fn org_dashboard() -> OrgGql {
let org_id = ctx.authz().await?;
Org::find_by_id(&org_id).gql_select(ctx)?.one_or_404(tx).await?
}

System-wide — checks that the current user has a role with the given realm globally (no org required):
// Request must include: Authorization: Bearer {token}
#[query(authz(realm = "system", skip_org))]
fn system_dashboard() -> String {
"ok".to_string()
}

Use ctx.authz_role().await? inside any authz-guarded resolver to get the matched Role record (see Context).
Works on all resolver macros:
#[search(Todo, authz(realm = "org"))]
fn resolver() { (None, None) }
#[create(Todo, authz(realm = "org"))]
fn resolver() { am_create!(Todo { ... }) }

Each Role has an operations field — a JSON-encoded PolicyOperations map that controls what the role is allowed to do:
pub type PolicyOperations = HashMap<String, PolicyOperation>;
pub struct PolicyOperation {
pub inputs: PolicyField, // which GraphQL arguments are allowed
pub output: PolicyField, // which GraphQL response fields are allowed
}
pub struct PolicyField {
pub allow: bool,
pub children: Option<PolicyFields>, // HashMap<String, PolicyField>
}

The key in PolicyOperations is the GraphQL operation name, or "*" to match all operations.
Wildcards in PolicyFields:
| Key | Meaning |
|---|---|
| "*" | Allow any direct child field |
| "**" | Allow any nested field recursively |
Example — wildcard policy (allow everything):
let all = PolicyField { allow: true, children: Some(hashmap! {
"**".to_owned() => PolicyField { allow: true, children: None },
}) };
let operations: PolicyOperations = hashmap! {
"*".to_owned() => PolicyOperation { inputs: all.clone(), output: all },
};
role.operations = operations.to_json()?;

Example — restricted policy (only allow specific fields):
let operations: PolicyOperations = hashmap! {
"todoSearch".to_owned() => PolicyOperation {
inputs: PolicyField { allow: true, children: Some(hashmap! {
"filter".to_owned() => PolicyField { allow: true, children: Some(hashmap! {
"**".to_owned() => PolicyField { allow: true, children: None },
}) },
}) },
output: PolicyField { allow: true, children: Some(hashmap! {
"id".to_owned() => PolicyField { allow: true, children: None },
"content".to_owned() => PolicyField { allow: true, children: None },
}) },
},
};

The policy check runs automatically before the resolver body executes. If the inputs or requested output fields are not allowed, the framework returns an unauthorized error.
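A hedged client-side sketch of a request the restricted policy above would permit:

```graphql
query {
  # filter input is allowed (any nested filter field), output limited to id/content
  todoSearch(filter: { done: true }) {
    id
    content
    # requesting any other output field (e.g. done) would be rejected as unauthorized
  }
}
```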
To inspect the code generated by macros, set the environment variable DEBUG_MACRO=1 and enable one of the following feature flags:
- debug_macro_cli — prints generated code to stdout during the build.
- debug_macro_file — writes generated code to files under target/grand-line/ during the build. To avoid stale output, clear the folder before building.

