
Add EEP for native records #81

Merged
bjorng merged 1 commit into erlang:master from bjorng:bjorn/native-records
Feb 13, 2026

Conversation

@bjorng
Contributor

@bjorng bjorng commented Nov 6, 2025

No description provided.

@bjorng bjorng self-assigned this Nov 6, 2025
@bjorng bjorng closed this Nov 6, 2025
@bjorng bjorng reopened this Nov 6, 2025
Contributor

@josevalim josevalim left a comment


Thank you @bjorng and @garazdawi! The proposal is well written, and building on top of records does a beautiful job of keeping the language changes minimal. Removing undefined as the default for fields is another great change, as is the option of having "required fields" (which must be given on creation).

My only criticism of the proposal is that it doesn't discuss maps at all. For the last ~10 years, we have been using maps as replacements for records, and while native records aim to ease the migration from "records -> native records", there is no path from "maps -> native records". Perhaps this is intentional and you are not expecting anyone to migrate from maps to native records, but then we have to deal with the side effect that, as native records improve, a lot of code (especially in Elixir) will be forever suboptimal.

If I am allowed to spitball a bit, I'd love if "native records" were implemented as "named maps" or "record maps" behind the scenes. The proposal would stay roughly the same, the only differences are that:

  • Accessing a field in any record could use the existing map syntax and just work:

    Expr#Field
  • Updating any record could use the existing map syntax and just work:

    Expr#{Field1=Expr1, ..., FieldN=ExprN}

You could also support the proposed #_{...} syntax, which would additionally check that the value is a record and not a "plain map". is_map/1 would obviously return true for records, but you could do a more precise check with is_record.

Regarding the key-destructive map operations, such as maps:put/3 or maps:remove/2, I'd make them fail unless you "untag" the map (or you could allow them to succeed but remove the tag at the end, which I find too subtle).

Overall, the goal would be to effectively unify records and maps, removing the decade-old question "records or maps". This would also provide a path for Erlang and Elixir to unify behind the same constructs, so I'd love to hear your opinions.
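To make the suggestion above concrete, here is a sketch of what unified record/map access could look like. This is hypothetical syntax taken from this comment (Expr#Field, Expr#{...}) combined with the EEP's proposed -record #name{} definition; none of it runs on current OTP:

```erlang
%% Hypothetical: a native record implemented as a "named map".
-record #user{name, city}.

example() ->
    User = #user{name = ~"alice", city = ~"Stockholm"},
    %% Map-style access, no record name needed (suggested, not in the EEP):
    Name = User#name,
    %% Map-style update would keep the record tag:
    User2 = User#{city = ~"Gothenburg"},
    %% Precise checks would remain possible:
    true = is_map(User2),
    true = is_record(User2, user),
    {Name, User2}.
```

The design question is whether the tag survives map-style updates, which is exactly what the key-destructive operations discussion below is about.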

eeps/eep-0079.md Outdated
If no value is provided for a field, and there is a default field
value in the native record definition, the default value is used. If
no value is provided for a field and there is no default field value
then a native record creation fails with a `badrecord` error.
Contributor

Perhaps worth adding more context to the errors, such as {badrecord, {default_missing, key}}. I know format_error can be used (and can have additional side-band information attached), but there may be a benefit in being upfront about it too?

This would also allow distinguishing from other errors below, such as {badrecord, not_found}, etc.


More context definitely better, even if it gives only one missing key out of several.


Yes please! I think more fine-grained errors would be greatly beneficial, especially for newcomers.

Contributor Author

I've changed the error reason to {novalue,Field} (where novalue is a new error reason).


Accessing a field in any record could use the existing map syntax and just work:
Expr#Field

In Erlang there is no existing map syntax for accessing a single field today; that is still missing. I guess it was never implemented because maps can have arbitrary terms as keys.

I guess it could be added for "literal" fields? And it is really annoying to have to add the record_name when accessing fields in records today, e.g. Rec#record_name.field

Contributor

@dgud yes! I keep forgetting that it was part of EEP but not implemented.

And it is really annoying to have to use the add record_name when accessing fields in records today, e.g. Rec#record_name.field

I am assuming that, as long as you pattern match on the named record when the variable is defined, the compiler would be able to optimize "unnamed" field access and updates?

Contributor Author

I am assuming that, as long as you pattern match on the named record when the variable is defined, the compiler would be able to optimize "unnamed" field access and updates?

Yes, that should be possible.

Contributor

See the discussion in erlang/otp#9174 on a possible x[y] notation - @michalmuskala suggested M#[Field].


@essen essen left a comment


Great work!

Would it make sense to have finer grained control on who can do what? For example restrict creation to the defining module while still providing access to fields; or read-only fields outside the defining module. Probably doesn't matter for 95% of use cases I reckon.

@bjorng bjorng force-pushed the bjorn/native-records branch from a4bdeec to f5b4834 Compare November 6, 2025 11:44
@bjorng
Contributor Author

bjorng commented Nov 6, 2025

there are no paths between "maps -> native records". Perhaps this is intentional

We didn't really think about migrations from maps. Your suggestion seems reasonable. We will discuss this in the OTP team.


@lpil lpil left a comment


What a cool EEP! Thank you! I've a handful of questions

Language interop

One aspect of this which the document does not touch on that I think could be highly impactful for the BEAM ecosystem is language interop. Today each major language has a different preference for fixed key-value data structures:

  • Erlang: maps and records
  • Elixir: maps with a special field containing a module atom
  • Gleam: records

This creates some degree of friction when calling code from other BEAM languages. If they all were to largely use native records, this friction would go away, making interop between languages a much better experience.

I'm not immediately seeing any problems for Gleam, as we use records there, but it seems like it would be more challenging for Elixir, where maps are used.

Adoption within existing OTP modules

It seems that, in an ideal world, native records would be the ubiquitous data structure once they are available. Would existing Erlang/OTP modules be updated to work with them?

Functions that expect classic records, tagged tuples, and maps could have new function clauses added to handle native records in a backwards compatible way, unless I am mistaken. It seems that due to not being compatible with maps or tuples there would be very little ability to update existing functions to return native records.

Is there something we could do here? Or is the expectation that Erlang/OTP code will use different data structures depending on how old it is?

Construction syntax

Is the only difference between the native and classic record syntaxes the # character in the name of the definition? This seems like it would be very error-prone, and also hard for less familiar people to debug, as the definition will be accepted by the Erlang compiler but their attempts to construct the record will fail.


Thank you all!


### Anonymous access of native records

The following syntax allows accessing field `Field` in any record:

Is there a performance difference when using this syntax compared to using the non-anonymous syntax?

Are there situations in which one cannot use the anonymous syntax?

Contributor Author
@bjorng bjorng Nov 7, 2025

Is there a performance difference when using this syntax compared to using the non-anonymous syntax?

Yes, certain optimizations we are thinking about implementing will not be applied when using anonymous syntax.

Are there situations in which one cannot use the anonymous syntax?

No.

Contributor

@bjorng I assume the optimizations could still be applied for the anonymous syntax if you matched on the full syntax elsewhere before (basically when beam_types has narrowed it down to the given record type)?

Contributor Author

Yes, the compiler can still do optimizations if previous matching has narrowed down the type. (Once we actually implement type analysis for native records, which we haven't done yet.)

The optimizations I was thinking of were the optimizations in the runtime system planned for OTP 30, which I've outlined under "Performance characteristics". Anonymous records will be handled similarly to small maps, which will still be quite efficient.

eeps/eep-0079.md Outdated

1. Creation of native records cannot be done in guards

2. `element/2` will not accept native records.

Tuple records can be constructed with field names (using the #blah{a=1,b=2} syntax) or without field names (using the {blah, 1, 2} syntax). Do native records have a field-nameless #blah{1, 2} construction syntax?

Contributor Author

No.

The nameless syntax for tuple records only works because traditional records are implemented using tuples.

eeps/eep-0079.md Outdated
(tuples). So, their performance characteristics should align with maps
(with insignificant overhead for runtime validation). Additionally,
given that native-records are more specialized versions of maps (with
all keys being atoms), there is potential for optimizations.

Is the expectation that native records will have inferior performance to tuple records?

Contributor Author

We don't know yet, and it also depends on exactly which operations we are talking about. For example, updating multiple elements in a small map (no more than 32 elements) is quite efficient because it can be done in one pass, and similarly, matching out multiple elements is efficient because it can be done in one pass. Accessing one element at a time is more expensive for a map than for a tuple record.

In the first implementation to be released, native records will be implemented similarly to small maps, with similar performance. We have an idea for an optimization that would make them faster than maps.

@bjorng
Contributor Author

bjorng commented Nov 7, 2025

Functions that expect classic records, tagged tuples, and maps could have new function clauses added to handle native records in a backwards compatible way, unless I am mistaken. It seems that due to not being compatible with maps or tuples there would be very little ability to update existing functions to return native records.

Yes. If a function returns a tuple record, all we can do is to create a new function that returns a native record.

Or is the expectation that Erlang/OTP code will use different data structures depending on how old it is?

To some extent, yes. I think that is already the case.

If they all were to largely use native records, this friction would go away, making interop between languages a much better experience.

Agreed.

@lpil

lpil commented Nov 7, 2025

Thank you @bjorng.

To confirm: all fields must have names? There are no positional fields?

Can one define a record which does not have any fields?

-record #none{}.
-record #some{value = term()}.
-type option() :: #none{} | #some{}.

@bjorng
Contributor Author

bjorng commented Nov 7, 2025

To confirm: all fields must have names?

Yes.

Can one define a record which does not have any fields?

Yes.

@dgud

dgud commented Nov 7, 2025

Great work!

Would it make sense to have finer grained control on who can do what? For example restrict creation to the defining module while still providing access to fields; or read-only fields outside the defining module. Probably doesn't matter for 95% of use cases I reckon.

We are trying to limit the scope here to try to get it into OTP 29.

Hmm, this would require some additional syntax, 'private' | 'protected' | 'public' (borrowing from ets access types). Personally, I don't think this is necessary or wanted. Opaqueness of fields has been up for discussion but was dropped for now for implementation and reflection reasons.

@RaimoNiskanen
Contributor

@lpil wrote:

Is the only difference between the native and classic record syntaxes the # character in the name of the definition? This seems like it will be very error prone, ...

It is a different format, not only the # character; it is like record creation:

-record(foo, {a, b, c}).
%% vs.
-record #foo{a, b, c}.

So there is also a comma and parentheses that are different.

@tsloughter

I don't see the concern raised that we'll now have records, native records, and maps. I know they all serve different purposes, but the differences are slight, and new users definitely get confused about which to use. I don't think it should be a blocker to adding a new data structure that can improve devex or performance when choosing the right one, but it does worry me that this can't replace records.

Related to not replacing records, I think -record being overloaded is confusing. To be clear, I don't like the idea of -native_record either.

@essen

essen commented Nov 7, 2025

Hmm, this would require some additional syntax, 'private' | 'protected' | 'public' (borrowing from ets access types). Personally, I don't think this is necessary or wanted. Opaqueness of fields has been up for discussion but was dropped for now for implementation and reflection reasons.

Right, it's something that likely doesn't have to be done at runtime, but it is important to consider at least for the documentation, as internal fields definitely shouldn't be documented. Read-only fields can be marked as such easily in the text. But this depends on what the documentation will look like, I suppose.

@bjorng bjorng force-pushed the bjorn/native-records branch from 997a60f to d2ab33d Compare November 7, 2025 13:43
@bjorng
Contributor Author

bjorng commented Nov 7, 2025

Update: there can now be two distinct errors when Rec#rec.field fails: {badrecord,Term} or {badfield,field}.
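A sketch of how the two error reasons could be told apart when accessing a field. This uses the EEP's proposed record syntax and the error terms from this thread, so it is illustrative only and not runnable on current OTP:

```erlang
-record #rec{field}.

get_field(Term) ->
    try
        Term#rec.field
    catch
        error:{badrecord, Bad} ->
            %% Term was not a native record at all
            {error, {not_a_record, Bad}};
        error:{badfield, field} ->
            %% Term is a native record, but has no field named `field`
            {error, missing_field}
    end.
```

Splitting the two cases lets callers distinguish "wrong kind of value" from "record value predates this field".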

@RaimoNiskanen
Contributor

@tsloughter wrote:

Related to not replacing records, I think -record being overloaded is confusing. ...

"Not replacing records" should really be phrased as: not "replacing all tuple-record usage scenarios", but "replacing most tuple-record usages without having to update anything but the declaration".

With that in mind, I think overloading -record may do more good than harm.

@tsloughter

@RaimoNiskanen unless it outright replaces, 100%, named tuple records, I think using -record will add to the confusion that will already exist from there being two types of records plus maps.

I take it there are a million reasons -type shouldn't and can't also define a record:

-type #pair(A, B) :: #pair{
                           first :: A,
                           second :: B
                          }.

At first I wanted #{} but that is already used by maps!

I can concede there is no good alternative to -record... except maybe -frame (I kid, I kid). But making the cases where tuple records are needed as rare as possible may be important, so that Erlang doesn't grow its reputation for being confusing. Probably ets usage is the biggest one there.

@lpil

lpil commented Nov 7, 2025

I do share the concern that the similar syntax is confusing, and the difference being 2 characters of punctuation makes it challenging to differentiate between the two when reviewing code.

I can concede there is no good alternative to -record... except maybe -frame

struct was the one option that came to mind.

-struct #state{
  values = [] :: list(number()),
  avg = 0.0 :: float()
}.

Elixir already has defstruct, though that construct does seem very similar to this proposal in design and purpose.

@potatosalad

Question: Behavior when adding a field across distributed nodes

Consider this scenario:

Node A (old code):

-record #state{count = 0, name = "default"}.
State = #state{count = 5, name = "server1"}

Node B (new code):

-record #state{count = 0, name = "default", version = 1}.

When Node A sends a State record to Node B:

  1. Reading works fine: Node B can read State#state.count and State#state.name since those fields exist in the record value.
  2. Reading the new field fails: State#state.version will raise {badfield, version} because the field doesn't exist in the record value (it was created with the old definition).
  3. Pattern matching is unclear: Can Node B do #state{version = V} = State? Based on the spec: "Pattern matching fails if the pattern references a FieldK and the native-record value does not contain this field." This would fail.
  4. Updating appears problematic: Can Node B do State#state{version = 1} to add the missing field? The EEP states: "A native-record value is updated according to its native-record definition" and "An update operation fails with a {badfield,FN} error if the native-record update expression references the field FN which is not
    defined (in the structure definition)."

Issue: This seems to check against the definition on Node B, not the fields in the value from Node A. It's unclear whether the update would:

  • Succeed and add the version field to the record value
  • Fail because version doesn't exist in the value
  • Create a new record with all three fields (losing the old value's identity)

@potatosalad

Question: Field renaming across nodes

Consider this scenario where a field is renamed:

Node A (old code):

-record #user{id, username, city}.
User = #user{id = 1, username = "alice", city = "Stockholm"}

Node B (new code - username renamed to name):

-record #user{id, name, city}.

When Node A sends User to Node B:

  1. The record value still contains username: The field names are captured when the record is created, so the value has fields [id, username, city].
  2. Node B cannot read the new field: User#user.name raises {badfield, name} because the value doesn't have a name field.
  3. Node B CAN still read the old field: User#user.username should work because field access doesn't consult the definition—only the value. But is this problematic
    because username isn't in Node B's definition?
  4. Pattern matching breaks: case User of #user{name = N} -> N end fails because name doesn't exist in the value.

Issue: Field renames appear to be breaking changes in distributed systems.

The EEP states:

to perform read operations on native-record values — accessing native-record fields and pattern matching over native-record values — the runtime does not consult the current native-record definition.

This means reading works purely based on the value's fields, not the definition. So:

  • Old nodes can't read renamed fields (they use the old name)
  • New nodes can't read renamed fields (the value has the old name, pattern matching fails)
  • Updating is impossible (the definition has the new name, the value has the old name)

@potatosalad

Question: Removing a field

Consider this scenario:

Node A (old code):

-record #config{host, port, legacy_timeout}.
Config = #config{host = "localhost", port = 8080, legacy_timeout = 5000}

Node B (new code - legacy_timeout removed):

-record #config{host, port}.

When Node A sends Config to Node B:

  1. The removed field still exists in the value: The record value contains [host, port, legacy_timeout] because that's what Node A created.
  2. Reading removed fields: Based on the spec, Config#config.legacy_timeout should still work on Node B because field access doesn't consult the definition—it only checks if the field exists in the value. This means:
  • Code on Node B can accidentally read fields that "don't exist" in its definition
  • Linters/dialyzer on Node B would flag legacy_timeout as an error, but it works at runtime
  • Might this create a confusing situation for developers?
  3. Pattern matching works: case Config of #config{host = H, port = P} -> ... end works fine (only matching fields that exist in both value and definition).

Issue: The interaction between field access (which ignores definition) and pattern matching (which may or may not check definition) is unclear.

eeps/eep-0079.md Outdated

but guaranteed to always work and be more efficient.

TODO: What should the name of the BIF be?
Contributor

Something like is_subtype or is_compatible, maybe.

Contributor

I think subtype and compatible would make me think that the types of the fields are the same and are also checked. Perhaps has_record_fields, but there are no guards starting with has_. I am not sure how to phrase it using is_.

Note, however, that it is possible to match on the name of a
non-exported record. Thus, if the `match_name/1` function in the
following example is called with an instance of record `r` defined in
`some_module`, it will succeed even if the record is not exported:
Contributor

What is the motivation for allowing matching on non-exported records by name?


To have the possibility to use them in an API, such as a socket, where you want to be able to match on the record itself but not its contents. Today, with opaque types, you have to wrap them in a two-tuple: {socket, OpaqueStuff}.

@bjorng bjorng force-pushed the bjorn/native-records branch 2 times, most recently from 4024026 to dc31588 Compare November 17, 2025 08:05
@bjorng bjorng force-pushed the bjorn/native-records branch 2 times, most recently from 94aa076 to f73b48b Compare February 8, 2026 05:50
eeps/eep-0079.md Outdated
Comment on lines +231 to +232
of the module will be used. There is no way to create a native record
based on an old code generation.
Contributor

Does this mean that this code:

#user{name = ~"John", city = ~"Stockholm"}

when run in old code, would create a record using the new definition?

Contributor Author

Yes.

Contributor

Is that really a good idea? That would mean that a process running old code could create a record without a certain field and then crash when trying to match on that field. IMO local creation of native records should capture the definition of the running module and not the latest running module.


That would mean that a process running old code could create a record without a certain field and then crash when trying to match on that field

That would be pretty surprising behavior and make upgrades difficult. Old code shouldn't have to account for what the new code will look like. Only new code should handle what old code produces.

Contributor Author

Is that really a good idea?

No, probably not.

That would mean that a process running old code could create a record without a certain field and then crash when trying to match on that field.

I'm not sure that deleting a field that is used in the previous version of the code is a good idea anyway, but I agree the currently implemented behavior can complicate code upgrade.

IMO local creation of native records should capture the definition of the running module and not the latest running module.

Yes, that seems safer. I think I could probably make that change in time for RC2.

no value is provided for a field and there is no default field value,
a native record creation fails with a `{novalue,FieldName}` error.

In OTP 29, the default value can only be an expression that can be
Contributor

Is the goal to have arbitrary expressions later on? IMO it should be a goal.

Contributor Author

We haven't decided yet. Supporting arbitrary expressions is tricky because record creation would have to execute expressions in another module.

Contributor

I've always imagined record creation to be similar to a function call in how it works. That is, #foo:bar{} calls the foo module to create a bar record. Using this mechanism, you could also build "opaque" fields, as an exported record could have defaults that are non-exported records.

-export_record([bar]).
-record #bar{ private_data = #private{} }.
-record #private{ for_my_eyes_only = ~"hello" }.

Contributor

One argument against allowing arbitrary expressions is that it would make it impossible to create a native record with such default values within a NIF.

* If there is no corresponding native-record definition, creation
fails with a `{badrecord,Name}` exception.

* If the native-record definition is not visible at the call site (it
Contributor

I assume that this would include this example:

-module(foo).
-record #bar{baz = 1}.
foo() -> ?MODULE:bar{}.

That is a creation of a non-exported record using the global syntax?

Contributor Author

In the current implementation, this creation succeeds. Currently, specifying ?MODULE is exactly equivalent to omitting the module name. Should we treat them differently, and if so, why?

I will add your example to the EEP so that we can discuss it at the next OTP meeting.

Contributor

Should we treat them differently, and if so why?

Because it aligns with how function calls work, thus making it more idiomatic Erlang IMO.

Contributor Author

Yes, I will change the EEP to make native-records align with function calls.

eeps/eep-0079.md Outdated

* If the native record create expression references the field FN that
is not defined (in the native-record definition), creation fails
with a `{badfield,FN}` exception.
Contributor

Should the record name also be part of this exception?

Contributor Author

IMO we should keep the exceptions simple, but I'll add this as a question to the EEP.

eeps/eep-0079.md Outdated
Comment on lines +386 to +388
* The native-record value's captured definition states it as
from a different module than the calling module, and it was
*not exported* at the time of capture.
Contributor:

Can you use ?MODULE:Name.Field on a non-exported record in the same module?

Contributor Author:

Yes.

Contributor:

Hmm... That's a good question. You can't do that with a non-exported function. Why should they behave differently?

}.
```

`Name` and `FieldN` need to be atoms. `Name` is allowed to be used without quotes
Contributor:

Maybe list some examples of the things that you do need to quote, e.g. `_`, `1`, `=`, etc.?

Contributor Author:

Will do.
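A hypothetical sketch of which names would need quoting, assuming the fields follow the usual Erlang atom-quoting rules:

```erlang
%% Ordinary lowercase atoms need no quotes:
-record #point{x = 0, y = 0}.

%% Names that are not plain unquoted atoms must be quoted:
-record #'1st'{'_' = wildcard, '=' = eq, 'Upper' = u}.
```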

@bjorng bjorng force-pushed the bjorn/native-records branch from 4184479 to 7df600b Compare February 10, 2026 14:15
@bjorng

bjorng commented Feb 10, 2026

I've pushed an updated version where I've adopted the view that native-record operations behave similarly to calls (as suggested by @garazdawi).

  • Operations that don't specify a module name (neither explicitly nor implicitly using -import_record()) reference records in the current module, whether exported or not.
  • Operations that specify a module name (either explicitly or implicitly using -import_record()) will only work if the record is exported (for creation), or was exported at the time the record value was created. That also applies to ?MODULE; record operations (operating on the elements) with a module name will only work for exported records.

I've implemented most of these changes but will not make a PR until the EEP has been approved by the OTB.
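Under those rules, a hypothetical sketch (the module and record names are invented for illustration):

```erlang
-module(m).
-record #r{a = 0}.                %% defined but not exported

local()     -> #r{a = 1}.         %% OK: no module name, current module
qualified() -> #m:r{a = 1}.       %% fails at runtime: #r{} is not exported
self_mod()  -> #?MODULE:r{a = 1}. %% also fails: ?MODULE counts as a
                                  %% module name, so export is required
```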

```erlang
-record #bar{baz = 1}.
-export([foo/0]).
foo() -> #?MODULE:bar{}.
```
Contributor:

The `bar` record is not exported, but `bar` is defined in its defining module,
https://github.com/erlang/eep/pull/81/changes#diff-6fbdcf9ebdcf73842e54c4a8c9e012724c2b3cb22b6816d07e1f65220501d078R139
and when the EEP says

> foo/0 function will fail

is this a linter error, a runtime error if module qux calls foo:foo/0, or an error if module foo calls foo/0 internally?

Contributor Author:

I will clarify that it will fail at runtime.

eeps/eep-0079.md Outdated
Comment on lines +926 to +927
External term format is extended to support serialization of
native-record values.
Contributor:

Should this section be expanded a bit on what the scheme will look like?
In particular, given that efficiency is a big part of the motivation for this feature, IMO the compactness/efficiency of the on-the-wire encoding for distributed programming scenarios should be considered.

Contributor Author:

I will add some details about the encoding.

@bjorng bjorng force-pushed the bjorn/native-records branch 2 times, most recently from 1caab1a to a13fcfc Compare February 12, 2026 06:25
@bjorng

bjorng commented Feb 12, 2026

I've pushed a version updated after the meeting of the OTP Technical Board. Most of the changes are clarifications of existing content, but the OTB also decided that the badfield and novalue exceptions should include the module name and the name of the record.

I've also added a brief description of the external format for native records.

The parts of the EEP on which the OTB didn't reach a decision have been removed and saved in a branch (named in the EEP).

@bjorng

bjorng commented Feb 12, 2026

While implementing the things that had changed from the EEP, I realized that all of "Compile-time checking of records" is not implementable. Trying to use a tuple record lacking a definition has always been a compile error. If we don't have a record definition, we can't know whether a tuple record or native record was intended, so that must be an error.

I've pushed a fixup commit, which I hope will be the last change to this EEP. I hope to merge this tomorrow.
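A minimal sketch of the pre-existing behavior referred to here (the record name is invented for illustration):

```erlang
-module(t).
-export([f/0]).

%% No definition of missing_rec is visible in this module, so this
%% has always been rejected at compile time (record undefined).
f() -> #missing_rec{a = 1}.
```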

@williamthome

Hi! I noticed the Backward Compatibility section doesn't mention the impact on parse transforms and tools using abstract forms.

Native records will introduce new AST node types. Existing parse transforms that traverse the abstract forms may fail on code containing native record syntax (e.g., function clause errors on unknown nodes). Other EEPs that introduce new syntax typically address this — for example, EEP-0037 notes "Existing parse transforms might well fail on code containing the new form, but would work unchanged on code that does not" and EEP-0078 states "parse transforms or any tools using abstract forms, would need to be updated."
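For instance, a parse transform that pattern-matches exhaustively on form types will crash on nodes it has never seen; a catch-all clause keeps it working on code that does not use the new syntax (a minimal illustration, not from the EEP):

```erlang
-module(my_transform).
-export([parse_transform/2]).

parse_transform(Forms, _Options) ->
    [walk(Form) || Form <- Forms].

%% Clauses for the node types known when the transform was written...
walk({attribute, _Anno, _Name, _Value} = Attr) ->
    Attr;
walk({function, Anno, Name, Arity, Clauses}) ->
    {function, Anno, Name, Arity, Clauses};
%% ...and a catch-all that passes unknown nodes (such as future
%% native-record forms) through unchanged instead of raising a
%% function_clause error:
walk(Other) ->
    Other.
```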

Should a similar note be added to this EEP's Backward Compatibility section?

@bjorng

bjorng commented Feb 13, 2026

Thanks, I've added a similar note.

the need to define all fields in a record, it is very hard to accidentally
create a record with one million elements.

### `-import_record()`
Contributor:

One incompatibility of sorts that I've found is that -record(....). can be declared anywhere in the code, while -import_record(...) can only be declared before any function.

There exists code that for whatever reason includes file.hrl in the middle of the code instead of at the top, so if you want to replace file_info with a native record then those files break.

Moving the include up is of course a trivial fix, but depending on how many such files you have it could be a lot of work.
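A sketch of the pattern described (the module and function names are invented; it assumes file.hrl would carry an -import_record() if file_info became a native record):

```erlang
-module(late_include).
-export([mtime/1]).

helper(X) -> X.   %% a function clause appears before the include

%% file.hrl is included in the middle of the module; under the original
%% rule an -import_record() inside it would be rejected here, while
%% -record() definitions have always been allowed anywhere.
-include_lib("kernel/include/file.hrl").

mtime(Filename) ->
    {ok, #file_info{mtime = MTime}} = file:read_file_info(Filename),
    MTime.
```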

Contributor Author:

Yes, it should be allowed anywhere in the file. I've added a paragraph about that, and will post a message to the OTP team to see whether there are any objections.

@bjorng bjorng force-pushed the bjorn/native-records branch from ee96a7c to 47f3304 Compare February 13, 2026 11:09
@bjorng bjorng merged commit 2081366 into erlang:master Feb 13, 2026
1 check passed
@bjorng bjorng deleted the bjorn/native-records branch February 13, 2026 11:11