Problems

  • unified perspective and ortho rays - something is wrong with the inverse projection of the ray origins and ends
  • hypothesis 1 [x]: coordinates are in different coordinate systems in the vertex and fragment shader - NDC, (w, w, w) etc. (indeed, NDC was wrong)
  • hypothesis 2: perspective division is setting up w for the GPU to do the projection division. The inverse projection division is not working because the fragment-shader w coordinate is already 1.0. Maybe pass the undivided clip-space position via a texture coordinate (to skip perspective division and get proper coords), but then the w component may be different and invalid for projections onto the near and far clip planes
  • hypothesis 3: the direction of the rays is flipped, as z may be increasing in the opposite direction
  • UNITY is doing something under the hood
  • proper clip space to world space
  • use this working perspective+ortho projection as reference

Troubleshoot

  • when trying to debug the Roslyn source generators, in the solution configuration (edit the build configuration next to the hammer icon in Rider), unmark the linked Unity project and leave only the SourceGenerator project

TODO

  • basic raymarch domain
  • global option instead of a domain, simply use a parent transform as a scene root. Should a root be positioned inside the "portal" (current mesh domain) or should the portal be positioned inside the SDF controller? If so, then a "Portal" component could be added to narrow shader display to a mesh only.
  • gizmos/handles can be drawn using SDFs:
  • Elongate using semi-transparent box with draggable sides. Render face using color of unit vector for corresponding axis (1,0,0). Get pixel color of selected object
  • Round using 2D circle
  • twist using a shifted twisting infinite cylinder
  • bend using torus
  • CTRL for discrete steps, SHIFT for finer control
  • Handles reference
  • passes: gradients reference
  • iteration count
  • depth
  • normal debug
  • example on shadertoy
  • concavity
  • occlusion (subsurf)
  • flat material
  • displacement
  • clock cycles
  • BVH
  • render modes:
  • smooth (default)
  • splats (circles/textures)
  • AO approximation
  • SDF nodes:
  • all primitives
  • blend operators
  • domain operators
  • dye operators (final dye pass that takes an already created SDF and combines it with final dye SDFs (that only evaluate once at the end), without marching them)
  • Tri-planar texture mapping (sample by normal + blend on edge)
  • lengthy tutorial
  • maybe bi-planar projection as well, based on IQ's notes
  • correct tri-planar mapping with correct normals
  • depth to max ray distance
  • legacy unity but explained depth raymarching
  • forum thread
  • japanese tutorial on uRayMarchToolkit + alpha + depth
  • ortho+perspective generalized ray origin and direction
  • stack question asking for the same
  • Post with graphic about coordinates at different stages of shader
  • write depth of pixel -- to properly mix many domains
  • fix depth calculation in global origin mode
  • display 3D texture holding SDF or voxel data
  • local scale not affecting rays
  • fix normals
  • correct ZTest when inside of a domain
  • tri-planar texture swizzle and mapping
  • limit ray by doing depth prepass over backfaces
  • prepass has unexpected (but logical) behavior while multiple domains overlap
  • Runtime attribute that would transform inputs in editor to toggles/properties etc, but make them constant after compilation
  • Pixelize operator - something like return clamped distance
  • Check out noperspective for generating ray directions from vertices in the vertex shader (instead of the fragment shader). Check here for reference
  • Fix Z-test keyword based on this response on the Unity discord - basically use UnityEngine.Rendering.CompareFunction (see the sketch after this list)
  • support dynamic node visitors
  • fallback evaluators for nodes that are not implemented
  • allow users to define procedural textures
  • debug and release variants of source code generation (in debug, include the number of steps made by the shader; strip it from release)
  • in-editor camera effect shader (without domain) based on this tutorial
  • graph -> dependency tree (with topological sort) -> AST -> code generation (see the topological-sort sketch after this list)
  • think about the required keyword for records, init properties, primary constructors, record class vs record struct (C# 10), and readonly record
  • required, albeit interesting for constructing syntactically correct trees, prevents a wide range of lazy-initialized nodes from being constructed iteratively.
  • and it's C# 11 only...
  • add a SyntaxList record that implements IReadOnlyList and is used instead of IReadOnlyList anywhere in syntax, to allow simpler construction of children (without Concat, Append etc.; see the sketch after this list)
  • support with syntax for node creation and creating other nodes from already existing nodes
  • add replace method to graph that handles replacing nodes and reconnecting links (traverses to parents and rebuilds them)
  • apply serialization
  • depth prepass for ray origin and ray
  • maybe do it so that the ray starts on the face when outside of the mesh (isFrontFacing is true) but starts at the camera otherwise
  • in the graph, pass along a vector space metadata, and warn when two different vector spaces are mixed
  • create AssetImporter
  • fixup init accessors for the syntax packages
  • sample/ray jittering to avoid banding and similar artifacts
  • some info here
  • generate red factories for syntax etc, based on SourceWriter.cs
  • consider containing private fields in the syntax class and generating properties from them which add get and init accessors, where init assigns the parent
  • add <SyntaxElement>.MapWith(Visitor v) which uses dynamic dispatch to update fields recursively
  • test it
  • add common language syntaxes like "InjectedLanguage", Expression, Statement, Literal
  • remove abstract MapWith from syntax and move the logic solely to abstract visitors, utilize double dispatch
  • unit and vector tagging in shaders - so that a world-space vector isn't confused with model-space vector
  • use a simple tree model instead of a graph model to generate a scene. This way there is no need for any graph editor and whatnot, only a game object hierarchy
  • display focused SDF (even if it's in subtract mode)
  • calculate the estimated cost of a shader as a heuristic of basic operations * their occurrences. A parser could be needed for that (see the sketch under the operation-cost table near the end of these notes)
  • thread group debug view (for debugging number of steps per thread group)
  • fullscreen scene view rendering instead of a domain. Use domain if the SdfScene has a mesh renderer only. Learn from:
  • This video
  • and his code repo where he uses blit shader for camera here
  • implement conemarching:
  • video with demonstration of fractals rendered with it
  • an article mentioning conemarching and how it's made
  • a presentation of cone marching
  • fix an unsafe visitor cast inside generated Accept methods of concrete syntax nodes
  • use zipper instead of anchors and traversing logic
  • If zipper were to be used, some kind of child labels would be needed to avoid linear children traversal
  • use edge labels/indices for children
  • It would allow for easier checking which branch is a syntax node in instead of linearly searching children
  • traversing forward and backward logic would be easier even without a zipper
  • Create sub-shader assets with definitions of shader scene, required properties, includes etc. It would allow for including the generated scene definition in other shaders, because it would define functions like SdfResult SdfScene(float3 p). It could be even used in shadergraph if used properly
  • think about using TreeSitter as a parsing + AST library
  • Contain the raymarching inside a domain by doing a backface prepass (for the ray limit) and a regular pass from the front faces. This simple implementation can work well for convex shapes. For concave shapes (e.g. a donut domain -->|xxx| |xxx| where x is the inside of the same mesh) some additional work has to be done to avoid ray marching through "empty space". Possibly multipass depth peeling can be used for this, in a similar fashion to how it's used for order-independent transparency. Some newer technique than depth peeling could be used as well, or a single (two?) pass with an A-buffer (not sure if this is the proper link) or k+ buffer
  • consider using Unity Properties with property bags and property visitors for controllers/nodes
  • shared properties, so that one driver uniform can power multiple
  • Scene picking. For reference try the discord thread I started, decompiled HandleUtility.PickObject, https://docs.unity3d.com/ScriptReference/HandleUtility-pickGameObjectCustomPasses.html, ShaderGraph's scene picking and object ID defines and the following thread: https://forum.unity.com/threads/selection-outline-feature-and-selection-outline-shader-for-multi-selection.1022569/
  • Github shadergraph depth only pass for picking and selection
  • files to reference:
  • Library/PackageCache/[email protected]/Editor/Generation/Targets/BuiltIn/Editor/ShaderGraph/Includes/DepthOnlyPass.hlsl
  • Library/PackageCache/[email protected]/Editor/Generation/Targets/BuiltIn/Editor/ShaderGraph/Targets/BuiltInTarget.cs
  • seems like clayxels support scene picking??? AND it's open source?! (3rd gif from the top on the sidebar)
  • URP and HDRP support: https://blog.unity.com/engine-platform/migrating-built-in-shaders-to-the-universal-render-pipeline
  • material preview (see MaterialEditor class for OnPreviewGUI)
  • DOM-like model and diffing for improved architecture and data flow
  • fortify the AST codebase to prevent nulls, only use structs/record structs (not supported in unity for now), fewer allocations (stack managed data), better immutable structures
  • better syntax generators
  • line/symbol numbers for easier debugging
  • integrate with tokenizer and parser
  • express modifiers with requirements as monads
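
A minimal sketch for the Z-test keyword item above; the `_ZTest` property name and the matching `ZTest [_ZTest]` line in the shader are assumptions, not this project's actual setup:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public static class ZTestSetup
{
    // Assumes the generated shader exposes a Float property `_ZTest`
    // and uses `ZTest [_ZTest]` in its pass; both names are hypothetical.
    public static void ApplyZTest(Material material, CompareFunction compare)
    {
        material.SetFloat("_ZTest", (float)compare);
    }
}
```

Usage would be something like `ApplyZTest(material, CompareFunction.LessEqual)` for the usual depth test.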
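
For the graph -> dependency tree -> AST -> code generation item, a small topological-sort sketch; the `INode.Inputs` shape is an assumption:

```csharp
using System;
using System.Collections.Generic;

public interface INode
{
    // Upstream dependencies of this node (hypothetical shape).
    IReadOnlyList<INode> Inputs { get; }
}

public static class GraphOrdering
{
    // Orders nodes so that every node appears after all of its inputs;
    // throws on cycles, since code generation expects a DAG.
    public static List<INode> TopologicalSort(IEnumerable<INode> outputs)
    {
        var ordered = new List<INode>();
        var state = new Dictionary<INode, bool>(); // false = visiting, true = done

        void Visit(INode node)
        {
            if (state.TryGetValue(node, out var done))
            {
                if (!done) throw new InvalidOperationException("Cycle detected in node graph.");
                return;
            }
            state[node] = false;
            foreach (var input in node.Inputs) Visit(input);
            state[node] = true;
            ordered.Add(node);
        }

        foreach (var output in outputs) Visit(output);
        return ordered;
    }
}
```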
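
A rough sketch of the SyntaxList idea; the `ImmutableArray` backing store and the method names are assumptions:

```csharp
using System.Collections;
using System.Collections.Generic;
using System.Collections.Immutable;

// Wraps children so callers can build lists without Concat/Append chains.
public sealed record SyntaxList<T>(ImmutableArray<T> Items) : IReadOnlyList<T>
{
    public static readonly SyntaxList<T> Empty = new(ImmutableArray<T>.Empty);

    public T this[int index] => Items[index];
    public int Count => Items.Length;

    public SyntaxList<T> Add(T item) => new(Items.Add(item));
    public SyntaxList<T> AddRange(IEnumerable<T> items) => new(Items.AddRange(items));

    public IEnumerator<T> GetEnumerator() => ((IEnumerable<T>)Items).GetEnumerator();
    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}
```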

actions interactivity roadmap

The following table serves as a reference for things that are, should be, or cannot be implemented in Unity for some reason. The table represents the state of my current knowledge.

Status "done" means it was implemented but not stress-tested. Status "done?" means it has been done, but as a side effect of another change, and should be investigated.

| Event | Action | Status | Info |
| --- | --- | --- | --- |
| create primitive | regenerate | done | |
| delete primitive | regenerate | done | |
| reorder children | regenerate [1] | done | via OnTransformParentChanged |
| object rename | regenerate | bugged [2] | no "rename" hook, possibly requires active regeneration. Triggers on save. |
| component added to stack | revalidate [3] | done? [4][5] | no "componentAdded" hook in editor |
| component removed from stack | revalidate [3] | done? [4][5] | no "componentRemoved" hook |
| components reordered | revalidate [3] | done? [4][5] | no "componentOrderChanged" hook |
| observable property changed | update | present | |
| domain reload in prefab scene | revalidate [3] | done | via a domain reload static event hook or something |

Important to remember while documenting and while refactoring

  • be wary of combinations of ZTest, Cull, ZWrite, and ray origin on face vs. near plane, as they can render confusing geometry when combined in weird ways. E.g.:
  • Cull Off, Origin Face -- camera inside domain won't render the object "in the center of domain", as the origin would lie on a back face.
  • No ZWrite, Depth read, Cull Off, Origin Face - backface marched objects would layer on top of front marched objects
  • globals not declared in the material property block require static, e.g. static float _MAX_RAY_DISTANCE = 10000
  • Source generators:
  • use the RoslynAnalyzer and SourceGenerator tags for the dlls and do not reference them directly. They will apply in the assembly
  • annotations will be generated per-assembly
  • updating the source generator binary is best done in the native system file manager. Otherwise, the dll metadata with Unity tags is lost, which results in the generators "seemingly not appearing"
  • generator project has symbolic links that are defined in .csproj of the TestGenerators assembly
  • syntax generators generate:
  • syntax partials with ChildrenOrToken getters for each child
  • acceptors of visitors in syntax
  • visitors for each language
  • mapper for a language which is like a rewriter
  • mapping may be problematic if strongly typed nodes are used. For example, when a certain Type.Struct gets replaced with Type.Primitive, a parent syntax may expect Type.Struct, not a Type, so an incompatible type is returned, resulting in a runtime error (but regular Roslyn also throws runtime errors for syntax trees if kinds don't meet expectations etc.). The method definition would also have to look like Type.Primitive Map(Type.Struct type), where the parameter and return types are different. This is not achievable in a statically generated visitor pattern, because all syntax would have to return Syntax<> or itself, so this is a potential advantage of the dynamic dispatch pattern (see the sketch after this list). But a question may be asked whether:
  1. it is worth it to implement the dynamic dispatch pattern for this reason
  2. it makes sense to have strongly typed nodes in the first place, if they are not used for type checking, but only for code generation
  3. strongly typed nodes are useful at all with how limiting the type system is in C# (unlike TS for example)
  4. rewriting of such nodes should be possible at all, or the user should be expected to rewrite them in their parent, where the context is actually relevant
  • some syntax nodes have nullable children, but their nullability actually represents the validity of the node instead of true nullability. If record primary constructors OR required properties OR TRUE init properties were possible, it would be far more correct (currently nullable syntax parts are marked as warnings). This is C# syntax trying to emulate representing valid and invalid syntax trees at the type-system level, albeit quite poorly. Possibly constructors could be generated and called with keyed arguments with defaults, like new Call(id: ..., body: ...), but defaults in constructors are restricted to simple types. There are also considerations of how this interacts with the with immutable-initializer syntax.
  • anchors are used instead of a red-green tree for simplicity and because they are easier to implement, and also for explicitness of the traversal and to avoid public+internal syntax nodes.
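
To illustrate the mapping problem described above, a small sketch; all type names are made up for the example:

```csharp
// Illustrative only; these syntax types are hypothetical.
public abstract record TypeSyntax;
public sealed record StructTypeSyntax(string Name) : TypeSyntax;
public sealed record PrimitiveTypeSyntax(string Keyword) : TypeSyntax;

// A statically generated mapper can only promise the common base type...
public abstract class TypeMapper
{
    public virtual TypeSyntax Map(StructTypeSyntax node) => node;
    public virtual TypeSyntax Map(PrimitiveTypeSyntax node) => node;
}

// ...so a rewrite that lowers structs to primitives type-checks here,
// but a parent that stored its child as StructTypeSyntax has to cast the
// result and may fail at runtime (the same class of error Roslyn throws
// when kinds don't meet expectations).
public sealed class LowerStructsToPrimitives : TypeMapper
{
    public override TypeSyntax Map(StructTypeSyntax node) => new PrimitiveTypeSyntax("float4");
}
```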

Design notes, decisions and insights

  • separating node logic from node builders was an idea that in theory should allow for creating a graph model without relying on a specific target of code generation (i.e. a Node as a "Shape" holder defining input/output contracts and Builders as processors of such nodes that collect the code for a specific target language)
  • it had the advantage of writing a single model for nodes shared across, for example, build targets, output languages (GLSL/HLSL), or even different parts of shader (for example a multiply node that is common for both HLSL shader code and for Shaderlab code)
  • this, however, seems like a needless repetition of an abstraction and distributes one idea over multiple files. For example, changing the model of some SdfNode would also require all the builders to be changed accordingly (and if they somehow succeed in generation without errors, those are silent cases: possibly logically wrong but syntactically correct)
  • It is unlikely that anyone will want to write for a target other than Shaderlab+HLSL, so this abstraction may not be needed in the first place
  • It is harder to keep consistency between data passed through the node links and the output generated by builders for a port.
  • there won't, however, be an easy way to provide an alternative implementation for a node without recreating it or grossly duplicating the code
  • It was supposed to separate the traversal logic from the logical connection logic.
  • With it, non-DAG graphs could be handled, but it adds a lot of unneeded complexity
  • Nodes could take other nodes at construction.
  • This way there would be no way to create invalid graphs at the language level. Every graph would be a DAG (where output nodes would be constructed using input nodes, so that layers are made "first to last").
  • A potential problem would be binding output ports to input ports, because they must reference each other somehow, yet disallow reconnecting an already connected port.
  • with this decision, nodes would be immutable, and creating new connection would essentially mean rebuilding a node.
  • A clear distinction must be made between a node instance and a node template (which in this case could be a class).
  • A node class contains default values, a function for generating code (possibly factory-like, because it should return new instances of syntax nodes).
  • A node instance contains values that are specific to a given node in a graph. It should be able to generate a syntax node that represents it in a graph. A node instance should contain its unique UUID that would be used for generating properties.
  • syntax parts should have public getters and public init, no setters. The creation of syntax nodes, as well as copying, is accomplished not with factory methods but with record designated constructors. This way selective and default builders are implicitly defined in classes, and creating new nodes from old nodes becomes trivial:
```csharp
var x = new VariableDeclaration { type = ..., id = "x", initializer = ... };
var another = x with { id = "another" };
```
  • syntax sugar is added using implicit conversion operators; for example, assigning a string to InitializerName creates a new initializer token (see the sketch after this list)
  • Unfortunately C# doesn't have support for both sum and product types, so creating closed, compile-time-safe syntax is not possible without compromises. Typescript could do that (for example, narrow the types of accepted syntax to produce a valid syntax tree). The compile-time safety would be nice for constructing syntactically correct trees, however it could lead to problems with representing incorrect trees. This work doesn't focus on parsing nor on handling incorrect syntax trees though, so it's good enough.
  • I came up with another approach, by noticing that tokens and (lists of) trivia form an alternating stream. Maybe it would be useful to represent the trivia between two tokens by a single list and just point tokens to their left and right trivia, while pointing (red) trivia to their left and right tokens. This way, I essentially create not a tree, but a top-down DAG: it connects on the last trivia layer, where each leaf token points to the previous and next trivia list. The red nodes (dynamic, on demand, with parent references) then create a special type for the trivia list that references the previous and next token (or null if it's the first/last token in the tree). With this I don't have to include a special EOF token, and what's more, I gain an easy way of iterating the token and trivia stream and going back and forward. But it is a bit of a pain to implement, especially when the current syntax tree is not the intended final implementation, so we will see later.
  • possibly a better design would be a loosely typed syntax tree that only has Syntax, Token and Trivia nodes (and similar), with only creational methods that enforce correct syntax types and structure, as well as a facade for the tree that provides typed access. Nodes in such a facade would be bound by Syntax<> and similar, as well as marker interfaces for type matching.
  • Visitors/walkers and similar:
  • visitor pattern - problematic with simple pattern matching because matching StructuredTrivia is hard without specifying type in Accept<..>(Trivia), which is not possible without reflection. Adding types makes them recurse and there is no simple way to descend structured trivia without knowing the type.
  • A Visitor interface hierarchy and accept methods sound like the simplest option: a base IVisitor and language-specific extensions, possibly.
  • There is a problem with this approach - anchors. They have to be somehow passed to Accept. There is also no clear way to do an accept on an anchor
  • also how to override accept methods with visitor subtypes e.g. accept hlsl visitor???
  • I don't remember why, but I have made visitor methods accept nullable parameters
  • They might have been for a.Node.Accept(this, a.Parent) calls when a.Parent is null
  • maybe research a Rope or Zipper data structure for navigating syntax trees?
  • some intro and images for zipper
  • some zipper implementation and description
  • I've made a concept idea of a "weave tree" in other notes, that is basically a tree holding a token stream and trivia stream, where there is a trivia list between each pair of tokens. This tree would avoid the complexity of leading/trailing trivia and their attachments, would provide easy way of navigating the token and trivia stream and could give easy access to navigate between tokens and neighboring trivia. It could solve some problems with existing red/green tree and the need for attaching trivia to tokens (e.g. currently some trivia are attached sensibly, like whitespace or comments, but other, like preprocessor trivia, are attached arbitrarily and have nothing to do with an attached token). It could possibly allow for easier syntax rewrites by operating on a set of tokens and syntax nodes as "brackets" over them (or spans).
  • There is one poor thing associated with zippers, namely that we would still need either a type variable, differently named methods, or a set of interfaces extending the zipper that override/reintroduce the property with their specific data type, to make overriding logic in other interfaces/mappers/rewriters easier. For example, how to do something like ... Visit(Anchor<BinaryExpression> aBin) without it?
  • Maybe preprocessor syntax as a trivia is not a good solution. Maybe a layered architecture would be better, where first stage tree represents syntax visible by preprocessor, second stage by shaderlab, third stage by hlsl etc.
  • Generally, while developing this I noticed that Roslyn's model of typed tree nodes with static properties is hard to work with on a DX level. For example, expressing things like "replace certain tree patterns, such as the leftmost token of a statement, with a new token" is hard to do without advanced code-fu. Properties being static also doesn't really help with tree rewrites and updates. Traversal is problematic: having to explicitly keep track of the path and parent references duplicates work that is implicitly done by the program when descending down the call stack.
  • We could use a generic syntax node with a dynamic runtime Kind property. If the Kind were an enum, we could also create pseudo-algebraic type bounds by creating overlapping enum ranges for syntax nodes. Then we could have, for example, enum SyntaxKind { UNARY, BINARY, PRINT }, enum ExprKind { UNARY, BINARY }, enum UnaryKind {} and Syntax<Lang, ExprKind> (see the sketch after this list)
  • on the other hand there would also be problems with type casts, e.g. Syntax<Lang, ExprKind> is not a valid tree for Syntax<Lang, UnaryKind>.
  • and problems with making sure that, say, Syntax<Lang, AddExpr> is a binary expression AND its token is plus.
  • Rewrite tree node to support generic get operation for indexed child and replacement for index with new child
  • use type casting and depend on cast exceptions when children are incompatible
  • rewrite visitor and acceptor pattern with a simple pattern matching over node kind in a hierarchy of more and more specialized methods
  • don't use typed anchor, use differently named methods and untyped zipper
  • zipper should be just a wrapper around a child with the path to parent and knowledge of the child index in the parent
  • alternatively a zipper could be just a parent + child label, and the Node property would be a derived value of the parent with the child (see the sketch after this list)
  • this could make multiple calls to get child at index though, possibly 1 for each pattern match.
  • by making Node implement IReadOnlyList we gain a fast reverse iterator, but at the same time we have to think about what exactly it should enumerate. Branches? If so, is null valid? Imo yes. Any post-processing should be done manually by appending the index and filtering nulls
  • Sometimes ReferenceEquals checks could be improper when we would want to share common syntax nodes. For example, an indentation whitespace node could be shared to save space, but at the same time some other mechanism (maybe tagging?) should be used to determine which child is the node (and whether it is really the same node or not)
  • If syntax tree validity cannot be fully enforced, then what is the point of partial validity? For example, if the type of statement in the top-level scope is restricted, but the types of tokens aren't, then what benefit does it bring? Well, for me, a bit of DX simplicity. Complex syntax structures are not that easy to understand, and it may be hard to know at times "what kind of syntax do I put at this child?". This lets the compiler and IDE help when writing the trees. Ideally the whole tree would be checked, along with tokens etc. But without proper ADTs it's not yet possible.
  • Currently, the syntax api uses new() and record initializers for AST node creation. The big con of this approach is the possibly large number of allocations. What could be tried instead is some kind of "node pool", but then the DX could become quite ugly again...
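
A minimal sketch of the conversion-operator sugar mentioned above; the `Token` and `InitializerName` shapes are assumed:

```csharp
// Sketch only; Token and InitializerName are hypothetical shapes.
public sealed record Token(string Text);

public sealed record InitializerName(Token Identifier)
{
    // Lets callers write `InitializerName name = "color";` instead of building the token by hand.
    public static implicit operator InitializerName(string text) => new(new Token(text));
}
```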
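
A sketch of the "runtime Kind with overlapping enum ranges" idea and the cast problem it runs into; all names are assumptions:

```csharp
using System;

// The shared numeric values are what creates the "overlap" between kinds.
public enum SyntaxKind { Unary = 1, Binary = 2, Print = 3 }
public enum ExprKind   { Unary = 1, Binary = 2 }

public interface ILang { }
public sealed class Hlsl : ILang { }

// The kind parameter narrows what a slot may hold, but only by convention:
public sealed record Syntax<TLang, TKind>(TKind Kind)
    where TLang : ILang
    where TKind : struct, Enum;

// var expr = new Syntax<Hlsl, ExprKind>(ExprKind.Binary);   // ok
// Syntax<Hlsl, SyntaxKind> general = expr;                  // does not compile: unrelated types,
//                                                           // which is exactly the cast problem noted above
```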
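
And a sketch of the "zipper as parent + child label" variant, with `Node` derived on access; the node shape is assumed:

```csharp
using System.Collections.Generic;

// Node shape is assumed; children are addressed by index (the "edge label").
public sealed record Node(string Kind, IReadOnlyList<Node> Children);

// Stores only the parent chain plus a child index; the focused Node is derived,
// so each access re-reads the child (the cost noted above).
public sealed record Zipper(Zipper? Parent, Node ParentNode, int ChildIndex)
{
    public Node Node => ParentNode.Children[ChildIndex];

    public Zipper Down(int index) => new(this, Node, index);
    public Zipper? Up() => Parent;
    public Zipper? Right() =>
        ChildIndex + 1 < ParentNode.Children.Count ? this with { ChildIndex = ChildIndex + 1 } : null;
    // Focusing the root itself would need special handling (there is no parent node to index into).
}
```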

Comparisons

Comparison to other software. The list includes advantages and disadvantages.

Other similar tools:

| Feature | Mine | Shadergraph | Womp | [Unbound](https://www.unbound.io/) | uRaymarching |
| --- | --- | --- | --- | --- | --- |
| status | new, experimental, maintained | stable (in unity sense), active | active | not released as of yet | discontinued |
| source availability | open, MIT | semi-open (native bindings to closed-source libraries) | proprietary | not clear, freemium? | open-source |
| tech | C#, Unity, but not that tightly coupled | Unity-only | Web-only | Web + addons for other programs, possibly NOT standalone | Unity |
| UX | experimental, poor | OK but still lacking | Intuitive | Intuitive, but a bit "kiddy" | Partially intuitive, requires manually constructing scenes |
| performance | quite good actually | good | exponentially worse with increased scene complexity | Unknown, seems to be ok? | fine, but may not work in modern unity |
| price | free | free | paid, maybe freemium but pro plan required for any real uses | freemium, seems like most essential features are free | free |
| extensibility | high | low | very low? | possibly high | moderate but difficult |
| preview | live, realtime | live, realtime | live, depends on connection | realtime, in web | semi-realtime |
| documentation | partial | scarce | only user guide | | partial |
My pros

  • Free
  • Open source
  • Can serve as an intermediate layer and the language support can be freely extended
  • More flexible and powerful API for generating shader code with higher potential.
  • Syntax trees can be manipulated and formatted in any way
  • Data flow between partials is not restricted to fixed data types and an evaluation model. For example, a function can be a fully supported output in a certain context.
  • Support for custom "master stacks" would be far easier. Same goes for custom inputs, interpolators (with interpolation modes)
  • Many features that are "Under consideration" in Shadergraph could be implemented in user-land without modifying the package
  • custom struct types
  • static branching
  • dynamic branching with keywords
  • console interop
  • multiple passes trivially supported
  • light mode tag, blending mode
  • portals
  • easy access for estimated shader performance (estimated instruction count, sampler count)
  • target many targets with one universal shader graph
  • easier support for more advanced nodes like loops, gradients etc.
  • 8 coordinates per texture. Damn, whatever you want actually. Packed, unpacked.
  • safety for context-sensitive operations, for example avoiding operations on vectors in different reference frames, like dot between object and world space direction vectors.

My cons

  • mine is experimental and a lot has to be yet done
  • for now, BIRP support only; URP and HDRP can be added via the shader template/partial mechanism.

Womp

  • Free
  • Open Source

Threads and forum posts on problems with raymarching:

Unity docs

Notes

Additionally: Depth interpolation

Unity internals:

  • UnityEngine.Rendering.BlendOp : blend operation enum
  • UnityEngine.Rendering.BlendMode : blend mode enum
  • UnityEngine.Rendering.StencilOp : stencil operation enum
  • UnityEngine.Rendering.CompareFunction : compare function enum
  • UnityEngine.Rendering.CullMode : cull mode enum
  • UnityEngine.Rendering.ColorWriteMask : color write mask enum

ComputeScreenPos takes a vector in clip space and produces a vector that has to be perspective-divided by its(?) w to properly represent the screen position (0->1)

Discord:

Cyan — 12.08.2022 15:47 ZTest is the same thing as Early-Z, kinda. It's a stage in the rendering that occurs after the vertex shader. The depth of the fragment (basically a pixel) is tested against the depth buffer, based on the ZTest compare function. If it passes, the value would also be written to the depth buffer (assuming ZWrite On). If it doesn't pass, that fragment/pixel isn't drawn.

Except, if you use clip(), discard; or alter SV_Depth in the fragment shader, that early-Z can't occur as the depth of the fragment isn't really known. So the depth test then occurs after the fragment stage instead.

"Depth or Z Prepass" is also something different, where you render opaque geometry first to the depth buffer only. Then render again normally. I'm not too familiar with it. I guess it saves time rendering pixels from overlapping objects. In URP there's a "Depth Priming" mode on the Universal Renderer asset which does this.

costs of shader operations:

```
scalar add,sub,mult,div  1
vec4 add/mul/div/sub     4*1
scalar fract()           2
scalar sqrt()            2
vec3 dot()               3
vec3 normalize()         5        (dot + sqrt)
scalar sin()             2+3      (fract + dot)
vec4 sin()               4*(3+2)  (4 * sin)
mat4 * vec3              4*3      (4 * dot)
mat4 * mat4              4*4*3    (4 * 4 * dot)
```
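
A tiny sketch of how such a table could feed the "estimated shader cost" TODO item; the operation names and how occurrences are counted are assumptions:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class ShaderCostEstimator
{
    // Per-operation costs mirroring the table above; the keys are hypothetical.
    private static readonly Dictionary<string, int> Cost = new()
    {
        ["add"] = 1, ["sub"] = 1, ["mul"] = 1, ["div"] = 1,
        ["frac"] = 2, ["sqrt"] = 2, ["dot3"] = 3,
        ["normalize3"] = 5, // dot + sqrt
        ["sin"] = 5,        // fract + dot
    };

    // Heuristic: sum of (cost of operation * number of occurrences in the generated code).
    public static int Estimate(IReadOnlyDictionary<string, int> occurrences) =>
        occurrences.Sum(kv => Cost.TryGetValue(kv.Key, out var c) ? c * kv.Value : 0);
}
```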

transform matrix layout inside HLSL when passed via a uniform:

```
_m00, _m01, _m02, _m03
_m10, _m11, _m12, _m13
_m20, _m21, _m22, _m23
_m30, _m31, _m32, _m33
                  ^
                  transform (translation) column

// so extracting the transform is
o.color.xyz = _BoxFrame1_Transform._m03_m13_m23;
```
REGEX for replacing syntax to generate partial classes with parent binders:

```
from: (^\s+public partial record (\w+)((?:\s|.)*)\n([^\S]*))public ((?:\S|, )+)\s+(\w+)\s+\{ get; init; \}(.*)
to:   $1readonly $5 _$6; /*parent_binder*/ public partial record $2 { public $5 $6 { get => _$6; set => _$6 = value with { Parent = this } } public $2() { _$6$7 } }
```

It requires multiple steps, but using grep the names of the classes can be extracted later.
Footnotes

  1. regenerate as in "perform always"; this may depend on the commutativity of the parent operator (e.g. smoothmin), which is important to handle.

  2. for some reason saving after renaming refreshes the prefab and restores the previous names, but exiting and re-entering the prefab stage shows the new state

  3. revalidate as in regenerate if needed; e.g. only components derived from Controller should be affected. Revalidate compares changes and issues a regenerate if needed

  4. research ObjectChangeEvents and ChangeGameObjectStructure* events

  5. works thanks to OnEnable or OnValidate or what???