The First Hundred Lines - #1
A sheet of vellum stretches across the desk—blank, but not empty. You feel it resisting haste. It makes you slow down: you’re not just writing—you’re committing. Every mark counts. You begin with nothing, and already you are shaping everything.
This article follows the first sessions of building Vellum, a C# library for symbolic mathematical reasoning. It begins not with features or evaluation, but with design: type systems, generics, and the quiet pleasure of getting abstractions just right.
Vellum was always meant to grow: I wanted a library with a very specific purpose, and yet limitless possibilities. Something I could use as a playground to try out the most complex—even ridiculous—design ideas I could dream up. And honestly? What better sandbox than math? Really, this session lit a spark. Reminded me just how much I love the first phase of a project: laying the foundation brick-by-brick. The goal wasn’t to present the "correct" or even the "best" way to model mathematical concepts. It was simply to celebrate the beauty of carefully and purposefully shaping a complex inheritance chain, if only for the satisfaction of getting it right.
The guiding principle was always this: everything must work with everything. An expression cannot be limited to atomic terms. It must be able to contain other expressions—just as naturally as it contains variables or numbers. This mirrors the way parentheses work in math, and it paves the way for future constructs: expressions as factors, as exponents, as denominators. Because of the structure built here, each of those will become almost trivial to implement. That’s what gets me every time: designing something so tight it works on its own. It's why I keep coming back to C#: you fight the type system up front—but once it clicks, it holds everything in place.
1. Expressions Within Expressions
At the start, an expression looked something like this:
public class Expression<T> : Term where T : Term
{
    public List<T> Terms { get; } = [];
}

public abstract class Term { }
Using generics lets you define a scope for the expression: only terms of type T are allowed. For example, a NumericExpression : Expression<NumericTerm> can only contain instances of NumericTerm.
But there was an immediate problem: NumericExpression couldn’t itself be a NumericTerm (C# doesn’t let a class inherit from its own type parameter, so Expression<T> cannot derive from T), and so expressions couldn’t nest. You could not, in other words, represent parentheses—one of the most basic features of symbolic mathematics. Although Expression inherits from Term, its Terms list expects items specifically of type T, not just any Term.
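To see the failure concretely, here is a minimal sketch of that first design (NumericTerm's empty body and the usage lines are mine):

public class NumericTerm : Term { }
public class NumericExpression : Expression<NumericTerm> { }

// Inside some method:
var outer = new NumericExpression();
outer.Terms.Add(new NumericTerm());          // fine: an atomic term
// outer.Terms.Add(new NumericExpression()); // compile error: a NumericExpression is not a NumericTerm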
What I needed was a way for both expressions and atomic terms to be represented by the same type, while keeping the specificity of what kind of term we're working with: an Expression<NumericTerm> should be able to nest other Expression<NumericTerm>, but it shouldn't be able to contain an Expression<AlgebraicTerm>, for example.
First, I tried using an interface:
public interface IComposableTerm<T> where T : Term { }
Each Term subclass could implement IComposableTerm<T>, specifying the kind of term it's compatible with:
public class NumericTerm : Term, IComposableTerm<NumericTerm> { }
Now expressions could also declare themselves compatible, like so:
public class Expression<T> : Term, IComposableTerm<T> where T : Term { }
And with both Expression<T> and Term subclasses deriving from the same IComposableTerm<T>, both atomic terms and expressions could be contained in a single List<IComposableTerm<T>>.
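Under this design, the expression's list loosens accordingly. A sketch, with the list initializer as my assumption:

public class Expression<T> : Term, IComposableTerm<T> where T : Term
{
    // Atomic terms and nested expressions now share one list:
    public List<IComposableTerm<T>> Terms { get; } = [];
}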
It allowed expressions and terms to be treated similarly, but only by introducing redundancy. Each term had to explicitly inherit from both Term and IComposableTerm<T>, which felt inelegant.
The insight came from flipping the inheritance model. What if each term could declare itself compatible with itself?
public abstract class Term<T> : IComposableTerm<T> where T : Term<T> { }
And now:
public class NumericTerm : Term<NumericTerm> { }
This required a shift in perspective: instead of asking what terms an expression can contain, I started asking what terms declare themselves compatible. This led me to something borrowed from C++: CRTP, the Curiously Recurring Template Pattern, which solved the nesting issue completely and naturally. An Expression<T> could still contain any IComposableTerm<T>, either an atomic term or another expression, without casting or reflection, and without additional boilerplate in class definitions.
It didn’t feel like a hack—more like a structure I’d been trying to find without realizing it. A separate article will explore CRTP in depth—what it is, how it works in C#, and why it's such a surprisingly elegant fit here.
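Put together, a minimal sketch of the CRTP version (the empty bodies and the Demo wrapper are illustrative assumptions, not Vellum's actual API):

public interface IComposableTerm<T> where T : Term<T> { }

public abstract class Term<T> : IComposableTerm<T> where T : Term<T> { }

public class Expression<T> : Term<T> where T : Term<T>
{
    public List<IComposableTerm<T>> Terms { get; } = [];
}

public class NumericTerm : Term<NumericTerm> { }

public static class Demo
{
    public static void Nest()
    {
        // An Expression<NumericTerm> is itself an IComposableTerm<NumericTerm>,
        // so expressions nest freely alongside atomic terms:
        var outer = new Expression<NumericTerm>();
        outer.Terms.Add(new Expression<NumericTerm>()); // parentheses!
        outer.Terms.Add(new NumericTerm());
    }
}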
2. Operator Overloads and the Delegated Add
Symbolic math relies on composability. You should be able to say a + b, and have it mean something—not necessarily evaluation, but at least combination.

But in C#, operator overloads can’t be abstract. So even though it made conceptual sense that all terms should support +, the compiler wouldn’t let you force it through inheritance.
The compromise became a strength:
public abstract class Term<T> where T : Term<T>
{
    protected abstract Expression<T> Add(Term<T> other);

    public static Expression<T> operator +(Term<T> left, Term<T> right) =>
        left.Add(right);
}
The workaround was subtle, but powerful: delegate + to an abstract Add() method. That way, you get the flexibility of polymorphism without violating C#’s operator rules. I should note, this isn’t a new trick—it’s the usual way to simulate polymorphic operators in C#. But here, it didn’t feel like a workaround at all. More like a design choice that reinforced the system’s core logic, because this did the following two things:
- It made the required override explicit: Add() is the only method subclasses must define—as opposed to negation or other operators, which are handled internally.
- It allowed the + operator to still work seamlessly on any term.
In a concrete term like NumericTerm, this led to beautifully simple implementations:
// CoefficientWithSign is just Sign * Coefficient
protected override Expression<NumericTerm> Add(Term<NumericTerm> other) =>
    new([new NumericTerm(coefficient: CoefficientWithSign + other.CoefficientWithSign)]);
That's the beauty of this Add() method: sometimes it’s symbolic, just gluing things together, as with expressions (more on that later). Sometimes it’s math. Depends on what the term’s doing. That flexibility is built into the design: everything’s delegated to the lowest level possible—but you still get clean, readable code.
And perhaps best of all, no casts.
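A quick usage sketch (the constructor shape is borrowed from the snippet above; the variable names are mine):

NumericTerm a = new(coefficient: 2);
NumericTerm b = new(coefficient: 3);
Expression<NumericTerm> sum = a + b; // delegates to a.Add(b): one term with coefficient 5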
3. The Sign: Constraint, Behavior, Elegance
Originally, Sign was just an enum:
public enum Sign
{
    Positive = 1,
    Negative = -1
}
It made sense at first—after all, a sign is just + or -. But in practice, it wasn’t enough: enums in C# can’t support operator overloading or implicit conversions, which meant constant casting—especially when treating the sign numerically with something like Sign * Coefficient. That friction felt wrong. A sign should feel implicit: it’s a property of every term, and it should be effortless to work with.
What emerged instead was a Sign struct, built around a simple backing enum:
public enum BinarySign
{
    Positive = 0, // zero, so that default(Sign) comes out positive (see below)
    Negative
}

public struct Sign
{
    public const BinarySign Positive = BinarySign.Positive;
    public const BinarySign Negative = BinarySign.Negative;

    private readonly BinarySign _value;
    private Sign(BinarySign value) => _value = value;

    public static implicit operator int(Sign sign) =>
        sign._value == Positive ? 1 : -1;

    public static implicit operator BinarySign(Sign sign) => sign._value;
    public static implicit operator Sign(BinarySign value) => new(value);

    public static Sign operator -(Sign sign) =>
        new(sign._value == Positive ? Negative : Positive);
}
You could model sign as an int or a bool, but neither captures the intent—and both invite bugs. I wanted something you could hold in the type system. Something that just felt right to work with.
The shift to a struct was subtle, but surprisingly powerful. First, it constrained sign creation: Sign could only be initialized using the public constants Positive and Negative, ensuring no invalid values could slip in. Implicit conversions let the struct behave naturally in both numeric and symbolic contexts. You could multiply it with coefficients, flip it with -Sign, and store it safely without ever worrying about illegal states.
One final detail made everything click: setting Positive = 0 in the enum. This allowed Sign to work with default values in constructors (since, in C#, default sets all fields to 0 for structs)—no edge cases, no extra boilerplate.
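A small usage sketch of the result, using only what the struct above defines:

Sign sign = Sign.Negative;  // via the implicit BinarySign -> Sign conversion
double value = sign * 2.5;  // implicit Sign -> int, so this is -2.5
sign = -sign;               // flips back to positive
Sign fallback = default;    // all fields zero: BinarySign.Positive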
Overkill? Maybe on paper. But in use, it was exactly what I needed. It’s a precise fit for the role sign plays in this system: foundational, implicit, and always correct.
4. Lossy Addition and ExpressionComponent
In the early phase of building out Expression<T>, there was one unresolved question: what should Add() return?
If an expression was being added to a term—or to another expression—it had to return a result that was valid, even if the types involved weren’t fully aligned. But there wasn’t yet a safe, universal base type for “anything that can go inside an expression.”
So I introduced:
public abstract class ExpressionComponent { }
I now had a non-generic base type that Term<T> could inherit from, and with that, Add() could safely return:
protected override Expression<ExpressionComponent> Add(Term<T> other) =>
    new([this, other]);
This gives a new Expression without a specific type that includes both this and other as separate terms. It was a form of graceful fallback: type-erased, but structurally sound. No reflection. No type gymnastics. Just clean composition.

What I hadn’t yet realized is that Add(Term<T> other) already requires that other is the same type as this (both are Term<T>), so there was no need to lose type specificity at all. At that point, I wasn’t even sure if every component needed a Sign at all—which is why ExpressionComponent sat slightly apart from Term<T>, where Sign is included. Later, I realized they all did—and the scaffolding could be removed. It wasn’t elegant, but it did the job, and that gave the system space to grow.
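Here is a sketch of where that realization eventually leads (my reconstruction, not a quote from Vellum's source): since other is already a Term<T>, the fallback can stay fully typed.

// Inside Expression<T>: gluing two same-typed operands into one expression.
protected override Expression<T> Add(Term<T> other) =>
    new([this, other]);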
5. Coefficients and Conceptual Clarity
I wanted an intuitive representation of terms—either as sign + coefficient + body, or just coefficient + body. There was no reason not to allow both. But internally, the structure should remain consistent: sign as a discrete binary value, and coefficient as a separate continuous quantity.
To make this possible, coefficient normalization was added to the Term<T> constructor:
protected Term(double coefficient = 1, Sign sign = default) // default(Sign) is Positive
{
    Sign = sign;
    Coefficient = NormalizeCoefficient(coefficient);
}

private double NormalizeCoefficient(double coefficient)
{
    if (coefficient < 0)
    {
        coefficient = Math.Abs(coefficient);
        FlipSign();
    }
    return coefficient;
}

private void FlipSign() =>
    Sign = -Sign;
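For example (assuming NumericTerm forwards these parameters to the base constructor):

var term = new NumericTerm(coefficient: -3);
// term.Coefficient == 3
// term.Sign == Negative: the minus sign was pulled out during normalization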
If a coefficient came in negative, its sign would be extracted and stored separately. This preserved clarity within the object model, while allowing users to write the code that felt most natural.

Still, one bit of tension lingered: FlipSign() modified the internal state of the object, and that violated the intended immutability of Term<T>. But in this case, it felt like a fair trade. FlipSign() was now used only within the constructor during normalization: its scope was tight. And soon, deep-copy methods would be introduced, restoring a sense of purity in how terms were duplicated and transformed. Not perfect, not pure—but expressive. And that was the point.
The Lantern Moment: Designing With Constraint
At the core of Vellum is a belief: constraints can be beautiful.
C# doesn’t allow multiple inheritance. It doesn’t allow abstract operator overloads. Enums can’t carry behavior. And yet—because of these limitations, something more refined can emerge: generics, when used carefully, offer a kind of quiet power. They create systems that are narrow by design, but wide in potential. The CRTP pattern, once discovered, didn’t feel like a compromise—weirdly, it just made the whole structure exhale. Like, yeah, that’s it.
This first phase of work confirmed something I keep rediscovering: no matter what C# throws at you, you almost always end up grateful for the safety. It’s a language that pushes you to be deliberate. You don’t get full freedom—and that’s what keeps the design focused, clean, purposeful.
I’m reminded again that coding is meant to be a challenge—not in the sense of fighting syntax, but in staying true to the scope and meaning of your system. In the end, it’s the boundaries that define the structure.
The Rest Will Follow...
Vellum will grow. Numeric terms will give way to polynomials, fractions, exponents, radicals. New structures will emerge from old ones without needing to be rewritten. The groundwork is already there. But for now, this was enough. A hundred lines, maybe a little more. A carefully drawn system. A scaffold for meaning. And the quiet satisfaction of knowing that—if you’ve done it right—it will all just work. That’s the feeling I chase every time I open a new file.