Why are templates bad?

So I'm absolutely not disagreeing, I simply want to understand what is so wrong with them.

Casey mentions a few reasons templates are suboptimal:
- bad debugging
- slow compiles

Whenever I use templates to, for example, make a set of vectors of different types/sizes, I can usually step through the code no problem. Can someone give an example of how templates hinder debugging?

I guess slow compiles make sense, but wouldn't that apply to all meta-programming then? Is there a reason Casey's method of meta-programming is significantly faster than templates?

I would also appreciate it if someone could explain any other flaws with templates that I missed. I use them pretty extensively in my own projects, so I want to understand their flaws so I can improve!

Edited by drexler on Reason: Initial post
Here is why debugging compilation issues is a problem with templates: https://codegolf.stackexchange.com/a/10470
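
A tiny illustration of the same effect (my own sketch, not the code from that answer): a template that recurses with no base case makes the compiler dump the whole instantiation chain before it ever gets to the real problem.

template <int N>
struct Blow
{
    // there is deliberately no specialization to stop the recursion, so the
    // compiler instantiates Blow<99>, Blow<98>, ... until it hits its
    // instantiation depth limit
    using type = typename Blow<N - 1>::type;
};

using Boom = Blow<100>::type;  // one line of code, a wall of "required from here" diagnostics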

I guess slow compiles make sense, but wouldn't that apply to all meta-programming then?


No, metaprogramming can be and is fast, except when implemented badly (C++ templates).
It is fast because Casey's metaprogramming parses code and generates text, that's it. It does not try to evaluate a Turing-complete language with metaprogramming constructs (basically a super slow VM). All the logic is regular C code which is compiled - thus fast.
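
To make that concrete, here is the whole idea in miniature (a made-up sketch, not Casey's actual simple_preprocessor, which parses his source files instead of taking hardcoded strings):

#include <stdio.h>

// ordinary compiled C code whose only job is to print more C code
static void GenerateStack(const char *Type, const char *Name)
{
    printf("typedef struct { %s Items[256]; int Count; } %s;\n", Type, Name);
    printf("static void %s_Push(%s *S, %s V) { S->Items[S->Count++] = V; }\n",
           Name, Name, Type);
}

int main(void)
{
    // run this at build time and redirect stdout into a generated header
    GenerateStack("int", "IntStack");
    GenerateStack("float", "FloatStack");
    return 0;
}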

Edited by Mārtiņš Možeiko on
Mmmm, so I'm still a bit confused on the slow compilation point. Say I have a Stack that I decided to templatize, so I could create Stacks of various different types. Wouldn't a meta-program and templates effectively have to take the same steps? Don't they both just need to parse the program and output some code?

Stack<int> MyStack; 


I don't quite understand why this would be slower than Casey's methodology. Would it not just use the template parameter (int) to output a stack of the proper type, just like a C metaprogram would?

I mean, at the end of the day isn't the C++ compiler just parsing the template and executing some CPU instructions based on what was parsed, just like we would be doing in our own metaprograms anyway?

Is it just a matter of our code only needing to read text and execute its own compiled logic, whereas the compiler needs to evaluate each template expression, like an interpreted language does?

Edited by drexler on
drexler
Mmmm, so I'm still a bit confused on the slow compilation point. Say I have a Stack that I decided to templatize, so I could create Stacks of various different types. Wouldn't a meta-program and templates effectively have to take the same steps? Don't they both just need to parse the program and output some code?

Stack<int> MyStack; 


I don't quite understand why this would be slower than Casey's methodology. Would it not just use the template parameter (int) to output a stack of the proper type, just like a C metaprogram would?

I mean, at the end of the day isn't the C++ compiler just parsing the template and executing some CPU instructions based on what was parsed, just like we would be doing in our own metaprograms anyway?


Because over time the template libraries got complicated.

For example, std::stack<T> is a wrapper around a std::deque<T, std::allocator<T>>.
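
So one innocent-looking line already walks through several layers of instantiation (roughly - the exact layers vary between standard library implementations):

#include <stack>

std::stack<int> S;
// instantiates std::stack<int, std::deque<int, std::allocator<int>>>,
// which instantiates std::deque<int, std::allocator<int>>,
// which pulls in std::allocator<int>, the deque's iterator types, and so on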

Also, every member function call means checking whether that function has already been generated, and generating it if not, instead of simply emitting the call.

Also, in every translation unit the template needs to be reparsed and regenerated (including error checking). I don't believe there is any compiler that caches that across compilation units. There are projects with tens of thousands of translation units, with most using the same templated types. With custom meta-programming it's done once.

As another factor, each templated function is implicitly marked inline (which means the linker must tolerate multiple identical definitions instead of enforcing the One Definition Rule for that function). So there is an additional burden on the linker to eliminate all that duplicated bloat.
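
For example (my own sketch):

// shared.h - every translation unit that includes this header and calls
// Square<int> emits its own full copy of Square<int>; the linker then has
// to fold all those identical copies back into one
template <typename T>
T Square(T x)
{
    return x * x;
}
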
mmozeiko
Here is why debugging compilation issues is a problem with templates: https://codegolf.stackexchange.com/a/10470


Some of that is attributable to the standard library. For example, recreating that error with my own array list and find function only gives the following:
src/main.cpp: In instantiation of 'ITER find(ITER, ITER, TYPE&) [with TYPE = int; ITER = List<int>*]':
src/main.cpp:211:54:   required from here
src/main.cpp:203:35: error: no match for 'operator!=' (operand types are 'List<int>' and 'int')
     while (begin != end && *begin != item)
                            ~~~~~~~^~~~~~~
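
For reference, the code behind that error looked roughly like this (reconstructed from the message above; the List layout is assumed):

template <typename T>
struct List
{
    T *Items;
    int Count;
};

template <typename TYPE, typename ITER>
ITER find(ITER begin, ITER end, TYPE &item)
{
    while (begin != end && *begin != item)
        ++begin;
    return begin;
}

void Test()
{
    // passing pointers to List<int> values deduces ITER = List<int>*, so
    // *begin is a List<int>, and List<int> != int is what the compiler rejects
    List<int> Lists[16];
    int Needle = 42;
    find(Lists, Lists + 16, Needle);  // triggers the "no match for 'operator!='" error
}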


Some of the problems with templates are inherent - they're slower than they should be for some fairly esoteric and stupid reasons, due to how they are implemented - but a lot of the problems are with how they're used. It's tempting to go overboard once you start using them, and end up with a hopelessly generic monstrosity like glm, or boost, or the standard template library (which keeps accreting bits and pieces from boost to become a little more monstrous with every new version).

Once you start using templates for complex metaprogramming, instead of just simple type generics, that's when things start getting truly nasty. Templates weren't originally designed with complex metaprogramming in mind, and it really shows.
All amazing points. I'm really starting to understand now, but could someone possibly explain this line:

It is fast because Casey's metaprogramming parses code and generates text, that's it. It does not try to evaluate a Turing-complete language with metaprogramming constructs (basically a super slow VM). All the logic is regular C code which is compiled - thus fast.


VM, I assume, refers to a VM like the Java Virtual Machine. So, I guess this is saying that evaluating a complex meta-program written via templates means you have to evaluate template expressions, convert them to the appropriate machine code, then execute that code. Whereas Casey's metaprograms simply read text and spit out C code; no weird language-in-a-language sort of insanity. Is this correct?
Yes, almost correct. When evaluating templates, they are not converted to machine code. It's not a real VM, it just behaves similarly to a VM. Template compilation is more like an "interpreted" language - all "execution" happens by resolving types/specializations/etc. in the compiler, via logic in the compiler itself. It searches the available types and tries to match what's needed - usually quite an overhead.
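
A concrete example of that "interpreted" execution - the classic compile-time factorial (not from this thread, just the standard illustration):

template <int N>
struct Factorial
{
    static const int Value = N * Factorial<N - 1>::Value;
};

// "execution" is the compiler recursively resolving specializations; this
// explicit specialization is the base case that stops the recursion
template <>
struct Factorial<0>
{
    static const int Value = 1;
};

static_assert(Factorial<5>::Value == 120, "evaluated entirely inside the compiler");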

Edited by Mārtiņš Možeiko on
I do wonder though. Doesn't Casey on Day 206 compile his simple_preprocessor.cpp every time he compiles the game? Doesn't this offset any compilation savings we make vs. templates? Because wouldn't compiling and running an entire program on every build have a cost associated with it too? Like, the compiler already has compiled code that deals with templates, whereas Casey compiles his from scratch every time.

Edited by drexler on
Another factor is that a lot of complex template metaprogramming is very similar to functional programming.

But to execute that fast you need a few specific optimizations to avoid falling off common performance cliffs. Lazy evaluation is in some cases impossible because of error-reporting requirements.

There is talk that modules will fix a lot of the slow-compile issues, because they fix the macro issue and allow the included module to be precompiled by default. However, that won't fix all that much, because parsing isn't the slow part of the process.

Another issue I have with templates is that they encourage duck typing. The interface of the type you pass isn't enforced in the signature of the template; instead it's buried in the body. That makes creating a template tricky, because you cannot be sure you didn't accidentally rely on a property of the type that you didn't document as required.
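
A small sketch of what I mean (made-up example):

// nothing in this signature says what T has to support...
template <typename T>
T Min(T a, T b)
{
    // ...the actual requirement (a working operator< for T) is buried here,
    // and callers only find out when instantiation fails inside the body
    return (b < a) ? b : a;
}
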
Yeah. I guess I am just so surprised that it's literally faster to compile an entire meta-program and then run it than it is to compile a program with templates. That blows my mind.
Yes, Casey is compiling the meta-programming generator and running it on every build. But this is not a requirement for meta-programming. This is just the way Casey is doing his "build system". It is completely orthogonal to his meta-programming.

If you used some kind of build system like make or a Visual Studio project, you could easily set up a rule that simple_preprocessor.exe depends on simple_preprocessor.cpp, so the build would recompile the exe only if the .cpp file changed. Same for the output of simple_preprocessor.exe - you can set a rule that handmade_generated.h depends on the preprocessor's input .h files, and then the build system won't rerun it as long as the inputs don't change.

But hey, if your code can be compiled & executed in less than a second, then why complicate things? Just compile & run it every time :) This often is not possible with templates - they require many seconds of compilation due to the complexity mentioned in previous posts. That's why it's hard to do unity builds of large "C++ OOP with templates" code on every change.
On the plus side, one major advantage that templates have over code generation programs, like the one Casey uses, is a standard implementation that is already built into the language.

Although to be fair, at the point you find yourself thinking "I wish C had this feature/capability, I should roll my own code generator for it!", it's a better idea to switch to a proper language with built-in support for code generation.

I remember Casey once mentioning in a video something to the effect that if he didn't use C he would roll his own tools for working directly with assembly. It seems like Lisp would be a perfect fit for that, and it even comes with meta-programming out of the box.

- https://www.pvk.ca/Blog/2014/03/1...ltimate-assembly-code-breadboard/
- https://ahefner.livejournal.com/20528.html
I don't understand this argument about Lisp (and similar languages). How are they better at generating assembly than any other C-like language? I know Lisp at some level, and I understand how it is better at manipulating code as data and good for creating DSLs. But I totally don't understand using it as a machine code generator.
> How are they better at generating assembly than any other C-like language?

If all you want from your language is to go fast without worrying about assembly or the underlying architecture or putting much effort into custom tools, C is more than good enough, and it's simple and old enough that multiple developers can work on the same codebase without issue.

If you want fine-grained control over the assembly you're generating, and/or access to the full capabilities of your hardware (AFAIK certain instructions on x86-64 aren't documented even in the giant Intel reference, or don't have assembly mnemonics at all), the best way to do so is with custom tooling (not counting hand-written assembly, because it's tedious, or assembly injection directives in C compilers, because they're awful and not portable between compilers). You can write your assembler in any language you want, of course, but the reason I think Lisp is an especially good choice is the powerful abstraction mechanisms available to it that you don't see in most modern programming languages.

In practical terms, you would start by writing an assembler that lets you write lispy assembly and outputs a binary. Once you have that foundation, you have the full power of Lisp's meta-programming facilities to customise it to fit your problem perfectly. As far as I remember it's not related directly to Lisp, but an interesting application of this idea can be found here (http://www.moserware.com/2008/04/...res-law-software-part-3-of-3.html)

This type of thing (assemblers written in Lisp that take advantage of Lisp meta-programming) has been done in the past, with games like Crash Bandicoot, but implementing something similar today for x86-64 might be too tedious, and it may have problems with scalability as more developers join the project and need to figure out your custom-made DSLs and abstraction mechanisms.

Edited by yumisen-yamasen on
That Lisp talk makes me wonder how many people praising Lisp have actually used it in a real way (as in a proper large-scale project that isn't a one-off toy).

I did a quick search on GitHub and found 14k projects using Lisp as the main language. C and C++ were both (independently) an order of magnitude above that. Haskell, a similarly academically praised language, scores 62k repos.

A lot of language features that look awesome at first glance just don't scale well in large projects. Often the only way to figure out whether something can scale is to use it in a large-scale project.