I have been working through the HMH videos (I'm 40 hours in now), but I did not come here with the goal of doing video game programming. I came here to learn how to do graphics programming without having to write to the HTML+CSS API or another API like Qt. At this point in the HMH series, I see the topics starting to veer off toward simulating physical/mechanical systems, which falls squarely into the domain of video games.
I would appreciate any thoughts about programming the kinds of "entities" that are used in 2D GUI applications: buttons, layout containers, images, text, etc. I think I could have much more control over a web application if I coded it in WebAssembly. And the way HMH allocates everything at the start and then hands off to a game layer seems like a good fit for how the WebAssembly spec is defined.
I do not know if this is a good idea or not. What I do know is that in my experience as a web developer, I have always worked at too high a level of abstraction. Not only am I forced to recreate every piece of graphics a designer makes in Figma or Sketch using HTML + CSS, but by the time I'm done, it's usually in a state where even a small change from the designer means I have to rewrite a lot of it. This stuff, which is invariably HTML/CSS/JSX, turns out to be brittle. In fact, I think that across the entire frontend web world, the duplicated work roughly doubles the effort: the designer builds the UI once, and then the frontend developer builds it again.
Additionally, the amount of tuning I can do to make a program faster on the web is very limited. The V8 engine blog shows how many different ways a piece of code can be optimized or deoptimized on the fly, with the developer having little control over any of it. And the way the Chrome profiler presents memory and CPU usage does not make it easy for me to think about my programs the way Casey Muratori does when he optimizes his (for example, the grass layout algorithm video).
I guess my question here is: "Am I on a fool's errand in trying to write an application using only WebAssembly and the base canvas API that lets me put pixels on the screen?"
I am scared because it implies I have to code a lot of stuff on my own that even HMH will not be going through (graphics-wise), like how to lay out text or how to render boxes of UI elements on the screen. Has anyone attempted anything similar, aside from ImGui, for implementing actual business applications? ImGui gives me a lot of heart because it shows plainly that this is definitely possible.
Eventually I would like to make something like Figma or Sketch that gives me a sort of "level editor" for building business applications. In my imagination, this would let artists update or revamp the UI without me having to do a lot of work, while letting me program in the C model so I can keep optimizing things. Ideally, I would like to program on the web the way an engine developer programs at a game studio.
So, am I talking nonsense here or can it go somewhere?