Writing a Code Editor

Kladdehelvete
But basically each instruction is a box that can be zoomed in and out. You can zoom out to see the entire structure of the code, and you can zoom in to view an individual module, class, function, struct, data object or instruction.


Hm, reminds me of one commercial hardware emulator I saw at PalmSource years ago. They'd simulate actual hardware and you could click on chips to dive down to gate level. I don't remember the name though.. they ran a prototype PalmOS on it (took forever to boot =)

Anyway, when it comes to UIs, you may also give my logic sim a spin, even though it's not exactly what you're after: atanua.org
sol_hsa
Kladdehelvete
But basically each instruction is a box that can be zoomed in and out. You can zoom out to see the entire structure of the code, and you can zoom in to view an individual module, class, function, struct, data object or instruction.


Hm, reminds me of one commercial hardware emulator I saw at PalmSource years ago. They'd simulate actual hardware and you could click on chips to dive down to gate level. I don't remember the name though.. they ran a prototype PalmOS on it (took forever to boot =)

Anyway, when it comes to UIs, you may also give my logic sim a spin, even though it's not exactly what you're after: atanua.org


I'll take a look.
sol_hsa

Anyway, when it comes to UIs, you may also give my logic sim a spin, even though it's not exactly what you're after: atanua.org


I am on Ubuntu (and I am new to Linux). I select "run" but nothing happens.
Kladdehelvete
sol_hsa

Anyway, when it comes to UIs, you may also give my logic sim a spin, even though it's not exactly what you're after: atanua.org


I am on Ubuntu (and I am new to Linux). I select "run" but nothing happens.


Oh well, *shrug*. You can look at the screenshots and the Flash animation at least, assuming Flash works for you. Or if you want to compile it yourself, grab it from GitHub.

Edited by Jari Komppa on
I'm working on a visual programming tool myself, not for generating stand-alone executables, but rather as the "glue" in a control system that has to coordinate multiple heterogeneous devices via a network or other interfaces. Workflow-wise, it's really similar to the blueprints in UE4.

In my experience, visual programming lets users get off the ground quickly, but tends to limit their flexibility. That's similar to what Casey said about using libraries or whole tools like Unreal Engine / Unity: quick startup, but eventually, you'll run into a problem that the tool wasn't designed for and you start fighting the limitations instead of making progress.
Kladdehelvete
I even think, to myself, in private, that I must have gotten some inspiration from them somehow, even though I was disconnected for four months at the time I wrote my prototype. I am unable to express what that feels like, but I guess some will get it, and to others it will sound crazy.


I think I can relate.. it's like when you come up with a melody and then later hear it in a song that you think you've never heard before.. it's like, "did I really come up with that music or did I just hear this song somewhere and don't remember it?"


Kladdehelvete
But basically each instruction is a box that can be zoomed in and out. You can zoom out to see the entire structure of the code, and you can zoom in to view an individual module, class, function, struct, data object or instruction.


This is super cool, it's similar to what I was imagining.. code that sort of exists in space rather than a text buffer.

-

But still, we have that interface / efficient input problem. I'll let it stew on the backburner of the mind for a bit. If I get any worthwhile ideas I'll drop them here.
midnight_mero

I think I can relate.. it's like when you come up with a melody and then later hear it in a song that you think you've never heard before.. it's like, "did I really come up with that music or did I just hear this song somewhere and don't remember it?"


Yes, I guess, something like that :)

midnight_mero

But still, we have that interface / efficient input problem. I'll let it stew on the backburner of the mind for a bit. If I get any worthwhile ideas I'll drop them here.


Thanks for the feedback.
Pseudonym73

Obligatory Bret Victor:

https://www.youtube.com/watch?v=8pTEmbeENF4


TL;DR: "horse manure"


I almost forgot about this. I was very excited to watch it, but came away a little disappointed.

I really liked the way he said "we don't know what we are doing.." and so on, because that is particularly true. Even after you have had a brilliant idea, you won't know it until it is tested. So you might say that even when you know, you don't know. In addition, the lies that ride "our" industry are so hard to break that even universities fail to expose them. In fact, no one is more of a victim to them than the universities themselves, which is kind of sad.

We are aware of only a few percent of the code we produce. The rest is a black hole of zeroes and ones, ripe for anyone with a little curiosity to find a place to stick a hook in and break it. Which they do, constantly. This is physics, in fact. Yet you can't sell that.

I think the talk failed in some very important areas. Moore's "law", for instance, was mentioned, which by now is dead, and has been so for at least 10 years. It is now more of a myth than a part of reality, and this is the reason we see parallel computing trying to make up for it.

(I realize Moore's law isn't exactly about performance, but I also tend to think that to the extent that it is not, it's also largely irrelevant, at least by now.)

But parallel computing does not scale in the way he claims. It fails to scale for the same reason that adding more coders at the end of a project doesn't scale. For certain tasks, yes. But for an entire program? No. For the future? No. A little bit? Yes. A lot? No.
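The usual way to put that intuition into numbers is Amdahl's law, which the talk doesn't go into: the serial part of a program caps the total speedup no matter how many cores you add. A minimal sketch, with a made-up parallel fraction:

```c
#include <stdio.h>

/* Amdahl's law: speedup on n cores is 1 / ((1 - p) + p / n),
   where p is the fraction of the program that can actually run in parallel. */
static double speedup(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / (double)n);
}

int main(void)
{
    double p = 0.90;  /* assume 90% of the work parallelizes (made-up number) */
    int cores[] = { 2, 4, 8, 64, 1024 };
    for (int i = 0; i < (int)(sizeof cores / sizeof cores[0]); i++)
        printf("%5d cores -> %5.2fx speedup\n", cores[i], speedup(p, cores[i]));
    /* Even with 1024 cores the speedup stays below 10x,
       because the serial 10% never shrinks. */
    return 0;
}
```

Run it and the curve flattens almost immediately; that is the whole "for certain tasks, yes, for an entire program, no" problem in one formula.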

The cores we run now are designed to be mostly idle, too. And if they aren't kept idle, they will overheat and simply not work well. In other words, they are overclocked chips that must be cooled constantly, or they will burn. The only thing new about them is that internally they approach the physical limits of their size. But the bang you get back isn't all that great.

Now, just think back 15 years. What CPU did you run then? Was it a 500 MHz machine? 15 doublings (today they say it's 11 months per doubling) would give you 16,384,000 MHz (16 THz). And if we say only 8 doublings, the number would be 128,000 MHz, or 128 GHz. (Well, not quite, but you get my drift.)

Today, each core should run at 128 GHz if Moore's law included performance. In the next 20 years it will not get any better, unless there's a significant breakthrough, for instance in quantum computers. But it could come from somewhere else, or as a byproduct of trying to make QC work, or something else. Who knows.
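That arithmetic is nothing more than repeated doubling from an assumed 500 MHz starting point; a throwaway check of the numbers above:

```c
#include <stdio.h>

int main(void)
{
    double mhz = 500.0;  /* assumed typical desktop clock from ~15 years earlier */
    for (int d = 1; d <= 15; d++) {
        mhz *= 2.0;
        if (d == 8 || d == 15)
            printf("%2d doublings: %10.0f MHz (%.1f GHz)\n", d, mhz, mhz / 1000.0);
    }
    /* 8 doublings  ->   128000 MHz (128 GHz)
       15 doublings -> 16384000 MHz (16384 GHz, roughly 16.4 THz) */
    return 0;
}
```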

The only thing I know is that Michio Kaku laughs at Google spending hundreds of millions on QC research, or more specifically, at the current "results". And he is not the only one. Just the fact that he does that, and that other scholars do as well, should tell you something. But I still believe more in people who are actually doing experiments than in academia, so here my money is on Google, in fact. But not a lot of money.

But it is for SURE not something you can count on, the same as you can't count on oil lasting forever.

But if there is a breakthrough, it will not come from parallel computing by itself. Parallel computing will maybe someday become part of the future, but it will not happen on silicon, using these kinds of chips. I mean, it already happened, but you need to understand that what we got is a joke compared to what we need. We need parallel computing (always will), but on MUCH faster cores.

What you really have in your PC today is an overclocked CPU where the "technology" is the same as it was 15 years ago, only we have become better at the manufacturing process. Better at tweaking. That will give you 4.4 GHz at most... for a couple of hours a day, if you're lucky. So don't talk to me about "Moore's law", because it's been dead since about the time of Jurassic Park. Yes, I know about pipelines, caches and so on. They just don't make up for it, not by a long shot! They too are just a way of tweaking the technology into performing a little more. For the cache it's a LOT more, and if you disable it, those modern CPUs become as slow as they were 10-15 years ago. Not that the caches and the pipeline aren't cute and smart. They certainly are. But compared to a *working* Moore's law, also in the realm of performance, it's peanuts.
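If you want to see for yourself how much of today's "speed" is really the cache, a toy benchmark like the one below is enough: walk the same buffer once sequentially and once with a large stride, so nearly every load misses. The buffer size and stride here are arbitrary and the exact timings depend on your machine, but the gap is typically large for the exact same number of additions.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Toy benchmark: same amount of work, different memory access pattern. */
#define N (1 << 26)  /* 64M ints, about 256 MB, far bigger than any cache */

static long long walk(const int *a, size_t stride)
{
    long long sum = 0;
    clock_t t0 = clock();
    for (size_t start = 0; start < stride; start++)
        for (size_t i = start; i < N; i += stride)
            sum += a[i];
    printf("stride %5zu: %.2f s\n", stride,
           (double)(clock() - t0) / CLOCKS_PER_SEC);
    return sum;
}

int main(void)
{
    int *a = malloc((size_t)N * sizeof *a);
    if (!a) return 1;
    for (size_t i = 0; i < N; i++) a[i] = 1;
    long long s1 = walk(a, 1);     /* sequential: cache lines and prefetch help */
    long long s2 = walk(a, 4096);  /* big stride: nearly every load misses */
    printf("sums equal: %d\n", s1 == s2);
    free(a);
    return 0;
}
```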

Most of the last decade's progress seems to have happened in GPUs, but these are very costly beasts, and as I said somewhere else, they cost about the same today as 4 years ago. In the past 4 years they got at most 4 times more capable, but you now have to pay 4 times as much for it. They should be 16 times more capable and cost about the same. And even if my estimates are somewhat wrong, they are not significantly wrong. And I am speaking of the range of GPUs that a normal person can expect to buy. If you are ready to fork out 50K, or 200 million, then be my guest. But this pricing says more about the tech than any paper could.

Another thing I reacted to is his point regarding manipulating memory directly. This is a particularly vague point, when we know that the CPU is mostly idle when doing this, and that the bottleneck is nowhere near the CPU, but rather the memory chips and data lines themselves.

Another reason this point is very weak is that we, of course, are already doing this. We are writing the data directly. And it is dog slow.

And when it comes to his talk about aliens and computers negotiating a protocol, well, that is what the web is doing, and we all know how amazingly, mind-blowingly fast that is...

I want to add a few things that may strike you as odd, but that I think are extremely important to grasp. It is as important as understanding the process of evolution, and how everything that happens is dependent on that process, and is a subset of it, even our "own" research.

Intelligence is, more often than not, what we call a long string of trivial (even dumb) steps and data. And as I said above, even when you have managed to produce something intelligent, it often needs to be rigorously tested before you can be sure that it deserves to be called that. Which basically means that intelligence is an oxymoron, which in turn also means AGI is an oxymoron.

Now. What intelligence always is, is very specific facts. Not only in space, but also in time. Always details. Details, details, details. Details that need to correlate with other details. Overwhelmingly many for the poor brain nature has given us. We haven't even got the intelligence to calculate the curve of a ball in real time, but have to go to school to learn a simplification of it, created by a famous monkey a long time ago. While breaking a sweat. Well, some of us are.

Yet our subconscious makes a pretty decent guess after a little training, does it not? To your subconscious it's like nothing. We have a genius inside. But we are in charge of giving it the right training, and access to details. Something to chew on over the next 20-30 years or so.

And if we do that, once it has worked a little, it can pour designs out on "paper" faster than you can think. Like a boss! No, not a boss, gawd no. I mean something else.

And you, the so-called "conscious" idiot, may then need months to years to understand your own designs completely. That's the difference, and the potential you are missing out on, if you don't invest time working with details.

So it is no mystery to me why all of my browsers were brought to their knees in front of me when I loaded a significantly large text file (40 megabytes) into them some months back, while my own text editor loads and indexes it faster than you can say boo. That is because I have specifically written it to do that quickly: I found the reason it was slow and then, of course, solved for it. And in the process I learned a new technique for speeding up similar problems later. Even though I am an OOP programmer, I would never create an object for every character of a document, would I? Unless I didn't know the first thing about what I was trying to do.
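The rough shape of that idea, one flat buffer plus an array of line-start offsets instead of an object per character, looks something like this (a simplified sketch; the names and details are hypothetical, not the actual editor's code):

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch: keep the whole file in one contiguous buffer and build an
   index of line-start offsets in a single pass. No per-character or
   per-line objects; a 40 MB file is one allocation plus an offset table. */
typedef struct {
    char   *text;       /* entire file, contiguous */
    size_t  size;       /* bytes in text */
    size_t *lines;      /* byte offset of the first character of each line */
    size_t  line_count;
} TextBuffer;

static int text_buffer_load(TextBuffer *tb, const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) return 0;
    fseek(f, 0, SEEK_END);
    tb->size = (size_t)ftell(f);
    fseek(f, 0, SEEK_SET);

    tb->text = malloc(tb->size);
    /* worst case: every byte is a newline; a real editor would grow this */
    tb->lines = malloc((tb->size + 1) * sizeof *tb->lines);
    if (!tb->text || !tb->lines || fread(tb->text, 1, tb->size, f) != tb->size) {
        fclose(f);
        return 0;
    }
    fclose(f);

    tb->line_count = 0;
    tb->lines[tb->line_count++] = 0;
    for (size_t i = 0; i < tb->size; i++)
        if (tb->text[i] == '\n' && i + 1 < tb->size)
            tb->lines[tb->line_count++] = i + 1;
    return 1;
}

int main(int argc, char **argv)
{
    TextBuffer tb;
    if (argc < 2 || !text_buffer_load(&tb, argv[1])) return 1;
    printf("%zu bytes, %zu lines\n", tb.size, tb.line_count);
    /* Line n starts at tb.text + tb.lines[n]; jumping to any line is O(1). */
    return 0;
}
```

A single linear pass over 40 MB is cheap, and once the offsets exist, jumping to any line is one array lookup.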

If you never program at the low level, you may never discover things like that. And by "low level, ever", I mean spending considerable time with it, so much so that it becomes second nature and your preferred "language". And believe me, I have nothing against high level. I always program my apps towards a high level, and I much prefer to say "LoadBitmap" or even "LoadPicture" than to have to do the low-level work, if it can be avoided. Because I want things done.

But if they are slow, then they need to be rewritten, and for that, assembly is the best way, because it shows you the way to do it. And it is much, much simpler than doing it in C. MUCH simpler. Like 100 times simpler. You cannot believe this unless you have done it exactly that much. And this knowledge is practically ignored today, which is very sad. The lie is that it takes too much time, or is more error prone. The reverse becomes true when you actually work at that level often, every day for 10 years. There are a few counter-arguments that are valid, but they are very, very few. And those "problems" are a problem with the tool used to write in, not a problem with assembly itself.

Today's biggest problem in computing is that we require development to go fast, and then we are unable to produce more than one or two applications per decade that actually don't have tons of bugs and irritating peculiarities. We are producing massive amounts of completely ridiculous, slow and useless software to "solve" problems that a ton of other, similar software was already written to solve, which is ALSO full of bugs, 100% worthless and 100% useless.

We should slow down and take the time to make software that runs fast and is robust, instead of making development go fast at producing a lot of irrelevant shit. I see that time coming now, as hardware evolution is slowing down.

Most programs today are just fads. They exist for a brief moment in time, and everybody just accepts that they are shit waiting to happen, because no one wants to learn how to do it right. Don't even try to tell me otherwise. Until software comes with a written guarantee for how it's supposed to work, it will continue to be largely worthless.

What we need is NOT a new Microsoft certification for how to get a Windows-ready bumper sticker on your plastic. We need the industry to require guarantees for how software is supposed to work, and for vendors to pay for the damages they cause, before they are allowed to take money for it.

Only by having such requirements will we ever be able to produce scientifically sound software. And that would help all of us. Software would become worth something again. It would also force software writers to write functional software, and not just another "me too, see how cool I am in your taskbar, even if you now hate me for it" kind of software. And the OS must of course be the very first software to come with these guarantees. This will make software as real and worthy as it should have been in the first place.

Such requirements would focus development on solving the problems it is supposed to solve, and not 1000 problems no one ever had. Yes, Visual Studio, I am looking at you. I am amazed and surprised you managed to restrain yourself from putting a talking animated wizard in there, just to completely fuck up my day.

By imposing a requirement to put guarantees on software you want to be paid for, you could also let go a little of the protection the OS imposes on software, which severely limits performance and, in particular, development time.

But like this man says in that video, it takes some time to recalibrate for another kind of thinking. So in that sense, he has an important point.

In addition, I could be wrong, but I am not sure that we need to teach our kids how to program. I think that real talent will transcend whatever needs to be transcended in order to learn anyway, like what we see Casey and others do. In fact, "the less we teach them, the better" is one of the thoughts I currently hold. What we should do instead is stand back and give them the chance to learn on their own, and the TIME to learn. Teach them, if needed, to learn by themselves, and not put obstructions in their way.

This has been proven, too, by real experiments in the jungles of India where children have taught themselves to university level, in quantum physics IIRC, with nothing more than a computer and the internet, at the age of about 10 or 12.

And we see countless examples of that: kids that in 5 seconds completely break the false "security" some wise-ass university professor spent his career researching. It's so beautiful to see stories like that. People think they are intelligent, and then they can't see things like that coming from a mile away.

Pseudonym73

There are a lot of radical approaches, but most of them are not as new as you might think.


This seems true to me. But perhaps fewer people work on those things, or they are less visible.

I wish we would teach children to doubt more. To question more. Question everything! I wish we would teach children not to be afraid to make mistakes; that the more mistakes they make, the better. Some mistakes are of course lethal and should be avoided, for the longest time ;-) But when it comes to learning things like science, and the unknown, it's the only way to go.

The fact is that we are very clever at hiding this: that many of the things we discover are arrived at by stupid hacks and pure luck, in unawareness, and that we then spend the aftermath trying to understand them. That's how physics is done too. We think of those people as geniuses, but most of them are stupid as fuck, just like everybody else. They see some experiment and try to explain it. It is not, and has NEVER been, the other way around, where you first learn how to do it and then you do it.

We tell each other a lie. We hide our mistakes. We pretend we are so advanced now. It is a very comfortable lie.

For well-established facts, learning them, or at least having access to them, is good, of course. But for discovering new science, especially in computers, the fastest way to go is by making mistakes. It is good that your program crashes if it means you made an error. The sooner the better.

That's also, I bet, the reason why quantum theory holds yet no one understands it: because even the experts are dumb as sauerkraut. And I don't really mean that as a joke. It is literally true. And a wise man knows it about himself, too.

For a computer science student to be afraid of pointers and program crashes is pathetic. It's like a chemistry student being afraid of H2O! It's fucking ridiculous!

But instead of embracing knowledge, we accept: SLOW software; 10,000 known and 1,000,000 unknown security holes; constant updates; insane restrictions; years wasted reading retarded information about how to do the simplest of things. And so on.

I read somewhere that universities now teach JavaScript? I taught myself JavaScript in less than a week. That's the only good thing about it: that it is simple. But it's not worth coding in. You will never become a good coder from JS alone. In fact, you will be a disaster if you don't go deeper!

There has never been a time in history when programming in JavaScript could possibly be a worse mistake, unless you get really lucky. There are exceptions to every rule. But being a parrot, in computing, is not the place you want to be. JavaScript is parroting everything done in C for the past 30 years.

No performant library that JavaScript depends on in order to run is coded in JavaScript, because if it were, JavaScript would be even worse than it is.

JavaScript is elegant, though, even from the perspective of an assembly programmer. I could, for instance, vividly imagine implementing it in assembly. Since it's simple and well defined, and thus has a certain beauty, it would be interesting to watch the result. But if I did, I would break the rules and take it down to the assembly level while still keeping it just as clean. I could make a JS implementation that was just as capable as assembly, while being just as elegant, because asm *is* that elegant. JavaScript is, in fact, the elegance of assembly in a nutshell, except they sawed off both hands and legs and kicked it in the groin.

This is actually one of the things I try to achieve in my implementation of a near-instant compiler: the simplicity of JavaScript, but extended to registers and direct memory accesses, and of course without any performance penalties. If you can access an array of bytes (a string) in JavaScript, then why not an array of registers, or an array of anything else? JavaScript is trash. But it's elegant trash.

Other than that, it is a toy.

My phone, which cost like 1K dollars, is amusing to observe as it tries to take a picture before the flash goes out, and fails every time! It is the same with the autozoom. It zooms beautifully, and then you click to take the picture and it misses the zoom, LOL. It happens every single time. Maybe I am using it wrong, haha, but I know it's not just my phone, because I traded it in for the same model and it is exactly the same. It's very fun to watch. I hope that isn't the rule with these things, but I'd rather not pay another $1K to find out.

Believe me or not, but the PC era is far from over. The PC is here to stay for a *considerable* time. A few days ago I needed a flashlight for reassembling my mainboard after overhauling it, and I used my phone as a flashlight.

The battery was full when I started, but it died within about an hour. For 100 other reasons, it's a very, very nice-looking but utterly pathetic "computer".

So when you read that Google is going to replace all of the world's workers, what you should read is "horse manure". In 5 years you're going to need to hire a person just for changing your batteries.
Hi Martin,

I was also thinking about a "handmade editor", for learning portable GUI programming and having a substantial project to try "compression programming" and whatnot.

Intype was a beautiful and promising editor, and I was sad to see development stop at some point. Why aren't you using it as your base? I remember there was a C++ expert working with you, or maybe you're the C++ expert I'm thinking about.

IIRC it actually had the main thing I'd want in an editor "of my own", because that is the part I never wrote and that I love in Sublime Text: getting the graphical front-end right. I feel like Sublime excels in this regard, feeling native on all platforms. I'm not sure you could achieve that using Qt or some other toolkit.


If you have updates about your project, please let me know where to look/follow you.
Vis also looks great; I opened the source code at random and saw very clean code. But it is text-only, so it's another kind of editor. In Go there is also a cool Emacs clone: https://github.com/nsf/godit

Cheers,
hugo

Edited by hugo on
Hello Hugo!

Sorry for my late reply. A lot happened with Intype. I'm using it as a reference implementation. I had a C++ expert on the project for 2 years; I learned a lot from him and then worked on it alone for 3 or 4 years. There's a lot of code I just don't want to deal with yet, so I'm pulling out the parts that make sense. I'm using Qt as a base foundation, as it's generic enough.

Since Intype I've made several prototypes for a new editor, and currently I'm working on the best of them. I also stream, though it's not very educational and I cannot compare with the code quality and strong opinions Casey has. ;) The editor will be 100% open source, though it's not perfect. I'm mostly sketching and prototyping roughly; I don't want to go too deep before I actually find out how each idea I have fits.

Please contact me on Twitter @martin_cohen; I'll be happy to chat more on this topic. If you are the Hugo Schmitt that contacted me a few days ago, just ping me again. :)

Cheers!