Jonathan Blow's HN comments
I am a 42-year-old, very successful programmer who has been through a lot of situations in my career so far, many of them highly demotivating. And the best advice I have for you is to get out of what you are doing. Really. Even though you state that you are not in a position to do that, you really are. It is okay. You are free. Okay, you are helping your boyfriend's startup but what is the appropriate cost for this? Would he have you do it if he knew it was crushing your soul?
I don't use the phrase "crushing your soul" lightly. When it happens slowly, as it does in these cases, it is hard to see the scale of what is happening. But this is a very serious situation and if left unchecked it may damage the potential for you to do good work for the rest of your life. Reasons:
* The commenters who are warning about burnout are right. Burnout is a very serious situation. If you burn yourself out hard, it will be difficult to be effective at any future job you go to, even if it is ostensibly a wonderful job. Treat burnout like a physical injury. I burned myself out once and it took at least 12 years to regain full productivity. Don't do it.
* More broadly, the best and most creative work comes from a root of joy and excitement. If you lose your ability to feel joy and excitement about programming-related things, you'll be unable to do the best work. Note that this issue is separate from, and parallel to, burnout! If you are burned out, you might still be able to feel the joy and excitement briefly at the start of a project/idea, but they will fade quickly as the reality of day-to-day work sets in. Alternatively, if you are not burned out but also do not have a sense of wonder, it is likely you will never get yourself started on the good work.
* The earlier in your career it is now, the more important this time is for your development. Programmers learn by doing. If you put yourself into an environment where you are constantly challenged and are working at the top threshold of your ability, then after a few years have gone by, your skills will have increased tremendously. It is like going to intensively learn kung fu for a few years, or going into Navy SEAL training or something. But this isn't just a one-time constant increase. The faster you get things done, and the more thorough and error-free they are, the more ideas you can execute on, which means you will learn faster in the future too. Over the long term, programming skill is like compound interest. More now means a LOT more later. Less now means a LOT less later.
So if you are putting yourself into a position that is not really challenging, that is a bummer day in and day out, and you get things done slowly, you aren't just having a slow time now. You are bringing down that compound interest curve for the rest of your career. It is a serious problem.
If I could go back to my early career I would mercilessly cut out all the shitty jobs I did (and there were many of them).
One more thing, about personal identity. Early on as a programmer, I was often in situations like you describe. I didn't like what I was doing, I thought the management was dumb, I just didn't think my work was very important. I would be very depressed on projects, make slow progress, at times get into a mode where I was much of the time pretending progress simply because I could not bring myself to do the work. I just didn't have the spirit to do it. (I know many people here know what I am talking about.) Over time I got depressed about this: Do I have a terrible work ethic? Am I really just a bad programmer? A bad person? But these questions were not so verbalized or intellectualized, they were just more like an ambient malaise and a disappointment in where life was going.
What I learned, later on, is that I do not at all have a bad work ethic and I am not a bad person. In fact I am quite fierce and get huge amounts of good work done, when I believe that what I am doing is important. It turns out that, for me, to capture this feeling of importance, I had to work on my own projects (and even then it took a long time to find the ideas that really moved me). But once I found this, it basically turned me into a different person. If this is how it works for you, the difference between these two modes of life is HUGE.
Okay, this has been long and rambling. I'll cut it off here. Good luck.
The main reason is that debugging is terrible on Linux. gdb is just bad to use, and all these IDEs that try to interface with gdb to "improve" it do it badly (mainly because gdb itself is not good at being interfaced with). Someone needs to nuke this site from orbit and build a new debugger from scratch, and provide a library-style API that IDEs can use to inspect executables in rich and subtle ways.
Productivity is crucial. If the lack of a reasonable debugging environment costs me even 5% of my productivity, that is too much, because games take so much work to make. At the end of a project, I just don't have 5% effort left any more. It requires everything. (But the current Linux situation is way more than a 5% productivity drain. I don't know exactly what it is, but if I were to guess, I would say it is something like 20%.)
That said, Windows / Visual Studio is, itself, not particularly great. There are lots of problems, and if someone who really understood what large-program developers really care about were to step in and develop a new system on Linux, it could be really appealing. But the problem is that this is largely about (a) user experience, and (b) getting a large number of serious technical details bang-on correct, both of which are weak spots of the open-source community.
Secondary reasons are all the flakiness and instability of the operating system generally. Every time I try to install a popular, supposedly-stable Linux distribution (e.g. an Ubuntu long-term support distro), I have basic problems with wifi, or audio, or whatever. Audio on Linux is terrible (!!!!!!), but is very important for games. I need my network to work, always. etc, etc. On Windows these things are not a problem.
OpenGL / Direct3D used to be an issue, but now this is sort of a red herring, and I think the answers in the linked thread about graphics APIs are mostly a diversion. If you are doing a modern game engine and want to launch on Windows, Mac, iOS, and next-generation consoles, you are going to be implementing both Direct3D and OpenGL, most likely. So it wouldn't be too big a deal to develop primarily on an OpenGL-based platform, if that platform were conducive to game development in other ways.
I would be very happy to switch to an open-source operating system. I really dislike what Microsoft does, especially what they are doing now with Windows 8. But today, the cost of switching to Linux is too high. I have a lot of things to do with the number of years of life I have remaining, and I can't afford to cut 20% off the number of years in my life.
This is not a turnstile, because turnstiles do not detain you or trap you; you can always move freely on one side or the other. This device detains you and then lets you move on. What are you going to do when it decides not to let you move on?
This article doesn't even come close. Elon Musk is in charge of the design of rockets that have successfully delivered payloads to the ISS. It is just basic competence of a reasoning mind to presume Elon knows some things about thermal expansion (rockets get very hot!).
So someone who is trying to have a reasonable argument would say, okay, he understands this issue, so I wonder what the answer is and why he doesn't think it is a big enough deal to go into detail on this point. Or perhaps I misunderstand something about the design (always a reasonable assumption!)
This article is about as far from that as it could be. I don't find it to be worth reading.
Translation: "We tried to serve ads in a way that broke basic functionality for many people. But we didn't make that much money, so we are going to stop being malicious actors, and we're going to start following the protocols we're supposed to follow."
This is the wrong solution, in that it makes things more complicated and results in a generally poor experience even after the change. But that is what Google does all the time in UI, so I guess I have no reason to be surprised by this.
The proper solution: Do not make double-tap be a UI action. Done.
Or, if you insist on double-tap, make it only be an action that stacks transparently with single-tap. For example, in touch controls for The Witness, single-tap makes you walk toward the target. Double-tap makes you run. So as soon as we read a single tap, we can start doing the action without delay, and if we see the second tap we just kick up your target speed. It works great.
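A minimal sketch of the idea, in Python with hypothetical names and a hypothetical timing threshold (this is not The Witness's actual code): the first tap always acts immediately, and a second tap only upgrades the action already in progress.

```python
DOUBLE_TAP_WINDOW = 0.3  # seconds; hypothetical tuning value

class TapMover:
    """Single-tap starts the walk immediately; a second tap inside the
    window upgrades the same action to a run. No input is ever delayed
    while waiting to disambiguate a single tap from a double tap."""
    def __init__(self):
        self.speed = 0.0
        self.last_tap = -1e9

    def on_tap(self, now):
        if now - self.last_tap <= DOUBLE_TAP_WINDOW:
            self.speed = 2.0   # second tap: kick target speed up to a run
        else:
            self.speed = 1.0   # first tap: start walking with no delay
        self.last_tap = now

m = TapMover()
m.on_tap(0.0)
assert m.speed == 1.0  # already walking right after the first tap
m.on_tap(0.2)
assert m.speed == 2.0  # the second tap upgraded it to a run
```

The key property is that the single-tap action never waits on a timeout, which is what causes the perceived lag in the usual double-tap implementations.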
In short, hey Google, please stop doing band-aid solutions that make things worse, and hire some people who really have solid design vision, and give them the power to get things done.
However, it becomes substantially less impressive when you notice that you're using 2x or 4x the amount of processor hardware (2 or 4 cores) and only getting a 15% speedup. In a by-hand implementation that would be very disappointing.
If it were fully automated that would still be pretty valuable, but it appears that this isn't. So it seems to be of questionable utility.
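To put a number on that intuition: parallel efficiency is just speedup divided by core count, and a 15% speedup makes that number look grim.

```python
def parallel_efficiency(speedup, cores):
    # Fraction of each core's capacity converted into useful speedup
    return speedup / cores

# A 15% overall speedup (1.15x) on 2 or 4 cores:
two_core = parallel_efficiency(1.15, 2)   # ~0.575
four_core = parallel_efficiency(1.15, 4)  # ~0.29
```

In other words, on four cores each core is delivering under 30% of what a single core does in the serial version, which is why a hand-written parallelization with those numbers would be considered a failure.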
This example turns the code into something that appears more airy but in fact is much harder to understand due to extensive use of the ?: operator.
I find that one of my own major steps toward programming maturity happened when I stopped doing goofy things like this and started writing code that was as simple as possible to logically follow, and that was as un-special-cased as possible. (By this latter I mean, if you change the code a little bit, you don't have to rewrite it; it looks basically the same. Imagine you want to do more than just assign one variable inside the clauses of the 'if'. In the Carmack version you just add more code there. In the proposed substitute, you have to rewrite the whole thing.)
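A hypothetical illustration of the general point (this is not the actual code from the linked thread): the plain version is boring, but each branch is a block you can grow, while the terse version has to be rewritten wholesale as soon as any case needs more than one assignment.

```python
# Plain version: want to do more in a case later? Just add lines there.
def classify_plain(health, armor):
    if health <= 0:
        status = "dead"
    elif armor > 0:
        status = "armored"
    else:
        status = "exposed"
    return status

# "Airy" rewrite: one variable assigned via chained conditionals.
# Anything beyond a single expression per case forces a full rewrite.
def classify_terse(health, armor):
    return "dead" if health <= 0 else ("armored" if armor > 0 else "exposed")

for h, a in [(0, 0), (5, 3), (5, 0)]:
    assert classify_plain(h, a) == classify_terse(h, a)
```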
The idea is: suppose there is a universe that has elements that are fundamentally random. It is nondeterministic. Well, let that universe run for its lifetime, and record everything that happens. Then make a deterministic universe that just plays back the recording (this is the "second time around").
From the viewpoint of someone living inside the universe, there is no way to tell whether it is the first time around or the second time around.
But the other thing to point out is that this question presumes an old idea about the passage of time, which is that things happen in a sequence A, B, C, D, ... and that if you are at C then D "has not happened yet". But if you look at relativity, this appears to be a naïve viewpoint. In relativity, the time at a faraway point in space that you would consider "simultaneous" with your own clock depends on the relative speed between you and that point. As you speed up and slow down, you can make a faraway point "go forward or backward in time" with regard to which moment there you would consider "now". The crazy thing is that for angular movements the relative speed is amplified by distance, so when you are moving around at everyday speeds the "now" on planets across the galaxy is going back and forth by thousands or millions of years. (This is hard to observe because you are viewing tiny amounts of light from very very far away that have been traveling for a very long time, and the light that you are about to see was very close to you when you did the angular movement so it will not be much affected, etc, but hey, the math says what it says, you either believe what physics tells you or you don't.)
So when you make a distant "now" go forward, then backward, then forward again, do you expect the two forwards to be the same, or not?
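For reference, the standard special-relativity formula behind this effect (relativity of simultaneity): a point at distance x along your direction of motion has its "now", relative to yours, shifted by

```latex
\Delta t = \frac{v\,x}{c^{2}}
```

so the shift grows linearly with distance, and changing your velocity v back and forth (which is what an angular movement does) swings that distant "now" back and forth with it.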
The only problem that I have with my Roadster is that if I want to take a long road trip, there has to be a big stopover in the middle, which makes trips much longer. The Model S solves that. Therefore, EVs have won. They are here now, they are real, and they work. All that needs to happen is for the cost to come down, but there is nothing preventing that.
If you have parking at home, an EV is way better than gas or hydrogen because you never need to take your car somewhere to fuel it, ever (unless on a very long road trip). You just come home, park, and plug it in. It is hard to communicate how good this feels until you've had an EV for a few weeks and you're driving past all these gas stations and kind of laughing because you don't need them.
Just think about your proposition for even ten seconds.
In the absence of leaks and whistleblowing, how do you propose that this kind of power be reined in?
If he were as good in academia as his rhetoric claims (building software that "revolutionized" a field) he should have no problems. He should not even need a job, as he ought to be able to just start something. He should have no shortage of strong ideas about what he could be doing.
Instead he is aimlessly searching for a job.
So, I have no choice but to disbelieve his rhetoric. He probably isn't particularly good at anything, and just stumbled through the PhD system. Well, surprise, that isn't worth much!
To be clear: None of the APIs you listed are based on OpenGL. PSGL was an OpenGL implementation, but nobody writing a high-performance game used it because it was too slow and unreliable. The APIs used by almost all shipping games you can think of are substantially lower-level.
Mike Abrash still worked at Valve last week.
I am hoping they have made substantial improvements that are not mentioned in this blog post.
In video games specifically, where I work, it is well-known that you would rather have a game that runs slower, but with a solid frame rate, than a game that has a highly-variable frame rate that is faster on average. (This is more of a continuum situation than a discrete action like a tap, but the basic principle holds).
P.S./edit: Double-tap only means zoom because that is what they happened to implement. It is a convention easily changed. Look at how heavily Apple just revamped their entire interface, for an audience that is arguably much less savvy than the Android audience.
Using double-tap in this way was a mistake; it is an easy mistake to fix if you have enough design vision to know where you are going.
What you seem to assume is normal is some kind of career hell I would never want to be in (nor have ever been in).
The state of Windows laptops these days is really kind of embarrassing. Nobody knows how to build something of quality. Everything appears to be driven by bullet-point features with a goal of providing USPs, but nobody really cares if the features work or if the overall product is good.
This extends to everything (keyboard and trackpad design, screen, preloaded software, function key mappings, etc).
I buy 2 or 3 laptops a year; traditionally the case has been that most laptops were kind of bad but if you looked hard you could find something good. Now it has gotten to the point that the something good no longer seems to exist at all.
I dread the idea of buying a new laptop now. This can't be the high-level result that these OEMs really want.
There isn't anything close to unanimous agreement, but the dominant view is that something like single inheritance is a useful tool to have in your language. But all the high-end OO philosophy stuff is flat-out held in distaste by the majority of high-end engine programmers. (In many cases because they bought into it and tried it and it made a big mess.)
If you are someone who provides a lot of value, other people will go out of their way to meet you, and then you don't have to go to networking events. So the fact that you are doing networking implies that you are someone who does not provide a lot of value (or else that people don't know what value you provide).
Do you think Elon Musk goes to a lot of networking events? Do you think Steve Jobs went to a lot of networking events?
If you are early in your career and legitimately aren't providing a lot of value yet, because it's early, then I would offer that your time is much better spent cloistered away becoming excellent at what you do, than it is networking. Because if the arc of your career involves you being excellent at what you do, then very quickly you will find that people you meet randomly at events like this are not in your league -- that's just how things are everywhere all the time.
Yes, nobody thinks of it in such an outright simplistic way, but I do think this is the basic appeal.
Subways that I have seen that require fare cards on exit do not actually prevent you from leaving; you can jump the turnstile or go through a gate on the side that possibly alarms. This is not the greatest thing, but at least while you are inside you can move freely inside a large subway system; being encased in plexiglass until a light turns green is a whole different degree of trapped.
Edit: And I guess this is an important part of the point: degree matters. The old boiling-a-frog story is just about slowly increasing the degree of something.
Do the math for how much a razor "should" cost, and ask why you can't buy one for remotely that much at your local Walgreens. Ask why stuff like Dollar Shave Club exists, and then look at their prices and figure out what their margins are.
However, I find this article to be very Cargo Cult and am disturbed that nowhere in this entire thread has it been called out as such.
"Look! Meditation must do things because we can make these colored charts telling you about beta waves. What are beta waves? Well, it doesn't really matter, just think of them as bad, because look, meditation does things to your brain, okay??"
The benefits of meditation to mood, creativity, etc are pretty easy to verify for yourself, subjectively. It disturbs me that we feel that adding scientismic mumbo-jumbo gives it credibility somehow. What is presented in this article is not actual science.
There is actual science involving meditation and the brain, but it is in extremely early stages and is hard to draw conclusions from. Our understanding of the brain, in general, is very early! Please be suspicious of pretty colored charts showing brain activity.
In The Witness (http://the-witness.net/news) we have 20GB of data checked into svn. Try that with git.
It is nice to make your game available in many languages, but getting translations that aren't terrible is hard, and I have never seen a clear business case for it. So I think the proper attitude is "there is not an obvious business case, but we are doing it because we want to."
When I was in college (1989-1993), people did CS stuff because they thought computers were cool and could do things that were kind of amazing. Some were more visionary than others, of course. There was a vague idea that you could make a lot of money doing it, but it wasn't why one did things.
Now it seems the opposite of that. Especially here. Everyone is all startup, startup, startup. Honestly it makes me feel a bit ill, because there is very little talk of why one might do things and what is ultimately important.
I do think the support network built up by YC is great, and it is really exciting to see that young people just out of school can find support to go and do something new and interesting. But I think the actual projects coming out of this process are usually kind of bankrupt when it comes to things that I value.
I went to YC Demo Day a year ago with the intention to definitely invest, but I got so disheartened by the projects being presented that I never ended up investing anything. (Actually I wanted to write a check to Leftronic but they never returned my email.)
This thread is 100% nerdy dudes feeling offended by this event, plus other dudes attempting to counter this.
Is not this thread itself indicative of a giant problem?
As a meta-comment, I find the condescension in your comment unnecessary. Why is he "spewing" and why is it "quackery"? Did he not publish a testable, falsifiable theory? Isn't that what science is?
For example: It is an easily observable phenomenon (e.g. by anyone who meditates) that you can be conscious without remembering anything. In other words, consciousness is independent of something like memory.
Yet Max's list on page 3 has stuff like "independence", "utility", "integration" which have nothing to do with observations of what consciousness is actually like. Rather, they are more like high-level ideas of what human beings are like.
But we don't need to explain human beings (complex biological organisms that walk around and do stuff). Science has got that covered already, at least kind of. So if you are going to clearly think about consciousness, you need to factor out what consciousness really is and look at the properties of that.
This is supposed to be a foundational principle of science: that your hypotheses are attempts to explain things that are actually observed. The first step is to observe things carefully! You don't just go making up hypotheses.
So it's a giant red flag any time a scientist writes a paper about consciousness where they conflate it with memory in some way (which is almost every time). It's a red flag because it indicates that the scientist has not actually spent any time observing consciousness, because they aren't noticing things that are obvious to people who have done that.
(You might think that because we are all walking around conscious every day, there would be no need to observe consciousness, but this isn't true. We walk around in a space governed by Newtonian physics, but it took until Newton to figure out this thing called inertia and that a frictional force is required to make things stop, etc, because if you don't look carefully and make careful measurements, most of the everyday world doesn't appear that way at all. Same thing with consciousness.)
If one doesn't even know what uops are, and doesn't have a mental estimate of what percentage of i7 silicon is devoted to the instruction decoder, one doesn't get to write articles comparing Intel to ARM chips.
Edit: Going back to the article I note the author is Jean-Louis Gassee, so this is just bizarre. He is kind of just talking out his ass and I would hope he'd know better than that, because whereas making stuff up is a survival skill in exec-land, being blatantly and demonstrably wrong about said make-ups is not.
Does Ruby not provide a facility to use shared memory? I guess you don't get it by default in a GC'd language because the GC thinks it owns the world.
I recommend you build a game of similar scope before saying stuff like this.
Seriously, there are tons of studies on this. I am not making it up.
When an entire field needs to stand behind a euphemism in order to prevent seeming ugly, that is certainly some kind of a sign.
Believe me, all of us working in video games want program execution on mobile devices to be as fast as possible. My brand-new desktop PC is not fast enough for what I want to do, so an Android phone running a bytecode interpreter is that much further behind.
I also don't think you know what "vaporware" means. If it is actually running on physical hardware it's not vaporware; it is just not in consumer hands yet. (Since Engadget has played with it on a physical phone... it is known to be real.)
A problem I get into at the end of the second article is that gamma-correction is very important for good image scaling results. However, almost nobody gamma corrects during scaling, even today.
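A toy sketch of the difference, using a plain gamma-2.2 power curve as a stand-in for the exact sRGB transfer function: averaging encoded pixel values directly gives a result that displays too dark, while decoding to linear light, averaging there, and re-encoding gives the perceptually correct result.

```python
def srgb_to_linear(c):
    return c ** 2.2          # decode to linear light (approximation)

def linear_to_srgb(c):
    return c ** (1.0 / 2.2)  # re-encode for display

def average(values):
    return sum(values) / len(values)

# One 2x2 block of a black-and-white checkerboard, as encoded values:
block = [0.0, 1.0, 0.0, 1.0]

# Naive: average the encoded values. An encoded 0.5 displays as far
# less than 50% light, because the encoding is nonlinear.
naive = average(block)                                                 # 0.5

# Gamma-correct: average in linear light, then re-encode. An equal mix
# of black and white light is 50% light, which encodes to about 0.73.
correct = linear_to_srgb(average([srgb_to_linear(v) for v in block]))  # ~0.73
```

The naive path is what most image-scaling code actually does, which is why downscaled images so often come out visibly too dark.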
They have been losing money for years.
This is just another example of fighting through a gross kludgey mess to do something relatively trivial that we already knew how to do, much more robustly and performantly, in other systems.
Any time you see an article titled "X in HTML5" you know you are about to ride the short bus. I recommend learning some computer science.
See also Alan Kay's comments about the Web vs the Internet, recently linked on HN...
I am not explicitly saying that you would for sure agree with him 10 years from now, but I want to at least suggest it's a possibility.
For what it's worth, I run a software company, in which I sign paychecks, and I think what is said in this posting is pretty smart.
And yes, it's a problem.
See the other comments here about VR. With VR you want to render at 90 frames per second, in other words, you get 11 milliseconds to draw the scene twice (once for each eye). That is 5.5 milliseconds to draw the scene. If you pause and miss the frame deadline, it induces nausea in the user.
But this comment drives me up the wall:
"GC doesn't seem to be a show stopper for them, you just have to be smart about allocations..."
The whole point of GC is to create a situation where you don't have to think about allocations! If you have to think about allocations, GC has failed to do its job. This is obvious, yet there are all these people walking around with some kind of GC Stockholm Syndrome.
So now you are trapped in a situation where not only do you have to think about allocations, and optimize them down, etc, etc, but you have also lost the low-level control you get in a non-GC'd language, and have given up the ability to deliver a solid experience.
Come on, this is obvious.
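For concreteness, "being smart about allocations" in a GC'd language usually ends up meaning something like a hand-rolled object pool, which is exactly the manual memory management the GC was supposed to remove. A hypothetical sketch:

```python
class Pool:
    """Minimal free-list pool: acquire reuses an existing object instead
    of allocating, so steady-state frames generate no garbage at all."""
    def __init__(self, factory, size):
        self.free = [factory() for _ in range(size)]

    def acquire(self):
        return self.free.pop()

    def release(self, obj):
        self.free.append(obj)

# e.g. pre-allocate particle records up front, recycle them every frame
pool = Pool(lambda: {"x": 0.0, "y": 0.0, "alive": False}, 1024)
p = pool.acquire()
p["alive"] = True
pool.release(p)   # back on the free list; nothing for the GC to collect
```

You end up writing this kind of code anyway, only now on top of a runtime that can still pause you whenever it likes.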
Lamborghini sells under 2500 a year.
Chalk this article up to careless punditry, i.e. what you get from an ad-driven internet news model.
Edit: just for kicks, I looked up Rolls Royce, which is kind of the canonical "luxury car". Under 4000 cars a year.
The bit about Walmart was especially badly done. If the author of this article thinks Walmart employees are poor on the same scale as the poor that the foundation targets, he really has no clue about world poverty. If the minimum standard of living throughout the world were equal to that of a typical Walmart employee, the world would be tremendously better off.
If you believe that evolution generates behavior to maximize survival value, this fact about babies seems relevant.
This even works if the states are known! This is how quantum cryptography works. You put two particles into a Bell state with each other, give one to someone else, and then decide later what message you want to send.
I recommend learning the math. It is pretty simple if you already have linear algebra.
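As a sketch of that math: the trick described above is superdense coding, and it really is just a few lines of linear algebra. Here is a plain-Python version with two-qubit states as 4-component vectors, indexed so that entry 2*q0 + q1 is the amplitude of |q0 q1>:

```python
import math

s = 1 / math.sqrt(2)
PHI_PLUS = [s, 0, 0, s]        # shared Bell state (|00> + |11>)/sqrt(2)

X = [[0, 1], [1, 0]]           # bit flip
Z = [[1, 0], [0, -1]]          # phase flip
I2 = [[1, 0], [0, 1]]

def matmul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply_to_first_qubit(op, v):
    # (op tensor I) acting on a 4-component state vector
    out = [0.0] * 4
    for a in range(2):
        for b in range(2):
            out[2 * a + b] = sum(op[a][ap] * v[2 * ap + b] for ap in range(2))
    return out

# The four Bell states; the receiver measures in this basis.
BELL_BASIS = {
    (0, 0): [s, 0, 0, s],      # |Phi+>
    (0, 1): [0, s, s, 0],      # |Psi+>
    (1, 0): [s, 0, 0, -s],     # |Phi->
    (1, 1): [0, s, -s, 0],     # |Psi->
}

def encode(b0, b1, state):
    # The sender touches only their own qubit: X^b1 first, then Z^b0,
    # steering the shared pair into one of the four Bell states.
    op = I2
    if b1: op = matmul2(X, op)
    if b0: op = matmul2(Z, op)
    return apply_to_first_qubit(op, state)

def decode(state):
    # Bell states are orthonormal, so exactly one projection has |amp| = 1
    for bits, basis in BELL_BASIS.items():
        if abs(sum(x * y for x, y in zip(basis, state))) > 0.99:
            return bits

for msg in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert decode(encode(msg[0], msg[1], PHI_PLUS)) == msg
```

Two classical bits are chosen after the entangled pair was shared, yet they come out the other end by operating on only one of the two particles.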
What I am saying is not specific to test and branch, though test and branch is great because it gives you these big code blocks into which you can insert more code and it's clear where that code lives and under what circumstances it runs. Which is something you don't get in assembly language, which is part of why the assembly language reply is a goofy straw-man argument.
Yes, my reply was a bit irritable; I would definitely prefer to have a reasonable discussion, but the assembly-language thing was the first volley in being unreasonable. Putting up a straw man like that is an attempt to win the argument, not an attempt to understand the other person's position. I detected this and decided, well, if that's the position, then it is useless trying to make further / deeper rational arguments, so I am just going to say, this comes from a lot of experience, so take it or leave it.
As fatbird replied, "This is shitty." (I can't reply to his reply yet because of the timed reply thing, so I am including it here.) Maybe it is shitty, I don't know, but it's true and sometimes you just have to say the true thing to be expedient and get on with life.
I don't have time to teach people on the internet how to program. I work my ass off for 4 years at a time to build and ship games that are critically acclaimed and played by millions of people. These are the kinds of things most programmers wish they had the opportunity to work on, and wish that they knew how to build. (Often programmers think they know how to build these things, and then they go try, and they fail. It is a lot harder than one thinks). I am not saying this to brag, because I honestly don't feel braggy about it right now. It's just fact. I am pretty good at programming (probably not as good as Carmack) and I have worked really hard for a long time to be as good as I am. Meanwhile I am also trying to be pretty good at game design, and oh yeah, running a software company.
So when I give advice like this, and someone retorts, and it seems to be coming from a place of lesser experience, it is not really worth my time to get into a serious argument. I am not going to learn anything. I have been in the place where I had that kind of opinion, many years ago, and then I learned more. Fine. I can either be polite and quiet about it, or say something a little bit blunt and rude, in the hope that the other person (and maybe any bystanders to the conversation) will seriously re-consider what was said in light of the new information that it comes from someone who is maybe not a joker. I can't spend a lot more time than that teaching everyone on the internet how to program, because it takes almost all the energy I can muster just to build software. (Though occasionally I do write up stuff about how to program, and give lectures bearing on that subject, like this one: http://the-witness.net/news/2011/06/how-to-program-independe...).
Of course this don't-get-into-the-argument strategy of mine has at least partially failed, since here I am typing out this really long reply. I don't know.
I do think that "I don't have a garage / dedicated place to park my car" is in fact the only current anti-EV argument that has any basis in reality. But it's addressable.
"I lack capital" is a very silly excuse given that the HN community are among the richest people in the world... by definition, if you have the luxury of even thinking about moving to Mountain View and starting some web site, you are among the richest people in the world. A lot of people wake up in the morning and their first task is to figure out how they are going to eat today.
The reason most developers of $60 games sign with publishers is because they do not have the money to develop games of that size themselves.
Whether it is a good idea for them to be doing this is really a different question (and it is complex to answer).
I came to Demo Day in 2010 (as an investor) but left without investing in anything, because I was so demoralized by the way it seemed everyone was trying to start lame web sites doing relatively trivial things.
If Demo Day looked like the stuff on this list, I'd be banging down the door to get in again.
You mention GPUs but did you know that GPUs already do a lot of approximate math, for example, fast reciprocal and fast reciprocal square root?
You mention how approximation must be impossible in all these applications (because REASONS) but all methods that numerically integrate some desired function are doing refined approximation anyway. If you have another source of error, that lives inside the integration step, it may be fine so long as your refinement is still able to bring the error to zero as the number of steps increases.
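As a concrete example of refinement driving error to zero: Newton's method turns a crude reciprocal-square-root guess into an accurate one, roughly doubling the number of correct digits per step (a sketch; it converges provided the initial guess is not too large).

```python
def rsqrt(x, iterations=5, guess=0.5):
    # Newton's method on f(y) = 1/y**2 - x. Each step roughly doubles
    # the number of correct digits, so the error shrinks toward zero as
    # the iteration count grows. Converges when guess < sqrt(3 / x).
    y = guess
    for _ in range(iterations):
        y = y * (1.5 - 0.5 * x * y * y)
    return y

assert abs(rsqrt(2.0) - 2.0 ** -0.5) < 1e-6
```

This is the same shape of iteration GPUs use to refine their fast hardware reciprocal and reciprocal-square-root estimates, so "there is approximation somewhere in the loop" is the normal case, not a disqualification.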
Your diagnosis of "utter computational tripe" and the accompanying vitriol seem completely inappropriate.
You are right, it doesn't mean that this is the seat of consciousness, but it doesn't mean it isn't, either. Further investigation would be required. But saying it's "off base" just because you don't like the thesis is exactly the opposite of what science is supposed to be. It is scientism, not science.
Addressing some of the other replies: I have certainly done a lot of speaking engagements, and yes, these have been very helpful for becoming more known and whatever, but I never do them for that reason; I always do a speech because I have something specific that I really want to say. Any publicity is a by-product (and sometimes publicity is highly aggravating and undesired). I certainly don't try to meet people via speaking events, parties, dinners, whatever. Sometimes I do end up meeting people, but not that often really, and again, it is a by-product.
In my experience, successful people almost always go to a party just to go to a party and relax or see what's up. They aren't going to a party for ulterior motives like maybe meeting someone who they might be able to get something out of and blah blah blah. Actually, successful people often just don't go to parties because they have other things to do and parties where you don't have a strong peer group are not going to be very interesting.
If you have a specific business objective, you are not going to solve that by randomly going to an event and having random conversations. You are going to solve it by calling someone on the phone or emailing them. If you don't have a specific business objective, you probably won't find much traction with whatever you are doing unless you get a specific business objective.
Don't tell me that your users don't have an ambient level of discomfort with the UI, because we all know that is the case (it is even the case on iOS now, much more than it was with iOS 6!)
If I were the UI lead for Android, I would be scouring the system from top to bottom, looking for ways to simplify and streamline, to increase predictability. All kinds of current UI actions would get thrown away (hint to Google Maps people: Shake is not an intentional UI action, it is what happens accidentally 5 times per minute when I am using your software. It should not be bound to anything, ever. [Apple is just as bad for binding shake to Undo, but at least theirs has a higher threshold now]. It is nice that at least now you can finally turn it off after it activates a few times, but haven't you noticed how everyone turns it off all the time, and nobody ever uses it for the intended action ["leave feedback"]? Get rid of it.)
Anyway, all I can say to this response, "This is why we have the beta, to test this stuff," is: if this attitude were actually solving all the problems that need to be solved, Android would have an amazing UI that everybody raves about. That is not the case! Why is it not the case?
When you are passing through such an area, you are free to move unless you are detained, which in theory would not happen without good reason. The idea is that you are free to exit and just cannot go back in.
When you enter one of these chambers, you are detained by default until you are released. You are not free to go in any direction. It is very different.
I own a Roadster, and after 2.5 years of daily driving, my battery capacity is down by 6%. That's not 0%, but it's also not a big deal.
I agree with this in principle. But in practice... I invite you to develop large and complicated projects (like non-small games) and see if you retain this opinion. I find that work environment matters, a lot.
The thing that's a little sad is that developing on Linux could be great, if only open source developers had a culture of working a little harder and taking their projects past the 'basically working' stage and toward hardened completion. When things are solid and just magically work without you having to figure out how to install them or what config file to tweak, it drastically reduces friction on your productivity. So there's a productivity multiplier that open source developers are not getting, thus making all their work harder; because hardly anyone works on the basic tools, everyone else's development is more painful and takes longer, but nobody realizes this because everyone is used to it.
If someone made a debugger that let you inspect/understand programs 2x as efficiently as gdb does (I think this is not even an ambitious goal), how much would that help every open source developer? How much more powerful and robust and effective would all open source software become? (At least all software written in the languages addressed by said debugger...)
I don't know, man. I have 31 years of programming experience. I am not detecting from your argument that you have anywhere near this level of experience, so I am inclined not to get into this discussion. But I will say that your code example at the end of your comment is exactly what I am talking about. It happens all the time that I want to put something in front of 'return b' (or, in fact, I just want to put a breakpoint on that line in the debugger! Not going to happen in your second example...)
Sounds a little less efficient than the electricity thing when you look at the full sequence. And this is even in the case where electricity comes from dirty sources; some states, like California, have an energy mix where electricity comes mostly from renewables. (And this is something that can be improved as time goes on).
One obvious example: the San Francisco Bay Bridge. The original bridge was built in 1933; it took them less than 3.5 years to build it.
Now, they are just trying to replace the eastern span (the bigger one, but hey, less than the full bridge), with technology from nearly a century later. They have been "working on it" for 9 years. It is currently scheduled to run another 2 years from now (if it somehow gets done on time) and is 6 times more expensive than originally projected.
For a bridge. That doesn't have any more traffic capacity than the bridge it is replacing.
Who do we have to harangue to get Rust to rename "Vector"? It is kind of embarrassing and confusing terminology, and there is no reason to propagate it. Just call an array an array and an immutable array an immutable array.
There's no good reason to perpetuate the mistakes of the C++ people.
(Note to those not understanding the objection: A vector is an element of a vector space: a set closed under addition and scalar multiplication. This is a very specific and very widely-useful meaning, and any program that does stuff with math is going to use vectors. So when you come along and put into your standard the idea that 'vector' means an arbitrary collection of elements that probably are not even scalars, you not only show that you don't know what vector means, but you confuse the programs of many many of your users, because now they have two totally different things, both of which are called Vector and both of which are used very heavily. [There is no way in hell anyone is going to call a math vector anything but vector, since that would be insanely confusing.])
In the context of the article it makes sense. But out of context it appears to be saying that all games should cost a lot less than $60, which really isn't the point.
In fact to justify a luxury price point you just have to give people something they really want. For example, right now lots of people are paying $150 to get into the beta for Elite: Dangerous, a game that I presume will be substantially cheaper in full release:
If $60 is insane then $150 is totally nutballs, yet that is what a nontrivial slice of people are paying, because this beta is giving them something they want that they can't get any other way.
(Note: In the alpha, people were paying $295!)
I mean, if "managing office firewalls" is on someone's list of things that are impressive, maybe that person does not have a clear view of the problem space.
But I should not even be giving the article so much attention as this. It's clear the author is confused. At first he talks about how rare and crazy it is to find someone who can do this, but then, contradictorily, he laments that PA will get tons of applicants who can do the job. Well, guy, which is it? They can't both be true.
All that is going on here is that someone had a negative reaction to the job posting and is trying to express and rationalize their reaction, regardless of how that rationalization really matches up to reality. Happens all the time, why am I even replying?
The simplest kind of mipmapping is a box filter, where you are just averaging 4 pixel values at once into a new pixel value. Thinking just of grayscale pixels, if you add 4 pixels that are each 1.0 (if you are thinking in 8 bits, 1.0 == 255), and divide by 4, you get 1.0 again. If you add two pixels that are 1.0, and two that are 0, you get a value of 0.5. Which would be fine if your bitmap were stored in units that are linear with the actual intensity of light; but they are not, because they are in a weird gamma! What you are going to get is something like pow(0.5, 2.2) which is way too dark.
Thus when you don't gamma-correct during mipmapping, bright things lose definition and smudge into dim things way more than actually happens in real-life when you see objects from far away.
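A minimal sketch of the box-filter point in Python (my own illustration; real pipelines use the exact sRGB transfer curve, which I approximate here with a plain 2.2 power for simplicity):

```python
def gamma_decode(v, gamma=2.2):
    """Decode a stored gamma-encoded value (0..1) to linear light intensity."""
    return v ** gamma

def gamma_encode(v, gamma=2.2):
    """Encode a linear light intensity back into gamma space for storage."""
    return v ** (1.0 / gamma)

def box_filter_naive(pixels):
    # Averages the stored (gamma-encoded) values directly: wrong.
    return sum(pixels) / len(pixels)

def box_filter_correct(pixels):
    # Decode to linear light, average there, then re-encode: right.
    linear = [gamma_decode(v) for v in pixels]
    return gamma_encode(sum(linear) / len(linear))

pixels = [1.0, 1.0, 0.0, 0.0]        # two white pixels, two black
naive = box_filter_naive(pixels)     # 0.5 stored, i.e. 0.5**2.2 ~ 0.22 of actual light: too dark
correct = box_filter_correct(pixels) # ~0.73 stored, i.e. exactly 0.5 of actual light
```

The naive result loses more than half the light energy of the bright pixels, which is exactly the smudging-into-darkness described above.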
You can't look at one decision and use it as a barometer to know whether a CEO is good or bad. That is just silly. It makes more sense to look at the body of decisions.
It seems that such heavy ad bombardment has been driving their regular users to block ads. It only makes sense.
When you do this, you want that old code to be like putty. You want to bend it into a new shape without having to break it and start over. Sometimes it really is better to break it, if bending would be too messy or cause problems later or otherwise sets off a red flag in your head. But if you have to break and re-make everything all the time, you won't be a very fast programmer. So you learn how to bend things, elegantly.
And after a while of this, you learn how to write code that is more amenable to elegant bending in the first place. When you type code, you are not just implementing a specific piece of functionality; you are implementing that functionality plus provision for unknown future times when you will need to come back and make the code different.
(To link this more thoroughly to the previous comment: it happens all the time that you write code that is not really about doing stuff, but then you later need to make that code be about doing stuff. Sometimes this is for shipping functionality reasons, sometimes it is just to temporarily insert hooks for debugging. Declaring in advance that this code shall never be about doing stuff is usually a mistake.)
By your logic, everyone who buys a BMW M3 or a Porsche Boxster ... or whatever ... is dumb because they should have bought a Toyota truck because it is only $19,000 over 5 years.
I don't really like the idea of battery swapping. It seems like a big hassle. The current Supercharger seems totally fine. I guess it remains to be seen how well it works in practice, once a lot of people have the cars, but right now things look really good.
One of the most common pieces of anti-EV rhetoric is "the infrastructure doesn't exist". This is actually FUD. The infrastructure is everywhere: we have electricity pretty much everywhere. (Consider: all these gas stations have electricity, so the penetration of electricity is a superset of the penetration of gasoline). What we don't often have is the right plug, but that is a relatively small problem.
But if you start talking about installing a widespread battery swapping paradigm, then that is a huge infrastructure problem, because you need to have stocks of all these physical things all over the place.
On the other hand, with Supercharger, you don't need that. You just need some electricity. It is much simpler.
60fps was fine pre-VR, but now you want to do 90fps times 2 eyes.
That's a lot of rendering.
All I can say is, try meditating sometime and you will see. (I don't recommend mantra meditation, but rather something more like Vipassana or "mindfulness" or any meditation that is not about distracting your mind by keeping it busy).
When you become comfortable with meditation, you become very aware of what your consciousness is doing. You gain a palpable sense for the present moment. Once you have that, it makes a lot of these kinds of questions unnecessary (or at least the questions become very different in nature). If you don't have this taste for the present, then asking/answering questions like this is like trying to explain colors to a blind person. It just doesn't work because most of the questions are about things that don't really have anything to do with consciousness.
If the person who CTO'd that worked for me, they would not be CTO any more.
Removing the delay 100% of the time, always, would be a big win.
What you are hearing now is not ignorance, it is experience. I am a tremendously better programmer than I was in those days, and the way I got better was not by getting excited about wacky ideas; it was by really noticing what really works, and what doesn't; by noticing what are the real problems that I encounter in complicated programming projects, rather than what inexperienced / pundit / academic programmers tell me the problems are.
Clearly you didn't really read my comment, though, since you are saying "If callbacks work for you in your job..." and my entire point is that callbacks are terrible.
Tesla just released a really big car with a heavy battery that goes 265 miles on a charge (under the very strict new EPA rating; at older methods of measuring range this number would be much higher). So I ask: the batteries are too heavy for what, exactly?
Also, if you are paying attention, you know that Tesla has just opened to the public a number of Supercharger stations that will charge the Model S battery 50% of the way in about 20 minutes. So your complaint of "wait hours for a charge" is already solved, today, in 2012, at least if you live in California (and Tesla plans to expand the Supercharger network rapidly).
There have been anti-EV arguments for years, but the arguments keep changing, which is how you know that EVs have won. The main argument used to be that EVs would never go far enough, that people would have too much range anxiety. That has been solved. Then the argument was that the cars are too expensive. That is in the process of being solved right now, as you see from Roadster->Model S. There are other problems but they are much smaller. "I park on the street so I don't have a place to charge my car" is not very hard to solve: it is obviously just a matter of will.
All magazine-style writing is like this to some degree, but this one is just too extreme. It is basically junk writing.
Look, for example, at what Vinod Khosla says about the Starbucks deal, all of which is totally reasonable, but he is turned into some kind of guilty-person-in-detective-fiction with phrases like "he says defensively", which strengthen the desired narrative but have no relation to verifiable facts.
I don't even know what you are talking about wrt network congestion. What are you talking about??
Video game programmers desperately need a new language to replace C++, but I think this is not it, because the amount of friction added for safety reasons feels very high.
I'll keep an eye on the language as it evolves, though.
For example, iOS7 totally broke the keyboard so that fast touch-typists just can't type on it any more. This doesn't matter on something small like a phone, but on an iPad it is super-frustrating. Apple fixed it somewhat in recent updates but it's still broken in a few basic ways, all having to do with the fact that the person who programmed the new keyboard doesn't know how typing works.
It is confusing enough that a non-power-user probably doesn't have a clear idea why typing sucks now; they just know that letters don't come out with the right capitals any more, sometimes extra spaces show up, and damn that shift key is confusing, etc. It is not so much "Android is better" as it is "iOS is not a nice experience any more".
There are similar things to do with web browsing. Web browsing is supposed to be one of the few things these devices specialize in, but on my iPad Air it is terrible. If I go to a web site with images on it and scroll down, most of the images don't load for a LONG time, leaving me with mainly a blank page. When iOS7 came out the browser crashed all the damn time. Now that is mostly fixed but it still crashes sometimes.
When you can't even scroll down a web page reliably, and yet that is one of the main use cases you are selling your product for, you can't claim it is a luxury product. You aren't delivering a luxury experience so you can't charge a premium.
Apple has gotten away with this in the past, in similar situations, though, because of newness and shininess. As Marc Andreessen pointed out recently, for the first few years you could barely even make a call on an iPhone, and when you did it was super-frustrating. But still it caught on. I think this is just because it was so new and exciting and there weren't real competitors yet. Once the bloom is off that rose, in order to be perceived as a premium brand, you have to actually deliver quality. But Apple is not delivering quality with the OS, they are just delivering some kind of skin-deep attempt at an appearance of quality.
I consider iOS7 to be a huge misstep and a giant missed opportunity, much bigger even than Siri or Maps. I am not sure if fixing iOS7 would solve all Apple's problems, but it is where I would start.
However, I don't think the thesis of this article -- lack of UI features -- would be my first step. Because I don't agree with the article that iOS 7 is simple. In reality it's a mess, it just tries to appear simple. So the first step is making it really, actually simple, and make it deliver a solid, quality experience. Then you can think about adding UI features, which I would claim people don't care about as much.
I think this would be vastly more interesting to many more people if the interface were just a line that says: "Tell me when [text field]" and I type something into the text field like "Tesla announces anything."
That is all I want to know, so why do you make me do a ton of stuff to implement a query? (Or more basically, why do I need to set this up on my own server? Isn't it way better if it is a service that you just run?)
For example, the manifesto confuses ends with means. It states a desired end, but then claims that certain means are required to get there (for example, "event-driven"). Maybe event-drivenness can come into play in a given system, maybe it shouldn't; across a broad set of domains this is orthogonal to the concept of responsiveness.
In video games, for example, we do things that are extremely responsive compared to web stuff (last week I worked on something that had to run at 200 frames per second in order to meet requirements). Interactive 3D rendering systems are most certainly not event-driven; they derive their responsiveness from cranking through everything as quickly as possible all the time.
There are lots of different domains of software out there and they all have found different local attractors with regard to what techniques work and produce the best result. Web software is just one of these domains, and frankly, it isn't doing so well in terms of quality compared to some of the other ones. So I think if one wants to write a manifesto like this, step one should be to get out of the Web bubble for a while and work hard in some other domains in order to get some breadth and find some real solutions to return with.
If you think of a universe as "everything I could see/touch/etc given infinite time", someone outside the universe who was in control of it might record it with no problem because they are not subject to the constraints of the universe. e.g. imagine the universe as being like a computer simulation that you programmed. You can probably pause it in a debugger any time you want, look at whatever state variables you want, etc.
So in this case the definition cannot be "everything that exists" because you have to define "exists" in the case where different sets of things may have different levels of "reality".
Of course, that is the opposite of what I as a software developer want.
You set the price high enough that sites will want to actively switch to your network, since the premium users are worth a little more than the free users. And you require as a condition of membership that 1 article in 50 (or whatever) is premium-only content.
Right now, peak electricity usage times are during the day. This is why in most urban areas electricity is cheaper at night: they want to encourage you to distribute your usage more evenly throughout the day.
The NEMA 14-50 (a.k.a. standard appliance outlet that most people plug their dryers into) is a totally fine plug for an EV. It will charge the car up fully overnight. This plug is going to deliver you, at maximum, 40 amps at 220 volts.
Electric dryers often use something like 25 amps at 220 volts (of course it varies by machine). This is not far from the 40 amps we are talking about. So this whole "grid can't handle it" pseudo-panic is sort of like worrying that everyone is going to run their dryer at the same time, times 1.5, at off-peak hours. It is just not a big deal. FUD.
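The arithmetic behind that comparison can be sketched in a few lines (Python; the 85 kWh battery size is my assumption, matching the largest Model S pack of that era, and is not stated above):

```python
# Back-of-envelope numbers from the comment above.
volts = 220
amps = 40                                  # NEMA 14-50 maximum draw
power_kw = volts * amps / 1000.0           # 8.8 kW delivered to the car

battery_kwh = 85                           # assumed pack size (largest Model S option)
hours_to_full = battery_kwh / power_kw     # ~9.7 hours: a full overnight charge

dryer_kw = volts * 25 / 1000.0             # ~5.5 kW: a typical electric dryer, for scale
ev_vs_dryer = power_kw / dryer_kw          # 1.6x a dryer, drawn at off-peak hours
```

So the worst case really is about a dryer and a half per car, overnight, which is why the grid-capacity panic doesn't hold up.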
Society doesn't hold together by magic. It doesn't evolve in a mutually beneficial direction by magic. People have to make it happen.
Is it okay for some people to make little games, or for some people to want to selfishly make a little money? Sure; a robust society can tolerate that. It's when these attitudes become prevalent that the problems start to happen.
iOS icons are terrible. All the time on my iPhone I just find myself staring at the screen not knowing where anything is. This never happens to me on the PC.
It is just a bad style decision and I wish they would change it.
It is interesting that the design priorities are so library-centric. I agree that library quality is important, but think about this: For every library that someone writes, N people are going to use that library in leaf programs, where if the library is useful N is much much greater than 1. So by definition the vast majority of code is not library code.
In video games we write programs that are between 200k and 2M lines, say. That is big enough that you do want to think about what API you are presenting from one part of that program to another, but stability of that (internal) API is almost never a concern, and in fact this is one of the big boons of statically-typed languages: you can change a function and you instantly get errors at all the calling sites, allowing you to be sure you updated them (except in some bad ambiguous situations).
This fluid ability to change how parts of your program talk to each other is very important in large programs. It is one of the major ways you avoid code rot. The more friction is introduced into this process, the lower the ceiling on how large and complex a program you can build.
The other thing about games is that the problems are inherently very interconnected. Yes, we kind of partition things into different modules, but these modules all want to talk to each other very heavily (partly for inherent problem space reasons, partly for performance reasons). So again, friction introduced here hurts us more than it hurts most people.
I understand that private-by-default seems like the right idea because it supports a certain paradigm of programming that is generally considered the right thing (encapsulate your implementation details, modularize as much as possible, etc). But what I have found, in my field at least, is that this is less true than many people think. Yes, it's important to keep things clean, but fluid ability to change code is very important, and overmodularization hurts this. I think that some day this will become more widely understood, the same way that today most good programmers understand that OOP is not some kind of One True Panacea the way folks in the 90s seemed to think.
Good ideas can be carried too far and I think for my field private-by-default is way too far. Having to put 'pub' on every member of every struct and on well more than half my functions is kind of bananas.
(As someone who does not buy into the paradigm of OOP, I want to write functions that operate on data. In C++ sometimes I encapsulate these functions as struct members but this is just an aid to understanding when it is convenient; most functions are declared at file scope. In order to have a functions-operating-on-data mentality in Rust, I guess I need to put a pub in front of every member of every struct, which feels very distasteful; it feels like the language is pushing me toward paradigms that many in my field have tried and subsequently rejected after decades of pain.)
Well, this certainly went a few places, but that is where I am on these issues.
If you are building a high-end video game (my field), garbage collection simply makes the user experience poorer than it would be otherwise. It is not some invisible implementation factor, but in fact affects the output in the form of stuttering frame times all over the place.
Good luck trying to do a VR game with a reasonable amount of stuff on screen with a GC running (you need 75 to 90 fps, times 2 eyes. And every time you miss the frame deadline in VR, it is not just that the game feels worse, but it disorients people and gives them headaches and nausea.)
Every time I pick up an Android phone and try to use it, I want to throw it out a window, because it is all stuttery and janky. I presume at least part of this is because of GC.
I think this is another occurrence of people improperly weighing obvious benefits versus hidden costs. The obvious benefit of GC is you don't have to manage your memory so much, maybe. (I say maybe because when you start saying that GC performance is not good, the answer is always thinking about memory management in some way in order to reduce load on the GC. "It's a really great car as long as you don't drive it very much.") The obvious benefit is there, but there is also a hidden cost in terms of performance and solid-feeling-ness, and that cost is really pretty large actually. The theory has always been "pretty soon now GCs will get better and this will go away", but this has never happened. I went to college in 1989-94 and used to design GC'd languages as a hobby, so I have witnessed a couple of decades of this.
As a productive working programmer who writes a lot of code that does complicated things, I do not find the memory management to be a large part of what takes my time. If I were to pull a not-scientifically-derived number out of a hat, I would say it takes less than 2% of my time. To get a 2% improvement in productivity, but to pay for it so heavily ("well, there's a class of results that are simply impossible for me to achieve now because the GC might go off at any time"), is just a really bad tradeoff.
I am sympathetic to the idea that some paradigms of programming (functional, etc) are harder to do without GC. Exploring those ways of programming is a good reason to like GC, but given that functional programming is not really quite here yet for most classes of large and demanding problems, well, it's just a very different world from the one I need to build software in today.
There may have been some jobs out there where I could happily and productively work, but I don't know, because I never encountered one.
Oh and what is that higher resolution going to do for latency and frame rate? Hmmmmmm.
It kind of shocks me how uncritically positive this article is. The situation to me reads differently, more like "any joker can plug a higher-res LCD into an Oculus DK1 spray painted white."
Under these kinds of conditions, if someone in an appropriate branch of government wants to nail you for any reason, they can. Especially now that widespread spying makes it much easier to identify specific transgressions.
So I am not so sure why you would take such a hard line on legality when in fact such a stance is just waiting to come back and bite you (and everyone).
... In fact, now it is the government's position that there are SECRET LAWS that you can be violating but not even know why you are violating them; they can arrest you and not tell you exactly why they arrested you, because the reason is secret. How are you supposed to engage in strictly legal behavior when you don't even know what is legal and what is illegal?
Sure, there are a lot of people out there who exaggerate on their resume, but this has nothing to do with the existence of people who actually do know things.
If you don't believe that highly-productive programmers exist, it is only because you haven't yet met one.
Agreed, they did this in a clumsy way, but hey, they are trying something.
So the concept of a photon 'picking' a destination point, or not, is mired in an assumption that isn't true (that there would even be anything to pick).
(And this is not to deny that Phil Fish tends to have a lot of drama. I am just saying that to anyone in the game industry this kind of armchair quarterbacking is obviously uninformed, and then seeing someone attacked / blamed / whatever due to the conclusions of said armchair quarterbacking is just pretty sad. Speaking as someone who has been through this himself on multiple occasions.)
Some posters here have very weird perspectives. Yes, if someone wants to extrapolate some straw man, based not on statements in the article or evidence from the real world, but built from whatever feels easy to criticize thoughtlessly, then sure, it is easy to knock that straw man down. Whatever.
For what it's worth, I liked The Stanley Parable and had a nice chat with the author of the game at PAX last year. Why would anyone assume that something like this is not the case?
You guys do know that the subjects of articles you read on the internet are other real people also on the internet, right? Why would a poster here assume that I am some kind of inert punching bag rather than, you know, someone who's been on HN for a couple of years and involved in discussions?
The first story is "Luminous" from 1995, but I don't have a link to a free copy.
The second is "Dark Integers" which you can find here: http://www.asimovs.com/_issue_0805/DarkINtegers.shtml
This is not unique; I often feel this way when reading philosophy.
So I kind of understand where Hawking / Tyson / et al are coming from. I do think it is a mistake to dismiss all of philosophy out of hand, and I agree with the sentiment in this article that physicists are following an implicit philosophy that they do not understand, so there's a contradiction there. At the same time, most philosophy is honestly pretty bad.
You can get memory safety without GC, and a number of GC'd systems do not provide memory safety.
If you think that, for concurrent systems, it is a good idea to let deallocations pile up until some future time at which a lot of work has to be done to rediscover them, and during this discovery process ALL THREADS ARE FROZEN, then you have a different definition of concurrency than I do. Or something.
If you want to know about code clarity, then understanding what your program does with memory, and expressing that clearly in code rather than trying to sweep it under the rug, is really good for clarity. Try it sometime.
Example: Many, many naysayers said the idea that Tesla could begin shipping the Model S to customers in 2012 was absurd, that they were naive and didn't understand the complexities of building a vehicle like Detroit does, etc. Well, they just did it.
Before that, everyone said nobody would buy electric vehicles because (a) you can't get them over 100 miles in range, so nobody will want them, and (b) because there is no charging infrastructure. So Tesla just built bigger/better batteries and built a charging infrastructure.
The problem with articles like this is they are just some random guy saying stuff and it doesn't matter to him ultimately if what he is saying is correct.
In light of this, the size argument being made in this article is not really meaningful in the medium-to-long term, unless these kinds of flexible screens never happen. (But they are being actively worked on, so.)
[There is the small factor that the company usually owns some of the stock, and demand for the stock usually pushes the price up slightly, so the company's stock assets do benefit slightly in a case like this; but this doesn't matter unless they sell, which is a rare activity compared to general volume of investment.]
Since the article does not differentiate between stocks, bonds, and loans, it is essentially useless. I guess they didn't care enough to differentiate, and just wanted to say bad stuff about the foundation; or, maybe they just don't understand finance much at all, and didn't think about it.
a|+-> + b|-+> is a very special-case entanglement. Yes, you'll see that one mentioned in Wikipedia (and in quantum cryptography, etc.) because it's very simple.
The general form for the states of two photons is exactly as you have listed for s_n. For some coefficient values of a, b, c, d, the photons are "entangled", for some, they are not "entangled". How do you know which is which? It is whether you can factor the polynomial into two separate expressions of the form (x|+> + y|->). If you can do this, the photons are not entangled; if you cannot do it, they are entangled.
Since most sets of (a, b, c, d) represent unfactorable expressions, you would expect almost any expression chosen at random to represent an entangled pair. Clean un-entanglement is the rare exception.
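The factorability test described above can be sketched concretely. For a two-photon state a|++> + b|+-> + c|-+> + d|-->, the polynomial factors into two single-photon expressions of the form (x|+> + y|->) exactly when the determinant of the coefficient matrix [[a, b], [c, d]] is zero; this is a standard result, shown here as a minimal sketch:

```python
# A two-photon state a|++> + b|+-> + c|-+> + d|--> factors into a product
# (x|+> + y|->)(u|+> + v|->) exactly when the 2x2 coefficient matrix
# [[a, b], [c, d]] has determinant zero, i.e. a*d - b*c == 0.

def is_entangled(a, b, c, d, eps=1e-12):
    """True if the state cannot be factored into two single-photon states."""
    return abs(a * d - b * c) > eps

# The special-case state a|+-> + b|-+> mentioned above: coefficients (0, 1, 1, 0).
print(is_entangled(0, 1, 1, 0))   # True: cannot be factored

# A product state (|+> + |->)(|+> + |->) expands to coefficients (1, 1, 1, 1).
print(is_entangled(1, 1, 1, 1))   # False: factors cleanly
```

Since ad - bc = 0 is a single constraint on four coefficients, almost all randomly chosen (a, b, c, d) fail it, which is the "entanglement is the rule, not the exception" point.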
Programming skill, at least, is like compound interest. The more you program, the better you will be in the future, which means you will learn faster in the future, etc. At a big company or in an undemanding situation, your pace of learning is pretty slow, being limited by the circumstances around you; and like any compound interest, if you get behind, it becomes pretty hard to catch up to where you would have been.
Whereas in a small-company situation or any situation where you are limited only by what you are physically and mentally capable of doing, you are learning as fast as possible. It is very good.
You could say that the Penny Arcade job is not as good as starting your own startup and working that hard, and maybe that is true; but my company shut down and left me $100k in debt (back in 2000 when $100k was real money!) whereas when you get paid, you are not taking the same risk. Maybe this also means it is psychologically difficult to work as hard. I don't know.
The angular movement thing is this: Imagine you have a frame of reference on the tip of your nose. The X axis points straight away from your face, the Y axis is to your left, Z is up. Now start turning your head to the left. To a tiny observer living on the tip of your nose, the relative speed along the Y axis of a faraway planet has suddenly become very high. The further the planet, the faster the speed (this part is just grade-school geometry).
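The grade-school geometry here is just arc length per unit time: apparent tangential speed equals angular speed times distance. A minimal sketch (the distance figure is an illustrative assumption, roughly the distance to Proxima Centauri):

```python
import math

# Apparent tangential speed of a distant object, as seen from a rotating
# frame of reference: v = omega * d (arc length traversed per unit time).

def apparent_speed(angular_speed_rad_s, distance_m):
    return angular_speed_rad_s * distance_m

# Turn your head 90 degrees in one second (omega = pi/2 rad/s) while
# "watching" a planet about 4e16 m away:
omega = math.pi / 2
print(apparent_speed(omega, 4.0e16))  # ~6.3e16 m/s, far beyond lightspeed
```

The linear dependence on distance is the whole point: pick a far enough planet and the apparent speed is arbitrarily large.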
But I think this is a little bit beside the point, because as I mentioned in my first post, all of this is predicated on us adopting the naïve view of time in our own universe, which is maybe not the best idea, given that we have plenty of evidence otherwise.
(And anyway, if you believe that "everything that exists" can be infinite, then there is no sense in saying our universe is being produced by some kind of simulation controlled in one particular other place, because of course it is, but this is probably happening in an infinite number of different ways from different places, such that there is no longer really any point in claiming that it is happening at all; the situation becomes such that you could just draw a relation between situation X and situation Y and state that one could be causal of the other from a certain point of view.)
Pro Tip: If an entire industry of experienced people finds something very hard, and you don't know anything about the topic but you don't see why it would be hard, maybe the relevant factor here is the "you don't know."
It reminds me of my mom who said on multiple occasions "All these rockets are dangerous and they explode; I don't see why the scientists don't just use the majestic forces that keep the planets in their orbits to move the rocket."
Your response is a little bit ill-formed because Fermat's principle is about the time to travel between two points. You assert that light is not "taking the shorter path", but in doing so you are changing the destination point or else leaving it undefined. Instead, pick a start point, pick an end point, and see how light travels between those two points, with respect to your cube of glass.
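Fermat's principle can be checked numerically: fix a start point in air and an end point in glass, minimize travel *time* over the crossing point on the interface, and Snell's law falls out at the minimum. A sketch, assuming a refractive index of 1.5 for the glass:

```python
import math

# Fermat's principle, numerically: light from A (in air) to B (in glass)
# takes the path of least TIME, not least distance. Minimizing travel
# time over the interface crossing point recovers Snell's law.
# (Illustrative sketch; n_glass = 1.5 is an assumed refractive index.)

n_air, n_glass = 1.0, 1.5
A = (0.0, 1.0)    # start point, 1 unit above the interface y = 0
B = (1.0, -1.0)   # end point, 1 unit below the interface

def travel_time(x):
    # time = n * distance / c; the constant c is dropped
    d1 = math.hypot(x - A[0], A[1])
    d2 = math.hypot(B[0] - x, B[1])
    return n_air * d1 + n_glass * d2

# Ternary search for the minimum-time crossing point (travel_time is convex).
lo, hi = 0.0, 1.0
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if travel_time(m1) < travel_time(m2):
        hi = m2
    else:
        lo = m1
x = (lo + hi) / 2

# At the minimum, Snell's law holds: n1 * sin(theta1) == n2 * sin(theta2).
sin1 = (x - A[0]) / math.hypot(x - A[0], A[1])
sin2 = (B[0] - x) / math.hypot(B[0] - x, B[1])
print(round(n_air * sin1, 6), round(n_glass * sin2, 6))  # equal
```

The key discipline is exactly what the comment demands: the start and end points are fixed first, and only then do you ask which path the light takes between them.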
As for the new Thinkpad described in this article... lack of physical mouse buttons = instant fail. This seems like a weird style-oriented consumer move, not a business-user kind of move. I don't understand why everyone is in such a hurry to try and copy the MacBook (badly).
At this point, availability of physical mouse buttons is very high in my list of selling points for a laptop. I doubt I am alone.
Which is to say, if everyone works 25 hours a week, rents will go down (because if they don't, you have most properties sitting empty). If instead everyone works 60 hours a week, rents go up.
Of course, economics is really complicated and rarely works out this simply. But that's the idea. It's a little silly to post here presuming that someone in Mr. Vaupel's position has somehow not thought of the fact that when you work fewer hours your nominal earnings go down. (I think this qualifies as what pg was calling a Middlebrow Dismissal.) It's a more reasonable response to say, well, of course he has thought of that, but I wonder what the answer is?
Sure, putting const in parameter declarations is easy to do. It may even buy you a little bit of speed because the compiler is a little bit clearer about pointer aliasing and whatever. But it's not going to make a difference in the equivalence classes of slow code / fast code / Really Fast Code.
Serious hardcore optimization usually involves changing the way the problem is solved to something different than the way the old code thought about it: either constraining the problem space further, or attacking it from a different direction. This usually involves rewriting everything since there are so many cross-cutting concerns. Sometimes one has to do this several times to figure out which way is really fastest. Microoptimization things, like whether you used const somewhere or not, are much smaller details that have correspondingly small effects.
For code that one isn't specifically optimizing, speed probably doesn't matter. There was an exception to this, where we hit a little bit of a bump in the late-2000s on platforms with in-order CPUs like the PlayStation 3 and Xbox 360, because they have such a high penalty on cache misses; this tended to make general-purpose code slower and result in much flatter profiles. But now we are pretty much out of that era.
In general, const is more of a protection than an optimization. This is especially true heading into the massively parallel future, where const just sort of tells you whether some code is known for sure to run safely in parallel or not... and running safely in parallel matters tremendously more to overall speed than the number of instructions in that bit of code, or whatever. (Anyway, C++ is not at all a viable language in the massively parallel future... so that is going to be interesting.)
I think naming and shaming is a good tactic. People should not be signing contracts with clauses like that in the first place, and if there is potential for future shame, maybe it'll make them think a bit more about it.
But if I were making a replacement language that runs in the browser, among the highest priorities would be to make it not work via callbacks.
So I don't understand why you are saying "the only way they can possibly make sense" is if the batteries are swappable. There is already sort of an existence proof otherwise, in California, right now.
What happens is that we invent crazy math that is not supposed to have applicability, then some years go by and it's like, oh my God, quantum mechanics is somehow exactly all about the operation of unit Hermitian matrices... how crazy is that?? etc.
If it were some kind of model that we are able to successively refine, the progress of discovery would look something like a Taylor series, and it would be no surprise that we are eventually able to model phenomena within some tolerance epsilon.
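That "successive refinement" picture is easy to sketch: with a Taylor series, each added term shrinks the error, so hitting any tolerance epsilon is routine and unsurprising. A minimal illustration using the cosine series:

```python
import math

# "Successive refinement" in the Taylor-series sense: each added term
# shrinks the modeling error, so reaching any tolerance epsilon is routine.

def cos_taylor(x, n_terms):
    """Partial sum of the Taylor series for cos(x) around 0."""
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(n_terms))

x = 1.0
for n in range(1, 8):
    err = abs(cos_taylor(x, n) - math.cos(x))
    print(n, err)
# The error drops steadily; within a handful of terms it is below 1e-6.
```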
But that is not what is happening! Rather, it's that we discover that some large and sophisticated piece of math, for which we had not thought of any particular applicability, turns out to exactly represent specific advanced physical phenomena. This happens over and over.
The surprising thing is that the models generate predictions far beyond the domains they were designed for (and far beyond the original knowledge of the people making the models), and that the predictions are so mindbogglingly accurate that there seems to be Something Else going on.
See the Unreasonable Effectiveness of Mathematics link below.
If the defining property of your activity is that you are trying to negotiate messes that other people have made in order to make things happen, where the things you are making happen are not novel in themselves, that is pretty much what working in a bureaucracy is like. So you can think of it as "working in a vast decentralized computer bureaucracy" rather than "working in tech".
Of course this is a mechanism that goes horribly wrong in some societies, but it's our job to make ours as good as we are able.
That doesn't make sense. There are people in the world who are worth listening to. The "democratic" nature of the web means that a lot of people post a lot of crap, and maybe some people are so used to reading crap that they have just forgotten what it's like when people really know what they are talking about. I don't know.
There's a huge difference between Rails Bros and smart guys who attack the hardest problems they can, whenever they can. Charles is one of the latter people and has been for, I don't know, 16 years? I don't want to live in a world where someone like that is not permitted to speak in an uncushioned way.
If you look at the equations for the volumes of spheres in n dimensions (with 2D being just one of them), tau shows a clean pattern. pi leaves you with a mess.
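The clean pattern is the two-step recurrence on unit-ball volumes: V_n = (tau / n) * V_{n-2}. Written with tau it is a single clean factor; written with pi it picks up a stray 2 everywhere. A quick sketch:

```python
import math

# Volume of the unit n-ball via the two-step recurrence
#   V_n = (tau / n) * V_{n-2},  with V_0 = 1 and V_1 = 2.
# With tau = 2*pi the recurrence is one clean factor per step.
tau = 2 * math.pi

def ball_volume(n):
    v = 1.0 if n % 2 == 0 else 2.0
    for k in range(2 + n % 2, n + 1, 2):
        v *= tau / k
    return v

# Sanity checks against the familiar closed forms:
print(ball_volume(2), math.pi)           # circle area, pi * r^2
print(ball_volume(3), 4 * math.pi / 3)   # sphere volume, 4/3 pi r^3
print(ball_volume(4), math.pi ** 2 / 2)  # 4-ball, pi^2/2 r^4
```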
But that is totally different than the GC situation. With a stop-the-world (STW) GC you cannot even run; how do you expect to be able to display anything on the screen? Even with a non-STW GC, the reason the GC has to collect is that you are running out of memory (unless you are massively over-provisioned), and if you are out of memory for the moment, how are you going to compute things in order to put stuff on the screen?
Accessing disk/network/etc induces latency, yes, but that is why you write your program to be asynchronous to those operations! But this is a totally different case than with GC. To be totally asynchronous to GC, you would need to be asynchronous to your own memory accesses, which is a logical impossibility. I do not see how you even remotely think you can get away with drawing an analogy between these two situations.
And self-driving cars. And medical equipment.
etc, etc. You can list lots of fields for which this is unacceptable, and they are a lot of the really interesting fields.
The demo is not consumer-product-ready, as it is currently expensive and the tracking requires you to paper your room. But those things are all solvable (inevitably with a bit of time, which is why Mike said 'within two years').
A number of the key people responsible for the Valve work are now at Oculus with a mandate to consumerize the best tech possible. So yeah, it is going to happen. Do not believe this article. Oculus CV1 may not get all the way there but it will be better than is necessary to show clearly the writing on the wall (the main factor in its quality being cost, which the Facebook deal helped tremendously with). CV2 will, I expect, be pretty badass.
(I have tried the Valve demo and then spent a few days working with my game on their hardware.)
Analogy: Suppose the Earth is covered with clouds and we have never seen the sky, we have not invented space ships yet, etc. Nobody knows why tides happen. Someday someone predicts there is a GIANT rock orbiting the Earth not too far away, and everyone says that is crazy. But eventually you send a rocket up with a camera, and you see this giant rock there! Whoa. Since the tides caused you to look for the rock, and you found the rock, you have reason to suspect the rock does cause the tides. Maybe it doesn't -- further verification is required. But the big rock is evidence, it moves the needle. That is what science is: making testable predictions, then testing them, then letting the results of those tests help you understand what is going on in the world.
Invoking the FSM or discounting evidence, because it doesn't match preconceived notions, is in fact the kind of thing that is the bane of science and always looks embarrassing / shameful in retrospect. I would hope that people at HN understand science well enough to see this pattern and not participate.
P.S. Re the "according to the article's narrative" snipe, uhh, some of us have been following this issue since the 90s when the idea was proposed. "You guys are crazy" is an accurate description of the majority consensus.
If you put money into stocks, and get a return on that, that return means very concrete things: more people who get polio vaccines, more people who get dewormed, etc, etc. Very concrete results.
If you don't put money into stocks, because they are unethical ... well, how exactly? As we have mentioned the stock has already been sold. So, you don't do it because by doing so you are helping support a market system that may provide a setting for future unethical corporations to arise in and go public? (Along with many ethical corporations?) Or because by providing demand for the stock now, you retrocausally made it possible for that company to have IPO'd years or decades ago?
These are very abstract and weird concerns when your job is very directly to help people.
You say "if unethical corporations faced divestiture", but even a large fund like this one has no say in that. If they don't invest in McDonald's or Walmart or whatever, it doesn't matter in the grand scheme of things.
So seriously, if you were running the fund, what would you do? For real. If you make less money, actual human beings die who would not have died, in volume.
(All of this said, I would feel pretty dirty about buying the stock of a prison company, anyway. But railing on them for stuff like McDonald's and Walmart just ... doesn't make sense.)
That is a pretty straightforward next step from what is there now. I am not sure how likely it is, but if you claim you can't possibly think of abuses, then you just aren't thinking very hard.
So if you don't believe the extreme crazy case, think of the standard example: you get into a space ship or something and zip around really fast. And you are thinking about things closer to you. The math says the same thing: as your light cone changes, the set of spacetime intervals that have time distance 0 from you changes as well. So from your relative position the "now" at these faraway points goes back and forth. This is basic, basic relativity.
In general, games only use light-linear texture maps when they also need HDR in the source texture, which is not that often. Ideally it is "purest" to use HDR for all source textures, but nobody does this because of the big impact on data size. (And even for the few textures that are stored as HDR source, often the data will not be integer, but some other slightly-more-complex encoding.)
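The light-linear distinction here is the sRGB question: most 8-bit source textures are stored gamma-encoded (sRGB), and shading math wants linear-light values, so textures are decoded on sample. A sketch of the standard sRGB transfer functions (these constants are from the sRGB spec, not from the comment):

```python
# Most 8-bit source textures are stored gamma-encoded (sRGB), not
# light-linear; lighting math wants linear values, so texels are decoded
# when sampled. The standard sRGB transfer functions:

def srgb_to_linear(c):
    """c in [0, 1], sRGB-encoded -> light-linear."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """c in [0, 1], light-linear -> sRGB-encoded."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Mid-gray in sRGB is much darker in linear light:
print(srgb_to_linear(0.5))   # ~0.214
```

Storing textures encoded like this spends the limited 8 bits where the eye is most sensitive, which is why games only pay for genuinely light-linear (HDR) source data when they actually need it.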
[Claimer: I have been making 3D games professionally for 17 years, so this information is accurate.]
Dude seriously, I have been doing this a long time. I am going to stop replying after saying just one more thing:
If you are running on an OS like Windows (which this product is targeted at), you do realize that the OS can just preempt you at any time and not let you run? How do you predict if you are going to finish a frame if you don't even know how much you will be able to run between now and the end of the frame?
3D rendering is so deeply pipelined that it is difficult or even impossible for the program to know if a frame render is going to finish on time. It takes a long time to get information about completed results back from a GPU; on PCs you almost certainly can't get that info during the same frame you are rendering, unless you are rendering tremendously slowly.
In order to make an estimate about whether the frame is going to be done in time, you would have to guess. Okay, then, so now you decided to stop rendering this frame, what do you do? Leave a giant hole in the scene? Turn off the postprocess? Draw low-detail versions of some things (hint: still very slow)?
Your program does not even really know for sure which pieces of the scene are fast to render and which are slow. It does not know if specific textures are going to be paged out of VRAM by the time you get to a specific mesh, or not. etc etc
Article is alarmist and weird. Not recommended.
The inbuilt assumption is that people care enough about the data to want to search it frequently and thoroughly. I don't think that is true. Facebook is mostly ephemeral junk data that you don't care about; this has been true ever since they changed their UI to the Twitter-style "what are you thinking right now?" input / streaming.
In order for search to be useful they first have to backtrack heavily on what their entire platform is about. Which would be hard.
"Plus, I mean, what if I revealed myself to you, and then you were like, oh shit, I better take what he says a little bit more seriously, wouldn't that just be embarrassing? I don't want to do that to you."
No, by all means, go ahead. I am interested in having a productive discussion about programming, so if you can share your experience in a way that convinces me, I am totally open to it. If it turns out I am wrong, I won't be embarrassed, I will just change my opinion so that I am not wrong any more. This is how one becomes a good programmer in the first place: by paying attention to what is empirically true, rather than what one is originally taught or what seems exciting or what is in theory better.
Optimized code is just a different thing from general code (if one is a productive programmer).
Yes, you can understand it, but it takes more brainpower to do so than it should, especially once you get beyond trivial cases. It is much better just to write it the long way.
If a GUI is your example of something that is difficult, we are just living in different worlds and it's a challenge to have a productive conversation. I think a difficult task is something like "make this ambitious AAA game run on the PlayStation 3 performantly". That is pretty hard.
That said, what the article proposes as a solution is bananas. You don't need to do crazy functional acronym things; just don't use callbacks. Good C/C++ programmers in the field where I work (video games) do this all the time. It's not hard except that it requires a little bit of discipline toward simplicity (which is not something exhibited by this article!)
40 amps at 200v will completely charge a Roadster in something like 7 hours. I don't know how long for a Model S, but it is probably longer. I am only familiar with the day-in, day-out of the Roadster so I will stick to that mostly.
So the "charge the car all night" (7-8 hours) scenario only makes sense if you need to charge the battery 100%, i.e. you were on fumes before you plugged it in, which means you drove 200-240 miles the day before. This may be true for some people but it is not going to be true for most people most of the time.
I drive from SF to Berkeley and back most work days, a 25-minute commute each way, and I like to drive it like a sports car, so I use relatively a lot of power for that length of trip. Generally I use about 20% of the Roadster's battery on such a day, so that's probably about 1.5 hours to charge in a 220V outlet. If everyone did that, you would probably want to stagger the charging times, but it is totally doable even with current setups.
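The arithmetic behind those figures is simple. As a sketch (the 53 kWh Roadster pack size is an assumption filled in from public specs; the 40 A / 200 V numbers are from the comment above):

```python
# Back-of-envelope charge-time arithmetic for the Roadster numbers above.
# Assumption: ~53 kWh pack (from public specs, not stated in the comment).
volts, amps = 200, 40
charge_rate_kw = volts * amps / 1000           # 8 kW

pack_kwh = 53                                   # assumed Roadster pack size
full_charge_hours = pack_kwh / charge_rate_kw
print(round(full_charge_hours, 1))              # ~6.6 hours, i.e. "about 7"

# A commute day that uses ~20% of the battery:
print(round(0.20 * full_charge_hours, 1))       # ~1.3 hours, i.e. "about 1.5"
```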
I do think that there would be some increased power draw and that we would want to beef up our electricity infrastructure a little. But that is pretty different than what I see as a Republican talking point, which is something like "there is no way that the USA can support everyone plugging in their EV, it's impossible." That is not my experience as an EV driver.
When I was coming up with the dryer analogy I was just using numbers I pulled off the Internet about what people were measuring their dryer's pull at. It's possible dryers just are not very efficient, I don't know! (Though I thought the whole Energy Star thing was supposed to put pressure on that).
The first dot-com bubble was all about user-acquisition and loss-leading. Look where that ended up.
Look, if you can't get someone to open your app more than 3 times, maybe it's because you are not making interesting things.
If you build something that is high-quality and that makes a difference in people's lives, they will flock to you, because there is so little of that. On the other hand, there is an overabundance of 99-cent and free-to-play software that has no real reason to exist other than to try and make the developers money. But this is the mindset that this article comes from (evidence: the author's first three pieces of advice are "cross-promote", "market well", and "internationalize", things that have nothing whatsoever to do with what is being made or how good it is).
These are not useful tactics, at least not in isolation, because they don't address the core problem: that a developer in this situation is a dime-a-dozen. The solution is to stop being a dime-a-dozen.
You don't need to differentiate your particular piece of software so much; work on differentiating yourself as a developer, in terms of the quality and interestingness of what you produce; be a thought-leader rather than a follower who mainly thinks about cross-promoting; and once you manage these things, you will automatically be doing okay. (And you are much more likely to be satisfied with your life, which is quite a nice side-benefit).
Of course, most people will not follow this advice, because it requires effort, introspection, course-changing, all that stuff. Most iOS developers will continue going as they are and continue suffering the consequences. Not pretty but that is just how it is!
You can drive from SF to Santa Cruz and back on one charge.
You can drive from SF to Mountain View and back, and then there and back again, on one charge.
I only need to think about recharging if I am going on long road trips -- say, SF to Lake Tahoe. This is exactly where the Supercharger comes in and Tesla's announcement is way huger than I expected. It makes me want a Model S even though I have been thoroughly delighted with the Roadster for two years.
The reason is: it shows a worldview where everything is about your personal advantage and not making things bad for yourself. It comes across as selfish and petty. That final paragraph is just kind of gross.
I want to do business with / hire / socialize with people who care about the good of the world, hopefully more than they care about their own small situation. It's hard to describe what that looks like -- it is different for everyone -- but it almost certainly does not look like this.
I am a game developer, actually, and I believe games can have great social value. So I support you in pursuing your idea.
"Rewards" systems like the one described here, though, are not about giving anything to the audience. They are purely about taking money away from people, and doing it as manipulatively and sneakily as possible. I believe the net social value for things like this is deeply negative.
What I am talking about is what I perceive to be the mindset of a great many young people entering the working world today. This has nothing to do with where I hang out (indeed, I actively avoid stuff like that!). I went to YC Demo Day because I didn't have the full picture of what it was like. Now that I know, I have not gone back.
Thekla, Inc is hiring good video game programmers to work on unusual and very interesting games.
"Go, the language with zero nines of availability."
How many nodes are there? Let's presume there are a lot, otherwise the problem is trivial and speed doesn't matter anyway. So now you have an N-long immutable array that you are copying every time you want to change a node pointer? So you have changed the operation of pointer-changing from O(1) to O(N)? What does that do to the run time of the algorithm?
Also, your garbage velocity just went WAY up. What does this do for the runtime of your program generally?
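The cost being pointed at is easy to demonstrate: updating one slot of an immutable N-long array means copying all N entries, turning an O(1) pointer write into an O(N) operation, and producing N-1 cells of garbage per update. A minimal sketch using tuples as the immutable array:

```python
import time

# Updating one entry of an immutable N-long array requires copying all N
# entries: an O(1) write becomes O(N), and each update sheds ~N cells of
# garbage for the collector to chew on later.

def update_immutable(nodes, i, new_value):
    # Full copy with one slot changed, as a persistent flat array would do.
    return nodes[:i] + (new_value,) + nodes[i + 1:]

N = 100_000
nodes = tuple(range(N))
t0 = time.perf_counter()
for i in range(100):
    nodes = update_immutable(nodes, i, -1)
t1 = time.perf_counter()
print(f"100 single-slot updates of a {N}-entry immutable array: {t1 - t0:.3f}s")
```

Inside an algorithm that mutates node pointers in a tight loop, that extra factor of N multiplies straight into the overall running time.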
See for example the factoid that 0.15% of players spend over 50% of the money in f2p games (and these games make tons of money):
It reminds me of the crappy articles talking about how TSLA was down and doing terrible, when the price was around 25, months after it IPO'd at 17.
I wish I could downvote this article.
(Yes, at the level of the macroscopic world we can make consistently accurate predictions, but physicists would say this is because we are operating at a scale where the statistical nature of quantum mechanics averages out, etc. Unless you are Carver Mead or one of the other wave guide kinds of guys who believe there actually is no randomness in QM and it is deterministic all the way down.)
My one Airbnb experience was massively negative because I value my time.
Hotels are not remotely doomed. Demand for them may go down, though.
The math does not really know the difference between upscaling and downscaling.
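A minimal illustration of that symmetry: the same sampling loop handles both directions, with only the scale ratio changing. (Nearest-neighbor is used here for brevity; good downscaling adds filtering, but the sampling math itself is direction-agnostic.)

```python
# The same resampling loop serves upscaling and downscaling; only the
# ratio len(src) / out_len changes. Nearest-neighbor for brevity.

def resample(src, out_len):
    n = len(src)
    return [src[min(n - 1, int(i * n / out_len))] for i in range(out_len)]

row = [10, 20, 30, 40]
print(resample(row, 8))   # upscale:   [10, 10, 20, 20, 30, 30, 40, 40]
print(resample(row, 2))   # downscale: [10, 30]
```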
Maybe he doesn't think it would ever be fast enough, or maybe he thinks it can only apply at really small scales (i.e. nanotech) and there's no smooth incremental path there from where we are now?
Besides, do you know that a typical gas station can't readily supply 100kW? How do you know? Has anyone even thought about this seriously (except Tesla)?
I am a little bit shocked by the amount of specious naysaying that is happening in this thread. This is supposed to be Hacker News, where people are motivated to really think about problems, to build solutions, etc, etc. I don't see any of that attitude in some of these replies.
I certainly wouldn't make that argument for cars of today. But I see how rapidly the price is coming down, and I see what price targets Tesla is aiming for, and that is pretty interesting.
(And if you really want to compare price, it's probably a good idea to try to account for externalities. It's hard to estimate those for gas cars because so much is kept secret, but they are substantial.)
Tesla is attacking every single anti-EV argument in a very deliberate way and most of the attacks are strong successes.
It is true that if you want to drive to Mountain View at 85mph you probably can't do two round trips. You could certainly still do one round trip without thinking about it, especially if you charged in range mode before heading down there. Actually, from your figure of 200mi range, maybe you didn't know about charging in range mode? That gives you about 240mi of range (maybe a little less with an older battery).
The issue about driving speed is just that, as with any car, wind resistance applies dramatically more force to your car as speed increases. If you are driving 60 (which is at or above the speed limit on 101 and 280), it is not bad, since you are at about the efficiency the car's published numbers are based on; but if you are driving 70-75, range is very much decreased.
Here is a range vs. speed graph for the Roadster:
(There are more-detailed charts on the internet if you search for them). Notice that if you are driving 55, you get about 70 more miles of range than if driving 75!
On the other hand if you want to drive 45 and feel like a total ass then the car will go really far!
So this is definitely one thing that will improve as electric cars and infrastructure get better: less dependency on speed or weather conditions (driving through heavy rain will also decrease range by quite a bit). In a Model S with the supercharger network already in place, you could probably drive from SF to LA like a bat out of hell and not worry about it, which is cool.
But, I am just saying, I drive round-trip to the south bay all the time and do not think twice about it. That is the kind of range that is trivial for the Roadster. Unless you commute at 90mph and aren't using range mode and/or didn't charge the car all the way up.
When it comes to this Bloomberg report, I read it and think, this very small production delay, and the reasons behind it, are all signs of a smart company doing the right things. The delay seems to have very little impact on the long-term success of the company. What really matters there is just whether people want to buy the cars, which this news does not have much bearing on (except maybe that people are more likely to buy the cars if they are perceived as paragons of quality... like the public perceives iPhones, etc).
The supercharger announcement was way above and beyond anything I expected. I own a Roadster and now I want a Model S because, as someone living in California, it fixes the one substantial issue with the Roadster: inconvenience of long road trips. The existence of the superchargers is way more of an upside than delayed production is a downside.... yet the stock goes down.
So, yeah... this looks like full-on market irrationality, just people being spooked.
Hey, make your own choice about what you want to do with your life and what you want to consider cool.
In short, I can't tell what value the site is supposed to offer (it certainly doesn't seem to be helping me find an office).
So I think they have bigger problems than the way they open their blog postings. I would say that shit isn't real enough yet...
It is just straightforward extrapolation.
Usually this kind of announcement is followed by another one, 9 months-2 years later, of the service being shut down.
This is just the first step of that pattern again.
Yeah, maybe there was a case when a couple of people's lives were at stake but nothing happened.
The real issue is that tens of thousands or hundreds of thousands of people's lives are at stake RIGHT NOW, under conditions that are much less controlled than what people are deriding as uncontrolled conditions. But people are griping about the 1-2 instead of the 10,000-400,000.
How is this not dead simple to understand? I don't get it.
I just realized what the problem is: this is bikeshedding. Everyone knows about people driving around and feels qualified to have moral indignation in that area, whereas few people know anything about actual cars.
I see lots of people arguing about the safety of how these guys conducted the hack. Okay, sure, there is probably an issue there of some degree.
But it's a very small issue compared to the fact that hundreds of thousands of vehicles are arbitrarily hackable right now, with more rolling off the assembly line all the time, and people are driving these around right now.
Why is most of the discussion here about the minor issue? Why is everyone so eager to derail discussion from the major issue? I thought HN was trying to be a reasonable place.
This has been their playbook about everything for a long time so I don't know why you think it would be different in this case.
If you talk to auto manufacturers in a way that they understand, they will understand.
But the very important difference here is that in your case you have a choice: it is possible to optimize the cost away and to otherwise control the characteristics of when you pay this cost. In GC systems it is never possible to do this completely. You can only sort of kind of try to prevent GC. It's not just a difference in magnitude, it's a categorical difference.
If you want your rendering to have nice lighting, there are all kinds of preprocessed global illumination techniques you can use. Yeah, these are not great for animated objects, and SSAO people will tell you SSAO is good for animated objects ... except it isn't, because again, the result does not look anything like actual light and shadow. (I don't consider speed to be a great virtue if the thing that you produced quickly is not very good ... unless there is no way to do anything better because you can't afford it.) There are other techniques you can apply in the case of animated objects to produce much better output (baking a per-vertex occlusion representation and evaluating it on the GPU, for example).
There was a brief window in time when SSAO maybe seemed like a good idea but we are well past that.
The reason I say SSAO makes games look "kind of cheap" is because it usually gets used by Unity games that just turn on the SSAO flag. These games are instantly recognizable.
Now the rhetoric is "waaah, the charging stations are kind of inconvenient."
Anyone who writes this kind of article without looking at dsituation/dt is being dumb.
It is true that we should care about conditions in factories such as these and work to improve them. It's also true that if you insisted that Chinese workers get paid what American workers would, there would have been almost no factories and the entire country would still be in desolate poverty. Yes, if there were no factories, then probably the environment would be in a better condition.
It's a very complex situation. Just picking one angle to it and insisting that angle holds the whole truth does not help anyone.
I had a bunch of money. I could have kept it locked in a box somewhere. Instead I decided to pay a bunch of people to help make something cool. Yes, that's a transaction, but for my part, I didn't have to engage in it. (In fact some days I kind of regret it, given all the garbage one has to put up with when running a company.) I could have kept the money locked in a box and felt happy that I feel rich. Or something.
I mean it's fine if you are competing with Python or something, but if you actually care about perf this is never going to work. But of course anyone wanting to coin a "Rule of X" doesn't want to see a problem in its full context, they just want to be able to pretend to have a solution well enough that they can at least fool themselves. Etc, etc.
The most important skill in software development, by far, is managing your own psychology.
There is some kind of a point here, but it's ruined by the author's lack of perspective and desire just to land a gotcha.
The week I learned to treat vectors as abstract objects, rather than arrays of coordinates, I experienced a drastic phase shift in my ability to program geometric operations effectively and clearly. The coordinates are still there, of course, but you have a lot more power over them.
The book "Linear Algebra Done Right" is all about this, and I absolutely recommend reading it if you haven't.
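A toy illustration of the shift being described (names and operations are mine, not taken from the book): once dot products and projections are operations on vector objects, geometric code reads like the math, and the coordinates only show up at the edges.

```python
class Vec:
    """Treat a vector as an algebraic object, not as a bag of
    coordinates you poke at individually."""
    def __init__(self, *coords):
        self.coords = coords

    def __add__(self, other):
        return Vec(*(a + b for a, b in zip(self.coords, other.coords)))

    def __mul__(self, s):
        # Scalar multiplication.
        return Vec(*(s * a for a in self.coords))

    def dot(self, other):
        return sum(a * b for a, b in zip(self.coords, other.coords))

    def project_onto(self, other):
        # Component of self along other -- no coordinate juggling needed.
        return other * (self.dot(other) / other.dot(other))

v = Vec(3, 4)
axis = Vec(1, 0)
p = v.project_onto(axis)
print(p.coords)  # (3.0, 0.0): the projection falls out of dot products alone
```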
Of course, for this to be done well they need to pay the drivers enough that tipping is not necessary.
It is even kind of reasonable to just substitute a call to some black-box is_divisible_by() if you can't figure out how to do that test ...
For example, your data structure is on the GPU and your data is in a texture in a certain specific format because of other reasons.
If you wrote the above reply without considering this kind of case, it probably means you haven't been exposed to very much of this kind of case ... which was my original point.
But my point is this is not a bureaucratic gotcha question. If you can't do this task, you don't really know how to program well. Sorry but that's just how it is. It's like failing FizzBuzz.
There is this culture of crappy software that has happened lately, especially in the Web world, and it is really quite lamentable. I believe that a very large positive impact would be made on the world -- due to the extreme prevalence of software these days -- if more people would take seriously the idea of software creation as a craft with a very high skill ceiling, and work diligently to improve their understanding and their skills.
If you're writing scripts, or JS code for web pages or something like that, then maybe you don't use CS stuff, but ... are you able to write a web browser if you had to? Are you able to write an operating system or navigational software for a spacecraft? If not, then maybe just see this as revealing sectors of your skill set that could be beefed up, rather than presuming that none of that stuff is important.
Inverting a binary tree is pretty easy. It is not quite as trivial as FizzBuzz, but it is something any programmer should be able to do. If you can't do it, you probably don't understand recursion, which is a very basic programming concept.
This isn't one of those much-maligned trick interview questions. This is exactly the kind of problem one may have to solve when writing real software, and though you may never have to do this specific thing, it is very related to a lot of other things you might do.
I run a small software company and I very likely would not hire a programmer who was not able to step through this problem and express a pseudocode solution on a whiteboard.
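For reference, a sketch of the kind of solution meant here (in Python; the comment itself gives no code): mirroring a binary tree is a few lines of recursion, swapping the children at every node.

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def invert(node):
    """Mirror the tree: swap left/right children at every node."""
    if node is None:
        return None
    node.left, node.right = invert(node.right), invert(node.left)
    return node

# A small tree:    1          1
#                 / \   ->   / \
#                2   3      3   2
root = invert(Node(1, Node(2), Node(3)))
print(root.left.value, root.right.value)  # prints "3 2"
```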
This is obviously very different from the PTSD-only meaning you are talking about, but I would bet that the new age version of the word has been in use for much longer and by many more people.
So ... I sense some presumption and lack of exposure here, is all I am saying.
Actually predicting where data is really going to go involves solving the halting problem. So by necessity any static analysis of ownership is going to be conservative, in the sense that it has to err on the side of safety.
So there's a process of structuring things so that it's not just the programmer who understands, but the compiler who understands. Structuring the code in alternative ways so that ownership is made clear and/or ambiguous cases are resolved. Sometimes this could be a small amount of work, but sometimes it could be a very large amount of work (analogous to the simpler situation in C++ where you are using const everywhere but need to change something deep down in the call tree and now either everyone has to lose their consts or you have to do unsavory things).
At points, it might be possible to structure things so that the compiler would understand them and let you do it, but it would take a large amount of refactoring that one doesn't want to do right now (especially if one has had a few experiences of starting that refactor and having it fail), so instead one might punt and just say "this parameter is owned, problem solved". And that's great, you can stop refactoring, but you just took a performance hit.
Now, in some cases it is probably the case that this is in reality an ambiguous and dangerous ownership situation and the language just did you a favor. But there are also going to be cases where it's not a favor, it's just the understanding of ownership being conservative (because it has to be), and therefore diverging from reality. But I want to get work done today so I make the parameter owned, now the compiler is happy, but there is a performance cost there. If I were not so eager on getting work done today, I might be able to avoid this performance hit by wrestling with the factoring. But I might deem the cost of that prohibitive.
That's all I mean. But like I said, I have never written a large program in Rust so I am not speaking from experience.
In world 4, though, where you can walk to the beginning of time just by walking to the leftmost part of the level, I actually kick the player out of the level when there's no more memory. I have never heard of anyone noticing this.
It is more expensive in terms of the amount of memory required, but it is much less expensive in terms of the amount of CPU required, and CPU was ultimately the biggest problem, so it seems I made the right decisions. Even on a limited-memory console like the Xbox 360 you can rewind most levels for 30-45 minutes before running out of buffer. That is more than anyone ever wants to do as a practical gameplay interaction.
Working on The Witness... it will be done ... someday not too long from now.
There may be exceptions to this; it is not a subject I keep current with.
There are a lot of problems with global state in games, but entity values are not one of them, mostly. Problems do crop up but you just deal when they do.
Event recording has a fair bit of history in games, especially as a debugging technique, but I did not want to use it for rewind, considering it too fragile and annoying, and probably too expensive and complicated (you would have had to store world state anyway, to have something nearby to delta from so that you don't start from the beginning of time every frame, so now you have TWO systems: world state recording and event recording. Better to stick with one.)
The author is right that Rust may be a substantial improvement.
I don't necessarily want to program in Rust, which is why I am designing my own alternative, but I think if you are building safety-critical software then something Rust-like is a pretty good idea.
For my own experiments I am going more in the direction of giving the programmer auditing power, rather than just having the programmer have to silently worry about what code may or may not be doing.
I know many high-performance programmers and all of them profile because profiling is how you test your mental model against reality. Yes, as the author says, having a mental model of machine performance is important. But you need to test that against reality or you are guaranteed to be surprised in a big way, eventually.
Example: How does he even know that his div optimization matters? If he is even reading through one pointer in that time, he is probably taking a cache miss on that read, the latency of which is going to completely hide an integer divide. The author seems generally to not understand this, since he spends most of his time talking about instruction counts. Performance on modern processors is mostly determined by memory patterns, and you can have all kinds of extra instructions in there and they mostly don't matter.
Which this guy would know if he profiled his code.
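A minimal sketch of what "test your mental model against reality" looks like in practice, using Python's built-in profiler (the functions here are made-up stand-ins, not the author's code): you predict where the time goes, then the profiler tells you where it actually went.

```python
import cProfile
import io
import pstats

def build(n):
    # Allocate and fill a list -- often the real cost, not the arithmetic.
    return [i * i for i in range(n)]

def total(xs):
    s = 0
    for x in xs:
        s += x
    return s

def work():
    xs = build(200_000)
    return total(xs)

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())  # shows where the time actually went, per function
```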
Or, you know, maybe someone updated the whatever.io installer to make it 'better'. But you are trying to debug some problem and you made one image last month and another one this month and you're pulling your hair out trying to figure out why they are different. Oh, it's because some text changed on some web site somewhere.
You've taken a mandatory step and put it outside your sphere of control.
It is true that games tend to be overly simplistic in this way, but it is really a design problem, not a technical problem.
Technically, we can litter the ground with stuff and have plenty of permanent-world changes. Design-wise, it is usually unclear how to make that a playable game. Using someone's middleware is not going to help this.
The premise of this article is another example of what Frank Lantz calls the Immersive Fallacy: https://www.youtube.com/watch?v=6JzNt1bSk_U
Yes, of course you can go out of your way to get connected to the people making a game. But it's just harder to do that on iOS than on Windows, and this has consequences in terms of the viability of these platforms for small developers. (It is by no means the only factor. The race-to-zero pricing on iOS is probably a bigger factor.)
The whole point is that it is doubtful that they will stay solvent if they put more money into production values on that platform.
Nice graphics are very expensive. (They are more expensive than any other aspect of game development, in fact). If it seems unlikely that enough people will buy their game given that type of investment, then it may be a good choice to stay away from that.
On top of which, maybe they just don't want to spend all their time doing graphics. Maybe they want to work on the story / world / etc.
I don't understand why copies are even relevant: you can make several extra copies and nobody will ever notice. Audio data is trivial in modern systems. Let's say there are two channels coming in; 48000 * 2 * 2 bytes per second is an absolutely trivial amount of data to copy and has been for many years. Building some convoluted (and unreliable) system just to prevent one copy per application, when each application is going to be doing a lot of nontrivial processing on that data, strikes me as foolish. But don't listen to me, look at the fact that Linux audio is still famously unreliable. If the way it's done were a good idea, it would actually work and everyone would be happy with it.
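The arithmetic above, spelled out (assuming 16-bit stereo PCM, as in the comment):

```python
# Back-of-envelope: stereo 16-bit audio at 48 kHz.
sample_rate = 48_000          # samples per second, per channel
channels = 2
bytes_per_sample = 2          # 16-bit PCM

bytes_per_second = sample_rate * channels * bytes_per_sample
print(bytes_per_second)       # 192000 bytes/s, i.e. under 200 KB/s

# Even ten redundant copies across applications is under 2 MB/s,
# noise next to the multi-GB/s bandwidth of any modern memory bus.
print(10 * bytes_per_second / 1e9)  # ~0.002 GB/s
```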
I think you will find, though, that most hardware isn't this way, and to the extent this problem exists, it is usually an API or driver model problem.
If you're talking about a sound card for a PC, probably it is filling a ring buffer and it's the operating system (or application)'s job to DMA the samples before the ring buffer fills up, but how many samples is dependent upon when you do the transfer. But the hardware side of things is not something I know much about.
> If you are writing some sort of synth, as soon as you receive a midi note or a tap, trigger the synth and the note will play in the next audio block
Yeah, and waiting for "the next audio block" to start is additional latency that you shouldn't have to suffer.
> If you are doing some sort of effect, grab the input data, process and have it ready for the next block out. I don't understand why you need a second loop.
The block of audio data you are postulating is the result of one of the loops: the loop in the audio driver that fills the block and then issues the block to user level when the block is full. My whole point is you almost never want to do it that way.
However, even when they are synced, you can still easily see the problem. The software is never going to be able to do its job in zero time, so we always take a delay of at least one buffer-size in the software. If the software is good and amazing (and does not use a garbage collector, for example) we will take only one delay between input and output. So our latency is directly proportional to the buffer size: smaller buffer, less latency. (That delay is actually at least 3x the duration represented by the buffer size, because you have to fill the input buffer, take your 1-buffer's-worth-of-time delay in the software, then fill the output buffer).
So in this specific case you might tend toward an architecture where samples get pushed to the software and the software just acts as an event handler for the samples. That's fine, except if the software also needs to do graphics or complex simulation, that event-handler model falls apart really quickly and it is just better to do it the other way. (If you are not doing complex simulation, maybe your audio happens in one thread and the main program that is doing rendering, etc just pokes occasional control values into that thread as the user presses keys. If you are doing complex simulation like a game, VR, etc, then whatever is producing your audio has to have a much more thorough conversation with the state held by the main thread.)
If you want to tend toward a buffered-chunk-of-samples-architecture, for some particular problem set that may make sense, but it also becomes obvious that you want that size to be very small. Not, for example, 480 samples. (A 10-millisecond buffer in the case discussed above implies at least a 30-millisecond latency).
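The 3x arithmetic from the preceding paragraphs, as a sketch (assuming 48 kHz, as above): fill the input buffer, take one buffer's worth of processing delay, fill the output buffer.

```python
def worst_case_latency_ms(buffer_samples, sample_rate=48_000):
    """Input fill + one buffer of processing + output fill = 3 buffers."""
    buffer_ms = 1000.0 * buffer_samples / sample_rate
    return 3 * buffer_ms

print(worst_case_latency_ms(480))  # 10 ms buffer -> 30.0 ms end-to-end
print(worst_case_latency_ms(64))   # ~1.3 ms buffer -> ~4 ms end-to-end
```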
But I'll drop a few hints. First of all, nobody is talking about running interrupts at 48kHz. That is complete nonsense.
The central problem to solve is that you have two loops running and they need to be coordinated: the hardware is running in a loop generating samples, and the software is running in a (much more complicated) loop consuming samples. The question is how to coordinate the passing of data between these with minimal latency and maximum flexibility.
If you force things to fill fixed-size buffers before letting the software see them (say, 480 samples or whatever), then it is easy to see problems with latency and variance: simply look at a software loop with some ideal fixed frame time T and look at what happens when T is not 100Hz. (Let's say it is a hard 60Hz, such as on a current game console). See what happens in terms of latency and variance when the hardware is passing you packets every 10ms and you are asking for them every 16.7ms.
The key is to remove one of these fixed frequencies so that you don't have this problem. Since the one coming from the hardware is completely fictitious, that is the one to remove. Instead of pushing data to the software every 10ms, you let the software pull data at whatever rate it is ready to handle that data, thus giving you a system with only one coarse-grained component, which minimizes latency.
You are not running interrupts at 48kHz or ten billion terahertz, you are running them exactly when the application needs them, which in this case is 16.7ms (but might be 8.3ms or 10ms or a variable frame rate).
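A tiny simulation of the mismatch described above (the numbers mirror the example: 10 ms driver packets, a hard 60 Hz consumer; this is an illustration, not anyone's driver code). With fixed packets, the number arriving per frame oscillates, so the consumer sees bursty, variable amounts of data; pulling instead gives a steady 48000 / 60 = 800 samples per frame.

```python
def packets_ready_at(t_ms, packet_ms=10.0):
    """How many fixed-size driver packets are complete by time t."""
    return int(t_ms // packet_ms)

frame_ms = 1000.0 / 60.0  # hard 60 Hz consumer, ~16.7 ms per frame
arrivals = []
for frame in range(1, 7):
    now = frame * frame_ms
    prev = (frame - 1) * frame_ms
    arrivals.append(packets_ready_at(now) - packets_ready_at(prev))

# Packets delivered per frame oscillate: bursty, variable delivery.
print(arrivals)          # [1, 2, 2, 1, 2, 2]

# Pulling instead: ask for exactly what is available each frame,
# a steady amount with no packet boundaries in the way.
print(48_000 // 60)      # 800 samples per frame
```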
You don't have to recompute any of the filters in your front-end software based on changing amounts of data coming in from the driver. The very suggestion is nonsense; if you are doing that, it is a clear sign that your audio processing is terrible because there is a dependency between chunk size and output data. It should be obvious that your output should be a function of the input waveform only. To achieve this, you just save up old samples after you have played them, and run your filter over those plus the new samples. None of this has anything to do with what comes in from the driver when and how big.
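A minimal sketch of "save up old samples and run your filter over those plus the new samples", using a moving average as the stand-in filter (function names are mine): the output is a function of the input waveform only, so chopping the same signal into different chunk sizes produces identical results.

```python
def make_moving_average(width):
    """Filter whose output does not depend on how the input is chunked:
    keep the last (width - 1) samples and prepend them to each new chunk."""
    history = []

    def process(chunk):
        nonlocal history
        extended = history + chunk
        out = []
        for i in range(len(history), len(extended)):
            lo = max(0, i - width + 1)
            window = extended[lo:i + 1]
            out.append(sum(window) / len(window))
        history = extended[-(width - 1):] if width > 1 else []
        return out

    return process

signal = [1, 2, 3, 4, 5, 6, 7, 8]

f1 = make_moving_average(3)
out_a = f1(signal[:5]) + f1(signal[5:])   # chunked as 5 + 3 samples

f2 = make_moving_average(3)
out_b = f2(signal[:2]) + f2(signal[2:])   # chunked as 2 + 6 samples

print(out_a == out_b)  # True: output independent of chunk size
```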
Edit: I should point out, by the way, that this extends to purely software-interface issues. Any audio issue where the paradigm is "give the API a callback and it will get called once in a while with samples" is terrible for multiple reasons, at least one of which is explained above. I talked to the SDL guys about this and to their credit they saw the problem immediately and SDL2 now has an application-pull way to get samples (I don't know how well it is supported on various platforms, or whether it is just a wrapper over the thread thing though, which would be Not Very Good.)
That's programming. It is always possible that anything might need to change. That is how it is. This does not justify calcifying your code by adding extra unnecessary structure, because what that in fact does is make the program harder to change later (while requiring you to do more work up front). Also, as the author of the article notes, it requires one to keep more pieces of information clear in one's head in order to work with code of equivalent complexity, something that is almost always a big lose.
In a good language, if a dependency implementation changes, you know this because your program does not compile. (Well, of course because you are not a noob, you are linking things that are versioned in the first place, so this should not ever even be an issue unless you are actively upgrading outside code and are expecting it.) When your program does not compile, you want the compile error to be at the site that uses the dependency, because that tells you exactly where the thing is that you need to fix. Adding excess verbiage around it, and distancing the site that instantiates the dependency from the site that uses it, only causes more work.
If you are using a language/system that doesn't allow you to program this directly and clearly, then maybe that is the problem...
Since then I keep hearing about "Dependency Injection" so my impression is that it's gaining in popularity. But my kneejerk reaction is always that if someone is talking about this subject, they probably are not a very good programmer, just like if someone is talking about how important UML diagrams are. It is maybe a hasty conclusion but that is where my brain goes.
I mean, are you supposed to be able to just wake up in the morning and say "hi, I am a car company" and manufacture 50k cars?
Keep in mind this was all on computers from the year 2002, which were pretty damn slow compared to computers today. Today you could do a lot more guys.
Where I am sitting, you absolutely have 10 years (or more) to give toward a specific cause, and the path of the software developer is one of lifelong improvement.
Stuff like "the rate at which tools are changing" doesn't matter too much, because that stuff is just surface-level knowledge, not deep knowledge.
I am 43, and have much to do yet; if you are telling me I am due for retirement, I suggest you have a very warped view of the world.
Those guys all interacted with each other and the environment (though the engine was designed to do the interactions in slices, where 1/N of the guys would be checked each frame, but N was not high, like 4 or 5 maybe?)
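The slicing described here might look something like this round-robin sketch (an assumption on my part, not the actual engine code): each frame, only 1/N of the entities run their expensive interaction checks, and after N frames every entity has been checked.

```python
def entities_to_check(entities, frame, slices=4):
    """Run full (expensive) interaction checks for only 1/N of the
    entities each frame, round-robin by index."""
    return [e for i, e in enumerate(entities)
            if i % slices == frame % slices]

entities = list(range(10))  # stand-in entity ids
seen = set()
for frame in range(4):      # after N frames, every entity was checked once
    seen.update(entities_to_check(entities, frame))
print(sorted(seen))         # all ten entity ids
```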
Basically anyone making interesting games needs source, because at some point you want to do things that the engine didn't exactly anticipate.
Also, if Unity really were "an efficiency avalanche", we'd be seeing a lot of high-polish games in Unity. To date we haven't really.
First thing that happens is opening a new development account per app in an attempt to squeak under the $100k as much as possible.
i.e. these "specific requirements and limitations" you mention have a LOT to do with memory.
Yes, you can make GC work in these situations, but you are going to pay for it. In perf.
I have to say frankly I do not believe any of the words in your last paragraph at all.
This is absolutely, massively untrue. If you try making compilers sometimes, you will see how very hard it is for compilers to be sure about anything.
For example: Are you calling a function anywhere inside that iteration? Is this a copying collector? Could anything that function does (or anyone it calls) possibly cause a GC, or cause us to be confused enough that we can't tell whether a GC might happen or not? Then you need read barriers on all your operations in this function, i.e. your iteration is going to be slow.
"Also, if your data to work on is being streamed in, having to make the choices in managing the allocations & uses of std::vector buffers is much less useful than having the system heuristically balance in a more managed environment."
Also absolutely, massively untrue. Your application knows more about its use cases than the generic systems it is built on (which must handle many many different kinds of programs). Because your application knows what is meant to happen, it can make much better performance decisions.
From my perspective as someone who keeps going back to Linux and trying to use it every 18 months or so, the #1 problem today is that there are WAY too many distros -- and as a result, all of them are broken. What really needs to happen is for the Linux community to put a great deal of elbow grease into a small number of distros.
Because I only try Linux every year or two (and give up on it every time), I see isolated snapshots of how usable the OS is, and from my perspective, it's gotten less stable and less usable over the past 5 years. (Six months ago I had to try 4 different distros before one would even install correctly on one of my two test laptops, for example).
In terms of mainstream distros that are actively trying to appeal to end-users (not counting fringe research projects), how many is enough to provide good variety? I am thinking 3-5 maybe?
Instead, this is the situation: http://en.wikipedia.org/wiki/List_of_Linux_distributions
Does anyone think that is an efficient way to produce quality results?
Edit: It's also worth keeping in mind that the Wikipedia list is sort of the minimal list of versions. For example, if you go to the Linux Mint homepage, you get 4 different versions to choose from: http://www.linuxmint.com/
Nobody has ever built a garbage collector that does not slow your program down or cause it to use vastly more resources than it would otherwise. (Claims to the contrary are always implicitly caveated).
Given that this is the case, it really does start looking like a language issue. Yes you can rearchitect the GC to care more about locality, but you are just pushing the dust around on the floor: you will find a different problem.
The reason is because you don't control the GC and don't even necessarily know what exactly drives the decisions it makes. So once you want to go beyond a certain level of performance, there is no right answer. You are just randomly trying stuff and kind of flailing.
In C++ (or another direct-memory language), there is a right answer. You can always make the memory do exactly what you want it to, and there's always a clear path to get there from wherever you are.
There is always some garbage velocity beyond which any given system is not able to cope.
Usually that limit is kinda small compared to what you'd actually like your program to be able to do.
The green line was the only one still going, and it plateaued about a year ago (you'd see that in a newer graph).
Like, the plane is always landed from the ground, not the air. That idea introduces new problems, but you can start thinking about those too. Come on, people.
Eight years ago, you could have made an equivalent list about electric cars. Well, we have electric cars now and that situation is looking pretty good.
Imagine you are an Elon-Musk-alike who wants to make fast air travel happen. Then this article isn't a list of why it's impossible, it is a list of problems you need to solve in order to make it work. I think we have enough examples in recent years to show that if someone with sufficient inventiveness attacks the problem hard, many of these kinds of things really are solvable.
This should be an obvious and automatic hype-temperer, but for some reason it isn't.
Almost everything that comes through that channel is complete garbage. It has negative utility to read through that stuff because it makes you tired enough that you might actually miss something good if you see it anyway, and it predisposes you toward negativity regarding submissions (which is psychologically unhealthy both for your quality of life and your relationships with potential investees.)
Our hit rate from open submissions was 0.25%, that is, it took 400 submissions to get one company in which we would invest. And that company is one that likely would have come to us through a more-closed submissions process.
One of the most valuable things you can have as a business owner (startup or no) is an understanding of context. Know what the situation is like for the people you are dealing with. Know why they do things the way they do. This author has not built that experience/skill. He is only seeing things from his viewpoint as someone who wants money. Guess what, this makes him isomorphic to every other random startup founder in which a VC is not going to invest.
Use common sense: VCs are financially motivated to find good companies to invest in! If they think something will give them an edge, they are going to try it. The fact that they don't take open submissions should tell you something about the dynamics of the system. You should listen and understand what that something is, because that understanding is valuable.
I think a lot of smart hacker-type people would disagree with that.
In computer lifecycle terms that is a long time and it is a little bit embarrassing that Linux works as poorly as it does in these situations. (Windows ain't so great at it either, which reflects poorly on Microsoft, but it still handles the situation a lot better than Linux does.)
But I think one has a responsibility not to try and sell one's project as something it's not, which especially means being careful with claims. I know this is sometimes hard because a lot of language design stuff comes from the academic community which notoriously overclaims (because it is their job to overclaim), and it is easy for that culture to rub off.
But if you are going to say something like "just by doing this our code becomes 200% better", with a straight face, about something that most practitioners know is going to be terrible in most cases without a tremendous amount of additional work and solving of unsolved problems (solution not shown), you're just telling the reader that they can't take you seriously. It's a bad thing to do.
I kind of stopped reading here:
"Since it is declarative code, update returns a new world w2 instead of merely modifying w1. The funny thing is, just by doing this our code becomes 200% better. For example, you can now modify the code to store all world states in a list!"
Uh huh. Try doing that with a nontrivial game that needs to be performant, and let me know how that works out.
It is trivial to make your test routine log the error but return true so that the compiler doesn't stop.
Rule #1 of programming is that if you didn't test it, it doesn't work. (It may still not work for real after you test it, but at least it's got something.)
You can't claim to anyone, or even yourself, that you have some kind of fault-tolerant system if you don't do this kind of test after every change.
If you want to study grammars in an abstract sense, then think of them this way, and that's fine. If you want to build a parser for a programming language, don't use any of this stuff. Just write code to parse the language in a straightforward way. You'll get a lot more done and the resulting system will be much nicer for you and your users.
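A hedged sketch of what "just write code to parse the language in a straightforward way" can look like: a tiny recursive-descent evaluator for + and * with the usual precedence (the grammar and names are mine, for illustration). Each grammar rule becomes one plainly readable function.

```python
def tokenize(s):
    for ch in "()+*":
        s = s.replace(ch, f" {ch} ")
    return s.split()

def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat(tok):
        nonlocal pos
        assert peek() == tok, f"expected {tok!r}, got {peek()!r}"
        pos += 1

    def expr():                   # expr := term ('+' term)*
        value = term()
        while peek() == "+":
            eat("+")
            value += term()
        return value

    def term():                   # term := factor ('*' factor)*
        value = factor()
        while peek() == "*":
            eat("*")
            value *= factor()
        return value

    def factor():                 # factor := NUMBER | '(' expr ')'
        nonlocal pos
        if peek() == "(":
            eat("(")
            value = expr()
            eat(")")
            return value
        value = int(peek())
        pos += 1
        return value

    result = expr()
    assert pos == len(tokens), "trailing input"
    return result

print(parse(tokenize("2 + 3 * (4 + 1)")))  # prints 17
```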
I actually did a few more tweets after this describing what I think makes sense.
So you might have fun wondering how to build something that doesn't work like that.
Rather than make you grind through mechanical operations on matrices, as most books do, this book takes a coordinate-free algebraic approach and does an amazing number of things with it, cutting directly to the chief insights. There are ninja proofs in this book, 4 lines long, showing some deep and useful thing about linear algebra that other books would spend pages proving in a very verbose and uninsightful way.
If creative endeavors are profitable, you can use the resulting money to fuel more creative endeavors, thus making the world a better place. Keeping money in a bank account or publicly-traded stock does not particularly make the world a better place.
Once I got approximately into the f-you money level of income, it became crystal clear how fictitious money is in the first place. I wake up one morning, and bam, I am wealthy! Why? Because someone said so and typed a number into a computer. Okay... that's kind of weird.
Given that money is so fictitious and somewhat meaningless, it is a shame to give into primal hoarding impulses, just so one can see the number in one's bank account go up like a high score in a video game. It's much better to make like Elon Musk and use your money for what it is: a way to wield influence to make the world more like you would like it to be.
Yes, money "can" clearly be made investing, if you time the market and are lucky. I am disputing the idea that stocks will always generally go up. I think this used to be true but may have changed. Much like "housing prices always go up", which was shown to be absurd.
Somehow I do not think many people realize this...
Maybe it's not the new normal, and maybe stock markets will start rising again as they have historically, but when you see a lull this long, it at least suggests the strong possibility of a pattern break.
In the current economic climate, it is pretty much a waste "investing" in anything until you have, say, an 8-figure sum in cash laying around doing nothing. I don't have that, so I am not bothering with "investing". I put "investing" in quotes because I feel the word tends to be perversely used; people really mean speculation, that is, gambling with negligible effects in terms of real-world wealth creation, but the gambling happens on such a huge scale that it distorts market prices hugely. Real investing is when you put money directly into something in order to enable the creation of something that wouldn't have been possible without your capital (as the YC folks do).
Stocks are terrible. If you look at market histories, corrected for inflation (actual inflation, not government-reported inflation, which is always understated, as the government benefits by understating it -- so normalize against something like an alternative inflation index or else straight-up commodities) then the S&P, DJIA, etc have actually not grown in 15 years. 15 years!! I know all of the "just buy an index fund" seems like good advice -- and it did used to be -- but in modern conditions that is no longer true. On top of this fact, pile on the risk of another market crash due to the USA's still-precarious economic situation, and stocks are clearly just not worth being in. (People are starting to realize this; there have been net outflows from equities most of the time for the past 40 weeks, and insider-selling-to-buying ratios are consistently huge.)
You can put money in bonds, but then it is locked up and you have a lot of inflation risk, so then you'd be aiming at short-term bonds, which are going to yield less.
Really what has happened is that US economic policy has become very hostile toward people who are responsible and save money, as an incidental effect of the desire to stimulate consumption (which mainly means taking on more debt and keeping rates tremendously low because if they ever become not-low now, debt burden is going to crush the economy.)
The upshot is that you are better off taking the mental energy you would have expended on "investing" and subsequently worrying about your money, and instead funneling it into your creative endeavors. You will make more money that way, especially when you take a long-term view. (Think about Einstein and the story about him having a closet of identical suits; except what I am talking about here is way less extreme and way more obvious.)
I have a rant about how peoples' "investing" according to the modern American model is actively making the world a much worse place than it ought to be, but this post is already long enough.
That said, it’s not that simple. There are threshold effects (more than X amount of time = your company is dead) and intangibles (how do you feel while programming).
But anyone making a blanket statement without considering these things is participating in a dumb internet argument.
I am sure the company is messed up, but come on.
Not the right way to think about it; the connection latency is irrelevant. What is relevant is that you need to play audio in sync with the video, and that audio is coming to you approximately simultaneously with the video it's meant to be synced with.
As for your first example “you may be relying on a 3rd party service that compares things”, has this ever happened in the history of the universe?
Please stop saying this stuff as it just contributes to the general confusion.
But I just wanted to comment that you don't need an "Entity Component System" in a game, and especially not for a very simple not-yet-a-game like the one shown here. (You also don't need inheritance or composition.)
It bothers me that so many people are buying into this hive-mind marketing on ECS, when in reality it is just overengineering + procrastination in almost all cases.
(None of my games have ever had anything as complicated as a component system).
If you want to make a simple game like this, just sit down and program it in the obvious way. It will work. You don't need to be fancy.
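To make "the obvious way" concrete, here is a minimal sketch (the class and field names are hypothetical, just for illustration): plain objects with plain fields, updated in a straightforward loop, with none of the ECS machinery.

```python
# A sketch of "the obvious way": plain objects, a plain update loop.
# No components, no systems, no registries.
class Bullet:
    def __init__(self, x, y, vx, vy):
        self.x, self.y = x, y
        self.vx, self.vy = vx, vy
        self.alive = True

    def update(self, dt):
        # Move by velocity; real game logic goes here directly.
        self.x += self.vx * dt
        self.y += self.vy * dt

def update_world(bullets, dt):
    for b in bullets:
        b.update(dt)
    # Remove dead bullets in place, the simple way.
    bullets[:] = [b for b in bullets if b.alive]

bullets = [Bullet(0.0, 0.0, 10.0, 0.0)]
update_world(bullets, 0.5)
print(bullets[0].x)  # 5.0
```

When the game grows and you actually measure a problem, you can restructure then; starting with the simple version costs you nothing.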
Prior to this, the USA did not have the capability to transport astronauts to orbit without working with Russia, and if Russia were to just decide no longer to cooperate for arbitrary diplomatic reasons, we'd be screwed. As you know, the shuttle program was halted some time ago and the vehicle was a total cow the whole time it was in service.
So whereas this has been done before on a technical level, this represents a substantial increase in the USA's actual present space capability, and a major step in reversing the decline of the USA's competence in space.
Why then would you expect humans to be adapted to such a situation?
He ascribed a problem to capitalism that is also a well-known problem of non-capitalism. The same thing happens in both places (actually worse under communism), so the blame is clearly misplaced.
(It's not like I can buy laptops any more, since those are all garbage even when I pay $4000 for a supposedly high-end system, so I certainly have spare device money laying around!)
Given that most new C++ features in the "modern day" are implemented as std::whatever, in the "everything is a library" way, it's extremely relevant.
The actual content of this article seems to be "we polled people who work on software, and they used positive adjectives to describe people they liked, and negative adjectives to describe people they didn't like". Then it ends with an advertisement for the author.
What any of this has to do with being objectively good at the discipline of programming, I have no idea.
If just setting integers is this complicated, what do you expect to happen when you are trying to solve real problems?
It makes sense to try to keep government agencies operating efficiently. Unfortunately I don't trust today's journalism to give me an accurate picture of whether what's going on falls into that category or not.
They do, but this line of argumentation is kind of vacuous if you don't include the actual cost of taxes. If taxes could be 1/3 of what everyone is paying, thus buying the same civilization except much more prosperous since money is used a lot more productively and people have much more choice about where it goes, isn't that better? (This 1/3 isn't meant to be a real number; it's a thought experiment).
Alternative way of looking at it -- if this argument works without cost, then why shouldn't everyone just pay 100% taxes?
So it seems like a lot to me.
Here's an idea, why don't we simplify our Byzantine tax filing processes so that the whole thing doesn't cost so much. I know, I know, all that sweet H&R Block tax lobbyist money is addictive, but it would be better for the country if congress would put down the pipe.
A better way to think of it is as the equivalent of the relativistic block universe. All these different spaces already exist in some superspace, and ‘random’ events take you from one space to a neighboring one. Nothing is manufactured.
Another reason is that the author seems to have put a lot more engineering into the Rust program than the C program. Most of the word-count of the article is devoted to extra engineering on the Rust program! (The whole thing about sorting out lumps, etc). It's reasonable to infer that this also mirrors the situation prior to the first benchmark. If you put more effort into optimizing one program than another, you'd expect the higher-effort program to be faster, all else being equal.
It's embarrassing that HN thinks this article is worth posting. There might be something to write about here, somewhere, but it would have to be framed in a very different way in order to be honest. Also, it would be a much shorter article.
I think given the current state of things it would be irresponsible for compilers to generate heavy instructions unless asked. Forget trying to be smart about it ... we already fail to be smart about things that are much simpler and more visible.
More interestingly, this may be what all CPU behavior looks like in 10 years, because if Intel has to resort to this kind of design now, why would that change any time soon? Instead of worrying about primarily keeping the execution units full, people trying to write fast code may be primarily concerned with keeping them NOT full so that the chip doesn't slow down. Which sounds crazy and hard to deal with.
But beyond that ... "smart pointers" and the like make your program slow, because they postulate that your data is a lot of small things allocated far away from each other on the heap. I spoke about this in more depth at my Barcelona talk earlier this year.
Also note that this category of answer is basically saying, "look, if you mostly manage your own memory, then GC takes less time!" That's true, but a large part of the value proposition of GC in the first place was to remove the burden of memory management. Once you are saying actually, GC won't do that for this class of application, then really what you are getting out of GC is memory safety (provided the rest of the language is memory-safe). On the one hand, hey, memory-safety is a benefit. On the other hand, I don't think very many people in game development would trade that much performance just for memory safety.
(And in fact in game development we very often have to do unsafe memory things. So really what ends up being said is "much of the system has memory safety" which, really, does not sound very alluring.)
In a 7ms frame, you are spending most of that frame doing the work of rendering the actual frame (unless your game is so trivial that the GC is going to be easy / fast anyway). An additional millisecond is going to cause you to miss your deadline and drop a frame. Dropped frames feel really bad.
If your player has a 144Hz monitor then your pause time, rounded to the nearest millisecond, has to be 0ms.
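To put numbers on the frame-budget point above, here is the arithmetic (a sketch; the 1 ms pause is just an example figure):

```python
# Frame budgets at common refresh rates, in milliseconds.
def frame_budget_ms(hz):
    return 1000.0 / hz

budget_60  = frame_budget_ms(60)   # ~16.67 ms per frame
budget_144 = frame_budget_ms(144)  # ~6.94 ms per frame

# An example 1 ms GC pause as a fraction of each budget:
share_60  = 1.0 / budget_60    # ~6% of a 60 Hz frame
share_144 = 1.0 / budget_144   # ~14.4% of a 144 Hz frame
print(round(budget_144, 2), round(share_144 * 100, 1))
```

Since most of that budget is already spent on rendering and game logic, there is essentially no slack left for a pause of any size at 144 Hz.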
This is an obvious way to tamper with the results; it's just more of the same kind of p-hacking that bad researchers are so often doing. They are using "we re-do the study with a larger population" as a way to re-roll the dice if the first die roll doesn't come up the way they want. (Note that if the die roll did come up the way they want, they don't re-do the study with a larger population in order to see if the replication fails).
Nobody should be taking this seriously.
That is not true today ... is it? Any examples?
By the mid-1990s we had a very clear idea of what 80s music was (electric guitar, synth and rap) and even what 90s music was (grunge and all this electronic stuff that I thought was super boring but a lot of people liked so whatever). It’s 2018 ... I have no idea what 2000s music was. Everyone can probably name a bunch of musicians that were big then, but what sound was being pioneered? Messing around with vocoders / harmonizers? That seems very limited and small.
What is the sound of 2010s music? I have simply no idea.
Even software that is supposed to be useful is so terribly slow. Computers are between 2 and 4 orders of magnitude faster than programmers today experientially believe they are, because today’s culture of programming has rotted so thoroughly. Do you really need a quantum computer when 3 orders of magnitude are just sitting there on the table waiting to be picked up?
I am not much educated in this particular issue, but I have to say the language of this blog post led me to doubt that the author is a credible source of information.
> If you can only survive as a corporation that does the bare minimum to take care of employees you have no right to be an employer in the free market. Most corporations _can_ function with organized labor, but would rather not because it cuts into profits.
Think about this from a systems perspective. If you are a country that requires businesses to take a certain amount of overhead, then some percentage of businesses will just not be viable. Therefore your entire economy is x% smaller (at least). You can claim that this is offset by the economic well-being of the employees, but that's not at all obvious and would require a lot of justification. (If it were true, you'd expect countries like France to be more healthy economically than the USA, whereas in fact France is quite stagnant). If your economy is x% smaller, it means you are not competing effectively with other countries and have less leverage when dealing with them. And the shrunken economy has real consequences on the standard of living of people living in the country. etc, etc.
If you think of this only from a lens of "capitalists are evil people who deserve to be taxed to support the good workers" you are going to be missing most of the picture.
This happens in other APIs too (we definitely had it happen with DX11), it's just that OpenGL is a lot more complicated than anything else due to its history, so it has proportionally more bugs.
Look into what it takes to write the minimum viable OpenGL program, written using non-deprecated routines, that puts a textured triangle on the screen. It sucks. On top of that, OpenGL is slow and gives you no way to create programs with smooth performance -- for example, it will randomly recompile shaders behind your back while you are trying to have a smooth frame rate.
1990s-style OpenGL was good for the time. In 2018, OpenGL is a pile of poop.
If you have more EVs running, there is more incentive to upgrade the power generation structure, because it produces more environmental benefit.
Also, come on, the minority of generation comes from gas, but you are calling it "methane powered ICs by proxy". That looks like extremely motivated reasoning (to put it charitably).
All those articles can go burn in a hot, hot fire.
If you’re homeless, why wouldn’t you come to California?
Interventions often don’t do what you wanted them to do.
The inflation rate for a computer with a fixed set of specs is massively negative -- it gets cheaper every year. So it is for many technological devices.
But our whole economy is a mixture of technological stuff (that drops in cost over time, i.e. negative inflation) and non-technological stuff (burritos and health care).
The overall inflation rate is an average across the entire economy. Even if you believe the reporting is not distorted (which is dubious), the fact that so many goods' prices drop quickly over time implies that many goods and services must have prices rising much faster than "inflation" would predict. Something has to balance that average!
Baumol calls the technological stuff the "progressive sector" and the non-technological stuff the "stagnant sector". As time goes on, prices in the stagnant sector continue to rise until they consume almost all spending.
Baumol made specific predictions based on this model in 1960 that have turned out to be consistently true for 50 years ("the cost of healthcare will continue to rise to degrees that will seem scary" and so forth).
Furthermore, it's not like it is some weird complicated or hard-to-substantiate theory. It is just math, not much more complicated than the definition of the average. Given how big the consequences are, and how hard to argue with, it surprises me that this idea occupies so little of the public conversation.
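Since it really is just math, here is a toy version of the model (all numbers are illustrative assumptions, not Baumol's actual figures): two sectors whose goods are bought in fixed proportion, with productivity growing only in the progressive sector while wages track economy-wide productivity.

```python
# Toy Baumol cost-disease model. People buy one unit of each sector's
# output. Progressive-sector productivity grows 3%/year (assumed);
# stagnant-sector productivity does not. Wages rise with progressive
# productivity, so the stagnant sector's unit cost rises with them.
def stagnant_share(years, growth=0.03):
    prog_productivity = 1.0
    for _ in range(years):
        prog_productivity *= 1 + growth
    wage = prog_productivity            # wages track productivity
    cost_prog = wage / prog_productivity  # stays 1.0 forever
    cost_stag = wage / 1.0                # rises with wages
    return cost_stag / (cost_prog + cost_stag)

print(round(stagnant_share(0), 2))   # 0.5 -- equal shares at the start
print(round(stagnant_share(50), 2))  # 0.81 -- stagnant sector dominates
```

That is the whole mechanism: nothing in the stagnant sector got worse, yet it eats an ever-larger share of spending, and the share keeps growing without bound.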
If it takes 1 person to run 1 microservice, we are all doomed.
And most developers making these kinds of games today fail to make their money back and go out of business.
I was in charge of one of the games you listed, so I know something about this topic.
The game industry is a big place with many different kinds of people in it, all of whom have different motivations.
If you build a picture for yourself wherein everyone is That Guy At EA Who Put Loot Boxes Into Battlefront 2, then that picture will probably get in the way of your understanding of what is being said by everyone else in the industry who is not that one guy.
This article is targeted primarily at developers, not at gamers. It is an "oh crap what do we do" article, because believe me, that is a big problem and in general we do not know what to do.
This was not some "cost of games need to go up because blah blah" justification article ... it is a straightforward look at what has been happening and some musings about what we can do about it, with "cost of games will probably rise" thrown in as a 6-word aside near the end (at which time he also points out that nobody wants to do this, which is true).
You're reading the article you want to read, not the article that the author wrote.
The trains do not run often during the day. If you miss one, have fun waiting 20-40 minutes for the next one. They stop at midnight.
It takes a long time to get to your destination, and the destinations available are not a very large subset of where you actually want to go. Most of the time you will have to chain together another form of public transport on one or both ends, and the transfer time between different forms of transport is often long and pads out the length of a commute tremendously.
The stations and trains are unsafe, and people get attacked on them with some regularity.
On top of all these things, BART is very expensive!
I don't know how you could think BART was good unless you had never used any other comparable transport system.
It’s weird that even your reply seems to assume it’s SpaceX’s fault. Why? (“I don’t like to see SpaceX fail...”)
It's like the people on the right saying Edward Snowden is obviously a traitor/spy because he went to Russia. Uhhh... something very obvious is being ignored there.
Also, come on ... Joe Rogan is definitely not right-wing. Jordan Peterson and Dave Rubin do not self-identify as right-wing. And your argument just seems a little extreme. "The far right is just a slip away from the reasonable right, therefore the reasonable right is dangerous?" But you don't apply that same idea to the left? Why not?
The real pattern here is that he went to the shows that would have him as a guest for reasonable discussion.
2. See 1.
3. Actually, charging from 110v would take more than overnight -- several days at least if you are empty! It's very slow (a lot slower than half the 220v rate, because a certain amount of power goes to overhead like cooling the batteries during charging). Even so, I got by on 110v for years. As for having only one dryer socket ... buy a splitter cord for 5 bucks? I am not sure why you think you need a contractor and permits, unless you are just trying to fabricate reasons why electric cars are a problem.
4. See 1.
5. The market today is tremendously more developed than it was in 2008 when Tesla started selling their first car, and everyone thought electric cars were just golf carts that were completely infeasible. Why do you think this trend would stop now? On the luxury point ... see 1.
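The 110v claim in point 3 above can be sanity-checked with rough numbers (all assumed for illustration, not official specs for any particular car):

```python
# Rough charge-time estimate. Assumed illustrative values: an ~85 kWh
# pack, a 110 V / 12 A wall circuit, a 240 V / 40 A dryer-style
# circuit, and ~85% of input power reaching the battery (the rest is
# overhead such as cooling during charging).
def hours_to_charge(pack_kwh, volts, amps, efficiency=0.85):
    input_kw = volts * amps / 1000.0
    return pack_kwh / (input_kw * efficiency)

wall  = hours_to_charge(85, 110, 12)   # ~76 hours -- several days
dryer = hours_to_charge(85, 240, 40)   # ~10 hours -- overnight
print(round(wall), round(dryer))
```

So "several days from empty on 110v, overnight on a dryer circuit" falls straight out of the arithmetic.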
My Tesla Roadster does not corner like a Porsche 911, it's true, but it does not matter for everyday driving, because it is so effortlessly faster than anything else on the road: when I pull away from a stoplight, whoever I want to cut in front of is 50 feet behind me.
Instant acceleration is worth a lot, too.
For comparison: My previous car was a BMW M3. I would much, much rather drive my 2010 Tesla Roadster, and that is an old car at this point ... 2020 Tesla Roadster is going to be amazing.
> The range and the charging times are still an issue, especially if you don't live in California.
My roadster goes 340 miles on one charge. The upcoming 2020 Roadster goes 620 miles on one charge.
Admittedly, battery capacity costs money. Price-sensitivity is the reason most of these newly-announced cars don't have as much range. It is not a technical limitation.
And as with any such technology component, battery costs will continue to drop over time.
2. Newer Teslas (Model S, Model X, Model 3) have charger planning built into their standard map program. About other EVs, if I were an EV manufacturer I would do a license deal with Tesla to allow my cars to use their charging network. But maybe they'll build their own. The one Tesla built is pretty good ... if a small company can do that, GM can do way more, and do it faster. So even though "there's no infrastructure" has been raised as this huge problem for years, it does not in actuality look like much of a problem.
3. Good electric cars have their own charging circuitry built in, so you just plug it into the wall. You can even charge off 110 volts, though it takes a long time. A dryer socket is much more reasonable. Just being able to plug in your car, in your garage, is much more convenient than going to a gas station. (This presumes you have a garage to park in. If you park on the street, different story.)
4. Electric motors and drivetrains are much simpler and much more robust than mechanical engines and drivetrains. Electric cars just do not tend to need repair in the same way. (I have had my car for over 7 years and it has never yet needed a repair of this kind).
5. Tesla Roadsters still hold their value very well, considering. But this may be in part due to the fact that it's a rare car. I am not sure about the Leaf, etc.
6. Maybe, yeah.
He ported a text game, that someone else had already designed and implemented, to iOS.
This is not a huge amount of work compared to what most game development people do, and is not particularly challenging compared to what people are doing with 3D graphics, etc.
I am pointing this out not to be mean, but to respond to your point: "those who work as hard as he does and are as smart as him". What he did for A Dark Room was something pretty much any iOS programmer can do, and in those conditions it's natural not to expect to be noticed in a crowded market. i.e. he evidently did not do anything noticeably smarter or work noticeably harder than anyone else.
This doesn't mean he can't do smarter and bigger things, just that a port of ADR is not that, and neither were his other descriptions of projects he did in the meantime (for example, look at the screenshots for Mildly Interesting RTS).
I think when this is the limit of what one is attempting, one has no right to complain that people aren't buying one's stuff (and should not be surprised at that either!)
He seems to think that "rolling the dice" is the primary thing going on, and that he can't influence this substantially by any amount of development skill, proper tactics, and so forth.
I wonder if this kind of mindset is a side-effect of modern-day political correctness -- that if you succeed, it is 100% due to your good fortune (combined privilege and luck) and not because you did anything special to get there. The upside of such an attitude is that we recognize luck and recognize that people who failed did not necessarily fail through fault of their own. One major downside is that we feel a lack of agency, and this lack of agency is almost certain to drastically decrease the chances of future success.
Be careful what you believe, because it matters.
For my part, I think talent and skill and grit are huge factors in success, so if I want to succeed at something, I am able to formulate some kind of a plan. I don't feel that I am "just rolling the dice", even when luck is involved (which it is all the time, in everything).
Yes, good connections are expensive for some people. If you want to do something about that, maybe you include amount of latency in a player’s ELO or something. I am just saying that I do not like Valve’s approach to the problem (which has been inherited by many other games) because I care about games and it makes games worse overall.
P.S. Someone the other day asked why I don’t post much to HN (and don’t put much effort in when I do). This kind of junk is exactly why!
But I only think this because I have had the experience many times in the past!
I feel the best place to focus my creative energy is on making livestreams, doing speeches, and just on day-to-day programming.
Putting effort into a posting here often doesn't take too much energy, but it does take some, and I'd rather put it into something bigger than mostly toss it away.
As you become successful in your field (or wherever), and further internalize the habits that are necessary to be successful, it's clear that many of these things are easy to do, it's just that people don't want to do them.
In other words ... it's obvious that many people don't want to be successful, and if they were to introspect deeply, they would see this clearly. In fact what they want is to be somewhere comfortable in the middle of the herd, not having to do too much work.
Most people want to be comfortable, not 'successful' in a way that requires ambition. But many people are brainwashed enough by the rhetoric of success that they don't realize it's not what they want.
There's also something I haven't figured out yet. Every time I give advice, I get a number of responses from people with self-defeating attitudes, explaining how this advice can't possibly apply to them because blah blah blah. These people build up belief structures that are obviously intended to keep them mired in their current situation, smelling of low self-esteem and defeatism. "Obviously" it's better not to be stuck in these belief structures, yet people will defend them vigorously, and in some cases fiercely. I don't yet fully understand why, except maybe that if someone believes there is a solution to their problem, then it must be their fault that they haven't solved it, and/or that there will be a clear failure that is their fault if they attempt to solve it.
Looking at the way it’s used in this comment, I would say it is ascribing malice where malice is unlikely, and would be implausible in the first place. So maybe my brain’s heuristic is justified.
This whole LSP thing is a mindbogglingly bad idea, brought to you by the same kinds of thought processes that created the disaster that is today’s WWW.
If your mapping is very complex compared to the ostensible simulator, then the mapping is actually doing most of the simulation. So the simulation is not mostly running when the ostensible simulator runs, it is running when you perform the mapping.
If you are inside the universe being simulated and think you can do that mapping, it seems unlikely that there exist enough time and space for it.
Of course the problem here is "competent". On two different occasions people have offered to do the job and they were terrible.
Most proponents of unit tests use horrible programming languages; they are afraid to change code because anything could break at any time. Stop using those languages and most of the problems 'fixed' by unit testing just disappear.
So they wanted to claim to offer a high-quality product, but don't think they can actually do that.
Startup culture is so terrible.
If this were actually true, why wouldn’t we have evolved an extreme aversion to touching our faces?
Also, have you ever seen a baby? What does a baby do if you leave it alone to crawl around?
If you change your thought-experiment to a paid service, good luck being cheaper than CalTrain.
I disagree with that. Most people are not successful, so if your target is the average, you are aiming at a data point that represents lack of success. It is important to understand that most successful people are not normal, and the higher the level of success, the less normal they are.
> "The best time to plant a tree was 20 years ago. The second best time is now."
It is a little bit correct when it says that regret is not a useful reaction to a past you're unhappy with, but even that by itself is misleading. Regret is a useful emotion that helps you shape future actions. What is not useful is paralyzing regret, or any flavor of regret that keeps you wallowing in the past.
When he says "you are not behind", that is mostly wrong. If you're 25 and aren't yet doing anything individual and attemptedly groundbreaking with your life, you probably are behind, if that kind of thing is your goal. Sticking your head in the sand is not going to make this better. Being complacent and saying "it's fine, I am only 25, no wait 26, no wait 27" until you are 40 isn't going to help either.
There is a reason the human mind is able to conjure phantasmal pictures of "where we should be" -- because that is useful. If you choose to ignore that in order to have a shallow feel-good time in the short term, you do so to your own detriment.
All that said, if you are genuinely content with where you are today, then everything is fine and you don't need externally-imposed images to tell you where you "should" be. This advice is only for people who deep-down want to build interesting new things.
My last comment is ... this seems like an excerpt from a self-help book written by someone who perhaps should gain further life experience before writing a self-help book. When you decide to write a self-help book you take upon yourself a substantial ethical burden, because if you give the wrong advice, you can affect many peoples' lives in a negative way. So you should make sure you really know what you are talking about.
I can't vote "don't flag this". So if there are approximately two sides to a discussion, and one side wants to flag it to silence the discussion, then the discussion is going to get flagged no matter what.
So the side that wants to silence just selectively silences the opinions they don't agree with, and they win.
Maybe it is due to the users, but if that is so, it feels wrong enough to give me a pretty big loss of faith in the dynamics of the community.
(A) Understands people hardly at all, is constantly confused by them, but understands computers very well
(B) Understands computers hardly at all, is constantly confused by them, but understands people very well
Which of these two candidates is going to be able to design and build a complex software product?
I am saying that this kind of mob shaming-and-silencing mentality is deplorable. And if you choose to engage in that, then you will alienate a lot of people, including many of the best people.
I am not saying anything about government, and as I said in my previous posting, I find it weird that people keep jumping to this. We're talking about ethics, not law.
I want to live in a society that embraces liberal values like freedom of expression. Preventing the government from encroaching on those values is a good idea. But if we then go and clamp down on those freedoms everywhere else, then it won't matter that the government doesn't do it -- nobody will be able to express themselves freely anyway.
This seems to be the society that the 'progressives' want and it disturbs me enough to have completely alienated me from that movement, and I am far from the only one, so I don't know why they aren't stopping and questioning the efficacy of this philosophy right about now.
If we are really a society that embraces liberal values, then we want those values to be upheld throughout the society, not just in the part explicitly controlled by laws.
This has given pause to some people -- for example, Sam Harris, one of the top artists on Patreon, is evacuating the service (even though this is likely to cost him a fair bit of money) to avoid the future potential to be financially pressured over ideology.
Not so much irrational as not particular enough; this line of reasoning doesn't really work.
The thing is ... at any one instant, the amount of memory you are able to recall / visualize / etc is very small, and if your mind is occupied by that thought, you won't have any thoughts about the present simultaneously. It is only the apparent continuity of time that seems to link these things.
So when you say "thinking about all the other things in my memory", well, you can't experience all the things that are supposedly in your memory. You can only experience one small part of that at any time. If there is only one time, then the other stuff does not exist, you just have confidence that it exists for some reason.
I am of a personality type that I don't think I could be happy without creative success (loosely defined as, having done a good job on creating things that would not exist if I hadn't made them). In a previous phase of life, I was not successful at making things, and I was pretty unhappy. Now I am successful at making things, and am much more happy (though I have also developed several mind-management skills as well).
If you are talking about "1m+" as the sole gauge of success, I don't think that means very much.
It's just, their corporate culture makes them very conservative about spending money. SpaceX they ain't.
Most of the "old languages" did not have this problem. This was a product of the 90s when people built a bunch of stuff really fast without stopping to think about whether what they were building was really a good idea.
So, it's a couple of decades later, and that part of the 90s spirit is still in full force today, but there are many, many more programmers. The logical conclusion is that the garbage dump being built today is way worse than the garbage dump built in the past, and that should scare you if you think typo debugging was bad.
Concrete example: I see a lot of complaints from women about being talked over in conversations at conferences. I used to get talked over a lot too. It seems frequent that there are one or two people in a conversation who will talk over anyone with a less-pushing conversation style. I think most of these people are not doing it on purpose, they just really want to say things and don't properly gauge the social balance of the conversation. Also there are other weird human tendencies happening -- for example, someone habitually wanting to display how smart they are -- which, while being mildly negative, are not negative in a way that involves hostility to others.
Why is it "significantly more brittle"? It is a well-specified interface. It is less brittle than talking over a socket because the kinds of points of failure involved with sockets don't exist in this case.
> And it can't be spec'd with a schema that isn't just "read the headers."
What does that even mean? It's a protocol just like any protocol, except you get the added benefit that for many languages it can be typechecked. Why are you claiming it can't be specified or that someone has to "read the headers"? What headers?
No. If your language cannot call into a dynamic library using a well-defined C ABI for your platform, then it is already failing to speak a standard protocol. Building all kinds of crazy, complicated, slow infrastructure in order to get it to successfully speak some other protocol, is a symptom of modern-day clueless programming.
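To make the "well-defined C ABI" point concrete, here is a small sketch of calling into a dynamic library with nothing but POSIX `dlopen`/`dlsym` and the function's C signature. The library name `libm.so.6` is a Linux-specific assumption; on macOS the math functions live in libSystem.

```c
// Sketch: calling a function in a shared library through the platform's
// C ABI via POSIX dlopen/dlsym. "libm.so.6" is a Linux-specific name.
#include <dlfcn.h>

// The entire "spec" needed is the function's C signature: double cos(double).
typedef double (*cos_fn_t)(double);

static cos_fn_t load_cos(void) {
    void *lib = dlopen("libm.so.6", RTLD_NOW);
    if (!lib) return 0;
    // POSIX guarantees this void* -> function pointer conversion works.
    return (cos_fn_t)dlsym(lib, "cos");
}
```

No socket, no serialization, no separate process: the caller and the library agree on a calling convention and a signature, and that is the whole protocol. (Compile with `-ldl` on older glibc.)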
> Particularly since the best API will use the compiler's symbol tables (avoiding implementing syntactic and semantic analysis twice, buggily)
Yes, this is of course a good idea. Why one presumes this requires a separate running process, I have no idea.
The fact that the HN community seems to have jumped aboard this idea, "yeah let's just require a server to do something simple like format text in your editor", is completely flabbergasting. People just seem to have NO IDEA how much complexity they are adding, and don't care.
Maybe in 5 years our machines will be running 10,000 processes at boot because people will want a server for every operation...
Tesla expects to produce and sell 80,000 units of the Model 3 in 2017.
We'll see if they hit that target, but come on ... in what reality can this be described as having "completely missed the ball"?
> You can only concentrate so many hours a day. Without that resting your productivity drops.
This is the kind of thing people tell themselves to justify procrastination. If you are unable to concentrate for long, maybe you have damaged your attention span by too much internet browsing, and the cure is just to stop?
For example, in this post you seem to be setting the upper bound around 3x. But actually, it is trivial to be 3x more productive than average: (a) Don't browse the internet while at work; (b) Sit there and spend your time working on the actual problem, not ratholing on programmer fixations that have nothing to do with the end result. Done. Congratulations, you are now 3x, before any consideration is made of experience level or talent or smartness or unique instinct or whatever else.
It's a more-than-linear improvement over 40 hours, because when pro-rated you have a lower density of context switches, getting-back-up-to-speed, etc, as happen in the morning or when you eat or such.
Maybe too many people have done long-term damage to their attention spans via the internet? I dunno.
At the risk of being mildly provocative ... are any of the people you know, who can't do more than 40 hours of work in a week, world-class in their field? If not, maybe there is a causal link between these two things?
Yeah, that is a much better position to be in than someone who is drowning in useless make-work all day.
But, after having reached this level of good set-up, one is now sort-of in competition with everyone else who has reached a similar good set-up. Well, even just ignoring the competition part, which is maybe a red herring, obviously you are self-gating how good you want to be by working more or fewer hours. So maybe these people decided a certain amount is "good enough" and didn't want to push past that, which is totally fine. But I am just raising the point that you can always push further if you want to.
If you only work a minimum number of hours within your field, you are unlikely to emerge as one of the peak achievers or thought leaders in your field. That's just because you learn more from experience, and working more hours gives you more experience.
You can extrapolate from there what this means for companies and individuals.
I am not at all saying that companies should ask people to work long hours. (I run a software company, and we are super-lax about hours, people showing up at the office, etc). But I am saying that if an individual wants to be an expert in a particular field, that person should probably work a lot (and probably wants to work a lot anyway, due to interest in the subject). This doesn't necessarily have to be at the company; it could be at home, on personal projects, whatever. But the deeper and more challenging the project is, the better you learn, and it's easier to have one project that is deep and challenging than somehow to have two in parallel. And if only one is deep and challenging, then you are sort of idling with half your time. So there are basically two paths to this kind of deep work: work for a company, make sure you get a project that's really good, and then work hard on it; or go do your own thing, make sure you have enough money somehow, and work hard on what interests you.
This also means that "work-life balance" is not a thing for experts the way it is for normal people. But that's fine, because for these kinds of experts their work is a serious part of their life and the two things are inseparable.
Of course if you don't feel this way about what you're working on, that it is a serious part of your life, then this strategy doesn't make sense; and I would not encourage people who don't feel this way (who are the majority of the population) to work that hard. I am just pointing out that there are some of us for whom a different life strategy is best.
The intent of my comment is not to be mean, it's just ... there is a lot of noise on places like Hacker News and this article is part of that noise because, look, just about all compilers have had intrinsics for decades now at least, popcount is a very common one, so it's not surprising to see it turn up. It's not impressive as the title suggests, it's extremely common. And it's nothing specific about Rust because most production-quality languages do it. So both major elements of the article title are pretty much incorrect.
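(For reference, a minimal sketch of such an intrinsic in C: GCC/Clang's `__builtin_popcount`, which with a suitable `-march`/`-mpopcnt` compiles to a single POPCNT instruction. The Rust equivalent is `u32::count_ones`; MSVC's is `__popcnt`.)

```c
// GCC/Clang intrinsic: the compiler emits a hardware popcount when the
// target supports it, rather than any loop written by the programmer.
static int bits_set(unsigned x) {
    return __builtin_popcount(x);
}
```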
And it's fine not to know that when you're a beginner, I am not knocking that at all. But there's something about writing articles that then get broadcast, that give the wrong impression to other new people who are trying to learn. It's useful information that there is a popcount intrinsic in the Rust compiler, but this would be much more educational coming from someone who understands the context of all this stuff and can explain the real situation. Which may be the author of this article someday, maybe even someday very soon -- I don't wish to be inappropriately negative -- but it's not today.
I never liked going to school, and I think higher education is going to go through an existential crisis pretty soon, if it's not happening already. But one good thing about the old system is that at least there was this idea that you should work hard, and really learn the material, before you go presuming to teach people. And I think that's a very good idea. If you're inexperienced and there's a shortage of teachers and teaching needs to happen, then go for it -- but otherwise I think it is very important to keep in mind what one does and does not understand, and who understands it better, and to not presume to teach until one is in a good position to do so.
I know this goes a little bit against the current philosophy of "programming is great! Anyone can do it! Rah rah," but actually I think on closer inspection it doesn't. There's nothing wrong with participation, and community, and everyone contributing, etc. But it's important to keep an understanding of the difference between beginner contributions and advanced contributions, otherwise it seems possible to suffer a severe degradation of skill in the field over time, because how do people know what to shoot for if people of all expertise levels are teaching them and they can't tell the difference because they themselves are beginners?
It's unclear how well the benchmarks in this linked article generalize to other applications. If you are just popcounting in a tight loop, probably pretty well, but who does that? In reality you have other things going on, so if this method is occupying too many execution units or polluting your cache, you would see the effect of that on the rest of the program. But it's program-dependent, thus unclear.
Oh, Hacker News.
If someone can't solve a problem like this off the top of their head, does it not act as a strong signal that they are a beginner and you should probably look elsewhere for quality information?
But if you find popcount too "magical", the commonly-known fast way to count bits is via masking, shifts and adds, so that you do it in log(n) steps. Which also would perform much better than this solution.
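That masking/shifting approach, sketched for 32 bits (these are the standard SWAR constants, as in Hacker's Delight):

```c
#include <stdint.h>

// Counts bits in log2(32) = 5 steps: pairwise 2-bit sums, then 4-bit
// sums, then byte sums, then one multiply to total the four bytes.
static uint32_t popcount32(uint32_t x) {
    x = x - ((x >> 1) & 0x55555555u);                 // sums of 2 bits
    x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u); // sums of 4 bits
    x = (x + (x >> 4)) & 0x0F0F0F0Fu;                 // sums of 8 bits
    return (x * 0x01010101u) >> 24;                   // total in top byte
}
```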
So what you're really saying is "the compiler managed to make a pretty efficient representation of the naive solution" which is fine but it does not mean your code is fast.
I am not claiming that the standard 8-hour day is the maximum; but if a shorter day really is better, I would guess productivity peaks around 7 or 7.5 hours. Again, this depends on what kind of people you are talking about. I personally work 60+ hours a week, most weeks, and I prefer it that way.
The fundamental problem is that he doesn't actually want to be doing what he is doing, despite the rhetoric of "great team and awesome project". Come on, is that really how you feel about it deep in your heart, or is it empty SV rhetoric?
Two things will help this author:
(1) Strike out on your own, following your own motivation only. Yes you have to figure out how to make ends meet financially, but that is your lot in life. Fortunately it is easier to do this with computers than in most other fields.
(2) Meditate, learn to observe your mind and why it does what it does, so that you don't feel powerless or subservient to things like burnout. It's hard to explain the transformation that takes place, but being able to stand next to or outside these mental processes is very powerful.
Let me put it this way ... all "garbage collection is fast" claims are saying the following thing:
"It is faster for the programmer to destroy information about his program's memory use (by not putting that information into the program), and to have the runtime system dynamically rediscover that information via a constantly-running global search and then use what it gleans to somehow be fast, than it is for the programmer to just exploit the information that he already knows."
It sure sounds like nonsense to me.
- guy who used Tcl in like 1992 and helped write a 'compiler' for it, etc.
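To make the "exploit the information the programmer already knows" side of that contrast concrete, here is a minimal arena allocator sketch (names are mine, not from any particular codebase): when you know a batch of allocations all die together, freeing them is one pointer reset, with no global search of any kind.

```c
#include <stddef.h>

// Minimal arena: the programmer's knowledge that these allocations share
// a lifetime is encoded directly, so "freeing" is a single pointer reset.
typedef struct {
    unsigned char buf[1 << 16];
    size_t used;
} Arena;

static void *arena_alloc(Arena *a, size_t n) {
    n = (n + 7) & ~(size_t)7;                    // keep 8-byte alignment
    if (a->used + n > sizeof a->buf) return NULL;
    void *p = a->buf + a->used;
    a->used += n;
    return p;
}

static void arena_reset(Arena *a) { a->used = 0; }  // free everything at once
```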
Forget 16ms, I would be happy to get to an order of magnitude slower than that for much of today's software ... it would be a massive increase in human happiness.
I'm not just saying that people have lost track of the importance of efficiency. I am saying they've lost track of how to actually do it. I think at least 95% of the programmers working in Silicon Valley have no practical idea of how to make code run fast. Of the remaining 5%, a very small number are actually good at making code run fast. It's a certain thing that you either get or don't. (I didn't really get it when I started in games, even though I thought I did ... it took a while to really learn.)
This is what I object to about the rope representation -- it intentionally destroys this information that you actually want available most of the time. I don't think that's nice at all.
It's possible you could make the rope work better in this sense by annotating each piece... I dunno, haven't thought about it.
As for the worst-case performance thing ... I think my scheme would do fine with super-long lines or 10 million line files. But dude, I don't even have an editor today that works okay on 10k-line files, and I don't think it's the internal data representation that's the problem, I think it's because of all the other decisions that get made (or lack thereof).
There is an extensive body of literature on parsing that goes back decades. Most of it I don't think is that useful. But some of it is about parallel parsing. If you are interested, there are quite a number of people with something to say about it. However, the speed wins in practice are not very big.
On the other hand, if you just write the parser so that it's fast to begin with, you don't really have a problem. The language I am working on parses 2.5 million lines of code per second on a laptop, and I have only spent a couple of hours working on parser speed. To do this it does go in parallel, but it goes parallel in the obvious way using ordinary data structures (1 input file at a time as a distinct parallel unit). So it's not "parallel parsing" in the algorithmic sense.
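A sketch of "parallel in the obvious way," with counting lines standing in for actually parsing a file (POSIX threads assumed; the names and the 64-job cap are illustrative, not from any real compiler):

```c
#include <pthread.h>
#include <stddef.h>

// One input file per thread, each with its own independent result slot:
// no shared mutable state, so no algorithmic cleverness is required.
typedef struct {
    const char *text;   // contents of one "file"
    size_t lines;       // result, written only by this job's thread
} ParseJob;

static void *parse_one(void *arg) {
    ParseJob *job = arg;
    size_t n = 0;
    for (const char *p = job->text; *p; p++)
        if (*p == '\n') n++;
    job->lines = n;     // stand-in for building that file's syntax tree
    return NULL;
}

static void parse_all(ParseJob *jobs, size_t count) {
    pthread_t tids[64];              // sketch assumes count <= 64
    for (size_t i = 0; i < count; i++)
        pthread_create(&tids[i], NULL, parse_one, &jobs[i]);
    for (size_t i = 0; i < count; i++)
        pthread_join(tids[i], NULL);
}
```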
There are, of course, computer science concepts that are very smart. But we don't need these to save us from slow software, because today's slow software problem is just the result of people doing bad things in layer upon layer. We have to stop doing all the bad stuff and dig ourselves out of the hole we're in, just to get back to neutral. Once we are back at neutral, then we can try thinking about some computer science smarty stuff to take us forward.
Sorry for presuming age + experience level, it's how this came across to me. Actually I think Rush is a prime example of "excited about general ideas that turn out not to be right or relevant to much". But we were students, and I guess that is what students often do.
I agree modern editors are too slow and bloated. I would write one if I didn't have way too many other things happening. But I don't think they are slow and bloated due to a lack of computer science concepts. I think they are slow because most of the world, over the last 25 years, has lost the art of writing software that is remotely efficient.
If I were to write an editor, it would store text as arrays of lines (since lines are what you care about) with maybe one level of hierarchy, such that each 10k lines of the file are in one array. I think that would be fine and if it ran into problems with very large files, relatively minor modifications would take it the rest of the way. (Of course this is untested but I feel pretty confident about it). Rather than calling malloc all the time, a specialized allocator would be in play.
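An untested sketch of that layout (all names here are mine): a chunked array of lines, where line lookup is two array indexes and no tree walking.

```c
#include <stddef.h>

// Sketch of the "arrays of lines, one level of hierarchy" layout.
// A real editor would back these with a specialized allocator rather
// than calling malloc per line.
#define LINES_PER_CHUNK 10000

typedef struct {
    char  *text;   // the line's bytes
    size_t len;
} Line;

typedef struct {
    Line   lines[LINES_PER_CHUNK];
    size_t count;          // lines actually used in this chunk
} Chunk;

typedef struct {
    Chunk **chunks;        // one pointer per 10k-line span of the file
    size_t  chunk_count;
} Buffer;

// Line lookup: two indexes, no rebalancing, cache-friendly within a chunk.
static Line *get_line(Buffer *b, size_t i) {
    return &b->chunks[i / LINES_PER_CHUNK]->lines[i % LINES_PER_CHUNK];
}
```

Inserting a line shuffles pointers within a single chunk; only crossing a chunk boundary touches the top-level array, which is why one level of hierarchy is plausibly enough.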
I do think it's a good idea to make a better editor so I wish you good luck with that (dude I am so sick of emacs).
Excitement is nice to feel, but it takes some experience to know when excitement is really aimed in a productive direction. Otherwise we end up with the kind of motivation that so often produces over-complex and mis-aimed software: having a "cool idea" for "exciting technology" and then looking for places to apply it, and the applications don't really fit or don't really work, but we don't want to notice that, so we don't.
To pull examples: an entire one of these essays is on "paren matching" and how it would be really great if you monoidized (ugh) and parallelized that ... the basic idea of which is instantly shot down by the fact that language grammars are just more complicated than counting individual characters. Hey bro, what if there is a big comment in the middle of your file that has some parens in it? The author didn't even think of this, and relegates this to a comment at the end of that particular essay: "Jonathan Tomer pointed out that real parsing is much more interesting than just paren matching." Which is a short way of saying "this entire essay is not going to work so you probably shouldn't read it, but I won't tell you that until the bottom of the page, and even then I will only slyly allude to that fact." Which in itself is contemptuous of the reader -- it is the kind of thing that happens when you are excited enough about your ideas that the question of whether they are correct is eclipsed. This leads to bad work.
There's the essay about the scrollbar -- if you have a 100k-line text file, do you really want a really long line somewhere in the middle to cause the scrollbar to be narrow and tweaky in the shorter, well-behaved majority of the file? No, you probably don't! But this shoots down the idea that you might want to do a big parallel thing to figure out line length, so he declines to think about it. In reality what you probably want is the scrollbar to be sized based on a smooth sliding window that is slightly bigger than what appears on the screen (but not too much).
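One way to sketch that sliding-window idea (the window margin is my guess, not a known scheme): size the horizontal scrollbar from the longest line near the viewport, so one pathological line elsewhere in the file has no effect until you scroll near it.

```c
#include <stddef.h>

// Scrollbar sizing from a window slightly bigger than the viewport:
// the pathological long line only matters when it is nearby.
static size_t window_max_width(const size_t *line_widths, size_t line_count,
                               size_t first_visible, size_t visible_lines) {
    size_t margin = visible_lines / 2;        // "slightly bigger" window
    size_t lo = first_visible > margin ? first_visible - margin : 0;
    size_t hi = first_visible + visible_lines + margin;
    if (hi > line_count) hi = line_count;

    size_t max = 1;                           // avoid a zero-width divisor
    for (size_t i = lo; i < hi; i++)
        if (line_widths[i] > max) max = line_widths[i];
    return max;
}
```

To make the thumb change size smoothly rather than pop, you would additionally blend this value over a few frames, but that part is presentation, not data.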
Besides which, computers are SO FAST that if you just program them in a straightforward way, and don't do any of the modern software engineering stuff that makes programs slow, then your editor is going to react instantly for all reasonable editing tasks.
I don't want to be overly critical and negative -- these sorts of thoughts are fine if they are your private notes and you are thinking about technical problems and asking friends for feedback. It becomes different when you post them to Hacker News and/or the rest of the internet, because this contains an implicit claim that these are worth many readers' time. But in order to be worth many readers' time, much more thought would have had to go in ... and as a result, the ideas would have changed substantially from what they are now.
I didn't read past essay 4, so if it gets more applicable to reality after that I don't know!
If you want to talk about Haskell, fine ... I don't know anything about Haskell, though, and I am interested in high-performance programming, which is an area where Haskell cannot currently play (nor can any GC'd language). Making claims about how the performance of an operation in a slow language doesn't get any slower under certain circumstances isn't that interesting to me.
but history did not go that way....
No, a real-world case is that I have a giant program that uses my own geometric primitives, and now I want to start heavily using a library. I know they are just fricking 3D points or quaternions or whatever. Yet because of some weird ideology you want to increase the amount of gruntwork I have to do, and make my life much less pleasant.
You don't want an 8-dimensional point literal that takes 8 arguments directly, that's never going to be readable. You might want to use a builder. More likely you want to load it from a data file or something on those lines rather than constructing it directly. Where are you even getting these 8-dimensional points from?
WHAT ARE YOU TALKING ABOUT
In Haskell (or indeed in C) it doesn't necessarily have performance implications; an 8-element structure may have exactly the same runtime representation as those 8 elements being passed distinctly.
Wow, okay, this conversation is over.
(b) This has performance implications, not least because of the ABI. And depending on what language you are using, they can be quite severe (good luck if you are using one of those languages that always puts classes on the heap).
It is hard for me to believe that this is a piece of style advice that anyone writing serious software would follow.
If you need to do something wherein the basic task requires 8 arguments' worth of information (which happens A LOT) then trying to factor that into 3-argument pieces is going to give you something Byzantine that is probably also buggy (and it could get extremely heinous, in a way determined by the data dependencies internal to the procedure you are factoring). And if you somehow succeed at all this, congratulations, you just did a bunch of engineering that did not improve the functionality of your software in any way. (In fact it probably made the software take longer to compile).
If doing a certain job needs 8 pieces of information, it needs 8 pieces of information. It doesn't help anyone to try to break that up.
Similarly with this:
keep the complexity of the functions as low as possible
Not really. If you are just factoring some block of complexity in to 4 blocks of less-complexity, well, now you have the same amount of complexity as the original code, plus the complexity of the call graph, and the fact that the person who comes along to read the code will not be able to clearly see the control flow.
There definitely are many cases when factoring a procedure into simpler things is beneficial. But to claim that it's a good idea all the time, or even half the time, is I think mistaken.
But this is all that is really necessary. Once you start getting into components, you add a lot of complexity (even if the pitch is that it's "simple").
In general my policy is that when things get really complicated or specialized, the application knows a lot more about its use case than some trying-to-be-general API does, so it makes sense for the application to do most of the work of dealing with the row heights or whatever. (It's hard for me to answer more concretely since it depends on exactly what is being implemented, which I don't know.)
The actual uncertainty comes from the fact that the two quantities are Fourier transforms of each other... and just by that relationship, inherently, if one gets very localized (== very high frequency bump in its space), its Fourier dual gets spread out very far through space. (You can sort-of analogize this if you know about audio ... a sharp spike in temporal space, when transformed into frequency space, becomes a very big spread of values, because all those frequencies are relatively blunt and they have to somehow fit together to make this sharp thing, which requires an enormous number of them. Or if you go the other way, a sharp spike in frequency space means one frequency, which transforms into an infinitely-long sine wave in temporal space. So think about that kind of thing, except instead these are waveforms where the y value is kind-of the probability of getting that particular x-value as a result if you perform a measurement.)
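In symbols, using the standard Gaussian wave packet as the example: a packet of width σ in position space transforms to one of width proportional to 1/σ in momentum space, and the product of the two widths is bounded below.

```latex
% Fourier duality for a Gaussian wave packet:
\psi(x) \;\propto\; e^{-x^2 / (2\sigma^2)}
\quad\xrightarrow{\ \mathcal{F}\ }\quad
\tilde{\psi}(p) \;\propto\; e^{-\sigma^2 p^2 / (2\hbar^2)},
% narrower in x forces wider in p, and the widths multiply to a constant:
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}.
```

The Gaussian is the case that saturates the bound; any other shape spreads out at least that much.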
I have worked on games and engines using several different systems, and the only ones I ever enjoyed treat entities as plain regular structs that you operate on with procedures.
The games I have shipped all treat entities this way. And I never thought "I wish I had made the entity system more complex". The top N problems on our list, where N > 10, are always about graphics drivers or APIs.
Entities are hard enough when they are just structs. Don't insist on making them harder, or you are likely to shoot yourself in the foot when it comes to performance issues, later.
I don't know that you can blame anything on "the fundamentally antagonistic world" since all businesses face a fundamentally antagonistic world and their goal is to overcome that.
If the wage gap is happening and is really so drastic, if women are being undervalued so hard, etc, then there should be a massive Moneyball-style opportunity for people to start companies that correct this error. With the advantages you'd gain by adjusting hiring, you'd completely trounce the competition.
This hasn't happened yet though. Either people are being slow to do it, or the wage situation is not as straightforward as it is being put in these arguments.
Immediate mode GUI systems are allowed to keep state around between frames and the most-featureful ones do. The "immediate mode" is just about the API between the library and the user, not about what the library is allowed to do behind the scenes. The argument that retained-mode systems are inherently better at this doesn't hold water; it is kind of an orthogonal issue.
The thing is that list fusion and whatnot is all just there to get around the handicap that was placed there in the first place by the language paradigm. So you start by insisting on shooting yourself in the foot, then put lots of armor on your boot so the bullet hopefully bounces off.
I assume by "vectors" you mean arrays ... there is no case in which this can be faster than arrays, because in the limit, if the list fusion system works perfectly, it is just making an array. A thing can't be faster than itself.
Here is a (somewhat old) video explaining some of the motivations behind structuring things as IMGUI: https://www.youtube.com/watch?v=Z1qyvQsjK5Y
"What happens if you try to present an immediate mode API for UIs is the status quo with APIs like Skia-GL."
I don't know what Skia-GL is, but in games, the more experienced people tend to use immediate-mode for UIs. (This trend has a name, "IMGUI". I say 'more-experienced people' because less-experienced people will do it just by copying some API that already exists, and these tend to be retained-mode because that is how UIs are usually done). UIs are tremendously less painful when done as IMGUI, and they are also faster; at least, this is my experience. [There is another case when people use retained-mode stuff, and that's when they are using some system where content people build a UI in Flash or something and they want to repro that in the game engine; thus the UI is fundamentally retained-mode in nature. I am not a super-big fan of this approach but it does happen.]
"and you draw strictly in back to front order so you completely lose your Z-buffer"
That sounds more like a limitation of the way the library is programmed than anything to do with retained or immediate mode. There may also be some confusion about causation here. (Keep in mind that Z buffers aren't useful in the regular way if translucency is happening, so if a UI system wants to support translucency in the general case, that alone is a reason why it might go painter's algorithm, regardless of whether it's retained or immediate).
"But that's the API that these '90s style UI libraries force you into."
90s-style UI libraries are stuff like Motif and Xlib and MFC ... all retained mode!
I don't agree that an IMGUI style forces you into any more shader switches than you already would have. It just requires you to be motivated to avoid shader switches. You could say that it mildly or moderately encourages you to have more shader switches, and I would not necessarily disagree. That said, UI rendering is usually such a light workload compared to general game rendering that we don't worry too much about its efficiency -- which is another reason why game people are so flabbergasted by the modern slowness of 2D applications, they are doing almost no work in principle.
Back to the retained versus IMGUI point ... If anything, there is great potential for the retained mode version to be slower, since it will usually be navigating a tree of cache-unfriendly heap-allocated nodes many times in order to draw stuff, whereas the IMGUI version is generating data as needed so it is much easier to avoid such CPU-bottlenecking operations.
In reality the problem is trivial, you set up a scissor rect (or explicitly mask the pixels in your shader) and then render only stuff overlapping that square. You don't need to invert the pixels for it to be fast; you can render an arbitrarily nice cursor effect.
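A software sketch of the scissor idea (the framebuffer and names are illustrative; in OpenGL this is just `glScissor` plus `glEnable(GL_SCISSOR_TEST)`): every pixel write is clipped against the rect, so the cursor effect can be arbitrarily fancy and still only touch pixels inside it.

```c
#include <stdint.h>

// Illustrative software "scissor rect": writes outside the rectangle are
// rejected, exactly like the GL scissor test but in plain C.
typedef struct { int x, y, w, h; } Rect;

typedef struct {
    uint32_t pixels[64 * 64];
    int width, height;
    Rect scissor;
} Framebuffer;

static void put_pixel(Framebuffer *fb, int x, int y, uint32_t color) {
    if (x < fb->scissor.x || x >= fb->scissor.x + fb->scissor.w) return;
    if (y < fb->scissor.y || y >= fb->scissor.y + fb->scissor.h) return;
    fb->pixels[y * fb->width + x] = color;  // any cursor effect goes here
}
```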
Sorry, that is plainly false. There is nothing preventing you from treating an offscreen buffer just like any other buffer of non-dirty pixels. Treating the back buffer that way is slightly less conventional but is still just fine.
The point of engineering is to solve actual hard problems.
The main exception is low-number-of-player token-ring style games like RTSs with tons of units. Those usually simulate in lockstep, with the full state of the world extrapolated from inputs that consist of a very small amount of data. This means network traffic is relatively low, but in order for this to work you have to have complete knowledge of everything and exactly when it happened, which means no packet loss can be accepted and everything must be processed in order. So then you have the same kinds of problems as with TCP (even if the underlying transmission is via some other protocol) ... thus these games operate with some large amount of latency to hide these problems.
But, this network design is only the case for a minority of games. Just about any modern multiplayer game that is drop-in/drop-out, where the developers really care about quality of experience, is better off going UDP. (This is not to say that developers always do the best thing, since it's much easier to just say screw it and talk over TCP and call it a day. The temptation to do this is heightened because of all kinds of problems with NAT punchthrough and whatnot; because so much traffic is Web-oriented these days lots of routers mainly care about that, which causes all kinds of interesting annoyances. Thus games that do talk over UDP generally fall back to TCP if they are unable to initiate a UDP connection).
Well, there is one other case of games that run in lockstep, which is when they are console games made by developers who want to avoid incurring the costs of running servers (which are often much higher on consoles because the platform holder charges you out the nose). When you are running in lockstep like that it is more like the RTS scenario above, and thus it doesn't matter much if you use TCP because you are already taking the quality hit. But this is a cost-cutting kind of decision, not an it's-best-for-gameplay kind of decision.
P.S. It's not a good idea to call someone naive about a subject where you yourself may not know enough to correctly judge naivete.
For example, if you are transmitting the position of some guy in a world, N times per second ... and you drop one particular packet ... that's fine, you just get the next one and you have more up-to-date information anyway.
TCP will block the entire connection when that packet is dropped, waiting until it is received again, and not giving any of the subsequent information to the application. This is bad in THREE different ways: (1) By the time the new position is received, it is old and we don't care about it any more anyway; (2) Subsequent position data was delayed waiting on that retransmit and now most of that data is junk too, EVEN THOUGH WE ACTUALLY RECEIVED IT IN TIME AND COULD HAVE ACTED ON IT, but nobody told the application; (3) Other data on the stream that had nothing to do with the position was similarly delayed and is now mostly junk too (for example, positions of other guys in totally other places in the world).
It is hard to overstate how bad TCP is for this kind of application.
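The UDP-side fix for this is tiny (a sketch; the struct and names are mine): stamp each state update with a sequence number and simply discard anything older than what has already been applied, so a dropped or reordered packet never blocks newer data.

```c
#include <stdint.h>
#include <stdbool.h>

// Each position update carries a sequence number; the receiver ignores
// anything stale, so packet loss costs nothing but that one stale sample.
typedef struct {
    uint32_t seq;
    float x, y, z;
} PositionUpdate;

static bool apply_update(uint32_t *last_seq, const PositionUpdate *u) {
    if (u->seq <= *last_seq)
        return false;    // stale or duplicate: we already have newer info
    *last_seq = u->seq;
    return true;         // caller moves the entity to (x, y, z)
}
```

(A real protocol also handles sequence-number wraparound, which this sketch omits.)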
Whereas I have not done your specific test, I know that for the file sizes of executables I deal with in everyday work (around 10MB), the amount of time I wait for linking is woefully disproportionate.
lld is still slow, it is just less slow than the other linkers.
This is not to disparage anyone working on linkers or say they are not smart. I think they just don't tend to be performance-oriented programmers, and culturally there has become some kind of ingrained acceptance of how much time it is okay for a linker to take.
Now it's "well it doesn't go 500 miles and it doesn't charge in 5 minutes, and okay, it is the quickest to 60mph of any production car in the world, but only on a full charge and that sucks!!!"
EVs keep getting better. Another 10 years from now, what will anti-EV people be saying?
On a meta-level, invoking "virtue signalling" is itself a signal that you want to ad-hominem the human subject rather than discuss and resolve the issues.
It is the opposite of rationality, and it's a bit weird that so many People Who Think They Are Rational are into the idea.
See this example: https://www.youtube.com/watch?v=cfyWvJdsDRI
"Nobody really wrote software rendering like that beyond CG classes".
I read this as a claim that nobody in general wrote software renderers, when by "like that" you just meant using the specific techniques he used.
That said, I still have to disagree, in the sense that, to get to a fast software renderer, you start with a slow software renderer. Nobody does all the crazy optimizations a priori ... so stuff like a divide per pixel was common, say. Calling trig functions in inner loops is of course goofy, but my presumption is that in the next step of refinement those would be lifted out of the loops, because that is the way things are always done.
People in the video game industry wrote tons of this stuff. We would spend weeks figuring out how to get one or two instructions out of the rasterizer or scanline converter, etc. I know this because I was there. I wrote several software rasterizers, and I learned how to do it by reading papers and magazine articles written by other people who wrote software rasterizers.
I have no doubt that other industries did so as well.
Even more recently, companies like RAD Game Tools built as products software rasterizers that are very fast (e.g. Pixomatic).
Also, what's in this article is a simplified introductory take. It is actually much much more complicated than this. (It doesn't look to me like he is doing perspective-correct shading, for example.) Also this guy's code is crazy slow compared to what you'd write in the real world, but hey, it is a tutorial.
True in some sense, but mostly useless. Come on.
Anyway, it's the wrong problem. I don't need help understanding x * 3 + y. I need help understanding what these 30kLOC in these 17 files do.
But come on, I have used emacs for 25 years, and on a daily basis it stalls for annoying amounts of time while I am just doing something simple like editing a .cpp file. Today. In the year 2017.
But this doesn't seem to be nearly enough to tip the balance in terms of operating a business overall ... I consistently hear from people how much it sucks to operate a small business in Europe. I don't see what's wrong in principle with having worker protections kick in at a certain company size, but that doesn't seem to be popular.
If you think about companies as "us versus them", where "us" is workers and "them" is giant faceless monolithic corporations, then your idea that enforcing worker protections is a high priority might make sense.
But most companies are small. I run a software company ... I am just a guy trying to get by, who now in addition to the normal-person's burden of making my life go, has to also make a company go, and that company provides jobs for 10-12 people.
If you make my situation much harder than it is, the company would cease to exist or would downscale to 2-4 people, shedding the majority of the jobs. I am not a faceless corporation, I am just a guy who wants to get interesting things built. My little company is certainly not set up to "exploit workers", especially not on an industrial scale.
Now, paradoxically, if you add a lot more friction to what needs to happen to run a business (regulation around hiring, firing, invoicing, etc), then people like me drop out, and then what you mostly have left is the larger companies who do want to exploit workers because that is just kind of how larger companies work. Plus then you lose all the innovation / energy / economic activity that comes from smaller companies. It maybe seems like not the best idea. (If it is, how come Silicon Valley is not in France?)
As an investor, I fund a small French company and I have seen some of the crap they have to deal with just because they have a handful of employees. It makes me very glad I don't live in France.
This is true for a very small percentage of the population.
And that was the 90s, after Lisp had been around for decades...
For me it's a double whammy. I don't like the lack of headphone jack, and whereas I feel like I could manage grumpily ... for me it kills the excitement of buying the new device, and I think that is important. (My standard listening headphones are Etymotic ER-4Ps; there is no way I am going to downgrade to AirPods).
But the bigger part of the whammy is iOS. iOS is completely terrible at this point. I just can't consistently control the phone. A large percentage of taps or swipes do things I did not intend (how many, I'm not sure -- 20%? 33%?). It's just completely crazy. They need to get rid of 3D touch, get rid of the double-tap one-hand accessibility mode or whatever it's called, get rid of weird swipes from the edge, and fix the horrible inconsistencies in the way autocorrect works (or, please, offer a system that just underlines words-thought-to-be-wrong without changing them, and let me tap on them to change them; or use the current autocorrect system but let me tap on a word to un-"correct" it; the fact that the current system just changes what I typed and gives me no recourse to fix it, apart from laborious deletions and re-typings which I often have to do 2 or 3 times, is just haughty and offensive)... and in the meantime, they might as well redesign the rest of the UI. Because right now the phone is not a joy to use, it's a constant exercise in frustration. I haven't felt good about using iOS since sometime back around iOS 5 or 6.
So it's no mystery to me why sales might be slowing ... I don't want a new one if it's going to continue the downward trend.
The reason is that whoever wrote the GC for your language has to solve an extremely general problem for an extremely large body of users with very different use-cases.
A memory management system for a particular program only has to solve the problems of that program, which is a tremendously simpler thing to do.
General-purpose GCs are like the F-35 or Space Shuttle ... due to the broad nature of demands they are very complicated, and are much more expensive and perform more poorly compared to specific solutions.
I think most documentation sucks and I dislike trying to read it, because it's very hard to get a picture of what's going on, and what these procedures / data structures / etc are really for.
I am much happier when I can just look at a straightforward and clear example, and then just use the documentation to look up specifics of how things work after I already get the basic idea.
It's not because I "need worked examples of things to understand them", it's because that is the way I like to work, because I have had many instances in my life of trying to make sense out of documentation that seems to have been written from a mindset of "formal writing involves not actually telling the reader what things are really for, straightforwardly". I don't know why that disease is so common, but almost all documentation is like that.
I had a lot of fun playing Shenzhen I/O and TIS-100, so if you want to do some programming challenges, I'd recommend trying those.
Stack allocation is what your computer is built from the ground up to do. It is not some kind of workaround or optimization, it is how software was originally designed to work.
I am not just talking about putting copies of things on the stack, but having few copies to begin with, etc.
If you handle strings, and you are copying and freeing strings all the time, that's just slow code.
And it's not the case that GC makes the slow code faster... it's that certain techniques enable GC to not fail catastrophically on slow code.
It's a little bit disturbing that folks are ready to extrapolate this to some kind of universal rule.
In a high-end video game, for example, we hardly allocate anything short-term at runtime. The whole program is architected to avoid it. When we allocate, it tends to be things with medium-to-long-term life (texture maps, sound effects, render targets, whatever) so a generational system is useless there. In fact we don't use GC on these kinds of things at all, because we also need to control exactly when they are deallocated so we can put something in their place, because memory is limited.
In three years, people will have realized it wasn't such a good idea, and moved on to the next "this will solve everything" fad.
Once you have seen enough of these fads go by, it's pretty obvious.
I'd be careful about extrapolating future directions of computer science from something that seems cool this year.
I have an original Tesla Roadster, bought in 2010, with a battery that is basically the first thing they figured out how to do in order to put a car together. (The Model S battery is much more advanced). I drove the Roadster daily for 6 years, and I had about 12% capacity loss after those 6 years. This was a much better situation than Tesla projected (I don't remember what they said at the time, but it was something like 30-40% loss at 7 years, and for a relatively low price they sold an optional battery replacement plan that kicks in at 7 years).
Supposedly the Model S's chemistry is much, much better. Just saying "they're lithium batteries" is kind of a red herring, because there are many many subclasses of lithium battery, and at least according to Musk the fact of lithium is not nearly the most important part, but what really matters is the composition of the cathode and anode: https://chargedevs.com/features/tesla-tweaks-its-battery-che...
[Edit: And the theory that they would have preemptively hobbled the car's maximum range by (.85^6) is just crazy, because it means they could instead have advertised a car that had THREE TIMES THE RANGE on its initial launch, and "range anxiety" was one of the biggest issues they had to overcome. They could have said OUR CAR GOES SIX HUNDRED MILES ON ONE CHARGE, which would be way more important than hiding some degradation.]
(In fact we go out of our way to not do malloc-like things in quantity unless we really have to, because the general idea of heap allocation is slow to begin with.)
If you really care, then you actually profile your system and see what takes how much time, under which circumstances. The results of such a profile are almost always surprising.
I guess this is a basic cultural difference -- almost nobody in the HN crowd really cares whether their software runs quickly; there is just a bunch of lip service and wanting-to-feel-warm-fuzzies, with very little actual work.
In video games (for example) we need to hit the frame deadline or else there is a very clear and drastic loss in quality. This makes this kind of issue a lot more real to us. If you look at the kinds of things we do to make sure we run quickly ... they are of a wholly different character than "guess that calloc is going to do copy-on-write maybe."
The article says you should use calloc because it provides these optimizations. I am saying no, that's goofy, because it is not specced to provide these optimizations.
If you depend on copy-on-write functionality, then you need to use an API that is specced to guarantee copy-on-write functionality. If that means you use an #ifdef per platform and do OS-specific stuff, then that is what you do.
Anything else is amateur hour.
If copy-on-write is a desirable feature, then as the API creator, your job is to expose this functionality in the clearest and simplest way possible, not to hack it in obscurely via the implementation details of some random routine. (And then surprise people who didn't expect copy-on-write with the associated performance penalties.)
This is why we can't have nice things.
If you wanted to keep it the old way, and depend on the nuances of how an allocator stores memory, then ship your own allocator. Video game people do this as a matter of course; it's not a big deal.
In our current non-fictional universe, where we don't know whether there is free will, if you want to believe in it, maybe the burden of proof is on you to explain how it would even be possible given what we know of the universe.
So I am not sure why you think a fiction author is failing at his job by not explaining the mechanism by which there is no free will.
Which is supposed to be what is simplified as LOC goes down.
So if a supposed 5x-10x code reduction (which I've never seen real evidence of) doesn't lead to 5x-10x productivity increase, how much increase is there supposed to be? Surely more than zero?
Every time I have heard this kind of claim (with modern languages), it turned out not to be true except for trivial code or straw-man bad code in the 'bigger' language. So if you have real-world examples that have real-world effort put in, I'd like to see them! (I would be happy to be wrong.)
5x-10x productivity increase would be huge if it actually existed; it would be so unstoppable that everyone would switch to the new really-great language immediately. That hasn't happened, which should be a clue that maybe the increase is not there.
Even a 20% decrease in cost of engineering would be so large as to be unignorable.
I write something like 25kLOC/year (of shipping code, generally very complex stuff) and I don't even program full-time. The two projects I am working on now are 35kLOC (the smaller one) and 250kLOC (the medium-sized one).
If someone thinks 10kLOC is big, I have a hard time thinking of that person as a professional programmer.
(Numbers listed here exclude blank lines and comments.)
Everyone who understands American economic policy knows that the currency is being slowly devalued on purpose. This is not a conspiracy theory, it is common knowledge. The inflation target is always greater than 0. This is in part because of the perceived risks of deflation -- better to be on one side of the line than the other -- but also, generally, the point is to encourage people to spend or invest rather than passively save, because spending and investment grow the economy.
To a libertarian this is one of the most oppressive things about the way the government works currently ... it forces everyone to work more than they would ideally have to, in a sense. (But I say "in a sense" because if the economy were at a much less active level as "normal" maybe everyone would have lower quality of life. I don't know.) If you ever wondered why Ron Paul dislikes the Fed so much, well, it's because of reasons like this.
> "Good programmers know how their GC works. What, are you kidding, or am I misunderstanding?"
I think you are not understanding what I am saying.
You link your allocators into your code so you know what they are. You see the source code. You know exactly what they do. If you don't like exactly what they do, you change them to something different.
A garbage-collector, in almost all language systems, is a property of the runtime system. Its behavior depends on what particular platform you are running on. Even 'minor' point updates can substantially change the performance-related behavior of your program. Thus you are not really in control.
As for your other examples, apparently you're a web programmer (?) and in my experience it's just not very easy for me to communicate with web people about issues of software quality, responsiveness, etc, because they have completely different standards of what is "acceptable" or "good" (standards that I think are absurdly low, but it is what it is).
(And before you even start thinking about paying rent or whatever, if you are in the top tax bracket, enjoy paying 13.3% state income tax, which, after Federal tax, is a staggering 22% of your income.)
Good programmers understand how malloc works. What, are you kidding, or am I misunderstanding?
Performance-oriented programmers do not use malloc very much. As you say, you can also try to avoid allocations in GC'd languages. The difference is that in a language like C you are actually in control of what happens. In a language that magically makes memory things happen, you can reduce allocations, but not in a particularly precise way -- you're following heuristics, but how do you know you got everything? Okay, you reduced your GC pause time and frequency, but how do you know GC pauses aren't still going to happen? Doesn't that depend on implementation details that are out of your control?
> even though in practice SO many things do not deliver on this promise in shipped products!
But, "in practice" is the thing that actually matters. Lots and lots of stuff is great according to someone's theory.
The difference is if the memory management is manual, you have the ability to clean it up and reduce that overhead toward 0%.
If it's a system-enforced GC, you are limited in what you can do.
There is not even a reason to believe that the "outer universe" has such things as space and time or information as we know it, and no way to know what a "computation" might comprise in such a situation.
Maybe the situation is not that pessimal, and an outer universe is much like ours, but to prefer that belief one would need evidence, of which we have none.
Maybe you didn't ever have the experience of programming on an 8-bit CPU and don't get the joke? The machines in these games are comically limited, "this is like the cruftiness of an 8-bit CPU but even worse". The funny thing about TIS-100 is that it's a speculative fiction game, postulating an alternate reality -- what if we had gone down the path of multicore CPUs back when they were still super-primitive?
I don't want to think about what time it is when I go on Facebook or whatever, because I have more important things to think about. There is nothing irrational about that (though the premise that increased rationality is something to be desired in this context is deeply questionable to begin with and maybe slightly creepy.)
When an ISP can offer a plan that requires me to think as little about it as possible, that simplicity is a valuable service.
What gets super confusing is that you have a bunch of different stuff flying around. You have textures in different formats and render targets in different formats (some are in sRGB, some are in HDR 16-bit floating-point, some are other random formats somewhere in-between). You need to set up your shader state to do the right thing for both the input texture and the render target, and the nuances of how to do this are going to change from system to system. Sometimes if you make a mistake it is easily spotted; other times it isn't.
And then there are issues of vertex color, etc. Do you put your vertex colors in sRGB or linear space? Well, there are good reasons for either choice in different contexts. So maybe your engine provides both options. Well, now that's another thing for a programmer to accidentally get wrong sometimes. Maybe you want to introduce typechecked units to your floating-point colors to try and error-proof this, but we have not tried that and it might be annoying.
All that said, everyone is about to rejigger their engines somewhat in order to be able to output to HDR TVs (we are in the process of doing this, and whereas it is not too terrible, it does involve throwing away some old stuff that doesn't make sense any more and replacing it with stuff that works a new way).
There are entire organizations devoted to assessing the effectiveness of various kinds of charity and measuring how many lives they save (e.g. http://www.givewell.org/ and https://www.givingwhatwecan.org/), and their reports can be found within 20 seconds of googling, less time than it took for you to type your based-on-no-actual-information sarcastic judgement.
Sarcasm plus knowledge would be fine.
It enables a whole class of highly effective program manipulations that are just unavailable to a non-statically-checked language.
We test the hell out of our stuff, and it works way more reliably than most web sites I have ever seen. But we don't do it with unit tests, because unit tests are not very useful in complex systems, because they do not test anything hard!
How long does it take you to write and test all those tests? Could you have been doing other things with that time? At 40 lines of functionality, the tests are going to be at least as big as the things you are testing (??), so what kind of a multiplier are you taking just on lines of code written? How much does that cost?
[I run a software company where I pay for the entire burn rate out of my own pocket. So these questions are less academic for me than they are for many people.]
I think also what you're talking about is a function of programmer skill. I think if you have a good programmer write a 1000-line procedure, and a bad programmer write a 1000-line procedure, you are going to get drastically different things ... just like with anything.
Usually I just search for the name of the puzzle I want to edit (which is also how you'd do it if it were a ton of different procedures).
My experience has been that people on HN tend to interpret that part of the posting a little more extrapolatingly than I do. I think he is saying something pretty obvious, which is that when you can structure things in terms of pure functions, you don't have to worry about the side-effects that are one of the main issues you need to contend with when factoring things apart.
This is different from being a "fan of functional programming", i.e. believing you should use current functional programming languages to build your projects, or whatever.
If it's 100,000 lines of code, and you break stuff every 40 lines, you have now introduced 2500 procedures many of which don't really need to exist. But because they do exist, anyone who comes along now has to understand this complex but invisible webbing that ties the procedures together -- who calls who, when and under what conditions does this procedure make sense, etc.
It introduces a HUGE amount of extra complexity into the job of understanding the program.
(Also you'll find the program takes much longer to compile, link, etc, harming workflow).
I regularly have procedures that are many hundreds of lines, sometimes thousands of lines (The Witness has a procedure in it that is about 8000 lines). And I get really a lot done, relatively speaking. So I would encourage folks out there to question this 40-line idea.
See also what John Carmack has to say about this:
From reading it you would think Tesla was some kind of failure of a company, rather than a miraculous startup that has done what no American car company has managed to do in over 100 years.
"You see, the fact that Tesla has 400,000 preorders is actually a sign of failure!" Yeah, tell me more...
I stopped reading there. If you have total ignorance of how native applications work, maybe fill that hole before trying to evangelize that everything should be written in JS...
If you modify your example to 400 nested 10-line function calls, how does that change your comparison?
It's a case where some people are choosing to do something that is a lot harder than a straightforward parse ... but as a user, a straightforward parse is actually what I want.
That said, even if you thought this was the right way to go, I am not sure that the internals of their code would look anything like the kinds of parsing tools you are talking about, so I am not sure it supports your point in any way.
> And again, I'm not claiming that ALL parsing is hard.
Parsing is easy. The video you link above is harder, but that's not really parsing any more, it's more like "make sense of this text that is sort of like a working program", which is more like an AI problem.
But anyway. It's pretty clear you haven't written many parsers (or any) so I am going to stop arguing. If I were to "win" this argument I wouldn't get anything out of it. I am trying to help by disabusing people of the notion that certain things are harder than they've been indoctrinated to think. If you don't want that help, fine ... just keep doing what you do and the world will keep going onward.
If someone is going to be offended that a potential employer asks them to reverse a linked list in an interview -- something that seems a bit trendy in the web world lately and several such articles have made the HN front page -- then look, that person does not really know how to program, so of course they think it's hard to do basic stuff. Such a person's opinion on how hard things are is not that relevant to how hard they are given a reasonable background education.
Probably this sounds snobby to some people, but look, programming well is a never-ending pursuit, you can spend your whole life getting better, but it won't help anyone advance if we all pretend that everyone is good already.
It would not change at all, and I have no idea why you think it would, except to guess that the model you have in your head of a hand-written parser kind of sucks. They don't have to suck.
"...not knowing what language you've designed." I have no idea what you're on about here either.
Look, I think you are making things a lot harder than they are. I am not bragging ... I used to build lexers and parsers by hand 23+ years ago when I was a student in college and had almost no programming experience compared to what I have now. It is not hard. If you think it's hard, something is missing in your knowledge set.
(I also built stuff using parser tools 23+ years ago, and was able to very clearly contrast the two methods. Parser tools have gotten slightly better since then, but not much.)
There are a lot of reasons for this, but one of the basic ones is that the lexer does not need to interact in a complex way with the compiler's state. It is a relatively simple pipeline where characters go in one end and tokens come out the other.
Are you confident that the same programmers could have successfully built a line-counter if they built it using parser tools?
(1) Parse everything as though it were left-to-right.
(2) After each node is parsed, look at its immediate descendants and rearrange links as necessary. (Nodes in parentheses are flagged so you don't rearrange them.)
I can tell that the person above who is listing off a bunch of reasons not to use "recursive descent" hasn't written a compiler by hand ever (or not well). Most of the things he is talking about are easier to do by hand than in some complicated and relatively inflexible system.
Note that 'prediction' is mostly a red herring since you can look as many tokens ahead as you want before calling the appropriate function to handle the input. You would need to have a pathologically ambiguous language in order to make that part hard, and if your language is that ambiguous, it is going to confuse programmers!
In general, parsing is easy (if you know how to program well in the first place) and is only made more difficult/inflexible/user-unfriendly by using parsing tools. That doesn't mean that academic theories about parsing are bad -- it's good that we understand grammars deeply -- but that does not mean you should use those systems to generate your source code. (I do think it's a good idea to use a system like that to spot ambiguities in your grammar and decide how to handle them, because otherwise it's easy to be ignorant... But I would not use them to generate code!)
What they are doing is trying to write theories and build conceptual systems about how to do things. That is their job. But when it comes to practical matters, the best route to take, as someone who wants to build a working compiler that gives good error messages and where the parser does not hamstring the rest of it, is to ignore almost all that stuff and just type the obvious code.
Of course these things are relative or subjective and what have you, but it's pretty rare for people who have a lot of experience playing games to seek out stuff on iOS because of the great quality of games there.
If you build a system that incentivizes garbage games, that's what you get, and well, that is what we have.
Fortunately if you are someone like me, who wants to make actual good games, there are still platforms where you can do that, and do quite decently money-wise. I am hoping those don't go away.
But for the most part, it is just that AT&T kept buying smaller companies, which is just what happens in capitalism when one party starts to win, which is why checks on capitalism are necessary.
I am 44 years old, which means I remember growing up at a time when you were not allowed to own a telephone -- because AT&T exercised its corporate monopoly to control what you could plug into your AT&T phone line, and they would only permit that to be an AT&T phone, and they would not ever sell you an AT&T phone, they would only rent you one at an exorbitant price. And they didn't bother to provide you any variety in models, because why would they? There's one phone, that is what you get.
Also, if you wanted to call someone in a different area code, then I hope you are ready to shell out some cash...
If it weren't for state-exercised power, it is quite possible that things would still be this way.
I do not consider today's situation a disaster at all, relatively speaking. (For sure there are still many un-ideal things about it.)
I have had a substantial number of news blog stories written about me, and I think among the general population of news bloggers there's a lack of professional ethics of the kind journalists supposedly used to have. Certainly not all bloggers are bad; some of them are upstanding, but really the majority are not.
When the incentive is just to get the most hits, it is very easy for a blogger to present a quote or situation out of context, or even for an editor to slant a headline in a certain way, in order to make the maximally inflammatory result. When this happens, it is parasitic behavior -- they are degrading your reputation in order to make money. But the amount of money they make off that article is small compared to how much you value your reputation, so the result is massively net-negative to the world.
This has happened to me A LOT so I have a pretty well-tuned sense for how it happens. I also have a pretty long list of journalists and outlets I won't do interviews with ever again.
The issues get pretty subtle. For example, it is common for them to take a one-or-two-sentence aside from an interview and write a whole article about it, making it seem like you called a press conference just to say that one thing -- which is a massive distortion of your intent (and usually your personality). Because they want the most hits and people being enraged makes hits, it is usually a negative distortion. And it's intentional -- they are trained to look for these opportunities. I think it is very unethical, though of course there is nothing illegal about it -- you did say that exact thing.
I think as long as that is happening, it is hard to take these sites seriously as producing "journalism".
As someone who has been around the web from the beginning, I will tell you this is horse shit.
Andreessen was building Mosaic at NCSA when very few people knew what a web browser even was. (There were only a few browsers in existence at that time, most of them were unusable, and the most popular one displayed only text. ftp was still massively more 'popular' than the Web, and in fact so was gopher ... gopher, FFS.) O'Reilly hosted what was basically the first WWW conference, in New Orleans, sometime in 1992. The attendance was about 40 people -- that is how big a Web conference was at that time. Marc was there. (So was I). People were mad at him because Mosaic was hacking the IMG tag into HTML without waiting for everyone else to discuss and agree on a standard.
So yeah, you are denigrating someone despite having no idea what you're talking about. But hey, I guess that is par for the course on an internet forum.
Also. Marc actually had hair in 1992!!
Your argument is basically "the small company is small". So what? Do you propose that business can work in some other way? How else would you propose that it works?
Then it was "electric cars are too expensive, they are rich peoples' toys, and mainstream consumers will never want them anyway."
Now it is "Tesla will not be able to scale up to meet all the demand."
The fact that EV naysayers have been forced to cycle through this spectrum of responses in less than 10 years should be a clue of some kind.
Come on, seriously.
Imagine a hippie concept such as "you are effectively the same being as that guy over there, if either of you gets hurt, it's isomorphic, it's equally bad to the overall organism".
Now, imagine that this is objectively true, i.e. there is something in the basic laws of reality that, if you could observe it, would show the hippie idea to be obviously true.
Then perceiving this part of the laws of reality would be anti-fitness, so you would evolve to be blind to it.
This is possibly disingenuous, and at least overly rhetorical.
They are not "giving him a voice" to talk about anything related to anything racist, and I'm sure if he used his slot to talk about anything racist, he would get perma-banned from the conference.
It is hard to say more than this without just repeating things said in the conference's statement. The idea is that a professional society ought to be able to cohere even when the members of that society disagree on matters outside the subject at hand. It seems like a good idea.
I run a software company and I'll say straight up I would not hire someone with your attitude.
Sure, maybe it's different this time ... But usually it isn't.
If you can't do this, you are not competent at basic manipulation of data structures, and if the job requires that competence, you should fail the interview. Sorry, but that is how it is.
I think the idea that the game industry is "behind" other fields is kind of comical, given that games are some of the most complex software in the world, and big game teams have only a few hundred people on them, and meanwhile something relatively trivial like Twitter has 4000 people. It's true that game teams don't do a lot of Agile or TDD or whatever the next buzzword is, but that is because those things are mostly superstition and obviously don't work when you start attacking hard problems.
So if you are someone a few years out of school who learned TDD it is easy to say "games are behind, they don't do all the new stuff!!" while being unaware that almost all the new stuff is bogus cargo-cultism anyway.
I do agree that the game industry engages in unhealthy levels of crunch that are to its long-term detriment, but this is mostly an orthogonal issue to software engineering practices.
So cancelling whatever Prl is seems like yet more internet outrage culture that we would be better off with less of.
You could have a dummy version that just calls out to libc, for compatibility with systems that you haven't finished porting to yet.
This kind of phraseology is a clear sign that someone's programming experience is extremely narrow in scope.
Yes, I am saying that most plans on how to do things better are not right. Doing things better is often pretty hard.
But there always is some way to do better. The way you find that is you keep trying a lot of things until you build up an experience-based picture of what things are really like. As you get better at this, plans you formulate become more likely to be net-positive.
What I am saying is that TDD strikes me as a pretty terrible plan in the first place, the product of this kind of ideas-untempered-by-serious-experience.
Speaking for myself, I am pretty sure my own productivity would plummet were I to adopt TDD, and in fact I would completely lose the ability to build software as complex as I do; I would drop at least a level or two there. This does not necessarily speak to TDD's suitability for anyone else, which is why I am recommending to judge by output.
I am talking about any scheme of how to do things that is intended to provide benefit. These all start with "wouldn't it be better if X, because Y" and then a plan is made of how to bring this about.
Well, this plan is inevitably imperfect, so it is either that you don't get all of X, or the reasons Y were not correctly understood or accounted for.
Then, there are always some extra drawbacks that creep in that negate some of the benefits. Usually these drawbacks are very subtle, and they can be hard to notice because they are not things that the plan was trying to address.
In the end, usually the net result is negative: the scheme causes more damage than it provides in benefit. But usually it takes a long time to understand this clearly, because the drawbacks can be subtle (but sometimes they aren't, for example, in TDD, how much extra code you are writing all the time).
His 'defense' of the point is basically: Look, when you do TDD you have to put a lot more work into the tests than you thought! It is not just a simple thing!
Okay, fine, but ... Before embarking on TDD, the programmer had a picture in his head of what the costs+benefits of this change would be. Now you are telling him the costs are WAY higher. So a successful defense would have to then make the case that the benefits are also WAY higher.
But he doesn't. Because the benefits aren't higher, in fact they are lower (as is the case with every well-intended scheme in the history of anything.)
As usual my advice on this is: look at the people who build things you find highly impressive, and study how they did it. This is much more fruitful than reading the output of people who want to spend all day telling you how to program (which leaves very little time for them to build software that is impressive, i.e. they never even test their own ideas!)
9-dan is the highest rank in Go. It is not possible to play against anyone higher.
So I am not sure why you think it isn't a big deal.
We implement our own allocators all the time. If you can't even do such a basic thing legally, then the rules are obvious nonsense.
Yeah no. If you had gone back to 1995 and told me that gmail was what you would get when I have a supercomputer in my pocket, a super-super computer on my desk, and all web pages are served by SUPER-super-super computers, I would have quit the industry out of depression.
It is some horrible bullshit when you look at it in perspective.
About the quality issue, no surprise that I also disagree there: the web is especially crappy.
I do not consider any piece of software that I use to be performing acceptably (native or web), but there is a stark difference between the native apps and the web apps, in that the native ones are at least kind of close to performing acceptably, and also tend to be a lot more robust.
Web apps not working is just the way of life for the web. Any time I fill out a new web form I expect to have to fill it out three times because of some random BS or another.
Look at all the engineers employed by Facebook and especially Twitter. WHAT DO MOST OF THOSE PEOPLE EVEN DO? Obviously the average productivity, in terms of software functionality per employee per year, is historically low, devastatingly low. What is going on exactly??
This would be TREMENDOUSLY better than trying to make the browser into an OS.
An experience that is uniformly slow and uniformly broken a different way on every browser...
But as you say, there is not much point debating subjectivity here. It's not like I had the foresight to record benchmarks of how long it took web pages to appear, or to open a window, etc, back in the mid-90s.
Edit: How about if I put it this way:
If you go back in time to the 90s and tell everyone "20 years from now, we will have a much more advanced web where EVERYONE WILL HAVE A SUPERCOMPUTER IN THEIR POCKET", people would imagine the web would be amazing, and responsive and beautiful, and we would be doing some seriously intricate stuff.
Instead ... no, we have a pile of junk that only kind of works, and slowly at that. In terms of potential unreached, the web is kind of a massive failure. (Yes, it is "successful" in the sense that we are able to do a lot with it that we could not 20 years ago, but the mediocre is the enemy of the good, and all that).
In fact I gave a speech about this at Berkeley last week. I think it'll be online pretty soon.
So now you have at least heard someone claim this.
This is a boiling-the-frog kind of situation. They do just enough to get people today to accept what they're doing, then the next steps come later.
Experimenting with model-based programming or whatever other future programming paradigm is healthy. I think we should do a lot of that, because the way we program hundreds of years from now hopefully doesn't look that much like today. BUT, you have to also be aware that there's a reason why these are future paradigms and not current paradigms, and that people building real programs today need to do something that works today. There is no way we could have built The Witness in any model-based system known today.
Yes, this does not solve every possible stack overwrite. But look at the number that have actually happened in the field and whether this would have dramatically reduced vulnerability in those actual real-world cases. Most times it would.
Most notably, for overwrites happening within the local stack frame, you completely remove the possibility of overwriting the return address. This is a fundamental difference in the level of vulnerability of that kind of code.
It's called "reducing the attack surface". Well-known idea.
Maybe my post was a bit hyperbolic, but I chalk that up to being so annoyed at this.
ALL of these buffer overflows in C happen because the C stack grows backward, and old space is after new space.
If anyone gave enough of a crap to standardize a calling convention that went the other way, stack buffer overwrites would ALWAYS go into unused memory. Then security-minded people would switch to this calling convention for secure programs, and many problems would be solved.
(Of course heap problems would still exist but they are much harder to exploit and it is easy to make an allocator that tries to confound heap attacks.)
When you are shipping and maintaining code on 5 or 10 different platforms, this really matters, because the friction of adding new files becomes huge ... you have to go add that file to 5 or 10 different fuckity fuck build systems that are all uniquely terrible, and hey maybe Apple updated XCode to whatever the new lousy version is instead of the old lousy version, so you have to go through the rigamarole of installing that, which of course won't completely work, oh and the internet is slow today, and on some console platform that shall not be named our dev software didn't auto-renew its license and that is mysteriously timing out so now we get to deal with that for hours, blah blah blah.
This is not an exaggeration. You're lucky if you actually get to do any programming on the day you decide to add a cpp file.
Experienced programmers who ship on a lot of platforms really want the simplest and most straightforward way of using code, and this is what that is for C and C++.
More modern languages could be designed to be better at this, but they usually aren't. (A 'package manager' is not really the answer, it is a solution to a kind-of orthogonal problem and usually brings in way too many of its own complexities.)
Because the line of how much is too much is indistinct, someone is going to guess wrong before too long. This is even before you mix in perverse organizational incentives involving short-term view or individual profit vs long-term company health.
Sad but true.
In the late 1990s-early 2000s a decent gaming PC would cost you between $2500 and $3500, and those numbers represented more money than they do today.
A "very high end computer by today's standard", when it comes to games, would have a GPU that's substantially faster than what Oculus is requiring ... I find their requirement shockingly low and wonder if that is a tactical mistake.
For example, you would not have a SpaceX or a Tesla. You would not have Y Combinator as it exists today. You would not even have the video game that I am about to release next month.
Yes, there are a lot of jerks who amass capital and do nothing with it or who do irresponsible things. But you also have people who use it to work very hard to make positive change in the world, and even if those people are in the minority, their impact is very large.
But in practice this is not a problem. So what if I don't know exactly what type a particular thing will be in the end -- I know generally if it is a number, or an array/list, or a hash/index... that is all I need to know. I use one of those basic types. If I need to change it later I change it later, and the fact that I am in a statically-typed language is great for changes like this because it helps me make them with high confidence.
This is why I don't believe that anyone who makes this argument in favor of dynamic languages really has that much experience in static languages. The actual outcome in real life is the opposite of what is described.
There is more mental overhead in dynamically-typed languages, actually ... it's just less visible because it's implicit! It's the overhead of having to "keep all the type information in your head", which dynamic type proponents sometimes seem to be saying is a good thing.
It's not good because it is a tax on everything you do! Whereas in a statically-typed language, sure, you have to do the little extra overhead of putting the types in the program text, but this is quite freeing in the long term, because you can then drop the burden of having to think about what type something needs to be, in most cases.
(It also serves as documentation / literate programming.)
My approach to programming tends to involve rewriting things several times, or heavily modifying them, and as someone who has been programming for 34 years, in a lot of different situations, I find that static typechecking is by far a superior framework when refactoring or rewriting code. It is not even close.
What are you even talking about? This sounds like an assertion from someone who doesn't use statically-typed languages and is just guessing.
To change a data structure in a statically-typed language, you change the declaration then fix any compile errors. It is easy in most cases.
In a dynamically-typed system, data structures actually get way more ossified, because when you change a structure you don't really know what might be broken or when you are really done making the code correct again... Therefore programmers avoid this.
I know a number of people who have quit Valve and almost all of them would cite organizational dysfunction as one of the top reasons for quitting.
I don't know whether that is true -- I have never worked there -- and I don't wish to spread any ill rumors about Valve. I'm just saying that I know a bunch of people who have worked there who think the flat thing is one of Valve's biggest problems (another one being the incentive structure; of course these two things go hand in hand).
You are saying SpaceX lost a 'first' here to BO, but that is not really true, and that is Elon's entire point. This is not the first VTOL rocket landing either; maybe it is the first rocket to officially reach space and then subsequently VTOL land, but that is not as big of a 'first' as most people think it is.
Which is not to diminish what BO just did, it is just to see it in an accurate context.
Hell, if gas is so great, why does it need government subsidies?
Do your research. Subsidies for traditional cars and for oil are massive and dwarf anything Tesla has ever gotten.
I would use SDL in Linux ports of things because it is the closest to a reasonable native API on Linux (which says more about Linux than SDL actually). But even having done so I would then use native APIs in Windows, OSX, etc.
If your standard of quality is high enough, it won't really be possible to reach it using a blanket API like SDL everywhere.
Graham says the subjects of bias "have to be better to get selected", but what is really going on is they have to be better according to the metrics of the judge which are essentially arbitrary.
One layer is the obvious cable TV bundling that most of us probably think is evil and should die.
But the second layer of bundling is ESPN itself. How many people care about "sports" in general, enough to pay for all the sports? No, people usually are into a couple of sports at most. They like baseball, or they like basketball and football, or they like the other football, etc. Or even, they like specific players.
I think there is a future to be had in selective channels available on the internet that cover specific sports in much greater detail than ESPN ever would.
This incident report reminded me of the 'Game Day Exercise' post from 2014:
in which one robustness check that should be a continuous-integration kind of test, or at least a daily test of a normally working system, is such a big deal to them that they make a big 'Game Day' about it, and serious problems result from this one simple test.
After they have lots of paying customers, of course.
I know we are supposed to be positive and supportive on HN, but this was a red flag that the entire department had no idea what an actual robust system looks like, and was so far away from that, after having built a substantial amount of software, that expecting them to ever get there may be wishful thinking.
So I am completely unsurprised that they are having this kind of problem. The post-mortem reveals problems that could only occur in systems designed by people who do not think carefully about robustness ... which is consistent with the 2014 post. It kind of shocks me that anyone lets Stripe have anything to do with money.
But also, even if I didn't have a choice and had to write the code in C++, I would do it in the style on the slide, not the style endorsed by the link on this HN thread.
I do agree that copying a line and changing spots is a common mistake pattern. However, I also still think that in many cases that is the best thing to do because it results in the simplest code. So rather than go through contortions in the actual code to try and prevent this, I am wondering if some kind of IDE pattern matching is a better way to catch this class of errors.
If you did that with the C++ code in this article there would be errors too. Major straw man.
Is it not consistent with the scientific narrative that procrastination, being a universal behavior, must have been developed evolutionarily for some benefit? It is a pretty sophisticated behavior after all (as the article even describes). So isn't it naive to assume it is a problem to be solved? Maybe it is, maybe it isn't, but shouldn't the early work be in trying to understand the full effects of procrastination on lifestyle and future fitness so that we actually get to a place where we can make judgements about it?
TL;DR: These guys are totally amateur hour.
The mathematical meaning of the statement "two things are entangled" is that you cannot factor the joint state into the product of two separate quantities. So there is only one equation for the two particles. So once you "collapse" this for one particle (whether or not collapse is a physical action) it is by necessity collapsed for the other particle, because there is no separate state left that could remain uncollapsed.
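The non-factorability claim can be made concrete with the standard two-qubit example, a Bell state; the amplitudes a, b, c, d below are just placeholder symbols for a would-be product decomposition:

```latex
% A Bell state cannot be factored into a product of single-particle states:
\[
  \lvert \Phi^{+} \rangle
    = \tfrac{1}{\sqrt{2}}\bigl(\lvert 00 \rangle + \lvert 11 \rangle\bigr)
  \neq
  \bigl(a\lvert 0\rangle + b\lvert 1\rangle\bigr) \otimes
  \bigl(c\lvert 0\rangle + d\lvert 1\rangle\bigr)
\]
% for any amplitudes: a product state would need ad = bc = 0 while
% ac = bd = 1/\sqrt{2}, which is impossible, so there is only the one
% joint state for the two particles.
```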
And if you say "why would you let exceptions bubble up that much", well, that is the whole point of exceptions, that they bubble up. If you say "to get rid of nonlocality just catch outside every call", well, now that's equivalent to checking return values always, but more error-prone.
This article seems not to have a lot in the way of new contribution; it is just parroting the oft-repeated idea that you need exceptions to pass "error information" up several layers of abstraction. But here is what I think about this:
1) In this age when we are realizing strong typing is a good idea, that hidden state is a bad idea, and that in general you should be very specific about what is going on, why are we even conceptualizing this as "error information"? Why instead, when we try to open a file, do we not return "all information you might need to know in the case of opening a file" (which includes what happened if it didn't open properly). As soon as you make that conceptual switch, all this hand-wringing goes away. It's a non-problem. You certainly shouldn't add heinous complications to your program to solve this non-problem.
1a) This conceptual change also helps disambiguate between what the article calls "hard errors" and "soft errors". In portions of the code where you have attempted an operation that might have failed, and you are not completely sure that it didn't fail, you have the full body of "what happened" information (it is a small struct or whatever). After the situation has been checked and you know it is exactly what you need it to be, you may drop the other information and pass the raw file handle. At this point it is clear that these parts of the code should only be executed if the file handle is valid, and if that is not true, the programmer made an error. This is analogous to the situation with nullable and non-nullable pointers (in some languages you would even use the same mechanisms to deal with null pointers and invalid file handles, etc, but I am not sure this is really helpful.)
2) If one insists on not making the simplifying leap from (1), well, maybe the other problem is that you have so many layers. If you didn't have so much glue, your code would be simpler and easier to deal with, and it would run faster, and you wouldn't be so worried about needing to pass lots of context information up several layers between modules, because those situations don't really arise.
It's so bad that I believe it's unethical to offer it for download as working software because people with work to do on short timescales (as I had) may choose to rely on it and then get screwed.