Optimizing for problems you don’t yet have just keeps you from launching and getting successful enough to actually care about your compute costs.
It's a good way to build non-scalable applications, because if the application does scale, then at some point the computer's time becomes more expensive than the developer's time. Of course, that cost is an economic externality for the development shop, so why should they care?
Edit: I am not sure the word "scale" is obvious. There is Google-like scaling, where we run the software on many machines in-house. But there is also Microsoft-like scaling, where many users run the software. Collectively, those users have to pay the cost and waste the energy.
No one's saying "let's do stuff the stupid way!" - they're saying hey, let's focus on getting users before fleshing out the technical details of the what-if-we-actually-make-it scenario. Or, as the classic saying puts it, don't put the cart before the horse.
Unless you are building a technical work of art, it's all about finding the right product.
Ex: If you're building a game with N players that can see each other walking around, then that's N^2 updates.
Now, there are plenty of ways to deal with that problem, but it's something to keep in mind even if your solution is not yet worth implementing.
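To make that concrete, here's a minimal sketch of one such way: grid-based interest management, where each player only receives updates from nearby cells, so the per-tick cost drops from N^2 toward N times the neighbourhood size. (Everything here, including `sendUpdate` and the cell size, is made up for illustration.)

    // Bucket players into grid cells, then only pair up neighbours.
    const CELL = 100; // world units per cell (arbitrary for the sketch)

    function broadcastPositions(players) {
      const grid = new Map(); // cell key -> players in that cell, O(N) to build
      for (const p of players) {
        const key = Math.floor(p.x / CELL) + ',' + Math.floor(p.y / CELL);
        if (!grid.has(key)) grid.set(key, []);
        grid.get(key).push(p);
      }
      for (const p of players) {
        const cx = Math.floor(p.x / CELL), cy = Math.floor(p.y / CELL);
        // Only look at the player's own cell and the 8 adjacent ones,
        // instead of pairing every player with every other player.
        for (let dx = -1; dx <= 1; dx++) {
          for (let dy = -1; dy <= 1; dy++) {
            for (const other of grid.get((cx + dx) + ',' + (cy + dy)) || []) {
              if (other !== p) sendUpdate(p, other); // sendUpdate: your transport layer (hypothetical)
            }
          }
        }
      }
    }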
Most likely you'll never reach Google scale, but you'll be happy you did that as the application grows more complex and you don't have to test every part of it for each tiny change.
Processing efficiency is still extremely important in the embedded world. Don't think that the embedded market is small; practically every product you buy has software in it.
When you make, for example, a million units of a product, every single byte and every cycle counts, as there is a large multiplier to take into account.
Efficiency counts, just not in the web world.
Knowing how to code efficiently, even if you choose not to, never goes out of style. I don't think I've ever heard anyone say that out loud.
Which regrettably by now even end-users are painfully aware of!
What I want to say is:
(for reference: I am a JS "fullstack" dev)
For example, anyone who has been using OO in a non-GC'ed language for a reasonable amount of time will know that it's bad news to constantly create and free small objects due to the memory-management overhead. However, I constantly encounter JS-heavy web sites/apps that a) become slower as you use them and use an inordinate amount of memory, and b) really thrash the GC because they simply don't pay attention to allocations at all and constantly use constructs that result in one-off allocations that then need to be recycled.
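As a contrived sketch of what I mean (the names are made up, not from any real site), compare a per-frame update that allocates a throwaway object for every entity with one that allocates nothing:

    // GC-thrashing version: N short-lived objects per frame,
    // all of which the collector has to sweep up again.
    function updateAll(entities, dt) {
      entities.forEach(e => {
        const delta = { x: e.vx * dt, y: e.vy * dt }; // one-off allocation
        e.x += delta.x;
        e.y += delta.y;
      });
    }

    // Allocation-free version: identical result, zero garbage per frame.
    function updateAllFlat(entities, dt) {
      for (let i = 0; i < entities.length; i++) {
        const e = entities[i];
        e.x += e.vx * dt;
        e.y += e.vy * dt;
      }
    }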
The fact is that the browser and the DOM are a UI layer, and UI frameworks fit perfectly when implemented using an OO architecture.
This (among others) is where AOT-compiled, GC-less languages (or languages with special implementations of those features) come in. And even then, there are applications where even the cheap, mostly-compile-time abstractions of such languages prove too clunky and you need to drop down to assembly (bootloaders, demos, parts of OSes or language implementations).
So, while you can write efficient JS code, it's not going to be efficient enough for many cases.
Then imagine having access to real computers on real networks hitting your site over whatever time horizon you want. It would be great. If this could somehow be done on mobile, that would be great too (maybe in a push-notification way: "1x test ready to run, click here to start"; the app opens, does its thing, then closes).
I wish Sony sold a non-smart version of this TV because other than the software, it's pretty nice.
Even if you aren't actively using it, there are things running in the background that crash. I was watching football yesterday and I kept getting "The Samba Service has Stopped" messages. The only option in the box was OK to acknowledge it. What am I supposed to do with that? It kept coming back so I googled it and found how to disable notifications for that app and that eventually stopped the problem.
I also found the Samba service watches what you watch so that it can target ads better.
The comment is repeating a saying, it's not original. I have heard it a ton of times. It's really an entire category of arguments which can be called "premature optimization".
While some optimization is premature, the saying is repeated in a way that is crazy overly-broad. It's based on a false assumption that is only approximately true in some of the cases for web development. The assumption is that bad performance can be traded for more compute power without other negative side effects.

This is very untrue for most practical systems in web development. Most compute-heavy decisions will do both things: increase the compute resources you need, and degrade user experience. If you're super tiny working on super simple systems, then there's a certain envelope of request response times where it won't matter if you are faster or slower. So there are some cases where it is a valid argument, but to me, these seem incredibly specific to a certain stage of a startup. Beyond that, what matters is prioritization, not complete disregard for performance concerns.
And yet the very first sentence in gp's comment:
"software development != web software development"
So gp already qualified that his statement is not applicable to web development, and then you criticise his comment with a web development example...
A good QA analyst is like any other good employee - they always have something else they'd like to test.
I've never seen any detailed case studies that prove it either way.
Does the imported function do exactly what you need? Not quite? Maybe write your own.
Is it a complex function that will require a lot of testing? Yes? An import is probably best.
Are you likely to use other parts of the imported library in the future? Yes? Then an import may be the better choice.
Will it look good on your CV? Let's face it, everyone does it.
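A toy illustration of the first two questions (my own example, not anyone's real dependency): if all you need is left-padding a string, three lines of your own beat an import; if you need time-zone-correct date arithmetic, the well-tested library wins.

    // Does exactly what I need and is trivially testable: write it yourself.
    function leftPad(s, len, ch = ' ') {
      return ch.repeat(Math.max(0, len - s.length)) + s;
    }

    // Complex, edge-case-ridden domain (dates, crypto, parsers): import it.
    // e.g. const { addDays } = require('date-fns');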
The craftsman cares they can accomplish the task without needing a large library. The business cares that it can get to market quickly and profitably at minimal cost. “I prefer your competitor because their code was lovingly hand crafted instead of being shipped quickly with the features I need” said no customer, ever (witness that we all use compilers rather than hand-coding the machine code in an assembler... all the same arguments were made against compiled code back in the day, and compiled code was the right answer, then and now).
And then the successful business, in contrast, actually looks at total cost of ownership.
I would also love to see some examples of a 2G (!) library that people are casually importing. Where have you had this problem?
> said no customer, ever
> witness that we all use compilers
Well, strictly speaking, developers are also customers of companies that make programming tools, and there the tidiness of the code is important. I'm sure there are many other domains where writing your software well is actually encouraged by economic factors :)
But, I think the dedication to writing perfect code without executing it is misguided. It's 2018 - we have interactive debuggers, excellent profiling tools, and unit tests. Most developers have a computer with 4+ cores and 8G+ of memory. It would be foolish not to take advantage of that.
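For instance (a trivial, made-up sketch): rather than desk-checking a function until it's perfect on paper, encode the expectation once and let the machine re-check it on every change.

    // Minimal unit-test sketch using Node's built-in assert module.
    const assert = require('assert');

    function clamp(x, lo, hi) {
      return Math.min(hi, Math.max(lo, x));
    }

    assert.strictEqual(clamp(15, 0, 10), 10); // clipped to upper bound
    assert.strictEqual(clamp(-3, 0, 10), 0);  // clipped to lower bound
    assert.strictEqual(clamp(5, 0, 10), 5);   // passes through unchanged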
Unlike the stand-alone, isolated mainframe era, our applications today are interconnected.
Take a manufacturing environment. Shop orders take in inventory information, labor detail, assembly progress, facilities and supplies usage, etc., any or all of which can involve independent systems. In turn, each production step can create information that needs to go back to each system.
Any error or change in those inputs and outputs could force a rerun of all systems downstream of the first error. This is especially noteworthy to the guy/gal being called in at 3AM to unravel such hairballs.
I daresay that most interconnection with external systems in the modern mobile environment is primarily used for social media, tracking and other privacy suckage. (Ghostery output can be quite surprising, for example.)
The web has become a very frequent path for machine-to-machine communications through APIs, besides other, more direct pathways for internet traffic on ports other than 80/443.
Edit: investigating closer, this even happens with JS disabled, and also in Chrome (with less load though).
The page is only using 14MB of memory, which is a bit higher than some pages, but it's about 4MB of JS source and another 4MB of objects held in memory. It isn't desirable, but it shouldn't be causing any issues.
And though the page does load fairly quickly for me, a glance over what it does during that load makes me suspect I know your issue: Styles were recalculated more than 75 times, with the repaint happening more than 35 times.
There's also a huge peak in the middle of the JS being executed. The culprit has a tightly packed for-each loop: three nested for-eaches, each one containing a lambda, which contains two or three more lambdas. It's not performant code. That particular script is also 500KB more of the same kind of code. (I might also point out that the form handler on that page is an even bigger script.)
So: I'd think it's the repaint from too many styles coming in overriding each other, but it might just be a reliance on a large library, which doesn't seem to be well thought out.
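For what it's worth, the shape of the code I'm describing looked roughly like this (reconstructed from memory, not the actual script; `groups` and `render` are placeholders), next to a flatter version that allocates no closures per iteration:

    // The pattern in the script: every outer iteration allocates
    // fresh inner lambdas, which is pure GC churn.
    groups.forEach(group => {
      group.items.forEach(item => {
        item.tags.forEach(tag => render(group, item, tag));
      });
    });

    // Flatter equivalent: plain loops, same behaviour, no closure churn.
    for (const group of groups) {
      for (const item of group.items) {
        for (const tag of item.tags) {
          render(group, item, tag);
        }
      }
    }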
I get 5-9% CPU usage with just this site open in Edge.
A lot of the language around "why functional programming" goes to the same space. Strongly typed, functional solutions expose problems during code development which avoid runtime problems from sloppy thinking.
I also think simpler is good. So, coding disciplines which favour simple techniques (within limits; this is not one-dimensional) are good. If you had to exploit a very complex mechanism, the old-school method was to look in the NAG library for a well-written solution (presuming numeric problems, which predominated then). Now we do much the same: look for a solution in your language-compatible runtime-library space, which is a well-trodden path.
Even back then it would be impossible for a serious mainframe program to be written and expected to work first time.
I disagree with most of this article, and I have ~25 years experience. No, I didn't program mainframes, but I'm tired, as are younger programmers, of even older programmers trying to say that the way they did programming is still "better".
The writer isn't entirely wrong, but the simple fact is that software isn't written the same way anymore. Stop trying to force antiquated methods down younger people's throats. The way I wrote programs 20 years ago is inherently different from the way that I wrote programs 10 years ago is inherently different from the way I write programs today.
15-20 years ago, we didn't write tests. We had QA that wrote our tests for us. We tested the code as well as we could (I became pretty damn good at testing my own code) and then we threw it over a wall to QA. Today, we have zero QA and I write tests for my own code.
10-15 years ago, you aimed for 0 defects, especially for enterprise code, because your enterprise customers couldn't afford downtime. Today, in a SaaS environment, you care about defects, but you have a global set of customers, and you roll your code out slowly and watch metrics.
I have a friend in growth at Facebook and his manager got mad at him because he was focusing too much time on testing his own code. Apparently he's supposed to leave that to external QA, and you can always fix the code later. On some growth teams, code quality and maintainability don't matter, all that matters is getting customer growth with new features as quickly as possible. Is that inherently wrong? No, it's a different way of doing business. 10 years ago there was no such thing as a growth team.
The way software is used is different, and the way software is developed is different. Mainframe methodologies, while interesting to read about, are not relevant. Things like Optimize Upfront are nonsense to me, especially in a global context. You iterate on your features quickly, including optimization. You couldn't do that in mainframe computing, but these days I deploy to production 10 times a day, and depending on how I deploy, I can see problems fairly quickly and iterate without affecting most of my users. That's definitely not a paradigm that you would see back then, when you would have to schedule time, etc.
It's gotta be both approaches (yours and the article's), but the real problem is that the demand for programmers is so high and the barrier to entry is so low that quality has suffered: the quality of the libraries, build systems, documentation, designs, interfaces, all of it.
You can't magically drag all modern programmers through the mud using line editors and 16 bit processors for 2 years to learn everything the painful way.
I honestly don't see a way out. We're in the eternal September of software development, and it's all downhill from here unless we make some commitment to raising the barrier to entry and decreasing the incentives, making it hard again.
It's like how at the end of its life, most of the people still using AOL instant messenger were respectable computer experts. Same idea.
When I started, compiling took serious time (hours sometimes). So you were much more careful about making mistakes. Compilers also had bugs, as did linkers, debuggers, you had to know how to spot these things and when to question your code and when to question your tools.
Operating system containment and protection was more an intention than a reality and it was relatively easy to lock up or crash a machine through faulty code. Workstation uptimes of weeks was seen as impressive. These days, things are so stable that "uptime" just tells me when my last power outage was.
When we released software it was on physical media, which was either mailed to people in physical boxes or running on physical machines that were shipped out. Not making mistakes was much more important in that situation since you couldn't just deploy an update without a lot of cost and ceremony.
It's all changed so fundamentally; I'd be open to having an instruction course where people have to target some vintage machine (which we'd likely have to virtualize) and have them just deal with it for 6 months. I don't know how many signups you'd get though.
For example I hate dependency injection. I despise it, I think it’s stupid. But my company does this, so I do it. Many other companies are doing it. I adapt or die.
The last two decades were a madness of inventions, with computing power doubling almost every 3 years, new languages and paradigms invented, new tools, the internet, the web, phones, giant displays, small displays with touch. We surely won't get that much change in the next two decades.
Yes. That "growth team" just added a bunch of inscrutable garbage to your code base, perhaps hoping that someone would clean it up later. Of course no one ever will, since they're too busy "getting customer growth with new features as quickly as possible."
But is it inherently wrong? No. It introduced a new feature quickly, a lot more quickly than I ever could. The code I produce is maintainable and relatively bug-free, but I couldn't have gotten it up and running as quickly as these kids did.
Also, the business decided this is what they wanted to do: invest a small amount of money to see if the feature works, and if it does, pass it on to more senior programmers who turn it into a real service. If it doesn't stick, then throw it away.
It's not the best, and I would never employ it, but it's one strategy, and it works if you care more about growth than efficiency.
Do you work for Intel ? /s