Should developers prioritize readability or performance first?

Often, developers face a choice between two possible approaches to a problem: one idiomatic and readable, the other less intuitive but potentially better-performing. For example, in C-based languages, there are two ways to multiply a number by 2:

int SimpleMultiplyBy2(int x)
{
    return x * 2;
}

and

int FastMultiplyBy2(int x)
{
    return x << 1;
}

The first version is easier to recognize for both technical and non-technical readers, but the second may perform better, since a bit shift is a simpler operation than a multiplication. (For now, let's assume the compiler's optimizer would not detect and optimize this, although that is also a consideration.)

As a developer, which approach is better as an initial attempt?


IMO the obvious readable version first, until performance is measured and a faster version is required.

Readability. The time to optimize is when you get to beta testing. Otherwise you never really know what you need to spend the time on.

I would go for readability first. Considering the optimizing compilers and powerful machines we have these days, most of the code we write in a readable way will perform decently.

In some very rare scenarios, where you are pretty sure you are going to hit a performance bottleneck (maybe from some past bad experiences), and you have managed to find some weird trick which can give you a huge performance advantage, you can go for that. But you should comment that code snippet very well, which will help make it more readable.

Readability 100%

If your compiler can't do the "x*2" => "x <<1" optimization for you -- get a new compiler!

Also remember that 99.9% of your program's time is spent waiting for user input, waiting for database queries and waiting for network responses. Unless you are doing the multiplication 20 bajillion times, it's not going to be noticeable.

In your given example, 99.9999% of the compilers out there will generate the same code for both cases. Which illustrates my general rule - write for readability and maintainability first, and optimize only when you need to.

Take it from Don Knuth

Premature optimization is the root of all evil (or at least most of it) in programming.

The larger the codebase, the more readability is crucial. Trying to understand some tiny function isn't so bad. (Especially since the Method Name in the example gives you a clue.) Not so great for some epic piece of uber code written by the loner genius who just quit coding because he has finally seen the top of his ability's complexity and it's what he just wrote for you and you'll never ever understand it.

An often-overlooked factor in this debate is the extra time it takes for a programmer to navigate, understand and modify less readable code. Considering that a programmer's time goes for a hundred dollars an hour or more, this is a very real cost.
Any performance gain is countered by this direct extra cost in development.

Readability for sure. Don't worry about the speed unless someone complains

Putting a comment there with an explanation would make it readable and fast.

It really depends on the type of project, and how important performance is. If you're building a 3D game, then there are usually a lot of common optimizations that you'll want to throw in there along the way, and there's no reason not to (just don't get too carried away early). But if you're doing something tricky, comment it so anybody looking at it will know how and why you're being tricky.

Write for readability first, but expect the readers to be programmers. Any programmer worth his or her salt should know the difference between a multiply and a bitshift, or be able to read the ternary operator where it is used appropriately, be able to look up and understand a complex algorithm (you are commenting your code right?), etc.

Early over-optimization is, of course, quite bad at getting you into trouble later on when you need to refactor, but that doesn't really apply to the optimization of individual methods, code blocks, or statements.

The answer depends on the context. In device driver programming or game development for example, the second form is an acceptable idiom. In business applications, not so much.

Your best bet is to look around the code (or in similar successful applications) to check how other developers do it.

The bitshift versus the multiplication is a trivial optimization that gains next to nothing. And, as has been pointed out, your compiler should do that for you. Other than that, the gain is negligible anyhow, as is the cost of this instruction on the CPU.

On the other hand, if you need to perform serious computation, you will require the right data structures. But if your problem is complex, finding out about that is part of the solution. As an illustration, consider searching for an ID number in an array of 1000000 unsorted objects. Then reconsider using a binary tree or a hash map.

But optimizations like n << C are usually negligible and trivial to switch to at any point. Making code readable is not.

It depends on the task that needs to be solved. Usually readability is more important, but there are still some tasks where you should think of performance in the first place. And you can't just spend a day or two on profiling and optimization after everything works perfectly, because the optimization itself may require rewriting a significant part of the code from scratch. But that is not common nowadays.

Both. Your code should balance both: readability and performance. Ignoring either one will screw the ROI of the project, which at the end of the day is all that matters to your boss.

Bad readability results in decreased maintainability, which results in more resources spent on maintenance, which results in a lower ROI.

Bad performance results in decreased investment and client base, which results in a lower ROI.

If you're worried about the readability of your code, don't hesitate to add a comment to remind yourself what you're doing and why.

Using << would be a micro-optimization. So Hoare's (not Knuth's) rule:

Premature optimization is the root of all evil.

applies and you should just use the more readable version in the first place.

This rule is IMHO often misused as an excuse to design software that can never scale or perform well.

I'd say go for readability.

But in the given example, I think that the second version is already readable enough, since the name of the function states exactly what is going on in it.

If we just always had functions that told us what they do ...

How much does an hour of processor time cost?

How much does an hour of programmer time cost?

IMHO the two things have little to do with each other. You should first go for code that works, as this is more important than performance or how well it reads. Regarding readability: your code should always be readable in any case.

However I fail to see why code can't be readable and offer good performance at the same time. In your example, the second version is as readable as the first one to me. What is less readable about it? If a programmer doesn't know that shifting left is the same as multiplying by a power of two and shifting right is the same as dividing by a power of two... well, then you have much more basic problems than general readability.

You should always maximally optimize; performance always counts. The reason we have bloatware today is that most programmers don't want to do the work of optimization.

Having said that, you can always put comments in where slick coding needs clarification.

Readability.

Coding for performance has its own set of challenges. Joseph M. Newcomer said it well:

Optimization matters only when it matters. When it matters, it matters a lot, but until you know that it matters, don't waste a lot of time doing it. Even if you know it matters, you need to know where it matters. Without performance data, you won't know what to optimize, and you'll probably optimize the wrong thing.

The result will be obscure, hard to write, hard to debug, and hard to maintain code that doesn't solve your problem. Thus it has the dual disadvantage of (a) increasing software development and software maintenance costs, and (b) having no performance effect at all.

You missed one.

First code for correctness, then for clarity (the two are often connected, of course!). Finally, and only if you have real empirical evidence that you actually need to, you can look at optimizing. Premature optimization really is evil. Optimization almost always costs you time, clarity, maintainability. You'd better be sure you're buying something worthwhile with that.

Note that good algorithms almost always beat localized tuning. There is no reason you can't have code that is correct, clear, and fast. You'll be unreasonably lucky to get there starting off focusing on `fast' though.

Priority has to be readability. Then comes performance if it's well commented so that maintainers know why something is not standard.

There is no point in optimizing if you don't know your bottlenecks. You may have made a function incredibly efficient (usually at the expense of readability to some degree) only to find that portion of code hardly ever runs, or that it spends more time hitting the disk or database than you'll ever save twiddling bits. So you can't micro-optimize until you have something to measure, and then you might as well start off with readability. However, you should be mindful of both speed and understandability when designing the overall architecture, as both can have a massive impact and be difficult to change (depending on coding style and methodologies).

The vast majority of the time, I would agree with most of the world that readability is much more important. Computers are faster than you can imagine and only getting faster, compilers do the micro-optimizations for you, and you can optimize the bottlenecks later, once you find out where they are.

On the other hand, though, sometimes, for example if you're writing a small program that will do some serious number crunching or other non-interactive, computationally intensive task, you might have to make some high-level design decisions with performance goals in mind. If you were to try to optimize the slow parts later in these cases, you'd basically end up rewriting large portions of the code. For example, you could try to encapsulate things well in small classes, etc, but if performance is a very high priority, you might have to settle for a less well-factored design that doesn't, for example, perform as many memory allocations.

Readability. It will allow others (or yourself at a later date) to determine what you're trying to accomplish. If you later find that you do need to worry about performance, the readability will help you achieve performance.

I also think that by concentrating on readability, you'll actually end up with simpler code, which will most likely achieve better performance than more complex code.

"Performance always counts" is not true. If you're I/O bound, then multiplication speed doesn't matter.

Someone said "The reason we have bloatware today, is that most programmers don't want to do the work of optimization," and that's certainly true. We have compilers to take care of those things.

Any compiler these days is going to convert x*2 into x<<1, if it's appropriate for that architecture. Here's a case where the compiler is SMARTER THAN THE PROGRAMMER.

It is estimated that about 70% of the cost of software is in maintenance. Readability makes a system easier to maintain and therefore brings down cost of the software over its life.

There are cases where performance is more important than readability; that said, they are few and far between.

Before sacrificing readability, think "Am I (or my company) prepared to deal with the extra cost I am adding to the system by doing this?"

I don't work at google so I'd go for the evil option. (optimization)

In Chapter 6 of Jon Bentley's "Programming Pearls", he describes how one system achieved a 400-fold speedup by optimizing at six different design levels. I believe that by not caring about performance at these six design levels, modern implementors can easily introduce two to three orders of magnitude of slowdown in their programs.

As almost everyone said in their answers, I favor readability. 99 out of 100 projects I run have no hard response time requirements, so it's an easy choice.

Before you even start coding you should already know the answer. Some projects have certain performance requirements, like 'needs to be able to run task X in Y (milli)seconds'. If that's the case, you have a goal to work towards and you know when you have to optimize or not. (Hopefully) this is determined at the requirements stage of your project, not when writing the code.

Good readability and the ability to optimize later on are a result of proper software design. If your software is of sound design, you should be able to isolate parts of your software and rewrite them if needed, without breaking other parts of the system. Besides, most true optimization cases I've encountered (ignoring some real low level tricks, those are incidental) have been in changing from one algorithm to another, or caching data to memory instead of disk/network.

Readability is the FIRST target.

In the 1970s the army tested some of the then-"new" techniques of software development (top-down design, structured programming, chief programmer teams, to name a few) to determine which of these made a statistically significant difference.

The ONLY technique that made a statistically significant difference in development was...

ADDING BLANK LINES to program code.

The improvement in readability of that pre-structured, pre-object-oriented code was the only technique in these studies that improved productivity.

==============

Optimization should only be addressed when the entire project is unit tested and ready for instrumentation. You never know WHERE you need to optimize the code.

In their landmark books from the late 1970s, SOFTWARE TOOLS (1976) and SOFTWARE TOOLS IN PASCAL (1981), Kernighan and Plauger showed ways to create structured programs using top-down design. They created text processing programs: editors, search tools, code pre-processors.

When the completed text formatting function was INSTRUMENTED, they discovered that most of the processing time was spent in three routines that performed text input and output. (In the original book, the I/O functions took 89% of the time. In the Pascal book, these functions consumed 55%!)

They were able to optimize these THREE routines and produced the results of increased performance with reasonable, manageable development time and cost.

Readability first. But even more than readability is simplicity, especially in terms of data structure.

I'm reminded of a student doing a vision analysis program, who couldn't understand why it was so slow. He merely followed good programming practice - each pixel was an object, and it worked by sending messages to its neighbors...


If there is no readability, it will be very hard to get a performance improvement when you really need it.

Performance should only be improved when it is a problem in your program; many other places are likelier to be a bottleneck than this syntax. Say you are squeezing out a 1 ns improvement with a << while ignoring 10 minutes of I/O time.

Also, regarding readability, a professional programmer should be able to read and understand computer-science terms. For example, we can name a method enqueue rather than having to call it putThisJobInWorkQueue.

If you're going to release your software, you must care about the result, not the process.

The users are not going to read your code; they are going to use your software, and they don't want to be frustrated by unnecessarily long waits. They will hate you if your well-indented, properly-commented application runs slowly or eats a lot of memory.

In short, think about the users, not yourself, so prefer performance over readability.

One of the best examples of this rule is the video game Quake. Its code is not well structured and is often hardly readable, but it could render thousands of polygons at very high frame rates on 1995-1996 PCs. Quake, and a lot of other video games including Call of Duty (which is derived from the Quake 3 engine), wouldn't exist if Carmack had preferred readability over performance.