What programming practices did you once like, but have since changed your mind about?

As we program, we all develop practices and patterns that we use and rely on. Over time, however, as our understanding, maturity, and even the technology we use change, we come to realize that some practices we once thought were great no longer apply (or never really did).

One example of a practice I used to use all the time, but have moved away from in recent years, is the Singleton pattern.

Through my own experience and long debates with colleagues, I've come to realize that singletons are not always desirable: they can make testing harder (by inhibiting techniques like mocking) and can create unnecessary coupling between parts of a system. Instead, I now use object factories (often backed by an IoC container) to hide the singleton nature and existence from the parts of the system that don't care or don't need to know about it. Those parts simply depend on a factory (or service locator) to obtain access to such objects.
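
Roughly, the change looks like this (a minimal C# sketch; the class names are invented for illustration):

// Before: consumers reach straight for a global singleton.
public sealed class Logger
{
    private static readonly Logger instance = new Logger();
    public static Logger Instance { get { return instance; } }

    private Logger() { }

    public void Log(string message) { /* write somewhere */ }
}

public class OrderService
{
    public void PlaceOrder(string orderId)
    {
        Logger.Instance.Log("placing " + orderId);   // hard-wired dependency, awkward to mock
    }
}

// After: the consumer depends on an abstraction; a factory or IoC container
// decides whether the implementation it hands out is shared or not.
public interface ILogger
{
    void Log(string message);
}

public class OrderServiceWithIoC
{
    private readonly ILogger logger;

    public OrderServiceWithIoC(ILogger logger) { this.logger = logger; }

    public void PlaceOrder(string orderId)
    {
        logger.Log("placing " + orderId);            // trivially replaced by a test double
    }
}

The container (or a hand-rolled factory) then decides whether the ILogger it hands out is a shared instance or a fresh one; the consuming code no longer cares.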

In the spirit of self-improvement, I put the following questions to the community:

  • Which programming patterns or practices have you recently reconsidered, and now try to avoid?
  • What did you decide to replace them with?

Like you, I also have embraced IoC patterns in reducing coupling between various components of my apps. It makes maintenance and parts-swapping much simpler, as long as I can keep each component as independent as possible. I'm also utilizing more object-relational frameworks such as NHibernate to simplify database management chores.

In a nutshell, I'm using "mini" frameworks to aid in building software more quickly and efficiently. These mini-frameworks save lots of time, and if done right can make an application super simple to maintain down the road. Plug 'n Play for the win!

The use of caffeine. It once kept me awake and in a glorious programming mood, where the code flew from my fingers with feverish fluidity. Now it does nothing, and if I don't have it I get a headache.

Hungarian notation (both Forms and Systems). I used to prefix everything. strSomeString or txtFoo. Now I use someString and textBoxFoo. It's far more readable and easier for someone new to come along and pick up. As an added bonus, it's trivial to keep it consistent -- camelCase the control and append a useful/descriptive name. Forms Hungarian has the drawback of not always being consistent, and Systems Hungarian doesn't really gain you much. Chunking all your variables together isn't really that useful -- especially with modern IDEs.

This is a small thing, but: Caring about where the braces go (on the same line or next line?), suggested maximum line lengths of code, naming conventions for variables, and other elements of style. I've found that everyone seems to care more about this than I do, so I just go with the flow of whoever I'm working with nowadays.

Edit: The exception to this being, of course, when I'm the one who cares the most (or is the one in a position to set the style for a group). In that case, I do what I want!

(Note that this is not the same as having no consistent style. I think a consistent style in a codebase is very important for readability.)

I thought it made sense to apply design patterns whenever I recognised them.

Little did I know that I was actually copying styles from foreign programming languages, while the language I was working with allowed for far more elegant or easier solutions.

Using multiple (very) different languages opened my eyes and made me realise that I don't have to mis-apply other people's solutions to problems that aren't mine. Now I shudder when I see the factory pattern applied in a language like Ruby.

I used to be big into design-by-contract. This meant putting a lot of error checking at the beginning of all my functions. Contracts are still important, from the perspective of separation of concerns, but rather than try to enforce what my code shouldn't do, I try to use unit tests to verify what it does do.
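
In rough C#/NUnit terms (a made-up example), the shift in emphasis looks something like this:

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

public static class Stats
{
    // Lean method: no wall of defensive checks enforcing what callers must not do...
    public static decimal Average(IList<int> values)
    {
        return (decimal)values.Sum() / values.Count;
    }
}

// ...instead, tests state what the method is supposed to do.
[TestFixture]
public class StatsTests
{
    [Test]
    public void Average_ReturnsArithmeticMean()
    {
        Assert.AreEqual(2m, Stats.Average(new[] { 1, 2, 3 }));
    }
}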

Wrapping existing Data Access components, like the Enterprise Library, with a custom layer of helper methods.

  • It doesn't make anybody's life easier
  • It's more code that can have bugs in it
  • A lot of people know how to use the EntLib data access components; no one but the local team knows how to use the in-house data access solution

Hungarian notation - It just adds noise. With modern IDEs and well written, tight code it's not necessary, at least not in statically typed languages. Unfortunately, most of the teams I've worked with still insist on using it in some form.

The overuse / abuse of #region directives. It's just a little thing, but in C#, I previously would use #region directives all over the place, to organize my classes. For example, I'd group all class properties together in a region.

Now I look back at old code and mostly just get annoyed by them. I don't think it really makes things clearer most of the time, and sometimes they just plain slow you down. So I have now changed my mind and feel that well laid out classes are mostly cleaner without region directives.
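
For anyone who hasn't seen it taken that far, this is the sort of thing I mean (a made-up class that is mostly region scaffolding):

public class Customer
{
    #region Fields
    private string name;
    #endregion

    #region Properties
    public string Name
    {
        get { return name; }
        set { name = value; }
    }
    #endregion

    #region Public methods
    public override string ToString()
    {
        return name;
    }
    #endregion
}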

I used to make a lot of methods/classes static, as it was more concise. When I started writing tests, that practice changed very quickly.
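
A rough C# illustration of why (the clock abstraction here is invented for the example): a static call is baked in at the call site, while an injected instance can be replaced by a fake in a test.

using System;

// Hard to test: the dependency on the system clock is a static call.
public static class Calendar
{
    public static bool IsWeekend()
    {
        var day = DateTime.Now.DayOfWeek;
        return day == DayOfWeek.Saturday || day == DayOfWeek.Sunday;
    }
}

// Easier to test: the clock is an abstraction handed in from outside.
public interface IClock { DateTime Now { get; } }

public class Schedule
{
    private readonly IClock clock;

    public Schedule(IClock clock) { this.clock = clock; }

    public bool IsWeekend()
    {
        var day = clock.Now.DayOfWeek;
        return day == DayOfWeek.Saturday || day == DayOfWeek.Sunday;
    }
}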

Obsessive testing. I used to be a rabid proponent of test-first development. For some projects it makes a lot of sense, but I've come to realize that it is not only unfeasible, but rather detrimental to many projects to slavishly adhere to a doctrine of writing unit tests for every single piece of functionality.

Really, slavishly adhering to anything can be detrimental.

  • Trying to code things perfectly on the first try.
  • Trying to create perfect OO model before coding.
  • Designing everything for flexibility and future improvements.

In one word: overengineering.

In C#, using _notation for private members. I now think it's ugly.

I then changed to this.notation for private members, but found I was inconsistent in using it, so I dropped that too.

Single return points.

I once preferred a single return point for each method, because with that I could ensure that any cleanup needed by the routine was not overlooked.

Since then, I've moved to much smaller routines - so the likelihood of overlooking cleanup is reduced, and in fact the need for cleanup is reduced - and I find that early returns reduce the apparent complexity (the nesting level) of the code. Artifacts of the single return point - keeping "result" variables around, keeping flag variables, conditional clauses for not-already-done situations - make the code appear much more complex than it actually is, and make it harder to read and maintain. Early exits and smaller methods are the way to go.
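
A minimal sketch of the difference in C# (the User type here is just a stand-in):

public class User
{
    public string DisplayName;
    public string Login;
}

public static class UserFormatting
{
    // Single return point: a "result" variable and nesting just to funnel
    // everything to the bottom of the method.
    public static string DisplayNameSingleReturn(User user)
    {
        string result;
        if (user != null)
        {
            if (user.DisplayName != null)
            {
                result = user.DisplayName;
            }
            else
            {
                result = user.Login;
            }
        }
        else
        {
            result = "(unknown)";
        }
        return result;
    }

    // Early returns: the same logic, flatter and easier to scan.
    public static string DisplayNameEarlyReturn(User user)
    {
        if (user == null) return "(unknown)";
        if (user.DisplayName != null) return user.DisplayName;
        return user.Login;
    }
}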

Waterfall development in general, and in specific, the practice of writing complete and comprehensive functional and design specifications that are somehow expected to be canonical and then expecting an implementation of those to be correct and acceptable. I've seen it replaced with Scrum, and good riddance to it, I say. The simple fact is that the changing nature of customer needs and desires makes any fixed specification effectively useless; the only way to really properly approach the problem is with an iterative approach. Not that Scrum is a silver bullet, of course; I've seen it misused and abused many, many times. But it beats waterfall.

I stopped going by the university-recommended method of design before implementation. Working in a chaotic and complex system has forced me to change my attitude.

Of course I still do code research, especially when I'm about to touch code I've never touched before, but normally I try to keep each implementation as small as possible and get something working first. That is the primary goal. Then I gradually refine the logic and let the design emerge by itself. Programming is an iterative process and works very well with an agile approach and with lots of refactoring.

The code will not look at all like what you first thought it would. Happens every time :)

Utility libraries. I used to carry around an assembly with a variety of helper methods and classes with the theory that I could use them somewhere else someday.

In reality, I just created a huge namespace with a lot of poorly organized bits of functionality.

Now, I just leave them in the project I created them in. In all probability I'm not going to need them, and if I do, I can always refactor them into something reusable later. Sometimes I will flag them with a //TODO for possible extraction into a common assembly.

Prototyping in the IDE. Like all newbies, I used to jump straight into the code; I have since learnt that this is a bad idea. Now I tend to abandon silly ideas before even touching a keyboard.

Designing more than I coded. After a while, it turns into analysis paralysis.


//Coming out of university, we were taught to ensure we always had an abundance
//of commenting around our code. But applying that in the real world made it
//clear that over-commenting not only has the potential to confuse/complicate
//things, but can also make the code hard to follow. Now I spend more time on
//improving the simplicity and readability of the code and inserting fewer yet
//relevant comments, instead of spending that time writing overly descriptive
//commentary all throughout the code.


The "perfect" architecture

I came up with THE architecture a couple of years ago. Pushed myself technically as far as I could so there were 100% loosely coupled layers, extensive use of delegates, and lightweight objects. It was technical heaven.

And it was crap. The technical purity of the architecture just slowed my dev team down; aiming for perfection over results, I almost achieved complete failure.

We now have a much simpler, less technically perfect architecture, and our delivery rate has skyrocketed.

The use of a DataSet to perform business logic. This binds the code too tightly to the database; worse, the DataSet is usually created from SQL, which makes things even more fragile. If the SQL or the database changes, the breakage tends to trickle down to everything the DataSet touches.

Performing any business logic inside an object constructor. Combined with inheritance and overloaded constructors, this tends to make maintenance difficult.
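
A sketch of the kind of change that helps (C#, with made-up Invoice and repository types): keep constructors to plain assignment and push the work into a factory method.

public class Customer
{
    public bool IsPreferred;
}

public interface ICustomerRepository
{
    Customer Find(int customerId);
}

// Before: the constructor reaches into the repository and applies business
// rules, which every overload and derived class has to repeat (or accidentally skip).
public class Invoice
{
    public Invoice(int customerId, ICustomerRepository repository)
    {
        var customer = repository.Find(customerId);
        Discount = customer.IsPreferred ? 0.1m : 0m;
    }

    public decimal Discount { get; private set; }
}

// After: the constructor only assigns state; a factory method does the work once.
public class InvoiceWithFactory
{
    private InvoiceWithFactory(decimal discount) { Discount = discount; }

    public decimal Discount { get; private set; }

    public static InvoiceWithFactory CreateFor(int customerId, ICustomerRepository repository)
    {
        var customer = repository.Find(customerId);
        var discount = customer.IsPreferred ? 0.1m : 0m;
        return new InvoiceWithFactory(discount);
    }
}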

Initializing all class members.

I used to explicitly initialize every class member with something, usually NULL. I have come to realize (see the sketch after this list) that this:

  • normally means that every variable is initialized twice before ever being read
  • is silly, because most languages automatically initialize variables to NULL (or a default value)
  • actually incurs a slight performance hit in most languages
  • can bloat code on larger projects
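
In C# terms the redundancy looks something like this (a made-up class for illustration; the runtime has already zeroed the fields before the constructor body runs):

public class Connection
{
    // Redundant: reference fields are already null and value types are already
    // zeroed, so these initializers add nothing but noise.
    private string host = null;
    private int port = 0;

    // Worth writing: only initialize what genuinely needs a non-default value.
    private int timeoutSeconds = 30;
}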

Perhaps the most important "programming practice" I have since changed my mind about, is the idea that my code is better than everyone else's. This is common for programmers (especially newbies).

When I needed to do some refactoring, I used to think it was faster and cleaner to start straight away and implement the new design, fixing up the connections until they worked. Then I realized it's better to do a series of small refactorings to slowly but reliably progress towards the new design.

I first heard about object-oriented programming while reading about Smalltalk in 1984, but I didn't have access to an o-o language until I used the cfront C++ compiler in 1992. I finally got to use Smalltalk in 1995. I had eagerly anticipated o-o technology, and bought into the idea that it would save software development.

Now, I just see o-o as one technique that has some advantages, but it's just one tool in the toolbox. I do most of my work in Python, and I often write standalone functions that are not class members, and I often collect groups of data in tuples or lists where in the past I would have created a class. I still create classes when the data structure is complicated, or I need behavior associated with the data, but I tend to resist it.

I'm actually interested in doing some work in Clojure when I get the time, which doesn't provide o-o facilities, although it can use Java objects if I understand correctly. I'm not ready to say anything like o-o is dead, but personally I'm not the fan I used to be.

Never crashing.

It seems like such a good idea, doesn't it? Users don't like programs that crash, so let's write programs that don't crash, and users should like the program, right? That's how I started out.

Nowadays, I'm more inclined to think that if it doesn't work, it shouldn't pretend it's working. Fail as soon as you can, with a good error message. If you don't, your program is going to crash even harder just a few instructions later, but with some nondescript null-pointer error that'll take you an hour to debug.

My favorite "don't crash" pattern is this:

// (uses java.sql.Statement/ResultSet; 'connection' and 'log' are fields of the surrounding class)
public User readUserFromDb(int id) {
    User u = null;
    try {
        // Quietly swallow anything that goes wrong...
        Statement stmt = connection.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT * FROM user WHERE id = " + id);
        if (rs.next()) {
            u = new User();
            u.setFirstName(rs.getString("fname"));
            u.setSurname(rs.getString("sname"));
            // etc
        }
    } catch (Exception e) {
        log.info(e);
    }
    if (u == null) {
        // ...and "never crash": hand back a fake User instead of failing.
        u = new User();
        u.setFirstName("error communicating with database");
        u.setSurname("error communicating with database");
        // etc
    }
    u.setId(id);
    return u;
}

Now, instead of asking your users to copy/paste the error message and sending it to you, you'll have to dive into the logs trying to find the log entry. (And since they entered an invalid user ID, there'll be no log entry.)

A few:

  • started putting braces on the same line rather than on a new line (if (...) {)
  • started using camelCase instead of non_camel_case
  • stopped using printf() for debugging
  • started relying on third-party libraries rather than writing every bit from scratch

jrh

Abbreviating variable/method/table/... Names

I used to do this all of the time, even when working in languages with no enforced limits on the lengths of names (well, the limit was probably 255 or something). One of the side effects was a lot of comments littered throughout the code explaining the (non-standard) abbreviations. And of course, if the names were changed for any reason...

Now I much prefer to call things what they really are, with good descriptive names, using only standard abbreviations. There's no need for useless comments, and the code is far more readable and understandable.

I used to write few routines, each of which did a bunch of stuff.
Now I break the task into many short routines, where each routine does one specific thing (whenever possible).


Also, my argument-declaration style for routines with a long argument list:

before

int foo (char arg1, int arg2, float arg3, double arg4)

now

int
foo (
    char arg1,
    int arg2,
    float arg3,
    double arg4)

That's, of course, a matter of taste.

That anything worthwhile was only coded in one particular language. In my case I believed that C was the best language ever and I never had any reason to code anything in any other language... ever.

I have since come to appreciate many different languages and the benefits/functionality they offer. If I want to code something small - quickly - I would use Python. If I want to work on a large project I would code in C++ or C#. If I want to develop a brain tumour I would code in Perl.

Perhaps the biggest thing that has changed in my coding practices, as well as in others', is the acceptance of outside classes and libraries downloaded from the internet as the basis for behaviors and functionality in applications. When I attended college, we were encouraged to figure out how to make things better with our own code and to rely on the language to solve our problems. With the advances in all aspects of user interfaces and service/data consumption, this is no longer a realistic notion.

There are certain things which will never change in a language, and having a library that wraps this code in a simpler transaction and in fewer lines of code that I have to write is a blessing. Connecting to a database will always be the same. Selecting an element within the DOM will not change. Sending an email via a server-side script will never change. Having to write this time and again wastes time that I could be using to improve my core logic in the application.

Commenting out code. I used to think that code was precious and that you can't just delete those beautiful gems that you crafted. I now delete any commented-out code I come across unless there's a TODO or NOTE attached because it's too perilous to leave it in. To wit, I've come across old classes with huge commented-out portions and it really confused me why they were there: were they recently commented out? is this a dev environment change? why does it do this unrelated block?

Seriously consider not commenting out code and just deleting it instead. If you need it, it's still in source control. YAGNI though.

Catching only exceptions you know of in high availability services.

This is one place where I disagree with my own company's advice. The theory is that you should catch only exceptions you know of, since you have no guarantee about what the 'bad' thing that happened actually was. If memory got corrupted or if the CLR itself got wedged, you're not going to recover.

However, when I worked on high-availability services, I found there were often cases where I wanted to express "catch as many errors as you can and keep going". Yes, in theory we could have seen exceptions that we couldn't handle, but with well-tested code in an environment you control (and with not much native code in the mix apart from what the system provides), this turned out to be a better option than only catching exceptions you knew about.

The CLR team's stance on this is "Don't let your thread execute in an unknown state" while my stance is "If you know your scenario, this is probably ok". It may not be ok if you're running a bank website but in most cases, this will give you better availability and not force you to wonder why your app is restarting so frequently.
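
In rough outline, the pattern looked something like this (a simplified, hypothetical worker loop, not actual production code):

using System;

public class WorkItem { public int Id; }

public interface IWorkQueue { WorkItem Dequeue(); }

public interface ILog { void Error(string message, Exception ex); }

public class Worker
{
    private readonly IWorkQueue queue;
    private readonly ILog log;

    public Worker(IWorkQueue queue, ILog log)
    {
        this.queue = queue;
        this.log = log;
    }

    // Deliberate broad catch at the top of the service loop: log the failure
    // and keep serving, instead of letting one unexpected exception type take
    // the whole process down.
    public void ProcessForever()
    {
        while (true)
        {
            try
            {
                var item = queue.Dequeue();
                Handle(item);
            }
            catch (Exception ex)
            {
                log.Error("Failed to process work item; continuing.", ex);
            }
        }
    }

    private void Handle(WorkItem item)
    {
        // application-specific processing goes here
    }
}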

You can see both sides of the debate at http://blogs.msdn.com/clrteam/archive/2009/02/19/why-catch-exception-empty-catch-is-bad.aspx

Header files shall not include other header files.

I used to be strongly opposed to the idea of headers including other headers - based on a bad experience early in my engineering career. Having the headers included explicitly in the order needed right there in the source file seemed to work better.

Now - in general - I'm of the mindset that each header file shall be self-sufficient, i.e., not require other .h files to be included before it in the source file. Especially when developing in C++...

Requiring all code to be clean code, even if it is already working.

In academic environments there is such a focus on clean code that the temptation afterwards is strong to clean up ugly code whenever you come across it. However, cleaning up working code has a number of downsides:

  • The time spent cleaning it up doesn't add any value to the product at that time, while that same time spent debugging or doing feature development does add value.
  • There is a risk of breaking already working code. Nobody is so amazing that they never introduce bugs when refactoring. (I had to eat some humble pie when my bugs got shipped to the customer.)

Of course, once that piece of ugly code needs new features, it is often not a bad idea to refactor it. But that is the point: refactoring and clean-up should only happen in combination with feature development.

Creating stored procedures for accessing data. They are hell to maintain (especially if you develop on a test server and have to keep another server in sync), and you end up with a gazillion stored procedures called NewInsertStoredProcedureLines, NewSelectStoredProcedureLines... Now that the SQL happily resides hard-coded in the app, I'm a happy camper.

Accessing the database directly.
In my older code, I used queries and DataSets extensively. Now I use an ORM for most things. It gives me much cleaner code and better reusability. Typically I now only access the db directly in small programs, or when needed for performance.
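
As a rough sketch of the difference (the direct version uses ADO.NET, the other is NHibernate-flavoured; the Customer mapping and session factory are assumed to be configured elsewhere):

using System.Data.SqlClient;
using NHibernate;

public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

public class CustomerLoader
{
    // Direct access: hand-written SQL and manual mapping in application code.
    public Customer LoadDirect(SqlConnection connection, int id)
    {
        using (var command = new SqlCommand(
            "SELECT Id, Name FROM Customer WHERE Id = @id", connection))
        {
            command.Parameters.AddWithValue("@id", id);
            using (var reader = command.ExecuteReader())
            {
                if (!reader.Read()) return null;
                return new Customer { Id = reader.GetInt32(0), Name = reader.GetString(1) };
            }
        }
    }

    // ORM: the mapping lives in one place; the code just asks for the object.
    public Customer LoadWithOrm(ISessionFactory sessionFactory, int id)
    {
        using (var session = sessionFactory.OpenSession())
        {
            return session.Get<Customer>(id);
        }
    }
}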

The most significant change I've made is my approach to N-tier. I had been a believer in separating logic along physical tiers and building middle-tier "application servers", going back to Windows DNA using DCOM, MTS and COM+, and later .NET Remoting. At the time it seemed reasonable from a security and scalability perspective to build systems this way. But having done it enough times to see the added complexity (which is significant), the network communication overhead, the deployment issues, the developer training, and the reality that security was never actually increased (because we never locked down the firewalls between servers), I have concluded that it's seldom justified or warranted.

I'm still very much in favor of layering, and of doing it in such a way as to allow physical tiering if it ever becomes a requirement - which, I continue to find, it seldom does.

Writing docblock method descriptions that simply reiterated what the method name already told you. The bad old days:

/**
* Returns the logger obj
* @return log_Core
*/
public function getLogger()
{ ... }

Now:

/**
* @return log_Core
*/
public function getLogger()
{ ... }

Of course, well-named functions help.

I have had two changes of mind through my career as a software developer, relative to what I was taught in school and university.

Like many things in life, these changes came from experience and observation, and the two are contradictory (just like life!).

More or less, the first one describes why/when to use "big systems" over "small systems", and the second describes why "proprietary systems" sometimes have advantages over "standard systems".

I know it's a bit of a long/philosophical answer, but you can skip to the "In conclusion" parts!


ONE: "Small/Indie" software is just as good as "Big name/Standard" software.

I always wondered why companies use big-name software from Microsoft, SAP, Oracle, etc. that costs a lot of money to develop for and to license.

I learned a valuable lesson from someone who preferred to pay A LOT OF MONEY for an Oracle DBMS rather than use MySQL, which would have been perfectly sufficient for the cause, because the project only needed to store a very small amount of data.

Basically, when you use "Big name/Standard" software like SAP, Oracle or Microsoft, you are buying the kind of "security" best summarized as "30 years from now I will still find developers for SAP".

Smaller companies can go bankrupt, and then you have a problem maintaining your software system over a longer period. Maybe the "small/indie" software will do the job, but you can't be sure it will still be supported next year.

I've seen it numerous times that a software company (even a bigger one) goes under, and you suddenly have problems getting support and/or developers (at a reasonable price) on the market for your software system.

In conclusion: there are good reasons, such as security or support, to use "Big name/Standard" software, even if it is expensive and has its own problems.


TWO: Software language/concept/system X is the only right way to do things.

In my younger days I was a purist. Everything had to be this or that, with no grey areas in between. E.g., I did everything in C++ (MS Windows), then Java (Windows/Web), then PHP (Linux/Web), etc. - even ColdFusion (Windows/Web) or ABAP (SAP).

Now I don't think there is one "right way" to do things. I'm now more of a generalist than a purist. I'm also very sceptical of the large libraries provided by Java etc., or of systems like the software layers built on top of PHP.

I'm also very sceptical of the OO mantra that seems to have been accepted everywhere. OO is great in its own way, but it's not THE solution to every problem. I live by the KISS (keep it simple, stupid) principle, and I often find it very hard to learn all the classes/functions of a certain language just to do simple things for a small website project. E.g., I'm always wondering why JSP is used for small, simple projects that could be done with PHP in a fraction of the time.

So today I'm very sceptical of large/slow/high-overhead software systems... for small projects it is often better to do things yourself than to overkill everything with a large framework whose functionality then has to be tailored down to suit your needs.

Most of the time I'm faster developing a website with database connectivity from scratch (e.g. in PHP) than implementing it with an (expensive?!), complex and HUGE library (e.g. JSP), because most of the features aren't even useful.

For example: you want to use weblog software X on your website, which is pretty cool because of the built-in functions like RSS export, web services, etc. BUT there is a serious overhead in learning all the library functionality and conventions of the weblog software... yes, once you have finally understood it, you can use all the sweet functions and features... but in about half that time you could have built the 10% of the features you really need from scratch.

In conclusion: keep it simple, stupid works. Many times a simple (even if cruder) solution is better than a complex (but "nicer") one. Use the tools best suited to the situation, not a fixed mantra.

TDD and unit tests in general. At one point I was the advocate for TDD at my workplace, but over time I learned that it really does not bring anything to the table, at least with a statically typed language.

Don't get me wrong, I still think automated functional tests are very important to have.

Checked Exceptions

An amazing idea on paper - the contract is defined clearly, with no room for mistakes or for forgetting to check for some exception condition. I was sold when I first heard about it.

Of course, it turned out to be such a mess in practice - to the point where we now have libraries like Spring JDBC, which has hiding legacy checked exceptions as one of its main features.

Compact code.

I used to love getting any given function down to the absolute essentials, and often had nested function calls to reduce the line count.

After having to maintain code a few years old, I realised that reducing the line count simply made the code less readable, and taking shortcuts only resulted in pain down the track!
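
A small, made-up C# example of the kind of compaction I mean:

using System.Linq;
using System.Text;

public static class NameFormatting
{
    // Compact: one line, nested calls, no names for the intermediate steps
    // (and it quietly blows up on double spaces).
    public static string Initials(string fullName)
    {
        return string.Concat(fullName.Split(' ').Select(w => char.ToUpper(w[0])));
    }

    // Longer, but each step is visible and easy to step through or change.
    public static string InitialsReadable(string fullName)
    {
        var words = fullName.Split(' ');
        var initials = new StringBuilder();
        foreach (var word in words)
        {
            if (word.Length == 0) continue;
            initials.Append(char.ToUpper(word[0]));
        }
        return initials.ToString();
    }
}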

Documenting the code with extensive inline code comments. Now I follow Uncle Bob's view that the code should be self-documenting: if you feel the need to write a comment about certain piece of code, you should refactor the code instead to make it more legible.

Also, code comments tend to get out of sync with the actual code they are supposed to describe. To quote Uncle: "the truth is in the code, not the comments".
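
A minimal C# sketch of the idea (the types are invented): instead of writing a comment to explain a condition, extract the condition into a method whose name says the same thing.

public class Customer
{
    public int YearsActive;
    public int UnpaidInvoices;
}

public class Order
{
    public Customer Customer;
    public decimal Total;
}

public class Pricing
{
    // Comment-driven version: the comment compensates for an opaque condition.
    public void ApplyDiscount(Order order)
    {
        // loyal customer (more than five years) with no unpaid invoices
        if (order.Customer.YearsActive > 5 && order.Customer.UnpaidInvoices == 0)
        {
            order.Total *= 0.9m;
        }
    }

    // Self-documenting version: the condition is a well-named method, so the
    // comment becomes unnecessary.
    public void ApplyDiscountRefactored(Order order)
    {
        if (IsLoyalCustomerInGoodStanding(order.Customer))
        {
            order.Total *= 0.9m;
        }
    }

    private static bool IsLoyalCustomerInGoodStanding(Customer customer)
    {
        return customer.YearsActive > 5 && customer.UnpaidInvoices == 0;
    }
}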

Highly recommended book: Clean Code: A Handbook of Agile Software Craftsmanship

Never commenting code, hoping to always rely on the notion that code should describe itself.

When I first started programming I quickly adopted the idea that extensive comments are useless, and that instead code should be written in such a way as to describe itself. Then I took it to an extreme, where I would never comment code.

This works well, at times, for code representing a business domain, because the detailed documentation needs to live somewhere else (like a DSL, or a document) and the meanings of class members are obvious. However, when developing more 'frameworky' code it becomes harder to infer meaning. This is true of myself looking back at my own code, not to speak of others needing to use it. I certainly use the comments for .NET Framework classes and other frameworks, so why shouldn't I write them for my own frameworks? Normally I only comment classes, or methods that have non-obvious characteristics, certain dependencies on parameters, or special types of behavior.

Moreover, I realized that commenting certain types of classes facilitated my thinking process. When I am able to verbalize the purpose and characteristics of a class, I may also rethink its entire existence.

In effect, on the spectrum from no comments to essays for each code block, I have inched away from no comments, toward reasonable and effective use of them. In the future, when the language itself allows for the declaration of more rules, use cases, etc. (such as DbC, or more use of expressions over statements), the need to comment will diminish even further. In the meantime, comments remain useful.

No duplication / code reuse. I fell for this big time. Duplication is fine if it creates less work overall than the work needed to remove it. In some ways this is a type of over-architecture.

Writing my code in Spanish.