I'm currently looking into which library I'm going to use to handle authentication in a breakable toy project. Now, despite it being just a breakable toy, I want to do it with as few compromises on technical quality as possible, because I want to maximize the learning experience I'm going to get from it. That means I don't just want to quickly put something together that merely works. I want something that works, but that would also hold up in real-world scenarios, even though the project will at best only be used by myself. So I'm going to be picky about any libraries that I take a dependency on, just as I would if this were a project that I'd be getting paid to work on.
So as I was browsing through a few possible alternatives for my authentication needs, I started thinking about my thought process when evaluating libraries/frameworks to use. I generally base my decision on the following items, listed in order of importance (to me).
How well does it work for my scenario?
If a library satisfies all other items on this list, that certainly doesn't mean it's an automatic lock. How it works and the impact it has on my code is definitely the most important factor.
I've noticed that I let the number of watchers/forks on sites like GitHub influence my opinion. If a project has many watchers and many forks, odds are high that there's a relatively large group of happy users as well as people involved with the project. It also increases the odds that the project will be around for a while. Of course, inactive Open Source projects often remain available as well, but if nobody's working on it, I'm not exactly tempted to take a dependency on it. Log4net is a notable exception to this, obviously. But when a project has a lot of people interested in it, or better yet, contributing to it, it's a good sign that you'll easily get help if needed, that it's only going to get better in the long run, and that it might get forked should the original developers stop working on it. As the author of an Open Source project that doesn't have a lot of watchers/forks (Agatha), I'm aware that my point of view on this is rather hypocritical, but hey, it is what it is.
I don't have the time to do an in-depth review of the code, and I'm sure most of us don't either. But I do like to glance over the code to get a general feel for its quality. I focus mostly on the clarity of the code and also keep an eye open for sloppiness or downright WTFs. The questions I'm mostly trying to answer when doing this are: "is this code I'd like to try to improve or fix if I need to?" and "how easy would it be to debug this when I need to troubleshoot some non-obvious issue?".
Location of code and issue tracker
A lot of people will probably take issue with this, but I consider it a major plus if the project is on GitHub. Not just because of my personal preference for GitHub, but because they truly encourage people to collaborate and contribute to projects, and they make it very easy to do so. Also, the site is fast! I cringe when I have to look over issues of projects on CodePlex because it's just terribly slow, and the UI doesn't come close to that of GitHub either. I've heard that Bitbucket is pretty similar to GitHub, but I've never even looked for projects there. In any case: I want to be able to download the latest version of the code, or of a particular branch if I need to, at any time and as easily as possible. I also prefer an issue tracker that is fast, responsive and easy to search. It doesn't have to be GitHub, but those two requirements are important to me.
If it's GPL, I don't use it. Also, I check whether or not a commercial license needs to be purchased when you want to use the library/framework in production. Pay attention to dual-licensed projects because that Open Source license might not apply to commercial/production use!
I'd love to hear your thoughts on this. Did I miss any important factors? I just quickly put this post together so it's likely that I missed some good ones :)
A lot of developers working in enterprise environments are used to not getting the right hardware that would make them more productive. You know how it goes: Operations likes to 'standardize' the hardware that gets rolled out and in too many cases, no exceptions are made for developers. Which is a shame, really.
The simplest, and probably most effective, example is giving developers SSDs instead of regular disks. I'm willing to bet that using an SSD instead of a regular disk could save an hour of lost time per day, per developer, on average. Suppose the hourly cost of a developer is 80€, and suppose you have 30 developers in total. That's a waste of 2400€ a day. Now suppose you have to buy 30 SSDs at 300€ each, which would set the company back 9000€.
That 9000€ investment will have paid itself back after 112,5 man-hours of saved time, which is less than 4 calendar days if you have 30 devs. After those few days, the investment starts reducing development costs by 2400€ a day (on average), which over a full year (using 220 working days) results in a yearly saving of 528000€ for the company.
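The back-of-the-envelope math can be sketched in a few lines. The hourly rate, the hour saved per day, the drive price and the 220 working days are the assumptions from the text, not measured numbers:

```python
# SSD ROI sketch, using the assumed numbers from the text.
HOURLY_COST = 80          # € per developer-hour (assumption)
HOURS_SAVED_PER_DAY = 1   # hours saved per developer per day (assumption)
DEVELOPERS = 30
SSD_PRICE = 300           # € per drive (assumption)
WORKING_DAYS = 220        # working days per year (assumption)

investment = DEVELOPERS * SSD_PRICE                             # one-off cost
daily_saving = DEVELOPERS * HOURS_SAVED_PER_DAY * HOURLY_COST   # € recovered per day
payback_days = investment / daily_saving                        # calendar days to break even
yearly_saving = daily_saving * WORKING_DAYS                     # € recovered per year

print(f"Investment: {investment}€, payback: {payback_days} calendar days, "
      f"yearly saving: {yearly_saving}€")
```

Even if the hour-per-day estimate is off by half, the payback period only doubles to about a week.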
Of course, Operations is usually a separate division from those responsible for development, which means they have a separate budget sanctioned for what they need to do. Now suppose that Operations has to spend about 5000€ per year to replace failed SSDs. And let's assume that the yearly support cost for those SSDs is 20000€. I pulled that number out of my ass, which unless I'm mistaken is standard operating procedure whenever an Operations division gives you an estimate of support costs. Anyway, it would mean that on top of the 9000€ investment, a yearly recurring cost of 25000€ would need to be budgeted. And for what? The Operations division can't demonstrate a single benefit for their own division to justify the cost.
Nevermind the fact that the company would save about 500000€ a year...
I recently referred to an interesting .NET job as a 'non-typical .NET job'. I hadn't used that term up until that point, so I thought that was rather interesting. But what exactly do I mean by 'non-typical .NET job'? It's pretty simple really: a job where you're using .NET technology without blindly following the guidelines, recommendations and software from Microsoft on how to develop software on the .NET platform. It basically means that you'll use whatever you think is most appropriate for what you're trying to do.
The biggest problem in the .NET world is that most companies that do .NET development just stick to what Microsoft tells them to use and how to use it. Many .NET developers largely focus on that, because they know all too well that it increases their odds of getting hired. And let's face it: Microsoft has a solution for practically everything. The only problem is that those solutions are rarely the best at what they're trying to solve. But hey, no manager gets fired for going with Microsoft, right?
The result is that there are too many companies and too many developers that focus only on what Microsoft offers. But there's a lot more to software development than what Microsoft offers, or even knows about. There are countless examples of Microsoft being late to whatever technical party is interesting at the time. And when they show up, they certainly don't always make a good impression.
If you're the kind of developer who likes to learn from what other software development communities are doing, odds are high that you're screwed in such an environment. There is an interesting OSS community within the .NET world, and it frequently produces great solutions, quite often based on success stories coming from other development communities. The problem is not that .NET developers don't have great solutions available to them. The problem is that the majority of them simply don't know about those solutions because there hasn't been any Microsoft hype around them, or that the devs who do know about them aren't allowed to use them because their managers are sceptical, most likely also because there's no Microsoft backing for the technology or architectural style being proposed.
I'm not advocating the avoidance of Microsoft products or solutions. By all means, use Microsoft products if they are indeed the best solution to your problem. But do be aware of the things that are getting attention outside of the Microsoft sphere and use them when it makes more sense to use them. That's the essence of the 'non-typical .NET job' and that's exactly what makes it interesting: using the right tool for the right job.
In my last 2 years of high school, I was lucky enough to have 13 hours of programming-related classes every week. Those classes covered a few languages, but most of the time was spent on C/C++ (and nothing advanced either… mostly basic stuff) and Visual Basic. We thought we learned a lot, and I guess from a purely syntactical point of view that may have indeed been true. But we didn't really learn a lot about the differences between good code and bad code.
We mostly cared about getting our assignments working. When you completed an assignment, you moved on to the next one and none of the code of the previous assignment really mattered anymore. Maintainability or even readability were things that never even occurred to us. Maybe a few of the teachers tried to tell us about it at some point, but it certainly didn't stick.
In the final year, we could pick a project to work on for the majority of the year. There were no limits on the assignment, so most people picked something that interested them. I was pretty interested in manager/simulation games, so I figured I'd just build my own Formula 1 manager game. The player would get full control over an F1 team, which in my game meant:
- picking the suppliers for tires, fuel and engines, all of which had varying levels of quality and costs associated with them
- hiring a chief designer, a chief mechanic and drivers, each with a specified talent level; negotiations took into account the results of the team's previous year as well as those of the potential hire's previous team, which influenced prices and negotiation tactics
- landing sponsors, all of which had varying budgets; some were more than happy to sponsor your team depending on last year's results, while others wouldn't even answer your calls
I came up with a pretty good formula to simulate realistic race results based on all these factors as well as a dose of luck, or bad luck. If you played for a few seasons, the decisions you made all added up nicely, and gradually. When the project was due, it all worked. Since I loved playing those kinds of games, I tested it very extensively by playing it regularly. When issues came up, I'd fix them. When I thought of improvements, I implemented them. I delivered something that worked, and that was large enough in scope and complexity to be impressive for a school project.
But the codebase was truly atrocious. I wasn't using proper structures for either data or behavior, and I ended up with a shitload of extensive multi-dimensional arrays. All of my data lived in those arrays, as I just didn't know yet that it should've been modeled as types of its own. Of course, I listed all of the indices on pieces of paper so I'd know which data properties corresponded with each index. Behavior was all implemented in the UI layer. I was using VB5 at the time, which didn't really encourage me to seek out nicer ways to structure my code. I copy/pasted large gobs of code whenever I needed to reuse something. And of course, whenever I had to make a change, I needed to do it in multiple places. Quite a few places, in most cases. I was using flat files to store data, so I didn't have the opportunity to make a horrendous mess out of a SQL-based data layer, but if I had used a database, my abuse of it would've surely been epic.
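For anyone who hasn't lived through this anti-pattern, here's a rough sketch of the difference, in Python rather than VB5. The field names and numbers are made up for illustration; the point is that in the array version only a note on paper tells you what index 2 means:

```python
# The array approach: every driver is a row of anonymous values, and
# the paper note says "index 0 = name, 1 = talent, 2 = salary".
drivers = [
    ["Schumacher", 95, 2_000_000],
    ["Villeneuve", 88, 1_500_000],
]
total_salaries = sum(d[2] for d in drivers)  # hope index 2 really is salary...

# What the data was crying out for: a type of its own.
from dataclasses import dataclass

@dataclass
class Driver:
    name: str
    talent: int
    salary: int

typed_drivers = [Driver("Schumacher", 95, 2_000_000),
                 Driver("Villeneuve", 88, 1_500_000)]
total = sum(d.salary for d in typed_drivers)  # self-documenting
```

Both versions compute the same number, but only one of them survives the paper note getting lost.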
I got a very good grade for the project, and while I was happy to see that all of my hard work was rewarded, it did kinda scare me that a codebase that was so brittle and so painful could actually work, and work well even. It was my first encounter with what is by far the biggest problem with software development to this day: if you keep adding code, sooner or later it might actually work. Luckily, this was just a school project. But can you imagine if this were a project done by a professional company? That codebase would've cost money to develop, and the company would actually have a working project, no matter how expensive it would've been to keep working on it. At the time, I couldn't imagine a company in such a situation that would say: "ah, it sucks… we need to start over". And I think we all know that very few companies would actually do that.
I love that horrendous codebase for the lessons it taught me so early on. I learned a lot about what you shouldn't do when you're developing something that's supposed to stick around for a while, and I'm glad I learned it without it costing a lot of money. Of course, that doesn't mean that every single thing I've written since is all roses and peaches, but it put me on the path of trying to continuously improve as a developer, not only in what I was able to do with code, but also as to how clear I could make it to others.
Note: I no longer have that codebase unfortunately. Really wish I did, if only because it'd be pretty funny going through it after all these years.
Depending on who you ask, software developers have been programming for 40 to 50 years. And we still have no truly objective way to measure a developer's productivity and quality level. People can think or say that a developer is fast or productive, but they can't truly quantify it. People can think or say that a developer is great, good or bad, but again, they can't back it up with numbers. We often look at other metrics to give us an idea of quality or productivity, such as code metrics or velocity, but those numbers often reflect the efforts of a group of people. After all, a codebase is usually the result of the efforts of more than one person, and a team's velocity is typically the result of the productivity of, well, a group of people. Those numbers can't really be used to gain insight into the quality or productivity of a specific developer.
Not only is the current way of trying to determine the productivity or quality of a developer rather subjective, it's quite often based on things that are only one part of the equation: what they did, and the time in which they did it. I've always felt that that only tells you half the story. I don't place much value on knowing how long a developer took to write a certain piece of code or to complete a specific feature. I also don't really care a lot about the code metrics for that piece of code or that specific feature. All I know is that it took some effort. The quality or productivity of that effort can't truly be measured without knowing how much extra effort will be introduced later on because of it.
Suppose you have two developers, John and Sally. You give them both the same task and you tell them that the estimated workload for that task is 12 hours. John completes the task in 10 hours, and Sally needs 14 hours to complete it. It would be easy to conclude that John is more productive than Sally based on this, especially if results like those were to repeat quite often. But what if John's solution requires an extra 6 hours of effort down the line to fix some bugs, or because his code was such a mess that it slowed down future tasks which required another developer to comprehend it? That would put John's eventual total at 16 hours of effort, instead of the originally reported 10 hours. And what if Sally's solution didn't introduce any future effort whatsoever? Hell, maybe Sally's solution saved someone else an hour here or there because her code was so easy to comprehend or perhaps even to reuse. Her total for that task remains at 14, while perhaps even reducing future numbers. In this scenario, Sally is clearly a more productive and better programmer than John, though we wouldn't know it if we based our judgment on the initial numbers.
Consider another scenario though. John again spends 10 hours on the same task, and Sally again spends 14 hours. John kept it as simple and as straightforward as he could. Hell, he actually took a shortcut or two to finish faster. Sally is an outspoken member of the Software Craftsmanship movement and stays up to date with all of the latest and greatest patterns and approaches that are making their rounds on the internet. Sally spent a little bit of extra effort to make sure the code is as great as it possibly can be. It's incredibly readable and uses a great new pattern that all the cool kids on the internet rave about. The code metrics clearly show that Sally's solution is of a much higher quality. Unfortunately, it turns out that the other team members have trouble understanding that code, and they often lose time trying to figure it out whenever they need to do something involving that part of the codebase. While Sally may claim that she took more time because she did it 'better', the extra effort wasn't worth it: no matter how good her intentions with that code were, it ended up costing other people time. And what if John's shortcuts didn't introduce any future effort?
There are countless variations on this scenario, which makes it virtually impossible to measure productivity or quality based on hours spent or code metrics alone. The only way to objectively measure this is to define a new metric which takes into account the extra effort that will be introduced later on. I'd call this metric Eventual Efficiency. Quality or productivity by themselves don't really mean anything if one influences the other negatively. Efficiency, however, can be seen as a combination of the two. Did you write truly fantastic code which ended up not mattering at all? Not efficient. Did you write mediocre code that got the job done and didn't introduce future effort (or hardly any)? Sounds pretty efficient to me. Did you write great code that reduced future effort? Very efficient! Did you quickly write bad code that ended up introducing a lot of effort? Not efficient at all.
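One hypothetical way to put a number on Eventual Efficiency — the formula and names here are my own sketch, not something formally defined — is to divide the estimated effort by the total effort the task eventually causes, downstream work included:

```python
def eventual_efficiency(estimated_hours, initial_hours, downstream_hours):
    """Hypothetical metric sketch: estimated effort divided by the total
    effort a task eventually causes. Downstream fixes count as positive
    hours; downstream savings would count as negative hours. Higher is
    better; 1.0 means the task cost exactly its estimate, all things
    considered."""
    return estimated_hours / (initial_hours + downstream_hours)

# John and Sally's first scenario, with a 12-hour estimate:
john = eventual_efficiency(12, 10, 6)    # 10h up front + 6h of fixes
sally = eventual_efficiency(12, 14, 0)   # 14h up front, no downstream cost
```

With these numbers John scores 0.75 and Sally about 0.86, which matches the intuition of the scenario: the developer who looked slower up front was the more efficient one overall. The unsolvable part, of course, is attributing those downstream hours back to the task that caused them.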
Unfortunately, we'll probably never get to the point where we can actually measure the Eventual Efficiency of developers. Until we do, it's worth keeping in mind that whatever numbers we might be looking at don't really mean anything without the link to the related future effort.