Something I've noticed over my years of developing stuff for people is the split between those who love pipelines built from lots of small modules and those who prefer large, monolithic app pipelines.
The small app approach is a pipeline made up of lots of little apps - each app modifying data and passing it on to the next one, usually under the control of some master calling process that marshals the data and ensures that only the data that *needs* to be processed actually gets processed - it handles dependency checking and so on. A good example of a control process might be SCons, a Python-based build system, or KJam.
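To make that controller idea concrete, here's a minimal sketch in Python of the dependency-checking part: run a stage only when its output is missing or older than its input. The stage and file names are hypothetical, and this timestamp check is a toy version of what a real tool like SCons does (SCons tracks content signatures and whole dependency graphs).

```python
import os
import tempfile

def stage_upper(src, dst):
    # A toy "stage": read the input file, transform it, write the output.
    with open(src) as f:
        data = f.read()
    with open(dst, "w") as f:
        f.write(data.upper())

def run_if_stale(stage, src, dst):
    # The controller's job: only run a stage when its output is missing
    # or older than its input, so only data that *needs* processing gets
    # processed.
    if not os.path.exists(dst) or os.path.getmtime(dst) < os.path.getmtime(src):
        stage(src, dst)
        return True   # stage actually ran
    return False      # output was up to date, so the stage was skipped

tmp = tempfile.mkdtemp()
src_file = os.path.join(tmp, "raw.txt")
dst_file = os.path.join(tmp, "cooked.txt")
with open(src_file, "w") as f:
    f.write("shader source")

ran_first = run_if_stale(stage_upper, src_file, dst_file)  # runs: no output yet
ran_again = run_if_stale(stage_upper, src_file, dst_file)  # skipped: up to date
```

Chain a dozen of these stages together, each reading the previous one's output file, and you have the small app pipeline in miniature.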
A more monolithic approach is a large app that basically does all your processing for you - everything that does data transformation is within the app. You just call that one big app and sit back and wait till it's done.
Now the advantage of the small app approach is flexibility. Want to change how shaders are generated? Sure, just replace that module with another one. As long as it can handle the file formats as they are forwarded on, you are good to go.
Another good thing about this process is that the data is usually held on disk in file form between applications, which makes debugging that much easier. If one node in your tree is doing bad things, you have the trail of data to look at to work out which node is doing The Bad Stuff(tm).
Another advantage is that the module code can usually be incorporated into other modules relatively easily. Have a materials processing module? Great, you can probably wrap that up into a Maya plugin relatively easily, because the code was designed as standalone code in the first place.
Cons of this approach include speed: since each stage is an individual app, every one of them needs to start up, load its data, process it, save it and pass it on to the next. That's both time- and bandwidth-consuming, with so many temporary files being loaded and saved. It's also loaded with points of failure - any node could be the wrong version of the application, or someone modified a node and didn't test the entire pipeline to ensure it plays well with others.
Another issue is lack of consistency. While a well-run project has very defined parameters for how modules are built, there is often enough vagueness that developers create their own ways of doing things - how a module logs errors, or what languages it uses ("Oh, this bit is Python, but that calls this Perl module that then accesses this other website"). The lack of an overall framework often results in each module having its own set of very specific dependencies on 3rd party code / feature sets. Sometimes these dependencies can even be at odds with what *other* nodes in the network require.
One last problem that can crop up is an extension of the internal dependencies problem - what works in isolation doesn't work in combination. For example, a module that's written to work on its own can't be compiled into another module because both use the same libraries, but at different versions. There's an external library collision - then what do you do? Because everything is built in isolation, there is far less forcing of conformity in library usage. The classic "Well, it works on my machine" problem.
The advantage of the large app is that data generally passes from one internal process to the next in memory, which makes it a lot faster than the small node pipeline. Also, all the code is in one place, which greatly reduces the dependency issues - you *have* to load the entire pipeline in order to test new code because, well, it's all in one place.
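For contrast with the file-passing sketch above, the monolithic shape looks something like this: every transform is a function inside one program and data flows between them in memory, never touching disk. All the stage names here are hypothetical, just to show the shape.

```python
def import_mesh(raw):
    # Parse the raw text into a list of vertex names (toy stand-in for a
    # real importer).
    return raw.strip().split(",")

def optimize(verts):
    # Drop empty entries - a stand-in for a real optimization pass.
    return [v for v in verts if v]

def export(verts):
    # Pack into the final format.
    return "|".join(verts)

def build(raw):
    # One big app: call it once and wait. Data moves between stages in
    # memory, so it's fast - but there are no intermediate files left
    # behind to inspect when something goes wrong.
    return export(optimize(import_mesh(raw)))

result = build("a,b,,c")
```

The speed comes from exactly the thing that hurts debugging: there is no on-disk trail between `import_mesh` and `export` to look at.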
The disadvantage of this approach is that code reuse tends to be minimal - when the code for a particular operation lives inside a larger application, it tends to get targeted at that specific application and molded to it - the idea of an independent module with no dependencies gets lost. The code also tends to be way more mission-specific and less flexible than code written for smaller modules, and there is certainly less internal error checking, because you tend to trust the data that is fed in more than an independent module would (although that can also have the plus of speeding the code up a bit - lacking all that value range checking, it just Is Faster).
It's also way harder to debug - you get the logging output from every step rather than just the one you want, and generally have to wade through gobs and gobs of other output to get to the part you care about.
One observation I've made is that an individual engineer's preference for one type of pipeline over the other tends to come from their platform of choice.
Linux is a small-app driven environment - lots of small console apps all strung together to make an operating system. If you've ever seen Linux users string together commands on the command line, piping the data from one into another, you are seeing a microcosm of what the small app pipeline looks like.
Windows users tend to go for the more monolithic application approach, since that's what dialogs and so on are built around. Windows does have DLLs, it's true, so each module could be built as a small app, but as anyone who's done any extensive work in DLLs can tell you, versioning can get out of hand very, very quickly under Windows, and diagnosing it can be a great pain.
My personal feeling leans towards the monolithic app, simply because I'm a Windows user - the small app pipeline just has too many points of failure and too many smaller internal dependencies that all have to be set perfectly for it to work.
Whatever else you may say about Windows, it does at least have far more graceful legacy handling of older formats. Smaller hand-built modules tend to be far less fault tolerant and report less, so when they do fail you have no idea why. The small module approach is definitely The Way To Go in certain situations, but too much dependency on it means you end up with large pipelines with god-awful sets of dependencies within them. One change tested only in isolation can bring the whole thing down.
Just something to think about when you are designing your next pipeline....
Well, this week saw the shuttering of 3D Realms, developers of the Duke Nukem franchise.
There's so much to say about these guys - 13 years and no game? Two engines, gobs of money and development work, and still no game? So many people have had a pop at them over the years, and let's face it, it's pretty easy to do. Wired has awarded them its yearly vaporware award so many times that it's silly.
3D Realms never released a game of their own on the Windows platform. The last internally developed game they shipped (besides re-releasing Duke Nukem 3D on XBLA) was Duke Nukem 3D. They've released other stuff developed by other people - expansion packs - but nothing that was supposed to be their flagship.
But what is less generally known is the work they did with external developers. Max Payne only existed to the degree it did because 3D Realms went in there and helped them out, paying milestones and helping with IP Creation.
Prey was the success it was in part because of the help that 3D Realms gave developer Human Head (and it's worth mentioning that Human Head are an awesome bunch of guys, but they are as dependent on publisher money as the next developer - having a group like 3DR run interference for them was invaluable). It's doubtful that game would be quite the quality it was without 3DR's help.
3DR was more than just "the guys making DNF" - they were a scrappy indie developer who actually walked the walk - they made their own decisions, brooked no interference from publishers and generally were everything an indie was supposed to be.
However I also suspect that was part of their downfall - they weren't making friends with publishers, and publishers do tend to have long memories about that kind of thing.
I also suspect that when they required money to finish DNF, the largely silly amount of time they'd already had on it worked against them - from the publisher's point of view, I can quite see that funding a company that was openly rebellious and had basically proved it couldn't get stuff done on time or on budget probably wasn't a great bet.
Let's be honest here - 3DR probably bears quite a lot of the responsibility for what's occurred - they made their bed and now they have to lie in it.
I have the feeling that there was some brinkmanship over IP rights behind this door closing - it may well be that their distribution partner, 2K, wanted those IP rights, and 3DR, knowing that a developer's only real value is the IP it owns, refused to give them up. 2K, being in the driving seat because they have the money, probably said something like "Well, give us the rights or go out of business", and 3DR, being 3DR, would rather go out of business than give them up for free.
NOTE - this is personal speculation, not any kind of insider info.
But having said that, could nothing have been worked out so the world got to see what was reportedly one hell of a game? I just feel it's sad that the world is deprived of a great gaming experience, and from the point of view of game developers in general, there goes one of the poster boys for indie development.
Possibly as a result of their own hubris and certainly as a result of their inability to actually, you know, get something done and release it. But still, people have lost employment and we've all lost a great game and a poster boy for indie development.
This is a sad day.
Here's a fun little exercise that I call Distillation.
Can you distill the essential wisdom about any given thing / situation / person down into one sentence? Preferably using a common phrase?
Working with other developers - Respect the fact that there is more than one way to skin a cat.
or how about
Dale Carnegie's "How to Win Friends and Influence People" - learn to listen, and find out in advance what your intended target is into so you can ask leading questions.
I was playing this with myself last night, just looking for the lowest level of distillation I could get to on this kind of stuff. Some are kinda "Duh" results - yes, obvious, yet how many people ignore the obvious?
Some more examples.
Friends - Treat them as you want to be treated, but understand everyone is also different.
Making Games - Iteration is key and tools that make that go faster are king.
Writing - it's more important to get words down and then revise them than it is to make them perfect the first time.
You get the idea. Can you come up with examples of your own?...
I'm taking time out of the regular blogging to post a public service announcement.
These are bad. They are not "cool". They are not "fun". They are not "manly". They are stupid. There is no middle ground on this. If you have these on your car / truck, you are an idiot. Please remove them so the rest of the world doesn't have to see how incredibly inane you are.
I now return you to regular blogging.
So there's been a lot of furor recently about crunch mode - Mike Capps, Epic's CEO, stood up at an IGDA meeting and basically said "Screw 40 hour weeks. When I hire people I want people who will, when the chips are down, do 60 hour weeks without complaining." He then poured oil on the fire by attempting to mollify the statement, explaining that Epic has a 2am moratorium - people have to go home then.
This was followed a few days later by an AP at Epic who, in attempting to clarify things, stated that Epic actually schedules crunch.
Now the two statements are connected but I want to treat them separately because for me personally they are two different things.
It should also not be forgotten that Mike Capps is on the board of the IGDA and one of their basic platforms is the Quality Of Life (QoL) issues - where developers are made to work overtime and crunch like mad with no choice or remuneration on the other side.
Now I've talked before about my feelings on crunch - I consider there to be two types. The first is freely done overtime, where people - either individually or banding together - put in extra hours to add the polish, to make something better than it would otherwise be. I think this is awesome, and this is where great - not just good - games come from. I LOVE this passion, and everything that can be done to nurture it should be done (catering in the evening, 24-hour access to the facility, etc).
Then there's bad crunch, which is company mandated and is usually an admission of failure on the planning side - either scope control, bad task duration estimates or revised publisher requirements. I think that statement alone makes my feelings on the idea of 'scheduled crunch' clear. Scheduling crunch is basically saying "We can't plan properly and we aren't even attempting to try", and is, at root, taking time from the developer, who is probably under peer pressure to acquiesce.
Now I realize that things happen in video games. Publishers make sudden demands that weren't factored into planning, or an outsourcing house suddenly vanishes, or a project that was planned simply doesn't work and needs rewriting. We are on the tip of the bleeding edge; of course it's hard to plan for the unknown. Overruns do happen, and having developers who aren't suddenly going to throw up their hands and say "More than 40 hours? Screw you!" is massively useful and helpful. So in that sense I do agree with the first part of what Mike Capps was saying. I want to work with people who are prepared to put the time in to make something better than good. But I don't expect it to be company mandated.
Having developers who want to go that extra mile is essential for polish and great games. It just is - 22 years in this business has reinforced that to me many times. However, there is this implied idea that crunch = great games, and while I can see that there's definitely a correlation, there are also plenty of companies out there crunching like mad and producing lots of crap. It's entirely possible to crunch and waste everyone's time. Crunch != polish. Passion + planning of what you need to do + talent = polish. And in some cases passion + planning + talent + crunch = more polish. But if you can't make something that you know will be good in 40 hours, then 60 isn't going to make any difference.
But companies 'expecting' this and treating it as business as usual is not acceptable either. People's situations change - they might be able to do 60-80 hours at the drop of a hat in their 20s, but in their 30s kids come along and commitments happen, and then what? Should they have to move on because that's the corporate culture?
Well, ultimately, yes, because it's up to the individual to choose where they work and either accept that culture or not. I applaud Mike Capps for actually getting up and saying what the culture is at Epic. People can make their own decisions as to whether to accept this culture or not, because at least they know what it is. The company is what it is, for better or worse - you as an individual can make the choice to go there or not.
I also happen to know that Epic makes everyone's crunch mode very worthwhile - they actually give a small bonus before crunch starts to make it a bit more palatable (not much comfort for those people whose relationships it will destroy, mind you, but a better gesture than you get from almost any other company). And the end bonuses for being on a successful Epic project are impressive indeed - the company does not shrink from sharing its success, which is why they have such a low turnover rate, I'm sure.
But the point should be made that in most cases overtime is effectively a loan of time from the employees to the employer, who may or may not repay it later with bonuses. And let's face it, 95% of the time they do not, because either the game doesn't make enough and/or they just don't see why they should have to.
Expectation of this loan of time from the employees by the company is just wrong. If I, as an employee, were to expect that the company cover my taxes each year and I may or may not pay them back dependent on outside factors, well, they'd never consider it. But they _do_ expect the same from me.
Epic DOES make the crunch worthwhile, but many companies do not. Either way, it's all risk on the employee's side. While I definitely believe the choice of taking that risk should be there for the employee, the expectation that they will just accept it should not.
Now there are also a group of individuals making a lot of furor at IGDA meetings about the fact that, as they see it, Mike Capps is basically pissing all over part of the IGDA's charter - that of QoL. Their point of view is that *any* requirement for overtime, implied or explicit, is wrong and damages the industry. They want a level playing field for everyone regardless. Lots of studies from the turn of the century are trotted out and everyone does a lot of hand wringing.
My thinking here is that yes, there is some hypocrisy in Mike's comments given his seat on the IGDA board. I think even he'd accept that.
But I don't believe that the IGDA (or anyone else for that matter) should be legislating or judging on what is or is not acceptable. If 20 year olds (or 40 year olds for that matter) want to do 60 hour weeks, who the hell are we to tell them they can't? It's a matter for the individual to decide, not a committee.
I have the feeling that those who make the most fuss are those who feel they *should* be good enough to work at somewhere like Epic, but never will because they are so engaged in their own social lives. So instead of saying "OK, I am prepared to make this sacrifice to make great games", they want to level the playing field so they can have their cake and eat it too. Basically, alter the reality of the situation to one in their favor so they can regard themselves as on the same tier as those at Epic.
Good luck with that. All great works require sacrifice. I believe Epic is just making statements to that effect, and while I definitely don't think it's ok to demand that sacrifice up front, I do believe that the environment has to be there that accepts it (and encourages it) if the individual chooses to put that in. I just think that Epic (as others) are looking for people more likely to want to. And there's nothing wrong in that.
There's also a group of people who say "Well, I can do great work in 40 hours, why can't everyone else?" - incidentally the same kind of personality who writes impenetrable spaghetti code full of templates and STL and then sits back and says "What? I can read it, why can't you?" - and who regard the whole thing as a management failure.
They are missing the point a little though. Sure, at an individual level people can get their tasks done in 40 hours (Well some can anyway) but in those situations it's often those kinds of people who would benefit most from the extra 20 hours. If you've already got your tasks done in 40 hours, then the extra 20 is pure gravy to produce more stuff or polish the crap out of what you've got. At that point it's not about frantically working all hours just to get the basics done, it's about truly making the product better and fantastic. Sure, it shouldn't be expected, but a couple of odd weeks of doing 60 hours instead of 40 should make the product even better. Hiring people who *want* to do that isn't wrong or bad.
As long as the company in question is upfront about what their expectations are then I see the industry as self regulating. If your company gets a rep for lots of overtime / death marches with little or no back end then that kind of thing soon gets out there - developers love to talk - and it'll come back to bite you.
I guess ultimately it comes down to this one phrase - "The ends never justify the means" - and that seems to be ignored in this case. Just because success has come with some crunch doesn't make crunch something to be mandated / planned for, because that's just transferring the time / money risk to the individual developer, and no amount of "Aren't you passionate enough?" peer-pressure bullshit will cover a destroyed marriage because you had to go to work 80 hours a week for the past 2 months....