Almost exactly a year ago, the merger between Cloudera and Hortonworks went through.
I've had a fairly close inside view of the process, and the rollercoaster has been fun, or at least never boring. Along the way, I kept returning to these four questions.
Four questions which are like an organizational Rorschach test. But they all look like butterflies to me.
Do you want to release on schedule or by bug count?: There are fundamentally good reasons to ship software only when its bug count drops below a certain threshold, but that usually applies to a single development line, not an entire ecosystem distribution. The customers aren't all the same, and they don't care about problems in the same way across the components. Shipping a release with bugs is so much worse when you are planning a big-bang release with another six months before the next attempt at fixing the problems. However, there are two assumptions hidden in this: that new adopters will wait for a release instead of exploring alternatives during the delay, and that the internal bug count is a good proxy for the issues that customers will encounter.
There are immediate organizational downsides as well. Any team that meets its bug count targets for the expected due date through the personal heroics of its members will feel disappointed: either they worked too hard, or they took on tech debt for workarounds they didn't need. And the team that is currently holding up the release will get pressure from all sides, and possibly a bad review in the future.
Of course, there's still no good answer, because you can't just ship whatever you have because it is the second Tuesday of the month. There's quite a lot of balancing between hitting the dates and closing all the release blockers. In general, most code-quality discussions circle around this particular trade-off point.
If there's an argument, is it better to build or discuss?: Decision making is always full of conflict. Technical discussions tend to be easier to tie-break, since measurements are possible without involving external entities, unlike, say, advertising or marketing ones. That said, specifications left on the table without implementations tend to grow outwards, either adding more scope to the project or tackling specific conditions imagined during a meeting. A design discussion is not the right place to shift the scope of a project or to add customer scenarios into the mix, yet arguments that expand the implementation are routinely talked through there. These are easily recognizable when applied to cross-cutting features like authentication or authorization, which every team gets to provide input on.
The main reason these arguments end up in discussions, rather than arriving at an implementation which can be criticized more constructively, is that the discussion happens between non-implementors, usually architects or senior engineers. The people who would be responsible for putting together a prototype are rarely in the meeting, and even if they were, they are happier to evade being targeted by the engineers in charge of oversight.
Discussions are useful to clarify disagreement. And on this note, Chesterton's fence is very much applicable. The implementation is often surprise-heavy and ends up having to bypass several decisions made in discussions. Empowering the implementor to communicate disagreement with the designer is the most important communication pathway I've observed.
Do you organize teams by skills or involvement?: Before I get into it, let's talk about specialists versus generalists. For pure reduction of conflict within a team, specialists are better than generalists, since each person has clear responsibility for their own area of expertise with no involvement with others on the team, either to approve or to disagree. Naturally, this results in teams fragmenting into niche skill specialists, and leaves the organization to manage staffing by either over-staffing specialists or running the team under-staffed when someone is on vacation or quits. From a skip-level up the ladder, that's where the utility of folks like me comes in, since I don't mind being thrown into a problem which requires reskilling (at the expense of some conflict with established patterns, with questions like "why do we always do it this way?").
Assuming you have a team of specialists and a single manager, the immediate problem comes up when any of the specialists has a "career" conversation with you. The next level up is management, at which point the specialist will be tackling a team where they are familiar with only a fraction of the skills. Having specialists occupy their own organizational structure and float between projects is a way to bypass that issue, since the size of the project is not related to the location of the specialist in the org chart. But that reduces the involvement of the specialist in the team, where the success of the project is only indirectly tied to their future prospects. There is some middle ground here, but it needs to be found anew for each growth stage of the organization.
Is it better to have big plans or small plans?: Project planning isn't quite war, but it still holds that the map isn't the territory. The ideal size and scale of a plan varies as you move between its approval and its execution. Big plans tend to motivate leadership, while they tend to overwhelm the foot soldier who can't quite see the map. The difference in software engineering, however, is that a big plan can come from the other direction: engineers wanting to do complete rewrites to improve productivity and implement features faster. Because the rate of disruption over a month remains roughly the same, a big plan is the more likely to fail, since priorities change more over two quarters than they do over one.
It is easy for an organization to panic when objectives sit further away than your plans can reach. The comment "I don't see how your plan is going to get us there" isn't the end of the discussion; it is merely an opportunity to admit that, from the hole you are currently in, a better plan can come only after you climb out of it. Again, there isn't a good size for a plan; there are plans within plans and all that. And there are ways to meaningfully go ahead with feature flags and fallback mechanisms, where the big plan can roll out in stages and an unexpected event is merely a pause in the process.
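To make the feature-flag idea concrete, here is a minimal sketch of a staged rollout with a fallback path. All of the names here (the flag name, the planner functions, the rollout table) are hypothetical, just to illustrate the shape: the new code ships dark, gets enabled for a percentage of users, and any failure quietly falls back to the old path.

```python
import hashlib

# Hypothetical rollout table: flag name -> percentage of users enabled.
ROLLOUT_PERCENT = {"new_planner": 10}

def flag_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket a user into [0, 100) for this flag."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < ROLLOUT_PERCENT.get(flag, 0)

def new_planner(user_id: str) -> str:
    # Stand-in for the big-plan rewrite.
    return f"new:{user_id}"

def old_planner(user_id: str) -> str:
    # Stand-in for the proven existing path.
    return f"old:{user_id}"

def plan(user_id: str) -> str:
    if flag_enabled("new_planner", user_id):
        try:
            # The big plan, rolled out one stage at a time.
            return new_planner(user_id)
        except Exception:
            # An unexpected event is merely a pause, not a rollback
            # of the whole plan: fall through to the old path.
            pass
    return old_planner(user_id)
```

Because the bucketing is deterministic, a given user sees a consistent experience at a given rollout percentage, and ramping the percentage up (or down to zero) is a one-line change rather than a big-bang deploy.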
And then: These questions are important to me, not because they have right or wrong answers, but because your answers tell me what your experience and perspective on software development are. And then, perhaps, what questions to ask me next.--
“I would rather have questions that can't be answered than answers that can't be questioned.”
― Richard Feynman
In 2018, I spent more time interviewing than I have at any point since 2004.
I went through the entire interview loop at five companies in total. At two of those places, I had two cycles of interviews, as I got referred sideways by the original interviewers. Including the recon visits, lunches and phone screens, I spent about two whole work weeks of my 2018 talking to recruiters, managers, engineers, architects and directors.
One half-hearted offer (pay-cut included) and one golden one (wow) later, I'm still working for Hortonworks till it isn't. To make sense of this, I'm trying to distill those two weeks into something that can be bottled for the top shelf.
First up, the Bay Area is special and you are not.
Silicon Valley pulls in technical talent from across the planet. It might be the most expensive place to hire people, but the companies I interviewed at can afford it. If you are looking for a low-level systems engineer who works on performance problems and understands distributed systems, you might be looking at a pool in the high hundreds. Rejecting an almost-perfect candidate isn't much of a problem here, because give it a couple of weeks and the recruiter will dig up another prospect locally or, at least, find someone who wants to move here.
From that perspective, there's no reason to hire someone like me to work on CDN route optimization or a large-scale object store when they can just snipe people burning out at Amazon or tired of not being promoted at Google. There's no need to find someone who will learn things quickly or grow; you can pick out people who've spent years completely conquering their niche and employ them for two years, four tops.
In short, if you want to do something new, amazing and interesting, find a startup, cut your pay to nothing and unbalance your work-life; don't come looking to a big company to take a bet on you, go to a VC or someone beholden to them. And there's nothing wrong with that approach, just that it is very different from the tech bubble over in India.
Second, I've got "advanced impostor syndrome". I've got enough knowledge to make myself dangerous, but not enough to be a renaissance man.
Performance and debugging are really wide fields where you spread yourself thin, except where you go way too deep. Nobody out there can know all of what's necessary to do that work, and if they did, they'd be out of date in six months anyway. Actually, you don't need to know it all, but you have to know enough to guide your search and narrow a symptom down to a problem. The real skill is using your intuition to ask better questions and find ways to test them, temporarily holding mental models to work out what's happening over time across multiple layers of user code, virtual machines, system libraries, kernels and hardware.
At this point in my life, I probably know three things about everything anyone could ask me in an interview, but to the real expert in the field, that is good but never enough. I'm full of anecdotes about how a particular standard solution to a problem doesn't actually work, because other considerations mess with the assumptions hiding in it. And then a lot of anecdotes about how theoretically impractical problems do have actual solutions, but only at the scale of the current use-case. And then some more about how approximations work better, because they work well enough to be an answer in the real world (yeah, who cares about a 0.001 pixel difference?).
My point being that the war story chest doesn't make me more employable, but to me, those stories represent instances where I learned new things. A small hint that learning didn't stop the day I graduated and walked out with a clear recollection of all the binary tree algorithms anyone would ever need for leetcode. If next week someone threw a bunch of Rust code at me and asked me to fix it, I'd find a way, but not in 45 minutes and probably not with code written on a whiteboard. However, as I learned, that is not relevant to finding a job in Silicon Valley, at least not at the places I interviewed.
From that perspective, my choices have turned me into an inveterate beginner. Being able to learn, absorb and get to answers on something completely new to me is exactly what my current job demands. And that means my skill set is starting to leave the realm of classification or specialization. For anyone trying to box me in, I don't quite fill any one box, and I straddle too many.
That makes me feel like I don't belong: an impostor syndrome, but an advanced one.--
A company of wolves is better than a company of wolves in sheep's clothing.
-- Tony Liccione