-
When You Think You’re a Fraud
Imposter Syndrome is a lot like alcoholism or gout. It comes in waves, but even when you’re not having an episode, it sits there dormant, waiting for the right mix of circumstances to trigger a flare-up.
Hi, my name is Jeff and I have Imposter Syndrome. I’ve said this out loud a number of times and it always makes me feel better to know that others share my somewhat irrational fears. It’s one of the most ironic sets of emotions I’ve ever experienced. The feelings are a downward spiral where you even question your ability to self-diagnose. (Who knows? Maybe I just suck at my job.)
I can’t help but compare myself to others in the field, but I’m pretty comfortable recognizing someone else’s skill level, even when it’s better than my own. My triggers are more about what others expect my knowledge level to be, regardless of how absurd those expectations are. For example, I’ve never actually worked with MongoDB. Sure, I’ve read about it, and I’m aware of its capabilities and maybe even a few of its hurdles. But I’m far from an expert on it, despite its popularity. This isn’t a failing of my own, but merely a happenstance of my career path. I just never had an opportunity to use or implement it.
For those of us with my strand of imposter syndrome, a gap like that doesn’t always trigger the excitement of learning something new, but colossal self-loathing for not having already known it. Being asked a question I don’t have an answer to is my recurring stress dream. But this feeling isn’t entirely internal. It’s also environmental.
Google published a study recently finding that the greatest indicator of a high-performing team is being nice. That’s the sort of touchy-feely result that engineers tend to shy away from, but it’s exactly the sort of thing that eases imposter syndrome. Feeling comfortable enough to acknowledge your ignorance is worth its weight in gold. There’s no compensation plan in the world that can compete with a team you trust. There’s no compensation plan in the world that can make up for a team you don’t. When you avoid admitting ignorance or asking for help out of embarrassment, you know things have gone bad. That’s not always on you.
The needs of an employee and the working environment aren’t always a match, the same way some alcoholics can function at events where alcohol is served and others can’t. There’s no right or wrong; you just need to be aware of the environment you need and try to find it.
Imposter syndrome has some other effects on me. I tend to combat it by throwing myself into my work. If I read one more blog post, launch one more app, or go to one more meetup, then I’ll be cured. At least until the next time something new (or new to me) pops up. While I love reading about technology, there’s an opportunity cost that tends to get really expensive. (Family time, hobby time, or just plain old downtime.)
If you’re a lead in your org, you can help folks like me with a few simple things:
- Be vulnerable. Things are easier when you’re not alone.
- Make failure less risky. It might be through tools, coaching, automation, etc., but make failure as safe as you can.
- When challenging ideas, propose problems to solve instead of problems that halt. “It’s not stateless” sounds a lot worse than “We need to figure out if we can make it stateless.”
Writing this post has been difficult, mainly because 1) I don’t have any answers and 2) it’s hard not to feel like I’m just whining. But I thought I’d put it out there to jump-start a conversation amongst people I know. I’ve jokingly suggested an Imposter Syndrome support group, but the more I think about it, the more it sounds like a good idea.
-
People, Process, Tools, in that Order
Imagine you’re at a car dealership. You see a brand new, state of the art electric car on sale for a killer price. It’s an efficiency car that seats two, but it comes with a free bike rack and a roof attachment for transporting larger items if needed.
You start thinking of all the money you’d save on gas. You could spend more time riding the bike trails with that handy rack on the back. And the car is still an option for your hiking trips with that sweet ass roof rack. You’re sold. As you drive your new toy home and pull into the driveway, you suddenly realize a few things.
- You don’t hike
- You live close enough to bike to the trail without needing the car
- You don’t have an electrical outlet anywhere near the driveway
- You have no way of fitting your wife and your 3 kids into this awesome 2 seater.
This might sound like the beginning of a sitcom episode, where the car buyer vehemently defends their choice for 22 minutes before ultimately returning the car. But this is also real life in countless technology organizations. In most companies, the experiment lasts a lot longer than an episode of Seinfeld.
In technology circles, we tend to solve our problems backwards. We pick the technology, retrofit our process to fit that technology, and then try to get people on board with our newfound wizardry. But that’s exactly why so many technology projects fail to deliver on the business value they’re purported to provide. We simply don’t know what problem we’re solving.
The technology-first model is broken. I think most of us would agree on that, despite how difficult it is to avoid. A better order is
People -> Process -> Tools
This is how we should think about applying technology to solve business problems, as opposed to adopting technology for its own sake.
People
Decisions made in a vacuum are flawed. Decisions made by committee never get made. Success lies somewhere in the middle, but it’s a delicate balance to maintain. The goal is to identify the key stakeholders for any given problem that’s being solved. Get the team engaged early and make sure everyone is in agreement on the problem being solved. This eliminates a lot of unnecessary toil on items that ultimately don’t matter. Once the team is on board, you can move on to…
Process
Back in the old days, companies would design their own processes instead of conforming to the needs of a tool. If you define your process up front, you have assurance that you’ve addressed your business need and identified the “must haves” for your tool choice. If you’ve got multiple requirements, it might be worthwhile to go through a weighting exercise so you can codify exactly which requirements have priority. (Not all requirements are equal.)
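As a sketch, that weighting exercise can be a few lines of Ruby. Every requirement, weight, and score below is made up for illustration; the point is just that multiplying a per-requirement weight by a 0–5 score makes the trade-offs explicit instead of vibes-based:

```ruby
# Hypothetical requirements and weights (5 = must have, 1 = nice to have).
WEIGHTS = {
  "meets compliance needs"      => 5,
  "integrates with config mgmt" => 3,
  "sub-second boot time"        => 1
}

# Sum of (weight * score) across all requirements; missing scores count as 0.
def weighted_score(scores)
  WEIGHTS.sum { |req, weight| weight * scores.fetch(req, 0) }
end

# Made-up scores for two candidate approaches.
docker_scores = { "meets compliance needs"      => 3,
                  "integrates with config mgmt" => 2,
                  "sub-second boot time"        => 5 }
vm_scores     = { "meets compliance needs"      => 5,
                  "integrates with config mgmt" => 5,
                  "sub-second boot time"        => 1 }

puts weighted_score(docker_scores)  # 26
puts weighted_score(vm_scores)      # 41
```

With numbers like these, the shiny option loses on paper, which is exactly the conversation you want to have before buying the two-seater.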
Tools
Armed with the right people and the process you need to conform to, choosing a tool becomes a lot easier. You can probably eliminate entire classes of solutions in some instances. Your weighted requirements also get a voice in the process. Yes, Docker is awesome, but if you can meet your needs using VMs and existing config management tools, that sub-second boot time suddenly becomes truffle salt. (Nice to have, but expensive as HELL.)
Following this order of operations isn’t guaranteed to solve all your problems, but it will definitely eliminate a lot of them. Before you decide to Dockerize your MySQL instance, take a breath and ask yourself, “Why am I starting with a solution?”
-
Metrics Driven Development?
At the Minneapolis DevOps Days during the open space portion of the program, Markus Silpala proposed the idea of Metrics Driven Development. The hope was that it could bring the same sort of value to monitoring and alerting that TDD brought to the testing world.
Admittedly, I was a bit skeptical of the idea, but I was intrigued enough to attend the space, and holy shit-balls am I glad I did.
The premise is simple: a series of automated tests that confirm a service is emitting the right kinds of metrics. The more I thought about it, the more I realized its power.
Imagine a world where Operations has a codified series of requirements (tests) on what your application should be providing feedback on. With a completely new project you could run the test harness and see results like:
expected application to emit http.status.200 metric
expected application to emit application.health.ok metric
expected application to emit http.response.time metric
These are fairly straightforward examples, but they could quickly become more elaborate with a little codification by the organization.
Potential benefits:
- Operations could own the creation of the tests, serving as an easy way to tell developers the types of metrics that should be reported.
- It helps to codify how metrics should be named. A change in metric names (or a typo in the code) would be caught.
Potential hurdles:
- The team will need to provide some sort of bootstrap environment for the testing, perhaps a Docker container hosting a local Graphite instance for the system under test (SUT).
- You’ll need a naming convention or standard of some sort so you can identify business-level metrics that fall outside the standard names.
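On that second hurdle, the naming check itself could be a tiny function that sorts metric names into buckets. This is just a sketch; the prefixes and the business.* pattern below are my own assumptions, not an established standard:

```ruby
# Hypothetical convention: standard metrics live under known prefixes,
# and anything business-level must match business.<domain>.<name>.
STANDARD_PREFIXES = %w[http. application. system.]
BUSINESS_PATTERN  = /\Abusiness\.[a-z_]+\.[a-z_]+\z/

# Classify a metric name as :standard, :business, or :unknown.
def classify_metric(name)
  return :standard if STANDARD_PREFIXES.any? { |p| name.start_with?(p) }
  return :business if name.match?(BUSINESS_PATTERN)
  :unknown
end

classify_metric("http.status.200")          # => :standard
classify_metric("business.orders.created")  # => :business
classify_metric("foo.bar")                  # => :unknown
```

Anything that comes back :unknown is a conversation between the developer and Operations, which is kind of the point.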
I’m sure there are more, but I’m just trying to write these thoughts down while they’re relatively fresh in my mind. I’m thinking the test harness would be an extension of existing framework DSLs. For an RSpec-style example:
describe "http call" do
  context "valid response" do
    subject { metrics.retrieve_all }

    before(:each) do
      get :index
    end

    it { expect(subject).to have_metric("http.status.200") }
    it { expect(subject).to have_metric("http.response.time") }
  end
end
This is just a rough sketch using RSpec, but I think it gets the idea across. You’d also have to configure the test suite to launch the Docker container, but I left that part out of the example.
Leaving the open space, I was extremely curious about this as an idea and an approach, so I thought I’d ask the world. Does this make sense? Has Markus landed on something? What are the HUGE hurdles I’m missing? And most importantly, do people see potential value in this? Shout it out in the comments!