Zvi Band

I'm a Web Developer and Entrepreneur out of Washington DC. Founder of Contactually. I'm also passionate about growing the DC startup community, and I've founded Proudly Made in DC and the DC Tech Meetup.

Firing Myself


It’s inevitable – and vital – that, as a company scales, a founder has to remove roles from themselves.

At first it's building a team underneath you – specialists, demand fulfillment, etc. Less time is spent on day-to-day execution, and more time is spent planning, overseeing, and managing. But there may still be some bastions of your original role remaining as part of your overall responsibilities.

But we reached a point where I had to fire myself from that – not just because the company needed me focused on higher-leverage activities, but because my skills didn't match what this stage of the company needed. In order to grow the very product I had created, I had to remove myself from it.

As a founder, I’m still responsible for the overall vision and high-level execution. But day-to-day, the role I once valued myself by is no longer mine.

For 24 hours, I felt more miserable than I ever had. Then, I felt more empowered than ever.





I’ve been wanting to write this post for a while, as it’s something I’ve often stewed about (when in truth I shouldn’t).

There are a lot of charlatans in the startup journey.

Charlatans that claim to be angel investors, but haven’t written a check in years, if ever.

Charlatans that claim to be working on their startup.

Charlatans that will happily advise your startup, because they know The Way.

Charlatans that say their startup will beat yours.

Charlatans that go on stage, beat their chest, and it turns out there’s nothing behind the curtains.

Charlatans with “tons of contacts” that want to do business development.

Charlatans in VC firms that have no fund to make new investments from.

Charlatans will waste your time. Charlatans will get your hopes up. Charlatans will give you bad data. 

I wish it were socially acceptable to call these people out in anything more public than a 1-1 conversation or reference check. Sometimes it's not their fault; they don't realize it.

But as soon as your BS meter starts going up, run. Charlatans are everywhere.

The Startup Guide to NPS (Net Promoter Score)


A little over a year ago, we were overhauling our metrics and looking for a way to measure overall happiness, primarily with our product. There are straightforward quantitative metrics you can track, which we do, such as conversion rates, churn, and ACV. But we wanted a signal that would give us a leading indicator of how successful we were overall. We happened on Net Promoter Score, which surprisingly few people talk about (Eric Ries wrote a good post). There still isn't much written about how to tactically implement NPS in a startup, or the lessons you may learn going forward. Here is what we have learned and practiced.

Rolling Your Own vs Existing Product

Aside from existing survey tools like SurveyMonkey, at the time we couldn't find a pre-baked NPS tool (there's now CustomerVille, Promoter.io, Delighted, and others). We decided to roll our own. It's pretty simple to implement, integrates directly into the customer experience, and gives us direct access to the data.

How It Looks

Here's how we designed ours to look – it's a modal that pops up when people open up Contactually. You can grab the stars off of FontAwesome. People click on the appropriate star, or they click "Not Right Now" – it's important to have an exit option. We didn't for a short time, and saw that if people were forced to make a decision, they would get pissed off and give a low ranking, solely because they were being bothered.

[Screenshot: the NPS survey modal, with ten stars and a "Not Right Now" option]

Now, those familiar with NPS may notice a discrepancy from the "proper" implementation of the survey: we have 10 stars to choose from, whereas ideally you'd have 11 options, with the ability to give a score of 0. Given how the resulting score is calculated, we decided to stick with our survey choices, expecting to see little difference.

We also ask a followup question, which has proved immensely valuable.

[Screenshot: the follow-up question asking users to explain their score]

When to ask

There doesn’t seem to be any best practice here, so this is what we came up with. Given we’re a SaaS product, there’s a customer lifecycle to be mindful of, and we wanted to track that. Our system flags people to receive the NPS survey, and the next time they log in, they’ll be presented with the questions. We ask all users on the 30th day after signing up, and then we also ask every 90 days after we last asked them, repeating in perpetuity. We’re pretty happy with this pattern.
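That flagging logic is simple enough to sketch. Here's a minimal version in Ruby – the `User` struct and field names are hypothetical stand-ins for whatever your user model looks like, not our actual schema:

```ruby
require 'date'

# Hypothetical user record; in a real app this would be a model with
# signed_up_at and last_nps_asked_at columns.
User = Struct.new(:signed_up_at, :last_nps_asked_at)

def due_for_nps_survey?(user, today = Date.today)
  if user.last_nps_asked_at.nil?
    # First survey: on the 30th day after signing up
    today >= user.signed_up_at + 30
  else
    # Thereafter: every 90 days after the last ask, in perpetuity
    today >= user.last_nps_asked_at + 90
  end
end
```

On login, any user this returns true for gets the modal, and the last-asked date is stamped once they answer or dismiss it.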

Measuring the data

This is pretty straightforward with the NPS survey. Keeping in mind my note above re: 10 options instead of 11, this is a bit of Ruby code.
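The calculation itself is only a few lines. A sketch of the standard bucketing, assuming `scores` is an array of the 1–10 star responses (the method name is illustrative):

```ruby
# NPS = % promoters - % detractors.
# On a 1-10 scale: 9-10 are promoters, 7-8 passives, 1-6 detractors.
def net_promoter_score(scores)
  return 0 if scores.empty?

  promoters  = scores.count { |s| s >= 9 }
  detractors = scores.count { |s| s <= 6 }
  ((promoters - detractors) * 100.0 / scores.size).round
end

net_promoter_score([10, 9, 8, 7, 3, 10])  # => 33
```

Note that passives count toward the denominator but toward neither bucket, so the score ranges from -100 (all detractors) to +100 (all promoters).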

Now, the fun comes in when you decide which survey results you want to look at. Our standard is to look at results on a rolling 30-day basis, but we can sometimes go right down to the week if we want something more granular (day-by-day is too spiky).

The valuable thing that you can do now is start separating out results by user cohort or common characteristics. We measure and track separately new users, paid customers, and all users. You can imagine how those translate into different parts of your business – happier new users means higher conversion rates, while happier paid customers reduces churn.

Reporting and leveraging the data

We review the data as part of our weekly metrics ritual. However, we also have the raw survey results emailed to the team automatically each day, as part of our larger automated nightly stats email (more on that in another post). Seeing customers express their love for your product when you first open your email in the morning gives you a little extra dopamine, and our customer success team reaches out to anyone with a negative score to see what's going on. Overall, it helps establish a baseline of connection and empathy between our customers and the entire team.

With access to the full database of results, NPS scores can be used in analysis – e.g. examining common traits among high-scoring respondents. One practice we’ve done in the past is query all low-scoring new users to understand what we could have done better.

With permission, you can also start circulating the results, and using the qualitative feedback in marketing efforts.

We also send the raw NPS results to investors and advisors :-)

Some gotchas

  • Given NPS is self-reported, keep in mind that a user's perception (i.e. their score) may differ from the reality of their actions. People giving 9s or 10s may love your product but still churn out for a different reason. People could be momentarily ticked off by a bug or something completely unrelated, and give you a bottom score that day. My favorite anecdote is asking people why they gave us such a low score, and hearing that Contactually is their secret, and they don't want to tell anyone. Oh well.
  • Even when averaging over a rolling 30-day window, the results can sometimes be spiky. We cheer when it hits new peaks, and try to see what's happening when it dips. While we focus more on the overall trend, looking at the cohorts gives us a better idea (e.g. a lot of people signing up after a conference on the same day).
  • The general goal is for the score to be positive (more promoters than detractors); beyond that, people throw out different targets – some say +50. You can look at a lot of benchmarks, but with such a wide distribution, it's clear there is no best practice here. We just care about the direction and velocity of our score, and focus on making it better each month.



I never valued company culture.

Culture, mission statement, values – that was all stuff I never saw anyone pay more than lip service to. Any conversations about values seemed far removed from the day-to-day of what I was doing.

Fast forward to today – Contactually is passing 40 people. Values, mission, culture – that's what I think about the most. Mission keeps your eye on the ball in a way no product roadmap or task list could do. Culture (not perks) gets employees jumping out of bed each morning and coming into the office, with no thought in their mind of leaving anytime soon. Values make sure that the things that mattered so much when founding the company still carry through today.

We’ve updated the values on our site. And we have a mission statement that we truly believe in (the latter being a recent addition).

The one thing for a startup CEO


I've been involved in numerous ventures and surrounded by a world of startups and founders doling out advice, blog posts, and anecdotes. I read this stuff voraciously, and tried to take it to heart when it was finally time for my at-bat. To no-one's surprise, the knowledge that "stuck" was the information I was either looking for, or that already made sense. Other lessons were skipped or devalued. The only way to learn those lessons was to get punched in the face.

I’ll start with the most important thing I’ve learned. Forget code, pitch decks, users, metrics, funding, press. The most important thing in a startup is people.

What I've written below comes from noticing that, when I hear myself talk nowadays, I'm shocked at the content. As a software developer, talking about people, culture, vision, etc. does not come naturally to me. It's not a tangible product or trackable metric, and it took a long time, and lots of missteps (which I still make), to start to appreciate it.

Bring the absolute best people onto your team, and never stop that effort. Very quickly remove the ones who turn out not to be the best.

Communication and coordination – the flywheel may be spinning with everyone heading in the same direction initially. You have to find and invest in systems to ensure that everyone keeps moving in the same direction consistently.

Culture. Culture is the glue. There are so many forces at work against you, but culture can zero all of them out for your people.

Leadership. Figure out the type of leader you are, and the type of leader that the people in your company need you to be.

Your customers are people too. They are just as important as the people in your company. It’s really easy to pay lip service to that mantra, and really hard to fulfill that – especially as you start to have more customers than you ever fathomed.

Fire yourself. Your people are better individual contributors than you.

If you're reading this as a first-time founder, or while still employed, this advice may not come as a surprise to you – I'm not the only one who says it. My goal, and hope, is that by being another voice, you might start focusing on this earlier than I did.

Technical Debt + Red October


There’s been a little bit written about technical debt in an early stage software product. Technical debt is something we think a lot about at Contactually.

Technical debt is the accumulated balance of problems generated through rapid product development, normally in an early stage product.

"Move fast and break things"

Given how key it is for a startup to test its product in front of customers as fast as possible, technical debt is not just a byproduct but a necessity. With so many unknowns in the early days of Contactually (and even now), we adopted one of Facebook's core mantras and made it our own. While we were still getting to product/market fit, we gave ourselves permission to launch things that were not fully tested, did not have all edge cases satisfied, and frankly, were ugly in both form and function.

Now that we've achieved an acceptable level of product/market fit in our core offering, we've course-corrected. Actually, to be completely honest, it's not that we woke up one day, saw we were checking all the boxes, and decided to clean out the skeletons in our closet. Our users told us. Overwhelming support issues, higher churn, shaky metrics, and an incessant stream of bug reports. They loved the promise of the product, but the actual day-to-day usage was bumpy.

Technical debt is usually attributed to code-level shortcuts, marked with a quick TODO and quickly forgotten. Who cares about those? We're building a user-facing product, so our debt was anything the user would see:

  • As we kept adding + modifying, overall performance started to decline.
  • Bugs appeared.
  • Usability issues were numerous.
  • The design was extended and stretched too far, yielding an overall unattractive mess.

Making the decision

In late 2013, we knew we were in trouble. It was clear, even just in talking to our team, that we were spending more time hearing about issues with the product than about positive results. We had to act fast. We decided that, for now, we had reached a level of feature completeness, and just needed to make what we had work.

It was time to pay down. But how?

The squeaky wheel gets the grease

At first, the engineering team and I came up with a list of all the problems we saw in the application, and our wishlist. To no-one’s surprise, it yielded a list of internal architectural challenges, refactors, and rewrites.

But what matters to the user?

We changed our tune, quickly. Here’s what we did:

  • Started tracking overall satisfaction – Net Promoter Score is the most straightforward. To this day, NPS is the clearest indicator of user satisfaction with our product.
  • Tracked application performance, and identified hotspots – The best way to improve performance is pretty obvious – look at what’s slow, and fix it. A few minutes clicking around on New Relic can give an engineer a clear idea of what the slowest pages are, the least performant database queries, and clear areas for code optimization. We started at the top and just worked our way down. We now report on our Apdex score (New Relic’s measurement of how fast pages return) weekly as a top-level company metric.
  • Actually fixed bugs – If you haven't set up a simple exception-reporting tool like Airbrake, Exceptional, or Honeybadger in your application, do so now. To pay down debt, we just… started fixing what we saw. (Note: clearing your backlog of exceptions really helps overall performance, too.) We now wince every time we see a bug report come in that's anything other than some strange exception.
  • Asked our users – This was the thing I was most excited about, and the hardest pill to swallow at the same time. We had already amassed a collection of issues that we received inbound from users. That was nice, but knowing that we were only hearing what someone had gone out of their way to tell us, we didn’t have a clear signal. So we did something insane – we asked our top 300 users to give us their list of every annoyance, frustration, bug, blocker they had. We ended up with something on the order of 1200+ items. We looked through every single one, prioritized, grouped, and ended up with a list of what we knew we needed to fix. Granted, we had a lot of feature requests and off-topic improvements (blah blah faster horse…), but we could see what people were having issues with.
  • Internally, we sat down the entire company for a two hour session where everyone went page by page, workflow by workflow, and logged everything they could find.
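For reference, the Apdex score mentioned above follows a published formula: requests at or under a threshold T count as "satisfied," those between T and 4T as "tolerating" (worth half), and slower ones as "frustrated" (worth nothing). A sketch in Ruby, with an illustrative threshold:

```ruby
# Apdex = (satisfied + tolerating / 2) / total samples.
# The 500ms threshold here is illustrative; you set T per application.
def apdex(response_times_ms, threshold_ms = 500)
  return 1.0 if response_times_ms.empty?

  satisfied  = response_times_ms.count { |t| t <= threshold_ms }
  tolerating = response_times_ms.count { |t| t > threshold_ms && t <= 4 * threshold_ms }
  ((satisfied + tolerating / 2.0) / response_times_ms.size).round(2)
end

apdex([100, 300, 800, 3000])  # => 0.63
```

Tools like New Relic compute this for you; the value of knowing the formula is understanding why one very slow page drags the score down more than several mildly slow ones.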


We made the conscious decision not to build any new features until we fixed the majority of these. In fact, no planned feature enhancements or internal tooling were built either. We shut down everything to focus on these issues. We called it Red October.

We emerged from it triumphant. Our team felt better not just with the end result, but the process.

Managing Technical Debt ongoing

We're now a little better about how we manage debt – we have to be. Our core values & culture guide us to help our users as much as possible – and buggy software doesn't do much for them. So here's what we do:

  • Track performance as a top-level company metric.
  • Regular meetings with the sales + support teams to understand the “burning issues” and concerns that we’re hearing from customers – and prioritize those.
  • Track our Net Promoter Score, and identify issues resulting from that.
  • Periodically ping both our most active users as well as newly activated customers, to understand what their main concerns are.

While we have no regrets for the path that got us here, we know we had to break a few eggs and disappoint our customers. Moving forward, the burden is on us to deliver value, in both new and existing components of the product.

How things get done


As a startup iterates past short-term uncertainty – where there is no clarity about what you might even be working on that afternoon – systems need to come into place. These systems serve multiple purposes:

  • Introduce medium/long-term planning and goals.
  • Distribute work amongst a growing team.
  • Communicate and get everyone on the same page.
  • Provide a reporting and accountability mechanism.

As we've grown, the presence of such a system has become incredibly important. While the practice of advance work planning is well established when it comes to software development (product roadmaps, Scrum, Jira, Pivotal Tracker, etc. – if interested in this specific topic, read my article on that), it's less defined when it comes to general company goals.

More recently, Christoph, an investor in and supporter of Contactually, wrote about the growing practice of OKRs.

For the past two years, back to when it was just the three founders, here is the practice that we implemented and still, for the most part, stick to.

The 30-60-90
A simple document, updated monthly, with three columns.

30 days out | 60 days out | 90 days out

We'll line up, for each team, what we want to achieve. As the month goes on, we can see how we're tracking this month, and ensure that nothing is slipping through the cracks (unless we just have no time). This used to be where we would track the metrics we wanted to hit (number of users, MRR, etc.) – but we've since moved that to a separate spreadsheet.

Just the act of figuring out what you're going to do over the next 90 days can benefit a startup's strategic thinking, and break you out of the potentially circuitous "what are we going to do this week" activity.

Achievements and Objectives
This is something our team practices religiously. Prior to our team meeting, everyone emails the team alias on the same thread (usually initiated by me) a bulleted list of their Achievements from last week and their Objectives for the upcoming week. Achievements are usually gathered by copying and pasting the previous week's Objectives, which gives you continuity to mark what you did do, and to re-prioritize what wasn't completed. Being a simple email, this gives people the freedom to express their past accomplishments and immediate priorities in different ways – sometimes to be replicated by others. Tony may paste in a screenshot of an Excel spreadsheet he uses for his own planning, with color coding to reflect progress. Alexandra may throw in key milestones hit or major news for the rest of the team. Brian may throw in what he needs other people to do.

Email is as simple as it gets, but we're investigating systems that let us streamline this for a growing team – this is where tools like 15Five or iDoneThis come in.

Paper Notepad
This is primarily me, but I see other people on the team starting to do this as well. I never was able to fully adopt Getting Things Done, but having a paper notepad with me at every moment of the day works perfectly for me. My work day doesn't start until I've lined up exactly what I need to do that day, from the minute (respond to XYZ, delegate ABC) to the major (plan XYZ feature). If a meeting or 1-1 conversation yields additional tasks for me, they get added to the notepad. My day doesn't end until I've completed what's on the paper, or marked what I can move to the next day or delegate out. There is no shortage of to-do list apps and task managers out there, and I've tried a fair share of them. YMMV, but the physical presence of a notepad and the tactile reward of crossing something off a list still reign supreme for a neanderthal like me.

Like any advice I dole out, take this as just a data point, but this process has so far worked for me and the Contactually team.

Just have fun


If you've read more than a couple of my blog posts, you'll find that I spend just as much time on touchy-feely subjects as I do on tactical learnings. There's a reason for that: the psychological state of you and your team has a tremendous effect on your ability to execute, and therefore on the success of your venture.

Flashback to late 2011/early 2012. I was miserable. My first time as CEO, across the country from my fiancé, dog, and support base, trying to raise money and having a terrible time doing it (in hindsight, because **I** was terrible at it). I was hearing "No" pretty much every day. I was probably on the edge of full depression, as every day seemed dark with no path out.

At the first DC Tech Meetup after coming back to DC, I ran into Bryan Sivak, a former entrepreneur who’s now bringing some badly needed innovation to the federal government. As I was talking to him about the challenges I was facing, he gave me one key piece of advice.

“Just have fun.”

And so I do. Every day. We entrepreneurs love what we do. Being in a startup is fun. As challenging as it may be on a daily basis, I could not ask for a more enjoyable, fulfilling professional experience. For some reason, those three words struck a nerve, and rarely a day goes by where I don't recall them and can't help but be happy.

You’re collecting data, but are you using your metrics?


From the school of interesting ways we’ve failed…

There is a vast chasm between the ability to collect metrics and the ability to use them.

The lean startup canon pushes for being data driven, so you’ll find that every startup has a plethora of people using a plethora of tools to “be metrics driven.” Lots of data. A/B Testing. Multivariate testing. All of this lingo circles around as common knowledge.

So we "did" that. Dropped Google Analytics on every page. Kissmetrics? Sure, why not. Mixpanel was being fed ~100 different types of data points. We set up A/B tests all over the place in Optimizely. We built at least a dozen different "dashboards" and specific reporting tools in our own application.

It didn’t work.

We suffered from information overload – we had so much data on our hands, we had no clue what was actually happening. We had no discipline to regularly look at and understand the data. A/B tests were so easy to set up that we set up a lot of them, yielding inaccurate results we would never check. Our designated time to review metrics would be a mess of clicking around the various tools, trying to understand what we were seeing. KPIs would be assembled inconsistently from month to month, yielding mistrust in the data. Luckily things were going well, but if they hadn't been, it would have been hard to figure out what wasn't.

We tried instrumenting the tools to tell us what we thought we needed, but they never delivered on that.

It reached a crisis point where we were talking to interested investors and realized we didn’t know the current metrics off the top of our heads, nor, even after looking through the data we had, could we answer some of their deeper questions.

Here’s what we did, which might work for you.

  1. Decide what your important goals are for the company. These are usually pretty standard for whatever vertical you’re in. We’re a B2B startup, so these are standard things like MRR & churn.
  2. Decide the metrics that should be tracked. Come up with a set of metrics that will tell you how you’re performing on your top goals. We have ~15, of varying importance. These are divided among top level departments (product, sales, marketing, customer service).
  3. Put them in a Google Spreadsheet, one per line. Yeah, I know, it’s late 2013 and we’re still using spreadsheets. But there are a couple key advantages of using a spreadsheet. The primary benefit is you get to decide what you need to track – you’re not limited to the data that’s in a third party tool, or how they calculate & present it. We fought for too long trying to get our main dashboards to be part of other tools, rather than using them to get just the data we wanted.
  4. Instrument your tools to collect that data. Now you’re able to use those powerful tools like Mixpanel & Google Analytics to answer exactly what you need – and you’ll find that may be completely different than what’s readily available. So this is really hard, no way around it. Don’t believe me? Try getting an accurate MRR calculation from Stripe data, amortized properly for annual plans (it’s OK, I’ll wait).
  5. Collect weekly. This is another advantage of using a spreadsheet: each team leader, while putting together their metrics to report to the team, is now looking at each and every individual value.
  6. Discuss weekly. Every Monday morning, these team leaders run through each individual metric, explain why it changed, and answer any questions. This process usually results in clear actions for the week, questions we need to resolve, and experiments to run. If there is any question about the integrity of the data (like an underlying API changing, as happens) – those are top concerns.
  7. Review with the team. We start off every team meeting with talking about the top level metrics.
  8. Go deep. As needed, dig into individual tools to gain better insight. This is where tools like Mixpanel really shine.
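To illustrate the MRR point in step 4: the trap is counting an annual plan's full invoice in the month it's billed instead of amortizing it over twelve months. A simplified sketch (real billing data adds proration, discounts, and refunds; the hash shape here is illustrative, not Stripe's):

```ruby
# Amortized MRR: each active subscription contributes its recurring
# amount normalized to one month.
def monthly_recurring_revenue(subscriptions)
  subscriptions.sum do |sub|
    case sub[:interval]
    when :month then sub[:amount].to_f
    when :year  then sub[:amount] / 12.0  # amortize annual plans
    else 0.0
    end
  end
end

subs = [
  { amount: 30,  interval: :month },
  { amount: 240, interval: :year },  # counts as $20/mo, not $240 today
]
monthly_recurring_revenue(subs)  # => 50.0
```

Naively summing the month's invoices would report $270 here and $30 for the next eleven months – the amortized $50 is the number that actually reflects the business.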

This has made a substantial difference for us.
