Tech Challenge Playbook

V.1 March 2021

The government benefits when companies demonstrate how capable they are during contract competitions, rather than writing about their capabilities in a traditional written proposal. “Tech challenges” are one form of a capability demonstration. Tech challenges are primarily used to test a company’s ability to design, develop, and deploy software applications and systems.

For the most part, the ability to code software is binary. You can either code or you can’t. Yes, there are a variety of technologies and languages to code with, but the ability to code is still binary. Additionally, coding capability has recently become more of a commodity in the federal marketplace.

Therefore, assessing a company’s ability to simply code software is not an effective approach to picking a partner that will successfully support your government software project. For these reasons, “Can they code?” is not the right question to ask during a tech challenge evaluation. “Can they solve technical problems?” is a much better question to focus on when assessing a company’s software engineering capability.

ABOUT THIS PLAYBOOK

The U.S. Digital Service (USDS) has helped federal agencies plan and execute a variety of coding and tech challenges. This Tech Challenge Playbook is a compilation of what we’ve learned. It provides a reusable roadmap for planning and executing technical challenge evaluations.

Before diving in and crafting your tech challenge, go in with your eyes open by considering your situation:

Know it’s an investment. Technical challenges can be fruitful, but they are a resource-intensive evaluation method to plan and execute.

Determine number of Offerors. Because tech challenges are resource-intensive, consider how many Offerors you want to participate. More Offerors means a longer evaluation.

Use in Phase 2. In most cases, it makes the most sense to employ tech challenges during Phase 2 of a multi-phase evaluation. Use them after a Phase 1 down-select, which should be a light lift and highly discriminating.

Decide multiple vs single award. For a multiple award vehicle, you’re evaluating corporate capability to support a variety of projects. For a single award contract, you’re evaluating the Offeror’s ability to support a specific project.

1. Involve the right people at the right time.

Technical challenges require early and frequent collaboration from the government team involved, from the planning phase all the way through contract award.

Checklist

  • Identify the technical skills needed to plan, test, and evaluate the Tech Challenge, including design, product management, and engineering.
  • Identify and involve technical evaluation team members early.
  • Involve contracting officers when Tech Challenge planning begins. Do not wait for a finalized procurement package.
  • Collaborate with your procurement lawyers early and iteratively throughout the process.

Key questions

  • Do you have enough technical talent (fed staff) to evaluate the Tech Challenge? Use non-voting “technical advisors” to supplement an evaluation team’s technical expertise, especially in areas like human-centered design.
  • How many people will you need to evaluate the Tech Challenge? An evaluation team should be as lean as possible. Have a diverse set of skills, backgrounds, and voices on your evaluation team. A diverse team reduces risks associated with biases and improves evaluation report quality.
  • Are fed staff available to plan, test, and evaluate the tech challenge? Make your Tech Challenge simple, focused, and only as complex as it needs to be in order to test your Offerors. This will help fed staff find time to plan, test, and evaluate.

2. Identify and prioritize the discriminators.

Discuss the scope of work to identify & prioritize future objectives and past/current problems that companies will tackle on this contract. Next, identify aspects of the work that are not only high priority but will also allow the government technical evaluation team to clearly discriminate between Offerors as you test them via the technical challenge.

Checklist

  • Identify high priority goals, objectives, and current pain points related to the work on this contract.
  • Using your high priority goals, objectives, and pain points, identify aspects of the work that will allow the government to discriminate between companies during the tech challenge. If most companies can perform a task adequately, then that task will not be helpful to evaluators in discriminating between Offerors during the tech challenge.
  • Ensure the discriminators you identify relate to successful performance on the contract.

Key questions

  • What is success in supporting the most important work on this contract? What do you want the future to look like? Weave those objectives into your scenario and evaluation criteria.
  • What skills does the contract awardee need to already have versus learn on the job?
  • Do your technical discriminators allow room for unanticipated excellence?

3. Turn discriminators into eval criteria.

Translate those high priority discriminators related to the work into 2-3 specific evaluation criteria that will be used to test and assess each Offeror’s capabilities.

Checklist

  • Author 2-3 criteria that are:
    • Specific/focused
    • Discriminating
    • High priority
    • Testable/assessable
  • Refine criteria based on the technical skill level of the technical evaluation team.
  • Review criteria and simplify as much as possible. A common mistake on tech challenges is the government attempting to evaluate too many elements.
  • Agree on an evaluation rating scale with rating definitions (for example, a confidence-based scale such as High Confidence / Some Confidence / Low Confidence, each with a written definition).
  • Confirm that the evaluation criteria text and the rating scale give the evaluation team adequate flexibility. It is beneficial to evaluate on a flexible continuum as much as possible rather than using evaluation criteria elements that are binary/pass-fail.

Key questions

  • How well did you derive your evaluation criteria from your goals, objectives, and problems related to the work?
  • For each evaluation element, will most Offerors be able to perform it well? If so, that evaluation element is a waste of your time because it won’t discriminate between Offerors.
  • Do evaluation team members understand the meaning and intent of the evaluation criteria?
  • Even without knowing the tech challenge format or scenario yet, how long might it take to evaluate these draft criteria? This will help you begin to estimate the time investment needed. Remember that the more criteria you have, the more time the evaluation will take.

4. Select a challenge format and author a simple scenario.

The challenge format will depend on your evaluation criteria. Consider how much the criteria dictate a format that evaluates the Offeror’s journey during the tech challenge versus a format that evaluates the final outcome produced/submitted by the Offeror. Author a simple scenario; elaborate scenarios aren’t necessarily more discriminating. Consider whether the scenario is real or fictional. Real scenarios can favor companies that have already worked with that Agency or program office.

Checklist

  • Referring to your evaluation criteria, discuss tech challenge formats that best allow companies to demonstrate their capabilities. The tech challenge format will heavily shape the text in the solicitation’s instructions to offerors and evaluation sections.
  • Decide how much you need to observe Offeror performance in-person/real-time vs offsite.
  • Design aspects of the scenario to accommodate effective use of the evaluation criteria.
  • Discuss whether using a real scenario will provide an unfair advantage to incumbent contractors. Fictional scenarios that test the same evaluation criteria can help level the playing field. Even if using a fictional scenario, include realistic wrinkles to test an Offeror’s problem-solving capability (e.g., an imperfect govt-provided API; see the sketch after this checklist).
  • Have a peer review the draft scenario to ensure it is clear and focused and that the level of effort is appropriate for the time given.
  • Draft the user stories, data, and other govt-provided artifacts for the tech challenge.
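
The checklist above mentions an imperfect government-provided API as one realistic wrinkle. Below is a minimal, hypothetical sketch (Python standard library only) of what such a deliberately flawed mock API could look like; the endpoint, field names, and failure modes are illustrative assumptions rather than a prescribed artifact, and should be tailored to your own scenario and evaluation criteria.

    # Hypothetical sketch of an "imperfect" mock API for a tech challenge scenario.
    # The flaws are intentional: they give Offerors realistic problems to solve.
    import json
    import random
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ImperfectCaseAPI(BaseHTTPRequestHandler):
        """Serves fictional 'case' records with realistic flaws baked in."""

        def do_GET(self):
            # Wrinkle 1: roughly one in five requests fails outright, so Offerors
            # must handle retries and error states gracefully.
            if random.random() < 0.2:
                self.send_error(500, "Upstream system unavailable")
                return

            # Wrinkle 2: inconsistent field naming, a non-ISO date, and a missing
            # value, so Offerors must normalize and validate the data.
            record = {
                "case_id": "C-1001",
                "CaseStatus": "open",
                "submitted": "03/07/2021",
                "applicant_name": None,
            }
            body = json.dumps(record).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Run locally while designing the scenario and during the test run (step 6).
        HTTPServer(("localhost", 8000), ImperfectCaseAPI).serve_forever()

A small script like this also doubles as a planning aid: writing down the wrinkles forces the team to agree on exactly which problem-solving behaviors the scenario is meant to surface.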

Key questions

  • To what degree are you evaluating the Offeror’s journey vs their code deliverable(s)?
  • Do you need to adjust the format to include “remote friendly” options?
  • Does the material specifically link back to success on the overall contract?
  • Does the scenario reflect the skills and talents of the author rather than the skills needed by the Offerors?
  • Are you allowing Offerors to be innovative during the technical challenge? Tech challenges should allow Offerors to shine via creativity and excellence while solving technical problems.

5. Plan logistical details and constraints.

Plan how the challenge will run, the artifacts to collect, and any communications with the Offerors.

Checklist

  • Acquire and prepare any hardware, software, and/or data to be provided to offerors for the challenge.
  • Plan for communications with offerors before, during, and (if necessary) after challenges.
  • Clearly articulate duration/timing schedule and expectations to Offerors.
  • Discuss the number and type of Offerors expected to bid, including team partners.
  • Discuss any restriction on who will be allowed to participate from the Offeror’s team. Allow vendors to bring their best talent. Do not limit participation to only key personnel.
  • Discuss additional logistical constraints including:
    • Location
    • Onsite escorts
    • Consensus location/timing
    • Breaks and transition time between Offerors

Key questions

  • Does the government need to provide any hardware, software, and/or data to the offerors for the tech challenge?
  • What is the duration of the challenge? How will presentations be scheduled to give each Offeror an equal amount of time to work on the challenge?
  • What work is allowed/expected to be performed by the Offerors before the challenge begins?
  • What communications with the Offerors will be allowed before, during, and after the challenge? The evaluation team needs to understand why the Offerors made the decisions that they did. The “why” is the vital supporting information behind the observations in the technical evaluation team’s report. Find a way to dialogue with each Offeror to clarify the “why.”
  • How do we expect the artifacts or challenge responses to be submitted or presented?
  • Do we want to limit the amount and type of Offeror participants?
  • Should you video record the in-person challenge? Historically, video recording an in-person technical challenge has not helped the government better defend its award decision. 

6. Prepare for and execute a test run.

Create the artifacts for the tech challenge. Test the feasibility and effectiveness of your tech challenge by having a team of feds perform the challenge while the evaluation team performs a mock evaluation.

Checklist

  • Prepare for the test run:

    • Gather/create user stories, data, facilities, & tools for your tech challenge.
    • Pull together a mock Offeror team composed of government staff volunteers.
    • Allow the mock Offeror team adequate time to prepare to perform the challenge.
    • Create an evaluation guide (not a checklist) for the evaluation team to use; see the sketch after this checklist. The guide promotes consistent application of the evaluation criteria across Offerors.
    • Set a test run date and ensure key participants can attend. Include CO & legal.
  • Execute the test run:

    • Follow the instructions and evaluation criteria outlined in the draft solicitation.
    • Treat the dry run just like a real evaluation.
    • Have evaluators conduct an actual evaluation, including the consensus process, and document the results (strengths, weaknesses, ratings, etc.).
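
For teams that want a concrete starting point, here is a minimal, hypothetical sketch (in Python, to match the other sketch in this playbook) of how an evaluation guide might be structured so evaluators apply the criteria consistently. The criteria, guiding questions, and rating definitions below are illustrative assumptions only; replace them with the criteria and rating scale from your own solicitation.

    # Hypothetical structure for an evaluation guide (not a checklist).
    # Each criterion pairs guiding questions with a reminder to capture evidence.

    RATING_SCALE = {
        "High Confidence": "Performance strongly indicates success on the contract work.",
        "Some Confidence": "Performance indicates likely success, with notable weaknesses.",
        "Low Confidence": "Performance raises significant doubt about success.",
    }

    EVALUATION_GUIDE = [
        {
            "criterion": "Technical problem solving",
            "guiding_questions": [
                "How did the team investigate the imperfect API before coding around it?",
                "What trade-offs did they articulate, and why?",
            ],
            "evidence_to_note": "Specific code, commits, and tool choices observed.",
        },
        {
            "criterion": "Iterative, user-centered delivery",
            "guiding_questions": [
                "Did the team validate assumptions against the provided user stories?",
                "How did they decide what to leave out given the time limit?",
            ],
            "evidence_to_note": "Artifacts showing prioritization and testing.",
        },
    ]

Whether it lives in a document or a simple structure like this, the point is the same: the guide prompts evaluators with questions and evidence to capture rather than boxes to tick, which keeps the evaluation on a flexible continuum.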

Key questions

  • When will the key players (eval team, CO, legal) be available for the test run?
  • What user stories are needed as a foundation for the scenario?
  • What needs to be created by the mock Offeror team and how much time do they need?
  • Where is the evaluation team uncertain about the current scenario/challenge? Is anything unclear?
  • What other key items should the test run clarify or validate?
  • In what format/style does the technical evaluation team expect to package their results?  

7. Refine & share the tech challenge.

After the test run is complete, use the lessons learned to improve the tech challenge and the draft solicitation. Share the revised draft solicitation internally and with industry.

Checklist

  • Use the lessons learned:
    • Conduct a retrospective session for the tech challenge and for the consensus process. Invite evaluators, CO, legal, and the mock Offeror team.
    • Ensure the eval panel knows how to execute the evaluations consistently.
    • Agree on the format & writing style of eval team docs (Tech Eval Report).
    • Consider whether additional staff (non-evaluators) should be assigned to support the logistics side of executing the tech challenge.
  • Update the instructions to offerors and evaluation criteria in the draft solicitation.
  • Share the refined draft solicitation with industry and, if time allows, request feedback.
  • Share a schedule with industry of projected dates for tech challenge participation by Offerors.
  • Plan what information will be disclosed to Offerors, and when, during the tech challenge.

Key questions

  • What worked with the scenario? What didn’t work?
  • Where is the evaluation team uncertain about the current scenario/challenge?
  • Do you want feedback from industry on the draft solicitation or are you simply posting it to inform industry of what is coming so that they can plan teaming and resources?  

8. Execute and evaluate the tech challenge.

The time has come to administer the tech challenge! You have prepared, and it will go well.

Checklist

  • Provide Offerors with access to data, repositories, tools and facilities as promised in the solicitation.
  • Use an evaluation guide (not a checklist) that promotes consistent application of the evaluation criteria across all Offerors.
  • Conduct the evaluation consensus meeting on the same day as each Offeror’s tech challenge performance, as soon as that performance is complete.
  • Cite code and technical tool observations as much as possible in the technical evaluation report.

Key questions

  • Has the entire evaluation team blocked off the necessary time for the challenge, including the compilation of findings?
  • How will the evaluation team reach consensus on each vendor’s submission?
  • Do you expect to communicate with Offerors during the tech challenge? This can allow the government to more clearly understand the rationale behind the Offeror’s decisions, which can help you produce a more defendable evaluation. If so, how will you manage that communication?

9. Defend the award decision.

Tech challenge evaluations typically produce very detailed and technical feedback on the Offeror’s proposal/performance. A transparent debriefing process that includes specific technical feedback on the Offeror’s performance will significantly reduce the risk of a sustained protest while also helping the Offeror to learn from their experiences in the tech challenge.

Checklist

  • Ensure tech challenge evaluation reports cite code and tool data that clearly refer to Offeror performance. This leads to transparent Offeror debriefings and reduces unsuccessful Offerors’ frustrations.
  • Use basic, fair, and allowable communication between the CO and Offerors to help avoid significant frustrations (e.g., the evaluation team cannot access the Offeror’s GitHub repo).
  • Expect protests. Tech challenges tend to attract highly-qualified companies and this frustrates less-qualified companies as well as unsuccessful Offerors that are already working with the program office or agency.

Key questions

  • Who will attend Offeror debriefing meetings?
  • Who from the technical evaluation team will be involved in responding to protests?

10. Review and retrospective.

Reflection and feedback are good. Take time to solicit and then listen to non-attributional feedback from both Agency participants and from Offerors.

Checklist

  • Ask evaluation team members for feedback on how well the technical challenge went and how much it contributed to the award decision.
  • Ask Offerors for feedback on their experiences during the technical challenge.
  • With the feedback, identify what you should keep doing and what you should change. Document and share this in a discoverable place (e.g., Agency Wiki).

Key questions

  • What do you want to find out from the Offerors regarding the challenge from their perspective? Did they have the information they needed about the procedures? Was the challenge too big for the time allotted? Or too easy? Ask specific questions.
