The U.S. Department of Defense (DoD) has prioritized strengthening its relationship with Silicon Valley. In 2015, it launched the Defense Innovation Unit to accelerate military adoption of commercial technology. The Defense Innovation Board, chaired by former Google CEO Eric Schmidt, followed the next year with a mandate to bring the best of Silicon Valley to the U.S. military. In 2020, after several high-profile Silicon Valley protests, DoD adopted a set of AI ethical principles. Commentators have described the potential rift between DoD and Silicon Valley as a “national security threat” and called for “unprecedented partnerships between government and industry.”1
Is the relationship between DoD and Silicon Valley improving? How would we know? What are the key factors driving this relationship that we should monitor? Heeding the call of Peter Scoblic and Philip Tetlock, among others, Foretell issue campaigns aim to shed light on complex questions such as this through a two-phase process that combines the complementary strengths of stakeholder and crowd judgment. In the stakeholder phase, we surveyed 17 stakeholders with a professional interest in this topic to understand what they expect to happen to the DoD-Silicon Valley relationship, how we would measure the status of the relationship, and what factors will most influence it. In the crowd phase, which begins today, we will elicit ongoing forecasts from the Foretell crowd to arbitrate stakeholder disagreements, identify potential biases or blind spots, and monitor the trends stakeholders identified as most critical to the future of the DoD-Silicon Valley relationship.
Figure 1. Factors are trends that affect the future of the DoD-Silicon Valley relationship. Outcome measures capture the state of the relationship. Both factors and outcome measures are operationalized using forecastable metrics.
Summary of takeaways from the stakeholder phase
- A majority of surveyed stakeholders (10 of 17) expect the DoD-Silicon Valley relationship will improve over the next five years.
- Stakeholders expect two of the outcome measure metrics—DoD contracts with the “Big 5” tech companies and highly cited AI publications supported by DoD funding—to be significantly higher than their historical trajectories suggest.
- Stakeholders had high disagreement about the trajectory of one outcome measure metric: Carnegie Mellon computer science graduates employed by a company that works with DoD.
- Stakeholders consider China military aggression to be the most important factor shaping the DoD-Silicon Valley relationship.
- Stakeholders are divided on whether a stronger U.S. tech sector would have a positive or negative effect on the DoD-Silicon Valley relationship.
- Stakeholders expect three factors will increase relative to their historical trends: China tech capabilities, general geopolitical tensions, and U.S. political polarization.
- Stakeholders disagreed in their forecasts on four factor metrics: China shooting conflict in the South China Sea, “big tech” revenue, U.S. political polarization, and Silicon Valley protests.
See a graphical display of the forecast questions discussed here and their relationship to the future of the DoD-Silicon Valley relationship. You can also read more about our methodology. If interested in sponsoring your own issue campaign, contact us.
Details: Key takeaways from stakeholder phase
The stakeholder phase had two components: First, we interviewed the stakeholders to understand how they would measure the state of the DoD-Silicon Valley relationship (outcome measures) and what factors they think would most affect that outcome. Using the information gathered in the interviews, we developed a survey that includes the most cited outcome measures and factors. Then, we surveyed the stakeholders to elicit their overall views on where the DoD-Silicon Valley relationship is headed, the importance they attribute to each of the factors, and their own forecasts of the metrics underlying the selected outcome measures and factors. Below we provide more details on the key results of the stakeholder phase.
1. A majority of stakeholders expect the DoD-Silicon Valley relationship will improve over the next five years.
Of the 17 surveyed stakeholders, 59% (10) expect the DoD-Silicon Valley relationship will be better in five years than it is today; 29% (5) expect it will be the same; and 12% (2) expect it will be worse. None of the stakeholders expects the DoD-Silicon Valley relationship will be much better or much worse in five years.
2. Stakeholders expect two of the outcome measure metrics—DoD contracts with the “Big 5” tech companies and highly cited AI publications supported by DoD funding—to be significantly higher than their historical trajectories would suggest.
Based on input collected from stakeholders in initial interviews, we use two outcome measures to assess the state of the DoD-Silicon Valley relationship: transactions between DoD and Silicon Valley and DoD’s access to the tech talent developed within the private sector (personnel). Each outcome measure is informed by multiple forecastable metrics.
Outcome Measure Metrics

| Metric | Outcome Measure | Aggregate forecast category (prediction interval percentile) | Disagreement |
|---|---|---|---|
| “Big 5” tech DoD contracts | Transactions | Above trend (76th percentile) | Low |
| DoD tech subcontracts to Northern California companies | Transactions | On trend (63rd percentile) | Medium |
| Defense Innovation Unit transitions | Transactions | On trend (49th percentile) | Low |
| Highly cited AI publications supported by DoD funding | Personnel | Above trend (71st percentile) | Low |
| CMU computer science graduates to companies with DoD contracts | Personnel | On trend (54th percentile) | High |

Table 1. Stakeholder forecasts for the five outcome measure metrics.
As shown in Table 1, the stakeholders expect the largest departures from the historical trend in the dollar value of contracts between DoD and the “Big 5” tech companies—a transaction metric—and the percentage of highly cited AI publications supported by DoD funding—a personnel metric. Stakeholders who expect the DoD-Silicon Valley relationship to improve over the next five years were particularly bullish on the former. As shown in Figure 2, they forecasted the 76th prediction interval percentile, compared to the 69th percentile for the stakeholders who do not expect the relationship to improve. By 2024, that difference would amount to half a billion dollars ($1.8 vs. $1.3 billion).
Figure 2. Source: Bloomberg Government
3. Stakeholders disagree about the trajectory of one outcome measure metric: Carnegie Mellon computer science graduates employed by a company that works with DoD.
The surveyed stakeholders disagree about how the percentage of recent Carnegie Mellon computer science graduates whose first job is at a company that works with DoD will change over the next three years. 24% expect it will be below trend (10th-35th prediction percentile); 41% expect it will be on trend (35th-65th prediction percentile); and 35% expect it will be above trend (65th-90th prediction percentile).
Figure 3. This metric tracks the number of CMU computer science graduates whose first job is at a company with a DoD contract in the previous two years as a percentage of all CMU computer science graduates whose first job is at a for-profit company. Source: Carnegie Mellon University First Destination Outcomes dashboard
4. Stakeholders consider China military aggression to be the most important factor shaping the DoD-Silicon Valley relationship.
We also asked stakeholders to assess how an increase or decrease in each of the seven factors identified in the interviews would affect the DoD-Silicon Valley relationship. As shown in Table 2, most stakeholders (82%) expect an increase in China military aggression to have a positive impact on the DoD-Silicon Valley relationship. Other stand-out factors were Silicon Valley protests (76% expect more protests to have a negative impact on the relationship), U.S. political polarization (71% expect a drop in polarization to have a positive impact on the relationship), and China tech capabilities (65% expect an increase in Chinese tech capabilities to have a positive impact on the relationship). Stakeholders who expect the DoD-Silicon Valley relationship to improve over the next five years feel even more strongly than their counterparts about the importance of China military aggression and China tech capabilities.
Factor Significance
| Factor | If factor increases: % positive impact | % no impact | % negative impact | If factor decreases: % positive impact | % no impact | % negative impact |
|---|---|---|---|---|---|---|
| China military aggression | 82% | 18% | 0% | 6% | 59% | 35% |
| China tech capabilities | 65% | 35% | 0% | 6% | 65% | 29% |
| General geopolitical tensions | 41% | 53% | 6% | 6% | 71% | 24% |
| U.S. tech sector strength | 24% | 47% | 29% | 47% | 35% | 18% |
| U.S. political polarization | 0% | 35% | 65% | 71% | 29% | 0% |
| U.S. trust of military/government | 71% | 18% | 12% | 12% | 24% | 65% |
| Silicon Valley protests | 0% | 24% | 76% | 53% | 47% | 0% |
Table 2. This table shows the effect stakeholders expect increases or decreases in the seven factors would have on the DoD-Silicon Valley relationship.
5. Stakeholders are divided on whether a stronger U.S. tech sector would have a positive or negative effect on the DoD-Silicon Valley relationship.
Stakeholders are divided on the directional impact of changes to the strength of the U.S. tech sector. As shown in Table 2, 24% of stakeholders expect a stronger U.S. tech sector would improve the relationship between DoD and Silicon Valley, while 29% of stakeholders think it would worsen the relationship. The disagreement correlates with views on the overall trajectory of the relationship. Among stakeholders who expect the DoD-Silicon Valley relationship to improve over the next five years, 40% expect a stronger U.S. tech sector would improve the relationship, while none of their counterparts hold that view.
6. Stakeholders expect three factors will increase relative to their historical trends: China tech capabilities, general geopolitical tensions, and U.S. political polarization.
As shown in Table 3, stakeholders expect China tech capabilities, general geopolitical tensions, and U.S. political polarization to increase relative to their historical trajectories.2 The China tech capabilities factor is informed by two metrics: the ratio of U.S.-to-China-authored highly cited AI publications and the percentage of SMIC revenue from 14/28 nm chips or smaller. For both metrics, stakeholders forecast departures from the historical trajectories in the direction of greater China tech capabilities: fewer U.S.-authored highly cited papers relative to China (33rd percentile), and an increase in the percentage of SMIC revenue from 14/28 nm chips or smaller (70th percentile).
Factor Metrics
Table 3. This table shows stakeholder forecasts for the 15 metrics that inform the seven selected factors.

For time series-based metrics, the forecasts are shown as prediction interval percentiles based on an ETS projection of the historical data. The forecast categories are based on prediction interval ranges: much below trend (below 10th percentile); below trend (10th-35th); on trend (35th-65th); above trend (65th-90th); and much above trend (greater than 90th). Disagreement level reflects the number of forecast categories that more than 20% of stakeholders forecasted: low disagreement is one forecast category; medium disagreement is two; and high disagreement is three or more.

For binary metrics, the forecasts are a simple average of forecasts for the designated time period, and disagreement levels are based on the standard deviation of forecasts: low disagreement is a standard deviation of less than 10%; medium disagreement is 10-20%; and high disagreement is greater than 20%. For the two survey questions with only one or two historical data points, we didn’t provide a disagreement level.

Metric-factor correlation is the relationship between the metric and the factor it informs: positive if an increase in the metric means an increase in the factor (and a decrease means a decrease), and negative if the opposite is true.
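The bucketing and disagreement rules described in the caption are mechanical enough to sketch in code. The following is a minimal Python illustration, not Foretell's actual pipeline: the helper names are ours and the example inputs are hypothetical, but the thresholds restate the caption's definitions.

```python
from statistics import pstdev

def forecast_category(percentile: float) -> str:
    """Map an aggregate prediction-interval percentile to a forecast category."""
    if percentile < 10:
        return "much below trend"
    if percentile < 35:
        return "below trend"
    if percentile < 65:
        return "on trend"
    if percentile <= 90:
        return "above trend"
    return "much above trend"

def ts_disagreement(category_shares: dict) -> str:
    """Time series metrics: count categories forecast by >20% of stakeholders."""
    n = sum(1 for share in category_shares.values() if share > 0.20)
    return {1: "low", 2: "medium"}.get(n, "high")

def binary_disagreement(forecasts: list) -> str:
    """Binary metrics: disagreement from the standard deviation of forecasts."""
    sd = pstdev(forecasts)
    if sd < 0.10:
        return "low"
    if sd <= 0.20:
        return "medium"
    return "high"

# The "Big 5" contracts metric (76th percentile) lands in the above-trend bucket.
print(forecast_category(76))  # above trend
# A hypothetical, widely spread set of binary forecasts yields high disagreement.
print(binary_disagreement([0.05, 0.40, 0.70]))
```

Under these rules, for example, the U.S. political polarization forecasts discussed in takeaway 7 (35% on trend, 35% above trend, 24% much above trend) put three categories above the 20% threshold, hence high disagreement.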
The general geopolitical tensions factor is informed by one metric: the geopolitical risk index, which stakeholders generally agree will progress above trend (75th percentile). The U.S. political polarization factor is also informed by one metric: the ANES survey of U.S. political polarization, which stakeholders also forecast to progress above trend (71st percentile) but with more disagreement.
7. Stakeholders disagreed in their forecasts for four factor metrics: China shooting conflict in the South China Sea, “big tech” revenue, U.S. political polarization, and Silicon Valley protests.
Where stakeholders significantly disagree, that disagreement might be more relevant than the aggregate forecast for that metric. Consider the two outcome measure metrics shown in Figures 2 and 3 above. For “Big 5” tech DoD contracts, stakeholder disagreement was low. Accordingly, the stakeholders’ aggregate forecast—of above trend—is a noteworthy takeaway. By contrast, for Carnegie Mellon computer science graduates employed by companies that work with DoD, stakeholder disagreement was high. Under those circumstances, the takeaway should be uncertainty—in effect, “we don’t know”—rather than an aggregate forecast.
For the factor metrics, disagreement—or group uncertainty—is the key takeaway for four metrics: two binary and two time series-based. One binary metric with high disagreement is the chance of a China shooting conflict in the South China Sea in the next six months (Figure 4), for which stakeholder forecasts range from 1% to 71%, with a standard deviation of 22%. The other is the chance of an employee protest at one of the “Big 5” tech companies against the companies’ involvement with DoD in the next 12 months, for which forecasts range from 8% to 95%, with a standard deviation of 30%.
Figure 4.
The time series-based metrics with high disagreement include U.S. political polarization, for which 35% of stakeholders expect it will be on trend (35th-65th prediction percentile); 35% expect it will be above trend (65th-90th prediction percentile); and 24% expect it will be much above trend (90th percentile or greater). The other time series-based metric with high disagreement is ‘big tech’ revenue, for which 24% expect it will be below trend (10th-35th prediction percentile); 41% expect it will be on trend (35th-65th prediction percentile); and 29% expect it will be above trend (65th-90th prediction percentile).
What to look for in the crowd phase
In the crowd phase, which begins today, we will publish the 15 factor metrics and five outcome measure metrics on Foretell and elicit ongoing forecasts from the Foretell crowd. Their forecasts should provide the following insights:
- Stakeholder comparison. Where stakeholders disagree, the crowd can arbitrate the disagreement. Even where stakeholders don’t disagree, because the crowd is a larger and more diverse group, its aggregate forecast might capture new perspectives that highlight stakeholder biases or blind spots.
- Changes over time. While the stakeholder forecasts are rough snapshots of their views at a moment in time, the Foretell crowd will provide more granular, ongoing forecasts. Their forecasts will also capture changes in trends over time. And as questions begin to resolve, we will be able to compare the crowd and stakeholder forecasts with the actual outcomes and see how the crowd adjusts their forecasts in light of the actual outcomes.
A forthcoming report will discuss the results of the crowd phase and their practical significance to the future of the DoD-Silicon Valley relationship.
Participating stakeholders*
Catherine Aiken, CSET [interview and survey]
Anthony Bak, Palantir [survey]
Jason Brown, Google; formerly U.S. Air Force, Joint Artificial Intelligence Center [interview and survey]
Michael Brown, Defense Innovation Unit [interview]
Miles Brundage, OpenAI [interview and survey]
Bess Dopkeen, House Committee on Armed Services; formerly Department of Defense [survey]
Ed Felten, Princeton [survey]
Melissa Flagg, Flagg Consulting and The Atlantic Council’s GeoTech Center; formerly CSET, Department of Defense [interview and survey]
Michael Horowitz, University of Pennsylvania; formerly Department of Defense [survey]
Nicholas Joseph, Anthropic [interview and survey]
Josh Marcuse, Google; formerly Defense Innovation Board [survey]
Igor Mikolic-Torreira, CSET; formerly Department of Defense [interview and survey]
Enrique Oti, Second Front Systems; formerly U.S. Air Force, Defense Innovation Unit [interview]
Scott Phoenix, Vicarious [interview]
Jack Poulson, Tech Inquiry [interview and survey]
James Ryseff, RAND; formerly Google, Microsoft [interview and survey]
Jack Shanahan, Retired U.S. Air Force, Joint Artificial Intelligence Center [interview and survey]
Trae Stephens, Anduril Industries, Founders Fund [survey]
Danielle Tarraf, RAND [interview and survey]
*Three participating stakeholders requested anonymity.
Footnotes
1Contrary to popular media coverage, some research, including at CSET, suggests the dynamics of this relationship are more nuanced, and perspectives within these communities vary.
2 We’re only able to describe forecasts in terms of departures from historical trends when we have historical data, i.e., for time series-based metrics. The China military aggression factor, for example, is informed primarily by metrics that are not time series-based, which prevents us from comparing it to the other metrics in this manner.
To stay updated on what we’re doing, follow us on Twitter @CSETForetell.