

Introduction
Efforts to foster collaboration between science and industry have long been a part of innovation policy in many countries (Cunningham and Gök, 2012). Firms stand to benefit from accessing the specialized infrastructure and expertise available in universities, while researchers gain access to practical problems that can provide greater relevance for their research, to industrial capabilities for manufacture, and to assistance in commercializing their ideas to take them to market. These collaborations are becoming increasingly important for the innovation process, in part because many new inventions are directly rooted in science, such as biotechnology, information technology, and new materials (OECD 2002; Albuquerque et al. 2015). Yet there are several barriers that inhibit collaboration, including financing constraints, information asymmetries that prevent researchers and firms from interacting, and transaction costs in negotiating collaboration agreements. Government subsidies may provide an incentive to seek out these connections and may foster increased interaction between firms and scientific units. While such policies have been used for some time in the U.S., Western Europe, and Japan, they are now also becoming common in middle and high income countries that are attempting to close the gap with the most developed countries through innovation. For example, Mexico's National Council for Science and Technology (CONACYT) operates a program, the Programa de Estímulos a la Innovación (PEI), which gives innovation subsidies to private firms, either alone or in collaboration with universities or public research centers. 2 In 2013, the program funded 703 projects with a median grant size of USD160,000. About half of these projects went to firms that collaborated with at least two universities or research institutes.
Similarly, Malaysia's science fund gives preference to applications that show links with firms; and Colombia's Colciencias is also attempting to generate more linkages between industry and research.
However, despite the increasing popularity of such policies, there is currently little empirical evidence on the causal effects of R&D subsidies for science-industry consortia (Cunningham and Gök, 2012). Scandura (2016) investigates the impact of public funding for university-industry R&D projects in the UK using propensity score matching based on firm characteristics. She finds positive effects of the program on firms' R&D expenditure and share of R&D employment.
However, she does not investigate the effect of the program on collaboration. Also, although the matching technique produces a comparison group of firms with similar observable characteristics, the results could potentially be driven by differences in unobservable characteristics. Lööf and Broström (2008) use a similar matching technique to estimate the impact of university collaboration on innovation in Sweden. They find that large manufacturing firms that collaborate with universities have higher innovation sales (share of sales from new or improved products) and a greater number of patent applications, though the same concerns about selection on unobservables apply.
This paper adds to this literature by studying the effect of the In-Tech program in Poland on science-industry collaboration, research and innovation, and product commercialization. The In-Tech program provides grants to consortia of research entities and firms for proposed research projects. Applications receive a score based on peer reviewer ratings and those with a score above a threshold are offered funding. Based on this funding rule, we use a regression discontinuity (RD) design to estimate the effects of receiving In-Tech funding for applicants to the 2012 and 2013 calls for proposals. We use data from In-Tech application forms to show that applicants above and below the cutoff have similar characteristics, suggesting that the RD approach is valid.
Follow-up information on projects and consortia outcomes comes from a 2016 survey of 400 applicants both above and below the funding cutoff that was specifically designed to measure the impact of In-Tech. The consortium leaders in our sample have a mean of 100 and median of 28 research employees and a mean of 140 and median of 41 technical and administrative staff members. Most projects are in a field of engineering, with the largest field being electrical, mechanical, or materials engineering.
Our findings show that receiving In-Tech funding increases the probability of a project being completed by almost 60 percentage points (from about 20% completed to close to 80% completed).
The survey responses regarding collaboration reveal that most consortia had already collaborated before applying to the program (about 85 percent of applications). However, the impact estimates show a 14 to 18 percentage point increase in the likelihood of a new collaboration taking place within the consortia since applying. We also examine whether In-Tech funding crowds out collaboration with entities outside the consortia and do not find this to be the case, suggesting that In-Tech led to more science-industry collaboration overall.
Looking at innovation outputs, we find that receiving In-Tech funding increases the probability that the consortium applied for a patent related to their proposed project (from about 15% to 60%) and there is no effect on applying for a patent for another project. Similarly, members of a consortium that received In-Tech funding are more likely to publish a paper related to their proposed project, with no significant effect on publications related to other projects. We do not find an effect on other measures of innovation, such as having developed a new industrial design, new prototype, new products, or new process during the past five years. The lack of impact on these measures may be due to the fact that about 80% of consortia in our sample report doing at least one of these activities, so the baseline level of innovation is quite high.
Finally, for commercialization, we find that receiving In-Tech funding leads to about a 20 percentage point higher probability of a product related to the proposed project being ready for sale or currently being sold in Poland. However, these products currently account for only about 1% of firm sales, which may reflect the fact that our follow-up period is relatively short, covering only 3 to 4 post-funding years.
Although there is relatively little literature on the effectiveness of research grants to foster industry-science collaborations, our paper is related to a broader literature that has studied the effects of grants to firms, or to researchers, separately. The early literature on R&D subsidies to firms typically relied on panel estimation, instrumental variables (IV) techniques or propensity score matching to estimate these effects and may be subject to estimation bias. 3 More recent papers use regression discontinuity designs and find that government policies to promote R&D by firms have increased patent applications by these firms in Italy (Bronzini and Piselli 2016), the UK (Dechezleprêtre et al. 2015), and the US (Howell 2015). The studies for the UK and US also find positive effects of R&D policies on firm sales (the paper for Italy focuses on patents only).
A related literature examines the impact of National Science Foundation (NSF) or National Institutes of Health (NIH) grants to researchers in the US on publications and patents. Two earlier studies using OLS and IV estimation find no or only small effects (Arora and Gambardella 2005 and Jacob and Lefgren 2011). A more recent study also using IV estimation concludes that a $10 million boost in NIH funding leads to an 18% increase in the number of patents citing the funded research (Azoulay et al. 2015). In a similar vein, Ganguli (forthcoming) studies a large-scale grant program for scientists in the former Soviet Union, funded by financier George Soros, and finds that the grants more than doubled researcher publications. Branstetter and Sakakibara (2002) and Sakakibara and Branstetter (2003) measure the impact of government-sponsored company-to-company cooperative R&D projects in Japan and the US using matched control groups of firms. They find that participating in these consortia leads to a higher number of patents. Other papers have used matching techniques or structural models to estimate the effect of EU-sponsored research joint ventures, some of which include collaborations between firms and universities. These studies tend to find a positive association between participating in a research joint venture and firm productivity and profitability (see, for example, Benfratello and Sembenelli 2002 and Barajas, Huergo, and Moreno 2012), but they do not distinguish between joint ventures that include universities and those that do not.
Our findings are thus in line with previous findings regarding the positive impact of R&D subsidies on publications, patents and commercialization. To our knowledge, ours is the first paper to analyze data from a survey that was specifically conducted to measure the impact of the subsidy program. 4 These data allow us to consider a broader range of outcomes than previous studies, and, in particular, to document the positive effect of Poland's In-Tech program on science-industry collaboration, thus adding a new finding to the literature.
The rest of this paper is organized as follows. Section 2 describes the In-Tech program and its application process and scoring. Section 3 discusses the RD design and follow-up survey. Section 4 presents the results and Section 5 concludes.

The In-Tech Program
Poland's National Center for Research and Development (NCBiR) is a government entity whose programs account for around 40 percent of the national research and development budget. One of the programs it offers is Innotech, which is designed to support research entities and businesses in carrying out innovative projects in scientific and industrial areas. Innotech has two program tracks, In-Tech and Hi-Tech. Our focus is on In-Tech, the larger of the two program tracks. 5 In-Tech has three main objectives. The first is to increase the number of developed and implemented innovative technologies in Poland, the second is to increase funding spent by companies on research and development projects that can benefit the economy, and the third is to strengthen collaboration between the scientific community and firms. The goal is to take ideas from the research stage through to commercialization. Beneficiaries can receive funding for two phases: a research phase of up to 24 months, and an implementation phase of up to 12 months. 6 They could apply for both research and implementation, or for the research phase only.
The program provides funding for each project of up to 10 million PLN (approximately USD2.9 million), with a median grant size of 2.3 million PLN (approximately USD660,000). 7 Funding can be used to support research expenses during the research phase, including payroll, scientific equipment, research services, operating costs and overheads; and in addition during the implementation phase can also cover costs associated with getting a product ready for market, including advisory services, legal fees, administrative fees, market research, compliance certification with national or international standards, and activities involved with obtaining industrial property rights. Recipients are given the grants, but the program does not provide any additional capacity building activities such as mentoring, training, connections to angel investors and Venture Capital funds, etc. Up to 100% financing of research institutions for research tasks is provided, while firms receiving funding are expected to contribute between 10 and 35 percent of the costs depending on firm size and type of expenditure.
Public funding for these projects is potentially justified as addressing several market failures. The first is a standard failure in the credit markets for financing risky innovation, where asymmetric information and the risk of failure make banks reluctant to lend for research and development activities. Second, if innovation has spillover benefits for other firms due to copying or building on research findings, then the social benefits may exceed the private benefits, leading firms to underinvest. A third market failure is a coordination failure between industry and research units. Advanced technological research can require costly equipment and experts that may be beyond the reach of many firms. Universities and other research units have this equipment (typically publicly financed) and research experts, but may not have the right links to firms or knowledge of market needs. While the program does not directly provide match-making services, the funds may incentivize industry and research units to come together in order to be able to apply for funding.

5 The Hi-Tech program is designed for businesses in advanced technology areas, and funded 60 firms over two funding cycles.
6 Beneficiaries could receive funding for up to 36 months if they applied only for the research phase, and have the possibility of extending the implementation phase to 12 months if they can provide sufficient justification.
7 The Polish Zloty has fluctuated between 3 and 4 PLN to the USD over the period 2011-2016. We use 3.5 for these conversions to provide an approximate sense of the USD value during the lifetime of the program.

Application Process
The program was established in November 2011, and was advertised in national newspapers as well as through the website of NCBiR. Applicants could apply as a scientific consortium consisting of at least one research unit and at least one enterprise, or otherwise as an enterprise or as a center for industry and science. There were two annual calls for proposals, with a one-month period to apply online, the first in April 2012, and the second in July 2013. There was no restriction on the sectors in which firms could be operating, so long as the proposal was for developing and implementing innovative technologies. Applications are detailed, with information about the qualifications and prior experience of the applicants, the proposed research tasks, how the partners in a consortium will work together, the possibilities for commercialization, and a detailed budget.
The two calls attracted 968 applications, of which 808 were from consortia, 158 from enterprises, and 2 from scientific units. The program funded 164 of these, with 159 of those funded coming from consortia, and only 5 applications from enterprises being funded. As a result, our impact evaluation focuses only on consortia, since too few applications from enterprises were funded to allow estimation of the impact of receiving In-Tech funding on that group.

Scoring
Proposals were sent to three peer reviewers, who were typically Polish experts in the area of the application. A World Bank process evaluation noted that many of these experts had limited industrial expertise and business background, leading to the potential favoring of the scientific value of the projects over their commercial value. These reviewers were asked to grade each proposal on eight criteria, each of which they would score on a 5 point scale, for a maximum score of 40. Proposals were scored on i) the scientific value of the project; ii) the level of innovation of the product that the project would result in; iii) the previous achievements and experience of the project team; iv) the extent to which they have the necessary material and human resources to implement the project; v) the degree of cooperation between scientific units and industry; vi) the expected economic results; vii) the commercialization potential; and viii) the extent to which enterprises are contributing their own funds towards the project. Reviewers were also asked to verify the proposed budgets and discuss the extent to which the costs proposed were justified.
The scores from these peer reviewers were then averaged to give an overall score for the application. Applications were ranked on this score, and a cutoff point was set separately for each call for proposals, determined by the available budget for that funding round. This resulted in cutoff scores of 32 for the first call for proposals and 30.5 for the second. In addition to exceeding this threshold, to be funded applicants also had to achieve minimum scores of at least 2 or 3 out of 5 on most of the score sub-components. NCBiR program managers would then negotiate budgets with those selected if reviewers had identified queries on budget items. Applicants were provided with their scores and had the right to appeal. We use the initial score, which does not reflect any revisions from these appeals, and has the advantage of not being influenced by endogenous applicant responses to their scores. This review process took 5 to 7 months, after which successful applicants signed funding agreements. Successful applicants for the first round received their funding starting in December 2012, while those in the second round received this funding starting February 2014. Eligible costs for the project could be incurred from the day of application, and refunded once the project was selected and a financing agreement signed. As a result, some of the second round projects could still be ongoing at the time of our study, although in practice we show the majority had been completed.

Regression Discontinuity Design and Survey
This scoring process has several features that make it suitable for regression discontinuity analysis. Funding is decided by a clear score cutoff, and the cutoff score is unknown to peer reviewers and determined by funding availability. The discontinuity at the threshold is not sharp for two reasons. First, there were two consortia applications that received scores at or above the threshold which were not funded: one withdrew its application and decided not to sign a funding agreement, and one was rejected to avoid a risk of double-funding. Second, 14 of the 159 funded applications had initial scores below the cutoff, and were approved following appeals on the original scores.
We combine the two rounds for analysis by subtracting the round-specific cutoff scores. Figure 1 then plots the proportion of applications with each score receiving funding, showing a large jump at the standardized cutoff point of zero. 8 The size of the jump is between 0.68 and 0.82 depending on the bandwidth and whether a local linear or local mean is fit, and is always highly significant.
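The pooling step can be sketched in a few lines. The function below is only an illustration under our own assumptions (hypothetical names and inputs, a simple local mean within the bandwidth rather than the paper's rdrobust-based local linear fits):

```python
import numpy as np

def funding_jump(scores, funded, cutoffs, rounds, bw=8.0):
    """Center each score on its round-specific cutoff, then estimate the
    jump in the funding probability as a difference of local means."""
    rounds = list(rounds)
    centered = np.asarray(scores, float) - np.asarray([cutoffs[r] for r in rounds])
    funded = np.asarray(funded, float)
    in_bw = np.abs(centered) <= bw   # keep observations within the bandwidth
    above = centered >= 0            # standardized cutoff is zero
    return funded[in_bw & above].mean() - funded[in_bw & ~above].mean()
```

Plotting the share funded at each centered score would give an analogue of Figure 1; the jump reported there (0.68 to 0.82) comes from local linear or local mean fits within various bandwidths.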

Manipulation Tests
We conducted two different manipulation tests to check for potential sorting around the cutoff.
These tests estimate the density of scores on each side of the cutoff and then check whether there is a jump in these densities at the cutoff, which would suggest that the scores may have been manipulated. The first test, the McCrary test, gives an estimated log difference in the density functions at the cutoff of 0.382 (standard error 0.191). The second test uses the rddensity command in Stata and gives an estimated difference of -0.236 in the density at the cutoff (p-value 0.813).
While the McCrary test suggests that there is an upward jump in the density at the cutoff, the rddensity test does not show a significant difference in the density at the cutoff and in fact the sign of the coefficient here suggests a downward jump. We believe that the difference in results is due to the different procedures for smoothing the discrete score variable used in both approaches.
As a further check, we plot the distributions of scores on each side of the cutoff in Figure 2. The distributions look very similar on both sides of the cutoff. 9 Also, based on program information we have no reason to believe that scores could be manipulated around the cutoff since judges did not know the cutoff at the time of scoring applications. Overall, we thus conclude that a regression discontinuity approach is valid for estimating the impact of In-Tech.
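The intuition behind these density tests can be illustrated with a crude histogram-based check. This sketch is not McCrary's local-linear density estimator or the rddensity procedure, and the binning choices are our own hypothetical defaults:

```python
import numpy as np

def density_jump(centered_scores, bin_width=0.5, k=4):
    """Crude bunching check: compare total histogram mass in the k bins
    just below the cutoff with the k bins just above, and return the log
    difference (positive values suggest more mass above the cutoff)."""
    s = np.asarray(centered_scores, float)
    count = lambda lo, hi: int(np.sum((s >= lo) & (s < hi)))
    mass_below = sum(count(-(i + 1) * bin_width, -i * bin_width) for i in range(k))
    mass_above = sum(count(i * bin_width, (i + 1) * bin_width) for i in range(k))
    return float(np.log(mass_above / mass_below))
```

The formal tests differ mainly in how they smooth the discrete score variable before comparing the two sides, which is why the McCrary and rddensity results can diverge.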

Estimation and Bandwidth Selection
We estimate the average treatment effect of receiving In-Tech funding for firms with scores at the cutoff using the bias-corrected inference procedure of Calonico et al. (2014a), implemented with the Stata rdrobust package of Calonico et al. (2014b). This procedure estimates the sharp regression discontinuity (sharp-RD) treatment effect by using kernel-based local polynomials within a specified bandwidth or neighborhood (h) on either side of the score cutoff. Since the discontinuity in our case is fuzzy, not sharp, the treatment effect we estimate is the ratio of the sharp-RD estimate from local regression of the outcome of interest on the score, to the sharp-RD estimate from local regression of funding outcome on the score. Robust bias-corrected p-values are then obtained which account for the bias involved in estimating the optimal h to use.
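A stripped-down version of the fuzzy-RD estimand, the ratio of the outcome jump to the funding jump from local linear fits with a uniform kernel, might look like the following sketch; it omits the bias correction and robust inference that rdrobust provides, and the function names are our own:

```python
import numpy as np

def sharp_rd(x, y, bw):
    """Sharp-RD jump: local linear fit (uniform kernel) on each side of the
    cutoff at zero; the jump is the difference in intercepts at x = 0."""
    def intercept(mask):
        X = np.column_stack([np.ones(mask.sum()), x[mask]])
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        return beta[0]
    return intercept((x >= 0) & (x <= bw)) - intercept((x < 0) & (x >= -bw))

def fuzzy_rd(x, outcome, treated, bw=8.0):
    """Fuzzy-RD effect: ratio of the outcome jump to the treatment jump."""
    x = np.asarray(x, float)
    return (sharp_rd(x, np.asarray(outcome, float), bw)
            / sharp_rd(x, np.asarray(treated, float), bw))
```

For example, if crossing the cutoff raises the funding probability by 0.5 and the outcome by 1.0, the fuzzy-RD effect of funding is 2.0.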
The choice of bandwidth or neighborhood in which to do the RD estimation is the most important task in carrying out estimation in practice, since empirical results can at times be sensitive to which observations are used in the analysis (Cattaneo and Vazquez-Bare, 2016). We use the Calonico et al. (2014a) optimal mean-squared error bandwidth selection procedure with a uniform kernel for local linear estimation on the few variables available for the full sample from the application data (grant size, technical field, whether they applied for both the research and implementation phase, and whether the leader of the consortia is an enterprise or research institute). This gives optimal bandwidths between 5.0 and 8.9. Coupled with power calculations which showed that relatively wide bandwidths are necessary to have sufficient statistical power, this led us to choose a bandwidth of 8 points on either side of the cutoff for our main specifications, and for surveying. 10 This results in keeping 158 out of 159 of the funded applications, and 301 out of 596 of the non-funded applications (51%), for a total sample size of 459 projects. These 459 projects had further information manually coded from their application forms, and were the target sample for a follow-up survey (described below).

We then also consider three additional specifications for robustness. The first is to do further bandwidth selection for local linear estimation of the treatment effect on each outcome variable that we collect within this interval of 8 points on either side of the cutoff. This will give a different optimal bandwidth for each outcome considered, and may give a smaller bandwidth than we would have chosen had data first been collected on the full sample before choosing the bandwidth. It has the advantage of potentially further reducing bias, but, by taking a smaller bandwidth than 8 points, will have less statistical power. Second, we consider local mean estimation with the optimally chosen bandwidth within the 8 point interval. This will typically choose an even smaller bandwidth than that chosen for the local linear estimation, and then just compare means on either side of the cutoff within this small bandwidth. Third, we go back to the 8-point bandwidth, and add a set of controls for key application data variables that may be correlated with the outcomes of interest (call for proposals round, whether the leader of the consortia is an enterprise or not, the number of partners in the consortium, whether they applied for just the research phase or also the implementation phase, whether the proposal noted a plan to export, and the number of research employees at the consortia leader). We are interested in whether the magnitude of the estimated treatment effects is similar across specifications, even if the p-values lose significance when smaller bandwidths and smaller samples are used.

Approach to Multiple Hypothesis Testing
We examine the impact of receiving In-Tech funding on a number of different outcome variables.
We use two approaches to deal with multiple hypothesis testing. The first is to group the outcomes into domains or families (e.g. collaboration, research, commercialization), and then examine the impact on an aggregate standardized index measure within each domain. Following Katz et al. (2007) we create these aggregates as the average z-score (obtained by subtracting the mean of each variable and dividing it by its standard deviation). This approach is useful for answering questions like "Did In-Tech have a significant effect in the collaboration domain on average?" A second approach is to construct sharpened q-values following Anderson (2008) and Benjamini et al. (2006). This process uses a two-stage procedure to control the false discovery rate when reporting results for specific outcomes. This is useful for answering questions like "After accounting for multiple hypothesis testing, did In-Tech funding have a significant impact on the likelihood that members of a consortia collaborated together on a research and development project?"
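To make the two approaches concrete, here is a minimal sketch of the average z-score index (Katz et al. 2007) and of grid-based sharpened q-values in the spirit of Benjamini et al. (2006) and Anderson (2008). The grid, helper names, and simplifications are our own; this is not the authors' implementation:

```python
import numpy as np

def zscore_index(outcomes):
    """Average z-score index: standardize each outcome column, then average."""
    X = np.asarray(outcomes, float)
    return ((X - X.mean(axis=0)) / X.std(axis=0)).mean(axis=1)

def bh_rejections(p, q):
    """Number of Benjamini-Hochberg step-up rejections at FDR level q."""
    ps = np.sort(p)
    below = ps <= q * np.arange(1, p.size + 1) / p.size
    return int(np.nonzero(below)[0].max() + 1) if below.any() else 0

def sharpened_qvalues(pvals, grid=None):
    """For each p-value, the smallest FDR level q on a grid at which the
    two-stage procedure rejects it (sweeping q downward)."""
    p = np.asarray(pvals, float)
    m, order = p.size, np.argsort(p)
    qvals = np.ones(m)
    if grid is None:
        grid = np.arange(0.001, 1.0, 0.001)
    for q in grid[::-1]:
        q1 = q / (1 + q)                           # stage 1: BH at q/(1+q)
        m0 = m - bh_rejections(p, q1)              # estimated number of true nulls
        r2 = bh_rejections(p, q1 * m / max(m0, 1)) # stage 2: BH at adjusted level
        qvals[order[:r2]] = q                      # overwritten by smaller q later
    return qvals
```

The index answers the domain-level question in one test, while the sharpened q-values adjust each outcome-specific p-value for the number of hypotheses examined.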

Placebo Tests and Applicant Characteristics
We use data from In-Tech application forms to examine whether applicants on each side of the cutoff have similar characteristics. Table 1 shows averages of these characteristics for funded and non-funded proposals, as well as RD estimates for the difference in these characteristics at the cutoff, using the different bandwidths discussed above.
The averages show that the characteristics of funded and non-funded projects are very similar.
About 25% of projects applied only for the research phase (as opposed to research and implementation), which lowers the average project duration across all projects to 32 months instead of the 36 months that are the standard for research plus implementation. The majority (54%) had two consortium partners, with 33% having three partners and only 13% having four or more partners. In slightly less than half of the consortia the leader was an enterprise, as opposed to an R&D institute, scientific unit or a public university. About 26% of leaders were based in Warsaw. For about half of the projects, the application forms stated that they plan to develop a new product or process as opposed to improving an existing product or process. About 14% of projects had the aim of improving a business process. Finally, 44% of projects had plans to sell their innovation on the export market.
The RD coefficients in Table 1 show a statistically significant difference in project characteristics at the cutoff for only two variables. First, the specification with bandwidth 8 suggests that project leaders above the cutoff are more likely to be based in Warsaw. Second, both the specifications with bandwidth 8 and with the local linear optimal bandwidth suggest that projects above the cutoff are more likely to develop a new product or process vs. improve an existing product or process.
However, neither of these characteristics shows a statistically significant difference in the specification with the local mean optimal bandwidth. Also, since we are looking at differences for 16 characteristics, we can expect a couple of them to be statistically significant due to chance. To account for multiple hypothesis testing we calculate sharpened two-stage q-values as suggested in Benjamini et al. (2006). The results do not show any statistically significant differences in project characteristics at the cutoff. In fact, all sharpened q-values are above 0.7 (i.e. the probability that the observed differences are due to chance is 70% or greater; for the specifications with bandwidth 8 and the local mean optimal bandwidth this probability is 100% for all variables).
Appendix Table 1 replicates Table 1 keeping only the applicants that we are using in our impact analysis, i.e. the applicants who replied to the follow-up survey described in the next subsection.
The results are very similar to those in Table 1. We find only a handful of statistically significant differences in applicant characteristics at the cutoff and none of the sharpened q-values are statistically significant at conventional levels (the lowest sharpened q-value is 0.452).
To further describe the projects in our sample, Appendix Table 2 shows the distribution of projects across fields of science and technology. The majority of projects are in either electrical, mechanical, or materials engineering. The distribution of projects looks similar across funded and non-funded projects, but funded projects are more likely to be in material or mechanical engineering and less likely to be in electrical engineering.

Survey Design, Timing, and Attrition
In collaboration with NCBiR staff, we designed a follow-up survey questionnaire of In-Tech applicants. The survey asked specifically about the project that was the focus of their In-Tech application, and asked whether this project had been started, whether it had been completed, and the different outputs. It also asked about forms of collaboration, research and development, and commercialization activities for other projects. The survey was administered by an independent survey firm, GfK. It took place between June and September 2016. This timing corresponds to over four years since application and approximately 3.5 years since funding was given for those in the first call for proposals; and three years since application and approximately 2.5 years since funding was given in the second call for proposals. As such, all projects under the first round should be completed, while those under the second funding round should have completed the research phase, but some may still be in the implementation phase.
The sample frame for the survey was all projects with scores within the selected bandwidth of 8 points on either side of the funding cutoff. For each project, an attempt was made to interview the leader of the consortium, and the main partner. When there were two or more partners in the consortium, the main partner was defined as the one who received the highest budget share among all partners to the project. We therefore receive up to two responses per project. We aggregate these responses up to the project level for analysis. Some of the questions are specific to either the enterprise or the research partner, while other questions are asked of both. Where a question is asked of both, and the responses differ, we follow a rule of defining an outcome as having occurred if at least one of the two parties says it has occurred. For example, if one of the two parties in the consortium says a patent was obtained, we code this as there having been a patent obtained for the project. Table 2 shows the survey response rates by respondent type and funding status, and tests whether there is a significant difference in response rates at the discontinuity. Overall, responses were received for 87.1 percent of projects, including 93.7 percent of funded projects and 83.7 percent of non-funded projects. The difference in response rates at the discontinuity ranges from 0.5 to 6.4 percentage points, and is not statistically significant under any of our four specifications.
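The "at least one party reports it" aggregation rule can be illustrated with a small sketch (the column names and data below are hypothetical):

```python
import pandas as pd

# Hypothetical respondent-level data: up to two rows per project
# (consortium leader and main partner), with 1/0 outcome indicators.
responses = pd.DataFrame({
    "project_id": [101, 101, 102, 102, 103],
    "respondent": ["leader", "partner", "leader", "partner", "leader"],
    "patent_obtained": [0, 1, 0, 0, 1],
})

# "At least one party reports it" rule: take the max within each project.
project_level = (responses
                 .groupby("project_id", as_index=False)["patent_obtained"]
                 .max())
```

Taking the within-project maximum codes an outcome as having occurred whenever either respondent reports it, matching the rule described above.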
Examining responses by respondent type, we see that leaders had higher response rates than partners, and enterprises higher response rates than research institutions, but that there was no significant difference in the response rates at the discontinuity for any of the respondent types.
Coupled with the analysis in Appendix Table 1, which showed that applicant characteristics also appear similar for the sample responding to the survey, this leads us to treat attrition as missing at random in our analysis.

Results
We begin by examining the question of additionality, that is, whether In-Tech funding causes projects to be started and completed that would not otherwise be undertaken, or whether In-Tech funding merely crowds out other funding sources. We then examine impacts on the three main objective domains for the Innotech program: collaboration, research and innovation, and commercialization. Table 3 examines the additionality impacts of In-Tech funding. In addition to considering both rounds pooled together, we also examine outcomes by funding round to see whether there are large differences in project completion rates due to the difference in time since funding was received.

Impact on Additionality
We first consider whether the project received any public money, whether from NCBiR or another funding agency. One hundred percent of funded projects should answer yes to this question, although in practice a couple of projects recorded as funded in the administrative data reported in the survey that they did not receive public funding. Projects that do not receive In-Tech funding are unlikely to receive other public funding. At the discontinuity, the difference in the likelihood of funding is 75 percentage points, which is statistically significant and robust to bandwidth choice and specification. This is illustrated graphically in the left panel of Figure 3, which shows a clear jump at the funding threshold. In-Tech is thus mostly funding projects that would otherwise not get funded by another program or agency.
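The fuzzy-RD treatment effects reported in these tables can be thought of as a Wald ratio: the jump in the outcome at the scoring cutoff divided by the jump in the probability of receiving funding. A minimal local linear sketch of this estimator on simulated data (none of the numbers here come from the paper; the true simulated effect is 0.5):

```python
import numpy as np

def local_linear_jump(score, y, bandwidth):
    """Estimate the jump in y at the cutoff (score = 0) by fitting
    separate linear regressions on each side within the bandwidth."""
    def fit_at_cutoff(mask):
        X = np.column_stack([np.ones(mask.sum()), score[mask]])
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        return beta[0]  # intercept = predicted value at the cutoff
    left = (score < 0) & (score >= -bandwidth)
    right = (score >= 0) & (score <= bandwidth)
    return fit_at_cutoff(right) - fit_at_cutoff(left)

# Simulated illustration: scores centered at the cutoff, imperfect
# compliance with funding, and an outcome shifted by funding.
rng = np.random.default_rng(0)
score = rng.uniform(-8, 8, 2000)
funded = (rng.random(2000) < np.where(score >= 0, 0.95, 0.20)).astype(float)
outcome = 0.02 * score + 0.5 * funded + rng.normal(0, 0.1, 2000)

bw = 8.0
# Fuzzy-RD (Wald) estimate: jump in outcome / jump in funding probability.
tau = local_linear_jump(score, outcome, bw) / local_linear_jump(score, funded, bw)
print(round(tau, 2))  # should be close to the true effect of 0.5
```

The paper's estimates additionally use bias-adjusted robust p-values and MSE-optimal bandwidths (as in the rdrobust literature); this sketch shows only the core ratio logic.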
We next examine whether the proposed project on the application was started, and whether it was completed. Projects not funded through In-Tech may be started by applicants using internal or external funding sources. We find that 99 percent of the projects funded by In-Tech were started, compared to only 39 percent of those not funded, with a statistically significant gap of 53 to 59 percentage points at the discontinuity. The impact at the discontinuity on starting is similar for both calls for proposals. Projects funded under the first call for proposals are more likely to have been completed by the time of the survey than those funded under the second call (90 percent versus 76 percent). However, since those not funded are also less likely to have been completed by the time of the survey if they were from the second call, the treatment impact is only slightly smaller for the second round, at 50 percentage points, compared to 57 percentage points for the first round. That is, around the scoring cutoff, slightly more than half of all projects funded by In-Tech would not have been started and would not have been completed had their scores been slightly lower and they had not been funded. This is illustrated graphically in the right panel of Figure 3. Since the impact on completion is similar across funding rounds, we continue to pool funding rounds for the remainder of the analysis. Finally, our aggregate summary index of additionality outcomes is large, positive, and statistically significant, showing a 1.2 standard deviation increase.
When asked the main reason for not completing the research, the modal answer of those not funded was that they lacked funding (80%). In contrast, for those which received funding, the main reasons for not completing the project were that the research idea did not work in practice (32%), that implementation was not economically feasible even though the technical results had been promising (15%), or that there were collaboration problems (15%).

Impact on Collaboration
Table 4 examines whether In-Tech funding has induced more collaboration between enterprises and researchers within a consortium. In most cases the leader and partner had collaborated before applying to the program: 87 percent of funded applications were from consortia in which this was the case, with no significant difference in this prior involvement at the funding threshold. The median length of this prior collaboration is 6 years. Almost all of those funded collaborate together within their consortia after applying, at least in working on the In-Tech project. This results in a statistically significant 8 to 20 percentage point increase in the likelihood of collaboration taking place within the consortia since applying, and a 14 to 18 percentage point increase in the likelihood of a new collaboration taking place between the leader and partner, which is statistically significant at the 10 percent level over the full 8-point range, and at the 5 percent level with narrower bandwidths.
The two top panels in Figure 4 illustrate this impact on collaboration graphically, and show where the sensitivity to bandwidth choice comes from. We see that the likelihood of any collaboration within the consortia taking place since applying is increasing with score below the cutoff if we use the full 8-point range, but appears flat or even decreasing if we were to consider only half this bandwidth. Hence the estimated impact is larger for the optimal bandwidth methods that consider narrower ranges. This is less of an issue for new collaborations, but there the issue is that many of the consortia with scores 4 points or more above the cutoff had already been collaborating, so that there was no possibility of their collaboration being new.
The next six rows of Table 4 consider the nature of this collaboration in greater depth. The most common form of collaboration is for research and development, and receiving In-Tech funding increases the likelihood of this type of collaboration by a statistically significant 22 to 27 percentage points at the cutoff in three out of four specifications. The increase is even larger when local linear estimation is done using the optimal bandwidth. The bottom left panel of Figure 4 shows why: when looking in a very narrow bandwidth, the linear fit would actually be downward sloping to the left of the cutoff, rather than upward sloping as estimated with the full data. We see positive, but not statistically significant, impacts for other forms of collaboration such as prototyping, technological consulting, and research publications, with estimated effect sizes between 2 and 7.7 percentage points using our preferred specification with the full 8-point bandwidth.
When asked the main reason they did not collaborate with their leader or partner since applying for the In-Tech program, the main reason given is a lack of funding for the project. Receiving In-Tech funding results in a significant 11.9 percentage point reduction in the likelihood that parties did not collaborate due to a lack of funding.
The survey also asks whether the leader and first partner have collaborated together since applying on other projects, and whether they currently work together. There is no significant impact on either outcome, indicating that the specific collaboration for the In-Tech project has not led to further collaboration outside of the project.
Fifty-four percent of the sample have only two parties in their consortium. Those with two or more partners were also asked about collaboration with the second major partner in the consortium. We see that, similar to the first partner, this second partner was typically an entity that was already collaborating with others in the consortium prior to applying. There is a positive, but small and statistically insignificant, impact on the likelihood that they collaborate since applying, though the sample for this analysis is small.
Finally, our overall summary index of collaboration between the leader and principal partner shows that In-Tech funding resulted in a 0.2 to 0.3 standard deviation increase in collaboration, which is statistically significant at the 1 percent level for three out of four specifications, and at the 10 percent level for the local linear estimation with the smaller optimal bandwidth. The bottom right panel of Figure 4 shows this increase graphically.
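The summary indices used throughout the paper are described in the table notes as the average of standardized z-scores of all other outcomes in the table, with some items reverse-coded. A minimal sketch of this construction follows; the outcome names and data are hypothetical, and the choice to standardize using the control-group (non-funded) mean and standard deviation is an assumption, following the common Anderson (2008) convention, which the paper does not spell out:

```python
import numpy as np

def summary_index(outcomes, control_mask, reverse=()):
    """Average of standardized z-scores across outcomes. Each outcome is
    standardized using the control-group mean and standard deviation;
    reverse-coded items are flipped so a higher index always means more
    of the desired outcome (e.g. more collaboration)."""
    z_cols = []
    for name, values in outcomes.items():
        v = np.asarray(values, dtype=float)
        if name in reverse:
            v = -v
        mu = v[control_mask].mean()
        sd = v[control_mask].std(ddof=1)
        z_cols.append((v - mu) / sd)
    return np.mean(z_cols, axis=0)

# Hypothetical outcomes for six projects (first three unfunded = controls).
control = np.array([True, True, True, False, False, False])
outcomes = {
    "collaborated_since_applying": [0, 1, 0, 1, 1, 1],
    "no_collab_due_to_lack_of_funding": [1, 0, 1, 0, 0, 0],  # reverse-coded
}
idx = summary_index(outcomes, control,
                    reverse=("no_collab_due_to_lack_of_funding",))
print(idx.round(2))
```

By construction the control group has a mean index of zero, so the treated-group mean reads directly in control-group standard deviation units, matching how the 0.2 to 0.3 standard deviation effects are reported.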
We then turn in Table 5 to examining research-industry collaborations outside of the consortia that had applied for In-Tech. Our interest here is in seeing whether the In-Tech collaboration crowded out other types of collaboration that might have occurred, or conversely, whether the experience of collaboration between research and industry through the In-Tech project spurred additional such collaborations.
Most applicants are also involved in other collaborations apart from the one for which they applied for In-Tech. The first row shows a reduction of 4 to 13 percentage points in the likelihood they have collaborated with another entity outside of their consortium over the past year, which is statistically significant at the 10 percent level when using the full 8-point bandwidth, but not with smaller bandwidths. When we examine specific types of collaboration, this shows up in a lower likelihood of a research publication with another partner and a lower likelihood of technological consulting with others.
When researchers are asked if they are collaborating as much with enterprises as they would like to, only half of them say that they are. Receiving an In-Tech grant has a negative, but not statistically significant effect on this response. When asked what the most important reason for not collaborating more with firms is, less than one-third say the problem is lack of financing. The other main reasons given are bureaucratic barriers to this cooperation, that firms are difficult to work with or have no incentives to collaborate, and that they lack information about the needs of firms or knowledge of how to talk about business.
Firms are more likely to say they are doing as much collaboration with researchers as they would like, with only one-quarter of In-Tech funded firms saying they are not. In-Tech funding has a positive, but not statistically significant, impact on the likelihood of doing as much collaboration as they would like with researchers. The main barriers that those who would like to collaborate more identify are again mostly non-financial: only 19 percent say lack of funding is the main constraint, whereas the modal response is that researchers do not understand the needs of firms, while other frequent reasons given are lack of knowledge about the capabilities of researchers, and being too busy with day-to-day operations to set up these collaborations.
Almost all firms say they know of researchers they can ask for ideas or help in solving problems, and likewise almost all researchers say they know of firms in their industry they could talk to if they had ideas to take to market. In-Tech funding leads to a statistically significant 7 to 13 percentage point increase in the likelihood that researchers around the cutoff know of firms they can ask, and has a positive, but not statistically significant, impact on firms' knowledge of researchers.
Considering all these non-consortium forms of collaboration together, where some forms of other collaboration appear to have been crowded out while other aspects of the capacity to collaborate may have increased, we see that the impact on the summary index varies between -0.07 and +0.04 standard deviations and is not statistically significant. Thus, any overall impact on other collaboration is small in magnitude.

Impact on Research and Innovation
Table 6 examines the impact of In-Tech funding on research and innovation achievements over the past five years. The first five rows consider patenting activity. Survey participants were asked about patent applications related to the In-Tech project, as well as those related to other projects. Sixty-five percent of funded In-Tech projects had applied for a patent related to their project, with this rate similar across both funding rounds. To date, 41 percent have a patent approved; only 2 percent of those applying have had their patent rejected, and almost all of those who applied but have not yet had a patent approved are still awaiting an answer on their application (32 percent of patent applicants from the first funding round, and 44 percent of those from the second funding round, were still awaiting the decision). Almost all those applying for a patent apply at the Polish patent office, while approximately one-quarter additionally apply at the European patent office.
In-Tech funding has a treatment effect of 41 to 57 percentage points on the likelihood of applying for a patent for the In-Tech project, and leads to a 21 to 44 percentage point increase in the likelihood of having received a patent to date. Both effects are large and statistically significant. The top-left panel of Figure 5 illustrates this jump graphically. In-Tech funding has a small and not statistically significant impact on the likelihood of applying for, or receiving, patents for other projects (top center panel of Figure 5). There is a positive, but not statistically significant, impact on the likelihood that they had applied for any patent in Europe across all their different projects.
The next five rows of Table 6 consider research publication outcomes. Seventy-three percent of In-Tech funded projects have published at least one research article related to the project, which represents a statistically significant treatment effect of 53 to 63 percentage points. This is illustrated in the top right panel of Figure 5. There is a small negative, but statistically insignificant, impact on the likelihood of publishing research articles on other projects during this time, as seen in the bottom left panel of Figure 5. Considering the number of publications, we see no significant effect on the likelihood that the research team has published at least 10 articles in total over this time. 11 We then ask survey respondents to list their five most major publications produced in the last five years from any project, and search for these in Google Scholar. We construct two proxies for overall research quality: the number of these five top publications which were in English, and the number of citations their work has according to Google Scholar. Fifty-seven percent of the non-funded projects have produced no publications in English, and 63 percent have no citations. This compares to 44 percent of the funded projects having no publications in English, and 41 percent having no citations. We see positive impacts of funding on the likelihood of producing research that meets either measure of quality at the discontinuity when we use the full range or local mean, but the impacts are not statistically significant. Since these publications are relatively recent, the number of citations is currently low on average (with a mean of 13 and median of 9, conditional on having any citations), but with high variability (standard deviation of 15). The center panel of Figure 5 shows the variation in citations at each score level, which makes detecting any impact difficult.
The next rows of Table 6 examine different types of innovative activities. Survey participants were asked if they had done any of six different types of innovation in the past five years, whether related to the In-Tech project or not. The most common type of innovation for funded projects was to develop a new product prototype, which 84 percent had done. The next most common forms of innovation were developing a new product (78%), and developing a new process (78%), while developing a new industrial design was the least common form (41%). In-Tech funding has small, and not statistically significant, impacts on all of these measures, suggesting that it did not lead to an overall increase in these types of innovative activities.
The overall summary index of research and innovation shows a 0.13 to 0.31 standard deviation increase in research and innovation. This impact is statistically significant only when we use the local mean specification within the optimal bandwidth, and not when using a local linear specification.
The bottom right panel of Figure 5 shows why this variation in treatment effect arises: the research and innovation index increases with score, and so controlling for this increase lowers the treatment effect.
11 We use 10 or more publications, since the distribution of the reported number of publications in the last 5 years is heavily skewed, with a 25th percentile of 3, a median of 12, a mean of 41, and a 90th percentile of 100. The treatment effect is also insignificant for the number of publications, with an extremely wide confidence interval of (-32.8, 32.0) as a result.
Taken together, the results show that In-Tech funding resulted in more research and innovation taking place for the particular project that was put forward on the application, with this research leading to publications and patents. However, total research and innovation output by the consortia does not appear to have significantly increased, suggesting some potential crowd-out of other research activities, or at least a lack of complementarities.

Impact on Commercialization
Commercialization refers to the process of introducing newly developed processes or products to market. Table 7 and Figure 6 examine the impact of In-Tech funding on different measures of commercialization. Fifty-three percent of funded projects have developed a product or process out of their In-Tech project to the stage where it is ready to be sold or is being sold. Eighty percent have achieved a broader measure of commercialization potential, which also includes having the product or process ready to implement for use in their own firm, or having a working prototype that is almost ready to be sold. In-Tech funding is found to have positive and statistically significant impacts on these measures. Examples of these products or processes being sold include: armor which is resistant to grenades and bullets; a computer modelling method that can be used in shale gas exploration; liquid crystalline film to be used in breast cancer diagnosis; a special transport vehicle for underground mines; new chemical composites; modified orthopedic implants; a new process for continuous casting in manufacturing; a system for adjusting and monitoring wheel suspension components; a laser diode chamber that allows adjusting the wavelength; a new yeast feed for the production of biodiesel; a low pressure turbine blade motor; a new blade for rock drills; different combustion and adhesive components; and technology for making engine components.
One-third of funding applicants are currently selling in Poland a product or process developed through the In-Tech project, and one-sixth are selling outside Poland, with Western Europe the main market. Receiving funding results in a 14 to 25 percentage point increase at the discontinuity in the likelihood of having sales in Poland arising from the project, statistically significant in three out of four specifications. The point estimate is larger still in the local linear specification within the optimal bandwidth, but the small bandwidth reduces the sample size and results in a lack of statistical significance. The impact on sales outside of Poland is more sensitive to specification, and is not statistically significant with the full eight-point range.
Conditional on selling the product or process, mean sales in 2015 coming from this product or process were 2.92 million PLN (approx. USD830,000), and the median was 1.5 million PLN (approx. USD430,000). For the median funded project, these sales constitute only 1.0 percent of total sales in 2015, while the mean is 10.3 percent. At present, then, sales from these products and processes are small relative to the firms' existing business. The small number of projects with reported sales and the large skewness of the outcome make the treatment impact sensitive to bandwidth choice, and the standard errors large, so that the impact is not statistically significant. The bottom left panel of Figure 6 shows how large, but variable, sales to the right of the cutoff lead to this sensitivity.
Another commercial use of the research from the project has been to develop a new or improved process that the enterprise in the consortium can use in its own production. Seventy-six percent of funded firms have done this as a result of the project, which represents a statistically significant 44 to 50 percentage point treatment effect. This large jump is seen in the bottom center panel of Figure 6.
The commercialization activities through the In-Tech project do not appear to have significantly crowded out commercialization from other projects, but nor have they spurred additional efforts.
Coupling this lack of impact on other commercialization with the positive impact on commercialization of the In-Tech project leads to an overall positive impact on our summary index measure: a 0.19 to 0.20 standard deviation increase in commercialization. This increase is statistically significant using the full 8-point range on either side of the cutoff, and of similar magnitude, but not significant, using the narrower ranges chosen by the optimal local linear and local mean estimators within this range. The bottom right panel of Figure 6 shows the jump in commercialization around the discontinuity.

Summary and Multiple Testing
Across Tables 3 through 7 we have estimated the impact of In-Tech funding on five different summary indices and 52 different individual outcomes. 12 Table 8 re-examines our estimates using the sharpened q-values of Anderson (2008) and Benjamini et al. (2006) to control for this multiple testing. At the overall summary index level, we see strong impacts of In-Tech funding on additionality and on collaboration within the consortia. There is no significant impact on collaboration outside the consortia. The impact on research and innovation is significant when we use our full bandwidth with controls, or the local mean specification, and has a sharpened q-value of 0.109 using the full bandwidth without controls. The impact on commercialization remains significant after the multiple-testing adjustment when either specification with the full eight-point range is used.
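The sharpened q-values of Anderson (2008) are computed by running the Benjamini, Krieger and Yekutieli (2006) two-stage step-up procedure over a descending grid of candidate q levels and recording, for each hypothesis, the smallest q at which it is rejected. A sketch of that algorithm follows; the example p-values are illustrative, not the paper's:

```python
import numpy as np

def bh_reject(p, alpha):
    """Benjamini-Hochberg step-up: boolean mask of rejected hypotheses."""
    m = len(p)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.where(below)[0])  # largest index meeting the bound
        reject[order[: k + 1]] = True
    return reject

def sharpened_q(p, grid=None):
    """Sharpened FDR q-values: BKY (2006) two-stage procedure searched
    over a grid of q, as in Anderson (2008); each hypothesis's q-value
    is the smallest q at which it is rejected."""
    p = np.asarray(p, dtype=float)
    m = len(p)
    if grid is None:
        grid = np.arange(1.0, 0.0, -0.001)  # q from 1.000 down to 0.001
    qvals = np.ones(m)
    for q in grid:
        q1 = q / (1 + q)                  # stage 1: BH at q/(1+q)
        r1 = bh_reject(p, q1).sum()
        m0_hat = m - r1                   # estimated number of true nulls
        if m0_hat == 0:
            rej = np.ones(m, dtype=bool)
        else:
            rej = bh_reject(p, q1 * m / m0_hat)  # stage 2
        qvals[rej] = q                    # descending grid keeps the minimum
    return qvals

p_values = np.array([0.001, 0.008, 0.039, 0.041, 0.25, 0.60])
print(sharpened_q(p_values).round(3))
```

Because rejection sets always consist of the smallest p-values, the resulting q-values are monotone in the p-values, which is why a p-value below 0.05 can map to a sharpened q-value below 0.10 even when 52 outcomes are tested.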
When we switch to examining individual outcomes, we see that all 13 individual outcomes with p-values below 0.05 using the full bandwidth have sharpened q-values below 0.10 after accounting for testing 52 different outcomes. The four outcomes with p-values between 0.05 and 0.10 are no longer significant at conventional levels after the multiple testing correction. Eleven of these 13 significant outcomes are specific to the particular project proposed for In-Tech: they show the proposed project is more likely to have been funded, started, and completed if it was awarded funding (additionality); that it generated more collaboration within the consortia on R&D; that it led to publications and patents being more likely to be produced relating to this project; and that it led to a product or process out of this project being commercialized. The other two outcomes show some broader collaborative benefits: researchers receiving funding are now more likely to know firms to talk to, and lack of finance is less likely to be a barrier to collaboration.

Conclusions and Discussion
Our regression-discontinuity analysis shows that the In-Tech project did have measurable impacts on the core outcomes of innovation it was designed to foster. It resulted in more collaboration, more patents and publications, and some commercialization of the resulting product that came out of the project. There is thus clear additionality, as the grant led research and innovation activities to be undertaken that would not otherwise have occurred. This additional innovation does not appear to have significantly crowded out other research activities, but nor, to date, has it had positive spillovers within consortia in terms of additional research and innovation success on other projects.
Nevertheless, while more research and innovation has occurred, some of this appears to be very firm-specific in nature, and not necessarily of a quality that contributes to the world innovation frontier. The majority of the patents are registered just at the Polish patent office, and not at the European office as well, and most commercialized products are currently just sold in Poland. Just over 40 percent of the funded projects have no research publications in English, and no citations.
An important caveat in interpreting these results is that research and innovation is a long-term process. Our analysis measures outcomes 2.5 to 3.5 years after funding was received. This captures the time needed for the initial research activities to have occurred, but it can take more time for patents to be granted, publications to be cited, and new innovations to achieve commercial success.
We are not able to measure whether the innovation had spillover benefits for other firms or researchers in the economy, but suspect that any such benefits would also require longer time horizons to materialize. In future work, it would therefore be of interest for policy makers to attempt to track grant applicants over longer time horizons.

Notes to tables: Surveyed indicates that at least one entity in the consortium was surveyed. A uniform kernel is used. RD coef. is the estimated fuzzy-RD treatment effect. P-value is the bias-adjusted robust p-value. BW is the bandwidth; N is the sample size. Local linear full is estimated over the full survey sample within 8 points of the cutoff; local linear MSE and local mean MSE use MSE-optimizing bandwidths for local linear and local mean estimation within this interval. The last column adds the following controls: whether the leader is an enterprise, whether the application is for the research phase only, the number of parties in the consortium, whether the project aims for an export market, and the number of researchers employed. The summary index is the average of standardized z-scores of all other outcomes in the table. The collaboration summary index does not include prior collaboration, reverse-codes "do not collaborate with consortia due to a lack of funding", and does not include collaboration outcomes with partner 2.