354 results for survivorship bias
Abstract:
Carlin and Finch, this issue, compare goodwill impairment discount rates used by a sample of large Australian firms with ‘independently’ generated discount rates. Their objective is to empirically determine whether managers opportunistically select goodwill discount rates subsequent to the 2005 introduction of International Financial Reporting Standards (IFRS) in Australia. This is a worthwhile objective given that IFRS introduced an impairment regime, and within this regime, discount rate selection plays a key role in goodwill valuation decisions. It is also timely to consider the goodwill valuation issue. Following the recent downturn in the economy, there is a high probability that many firms will be forced to write down impaired goodwill arising from boom period acquisitions. Hence, evidence of bias in rate selection is likely to be of major concern to investors, policymakers and corporate regulators. Carlin and Finch claim their findings provide evidence of such bias. In this commentary I review the validity of their claims.
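The sensitivity that makes discount rate selection so consequential can be illustrated with a small sketch (the figures are hypothetical, not drawn from Carlin and Finch's sample): under an impairment regime, value in use is the present value of projected cash flows, so a modest change in the chosen rate moves the recoverable amount substantially.

```python
def value_in_use(cash_flows, rate):
    """Present value of projected cash flows at a given discount rate."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

flows = [100.0] * 10                    # hypothetical ten-year cash flow projection
low_rate = value_in_use(flows, 0.08)    # ≈ 671.0
high_rate = value_in_use(flows, 0.12)   # ≈ 565.0

# Four percentage points on the rate cut the recoverable amount by ~16%,
# which can be the difference between recording an impairment loss or not.
```

A manager adopting 8% where an independently generated rate would be 12% reports a materially higher recoverable amount, which is precisely the kind of opportunism the study tests for.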
Abstract:
Principal Topic: Nascent entrepreneurship has drawn the attention of scholars in the last few years (Davidsson, 2006; Wagner, 2004). However, most studies have asked why firms are created, focusing on questions such as the characteristics (Delmar & Davidsson, 2000) and motivations (Carter, Gartner, Shaver & Reynolds, 2004) of nascent entrepreneurs, or the success factors in venture creation (Davidsson & Honig, 2003; Delmar & Shane, 2004). In contrast, the question of how companies emerge is still in its infancy. On the theoretical side, effectuation, developed by Sarasvathy (2001), offers one view of the strategies that may be at work during the venture creation process. Causation, the theorized inverse of effectuation, may be described as a rational reasoning method for creating a company: after a comprehensive market analysis to discover opportunities, the entrepreneur selects the alternative with the highest expected return and implements it through the use of a business plan. In contrast, effectuation suggests that the future entrepreneur will develop her new venture in a more iterative way, selecting possibilities through flexibility and interaction with the market, affordable loss of resources and time invested, and the development of pre-commitments and alliances with stakeholders. Another contrasting point is that causation is 'goal driven' while an effectual approach is 'means driven' (Sarasvathy, 2001). One prediction of effectuation theory is that effectuation is more likely to be used by entrepreneurs early in the venture creation process (Sarasvathy, 2001). However, this temporal aspect and the impact of effectuation strategy on venture outcomes have so far not been systematically and empirically tested on large samples. The reason behind this research gap is twofold. Firstly, few studies collect longitudinal data on emerging ventures at an early enough stage of development to avoid severe survivor bias.
Secondly, the studies that do collect such data have not included validated measures of effectuation. The research we are conducting attempts to partially fill this gap by combining an empirical investigation of a large sample of nascent and young firms with the effectuation/causation continuum as a basis (Sarasvathy, 2001). The objectives are to understand the strategies used by firms during the creation process and to measure their impact on firm outcomes. Methodology/Key Propositions: This study draws its data from the first wave of the CAUSEE project, in which 28,383 Australian households were randomly contacted by phone using a methodology designed to capture emerging firms (Davidsson, Steffens, Gordon & Reynolds, 2008). This screening led to the identification of 594 nascent ventures (i.e., firms that are not yet operating) and 514 young firms (i.e., firms that have started operating since 2004) that were willing to participate in the study. Comprehensive phone interviews were conducted with these 1108 ventures. In a similarly comprehensive follow-up 12 months later, 80% of the eligible cases completed the interview. The questionnaire contains sections designed to distinguish effectual and causal processes, as well as innovation, gestation activities, business idea changes and venture outcomes. The effectuation questions are based on the components of effectuation strategy as described by Sarasvathy (2001), namely flexibility, affordable loss and pre-commitment from stakeholders. Results from two rounds of pre-testing informed the design of the instrument included in the main survey. The first two waves of data will be used to test and compare the use of effectuation in the venture creation process. To increase the robustness of the results, temporal use of effectuation will be tested both directly and indirectly. 1.
By comparing the use of effectuation in nascent and young firms from wave 1 to wave 2, we will be able to find out how effectuation use changes over a 12-month period and whether the stage of venture development has an impact on its use. 2. By comparing nascent ventures early in the creation process with nascent ventures late in the creation process. Early versus late can be determined with the help of time-stamped gestation activity questions included in the survey. This will help us determine change on a small time scale during the creation phase of the venture. 3. By comparing nascent firms to young (already operational) firms. 4. By comparing young firms that first became operational in 2006 with those that first became operational in 2004. Results and Implications: Wave 1 and 2 data collection has been completed, and wave 2 data are currently being checked and cleaned. Analysis work will commence in September 2009. This paper is expected to contribute to the body of knowledge on effectuation by quantitatively measuring its use and its impact on the activities of nascent and young firms at different stages of their development. In addition, this study will increase understanding of the venture creation process by comparing nascent and young firms over time in a large sample of randomly selected ventures. We acknowledge that the results from this study will be preliminary and will have to be interpreted with caution, as the changes identified may be due to several factors and cannot be attributed solely to the use or non-use of effectuation. Nevertheless, we believe that this study is important to the field of entrepreneurship as it provides some much-needed insight into the processes used by nascent and young firms during their creation and early operating stages.
Abstract:
There is a notable shortage of empirical research directed at measuring the magnitude and direction of stress effects on performance in a controlled environment. One reason for this is the inherent difficulty in identifying and isolating direct performance measures for individuals. Additionally, most traditional work environments contain a multitude of exogenous factors impacting individual performance, and controlling for all such factors is generally unfeasible (omitted variable bias). Moreover, instead of asking individuals about their self-reported stress levels, we observe workers' behavior in situations that can be classified as stressful. For this reason we have stepped outside the traditional workplace in an attempt to gain greater control over these factors, using the sports environment as our experimental space. We empirically investigate the relationship between stress and performance in an extreme-pressure situation (football penalty kicks) in a winner-take-all sporting environment (the FIFA World Cup and UEFA European Cup competitions). Specifically, we examine all penalty shootouts between 1976 and 2008, covering 16 events in total. The results indicate that extreme stressors can have a positive or negative impact on individuals' performance. More commonly experienced stressors, on the other hand, do not affect professionals' performance.
Abstract:
This article examines the relationship between the arts and national innovation policy in Australia, pivoting around the Venturous Australia report released in September 2008 as part of the Review of the National Innovation System (RNIS). This came at a time of optimism that the arts sector would be included in Australia’s federal innovation policy. However, despite the report’s broad vision for innovation and specific commentary on the arts, the more ambitious hopes of arts sector advocates remained unfulfilled. This article examines the entwining discourses of creativity and innovation which emerged globally and in Australia prior to the RNIS, before analysing Venturous Australia in terms of the arts and the ongoing science-and-technology bias of innovation policy. It ends by considering why sector-led policy research and lobbying have to date proved unsuccessful, and then suggests what public policy development is now needed.
Abstract:
PURPOSE: We report our telephone-based system for selecting a community control series appropriate for a complete Australia-wide series of Ewing's sarcoma cases. METHODS: We used electronic directory random sampling to select age-matched controls. The sampling frame comprised all listed telephone numbers on an updated CD-ROM. RESULTS: 95% of the 2245 telephone numbers selected were successfully contacted. The mean number of attempts needed was 1.94, with 58% answering at the first attempt. On average, 4.5 contacts were needed per control selected. Calls were more likely to be successful (reach a respondent) when made in the evening (except on Saturdays). The overall response rate among contacted telephone numbers was 92.8%. Participation rates among female and male respondents were practically the same. The exclusion of unlisted numbers (13.5% of connected households) and unconnected households (3.7%) led to potential selection bias. However, restricting the case series to listed cases only, plus having external information on the direction of potential bias, allows meaningful interpretation of our data. CONCLUSION: Sampling from an electronic directory is convenient, economical and simple, and gives a very good yield of eligible subjects compared to other methods.
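The yield figures reported above follow from simple arithmetic, which can be sketched as follows (the function and its name are illustrative, not part of the study's protocol):

```python
def controls_per_batch(numbers_dialled, contact_rate, contacts_per_control):
    """Expected number of controls recruited from a batch of sampled numbers."""
    contacted = numbers_dialled * contact_rate
    return contacted / contacts_per_control

# With the reported rates (95% of numbers contacted, 4.5 contacts needed
# per control), a batch of 2245 listed numbers yields roughly 474 controls.
yield_est = controls_per_batch(2245, 0.95, 4.5)   # ≈ 473.9
```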
Abstract:
Greyback canegrubs cost the Australian sugarcane industry around $13 million per annum in damage and control. A novel and cost-effective biocontrol bacterium could play an important role in the integrated pest management program currently in place to reduce damage and associated control costs. During the course of this project, terminal restriction fragment length polymorphism (TRFLP), 16S rDNA cloning, suppressive subtractive hybridisation (SSH) and entomopathogen-specific PCR screening were used to investigate the little-studied canegrub-associated microflora in an attempt to discover novel pathogens from putatively diseased specimens. The microflora associated with these soil-dwelling insects was found to be both highly diverse and divergent between individual specimens. Dominant members detected in live specimens were predominantly from taxa of known insect symbionts, while dominant sequences amplified from dead grubs were homologous to putatively saprophytic bacteria and bacteria able to grow during refrigeration. A number of entomopathogenic bacteria were identified, such as Photorhabdus luminescens and Pseudomonas fluorescens. Dead canegrubs need to be analysed prior to decomposition if these bacteria are to be isolated. Novel strategies to enrich putative pathogen-associated sequences (SSH and PCR screening) were shown to be promising approaches for pathogen discovery and the investigation of canegrub-associated microflora. However, due to inter- and intra-grub community diversity, dead grub decomposition and PCR-specific methodological limitations (PCR bias, primer specificity, BLAST database restrictions, 16S gene copy number and heterogeneity), recommendations have been made to improve the efficiency of such techniques. Improved specimen collection procedures and utilisation of emerging high-throughput sequencing technologies may be required to examine these complex communities in more detail.
This is the first study to perform a whole-grub analysis and comparison of greyback canegrub-associated microbial communities. This work also describes the development of a novel V3-PCR-based SSH technique, the first SSH technique to use V3-PCR products as starting material and to specifically compare the bacterial species present in a complex community.
Abstract:
This study assessed the reliability and validity of a palm-top-based electronic appetite rating system (EARS) in relation to the traditional paper and pen method. Twenty healthy subjects [10 male (M) and 10 female (F)] — mean age M=31 years (S.D.=8), F=27 years (S.D.=5); mean BMI M=24 (S.D.=2), F=21 (S.D.=5) — participated in a 4-day protocol. Measurements were made on days 1 and 4. Subjects used both paper and an EARS to log hourly subjective motivation to eat during waking hours. Food intake and meal times were fixed. Subjects were given a maintenance diet (comprising 40% fat, 47% carbohydrate and 13% protein by energy) calculated at 1.6×Resting Metabolic Rate (RMR), as three isoenergetic meals. Bland and Altman's test for bias between two measurement techniques found significant differences between the EARS and paper and pen for two of eight responses (hunger and fullness). Regression analysis confirmed that there were no day, sex or order effects between ratings obtained using either technique. For 15 subjects, there was no significant difference between results, with a linear relationship between the two methods that explained most of the variance (r² ranged from 62.6% to 98.6%). The slope for all subjects was less than 1, which was partly explained by a tendency for bias at the extreme end of results on the EARS technique. These data suggest that the EARS is a useful and reliable technique for real-time data collection in appetite research, but that it should not be used interchangeably with paper and pen techniques.
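Bland and Altman's method used above computes the mean difference between paired measurements (the bias) and 95% limits of agreement. A minimal sketch with made-up ratings (not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)           # sample SD of the paired differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical hourly hunger ratings: EARS vs. paper and pen
ears  = [42.0, 55.0, 63.0, 70.0, 48.0]
paper = [40.0, 53.0, 60.0, 69.0, 45.0]
bias, limits = bland_altman(ears, paper)   # bias ≈ 2.2
```

A consistently non-zero bias like this (the electronic ratings running systematically higher) is the kind of disagreement the study detected for two of the eight responses.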
Abstract:
This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals, a topic that plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally modulated sinusoidal signals (such as frequency shift keying signals), the approach of digital phase-locked loops (DPLLs) is considered; this is Part-I of the thesis. For FM signals the approach of time-frequency analysis is considered; this is Part-II of the thesis. In Part-I we utilize sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) introduced significant advantages over other existing DPLLs, and in the last 10 years many efforts have been made to improve its performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. A Hilbert transformer can be realized only approximately, using a finite impulse response (FIR) digital filter; this realization introduces further complexity in the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift, giving rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. TDTL preserves the main advantages of the DTL despite its reduced structure. An application of TDTL to FSK demodulation is also considered.
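The substitution at the heart of TDTL can be illustrated numerically: for a single tone, a delay of a quarter period produces exactly the 90-degree shift an ideal HT would, but the required delay depends on the tone's frequency, which is why the resulting phase shift is signal-dependent. A minimal sketch (the frequencies are arbitrary choices, not values from the thesis):

```python
import numpy as np

f0 = 5.0                          # hypothetical tone frequency, Hz
fs = 1000.0                       # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * f0 * t)

# Delaying the tone by a quarter of its period shifts it by 90 degrees,
# matching the ideal Hilbert-transform output sin(2*pi*f0*t) ...
tau = 1 / (4 * f0)
x_delayed = np.cos(2 * np.pi * f0 * (t - tau))
assert np.allclose(x_delayed, np.sin(2 * np.pi * f0 * t))

# ... but the same tau applied to a tone at a different frequency no longer
# gives 90 degrees, so the phase shift is signal-dependent, unlike the HT's.
```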
The idea of replacing the HT by a time delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviors of the HT and the time delay in the presence of additive Gaussian noise, and on that basis the behavior of first- and second-order TDTLs in additive Gaussian noise has been analyzed. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications; an example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques must be utilized. For the purpose of IF estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis; it is computationally less expensive and more effective in dealing with multicomponent signals, which are the main focus of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain. Many real-life and synthetic signals are of a multicomponent nature, and there is little in the literature concerning IF estimation of such signals; this is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using quadratic time-frequency distributions has been analyzed, and a class of time-frequency distributions more suitable for this purpose has been proposed.
The kernels of this class are time-only (one-dimensional), rather than the time-lag (two-dimensional) kernels of conventional quadratic TFDs; hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
Abstract:
A national-level safety analysis tool is needed to complement existing analytical tools for assessing the safety impacts of roadway design alternatives. FHWA has sponsored the development of the Interactive Highway Safety Design Model (IHSDM), roadway design and redesign software that estimates the safety effects of alternative designs. Considering the importance of IHSDM in shaping the future of safety-related transportation investment decisions, FHWA justifiably sponsored research with the sole intent of independently validating some of the statistical models and algorithms in IHSDM. Statistical model validation aims to accomplish several important tasks, including (a) assessment of the logical defensibility of proposed models, (b) assessment of the transferability of models over future time periods and across different geographic locations, and (c) identification of areas in which future model improvements should be made. These three activities are reported for five proposed types of rural intersection crash prediction models. The internal validation revealed that the crash models potentially suffer from omitted variables that affect safety, site selection and countermeasure selection bias, poorly measured and surrogate variables, and misspecification of model functional forms. The external validation indicated that the models were unable to perform on par with their estimation-period performance. Recommendations for improving the state of the practice include the systematic conduct of carefully designed before-and-after studies, improvements in data standardization and collection practices, and the development of analytical methods to combine the results of before-and-after studies with cross-sectional studies in a meaningful and useful way.
Abstract:
We report the long-term outcome of the flangeless, cemented, all-polyethylene Exeter cup at a mean of 14.6 years (range 10-17) after operation. Of the 263 hips in 243 patients, 122 hips are still in situ, 112 patients (119 hips) have died, 18 hips were revised, and three patients (four hips) had moved abroad and were lost to follow-up (1.5%). Radiographs demonstrated that two sockets had migrated and six more had radiolucent lines in all three zones. Kaplan–Meier survivorship at 15 years, with revision for all causes as the endpoint, is 89.9% (95% CI 84.6 to 95.2%), and for aseptic cup loosening or lysis 91.7% (CI 86.6 to 96.8%). In the 210 hips with a diagnosis of primary osteoarthritis, survivorship for all causes is 93.2% (95% CI 88.1 to 98.3%), and for aseptic cup loosening 95.0% (CI 90.3 to 99.7%). The cemented all-polyethylene Exeter cup has excellent long-term survivorship.
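The Kaplan–Meier survivorship quoted above is the running product of per-event survival fractions over the at-risk cohort, with deaths and losses to follow-up treated as censored rather than as failures. A minimal estimator sketch on made-up follow-up data (not the series reported here):

```python
def kaplan_meier(times, events):
    """Kaplan–Meier survival estimate after each distinct event time.

    times  -- follow-up in years for each hip
    events -- 1 if revised at that time, 0 if censored (death, loss to follow-up)
    """
    order = sorted(range(len(times)), key=lambda k: times[k])
    n_at_risk = len(times)
    surv, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        revised = leaving = 0
        while i < len(order) and times[order[i]] == t:   # group ties at time t
            leaving += 1
            revised += events[order[i]]
            i += 1
        if revised:                                      # survival drops only at revisions
            surv *= 1 - revised / n_at_risk
            curve.append((t, surv))
        n_at_risk -= leaving                             # censored hips leave the risk set
    return curve

# Ten hypothetical hips: revisions at years 5 and 12, the rest censored
times  = [5, 7, 7, 9, 10, 12, 13, 14, 15, 15]
events = [1, 0, 0, 0, 0,  1,  0,  0,  0,  0]
curve = kaplan_meier(times, events)   # approximately [(5, 0.9), (12, 0.72)]
```

Note how the censored hips before year 12 shrink the denominator, so the second revision costs more survivorship than the first; this is why simple revised/total fractions understate failure in series with heavy censoring.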
Abstract:
Advances in safety research, aimed at improving the collective understanding of motor vehicle crash causation, rest upon the pursuit of numerous lines of inquiry. The research community has focused on analytical methods development (negative binomial specifications, simultaneous equations, etc.), on better experimental designs (before-after studies, comparison sites, etc.), on improving exposure measures, and on model specification improvements (additive terms, non-linear relations, etc.). One might think of different lines of inquiry in terms of 'low-hanging fruit': areas of inquiry that might provide significant improvements in understanding crash causation. It is the contention of this research that omitted variable bias caused by the exclusion of important variables is one such line of inquiry in safety research. In particular, spatially related variables are often difficult to collect and are omitted from crash models, yet they offer significant ability to better understand the factors contributing to crashes. This study, believed to represent a unique contribution to the safety literature, develops and examines the role of a sizeable set of spatial variables in intersection crash occurrence. In addition to commonly considered traffic and geometric variables, the spatial factors examined include local influences of weather, sun glare, proximity to drinking establishments, and proximity to schools. The results indicate that inclusion of these factors yields significant improvement in model explanatory power, and the results also generally agree with expectation. The research illuminates the importance of spatial variables in safety research and the negative consequences of their omission.
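The mechanics of the omitted variable bias the study targets can be demonstrated with a short simulation (entirely synthetic data; the variable names are stand-ins, not variables from the study): when an unobserved factor drives crashes and is correlated with an included predictor, the included predictor's coefficient absorbs part of the missing effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
spatial = rng.normal(size=n)                  # unobserved spatial factor (e.g. sun glare)
traffic = 0.8 * spatial + rng.normal(size=n)  # included predictor, correlated with it
crashes = 1.0 * traffic + 2.0 * spatial + rng.normal(size=n)  # true traffic effect: 1.0

# Model omitting the spatial factor: the traffic coefficient is inflated
# because it soaks up part of the spatial factor's effect.
X_omit = np.column_stack([np.ones(n), traffic])
b_omit = np.linalg.lstsq(X_omit, crashes, rcond=None)[0][1]   # ≈ 1.98

# Model including the spatial factor recovers the true coefficient.
X_full = np.column_stack([np.ones(n), traffic, spatial])
b_full = np.linalg.lstsq(X_full, crashes, rcond=None)[0][1]   # ≈ 1.0
```

Here an analyst fitting the omitted-variable model would nearly double the apparent effect of traffic, which is the flavor of distortion the paper argues spatial variables can remove.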
Abstract:
Red light cameras (RLCs) have been used in a number of US cities and yield a demonstrable reduction in red light violations; evaluating their impact on safety (crashes), however, has been relatively more difficult. Accurately estimating the safety impacts of RLCs is challenging for several reasons. First, many safety-related factors are uncontrolled and/or confounded during the periods of observation. Second, "spillover" effects, caused by drivers reacting at intersections and approaches not equipped with RLCs, can make the selection of comparison sites difficult. Third, sites selected for RLC installation may not be selected randomly, and as a result may suffer from regression-to-the-mean bias. Finally, crash severity and the resulting costs need to be considered in order to fully understand the safety impacts of RLCs. Recognizing these challenges, a study was conducted to estimate the safety impacts of RLCs on traffic crashes at signalized intersections in the cities of Phoenix and Scottsdale, Arizona. Twenty-four RLC-equipped intersections in the two cities are examined in detail and conclusions are drawn. Four different evaluation methodologies were employed to cope with the technical challenges described in this paper and to assess the sensitivity of the results to analytical assumptions. The evaluation results indicated that both Phoenix and Scottsdale are operating cost-effective installations of RLCs; however, the variability in RLC effectiveness within jurisdictions is larger in Phoenix. Consistent with findings in other regions, angle and left-turn crashes are generally reduced, while rear-end crashes tend to increase as a result of RLCs.
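The regression-to-the-mean problem raised in the third challenge above can be made concrete with a simulation (all numbers synthetic): sites picked because of a high before-period crash count show an apparent improvement even when nothing about them changes.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites = 50000
mu = rng.gamma(shape=4.0, scale=2.0, size=n_sites)  # true long-run crash rates per site
before = rng.poisson(mu)
after = rng.poisson(mu)          # no treatment at all: same underlying rates

# Select "problem" sites the way camera locations are often chosen:
# by their observed before-period counts.
selected = before >= 15
drop = before[selected].mean() - after[selected].mean()

# drop > 0 purely from regression to the mean; a naive before-after
# comparison would credit this reduction to the camera.
```

This is why the study's use of comparison sites and multiple evaluation methodologies matters: without them, the selection rule alone manufactures a crash "reduction".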