976 results for Explicit hazard model
Abstract:
This dissertation studies newly founded U.S. firms' survival using three different releases of the Kauffman Firm Survey, examining survival from a different perspective in each chapter. The first essay studies firms' survival through an analysis of their initial state at startup and their current state as they gain maturity. The probability of survival is estimated with three probit models that include both firm-specific variables and an industry-scale variable to control for the environment of operation. The firm-specific variables include size, experience, and leverage measured as a debt-to-value ratio. The results indicate that size and relevant experience are both positive predictors for the initial and current states. Debt appears to predict exit when it is not matched by the acquisition of assets. As suggested previously in the literature, entering a smaller-scale industry is a positive predictor of survival from birth. Finally, a smaller-scale industry diminishes the negative effects of debt. The second essay makes use of a hazard model to confirm that new service-providing (SP) firms are more likely to survive than new product providers (PPs). I investigate possible explanations for the higher survival rate of SPs using a Cox proportional hazard model. I examine six hypotheses (variations in capital per worker, expenses per worker, owners' experience, industry wages, assets, and size), none of which appears to explain why SPs are more likely than PPs to survive. Two other possibilities, tax evasion and human/social relations, are discussed but could not be tested for lack of data. The third essay investigates women-owned firms' higher failure rates using a Cox proportional hazard applied to two models. I make use of a previously unused variable that proxies for owners' confidence: the owners' self-evaluated competitive advantage. The first empirical model allows me to compare women's and men's hazard rates for each variable. In the second model I successively add the variables that could potentially explain why women have a higher failure rate. Unfortunately, I am not able to fully explain the gender effect on the firms' survival. Nonetheless, the second empirical approach allows me to confirm that social and psychological differences between genders are important in explaining the higher likelihood of failure among women-owned firms.
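As a rough illustration of the kind of probit survival regression the first essay describes, the sketch below fits a survival indicator on size, experience, leverage, and an industry-scale variable. It runs on simulated data; every variable name and coefficient is an invented stand-in, not a Kauffman Firm Survey field.

```python
# A minimal sketch, on simulated data, of the probit survival regressions
# described in the first essay; variable names are invented stand-ins, not
# the Kauffman Firm Survey's actual fields.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "size": rng.normal(2.0, 1.0, n),         # e.g. log employment at startup
    "experience": rng.normal(10.0, 5.0, n),  # owner's relevant experience, years
    "leverage": rng.uniform(0.0, 1.0, n),    # debt-to-value ratio
    "industry_scale": rng.normal(0.0, 1.0, n),
})
# Latent survival propensity with signs matching the reported results:
# size and experience help, debt and larger-scale industries hurt.
latent = (0.5 * df["size"] + 0.05 * df["experience"]
          - 1.0 * df["leverage"] - 0.3 * df["industry_scale"]
          + rng.normal(0.0, 1.0, n))
df["survived"] = (latent > 0).astype(int)

X = sm.add_constant(df[["size", "experience", "leverage", "industry_scale"]])
fit = sm.Probit(df["survived"], X).fit(disp=False)
print(fit.get_margeff().summary())  # average marginal effects on P(survive)
```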
Abstract:
This dissertation focused on the longitudinal analysis of business start-ups using three waves of data from the Kauffman Firm Survey. The first essay used the data from years 2004-2008 and examined the simultaneous relationship between a firm's capital structure, its human resource policies, and their impact on the level of innovation. Firm leverage was calculated as debt divided by total financial resources. An index of employee well-being was constructed from a set of nine dichotomous questions asked in the survey. A negative binomial fixed effects model was used to analyze the effect of employee well-being and leverage on the count of patents and copyrights, which was used as a proxy for innovation. The paper demonstrated that employee well-being positively affects the firm's innovation, while a higher leverage ratio has a negative impact on innovation. No significant relation was found between leverage and employee well-being. The second essay used the data from years 2004-2009 and asked whether a higher entrepreneurial speed of learning is desirable, and whether there is a linkage between the speed of learning and the growth rate of the firm. The change in the speed of learning was measured using a pooled OLS estimator in repeated cross-sections. There was evidence of a declining speed of learning over time, and it was concluded that a higher speed of learning is not necessarily a good thing, because the speed of learning is contingent on the entrepreneur's initial knowledge and the precision of the signals he or she receives from the market. Also, there was no reason to expect the speed of learning to be related to the growth of the firm in one direction over another. The third essay used the data from years 2004-2010 and determined the timing of diversification activities by business start-ups. It captured when a start-up diversified for the first time, and explored the association between an early diversification strategy and the firm's survival rate. A semi-parametric Cox proportional hazard model was used to examine the survival pattern. The results demonstrated that firms diversifying at an early stage in their lives show a higher survival rate; however, this effect fades over time.
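A minimal sketch of the negative binomial fixed-effects setup the first essay describes, on simulated data; firm fixed effects are approximated with firm dummies (with the usual incidental-parameters caveat), and all names and magnitudes are invented.

```python
# A minimal sketch, on simulated data, of the negative binomial fixed-effects
# regression described above. Firm fixed effects are approximated with firm
# dummies via C(firm); variable names are invented, not the KFS fields.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
firms, years = 50, 5
df = pd.DataFrame({
    "firm": np.repeat(np.arange(firms), years),
    "wellbeing": rng.uniform(0, 9, firms * years),  # 9-item well-being index
    "leverage": rng.uniform(0, 1, firms * years),   # debt / total resources
})
mu = np.exp(-1.0 + 0.15 * df["wellbeing"] - 0.8 * df["leverage"])
df["patents"] = rng.poisson(mu)  # toy count proxy for innovation

model = smf.negativebinomial(
    "patents ~ wellbeing + leverage + C(firm)", data=df
).fit(method="bfgs", maxiter=500, disp=False)
print(model.params[["wellbeing", "leverage"]])
```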
Abstract:
Petri nets are a formal, graphical, and executable modeling technique for the specification and analysis of concurrent and distributed systems, and have been widely applied in computer science and many other engineering disciplines. Low-level Petri nets are simple and useful for modeling control flows but are not powerful enough to define data and system functionality. High-level Petri nets (HLPNs) have been developed to support data and functionality definitions, such as using complex structured data as tokens and algebraic expressions as transition formulas. Compared to low-level Petri nets, HLPNs result in compact system models that are easier to understand, and are therefore more useful in modeling complex systems. There are two issues in using HLPNs: modeling and analysis. Modeling concerns abstracting and representing the systems under consideration using HLPNs, while analysis deals with effective ways to study the behaviors and properties of the resulting HLPN models. In this dissertation, several modeling and analysis techniques for HLPNs are studied and integrated into a framework that is supported by a tool. For modeling, the framework integrates two formal languages: a type of HLPN called Predicate Transition Net (PrT Net) is used to model a system's behavior, and first-order linear-time temporal logic (FOLTL) is used to specify the system's properties. The main contribution of this dissertation with regard to modeling is a software tool that supports the formal modeling capabilities of this framework. For analysis, the framework combines three complementary techniques: simulation, explicit-state model checking, and bounded model checking (BMC). Simulation is straightforward and fast but covers only some execution paths in an HLPN model. Explicit-state model checking covers all execution paths but suffers from the state explosion problem. BMC is a tradeoff: it provides a certain level of coverage while being more efficient than explicit-state model checking. The main contribution of this dissertation with regard to analysis is adapting BMC to analyze HLPN models and integrating the three complementary analysis techniques in a software tool that supports the formal analysis capabilities of this framework. The SAMTools developed for this framework integrate three tools: PIPE+ for HLPN behavioral modeling and simulation, SAMAT for hierarchical structural modeling and property specification, and PIPE+Verifier for behavioral verification.
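The depth-bounded coverage that distinguishes BMC can be conveyed with a toy low-level place/transition net: explore markings only up to a bound k and test a property on each marking encountered. Real BMC, as adapted in the dissertation, unrolls the transition relation into a SAT/SMT formula rather than enumerating states; the net, the bound, and the property below are all invented for illustration.

```python
# Toy illustration of depth-bounded analysis: breadth-first exploration of a
# place/transition net's markings up to a depth bound k. This conveys the
# coverage/cost tradeoff of BMC; it does not reproduce the dissertation's
# PIPE+/SAMAT/PIPE+Verifier tools.
from collections import deque

# Each transition: (consume vector, produce vector) over places p0..p2.
transitions = [
    ((1, 0, 0), (0, 1, 0)),   # t0: move a token from p0 to p1
    ((0, 1, 0), (0, 0, 1)),   # t1: move a token from p1 to p2
]

def enabled(marking, consume):
    return all(m >= c for m, c in zip(marking, consume))

def fire(marking, consume, produce):
    return tuple(m - c + p for m, c, p in zip(marking, consume, produce))

def bounded_reach(m0, k, bad):
    """Return a marking satisfying `bad` within k steps, or None."""
    seen, frontier = {m0}, deque([(m0, 0)])
    while frontier:
        m, depth = frontier.popleft()
        if bad(m):
            return m
        if depth == k:
            continue  # the bound: do not expand beyond depth k
        for consume, produce in transitions:
            if enabled(m, consume):
                m2 = fire(m, consume, produce)
                if m2 not in seen:
                    seen.add(m2)
                    frontier.append((m2, depth + 1))
    return None

# Is a marking with 2 tokens in p2 reachable within 4 steps? -> (0, 0, 2)
print(bounded_reach((2, 0, 0), 4, lambda m: m[2] == 2))
```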
Abstract:
The base excision repair (BER) and nucleotide excision repair (NER) pathways play a critical role in maintaining genome integrity. Polymorphisms in BER and NER genes that modulate DNA repair capacity may affect the susceptibility and prognosis of oral cancer. This study was conducted with genomic DNA from 92 patients with oral squamous cell carcinomas (OSCC) and 130 controls. The cases were followed up to explore the associations between BER and NER gene polymorphisms and the risk and prognosis of OSCC. Four single-nucleotide polymorphisms (SNPs), in the XRCC1 (rs25487), APEX1 (rs1130409), XPD (rs13181) and XPF (rs1799797) genes, were tested by quantitative real-time polymerase chain reaction. The GraphPad Prism version 6.0.1 statistical software was used for the association analysis. Odds ratios (ORs), hazard ratios (HRs), and their 95% confidence intervals (CIs) were calculated by logistic regression. Kaplan-Meier curves and a Cox proportional hazard model were used for the prognostic analysis. The presence of polymorphic variants in the XRCC1, APEX1, XPD and XPF genes was not associated with an increased risk of OSCC. Gene-environment interactions with smoking were not significant for any polymorphism. The presence of polymorphic variants of the XPD gene in association with alcohol consumption conferred an increased risk of 1.86 (95% CI: 0.86-4.01, p=0.03) for OSCC. Only APEX1 was associated with decreased specific survival (HR 3.94, 95% CI: 1.31-11.88, p=0.01). These results suggest an interaction between polymorphic variants of the XPD gene and alcohol consumption. Additionally, APEX1 may represent a prognostic marker for OSCC.
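As a hedged sketch of the prognostic analysis described above (Kaplan-Meier plus a Cox proportional hazard model), the snippet below uses the lifelines library on simulated data; the column names, follow-up times, and effect sizes are invented, not the study's.

```python
# A hedged sketch of the prognostic analysis (Kaplan-Meier plus Cox
# proportional hazards) using the lifelines library on simulated data;
# column names and effect sizes are invented, not the study's.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

rng = np.random.default_rng(2)
n = 222  # the study had 92 cases and 130 controls; these rows are synthetic
df = pd.DataFrame({
    "APEX1_variant": rng.integers(0, 2, n),
    "smoking": rng.integers(0, 2, n),
    "alcohol": rng.integers(0, 2, n),
})
# Follow-up times (months) in which the variant shortens specific survival.
df["time"] = rng.exponential(60.0 / (1.0 + 2.0 * df["APEX1_variant"]))
df["event"] = rng.integers(0, 2, n)  # 1 = death from OSCC, 0 = censored

KaplanMeierFitter().fit(df["time"], df["event"])  # overall survival curve
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()  # hazard ratios with 95% CIs per covariate
```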
Abstract:
The paper exploits the unique strengths of Statistics Canada's Longitudinal Administrative Database ("LAD"), constructed from individuals' tax records, to shed new light on the extent and nature of the emigration of Canadians to other countries and their patterns of return over the period 1982-1999. The empirical evidence begins with simple graphs of overall rates of leaving over time, followed by the estimation results of a model that essentially addresses the question "who moves?". The paper then analyses the rates of return for those observed to leave the country, something for which there is virtually no existing evidence. Simple return rates are reported first, followed by the results of a hazard model of the probability of returning that takes into account individuals' characteristics and the number of years they have already been out of the country. Taken together, these results provide a new empirical basis for discussions of emigration in general, and the brain drain in particular. Of particular interest are the ebb and flow of emigration rates over the last two decades, including a perhaps surprising downturn in the most recent years after a climb through the earlier part of the 1990s; the data on the number who return after leaving; the associated patterns by income level; and the increases observed over the last decade.
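One standard way to implement such a return hazard is a discrete-time specification: a logit on person-year records with dummies for the number of years already spent out of the country. The sketch below runs on simulated data, since the LAD microdata are confidential; every name and number is invented.

```python
# A sketch of a discrete-time version of the return hazard: a logit on
# simulated person-year records with dummies for years already spent abroad.
# The LAD microdata are confidential, so all names and numbers are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for person in range(500):
    income = rng.normal(50.0, 15.0)     # pre-departure income, $ thousands
    for year_abroad in range(1, 11):    # at risk of returning for 10 years
        p = 0.25 * np.exp(-0.2 * year_abroad) * (1 + 0.004 * (income - 50.0))
        returned = rng.random() < p
        rows.append((person, year_abroad, income, int(returned)))
        if returned:
            break                        # leaves the risk set once returned
df = pd.DataFrame(rows, columns=["person", "year_abroad", "income", "returned"])

# Return hazard by duration out of the country, conditional on income level.
fit = smf.logit("returned ~ C(year_abroad) + income", data=df).fit(disp=False)
print(fit.params.filter(like="year_abroad"))
```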
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Well-designed marine protected area (MPA) networks can deliver a range of ecological, economic and social benefits, and so a great deal of research has focused on developing spatial conservation prioritization tools to help identify important areas. However, whilst these software tools are designed to identify MPA networks that both represent biodiversity and minimize impacts on stakeholders, they do not consider complex ecological processes. Thus, it is difficult to determine the impacts that proposed MPAs could have on marine ecosystem health, fisheries and fisheries sustainability. Using the eastern English Channel as a case study, this paper explores an approach to address these issues by identifying a series of MPA networks using the Marxan and Marxan with Zones conservation planning software and linking them with a spatially explicit ecosystem model developed in Ecopath with Ecosim. We then use these to investigate potential trade-offs associated with adopting different MPA management strategies. Limited-take MPAs, which restrict the use of some fishing gears, could have positive benefits for conservation and fisheries in the eastern English Channel, even though they generally receive far less attention in research on MPA network design. Our findings, however, also clearly indicate that no-take MPAs should form an integral component of proposed MPA networks in the eastern English Channel, as they not only result in substantial increases in ecosystem biomass, fisheries catches and the biomass of commercially valuable target species, but are fundamental to maintaining the sustainability of the fisheries. Synthesis and applications. Using the existing software tools Marxan with Zones and Ecopath with Ecosim in combination provides a powerful policy-screening approach. This could help inform marine spatial planning by identifying potential conflicts and by designing new regulations that better balance conservation objectives and stakeholder interests. In addition, it highlights that appropriate combinations of no-take and limited-take marine protected areas might be the most effective when making trade-offs between long-term ecological benefits and short-term political acceptability.
Abstract:
This dissertation examines how social insurance, family support, and work capacity enhance individuals' economic well-being following significant health and income shocks. I first examine the extent to which the liquidity-enhancing effects of Workers' Compensation (WC) benefits outweigh the moral hazard costs. Analyzing administrative data from Oregon, I estimate a hazard model exploiting variation in the timing and size of a retroactive lump-sum WC payment to decompose the elasticity of claim duration with respect to benefits into the elasticity with respect to an increase in cash on hand and the elasticity with respect to a decrease in the opportunity cost of missing work. I find that the liquidity effect accounts for 60 to 65 percent of the increase in claim duration among lower-wage workers, but less than half of the increase for higher earners. Using the framework from Chetty (2008), I conclude that the insurance value of WC exceeds the distortionary cost, and that increasing the benefit level could increase social welfare. Next, I investigate how government-provided disability insurance (DI) interacts with private transfers to disabled individuals from their grown children. Using the Health and Retirement Study, I estimate a fixed-effects, difference-in-differences regression to compare transfers between DI recipients and two control groups: rejected applicants and a reweighted sample of disabled non-applicants. I find that DI reduces the probability of receiving a transfer by no more than 3 percentage points, or 10 percent. Additional analysis reveals that DI could increase the probability of receiving a transfer in cases where children had limited prior information about the disability, suggesting that DI could send a welfare-improving information signal. Finally, Zachary Morris and I examine how a functional assessment could complement medical evaluations in determining eligibility for disability benefits and in targeting return-to-work interventions. We analyze claimants' self-reported functional capacity in a survey of current DI beneficiaries and estimate that 13 percent of them are capable of work-related activity. Furthermore, other characteristics of these higher-functioning beneficiaries are positively correlated with employment, making them an appropriate target for return-to-work interventions.
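A minimal sketch, on simulated two-period data, of the fixed-effects difference-in-differences comparison described for the DI transfer analysis: person fixed effects enter as dummies and the di:post interaction carries the effect of interest. All names and magnitudes below are invented stand-ins for the HRS variables.

```python
# A minimal sketch, on simulated two-period data, of a fixed-effects
# difference-in-differences regression; person effects enter as dummies and
# di:post is the coefficient of interest. Names and magnitudes are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
people = 400
df = pd.DataFrame({
    "person": np.repeat(np.arange(people), 2),
    "post": np.tile([0, 1], people),                 # before/after disability
    "di": np.repeat(rng.integers(0, 2, people), 2),  # 1 = DI recipient
})
p_transfer = 0.30 + 0.10 * df["post"] - 0.03 * df["di"] * df["post"]
df["transfer"] = (rng.random(len(df)) < p_transfer).astype(int)

# The di main effect is absorbed by the person dummies, so only the
# interaction is specified alongside them.
fit = smf.ols("transfer ~ post + di:post + C(person)", data=df).fit()
print(fit.params[["post", "di:post"]])
```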
Abstract:
New constraints on isotope fractionation factors in inorganic aqueous sulfur systems, based on theoretical and experimental techniques relevant to studies of the sulfur cycle in modern environments and the geologic rock record, are presented in this dissertation. These include theoretical estimates of equilibrium isotope fractionation factors, obtained with quantum mechanical software and a water-cluster model approach, for aqueous sulfur compounds that span the entire range of oxidation states for sulfur. These calculations generally reproduce the available experimental determinations from the literature and provide new constraints where none are available. They also illustrate in detail the relationship between the sulfur bonding environment and the mass dependence associated with equilibrium isotope exchange reactions involving all four isotopes of sulfur. I additionally highlight the effect of isomers of protonated compounds (compounds with the same chemical formula but different structures, where protons are bound to either sulfur or oxygen atoms) on isotope partitioning in the sulfite (S4+) and sulfoxylate (S2+) systems, both of which are key intermediates in oxidation-reduction processes in the sulfur cycle. I demonstrate that isomers with the highest degree of coordination around sulfur (where protonation occurs on the sulfur atom) have a strong influence on isotopic fractionation factors, and argue that isomerization should be considered in models of the sulfur cycle. Additionally, experimental results on the reaction rates and isotope fractionations associated with the chemical oxidation of aqueous sulfide are presented. Sulfide oxidation is a major process in the global sulfur cycle, due largely to the sulfide-producing activity of anaerobic microorganisms in organic-rich marine sediments. These experiments reveal relationships between isotope fractionations and reaction rate as a function of both temperature and trace metal (ferrous iron) catalysis, which I interpret in the context of the complex mechanism of sulfide oxidation. I also demonstrate that sulfide oxidation is associated with a mass dependence that does not conform to the mass dependence typically associated with equilibrium isotope exchange. This observation has implications for the inclusion of oxidative processes in environmental- and global-scale models of the sulfur cycle based on the mass balance of all four isotopes of sulfur. The contents of this dissertation provide key reference information on isotopic fractionation factors in aqueous sulfur systems that will have far-reaching applicability to studies of the sulfur cycle in a wide variety of natural settings.
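For reference, the departure from ordinary mass dependence mentioned above is conventionally quantified with capital-delta notation; the standard textbook relations (not formulas specific to this dissertation) are:

```latex
\theta_{33} = \frac{1/m_{32} - 1/m_{33}}{1/m_{32} - 1/m_{34}} \approx 0.515,
\qquad
\Delta^{33}\mathrm{S} = \delta^{33}\mathrm{S}
  - 1000\left[\left(1 + \frac{\delta^{34}\mathrm{S}}{1000}\right)^{0.515} - 1\right],
```

so that a process whose effective exponent differs from the equilibrium value of about 0.515 produces a nonzero Δ³³S signature.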
Abstract:
In this work, we present the explicit series solution of a specific mathematical model from the literature, the Deng bursting model, which mimics the glucose-induced electrical activity of pancreatic beta-cells (Deng, 1993). To this end, we use a technique developed to find analytic approximate solutions for strongly nonlinear problems. The analytical algorithm involves an auxiliary parameter that provides an efficient way to ensure rapid and accurate convergence to the exact solution of the bursting model. Using the homotopy solution, we investigate the dynamical effect of a biologically meaningful bifurcation parameter rho, which increases with the glucose concentration. Our analytical results are in excellent agreement with the numerical ones. This work illustrates how our understanding of biophysically motivated models can be directly enhanced by the application of a new analytic method.
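For context, the technique referred to is the homotopy analysis method; its generic zeroth-order deformation equation (standard form, with the paper's specific operators omitted) embeds the problem N[u] = 0 as

```latex
(1 - q)\,\mathcal{L}\!\left[\phi(t;q) - u_0(t)\right]
  = q\,\hbar\,\mathcal{N}\!\left[\phi(t;q)\right], \qquad q \in [0, 1],
```

so that φ coincides with the initial guess u₀ at q = 0 and with the exact solution at q = 1; expanding φ in powers of q yields the series solution, and the auxiliary parameter ħ is tuned to control its rate and region of convergence.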
Abstract:
Given the adverse impact of image noise on the perception of important clinical details in digital mammography, routine quality control measurements should include an evaluation of noise. The European Guidelines, for example, employ a second-order polynomial fit of pixel variance as a function of detector air kerma (DAK) to decompose noise into quantum, electronic and fixed-pattern (FP) components and to assess the DAK range where quantum noise dominates. This work examines the robustness of the polynomial method against an explicit noise decomposition method. The two methods were applied to variance and noise power spectrum (NPS) data from six digital mammography units. Twenty homogeneously exposed images were acquired with PMMA blocks for target DAKs ranging from 6.25 to 1600 µGy. Both methods were explored for the effects of data weighting and squared fit coefficients during the curve fitting, the influence of the additional filter material (2 mm Al versus 40 mm PMMA), and noise de-trending. Finally, the spatial stationarity of noise was assessed. Data weighting improved noise model fitting over large DAK ranges, especially at low detector exposures. The polynomial and explicit decompositions generally agreed for quantum and electronic noise, but the FP noise fraction was consistently underestimated by the polynomial method. Noise decomposition as a function of position in the image showed limited noise stationarity, especially for FP noise; thus the position of the region of interest (ROI) used for noise decomposition may influence the fractional noise composition. The ROI area and position used in the Guidelines offer an acceptable estimation of the noise components. While there are limitations to the polynomial model, when used with care and with appropriate data weighting, the method offers a simple and robust means of examining the detector noise components as a function of detector exposure.
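A small sketch, with invented numbers, of the polynomial decomposition the Guidelines prescribe: pixel variance is fitted as sigma^2(K) = a*K^2 + b*K + c, where a*K^2 is read as fixed-pattern, b*K as quantum, and c as electronic noise, with weighting so that low-exposure points are not swamped during the fit.

```python
# Sketch of the variance-versus-DAK decomposition described above: fit
# sigma^2(K) = a*K^2 + b*K + c, reading a*K^2 as fixed-pattern, b*K as
# quantum, and c as electronic noise. Coefficients are invented.
import numpy as np

K = np.array([6.25, 12.5, 25, 50, 100, 200, 400, 800, 1600])  # DAK, uGy
a_true, b_true, c_true = 4e-4, 0.9, 3.0
var = a_true * K**2 + b_true * K + c_true
var *= 1 + np.random.default_rng(5).normal(0, 0.02, K.size)   # measurement noise

# Relative (1/variance) weighting so low-exposure points are not swamped
# by the large variances at high DAK.
a, b, c = np.polyfit(K, var, deg=2, w=1.0 / var)
for K_i, v in zip(K, var):
    q_frac = b * K_i / v          # quantum noise fraction at this exposure
    print(f"K={K_i:7.2f} uGy  quantum fraction={q_frac:5.2f}")
```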
Abstract:
Because of the increase in workplace automation and the diversification of industrial processes, workplaces have become more and more complex. The classical approaches used to address workplace hazard concerns, such as checklists or sequence models, are therefore of limited use in such complex systems. Moreover, because of the multifaceted nature of workplaces, the use of single-oriented methods, such as AEA (man-oriented), FMEA (system-oriented), or HAZOP (process-oriented), is not satisfactory. The use of a dynamic modeling approach that allows multiple-oriented analyses may constitute an alternative to overcome this limitation. The qualitative modeling aspects of the MORM (man-machine occupational risk modeling) model are discussed in this article. The model, realized with an object-oriented Petri net tool (CO-OPN), has been developed to simulate and analyze industrial processes from an OH&S perspective. The industrial process is modeled as a set of interconnected subnets (state spaces) that describe its constitutive machines. Process-related factors are introduced explicitly through machine interconnections and flow properties. Man-machine interactions are modeled as triggering events for the state spaces of the machines, and the CREAM cognitive behavior model is used to establish the relevant triggering events. In the CO-OPN formalism, the model is expressed as a set of interconnected CO-OPN objects defined over data types expressing the measure attached to the flow of entities transiting through the machines. Constraints on the measures assigned to these entities are used to determine the state changes in each machine. Interconnecting machines implies the composition of such flows and consequently the interconnection of the measure constraints; this is reflected in the construction of constraint enrichment hierarchies, which can be used for simulation and analysis optimization in a clear mathematical framework. The use of Petri nets to perform multiple-oriented analysis opens perspectives in the field of industrial risk management. It may significantly reduce the duration of the assessment process but, most of all, it opens perspectives in the fields of risk comparison and integrated risk management. Moreover, because of the generic nature of the model and tool used, the same concepts and patterns may be used to model a wide range of systems and application fields.
Abstract:
Turtle Mountain in Alberta, Canada, has become an important field laboratory for testing different techniques for the characterization and monitoring of large slope mass movements, as the stability of large portions of the eastern face of the mountain is still questionable. In order to better quantify the potentially unstable volumes, the most probable failure mechanisms, and the potential consequences, structural analysis and runout modeling were performed. The structural features of the eastern face were investigated using a high-resolution digital elevation model (HRDEM). Based on displacement datasets and structural observations, the potential failure mechanisms affecting different portions of the mountain have been assessed. The volumes of the different potentially unstable blocks have been calculated using the Sloping Local Base Level (SLBL) method. Based on the volume estimation, two- and three-dimensional dynamic runout analyses have been performed. Calibration of these analyses is based on experience from the adjacent Frank Slide and other similar rock avalanches. The results will be used to improve the contingency plans within the hazard area.
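A minimal one-dimensional sketch of the SLBL idea, with an invented profile: starting from the topography, each interior node is iteratively lowered toward the mean of its neighbours minus a small tolerance, and the gap between topography and the converged surface yields the potentially unstable cross-section.

```python
# A minimal 1-D sketch of the Sloping Local Base Level (SLBL) idea used above
# to estimate unstable volumes: iteratively lower each interior node toward
# the mean of its neighbours minus a tolerance, never rising above the
# topography. The profile, spacing and tolerance are invented.
import numpy as np

z = np.array([100., 120., 150., 170., 160., 140., 110., 100.])  # surface (m)
dx, tol = 25.0, 1.0           # node spacing (m), per-pass tolerance (m)

s = z.copy()                  # candidate failure surface, starts at topography
for _ in range(1000):
    candidate = 0.5 * (s[:-2] + s[2:]) - tol
    lowered = np.minimum(s[1:-1], candidate)
    if np.allclose(lowered, s[1:-1]):
        break                 # no node moved appreciably: converged
    s[1:-1] = lowered

area = (z - s).sum() * dx     # cross-section area = volume per metre of width
print(s.round(1), f"unstable volume ~ {area:.0f} m^3 per metre of slope width")
```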