989 results for Parametric Models
Abstract:
This comment corrects the errors in the estimation process that appear in Martins (2001). The first error is in the parametric probit estimation, as the previously presented results do not maximize the log-likelihood function; at the global maximum, more variables become significant. As for the semiparametric estimation method, the kernel function used in Martins (2001) can take on both positive and negative values, which implies that the participation probability estimates may fall outside the interval [0,1]. We solve the problem by applying local smoothing in the kernel estimation, as suggested by Klein and Spady (1993).
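To make the kernel issue concrete, here is a minimal Python sketch (illustrative only, not the authors' estimator): with a non-negative kernel such as the Gaussian, a Nadaraya-Watson estimate of the participation probability is a convex combination of the binary outcomes and therefore cannot leave [0,1].

```python
import numpy as np

def nw_participation_probability(index, y, grid, h):
    """Nadaraya-Watson estimate of P(y = 1 | x'beta = v) over a grid of
    index values v. The Gaussian kernel weights are non-negative, so each
    estimate is a convex combination of the 0/1 outcomes in y and hence
    always lies in [0, 1]; a higher-order kernel with negative lobes
    offers no such guarantee."""
    u = (grid[:, None] - index[None, :]) / h   # pairwise scaled distances
    w = np.exp(-0.5 * u**2)                    # Gaussian kernel, >= 0
    return (w @ y) / w.sum(axis=1)             # weighted mean of binary y
```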
Abstract:
This paper presents an analysis of motor vehicle insurance claims relating to vehicle damage and to associated medical expenses. We use univariate severity distributions estimated with parametric and non-parametric methods, implemented in the statistical package R. The parametric analysis is limited to estimating normal and lognormal distributions for each of the two claim types. The nonparametric analysis involves kernel density estimation, and we illustrate the benefits of transforming the data before applying kernel-based methods, using a log-transformation and an optimal transformation from a class of transformations that produces symmetry in the data. The central aim of this paper is to provide educators with material that can be used in the classroom to teach statistical estimation methods, goodness-of-fit analysis and, importantly, statistical computing in the context of insurance and risk management. To this end, the Appendix contains all the R code used in the analysis, so that readers, both students and educators, can fully explore the techniques described.
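The Appendix referenced above contains the authors' R code; purely as an illustration of the transform-then-smooth idea, here is a Python sketch on hypothetical data (not from the paper).

```python
import numpy as np
from scipy.stats import gaussian_kde

def log_transformed_kde(claims):
    """Estimate the density of positive claim sizes by smoothing on the
    log scale and changing variables back: if Y = log X has density g,
    then X has density f(x) = g(log x) / x."""
    g = gaussian_kde(np.log(claims))
    return lambda x: g(np.log(x)) / x

# Hypothetical usage on simulated lognormal "claims".
claims = np.random.default_rng(1).lognormal(mean=8.0, sigma=1.2, size=500)
f = log_transformed_kde(claims)
print(f(np.array([1_000.0, 5_000.0])))
```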
Abstract:
Accelerated failure time models with a shared random component are described and used to evaluate the effect of explanatory factors and of different transplant centres on survival times following kidney transplantation. Different combinations of the distribution of the random effects and of the baseline hazard function are considered, and the fit of such models to the transplant data is critically assessed. A mixture model that combines short- and long-term components of a hazard function is then developed, providing a more flexible model for the hazard function; it can incorporate different explanatory variables and random effects in each component. The model is straightforward to fit using standard statistical software and is shown to be a good fit to the transplant data. Copyright (C) 2004 John Wiley & Sons, Ltd.
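In generic notation (a sketch of the model class, not necessarily the authors' exact parameterisation), the shared-random-component AFT model and the two-component mixture can be written as follows.

```latex
% Shared-random-component AFT model (centre i, patient j):
\log T_{ij} = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + b_i + \sigma \varepsilon_{ij},
\qquad b_i \sim F_b,
% and a two-component mixture of short- and long-term survival:
S(t) = \pi\, S_{\mathrm{short}}(t \mid \mathbf{x}, b)
     + (1-\pi)\, S_{\mathrm{long}}(t \mid \mathbf{x}, b).
```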
Abstract:
This paper reports the results of a parametric CFD study on idealized city models to investigate the potential of slope flow to ventilate a city located in a mountainous region when the background synoptic wind is absent. Examples of such cities include Tokyo in Japan, Los Angeles and Phoenix in the US, and Hong Kong. Two types of buoyancy-driven flow are considered: slope flow from the mountain slope (katabatic wind at night and anabatic wind in the daytime), and wall flow due to heated/cooled urban surfaces. The combined buoyancy-driven flow system can disperse accumulated urban air pollutants when the background wind is weak or absent. Ventilation performance within the urban structures was evaluated in terms of the air change rate (ACH) and the age of air. The simulation results reveal that slope flow plays an important role in ventilating the urban area, especially in calm conditions, and that katabatic flow at night helps mitigate the nocturnal urban heat island. In the present parametric study, the mountain slope angle and mountain height are held constant, while heating/cooling intensity and building height are varied. For a typical mountain of 500 m inclined at 20° to the horizontal, the interaction structure depends strongly on the heating/cooling intensity ratio as well as on building height. When buildings are lower than 60 m, the slope wind dominates; when buildings are as high as 100 m, the contribution from the urban wall flow cannot be ignored. Katabatic wind can be very beneficial to both the thermal environment and air quality at the pedestrian level, where the air change rate can be as high as 300 ACH.
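For reference, the air change rate quoted at the end is the fresh-air volume flow divided by the ventilated volume, converted to changes per hour; a small sketch with hypothetical numbers (not from the study):

```python
def air_change_rate(q_fresh_m3_s: float, volume_m3: float) -> float:
    """Air change rate (1/h): fresh-air volumetric flow into a volume,
    divided by that volume, converted from per-second to per-hour."""
    return 3600.0 * q_fresh_m3_s / volume_m3

# Hypothetical pedestrian volume: a 1 km x 1 km urban area sampled
# over the lowest 2 m (numbers chosen only to reproduce ~300 ACH).
print(air_change_rate(q_fresh_m3_s=166_000.0, volume_m3=2_000_000.0))
```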
Abstract:
This paper deals with the estimation and testing of conditional duration models by looking at the density and baseline hazard rate functions. More precisely, we focus on the distance between the parametric density (or hazard rate) function implied by the duration process and its non-parametric estimate. Asymptotic justification is derived using the functional delta method for fixed and gamma kernels, whereas finite-sample properties are investigated through Monte Carlo simulations. Finally, we show the practical usefulness of such testing procedures by carrying out an empirical assessment of whether autoregressive conditional duration models are appropriate tools for modelling price durations of stocks traded at the New York Stock Exchange.
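A toy Python version of the distance being tested (the paper uses gamma kernels near the origin and functional-delta-method asymptotics; this sketch substitutes a Gaussian kernel and an exponential stand-in for the model-implied density, and only illustrates the statistic's construction):

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import expon, gaussian_kde

def density_distance(durations):
    """Integrated squared difference between a fitted parametric density
    (exponential here, as a stand-in for the density implied by the
    duration model) and a nonparametric kernel estimate."""
    grid = np.linspace(durations.min(), durations.max(), 400)
    f_par = expon(scale=durations.mean()).pdf(grid)  # ML-fitted exponential
    f_np = gaussian_kde(durations)(grid)             # kernel estimate
    return trapezoid((f_par - f_np) ** 2, grid)
```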
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Parametric Sensitivity Analysis of the Most Recent Computational Models of Rabbit Cardiac Pacemaking
Abstract:
The cellular basis of cardiac pacemaking activity, and specifically the quantitative contributions of particular mechanisms, is still debated. Reliable computational models of sinoatrial nodal (SAN) cells may provide mechanistic insights, but competing models are built from different data sets and with different underlying assumptions. To understand quantitative differences between alternative models, we performed thorough parameter sensitivity analyses of the SAN models of Maltsev & Lakatta (2009) and Severi et al. (2012). Model parameters were randomized to generate a population of cell models with different properties, simulations performed with each set of random parameters generated 14 quantitative outputs that characterized cellular activity, and regression methods were used to analyze the population behavior. Clear differences between the two models were observed at every step of the analysis. Specifically: (1) SR Ca2+ pump activity had a greater effect on SAN cell cycle length (CL) in the Maltsev model; (2) conversely, parameters describing the funny current (If) had a greater effect on CL in the Severi model; (3) changes in rapid delayed rectifier conductance (GKr) had opposite effects on action potential amplitude in the two models; (4) within the population, a greater percentage of model cells failed to exhibit action potentials in the Maltsev model (27%) than in the Severi model (7%), implying greater robustness in the latter; (5) confirming this initial impression, bifurcation analyses indicated that smaller relative changes in GKr or Na+-K+ pump activity led to failed action potentials in the Maltsev model. Overall, the results suggest experimental tests that can distinguish between models and alternative hypotheses, and the analysis offers strategies for developing anti-arrhythmic pharmaceuticals by predicting their effects on pacemaking activity.
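A minimal sketch of the population-based workflow described above (the `run_model` stand-in and all settings are hypothetical; the study's models are full SAN cell simulations producing 14 outputs, not one):

```python
import numpy as np

def sensitivity_regression(run_model, baseline, n_trials=300, sigma=0.1, seed=0):
    """Randomise parameters with lognormal scale factors, collect one
    positive scalar output per trial (e.g. cycle length), then regress
    log-output on log-scale-factors; the fitted coefficients rank how
    strongly each parameter influences the output."""
    rng = np.random.default_rng(seed)
    scales = rng.lognormal(mean=0.0, sigma=sigma,
                           size=(n_trials, len(baseline)))
    outputs = np.array([run_model(baseline * s) for s in scales])
    X = np.column_stack([np.ones(n_trials), np.log(scales)])
    coef, *_ = np.linalg.lstsq(X, np.log(outputs), rcond=None)
    return coef[1:]  # one sensitivity per parameter
```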
Abstract:
Most parametric software cost estimation models used today evolved in the late 1970s and early 1980s, when the dominant software development techniques were the early 'structured methods'. Since then, several new systems development paradigms and methods have emerged, one being Jackson System Development (JSD). As current cost estimating methods do not take account of these developments, they cannot provide adequate estimates of effort and hence cost. To address these shortcomings, two new estimation methods have been developed for JSD projects. One of these, JSD-FPA, is a top-down estimating method based on the existing MkII function point method. The other, JSD-COCOMO, is a sizing technique which sizes a project in terms of lines of code from the process structure diagrams, and thus provides an input to the traditional COCOMO method. The JSD-FPA method allows JSD projects in both the real-time and scientific application areas to be costed, as well as the commercial information systems applications to which FPA is usually applied. The method is based upon a three-dimensional view of a system specification, as opposed to the largely data-oriented view traditionally used by FPA. It uses counts of various attributes of a JSD specification to develop a metric that indicates the size of the system to be developed. This size metric is then transformed into an estimate of effort by calculating past project productivity and using this figure to predict the effort, and hence cost, of a future project. The effort estimates produced were validated by comparing them against the effort figures for six actual projects. The JSD-COCOMO method uses counts of the levels in a process structure chart as the input to an empirically derived model which transforms them into an estimate of delivered source code instructions.
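The productivity-based step common to such top-down methods can be sketched as follows (illustrative only; the real JSD-FPA size metric is assembled from counts of JSD specification attributes, and the calibration figures below are hypothetical):

```python
def estimate_effort(size_metric: float,
                    past_sizes: list[float],
                    past_efforts: list[float]) -> float:
    """Top-down effort estimate: average past productivity (size
    delivered per unit of effort) divides the new project's size."""
    productivity = sum(past_sizes) / sum(past_efforts)
    return size_metric / productivity

# Hypothetical calibration on three past projects.
print(estimate_effort(420.0, past_sizes=[300.0, 510.0, 390.0],
                      past_efforts=[25.0, 41.0, 33.0]))
```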
Abstract:
The spatial distribution of self-employment in India: evidence from semiparametric geoadditive models, Regional Studies. The entrepreneurship literature has rarely considered spatial location as a micro-determinant of occupational choice, and it has also ignored self-employment in developing countries. Using Bayesian semiparametric geoadditive techniques, this paper models spatial location as a micro-determinant of self-employment choice in India. The empirical results suggest the presence of spatial occupational neighbourhoods and a clear north–south divide in self-employment when the entire sample is considered; however, spatial variation in the non-agriculture sector largely disappears once individual factors that influence self-employment choice are explicitly controlled for. The results further suggest non-linear effects of age, education and wealth on self-employment.
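Schematically (illustrative notation, not the paper's exact specification), a geoadditive binary-choice model combines smooth covariate effects with a spatial effect in a structured-additive predictor:

```latex
% Structured-additive predictor for the self-employment choice y_i:
\eta_i = \mathbf{z}_i^{\top}\boldsymbol{\gamma}
       + f_1(\mathrm{age}_i) + f_2(\mathrm{education}_i) + f_3(\mathrm{wealth}_i)
       + f_{\mathrm{spat}}(s_i),
\qquad
P(y_i = 1) = \frac{\exp(\eta_i)}{1 + \exp(\eta_i)},
% where the f_k are smooth (e.g. penalised spline) effects and
% f_spat is a spatially structured effect over locations s_i.
```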
Abstract:
The increasing intensity of global competition has led organizations to utilize various types of performance measurement tools to improve the quality of their products and services. Data envelopment analysis (DEA) is a methodology for evaluating and measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. In conventional DEA with input and/or output ratios, all data take the form of crisp numbers; in real-world problems, however, observed values are sometimes expressed as interval ratios. In this paper, we propose two new models: general and multiplicative non-parametric ratio models for DEA problems with interval data. The contributions of this paper are fourfold: (1) we consider input and output data expressed as interval ratios in DEA; (2) we address the gap in the DEA literature for problems not suitable or difficult to model with crisp values; (3) we propose two new DEA models for evaluating the relative efficiencies of DMUs with interval ratios; and (4) we present a case study involving 20 banks with three interval ratios, where the traditional indicators are mostly financial ratios, to demonstrate the applicability and efficacy of the proposed models. © 2011 Elsevier Inc.
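For orientation, the crisp multiplier-form CCR model that such extensions build on can be solved as a linear programme; a Python sketch with hypothetical data (crisp data only; the interval-ratio models proposed in the paper are not reproduced here):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Multiplier-form CCR efficiency of DMU k. X is (n_dmus, n_inputs),
    Y is (n_dmus, n_outputs). Maximise u'y_k subject to v'x_k = 1 and
    u'y_j - v'x_j <= 0 for all DMUs j, with u, v >= 0."""
    n, m = X.shape
    _, s = Y.shape
    c = np.concatenate([np.zeros(m), -Y[k]])          # minimise -u'y_k
    A_ub = np.hstack([-X, Y])                          # u'y_j - v'x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([X[k], np.zeros(s)])[None]   # v'x_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + s))
    return -res.fun

# Hypothetical data: 4 DMUs, 2 inputs, 1 output.
X = np.array([[2.0, 4.0], [3.0, 2.0], [4.0, 1.0], [5.0, 5.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
print([round(ccr_efficiency(X, Y, k), 3) for k in range(4)])
```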
Abstract:
This paper describes an implementation of a method for integrating parametric, feature-based CAD models built on commercial software (CATIA) with the SU2 software framework. To exploit the adjoint-based methods for aerodynamic optimisation within SU2, a formulation is introduced for obtaining geometric sensitivities directly from the commercial CAD parameterisation, enabling the calculation of gradients with respect to CAD-based design variables. To assess the accuracy and efficiency of the alternative approach, two aerodynamic optimisation problems are investigated: an inviscid 3D problem with multiple constraints, and a viscous 2D high-lift aerofoil problem without constraints. Initial results show the new parameterisation obtaining reliable optima, with performance similar to that of the software's native parameterisations. In the final paper, details of computing CAD sensitivities will be provided, including their accuracy as well as the linking of geometric sensitivities to aerodynamic objective functions and constraints; the impact on the robustness of the overall method will be assessed and alternative parameterisations will be included.
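The gradient assembly described above follows the usual adjoint chain rule (a schematic, not the paper's implementation):

```latex
% Objective J, CAD design variables \alpha, surface coordinates X:
\frac{dJ}{d\alpha}
  = \frac{\partial J}{\partial X}\,\frac{\partial X}{\partial \alpha},
% with dJ/dX supplied by the adjoint solution and dX/d\alpha the
% geometric sensitivity of the CAD parameterisation (e.g. obtained by
% finite differences of regenerated CAD geometry).
```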
Abstract:
We propose a mechanism for testing the theory of collapse models, such as continuous spontaneous localization (CSL), by examining the parametric heating rate of a trapped nanosphere. The random localizations of the centre of mass of a given particle predicted by the CSL model can be understood as a stochastic force that constitutes a source of heating for the nanosphere. We show that by using a Paul trap to levitate the particle, combined with optical cooling, it is possible to reduce environmental decoherence to such a level that CSL dominates the dynamics and contributes the main source of heating. This approach allows measurements to be made on a timescale of seconds, and the free parameter λ_CSL which characterises the model ought to be testable down to values as low as 10^{-12} Hz.
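The measurement principle rests on a standard white-noise heating picture (conventions illustrative; the mapping of D_F to λ_CSL is the paper's subject and is not reproduced here): a stochastic force of strength D_F diffuses the centre-of-mass momentum, so a measured heating rate bounds the CSL contribution to D_F.

```latex
% White-noise stochastic force acting on the centre of mass:
\langle F(t)\,F(t') \rangle = 2 D_F\, \delta(t - t')
\;\Rightarrow\;
\langle p^2(t) \rangle = 2 D_F\, t
\;\Rightarrow\;
\frac{d\langle E \rangle}{dt} = \frac{D_F}{m}.
```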