966 results for Nondegenerate Parametric Oscillation
Abstract:
Therapeutic drug monitoring (TDM) aims to optimize treatments by individualizing dosage regimens based on the measurement of blood concentrations. Dosage individualization to maintain concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation currently represents the gold-standard TDM approach, but it requires computational assistance. In recent decades, computer programs have been developed to assist clinicians in this task. The aim of this survey was to assess and compare computer tools designed to support TDM clinical activities. The literature and the Internet were searched to identify software. All programs were tested on personal computers. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were processed through each of them. Altogether, 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software. The number of drugs handled varies widely (from two to 180), and eight programs let users add new drug models based on population pharmacokinetic analyses. Bayesian computation to predict dosage adaptation from blood concentrations (a posteriori adjustment) is performed by ten tools, while nine are also able to propose a priori dosage regimens based only on individual patient covariates such as age, sex and bodyweight. Among those applying Bayesian calculation, MM-USC*PACK© uses the non-parametric approach. The top two programs emerging from this benchmark were MwPharm© and TCIWorks. Most other programs evaluated had good potential while being less sophisticated or less user-friendly. Programs vary in complexity and might not fit all healthcare settings. Each tool must therefore be weighed against the individual needs of hospitals or clinicians. Programs should be quick and easy to use in routine practice, even for non-expert users. Computer-assisted TDM is attracting growing interest and should continue to improve, especially in terms of information-system interfacing, user-friendliness, data storage capability and report generation.
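For concreteness, the sketch below shows the kind of Bayesian (maximum a posteriori) calculation such tools automate for a posteriori dosage adjustment, assuming a one-compartment IV-bolus model; all priors, doses and concentrations are illustrative and are not taken from any of the programs reviewed.

```python
# Minimal sketch of Bayesian (MAP) dose individualization for a
# one-compartment IV-bolus model. All parameter values below are
# illustrative, not drawn from any cited tool.
import numpy as np
from scipy.optimize import minimize

dose = 500.0                       # mg, IV bolus
t_obs = np.array([2.0, 8.0])       # h, sampling times
c_obs = np.array([18.0, 7.5])      # mg/L, measured concentrations

# Log-normal population priors: (typical value, inter-individual CV)
prior = {"CL": (4.0, 0.3), "V": (30.0, 0.2)}   # L/h, L
sigma = 1.0                                     # mg/L additive residual error

def neg_log_posterior(log_params):
    log_cl, log_v = log_params
    cl, v = np.exp(log_cl), np.exp(log_v)
    c_pred = dose / v * np.exp(-cl / v * t_obs)           # model prediction
    loglik = -0.5 * np.sum(((c_obs - c_pred) / sigma) ** 2)
    logprior = sum(-0.5 * ((lp - np.log(mu)) / cv) ** 2   # prior on log scale
                   for lp, (mu, cv) in zip(log_params, prior.values()))
    return -(loglik + logprior)

start = [np.log(p[0]) for p in prior.values()]
fit = minimize(neg_log_posterior, start, method="Nelder-Mead")
cl_map, v_map = np.exp(fit.x)
print(f"MAP estimates: CL = {cl_map:.2f} L/h, V = {v_map:.2f} L")
# For this simple model, a dose rate targeting an average steady-state
# level then follows from dose_rate = CL * C_target.
```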
Abstract:
This paper does two things. First, it presents alternatives to the standard methods of estimating productive efficiency using a production function. It favours a parametric approach (viz. the stochastic production frontier approach) over a nonparametric approach (e.g. data envelopment analysis), and, further, one that provides a statistical explanation of efficiency as well as an estimate of its magnitude. Second, it illustrates the favoured approach (i.e. the ‘single-stage procedure’) with estimates of two models of explained inefficiency, using data from the Thai manufacturing sector after the crisis of 1997. Technical efficiency is modelled as dependent on capital investment in three major areas (viz. land, machinery and office appliances), where land is intended to proxy the effects of unproductive, speculative capital investment, and both machinery and office appliances are intended to proxy the effects of productive, non-speculative capital investment. The estimates from these models cast new light on the five-year post-1997 crisis period in Thailand, suggesting a structural shift from relatively labour-intensive to relatively capital-intensive manufacturing production between 1998 and 2002.
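In its standard (Aigner-Lovell-Schmidt) form, the stochastic production frontier favoured here can be written as

```latex
\ln y_i = \mathbf{x}_i'\beta + v_i - u_i, \qquad
v_i \sim N(0,\sigma_v^2), \qquad
u_i \sim N^{+}\!\left(\mathbf{z}_i'\delta,\ \sigma_u^2\right), \qquad
\mathrm{TE}_i = \exp(-u_i).
```

In single-stage specifications of this kind, the mean of the inefficiency term u_i depends on explanatory variables z_i (here, the three capital-investment proxies), so efficiency is estimated and explained simultaneously rather than in a second-stage regression; the exact distributional assumptions vary across papers.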
Abstract:
Public authorities and road users alike are increasingly concerned by recent trends in road safety outcomes in Barcelona, the European city with the highest number of registered Powered Two-Wheel (PTW) vehicles per inhabitant. In this study we explore the determinants of motorcycle and moped accident severity in a large urban area, drawing on Barcelona’s local police database (2002-2008). We apply non-parametric regression techniques to characterize PTW accidents and parametric methods to investigate the factors influencing their severity. Our results show that PTW accident victims are more vulnerable than other traffic victims, suffering greater degrees of accident severity. Speed violations and alcohol consumption are associated with the worst health outcomes. Demographic and environment-related risk factors, in addition to helmet use, play an important role in determining accident severity. This study thus furthers our understanding of the most vulnerable vehicle types, and our results have direct implications for local policy makers in their efforts to reduce the severity of PTW accidents in large urban areas.
Abstract:
Traditionally, it is assumed that the population size of cities in a country follows a Pareto distribution. This assumption is typically supported by finding evidence of Zipf's Law. Recent studies question this finding, highlighting that, while the Pareto distribution may fit reasonably well when the data are truncated at the upper tail, i.e. for the largest cities of a country, the log-normal distribution may apply when all cities are considered. Moreover, conclusions may be sensitive to the choice of a particular truncation threshold, an issue so far overlooked in the literature. In this paper, then, we reassess the city size distribution in relation to its sensitivity to the choice of truncation point. In particular, we look at US Census data and apply a recursive-truncation approach to estimate Zipf's Law, together with a non-parametric alternative test, considering each possible truncation point of the distribution of all cities. The findings confirm the sensitivity of the estimates to the truncation point. Moreover, repeating the analysis over simulated data confirms the difficulty of distinguishing a Pareto tail from the tail of a log-normal and, in turn, of identifying the city size distribution as a false or a weak Pareto law.
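A minimal sketch of the recursive-truncation idea: re-estimate the rank-size (Zipf) regression at every upper-tail truncation point and watch the slope move. The toy data and the Gabaix-Ibragimov (rank - 1/2) correction below are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative recursive-truncation Zipf regression on simulated city sizes.
import numpy as np

rng = np.random.default_rng(0)
sizes = np.sort(rng.lognormal(mean=10.0, sigma=1.2, size=5000))[::-1]  # toy data

def zipf_slope(s):
    """OLS slope of log(rank - 1/2) on log(size); Zipf's Law implies -1."""
    ranks = np.arange(1, len(s) + 1)
    return np.polyfit(np.log(s), np.log(ranks - 0.5), 1)[0]

# Slope as a function of where the upper tail is truncated
for n_top in (100, 500, 1000, 5000):
    print(f"top {n_top:>5} cities: slope = {zipf_slope(sizes[:n_top]):.3f}")
```

Because the simulated sizes are log-normal, the estimated slope drifts as the truncation point widens, which is exactly the sensitivity the paper documents.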
Abstract:
This paper proposes a novel way of testing the exogeneity of an explanatory variable without any parametric assumptions, in the presence of a "conditional" instrumental variable. A testable implication is derived: if an explanatory variable is endogenous, the conditional distribution of the outcome given the endogenous variable is not independent of its instrumental variable(s). The test rejects the null hypothesis with probability one if the explanatory variable is endogenous, and it detects alternatives converging to the null at rate n^{-1/2}. We propose a consistent nonparametric bootstrap test to implement this testable implication. We show that the proposed bootstrap test is asymptotically justified in the sense that it produces asymptotically correct size under the null of exogeneity and has unit power asymptotically. Our nonparametric test can be applied to cases in which the outcome is generated by an additively non-separable structural relation or in which the outcome is discrete, neither of which has been studied in the literature.
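Schematically, the testable implication can be written as

```latex
H_0 :\; F_{Y \mid X, Z}(y \mid x, z) = F_{Y \mid X}(y \mid x)
\quad \text{for all } (y, x, z),
```

that is, under exogeneity the conditional distribution of the outcome Y given the explanatory variable X carries no further information about the instrument Z; any dependence on z therefore signals endogeneity of X.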
Abstract:
This paper develops a methodology to estimate entire population distributions from bin-aggregated sample data. We do this through the estimation of the parameters of mixtures of distributions that allow for maximal parametric flexibility. The statistical approach we develop enables comparisons of the full distributions of height data from potential army conscripts across France's 88 departments for most of the nineteenth century. These comparisons are made by testing for differences of means and for stochastic dominance. Corrections for possible measurement errors are also devised by taking advantage of the richness of the data sets. Our methodology is of interest to researchers working on historical as well as contemporary bin-aggregated or histogram-type data, which remain widespread since much publicly available information comes in that form, often because of political sensitivity and/or confidentiality concerns.
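A schematic sketch of the binned-likelihood idea follows, assuming a simple two-component normal mixture and invented bin edges and counts; the paper's mixtures are considerably more flexible.

```python
# Fit a two-component normal mixture to bin-aggregated counts by
# multinomial maximum likelihood. Data below are invented placeholders.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

edges = np.array([150, 155, 160, 165, 170, 175, 180, 185])   # cm, bin edges
counts = np.array([12, 45, 130, 210, 160, 70, 18])            # conscripts per bin

def neg_loglik(theta):
    w = 1 / (1 + np.exp(-theta[0]))            # mixing weight in (0, 1)
    mu1, mu2 = theta[1], theta[2]
    s1, s2 = np.exp(theta[3]), np.exp(theta[4])
    cdf = lambda x: w * norm.cdf(x, mu1, s1) + (1 - w) * norm.cdf(x, mu2, s2)
    p = np.diff(cdf(edges))                    # probability mass of each bin
    p = np.clip(p / p.sum(), 1e-12, None)      # renormalize over observed range
    return -np.sum(counts * np.log(p))

start = [0.0, 162.0, 172.0, np.log(4.0), np.log(4.0)]
fit = minimize(neg_loglik, start, method="Nelder-Mead")
print("fitted parameters:", fit.x)
```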
Abstract:
There is a vast literature specifying Bayesian shrinkage priors for vector autoregressions (VARs) of possibly large dimensions. In this paper I argue that many of these priors are not appropriate for multi-country settings, which motivates me to develop priors for panel VARs (PVARs). The parametric and semi-parametric priors I suggest not only perform valuable shrinkage in large dimensions but also allow for soft clustering of variables or countries that are homogeneous. I discuss the implications of these new priors for modelling interdependencies and heterogeneities among countries in a panel VAR setting. Monte Carlo evidence and an empirical forecasting exercise show clear and important gains from the new priors relative to existing popular priors for VARs and PVARs.
Abstract:
This paper aims to provide a Bayesian parametric framework to tackle the problem of accessibility across space in urban theory. Adopting continuous variables in a probabilistic setting, we are able to relate the distribution density to Kendall's tau index and to replicate the general issues related to the role of proximity in a more general context. In addition, by referring to the Beta and Gamma distributions, we are able to introduce a differentiation feature in each spatial unit without resorting to any a priori definition of territorial units. We also provide an empirical application of our theoretical setting, studying the density distribution of the population across Massachusetts.
Abstract:
Transport costs in address models of differentiation are usually modeled as separable from the consumption commodity and as having a parametric price. However, there are many sectors in an economy where such modeling is not satisfactory, either because transportation is supplied under oligopolistic conditions or because there is a difference (loss) between the amount delivered at the point of production and the amount received at the point of consumption. This paper is a first attempt to tackle these issues, proposing to study competition in spatial models using an iceberg-like transport cost technology that allows for concave and convex melting functions.
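In standard notation (a sketch, not the paper's exact formulation), an iceberg-like technology lets the quantity received decline with distance through a melting function:

```latex
q_r(d) = q_s\, g(d), \qquad g(0) = 1, \quad g'(d) < 0,
```

where q_s is the quantity shipped, q_r the quantity received at distance d, and g the melting function. The classic Samuelson case g(d) = e^{-\tau d} is convex; a linear benchmark such as g(d) = 1 - \tau d on d < 1/\tau behaves differently, and the paper's contribution is precisely to allow both concave and convex shapes of g.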
Abstract:
Drawing on 2006 PISA data, this study examines the impact of socio-economic school composition on science test scores for Spanish students in compulsory secondary schools. We define school composition in terms of the average parental human capital of students in the same school. These contextual peer effects are estimated using a semi-parametric methodology, which enables the spillovers to affect all the parameters of the educational production function. We also deal with the potential problem of self-selection of students into schools, using an artificial sorting that we argue is independent of unobserved student abilities. The results indicate that the association between socio-economic school composition and test scores is clearly positive and significantly higher when computed with the semi-parametric approach. However, the endogenous sorting of students into schools plays a fundamental role: the spillovers are significantly reduced when this selection process is ruled out of our measure of school composition effects. Specifically, the estimates suggest that the contextual peer effects are moderately positive only in those schools where the socio-economic composition is considerably elevated. In addition, we find some evidence of asymmetry in how the external effects and the sorting process operate, which seem to affect males and females, as well as high- and low-performing students, differently.
Abstract:
Purpose: To evaluate the sensitivity of the perfusion parameters derived from Intravoxel Incoherent Motion (IVIM) MR imaging to hypercapnia-induced vasodilatation and hyperoxygenation-induced vasoconstriction in the human brain. Materials and Methods: This study was approved by the local ethics committee and informed consent was obtained from all participants. Images were acquired with a standard pulsed-gradient spin-echo sequence (Stejskal-Tanner) in a clinical 3-T system by using 16 b values ranging from 0 to 900 sec/mm². Seven healthy volunteers were examined while they inhaled four different gas mixtures known to modify brain perfusion (pure oxygen, ambient air, 5% CO₂ in ambient air, and 8% CO₂ in ambient air). Diffusion coefficient (D), pseudodiffusion coefficient (D*), perfusion fraction (f), and blood flow-related parameter (fD*) maps were calculated on the basis of the IVIM biexponential model, and the parametric maps were compared among the four different gas mixtures. Paired, one-tailed Student t tests were performed to assess for statistically significant differences. Results: Signal decay curves were biexponential in the brain parenchyma of all volunteers. When compared with inhaled ambient air, the IVIM perfusion parameters D*, f, and fD* increased as the concentration of inhaled CO₂ was increased (for the entire brain, P = .01 for f, D*, and fD* for CO₂ 5%; P = .02 for f, and P = .01 for D* and fD* for CO₂ 8%), and a trend toward a reduction was observed when participants inhaled pure oxygen (although P > .05). D remained globally stable. Conclusion: The IVIM perfusion parameters were reactive to hyperoxygenation-induced vasoconstriction and hypercapnia-induced vasodilatation. Accordingly, IVIM imaging was found to be a valid and promising method to quantify brain perfusion in humans. © RSNA, 2012.
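For reference, the IVIM biexponential signal model is commonly written in its simplified (Le Bihan) form as

```latex
\frac{S(b)}{S_0} = f\, e^{-b D^{*}} + (1 - f)\, e^{-b D},
```

where f is the perfusion fraction, D* the pseudodiffusion coefficient of incoherently flowing blood, and D the tissue diffusion coefficient; the product fD* is the blood-flow-related parameter reported above. Some formulations write the fast term as e^{-b(D + D*)}, but since D* ≫ D the difference is small.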
Abstract:
Our objective is to analyse fraud as an operational risk for the insurance company. We study the effect of a fraud detection policy on the insurer's income statement, quantifying the loss risk from the perspective of claims auditing. From the point of view of operational risk, the study aims to analyse the effect of failing to detect fraudulent claims after investigation. We choose VaR (Value-at-Risk) as the risk measure, with a non-parametric estimation of the loss risk involved in the detection or non-detection of fraudulent claims. The most relevant conclusion is that auditing claims reduces loss risk for the insurance company.
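A minimal sketch of a non-parametric (historical-simulation) VaR of the kind used as the risk measure: the alpha-quantile of an empirical loss distribution, with no distributional assumption. The simulated losses below are placeholders, not the study's data.

```python
# Non-parametric VaR as the empirical alpha-quantile of per-claim losses.
import numpy as np

rng = np.random.default_rng(1)
losses = rng.lognormal(mean=8.0, sigma=1.0, size=10_000)  # placeholder losses

def var_np(loss_sample, alpha=0.99):
    """Empirical quantile VaR: no parametric assumption on the losses."""
    return np.quantile(loss_sample, alpha)

print(f"99% VaR: {var_np(losses):,.0f}")
# Comparing this quantile with and without the losses avoided by claim
# auditing is one way to quantify how a detection policy reduces loss risk.
```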
Abstract:
Given a sample from a fully specified parametric model, let Zn be a given finite-dimensional statistic - for example, an initial estimator or a set of sample moments. We propose to (re-)estimate the parameters of the model by maximizing the likelihood of Zn. We call this the maximum indirect likelihood (MIL) estimator. We also propose a computationally tractable Bayesian version of the estimator which we refer to as a Bayesian Indirect Likelihood (BIL) estimator. In most cases, the density of the statistic will be of unknown form, and we develop simulated versions of the MIL and BIL estimators. We show that the indirect likelihood estimators are consistent and asymptotically normally distributed, with the same asymptotic variance as that of the corresponding efficient two-step GMM estimator based on the same statistic. However, our likelihood-based estimators, by taking into account the full finite-sample distribution of the statistic, are higher order efficient relative to GMM-type estimators. Furthermore, in many cases they enjoy a bias reduction property similar to that of the indirect inference estimator. Monte Carlo results for a number of applications including dynamic and nonlinear panel data models, a structural auction model and two DSGE models show that the proposed estimators indeed have attractive finite sample properties.
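A toy sketch of the simulated MIL idea, assuming the statistic Z_n is a sample mean from a normal location model and approximating its density by a Gaussian kernel density estimate over simulated replications; the paper's estimators cover far richer settings.

```python
# Simulated maximum indirect likelihood (MIL), toy version: choose theta to
# maximize the simulated density of the statistic at its observed value.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
n, n_sim = 50, 2000
data = rng.normal(loc=1.5, scale=1.0, size=n)    # "observed" sample
z_obs = data.mean()                               # the statistic Z_n
base = rng.normal(size=(n_sim, n))                # common random numbers

def sim_log_indirect_lik(theta):
    # Simulated statistics under theta, then KDE of their density at z_obs.
    z_sims = (theta + base).mean(axis=1)
    return np.log(gaussian_kde(z_sims)(z_obs)[0])

grid = np.linspace(0.5, 2.5, 201)
theta_mil = grid[np.argmax([sim_log_indirect_lik(t) for t in grid])]
print(f"observed mean: {z_obs:.3f}, simulated MIL estimate: {theta_mil:.3f}")
```

Reusing the same random draws (`base`) across candidate values of theta keeps the simulated objective smooth, which is what makes the grid maximization stable.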
Abstract:
The effectiveness of R&D subsidies can vary substantially depending on their characteristics. Specifically, the amount and intensity of such subsidies are crucial issues in the design of public schemes supporting private R&D. Public agencies determine the intensities of R&D subsidies for firms in line with their eligibility criteria, although assessing the effects of R&D projects accurately is far from straightforward. The main aim of this paper is to examine whether there is an optimal intensity for R&D subsidies through an analysis of their impact on private R&D effort. We examine the decisions of a public agency to grant subsidies taking into account not only the characteristics of the firms but also, as few previous studies have done to date, those of the R&D projects. In determining the optimal subsidy we use both parametric and nonparametric techniques. The results show a non-linear relationship between the percentage of subsidy received and the firms’ R&D effort. These results have implications for technology policy, particularly for the design of R&D subsidies that ensure enhanced effectiveness.
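One illustrative non-parametric route to an "optimal" intensity is a kernel regression of R&D effort on subsidy intensity, maximized over a grid. The data below are simulated with a deliberately hump-shaped response and are not the study's; the `nw` helper is a hypothetical Nadaraya-Watson estimator, not the authors' exact method.

```python
# Nadaraya-Watson kernel regression of R&D effort on subsidy intensity,
# locating the intensity that maximizes predicted effort. Simulated data.
import numpy as np

rng = np.random.default_rng(3)
intensity = rng.uniform(0, 1, 400)                      # subsidy share of project
effort = 1 + 2 * intensity - 2.2 * intensity**2 + rng.normal(0, 0.2, 400)

def nw(x0, x, y, h=0.08):
    """Nadaraya-Watson estimate at x0 with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

grid = np.linspace(0.05, 0.95, 91)
m_hat = np.array([nw(g, intensity, effort) for g in grid])
print(f"estimated effort-maximizing intensity: {grid[np.argmax(m_hat)]:.2f}")
```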
Abstract:
Parasitological analysis of 237 Menticirrhus ophicephalus, 124 Paralonchurus peruanus, 249 Sciaena deliciosa, 50 Sciaena fasciata and 308 Stellifer minor from Callao (Perú) yielded 37 species of metazoan parasites (14 Monogenea, 11 Copepoda, 4 Nematoda, 3 Acanthocephala, 1 Digenea, 1 Aspidobothrea, 1 Eucestoda, 1 Isopoda and 1 Hirudinea). Only one species, the copepod Bomolochus peruensis, was common to all five hosts. The majority of the components of the infracommunities analyzed are ectoparasites. The Brillouin index (H) and evenness (J') were applied to the fully sampled metazoan parasite infracommunities. High values of prevalence and mean abundance of infection are associated with the polyonchoinean monogeneans; the low values of J' reinforce the strong dominance of this group in the communities studied. The paucity of the endoparasite fauna may be a consequence of the unstable environment created by an upwelling system aperiodically affected by the El Niño Southern Oscillation phenomenon.
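As conventionally defined for a fully censused collection, the Brillouin index and its evenness are

```latex
H = \frac{1}{N}\left( \ln N! - \sum_{i=1}^{S} \ln n_i! \right),
\qquad J' = \frac{H}{H_{\max}},
```

where N is the total number of parasite individuals in an infracommunity, n_i the count of species i, S the number of species, and H_max the value of H when the N individuals are spread as evenly as possible among the S species; low J' therefore reflects the dominance by monogeneans reported above.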