948 results for Non-parametric methods
Abstract:
The fall of the Berlin Wall opened the way for a reform path – the transition process – which led ten former socialist countries in Central and South Eastern Europe to knock at the EU's doors. Nevertheless, at the time of EU accession several economic and structural weaknesses remained. A tendency towards convergence between the new Member States (NMS) and the EU average income level emerged, together with a spread of inequality at the sub-regional level, mainly driven by the backwardness of agricultural and rural areas. Considerable progress has been made in evaluating policies for rural areas, but a shared definition of rurality is still missing. Numerous indicators have been calculated for assessing the effectiveness of the Common Agricultural Policy and the Rural Development Policy. Previous analyses of the Central and Eastern European countries found that the characteristics of the most backward areas were insufficiently addressed by the policies enacted; the low data availability and accountability at the sub-regional level, and the deficiencies in institutional planning and implementation, represented an obstacle to targeting policies and payments. The following pages aim at providing a basis for understanding the connections between the peculiarities of the transition process, the current development performance of the NMS and the role of the EU, with particular attention to agricultural and rural areas. Applying a mixed methodological approach (multivariate statistics, non-parametric methods, spatial econometrics), this study contributes to the identification of rural areas and to the analysis of the changes that occurred during EU membership in Hungary, assessing the effect of the introduction of the CAP and its contribution to the convergence of Hungarian agricultural and rural areas. The author believes that more targeted – and therefore more efficient – policies for agricultural and rural areas require a deeper knowledge of their structural and dynamic characteristics.
Abstract:
"Risk Measures in Financial Mathematics" („Risikomaße in der Finanzmathematik"). Value-at-Risk (VaR) is a risk measure whose use is required by banking supervision. The advantage of VaR – as a quantile of the profit or loss distribution – lies above all in its simple interpretability. A disadvantage is that the left tail of the probability distribution is not taken into account. Moreover, the computation of VaR is difficult, since quantiles are not additive. The greatest drawback of VaR is its lack of subadditivity. For this reason, alternatives such as Expected Shortfall are investigated. In this thesis, financial risk measures are first introduced and some of their fundamental properties are recorded. We deal with various parametric and non-parametric methods for determining VaR, including their advantages and disadvantages. Furthermore, we consider parametric and non-parametric estimators of VaR in discrete time. We present portfolio optimisation problems in the Black-Scholes model with constrained VaR and with constrained variance, and explain the advantage of the first approach over the second. We solve utility optimisation problems for terminal wealth under a VaR constraint and under a variance constraint. VaR says nothing about the loss beyond the quantile, whereas Expected Shortfall takes it into account. We therefore use Expected Shortfall instead of the risk measure VaR considered by Emmer, Korn and Klüppelberg (2001) for portfolio optimisation in the Black-Scholes model.
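As a hedged illustration of the non-parametric estimation discussed above, the following sketch computes historical-simulation VaR and Expected Shortfall from a sample of losses; the confidence level and the simulated data are assumptions for demonstration, not taken from the thesis.

```python
import numpy as np

def var_es_historical(losses: np.ndarray, alpha: float = 0.99):
    """Non-parametric (historical simulation) VaR and Expected Shortfall.

    VaR_alpha is the empirical alpha-quantile of the loss distribution;
    ES_alpha is the mean loss beyond that quantile, so it also reflects
    the tail that VaR ignores.
    """
    var = np.quantile(losses, alpha)   # empirical quantile
    es = losses[losses >= var].mean()  # average loss in the tail
    return var, es

# Illustrative data: heavy-tailed daily losses (positive = loss).
rng = np.random.default_rng(0)
losses = rng.standard_t(df=4, size=10_000)
var99, es99 = var_es_historical(losses, alpha=0.99)
print(f"VaR(99%) = {var99:.3f}, ES(99%) = {es99:.3f}")
```

Because ES averages over the whole tail, ES(99%) here always exceeds VaR(99%), which is precisely the property motivating its use above.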
Abstract:
Model-based calibration of steady-state engine operation is commonly performed with highly parameterized empirical models that are accurate but not very robust, particularly when predicting highly nonlinear responses such as diesel smoke emissions. To address this problem, and to boost the accuracy of more robust non-parametric methods to the same level, GT-Power was used to transform the empirical model input space into multiple input spaces that simplified the input-output relationship and improved the accuracy and robustness of smoke predictions made by three commonly used empirical modeling methods: Multivariate Regression, Neural Networks and the k-Nearest Neighbor method. The availability of multiple input spaces allowed the development of two committee techniques: a "Simple Committee" technique that used averaged predictions from a set of 10 input spaces pre-selected using the training data, and a "Minimum Variance Committee" technique in which the input spaces for each prediction were chosen on the basis of disagreement between the three modeling methods. This latter technique equalized the performance of the three modeling methods. The successively increasing improvements resulting from the use of a single best transformed input space (Best Combination Technique), the Simple Committee Technique and the Minimum Variance Committee Technique were verified with hypothesis testing. The transformed input spaces were also shown to improve outlier detection and to improve k-Nearest Neighbor performance when predicting dynamic emissions with steady-state training data. An unexpected finding was that the benefits of input space transformation were unaffected by changes in the hardware or the calibration of the underlying GT-Power model.
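The two committee ideas can be made concrete with a short sketch. The code below is an illustrative reconstruction, not the authors' implementation: three regressors stand in for the multivariate regression, neural network and k-nearest-neighbor models, the candidate input spaces are random placeholders, and the "minimum variance" rule picks, per query point, the space where the three methods disagree least.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsRegressor

def committee_predict(spaces_train, y_train, spaces_query):
    """Committee predictions over multiple candidate input spaces."""
    preds = []  # preds[s][m] = predictions of model m in input space s
    for Xtr, Xq in zip(spaces_train, spaces_query):
        models = [
            LinearRegression(),
            MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
            KNeighborsRegressor(n_neighbors=5),
        ]
        preds.append(np.array([m.fit(Xtr, y_train).predict(Xq) for m in models]))
    preds = np.array(preds)                   # shape: (space, model, query)

    simple = preds.mean(axis=(0, 1))          # "Simple Committee": plain average
    spread = preds.var(axis=1)                # per-space disagreement, (space, query)
    best = spread.argmin(axis=0)              # least-disagreement space per query
    min_var = preds.mean(axis=1)[best, np.arange(preds.shape[2])]
    return simple, min_var

# Toy usage with random stand-in data and four candidate input spaces.
rng = np.random.default_rng(0)
y = rng.normal(size=50)
spaces_tr = [rng.normal(size=(50, 3)) for _ in range(4)]
spaces_q = [rng.normal(size=(5, 3)) for _ in range(4)]
simple, min_var = committee_predict(spaces_tr, y, spaces_q)
```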
Abstract:
OBJECTIVE: To characterize the impact of hepatitis C (HCV) serostatus on adherence to antiretroviral treatment (ART) among HIV-infected adults initiating ART. METHODS: The British Columbia HIV/AIDS Drug Treatment Program distributes, at no cost, all ART in this Canadian province. Eligible individuals used triple combination ART as their first HIV therapy and had documented HCV serology. Statistical analyses used parametric and non-parametric methods, including multivariate logistic regression. The primary outcome was ≥95% adherence, defined as receiving ≥95% of prescription refills during the first year of antiretroviral therapy. RESULTS: There were 1186 patients eligible for analysis, including 606 (51%) positive for HCV antibody and 580 (49%) who were negative. In adjusted analyses, adherence was independently associated with HCV seropositivity [adjusted odds ratio (AOR), 0.48; 95% confidence interval (CI), 0.23-0.97; P = 0.003], higher plasma albumin levels (AOR, 1.07; 95% CI, 1.01-1.12; P = 0.002) and male gender (AOR, 2.53; 95% CI, 1.04-6.15; P = 0.017), but not with injection drug use (IDU), age or other markers of liver injury. There was no evidence of an interaction between HCV and liver injury in adjusted analyses; comparing different strata of HCV and IDU confirmed that HCV was associated with poor adherence independent of IDU. CONCLUSIONS: HCV-coinfected individuals and those with lower albumin are less likely to be adherent to their ART.
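A multivariate logistic regression of the kind the abstract reports can be sketched as follows; the variable names and the simulated data mirror the covariates mentioned (HCV serostatus, albumin, gender, IDU, age) but are assumptions, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in data: adherent = 1 if >=95% of refills received.
rng = np.random.default_rng(1)
n = 1186
df = pd.DataFrame({
    "adherent": rng.integers(0, 2, n),
    "hcv": rng.integers(0, 2, n),       # HCV antibody positive
    "albumin": rng.normal(40, 5, n),    # plasma albumin, g/L
    "male": rng.integers(0, 2, n),
    "idu": rng.integers(0, 2, n),       # injection drug use
    "age": rng.normal(40, 10, n),
})

X = sm.add_constant(df[["hcv", "albumin", "male", "idu", "age"]])
fit = sm.Logit(df["adherent"], X).fit(disp=0)
print(np.exp(fit.params))       # adjusted odds ratios (AORs)
print(np.exp(fit.conf_int()))   # 95% confidence intervals on the AORs
```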
Abstract:
Fractal and multifractal are concepts that have grown increasingly popular in soil analysis in recent years, along with the development of fractal models. One of the common steps is to calculate the slope of a linear fit, usually by the least squares method. This should not be a special problem; however, in many situations involving experimental data the researcher has to select the range of scales at which to work, neglecting the remaining points in order to achieve the best linearity, which is necessary in this type of analysis. Robust regression is a form of regression analysis designed to circumvent some limitations of traditional parametric and non-parametric methods. With this method we do not have to assume that an outlier is simply an extreme observation drawn from the tail of a normal distribution that does not compromise the validity of the regression results. In this work we have evaluated the capacity of robust regression to select the points of the experimental data to be used, trying to avoid subjective choices. Based on this analysis we have developed a new working methodology that involves two basic steps:
• Evaluation of the improvement of the linear fit when consecutive points are eliminated, based on the p-value of the correlation coefficient R; in this way we consider the implications of reducing the number of points.
• Evaluation of the significance of the difference between the slope fitted using the two extreme points and the slope fitted using the available points.
We compare the results of applying this methodology and the commonly used least squares one, as sketched below. The data selected for these comparisons come from experimental soil roughness transects and from series simulated with the midpoint displacement method, adding trends and noise. The results are discussed, indicating the advantages and disadvantages of each methodology.
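A minimal sketch of the comparison, assuming simulated log-log scaling data with a single outlier: ordinary least squares is pulled away from the true slope, while a robust Huber M-estimator (one common robust regression choice; the paper's exact procedure may differ) stays close to it.

```python
import numpy as np
import statsmodels.api as sm

# Simulated log-log scaling data with a true slope of 0.8 and one outlier.
rng = np.random.default_rng(2)
log_scale = np.linspace(0, 3, 20)
log_measure = 0.8 * log_scale + rng.normal(0, 0.02, 20)
log_measure[-1] += 0.5                       # outlier at the largest scale

X = sm.add_constant(log_scale)
ols = sm.OLS(log_measure, X).fit()
rlm = sm.RLM(log_measure, X, M=sm.robust.norms.HuberT()).fit()
print(f"OLS slope:    {ols.params[1]:.3f}")  # biased by the outlier
print(f"robust slope: {rlm.params[1]:.3f}")  # close to the true 0.8
```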
Abstract:
Purpose: The demand for rice driven by population growth in many countries has intensified the application of pesticides and the use of poor-quality water to irrigate fields. The terrestrial environment is one compartment affected by these practices, where soil works as a reservoir, retaining organic pollutants. It is therefore necessary to develop methods to determine insecticides in soil, to monitor areas susceptible to contamination, and to apply adequate techniques to remediate them. Materials and methods: This study investigates the occurrence of ten pyrethroid insecticides (PYs) and their spatio-temporal variance in soil at two different depths, collected in two periods (before plow and during rice production), in a paddy field area located on the Mediterranean coast. Pyrethroids were quantified using gas chromatography-mass spectrometry (GC-MS) after ultrasound-assisted extraction with ethyl acetate. The results were assessed statistically using non-parametric methods, and significant statistical differences (p < 0.05) in pyrethroid content with soil depth and proximity to wastewater treatment plants were evaluated. Moreover, a geographic information system (GIS) was used to monitor the occurrence of PYs in paddy fields and detect risk areas. Results and discussion: Pyrethroids were detected at concentrations up to 57.0 ng g⁻¹ before plow and up to 62.3 ng g⁻¹ during rice production, with resmethrin and cyfluthrin being the compounds found at the highest concentrations in soil. Pyrethroids were detected mainly in the top soil, and a GIS program was used to depict the results obtained, showing that effluents from wastewater treatment plants (WWTPs) were the main sources of soil contamination. No toxic effects on soil organisms were expected, but it is of concern that PYs may affect aquatic organisms, which represents the worst-case scenario. Conclusions: A methodology to determine pyrethroids in soil was developed to monitor a paddy field area. The use of water from WWTPs to irrigate rice fields is one of the main pollution sources of pyrethroids. It is a matter of concern that PYs may have toxic effects on aquatic organisms, as they can be desorbed from soil. Phytoremediation may play an important role in this area, reducing the possible risk associated with PY levels in soil.
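The non-parametric depth comparison mentioned above can be illustrated with a Mann-Whitney U test (a standard choice for such data; the paper does not specify its exact tests), using invented concentration values rather than the study's measurements.

```python
import numpy as np
from scipy import stats

# Invented pyrethroid concentrations (ng/g) at two soil depths.
top_soil = np.array([12.4, 30.1, 57.0, 8.2, 22.5, 41.3])
deep_soil = np.array([1.1, 4.7, 9.8, 0.4, 3.2, 6.5])

# Mann-Whitney U: no normality assumption, suited to small, skewed
# residue datasets; significant at p < 0.05 as in the abstract.
u_stat, p_value = stats.mannwhitneyu(top_soil, deep_soil, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
```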
Abstract:
Objective: To determine the pharmacokinetics of doxorubicin in sulphur-crested cockatoos, so that its use in clinical studies in birds can be considered. Design: A pharmacokinetic study of doxorubicin, following a single intravenous (IV) infusion over 20 min, was performed in four healthy sulphur-crested cockatoos (Cacatua galerita). Procedure: Birds were anaesthetised and both jugular veins were cannulated, one for doxorubicin infusion and the other for blood collection. Doxorubicin hydrochloride (2 mg/kg) in normal saline was infused IV over 20 min at a constant rate. Serial blood samples were collected for 96 h after initiation of the infusion. Plasma doxorubicin concentrations were assayed using an HPLC method involving ethyl acetate extraction, reverse-phase chromatography and fluorescence detection. The limit of quantification was 20 ng/mL. Established non-parametric methods were used for the analysis of plasma doxorubicin data. Results: During the infusion the mean ± SD for the Cmax of doxorubicin was 4037 ± 2577 ng/mL. Plasma concentrations declined biexponentially immediately after the infusion was ceased. There was considerable intersubject variability in all pharmacokinetic variables. The terminal (beta-phase) half-life was 41.4 ± 18.5 min, the systemic clearance (Cl) was 45.7 ± 18.0 mL/min/kg, the mean residence time (MRT) was 4.8 ± 1.4 min, and the volume of distribution at steady state (Vss) was 238 ± 131 mL/kg. The extrapolated area under the curve (AUC(0-∞)) was 950 ± 677 ng/mL·h. The reduced metabolite, doxorubicinol, was detected in the plasma of all four parrots but could be quantified in only one bird, with the profile suggesting formation rate-limited pharmacokinetics of doxorubicinol. Conclusions and clinical relevance: Doxorubicin infusion in sulphur-crested cockatoos produced mild, transient inappetence. The volume of distribution per kilogram and the terminal half-life were considerably smaller, but the clearance per kilogram was similar to or larger than that reported in the dog, rat and humans. Traces of doxorubicinol, a metabolite of doxorubicin, were detected in the plasma.
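The non-compartmental quantities reported above (AUC, clearance, MRT, Vss) can be computed from concentration-time data as in the sketch below; the times, concentrations and simplifications (bolus-style moment formulas, no infusion-duration correction) are assumptions for illustration, not the cockatoo data.

```python
import numpy as np

dose = 2.0e6                                        # ng/kg (2 mg/kg)
t = np.array([0.33, 0.5, 1.0, 2.0, 4.0, 8.0])       # h after start of infusion
c = np.array([4000., 1500., 600., 200., 50., 10.])  # ng/mL, invented profile

auc = np.trapz(c, t)                                # trapezoidal AUC(0-t)
aumc = np.trapz(c * t, t)                           # first moment of the curve
lam_z = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]   # terminal slope (1/h)
auc_inf = auc + c[-1] / lam_z                       # extrapolated AUC(0-inf)
aumc_inf = aumc + c[-1] * t[-1] / lam_z + c[-1] / lam_z**2

cl = dose / auc_inf                                 # clearance, mL/h/kg
mrt = aumc_inf / auc_inf                            # mean residence time, h
vss = cl * mrt                                      # steady-state volume, mL/kg
print(f"AUC(0-inf) = {auc_inf:.0f} ng.h/mL, Cl = {cl:.0f} mL/h/kg, "
      f"MRT = {mrt:.2f} h, Vss = {vss:.0f} mL/kg")
```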
Abstract:
In recent years there has been increased interest in applying non-parametric methods to real-world problems. Significant research has been devoted to Gaussian processes (GPs) due to their increased flexibility compared with parametric models. These methods use Bayesian learning, which generally leads to analytically intractable posteriors. This thesis proposes a two-step solution to construct a probabilistic approximation to the posterior. In the first step we adapt Bayesian online learning to GPs: the final approximation to the posterior is the result of propagating the first and second moments of intermediate posteriors obtained by combining a new example with the previous approximation. The propagation of functional forms is solved by showing the existence of a parametrisation of the posterior moments that uses combinations of the kernel function at the training points, transforming the Bayesian online learning of functions into a parametric formulation. The drawback is the prohibitive quadratic scaling of the number of parameters with the size of the data, making the method inapplicable to large datasets. The second step solves the problem of the exploding parameter size and makes GPs applicable to arbitrarily large datasets. The approximation is based on a measure of distance between two GPs, the KL-divergence between GPs. This second approximation is with a constrained GP in which only a small subset of the whole training dataset is used to represent the GP. This subset is called the Basis Vector (BV) set, and the resulting GP is a sparse approximation to the true posterior. As this sparsity is based on KL-minimisation, it is probabilistic and independent of the way the posterior approximation from the first step is obtained. We combine the sparse approximation with an extension to the Bayesian online algorithm that allows multiple iterations for each input, thus approximating a batch solution. The resulting sparse learning algorithm is a generic one: for different problems we only change the likelihood. The algorithm is applied to a variety of problems, and we examine its performance both on more classical regression and classification tasks and on data assimilation and a simple density estimation problem.
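The parametrisation described above can be seen in a minimal sketch (plain GP regression, not the thesis's online algorithm): the posterior mean is a combination of kernel functions centred at the training points, m(x) = Σᵢ αᵢ k(xᵢ, x), so the number of coefficients, and the kernel matrix behind them, grows with the training set, which is exactly the quadratic scaling the sparse BV approximation removes.

```python
import numpy as np

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel matrix between two sets of 1-D inputs."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

rng = np.random.default_rng(3)
x_train = rng.uniform(0, 5, 30)
y_train = np.sin(x_train) + rng.normal(0, 0.1, 30)

# Posterior mean coefficients: one alpha per training point.
K = rbf(x_train, x_train) + 0.1**2 * np.eye(30)   # kernel matrix + noise
alpha = np.linalg.solve(K, y_train)

x_test = np.linspace(0, 5, 7)
mean = rbf(x_test, x_train) @ alpha               # m(x) = sum_i alpha_i k(x_i, x)
print(np.round(mean, 3))
```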
Abstract:
The efficiency literature, using both parametric and non-parametric methods, has focused mainly on cost efficiency analysis rather than on profit efficiency. In for-profit organisations, however, the measurement of profit efficiency and its decomposition into technical and allocative efficiency is particularly relevant. In this paper a newly developed method is used to measure profit efficiency and to identify the sources of any shortfall in profitability (technical and/or allocative inefficiency). The method is applied to a set of Portuguese bank branches, first assuming a long-run and then a short-run profit maximisation objective. In the long run, most of the scope for profit improvement of bank branches lies in becoming more allocatively efficient. In the short run, most of the profit gain can be realised through higher technical efficiency. © 2003 Elsevier B.V. All rights reserved.
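One common way to write the decomposition mentioned above, sketched here as an illustration rather than the paper's exact formulation, splits the shortfall between observed and maximum profit into a technical and an allocative part:

```latex
% pi_obs: observed profit; pi_T: profit after removing technical
% inefficiency only; pi_max: maximum profit at observed prices.
\[
\underbrace{\pi^{\max} - \pi^{\mathrm{obs}}}_{\text{profit shortfall}}
= \underbrace{\pi^{T} - \pi^{\mathrm{obs}}}_{\text{technical}}
+ \underbrace{\pi^{\max} - \pi^{T}}_{\text{allocative}}
\]
```

Here $\pi^{T}$ is the profit a branch could earn by operating on the production frontier with its current input-output mix, so the first gap is closed by better practice and the second by a better mix at the prevailing prices.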
Abstract:
Non-parametric methods for efficiency evaluation were designed to analyse industries comprising multi-input multi-output producers and lacking data on market prices. Education is a typical example. In this chapter, we review applications of DEA in secondary and tertiary education, focusing on the opportunities that this offers for benchmarking at the institutional level. At the secondary level, we also investigate the disaggregation of efficiency measures into pupil-level and school-level effects. For higher education, while many analyses concern overall institutional efficiency, we also examine studies that take a more disaggregated approach, centred either on the performance of specific functional areas or on that of individual employees.
Abstract:
There has been a resurgence of interest in the mean trace length estimator of Pahl for window sampling of traces. The estimator has been dealt with by Mauldon and by Zhang and Einstein in recent publications. The estimator is a very useful one in that it is non-parametric. However, despite some discussion regarding the statistical distribution of the estimator, neither the recent works nor the original work by Pahl provide a rigorous basis for the determination of a confidence interval for the estimator, or of a confidence region for the estimator and the corresponding estimator of trace spatial intensity in the sampling window. This paper shows, by consideration of a simplified version of the problem but without loss of generality, that the estimator is in fact the maximum likelihood estimator (MLE) and that it can be considered essentially unbiased. As the MLE, it possesses the least variance of all estimators, and confidence intervals or regions should therefore be available through application of classical ML theory. It is shown that valid confidence intervals can in fact be determined. The results of the work and the calculations of the confidence intervals are illustrated by example. (C) 2003 Elsevier Science Ltd. All rights reserved.
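As a hedged sketch of the classical ML route to such an interval (the paper's own derivation may differ in detail), asymptotic normality of the MLE gives a Wald-type confidence interval:

```latex
% Generic Wald interval from classical ML theory; \hat{\mu} denotes the
% mean trace length MLE and I(\hat{\mu}) the observed Fisher information
% (illustrative notation, not necessarily the paper's).
\[
\hat{\mu} \;\pm\; z_{1-\alpha/2}\,\sqrt{I(\hat{\mu})^{-1}}
\]
```

A joint confidence region for the mean trace length and the trace spatial intensity follows in the same way from the inverse of the 2x2 Fisher information matrix.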
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, in fulfilment of the requirements for the degree of Master in Environmental Engineering.
Abstract:
Objective: To compare measurements of the upper arm cross-sectional areas (total arm area, arm muscle area, and arm fat area) of healthy neonates as calculated using anthropometry with the values obtained by ultrasonography. Materials and methods: This study was performed on 60 consecutively born healthy neonates: gestational age (mean ± SD) 39.6 ± 1.2 weeks, birth weight 3287.1 ± 307.7 g, 27 males (45%) and 33 females (55%). Mid-arm circumference and tricipital skinfold thickness measurements were taken on the left upper mid-arm according to the conventional anthropometric method to calculate total arm area, arm muscle area and arm fat area. The ultrasound evaluation was performed at the same arm location using a Toshiba Sonolayer SSA-250A®, which allows the calculation of the total arm area, arm muscle area and arm fat area from the number of pixels enclosed in the plotted areas. Statistical analysis: whenever appropriate, parametric and non-parametric tests were used in order to compare measurements of paired samples and of groups of samples. Results: No significant differences between males and females were found in any evaluated measurements, estimated either by anthropometry or by ultrasound. Also, the median total arm area did not differ significantly between the two methods (P = 0.337). Although there is evidence of concordance of the total arm area measurements (r = 0.68, 95% CI: 0.55–0.77), the two methods of measurement differed for arm muscle area and arm fat area. The estimated medians of the ultrasound measurements for arm muscle area were significantly lower than those estimated by the anthropometric method, differing by as much as 111% (P < 0.001). The estimated median ultrasound measurement of the arm fat area was higher than the anthropometric arm fat area by as much as 31% (P < 0.001). Conclusion: Compared with ultrasound measurements, using skinfold measurements and mid-arm circumference without further correction may lead to overestimation of the cross-sectional area of muscle and underestimation of the cross-sectional fat area. The correlation between the two methods could be interpreted as an indication for further search for correction factors in the equations.
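The conventional anthropometric equations referred to above are, in their standard form (the study may apply variants), the Gurney-Jelliffe formulas based on mid-arm circumference $C$ and triceps skinfold thickness $S$:

```latex
\[
A_{\text{total}} = \frac{C^{2}}{4\pi}, \qquad
A_{\text{muscle}} = \frac{(C - \pi S)^{2}}{4\pi}, \qquad
A_{\text{fat}} = A_{\text{total}} - A_{\text{muscle}}
\]
```

These treat the arm as a circular cylinder with a concentric muscle core, which is exactly the simplification the ultrasound comparison above calls into question.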
Abstract:
The Electrohysterogram (EHG) is a new instrument for pregnancy monitoring. It measures the uterine muscle electrical signal, which is closely related to uterine contractions. The EHG is described as a viable alternative to, and a more precise instrument than, the method currently most widely used for the description of uterine contractions: the external tocogram. The EHG has also been indicated as a promising tool in the assessment of preterm delivery risk. This work intends to contribute towards the characterization of the EHG through an inventory of its components, which are:
• Contractions;
• Labor contractions;
• Alvarez waves;
• Fetal movements;
• Long Duration Low Frequency Waves.
The instruments used for cataloging were: spectral analysis (parametric and non-parametric), energy estimators, time-frequency methods, and the tocogram annotated by expert physicians. The EHG records and respective tocograms were obtained from the Icelandic 16-electrode Electrohysterogram Database. 288 components were classified. No component database of this type was previously available for consultation. A spectral analysis and power estimation module was added to Uterine Explorer, an EHG analysis software package developed at FCT-UNL. The importance of this component database is related to the need to improve the understanding of the EHG, which is a relatively complex signal, as well as to contribute towards the detection of preterm birth. Preterm birth accounts for 10% of all births and is one of the most relevant obstetric conditions. Despite the technological and scientific advances in perinatal medicine, in developed countries prematurity is the major cause of neonatal death. Although various risk factors, such as previous preterm births, infection, uterine malformations, multiple gestation and short uterine cervix in the second trimester, have been associated with this condition, its etiology remains unknown [1][2][3].
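Non-parametric spectral analysis of the kind listed above is commonly done with Welch's averaged periodogram; the sketch below applies it to a synthetic EHG-like signal, with the sampling rate and frequency content as assumptions (uterine electrical activity is concentrated well below 1 Hz).

```python
import numpy as np
from scipy.signal import welch

fs = 20.0                          # Hz, assumed sampling rate
t = np.arange(0, 600, 1 / fs)      # ten minutes of synthetic signal
rng = np.random.default_rng(4)
signal = np.sin(2 * np.pi * 0.4 * t) + 0.5 * rng.normal(size=t.size)

# Welch's method: average periodograms over overlapping segments to
# trade frequency resolution for a lower-variance PSD estimate.
f, psd = welch(signal, fs=fs, nperseg=4096)
print(f"dominant frequency: {f[np.argmax(psd)]:.2f} Hz")
```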
Abstract:
In occupational exposure assessment of airborne contaminants, exposure levels can be estimated through repeated measurements of the pollutant concentration in air, through expert judgment, or through exposure models that use information on the conditions of exposure as input. In this report, we propose an empirical hierarchical Bayesian model to unify these approaches. Prior to any measurement, the hygienist conducts an assessment to generate prior distributions of exposure determinants. Monte-Carlo samples from these distributions feed two level-2 models: a physical, two-compartment model and a non-parametric, neural network model trained with existing exposure data. The outputs of these two models are weighted according to the expert's assessment of their relevance to yield predictive distributions of the long-term geometric mean and geometric standard deviation of the worker's exposure profile (level-1 model). Bayesian inferences are then drawn iteratively from subsequent measurements of worker exposure. Any traditional decision strategy based on a comparison with occupational exposure limits (e.g. mean exposure, exceedance strategies) can then be applied. Data on 82 workers exposed to 18 contaminants in 14 companies were used to validate the model with cross-validation techniques. A user-friendly program running the model is available upon request.
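The weighting step can be pictured with a schematic Monte-Carlo sketch (not the authors' code): draws from assumed priors on exposure determinants feed a stand-in physical model and a stand-in data-driven model, and an expert relevance weight mixes their log-scale outputs into a predictive distribution of the long-term geometric mean.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000

# Assumed priors on exposure determinants (illustrative values only).
emission = rng.lognormal(mean=1.0, sigma=0.5, size=n)      # generation rate
ventilation = rng.lognormal(mean=2.0, sigma=0.3, size=n)   # air exchange

log_physical = np.log(emission / ventilation)    # stand-in physical model
log_data_driven = rng.normal(-0.8, 0.6, size=n)  # stand-in trained model

w = 0.7                                  # expert weight on the physical model
pick = rng.random(n) < w                 # mixture of the two level-2 models
log_gm = np.where(pick, log_physical, log_data_driven)

gm = np.exp(log_gm)                      # predictive distribution of the GM
print(f"median GM = {np.median(gm):.2f}, 95th percentile = {np.percentile(gm, 95):.2f}")
```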