950 results for non-parametric smoothing
Abstract:
The aim of this thesis is to examine the weak-form efficiency of the stock markets of Russia, Slovakia, the Czech Republic, Romania, Bulgaria, Hungary and Poland. The study is quantitative, and daily index closing values were collected from the Datastream database, from each exchange's first trading day up to the end of August 2006. To sharpen the analysis, the data were examined both over the full period and over two sub-periods. Stock market efficiency is tested with four statistical methods, including an autocorrelation test and the non-parametric runs test. A further aim is to determine whether the day-of-the-week anomaly is present in these markets; its presence is examined using ordinary least squares (OLS) regression. The day-of-the-week anomaly is found in all of the above markets except the Czech market. Significant positive or negative autocorrelation is found in all of the stock markets, and the Ljung-Box test likewise indicates inefficiency in all markets over the full period. Based on the runs test, the random walk is rejected for all markets except Slovakia, at least over the full sample and the first sub-period. Moreover, the data are not normally distributed for any index or period. These findings indicate that the markets in question are not weak-form efficient.
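The non-parametric runs test used in studies like this one can be sketched in a few lines. This is a generic illustration of the Wald-Wolfowitz test with an above/below-median classification, not the thesis's own code:

```python
import math

def runs_test_z(returns):
    """Wald-Wolfowitz runs test for randomness of a return series.

    Each observation is classified as above or below the median and the
    number of runs of consecutive like signs is counted.  Under the
    random walk hypothesis the run count R is approximately normal with
    mean mu = 2*n1*n2/n + 1 and variance (mu-1)*(mu-2)/(n-1), so a
    large |z| rejects randomness.
    """
    s = sorted(returns)
    median = 0.5 * (s[(len(s) - 1) // 2] + s[len(s) // 2])
    signs = [r > median for r in returns if r != median]
    n1 = sum(signs)                 # observations above the median
    n2 = len(signs) - n1            # observations below the median
    n = n1 + n2
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    mu = 2.0 * n1 * n2 / n + 1.0
    var = (mu - 1.0) * (mu - 2.0) / (n - 1.0)
    return (runs - mu) / math.sqrt(var)
```

A strongly alternating series produces too many runs (z > 0), while a trending series produces too few (z < 0); both patterns count as evidence against a random walk.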
Abstract:
The aim of this thesis is to examine the efficiency of the Chinese stock markets and the validity of the random walk hypothesis, and also to determine whether the day-of-the-week anomaly is present in the Chinese stock markets. The data consist of the daily logarithmic returns of the Shanghai Stock Exchange A-share, B-share and composite indices and the Shenzhen composite index for the period 21 February 1992 to 30 December 2005, and of the Shenzhen Stock Exchange A-share and B-share indices for the period 5 October 1992 to 30 December 2005. Four statistical methods are used: an autocorrelation test, the non-parametric runs test, the variance ratio test and the Augmented Dickey-Fuller unit root test. The presence of the day-of-the-week anomaly is examined using ordinary least squares (OLS) regression. Tests are run both on the full sample and on three separate sub-periods. The empirical results of this thesis support earlier findings of inefficiency in the Chinese stock markets. Apart from the unit root test results, the random walk hypothesis is rejected for both Chinese stock markets on the basis of the autocorrelation, runs and variance ratio tests. The results show that on both exchanges the B-share indices have behaved considerably more contrary to the random walk hypothesis than the A-share indices. Except for the B-share markets, the efficiency of both Chinese stock markets also appeared to improve after the 2001 market boom. The results further show that the day-of-the-week anomaly is present on the Shanghai Stock Exchange, but not on the Shenzhen Stock Exchange over the full sample period.
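The variance ratio test mentioned above can be illustrated with a minimal Lo-MacKinlay-style sketch. The function name is an assumption, and the thesis's exact estimator and its asymptotic test statistic are not reproduced:

```python
def variance_ratio(log_returns, q):
    """Variance ratio VR(q): the variance of overlapping q-period
    returns divided by q times the one-period variance.  Under the
    random walk hypothesis VR(q) is close to 1; positive serial
    correlation pushes it above 1, mean reversion pulls it below 1."""
    n = len(log_returns)
    mu = sum(log_returns) / n
    var1 = sum((r - mu) ** 2 for r in log_returns) / (n - 1)
    # overlapping q-period returns
    qsums = [sum(log_returns[i:i + q]) for i in range(n - q + 1)]
    varq = sum((s - q * mu) ** 2 for s in qsums) / (len(qsums) - 1)
    return varq / (q * var1)
```

For i.i.d. returns VR(2) hovers near 1, while a positively autocorrelated series (e.g. an AR(1) process with coefficient 0.5) yields VR(2) near 1.5.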
Abstract:
Biological scaling analyses employing the widely used bivariate allometric model are beset by at least four interacting problems: (1) choice of an appropriate best-fit line with due attention to the influence of outliers; (2) objective recognition of divergent subsets in the data (allometric grades); (3) potential restrictions on statistical independence resulting from phylogenetic inertia; and (4) the need for extreme caution in inferring causation from correlation. A new non-parametric line-fitting technique has been developed that eliminates requirements for normality of distribution, greatly reduces the influence of outliers and permits objective recognition of grade shifts in substantial datasets. This technique is applied in scaling analyses of mammalian gestation periods and of neonatal body mass in primates. These analyses feed into a re-examination, conducted with partial correlation analysis, of the maternal energy hypothesis relating to mammalian brain evolution, which suggests links between body size and brain size in neonates and adults, gestation period and basal metabolic rate. Much has been made of the potential problem of phylogenetic inertia as a confounding factor in scaling analyses. However, this problem may be less severe than suspected earlier, because nested analyses of variance conducted on residual variation (rather than on raw values) reveal that there is considerable variance at low taxonomic levels. In fact, limited divergence in body size between closely related species is one of the prime examples of phylogenetic inertia. One common approach to eliminating perceived problems of phylogenetic inertia in allometric analyses has been calculation of 'independent contrast values'. It is demonstrated that the reasoning behind this approach is flawed in several ways.
Calculation of contrast values for closely related species of similar body size is, in fact, highly questionable, particularly when there are major deviations from the best-fit line for the scaling relationship under scrutiny.
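The partial correlation analysis invoked above has a simple closed form when a single variable is controlled for. This is a generic first-order sketch with hypothetical function names, not the authors' code:

```python
import math

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y controlling for z:
    r_xy.z = (r_xy - r_xz*r_yz) / sqrt((1 - r_xz^2)(1 - r_yz^2)).
    Equivalent to correlating the residuals of x and y after
    regressing each on z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))
```

Two variables driven by a shared size variable can correlate strongly in raw form yet show little (or even opposite) association once the shared driver is partialled out, which is exactly the situation the maternal energy hypothesis analysis must untangle.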
Abstract:
The ultimate goal of any research in the mechanism/kinematic/design area may be called predictive design, i.e. the optimisation of mechanism proportions at the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through the development and refinement of numerical (computational) technology to facilitate the design analysis and optimisation of complex mechanisms, mechanical components and systems. As part of a systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions, in the form of structural parameters, that accomplish the desired requirements of motion. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis formulations. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables, and the solutions are obtained by adopting closed-form classical or modern algebraic solution methods, or by using numerical solution methods based on polynomial continuation or homotopy. The approximate synthesis formulation is based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are: (ia) limitations on the number of design specifications and (iia) failure in handling design constraints, especially inequality constraints. The main drawbacks of the approximate synthesis formulations are: (ib) it is difficult to choose a proper initial linkage and (iib) it is hard to find more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, providing several solutions, but they cannot handle inequality constraints.
Based on practical design needs, a mixed exact-approximate position synthesis with two exact and an unlimited number of approximate positions has also been developed. The solution space is presented as a ground pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints on the structural parameters. Through the literature review it is first shown that the algebraic and numerical solution methods used in the research area of computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables, but cannot handle the singularities associated with positive-dimensional solution sets. In this thesis the problem of positive-dimensional solution sets is solved by adopting the main principles of the mathematical research area of algebraic geometry for solving parametric algebraic systems of n equations in at least n+1 variables (parametric in the mathematical sense that all parameter values for which the system is solvable, including the degenerate cases, are considered). Adopting the developed solution method to solve the dyadic equations in direct polynomial form for two to three precision points, it has been algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be resolved. The positive-dimensional solution sets associated with the poles may contain physically meaningful solutions in the form of optimal defect-free mechanisms. Traditionally, the mechanism optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process. This results in optimal component design rather than optimal system-level design.
Modern mechanism optimisation at the system level demands the integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated with mechanical system simulation techniques. The developed kinematic design method combines the two-precision-point formulation with optimisation of substructures (using mathematical programming techniques or optimisation methods based on probability and statistics), driven by criteria calculated from the system-level response of multi-degree-of-freedom mechanisms. For example, by adopting the mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, the drawbacks (ia)-(iib) are eliminated. The design principles of the developed method are based on the design-tree approach to mechanical systems, and the method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when the developed kinematic design method is integrated with mechanical system simulation techniques.
Abstract:
The aim of this paper is to analyse how bank performance has evolved, and the effects that ownership type has on it, in those Central and Eastern European countries that have experienced the European integration process most intensively in recent years. To this end, we analysed 242 banks from 12 countries (10 new EU members and 2 in accession negotiations). To verify the existence of an effect derived from ownership type, we analyse several dimensions of bank efficiency: profitability, costs and intermediation, applying both parametric and non-parametric techniques. The results show the existence of certain effects derived from ownership type. Among the main findings, privatised banks tend to show higher levels of profitability than banks under other types of ownership, while foreign-owned banks show the lowest average costs, although this difference is not statistically significant. We also analyse the importance of the presence of a strategic investor in bank ownership, finding an improvement that, while not significant for profitability ratios, is significant for general administrative expenses.
Abstract:
OBJECTIVES: To prospectively assess the stiffness of incidentally discovered focal liver lesions (FLL) in patients with no history of chronic liver disease or extrahepatic cancer using shear-wave elastography (SWE). METHODS: Between June 2011 and May 2012, all FLL fortuitously discovered on ultrasound examination were prospectively included. For each lesion, stiffness was measured (kPa). Characterization of the lesion relied on magnetic resonance imaging (MRI) and/or contrast-enhanced ultrasound, or biopsy. Tumour stiffness was analysed using ANOVA and non-parametric Mann-Whitney tests. RESULTS: 105 lesions were successfully evaluated in 73 patients (61 women, 84%) with a mean age of 44.8 years (range: 20-75). The mean stiffness was 33.3 ± 12.7 kPa for the 60 focal nodular hyperplasias (FNH), 19.7 ± 9.8 kPa for the 17 hepatocellular adenomas (HCA), 17.1 ± 7 kPa for the 20 haemangiomas, 11.3 ± 4.3 kPa for the five areas of focal fatty sparing, 34.1 ± 7.3 kPa for the two cholangiocarcinomas, and 19.6 kPa for the one hepatocellular carcinoma (p < 0.0001). There was no difference between the benign and the malignant groups (p = 0.64). FNHs were significantly stiffer than HCAs (p < 0.0001). Telangiectatic/inflammatory HCAs were significantly stiffer than steatotic HCAs (p = 0.014). The area under the ROC curve (AUROC) for differentiating FNH from other lesions was 0.86 ± 0.04. CONCLUSION: SWE may provide additional information for the characterization of FLL, and may help in differentiating FNH from HCAs and in subtyping HCAs. KEY POINTS: • SWE might be helpful for the characterization of solid focal liver lesions • SWE cannot differentiate benign from malignant liver lesions • FNHs are significantly stiffer than other benign lesions • Telangiectatic/inflammatory HCAs are significantly stiffer than steatotic ones.
Abstract:
Psychophysical studies suggest that humans preferentially use a narrow band of low spatial frequencies for face recognition. Here we asked whether artificial face recognition systems show improved recognition performance at the same spatial frequencies as humans. To this end, we estimated recognition performance over a large database of face images by computing three discriminability measures: Fisher Linear Discriminant Analysis, Non-Parametric Discriminant Analysis, and Mutual Information. To address frequency dependence, discriminabilities were measured as a function of (filtered) image size. All three measures revealed a maximum at the same image sizes, where the spatial frequency content corresponds to the psychophysically determined frequencies. Our results therefore support the notion that the critical band of spatial frequencies for face recognition in humans and machines follows from inherent properties of face images, and that the use of these frequencies is associated with optimal face recognition performance.
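For a single feature dimension, a Fisher-style discriminability measure reduces to between-class separation over within-class scatter. This is a one-dimensional toy version of the criterion behind Fisher Linear Discriminant Analysis, not the study's multivariate estimators:

```python
def fisher_ratio(class_a, class_b):
    """One-dimensional Fisher discriminant criterion: squared distance
    between class means divided by the sum of within-class variances.
    Larger values mean the feature separates the two classes better."""
    def mean_var(values):
        m = sum(values) / len(values)
        return m, sum((x - m) ** 2 for x in values) / len(values)
    ma, va = mean_var(class_a)
    mb, vb = mean_var(class_b)
    return (ma - mb) ** 2 / (va + vb)
```

Measuring such a ratio for image representations filtered at different spatial frequencies is one way to locate the frequency band at which discriminability peaks.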
Abstract:
BACKGROUND: Hallux valgus is one of the most common forefoot problems in females. Studies have looked at gait alterations due to hallux valgus deformity, assessing temporal, kinematic or plantar pressure parameters individually. The present study, however, aims to assess all listed parameters at once and to isolate the most clinically relevant gait parameters for moderate to severe hallux valgus deformity, with the intent of improving post-operative patient prognosis and rehabilitation. METHODS: The study included 26 feet with moderate to severe hallux valgus deformity and 30 feet with no sign of hallux valgus in female participants. Initially, weight-bearing radiographs and foot and ankle clinical scores were assessed. Gait assessment was then performed using pressure insoles (PEDAR®) and inertial sensors (Physilog®), and the two groups were compared using a non-parametric statistical hypothesis test (Wilcoxon rank sum, P<0.05). Furthermore, forward stepwise regression was used to reduce the number of gait parameters to the most clinically relevant, and the correlation of these parameters with the clinical score was assessed. FINDINGS: Overall, the results showed clear deterioration of several gait parameters in the hallux valgus group compared to controls, and 9 gait parameters (effect size between 1.03 and 1.76) were successfully isolated that best describe the altered gait in hallux valgus deformity (r(2)=0.71) and correlate well with clinical scores. INTERPRETATION: Our results, and the nine listed parameters, could serve as a benchmark for the characterization of hallux valgus and the objective evaluation of treatment efficacy.
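The Wilcoxon rank-sum comparison used for the two groups can be sketched as follows. This minimal version assumes no tied values and uses the large-sample normal approximation; the study presumably used standard statistical software:

```python
import math

def rank_sum_z(x, y):
    """Wilcoxon rank-sum (Mann-Whitney) z-statistic for two independent
    samples, assuming no ties.  Under the null hypothesis of identical
    distributions, the rank sum W of the first sample has mean
    n1*(n1+n2+1)/2 and variance n1*n2*(n1+n2+1)/12."""
    n1, n2 = len(x), len(y)
    pooled = sorted([(v, 0) for v in x] + [(v, 1) for v in y])
    # sum of the 1-based ranks that belong to the first sample
    w = sum(rank + 1 for rank, (_, grp) in enumerate(pooled) if grp == 0)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (w - mu) / sigma
```

Because only ranks enter the statistic, the test makes no normality assumption about the gait parameters, which is why it suits biomechanical data with skewed distributions.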
Abstract:
This thesis studies techniques for the detection of distributed denial of service attacks, which during the last decade have become one of the most serious network security threats. To evaluate different detection algorithms and further improve them, we need to test their performance under conditions as close to real-life situations as possible. Currently the only feasible solution for large-scale tests is a simulated environment. The thesis describes the implementation of a recursive non-parametric CUSUM algorithm for the detection of distributed denial of service attacks in the ns-2 network simulator, a de facto standard for network simulation.
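The recursive non-parametric CUSUM at the heart of the thesis can be sketched in a few lines. The drift and threshold values below are illustrative, and a real detector monitors a traffic statistic (e.g. a packet-rate ratio) rather than raw values:

```python
def cusum_detect(series, drift, threshold):
    """Recursive non-parametric CUSUM change detector.

    Accumulates positive deviations above a drift parameter:
        g_t = max(0, g_{t-1} + x_t - drift)
    and signals at the first index where g_t exceeds the threshold,
    returning -1 if no change point is found.  Because only the
    cumulative sum is kept, the detector runs in O(1) memory per step.
    """
    g = 0.0
    for t, x in enumerate(series):
        g = max(0.0, g + x - drift)
        if g > threshold:
            return t
    return -1
```

During normal traffic the statistic stays near zero; a sustained shift (such as a flood of attack packets) makes it grow linearly until the alarm threshold is crossed a few samples after the attack starts.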
Abstract:
Background: To determine generic utilities for Spanish chronic obstructive pulmonary disease (COPD) patients stratified by different classifications: GOLD 2007, GOLD 2013, GesEPOC 2012 and the BODEx index. Methods: Multicentre, observational, cross-sectional study. Patients were aged ≥40 years, with spirometrically confirmed COPD. Utility values were derived from the EQ-5D-3L. Means, standard deviations (SD), medians and interquartile ranges (IQR) were computed for each classification. Differences in median utilities between groups were assessed by non-parametric tests. Results: 346 patients were included, of whom 85.5% were male, with a mean age of 67.9 (SD = 9.7) years and a mean duration of COPD of 7.6 (SD = 5.8) years; 80.3% were ex-smokers and the mean smoking history was 54.2 (SD = 33.2) pack-years. Median utilities (IQR) by GOLD 2007 were 0.87 (0.22) for moderate, 0.80 (0.26) for severe and 0.67 (0.42) for very severe patients (p < 0.001 for all comparisons). Median utilities by GOLD 2013 were: group A, 1.0 (0.09); group B, 0.87 (0.13); group C, 1.0 (0.16); group D, 0.74 (0.29); comparisons were statistically significant (p < 0.001) except A vs C. Median utilities by GesEPOC phenotype were 0.84 (0.33) for non-exacerbator, 0.80 (0.26) for COPD-asthma overlap, 0.71 (0.62) for exacerbator with emphysema and 0.72 (0.57) for exacerbator with chronic bronchitis (p < 0.001). Comparisons between patients with and without exacerbations, and between patients with COPD-asthma overlap and exacerbators with chronic bronchitis, were statistically significant (p < 0.001). Median utilities by BODEx index were: group 0-2, 0.89 (0.20); group 3-4, 0.80 (0.27); group 5-6, 0.67 (0.29); group 7-9, 0.41 (0.31). All comparisons were significant (p < 0.001) except between groups 3-4 and 5-6. Conclusion: Irrespective of the classification used, utilities were associated with disease severity. Some clinical phenotypes were associated with worse utilities, probably related to a higher frequency of exacerbations. The GOLD 2007 guidelines and the BODEx index discriminated patients with a worse health status better than the GOLD 2013 guidelines, while the GOLD 2013 guidelines were better able to identify a smaller group of patients with the best health.
Abstract:
PURPOSE: Thoracic fat has been associated with an increased risk of coronary artery disease (CAD). As endothelium-dependent vasoreactivity is a surrogate of cardiovascular events and is impaired early in atherosclerosis, we aimed at assessing the possible relationship between thoracic fat volume (TFV) and endothelium-dependent coronary vasomotion. METHODS: Fifty healthy volunteers without known CAD or major cardiovascular risk factors (CRFs) prospectively underwent (82)Rb cardiac PET/CT to quantify myocardial blood flow (MBF) at rest, and MBF response to cold pressor testing (CPT-MBF) and adenosine (i.e., stress-MBF). TFV was measured by a 2D volumetric CT method and common laboratory blood tests (glucose and insulin levels, HOMA-IR, cholesterol, triglyceride, hsCRP) were performed. Relationships between CPT-MBF, TFV and other CRFs were assessed using non-parametric Spearman rank correlation testing and multivariate linear regression analysis. RESULTS: All of the 50 participants (58 ± 10 y) had normal stress-MBF (2.7 ± 0.6 mL/min/g; 95 % CI: 2.6-2.9) and myocardial flow reserve (2.8 ± 0.8; 95 % CI: 2.6-3.0), excluding underlying CAD. Univariate analysis revealed a significant inverse relation between absolute CPT-MBF and sex (ρ = -0.47, p = 0.0006), triglyceride (ρ = -0.32, p = 0.024) and insulin levels (ρ = -0.43, p = 0.0024), HOMA-IR (ρ = -0.39, p = 0.007), BMI (ρ = -0.51, p = 0.0002) and TFV (ρ = -0.52, p = 0.0001). MBF response to adenosine was also correlated with TFV (ρ = -0.32, p = 0.026). On multivariate analysis, TFV emerged as the only significant predictor of MBF response to CPT (p = 0.014). CONCLUSIONS: TFV is significantly correlated with endothelium-dependent and -independent coronary vasomotion. A high thoracic fat burden might negatively influence MBF response to CPT and to adenosine stress, even in persons without CAD, suggesting a link between thoracic fat and future cardiovascular events.
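The Spearman rank correlation used in the univariate analysis can be sketched as follows. This is a tie-free toy version (the study presumably used standard statistical software):

```python
def spearman_rho(x, y):
    """Spearman rank correlation for samples without ties, using the
    classical shortcut rho = 1 - 6*sum(d^2) / (n*(n^2 - 1)), where d is
    the difference between the ranks of paired observations.  Only the
    ordering of values matters, so monotone but nonlinear relationships
    are captured and outliers have bounded influence."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

Any strictly increasing pairing yields rho = 1 and any strictly decreasing pairing yields rho = -1, regardless of the actual magnitudes involved.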
Abstract:
In this paper a colour texture segmentation method which unifies region and boundary information is proposed. The algorithm uses a coarse detection of the perceptual (colour and texture) edges of the image to adequately place and initialise a set of active regions. The colour texture of regions is modelled by combining non-parametric kernel density estimation (which estimates the colour behaviour) with classical co-occurrence-matrix-based texture features. Region information is thus defined, and accurate boundary information can be extracted to guide the segmentation process. Regions then compete concurrently for the image pixels in order to segment the whole image taking both information sources into account. Furthermore, experimental results are presented which demonstrate the performance of the proposed method.
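The kernel density estimation component can be sketched for one dimension. The Gaussian kernel and the bandwidth value are illustrative choices; the paper's estimator over colour channels is not reproduced:

```python
import math

def kde(samples, x, bandwidth):
    """Gaussian kernel density estimate at point x: the average of
    normalised Gaussian bumps centred on the observed samples.  This is
    the non-parametric way to model a region's colour distribution
    without assuming any particular parametric form (e.g. a single
    Gaussian per region)."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                      for s in samples)
```

Evaluating the estimate at a candidate pixel's colour value gives the likelihood that the pixel belongs to the region, which is the quantity the competing active regions compare.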
Comparison of two atmospheric sampling methodologies using a non-parametric statistical tool
Abstract:
In atmospheric aerosol sampling, it is inevitable that the air carrying the particles is in motion, as a result of both externally driven wind and the sucking action of the sampler itself. High or low air-flow sampling speeds may lead to significant particle size bias. The objective of this work is to validate measurements enabling the comparison of species concentrations between the two air-flow sampling techniques. The presence of several outliers and an increase of residuals with concentration became obvious, requiring non-parametric methods, which are recommended for handling data that may not be normally distributed. In this way, conversion factors are obtained for each of the species under study using Kendall regression.
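Kendall-type (Theil-Sen) regression takes the median of all pairwise slopes, which is what makes it insensitive to the outliers noted above. A minimal sketch under that interpretation, not the authors' implementation:

```python
def theil_sen(x, y):
    """Theil-Sen robust line fit: the slope is the median of all
    pairwise slopes (y_j - y_i)/(x_j - x_i), and the intercept is the
    median of the residuals y - slope*x.  A few wild points cannot
    drag the fit, unlike ordinary least squares."""
    slopes = sorted((y[j] - y[i]) / (x[j] - x[i])
                    for i in range(len(x))
                    for j in range(i + 1, len(x))
                    if x[j] != x[i])
    slope = slopes[len(slopes) // 2]
    residuals = sorted(yi - slope * xi for xi, yi in zip(x, y))
    return slope, residuals[len(residuals) // 2]
```

The fitted slope plays the role of the conversion factor between the two sampling techniques: even with one grossly biased measurement, the median-of-slopes estimate recovers the underlying linear relation.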
Abstract:
The purpose of the thesis is to analyze whether the returns of the general stock market indices of Estonia, Latvia and Lithuania follow the random walk hypothesis (RWH) and, in addition, whether they are consistent with the weak-form efficiency criterion. The existence of the day-of-the-week anomaly is also examined in the same regional markets. The data consist of daily closing quotes of the OMX Tallinn, Riga and Vilnius total return indices for the sample period from January 3, 2000 to August 28, 2009; the full sample period is also divided into two sub-periods. The RWH is tested by applying three quantitative methods (the Augmented Dickey-Fuller unit root test, a serial correlation test and the non-parametric runs test). Ordinary least squares (OLS) regression with dummy variables is employed to detect day-of-the-week anomalies. The RWH is rejected in the Estonian and Lithuanian stock markets. The Latvian stock market exhibits more efficient behaviour, although some evidence of inefficiency is also found, mostly during the first sub-period from 2000 to 2004. Day-of-the-week anomalies are detected on every stock market examined, though no longer during the later sub-period.
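The OLS dummy-variable regression used for the day-of-the-week tests has a convenient property: with five weekday dummies and no intercept, the regressors are orthogonal and the OLS coefficients equal the mean return for each weekday. This sketch computes that reduced form directly (illustrative, not the thesis's code, which would also report significance tests on the coefficients):

```python
def weekday_effects(returns, weekdays):
    """OLS coefficients for a regression of returns on five
    day-of-the-week dummies with no intercept.  Because the dummy
    columns are orthogonal, the coefficients reduce to the mean return
    observed on each weekday (0 = Monday, ..., 4 = Friday)."""
    sums = [0.0] * 5
    counts = [0] * 5
    for r, d in zip(returns, weekdays):
        sums[d] += r
        counts[d] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```

A day-of-the-week anomaly would then show up as one coefficient (classically Monday's) differing significantly from the rest.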
Abstract:
Ten common doubts of chemistry students and professionals about their statistical applications are discussed. The use of the N-1 denominator instead of N is described for the standard deviation. The statistical meaning of the denominators of the root mean square error of calibration (RMSEC) and root mean square error of validation (RMSEV) are given for researchers using multivariate calibration methods. The reason why scientists and engineers use the average instead of the median is explained. Several problematic aspects about regression and correlation are treated. The popular use of triplicate experiments in teaching and research laboratories is seen to have its origin in statistical confidence intervals. Nonparametric statistics and bootstrapping methods round out the discussion.
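The N versus N-1 question discussed above comes down to one line of arithmetic. A sketch, with the `ddof` parameter name borrowed from NumPy's convention:

```python
def std(values, ddof=1):
    """Sample standard deviation.  ddof=1 gives the N-1 (Bessel-
    corrected) denominator, which makes the variance an unbiased
    estimate of the population variance when the mean is itself
    estimated from the data; ddof=0 gives the plain N denominator."""
    n = len(values)
    mu = sum(values) / n
    var = sum((v - mu) ** 2 for v in values) / (n - ddof)
    return var ** 0.5
```

Because the sample mean is fitted to the same data, the squared deviations around it are systematically too small; dividing by N-1 instead of N compensates, and the N-1 result is always slightly larger.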