943 results for Non-parametric Tests


Relevance: 80.00%

Abstract:

Feature extraction is the part of a pattern recognition system in which sensor data are transformed into a form more suitable for the machine to interpret. The purpose of this step is also to reduce the amount of information passed to later stages of the system while preserving the information essential for discriminating the data into different classes. In image analysis, for instance, raw image intensities are vulnerable to environmental effects such as lighting changes, and feature extraction can serve to detect features that are invariant to certain types of illumination change. Classification then makes decisions based on the transformed data. The main focus of this thesis is the development of new methods for embedded feature extraction based on local non-parametric image descriptors; feature analysis is also carried out for the selected image features. Low-level Local Binary Pattern (LBP) features play the central role in the analysis. In the embedded domain, a pattern recognition system must usually meet strict performance constraints, such as high speed, compact size, and low power consumption. The characteristics of the final system can be seen as a trade-off among these metrics, largely determined by decisions made during the implementation phase. Implementation alternatives for LBP-based feature extraction are explored in the embedded domain in the context of focal-plane vision processors; in particular, the thesis demonstrates LBP extraction on the MIPA4k massively parallel focal-plane processor IC. Higher-level processing is also incorporated through a framework for implementing a single-chip face recognition system. Furthermore, a new LBP-based method for determining optical flow, designed specifically for the embedded domain, is presented. Inspired by principles observed in the feature analysis of Local Binary Patterns, an extension to the well-known non-parametric rank transform is proposed, and its performance is evaluated in face recognition experiments on a standard dataset. Finally, an a priori model in which LBPs are seen as combinations of n-tuples is presented.
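The basic LBP descriptor referred to above thresholds each pixel's eight neighbours against the centre value and packs the comparison bits into a byte. A minimal NumPy sketch of the 3x3 variant (illustrative only, not the thesis's MIPA4k implementation):

```python
import numpy as np

def lbp_8neighbor(img: np.ndarray) -> np.ndarray:
    """Basic 3x3 Local Binary Pattern: each interior pixel is encoded
    by thresholding its 8 neighbours against the centre value."""
    img = img.astype(np.int32)
    center = img[1:-1, 1:-1]
    # Clockwise neighbour offsets starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy : img.shape[0] - 1 + dy,
                       1 + dx : img.shape[1] - 1 + dx]
        codes |= (neighbor >= center).astype(np.uint8) << bit
    return codes
```

Because only the ordering of intensities matters, the resulting codes are invariant to any monotonic change in illumination, which is the property the abstract highlights.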

Relevance: 80.00%

Abstract:

In studies worldwide, interleukin-6 (IL-6) has been implicated in age-related disturbances. The aim of the present report was to determine the possible association of the IL-6 -174 C/G promoter polymorphism with the cytokine profile as well as with the presence of selected cardiovascular risk features. This was a cross-sectional study of Brazilian women aged 60 years or older. A sample of 193 subjects was investigated for impaired glucose regulation, diabetes, hypertension, and dyslipidemia. Genotyping was done by direct sequencing of PCR products. IL-6 and C-reactive protein were quantified by high-sensitivity assays. General linear regression models or the Student t-test were used to compare continuous variables among genotypes, followed by adjustments for confounding variables; the chi-square test was used to compare categorical variables. The genotypes were consistent with Hardy-Weinberg equilibrium proportions. Under a recessive model, mean waist-to-hip ratio, serum glycated hemoglobin, and serum glucose were markedly lower in C homozygotes (P = 0.001, 0.028, and 0.047, respectively). Under a dominant model, G homozygotes displayed a trend towards higher levels of circulating IL-6 (P = 0.092). Non-parametric analysis revealed that impaired fasting glucose and hypertension were approximately 2-fold more frequent among G homozygous subjects (P = 0.042 and 0.043, respectively). Taken together, our results show that the IL-6 -174 G allele is implicated in greater cardiovascular risk. To our knowledge, this is the first investigation of IL-6 promoter variants and age-related disturbances in the Brazilian elderly population.
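The categorical comparisons above (e.g., hypertension frequency by genotype group) rest on the chi-square test. A minimal sketch with a hypothetical 2x2 table (not the study's actual counts):

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = G homozygotes vs. C-allele carriers,
# columns = hypertensive vs. normotensive (NOT the study's data).
table = [[28, 17],
         [61, 87]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
```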

Relevance: 80.00%

Abstract:

The autonomic nervous system plays an important role in physiological and pathological conditions and has been extensively evaluated by parametric and non-parametric spectral analysis. To compare the results obtained with the fast Fourier transform (FFT) and the autoregressive (AR) method, we performed a comprehensive comparative study using data from humans and rats during pharmacological blockade (in rats), a postural test (in humans), and in the hypertensive state (in both humans and rats). Although postural hypotension in humans induced an increase in the normalized low-frequency power (LFnu) of systolic blood pressure, the accompanying increase in the LF/HF ratio was detected only by AR. In rats, AR and FFT did not agree on LFnu or normalized high-frequency power (HFnu) under basal conditions or after vagal blockade. The increase in the LF/HF ratio of the pulse interval induced by methylatropine was detected only by FFT. In hypertensive patients, changes in LF and HF for systolic blood pressure were observed only by AR; FFT was able to detect the reduction in both blood pressure variance and total power. In hypertensive rats, AR yielded different values of variance and total power for systolic blood pressure. Moreover, AR and FFT gave discordant results for LF, LFnu, HF, the LF/HF ratio, and total power for the pulse interval. We provide evidence of disagreement in 23% of the indices of blood pressure and heart rate variability in humans, and 67% discordance in rats, when these variables are evaluated by AR and FFT under physiological and pathological conditions. The overall disagreement between AR and FFT in this study was 43%.
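To illustrate the FFT side of such a comparison, LF and HF power can be obtained by integrating a Welch periodogram of an evenly resampled pulse-interval series over the conventional human frequency bands. This is a sketch of the general approach, not the paper's pipeline; the band limits and 4 Hz resampling rate are the standard assumptions:

```python
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr_seconds: np.ndarray, fs: float = 4.0):
    """FFT-based LF/HF from an RR-interval series resampled at fs Hz."""
    t = np.cumsum(rr_seconds)
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rr_even = np.interp(grid, t, rr_seconds)          # even resampling
    f, psd = welch(rr_even - rr_even.mean(), fs=fs,
                   nperseg=min(256, grid.size))
    lf_band = (f >= 0.04) & (f < 0.15)                # standard LF band
    hf_band = (f >= 0.15) & (f < 0.40)                # standard HF band
    lf = np.trapz(psd[lf_band], f[lf_band])
    hf = np.trapz(psd[hf_band], f[hf_band])
    return lf, hf, lf / hf
```

An AR-based estimate of the same spectrum would instead fit autoregressive coefficients and evaluate the model's transfer function, which is exactly where the two methods can disagree on band powers.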

Relevance: 80.00%

Abstract:

Microeconomic impacts of mergers and acquisitions in the world's energy industries: an analysis of the 1990s. The energy industries witnessed significant growth in global mergers and acquisitions (M&As) during the 1990s. According to UNCTAD statistics, the total value of global M&A deals (domestic and cross-border) in the electricity, oil, and gas sectors reached US$329 billion over the 1990-1999 period. The present paper sheds light on the M&A process in the energy industries during this period and, based on a sample of 248 transactions carried out by 18 large energy enterprises, develops an empirical microeconomic analysis of the impact of these transactions on the performance of the firms involved. Overall, the results show significant improvements in firm performance after M&A operations for the following variables: sales, net profits, assets, dividends, and, to a lesser extent, the net profits/sales ratio.
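The abstract does not name its significance test, but a before/after firm-performance comparison of this kind is often run as a paired non-parametric test. A sketch using the Wilcoxon signed-rank test on made-up pre- and post-deal sales figures, purely for illustration:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical sales (US$ millions) for 10 firms before and after a deal.
pre  = np.array([320, 410, 150, 980, 260, 720, 190, 540, 610, 330])
post = np.array([350, 470, 160, 1100, 250, 800, 230, 600, 590, 380])

stat, p = wilcoxon(post - pre)
print(f"W = {stat}, p = {p:.3f}")  # a small p suggests a systematic shift
```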

Relevance: 80.00%

Abstract:

With the recent growth in cultural complexity, many organizations are faced with increasingly diverse employee pools. Gaining a greater understanding of the values that employees possess is the first step in effectively satisfying their needs and achieving a more productive workforce (Jung & Avolio, 2000). Values play a significant role in influencing individual behaviours. It is therefore necessary to assess the qualities of employee value systems and directly link them to the values of the organization. The importance of values and value congruence has been emphasized by many organizational behaviour researchers (cf. Adkins & Caldwell, 2004; Erdogan, Kraimer, & Liden, 2004; Jung & Avolio, 2000; Rokeach, 1973); however, the emphasis on value studies remains fairly stagnant within the sport industry (Amis, Slack, & Hinings, 2002). To examine the realities constructed by the participants, this study provides a holistic view of the impact of values within a specific sport organization. The purpose of this case study was to examine organizational and employee values in order to understand the effects of values and value congruence on employee behaviours within the context of a large Canadian sport organization. A multiple-methods case study approach was adopted to fully serve this purpose and provide a comprehensive view of the organization being examined. Document analysis, observations, surveys, and semi-structured interviews were conducted. This process allowed for triangulation and confirmability of the findings. Each method contributed to an overarching understanding of the values and value congruence within the organization. The analysis of the findings was divided into qualitative and quantitative sections. The qualitative documents were analyzed twice, once manually by the researcher and once via Atlas.ti Version 4 (1998). The a priori and emergent coding was based on triangulating the findings and uncovering common themes throughout the data. The Rokeach Value Survey (1973), which was incorporated into the survey design of the study, was analyzed using descriptive statistics as well as Mann-Whitney U and Kruskal-Wallis tests, deemed appropriate given the non-parametric nature of the survey instrument (Kinnear & Gray, 2004). The quantitative survey helped define the values and value congruence that were then holistically examined through the qualitative interviews, document analyses, and observations. The results indicated incongruence between employee values and those stated or perceived as the organization's values. The findings demonstrated that varying levels of congruence may have diverse effects on individual behaviours, ranging from production levels to interactions with fellow employees to turnover. In addition to the findings pertaining to the research questions, a number of other key issues were uncovered regarding departmentalization, communication, and board relations. Each has contributed to a greater understanding of the organization and has created direction for further research in this area.
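Both rank-based tests named above are available in SciPy. A minimal sketch on hypothetical value-rating data (not the study's responses; the department names are invented):

```python
from scipy.stats import mannwhitneyu, kruskal

# Hypothetical rank-order ratings of one value, by department.
marketing  = [2, 5, 3, 1, 4, 6]
operations = [7, 9, 6, 8, 5, 10]
finance    = [4, 6, 8, 5, 7, 3]

u, p_u = mannwhitneyu(marketing, operations, alternative="two-sided")
h, p_h = kruskal(marketing, operations, finance)
print(f"Mann-Whitney U = {u}, p = {p_u:.3f}")
print(f"Kruskal-Wallis H = {h:.2f}, p = {p_h:.3f}")
```

Mann-Whitney U compares two independent groups; Kruskal-Wallis extends the same rank logic to three or more, which suits ordinal instruments like the Rokeach survey.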

Relevance: 80.00%

Abstract:

This research used a quantitative approach to investigate the "boy crisis" in Canada. Boy-crisis advocates suggest that boys are being surpassed by girls on reading assessments, and they promote strategies to assist male students. A feminist framework was used in this study, allowing investigation and discussion of the factors that mediate between gender and success at reading comprehension, interpretation, and response to text without ignoring female students. Reading scores and questionnaire data compiled by the Pan-Canadian Assessment Program were used, specifically the PCAP-13 2007 assessment of approximately 30,000 13-year-old students from all Canadian provinces and the Yukon Territory (CMEC, 2008). Approximately 20,000 participants wrote the reading assessment, while 30,000 students completed the questionnaire. Predictor variables were tested using parametric tests such as the independent-samples t-test, one-way ANOVA, chi-square analysis, and Pearson r. Findings indicate that although boys scored lower than girls on the PCAP-13 2007 reading assessment, several factors influence the reading scores of both male and female students to varying degrees: socioeconomic status, perceptions of the reading material used in language arts classrooms, reading preference, reading interest, parental involvement, parental encouragement for reading, and self-efficacy all affected the reading performance of boys and girls. Relationships between variables were also found and are discussed. The analysis presented here allows parents, educators, and policy makers to begin to critically examine and re-evaluate boy-crisis literature, and it offers suggestions on how to improve reading performance for students of all socioeconomic backgrounds.
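The three parametric tests and the correlation named above map directly onto SciPy functions. A sketch on simulated scores (the means, scales, and variable names are invented, not PCAP-13 values):

```python
import numpy as np
from scipy.stats import ttest_ind, f_oneway, pearsonr

rng = np.random.default_rng(0)
# Hypothetical reading scores (not PCAP-13 data).
boys  = rng.normal(480, 60, 200)
girls = rng.normal(500, 60, 200)
low_ses, mid_ses, high_ses = (rng.normal(m, 60, 120) for m in (470, 495, 515))
interest = rng.uniform(1, 5, 200)                 # 1-5 reading-interest scale
scores   = 440 + 15 * interest + rng.normal(0, 50, 200)

t, p_t = ttest_ind(boys, girls)                   # gender gap
f, p_f = f_oneway(low_ses, mid_ses, high_ses)     # SES effect
r, p_r = pearsonr(interest, scores)               # interest vs. score
print(f"t = {t:.2f} (p = {p_t:.3g}); F = {f:.2f} (p = {p_f:.3g}); r = {r:.2f}")
```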

Relevance: 80.00%

Abstract:

Recent work suggests that the conditional variance of financial returns may exhibit sudden jumps. This paper extends the non-parametric procedure of Delgado and Hidalgo (1996) for detecting discontinuities in otherwise continuous functions of a random variable to higher conditional moments, in particular the conditional variance. Simulation results show that the procedure provides reasonable estimates of the number and location of jumps. The procedure detects several jumps in the conditional variance of daily returns on the S&P 500 index.
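The flavour of such a procedure can be conveyed by contrasting one-sided kernel averages of squared returns on each side of every candidate point: a discontinuity in the conditional variance shows up as a large gap between the left and right local means. This is a toy illustration of the idea, not the Delgado-Hidalgo statistic itself:

```python
import numpy as np

def variance_jump_scan(x: np.ndarray, r: np.ndarray, h: float):
    """Contrast one-sided local means of r^2 (a proxy for conditional
    variance) within bandwidth h on each side of every point; large
    absolute gaps flag candidate jump locations."""
    order = np.argsort(x)
    x, r2 = x[order], r[order] ** 2
    gaps = np.full(x.size, np.nan)
    for i, xi in enumerate(x):
        left  = r2[(x >= xi - h) & (x < xi)]
        right = r2[(x > xi) & (x <= xi + h)]
        if left.size and right.size:
            gaps[i] = right.mean() - left.mean()
    return x, gaps
```

In the actual paper the contrast is built from kernel estimators with a formal limiting distribution, so jump locations come with significance levels rather than an ad hoc threshold.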

Relevance: 80.00%

Abstract:

One of the unsupervised learning models generating the most active research is the Boltzmann machine, in particular the restricted Boltzmann machine, or RBM. An important aspect of both training and exploiting such a model is sampling. Two recent developments, fast persistent contrastive divergence (FPCD) and herding, aim to improve this aspect, focusing mainly on the learning process itself. Notably, herding gives up obtaining a precise estimate of the RBM's parameters, instead defining a distribution through a dynamical system guided by the training examples. We generalize these ideas to obtain algorithms for exploiting the probability distribution defined by a pre-trained RBM, by drawing representative samples from it, without requiring the training set. We present three methods: sample penalization (based on a theoretical intuition), as well as FPCD and herding using constant statistics for the positive phase. These methods define dynamical systems that produce samples with the desired statistics, and we evaluate them using a non-parametric density estimation method. We show that these methods mix substantially better than the conventional method, Gibbs sampling.
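The conventional baseline mentioned at the end, block Gibbs sampling in an RBM, alternates between sampling the hidden layer given the visible layer and vice versa. A minimal NumPy sketch for a binary RBM with randomly initialized weights (illustrative only, not the thesis code):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gibbs_step(v, W, b_vis, b_hid):
    """One block-Gibbs transition v -> h -> v' for a binary RBM."""
    p_h = sigmoid(v @ W + b_hid)                  # P(h = 1 | v)
    h = (rng.random(p_h.shape) < p_h).astype(float)
    p_v = sigmoid(h @ W.T + b_vis)                # P(v = 1 | h)
    return (rng.random(p_v.shape) < p_v).astype(float)

# Usage: run a chain from a random visible state.
n_vis, n_hid = 784, 128
W = rng.normal(0, 0.01, (n_vis, n_hid))
v = (rng.random(n_vis) < 0.5).astype(float)
for _ in range(1000):
    v = gibbs_step(v, W, np.zeros(n_vis), np.zeros(n_hid))
```

The slow mixing of exactly this chain is what motivates the three alternative dynamical systems the thesis proposes.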

Relevance: 80.00%

Abstract:

The main objective of this work is to study in depth certain advanced biostatistical techniques in evaluative research in adult cardiac surgery. The studies were designed to integrate the concepts of survival analysis, regression analysis with propensity scores, and cost analysis. The first manuscript evaluates survival after surgical repair of acute dissection of the ascending aorta. The statistical analyses used include: survival analyses with parametric hazard-phase regression and other parametric (exponential, Weibull), semi-parametric (Cox), or non-parametric (Kaplan-Meier) methods; survival compared with a cohort matched for age, sex, and race using government life tables; and regression models with bootstrapping and a multinomial logit model. The study showed that survival improved over 25 years, in connection with changes in surgical techniques and diagnostic imaging. The second manuscript focuses on the outcomes of isolated coronary artery bypass grafting in patients with a history of percutaneous coronary intervention. The statistical analyses used include: regression models with propensity scores; a complex (1:3) matching algorithm; and statistical analyses appropriate for matched groups (standardized differences, generalized estimating equations, stratified Cox model). The study showed that percutaneous coronary intervention performed 14 days or more before coronary bypass surgery is not associated with adverse short- or long-term outcomes. The third manuscript evaluates the financial consequences and demographic changes experienced by a university hospital centre following the establishment of a satellite cardiac surgery program. The statistical analyses used include: multivariate two-way ANOVA regression models (logistic, linear, or ordinal); propensity scores; and cost analyses with parametric log-normal models. "Survival" analysis models were also explored, using cost instead of time as the dependent variable, and led to similar conclusions. The study showed that, after the satellite program was established, fewer low-complexity patients were referred from the satellite program's region to the university hospital centre, with an increase in nursing workload and costs.
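Of the survival methods listed for the first manuscript, the non-parametric Kaplan-Meier estimator is the simplest to sketch: the survival curve is a product over event times of (1 - d_i/n_i), where d_i patients die at time t_i out of n_i still at risk. A minimal NumPy product-limit implementation on hypothetical follow-up data:

```python
import numpy as np

def kaplan_meier(time: np.ndarray, event: np.ndarray):
    """Product-limit estimate: S(t) = prod over event times t_i <= t
    of (1 - d_i / n_i), with d_i deaths at t_i and n_i at risk."""
    order = np.argsort(time)
    time, event = time[order], event[order].astype(bool)
    surv, s = [], 1.0
    for t in np.unique(time[event]):
        n_at_risk = np.sum(time >= t)
        d = np.sum((time == t) & event)
        s *= 1.0 - d / n_at_risk
        surv.append((t, s))
    return surv

# Hypothetical follow-up (months); event 1 = death, 0 = censored.
t = np.array([6, 12, 12, 18, 24, 30, 36, 48])
e = np.array([1, 1, 0, 1, 0, 1, 0, 0])
for ti, si in kaplan_meier(t, e):
    print(f"S({ti}) = {si:.3f}")
```

Censored observations leave the chain through the risk set without contributing a death, which is exactly what distinguishes this estimator from a naive empirical survival fraction.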

Relevance: 80.00%

Abstract:

This thesis presents Bernstein estimators, which are recent alternatives to the various classical estimators of distribution and density functions. More precisely, we study their properties and compare them with those of the empirical distribution function and of the kernel estimator. We derive an asymptotic expression for the first two moments of the Bernstein estimator of the distribution function. As with the classical estimators, we show that this estimator satisfies the Chung-Smirnov property under certain conditions. We then show that the Bernstein estimator is better than the empirical distribution function in terms of mean squared error. Examining the asymptotic behaviour of Bernstein estimators, we show that, for a suitable choice of the polynomial degree, these estimators are asymptotically normal. Numerical studies on a few classical distributions confirm that Bernstein estimators can be preferable to the classical ones.
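For data supported on [0, 1], the Bernstein estimator of the distribution function smooths the empirical CDF F_n with Bernstein polynomial weights: F_hat(x) = sum_{k=0..m} F_n(k/m) * C(m, k) * x^k * (1-x)^(m-k). A sketch, assuming data already on [0, 1] and using the fact that the Bernstein basis equals the Binomial(m, x) pmf:

```python
import numpy as np
from scipy.stats import binom

def bernstein_cdf(data: np.ndarray, m: int):
    """Bernstein estimator of the CDF for data on [0, 1]."""
    data = np.sort(data)
    # Empirical CDF evaluated on the grid k/m, k = 0..m.
    Fn = np.searchsorted(data, np.arange(m + 1) / m, side="right") / data.size

    def F_hat(x):
        k = np.arange(m + 1)
        # Bernstein weights C(m,k) x^k (1-x)^(m-k) = Binomial(m, x) pmf.
        return float(np.sum(Fn * binom.pmf(k, m, x)))
    return F_hat

# Usage: smooth CDF of a Beta(2, 5) sample.
rng = np.random.default_rng(1)
F = bernstein_cdf(rng.beta(2, 5, 500), m=30)
print(f"F_hat(0.3) = {F(0.3):.3f}")
```

The degree m plays the role of the bandwidth in kernel estimation: larger m tracks F_n more closely, smaller m smooths more.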

Relevance: 80.00%

Abstract:

The Mann-Kendall non-parametric test was employed to detect trends in the monthly, seasonal, and annual precipitation of five meteorological subdivisions of Central Northeast India (CNE India) for different 30-year normal periods (NP): 1889–1918 (NP1), 1919–1948 (NP2), 1949–1978 (NP3), and 1979–2008 (NP4). Trends in maximum and minimum temperatures were also investigated. The slopes of the trend lines were determined by least-squares linear fitting. Morlet wavelet analysis was applied to monthly rainfall during June–September, total monsoon-season rainfall, and annual rainfall to identify periodicities and to test their significance using the power spectrum method. The inferences drawn from the analyses will help policy managers, planners, and agricultural scientists work out irrigation and water management options under various possible climatic eventualities for the region. The long-term (1889–2008) mean annual rainfall of CNE India is 1,195.1 mm, with a standard deviation of 134.1 mm and a coefficient of variation of 11%. There is a significant decreasing trend of 4.6 mm/year for Jharkhand and 3.2 mm/year for CNE India. Since rice is the important kharif crop (May–October) in this region, the decreasing trend of rainfall during July may delay or affect the transplanting/vegetative phase of the crop, and assured irrigation is very much needed to tackle drought situations. During December, all the meteorological subdivisions except Jharkhand show a significant decreasing trend of rainfall during the recent normal period NP4. The decrease of rainfall during December may hamper the sowing of wheat, the important rabi crop (November–March) in most parts of this region. Maximum temperature shows a significant rising trend of 0.008°C/year (at the 0.01 level) during the monsoon season and 0.014°C/year (at the 0.01 level) during the post-monsoon season over the period 1914–2003. The annual maximum temperature also shows a significant increasing trend of 0.008°C/year (at the 0.01 level) during the same period. Minimum temperature shows a significant rising trend of 0.012°C/year (at the 0.01 level) during the post-monsoon season and a significant falling trend of 0.002°C/year (at the 0.05 level) during the monsoon season. A significant 4–8-year peak periodicity band was noticed during September over western UP, and a 30–34-year periodicity was observed during July over the Bihar subdivision. However, for CNE India as a whole, no significant periodicity was noticed in any of the time series.
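The Mann-Kendall statistic counts concordant minus discordant pairs, S = sum over i < j of sgn(x_j - x_i), and compares a continuity-corrected Z against the standard normal. A compact implementation, omitting the tie correction for brevity and run on invented rainfall figures rather than the CNE India record:

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x: np.ndarray):
    """Mann-Kendall trend test (no tie correction): returns S, Z, p."""
    n = len(x)
    s = sum(np.sign(x[j] - x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))        # two-sided p-value
    return s, z, p

# Hypothetical annual rainfall series (mm).
rain = np.array([1310, 1275, 1298, 1240, 1222, 1256, 1190, 1175, 1201, 1148])
print(mann_kendall(rain))
```

Because only the signs of pairwise differences enter S, the test is insensitive to outliers and to any monotonic transformation of the data, which is why it is favoured for hydroclimatic series.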

Relevance: 80.00%

Abstract:

Enhancing the financial inclusion of rural communities is often recognised as a key strategy for achieving economic development in third-world countries. The main objective of this study was to examine the factors that influence consumers' choice of a rural bank in the Gicumbi district of Rwanda. Data were collected using structured questionnaires and analysed using a binary probit regression model and non-parametric procedures. Most consumers were aware of the Popular Bank of Rwanda (BPR) and Umurenge SACCO through radio advertisements, social networks, and community meetings. Accessibility, interest rates, and quality of services influenced the choice of a given financial intermediary. Moreover, the decision to open a rural bank account was significantly influenced by education and farm size (p < 0.1). These findings can help financial managers design more successful marketing campaigns.
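A binary probit of account ownership on covariates such as education and farm size can be fit with statsmodels. The variable names, coefficients, and simulated data below are illustrative assumptions, not the study's dataset:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
# Hypothetical covariates: years of schooling and farm size (hectares).
education = rng.integers(0, 12, n).astype(float)
farm_size = rng.gamma(2.0, 1.5, n)
# Simulated latent utility; 1 = opened a rural bank account.
latent = -1.5 + 0.12 * education + 0.20 * farm_size + rng.normal(size=n)
has_account = (latent > 0).astype(int)

X = sm.add_constant(np.column_stack([education, farm_size]))
model = sm.Probit(has_account, X).fit(disp=False)
print(model.summary())
```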

Relevance: 80.00%

Abstract:

As stated in Aitchison (1986), a proper study of relative variation in a compositional data set should be based on logratios, and dealing with logratios excludes dealing with zeros. Nevertheless, zero observations may be present in real data sets, either because the corresponding part is completely absent (essential zeros) or because it is below the detection limit (rounded zeros). Because the second kind of zero is usually understood as "a trace too small to measure", it seems reasonable to replace it by a suitable small value, and this has been the traditional approach. As stated, e.g., by Tauber (1999) and by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000), the principal problem in compositional data analysis relates to rounded zeros. One should be careful to use a replacement strategy that does not seriously distort the general structure of the data. In particular, the covariance structure of the involved parts (and thus the metric properties) should be preserved, as otherwise further analysis on subpopulations could be misleading. Following this point of view, a non-parametric imputation method is introduced in Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000). This method is analyzed in depth by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2003), where it is shown that the theoretical drawbacks of the additive zero replacement method proposed in Aitchison (1986) can be overcome using a new multiplicative approach on the non-zero parts of a composition. The new approach has reasonable properties from a compositional point of view. In particular, it is "natural" in the sense that it recovers the "true" composition if the replacement values are identical to the missing values, and it is coherent with the basic operations on the simplex. This coherence implies that the covariance structure of subcompositions with no zeros is preserved. As a generalization of the multiplicative replacement, the same paper introduces a substitution method for missing values in compositional data sets.
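The multiplicative replacement described above imputes each rounded zero with a small value delta and multiplicatively rescales only the non-zero parts, so ratios among them (and hence the subcompositional covariance structure) are preserved. A sketch for a composition closed to 1, with delta as an assumed tuning constant:

```python
import numpy as np

def multiplicative_replacement(x: np.ndarray, delta: float = 1e-3):
    """Replace zeros in a composition summing to 1 by delta and
    rescale the non-zero parts multiplicatively; the result still
    sums to 1 and keeps the ratios among the non-zero parts."""
    x = np.asarray(x, dtype=float)
    zeros = x == 0
    return np.where(zeros, delta, x * (1.0 - delta * zeros.sum()))

comp = np.array([0.60, 0.25, 0.15, 0.0])
print(multiplicative_replacement(comp))  # sums to 1; 0.60/0.25 unchanged
```

An additive replacement would instead shift every part, perturbing all pairwise ratios, which is the drawback the 2003 paper identifies.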

Relevance: 80.00%

Abstract:

There is hardly a case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these "zero data" represent a mathematical challenge for interpretation. We need to start by recognizing that there are genuine zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot co-exist with nepheline. Another common essential zero is a north azimuth of 0°; however, we can always replace that zero with the value 360°. These are known as "essential zeros", but what can we do with "rounded zeros" that result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is one solution, but sometimes we need to differentiate between sodic and potassic alteration. Pre-classification into groups requires good knowledge of the distribution of the data and of the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the detection limit of the equipment will generate spurious distributions, especially in ternary diagrams. The same will occur if we replace the zero values by a small amount using non-parametric or parametric techniques (imputation). The method we propose takes into consideration the well-known relationships between certain elements. For example, in copper porphyry deposits there is always a good direct correlation between copper and molybdenum values, but while copper will always be above the detection limit, many of the molybdenum values will be rounded zeros. We therefore take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the rounded-zero values of molybdenum from their corresponding copper values. The method can be applied to any type of data, provided we first establish the correlation dependency. One of the main advantages of this method is that we do not assign a fixed value to the rounded zeros, but one that depends on the value of the other variable. Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency.
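The Cu-Mo idea can be sketched as a least-squares fit on detected pairs followed by prediction for the below-detection samples. The abstract fits its regression on the lower quartile of the detected molybdenum values; the sketch below simplifies by fitting on all detected pairs, and works on a log scale consistent with lognormal assays. The variable names and data are hypothetical:

```python
import numpy as np

def impute_mo_from_cu(cu: np.ndarray, mo: np.ndarray):
    """Estimate below-detection Mo values (coded as 0) from Cu via a
    log-log regression fitted on samples where both were detected."""
    detected = mo > 0
    slope, intercept = np.polyfit(np.log(cu[detected]),
                                  np.log(mo[detected]), 1)
    mo_est = mo.astype(float).copy()
    mo_est[~detected] = np.exp(intercept + slope * np.log(cu[~detected]))
    return mo_est

# Hypothetical assays (ppm); zeros in mo are below the detection limit.
cu = np.array([1200.0, 950.0, 400.0, 2100.0, 300.0, 150.0])
mo = np.array([  22.0,  15.0,   0.0,   48.0,   0.0,   0.0])
print(impute_mo_from_cu(cu, mo))
```

Each imputed molybdenum value varies with its copper value, which is exactly the advantage over replacing every rounded zero with one fixed constant.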