837 results for Chi-Squared Goodness of Fit Test
Abstract:
Linkage and association studies are major analytical tools in the search for susceptibility genes for complex diseases. With the availability of large collections of single nucleotide polymorphisms (SNPs) and rapid progress in high-throughput genotyping technologies, together with the ambitious goals of the International HapMap Project, genetic markers covering the whole genome will be available for genome-wide linkage and association studies. To avoid inflating the type I error rate in genome-wide linkage and association studies, a multiple-testing adjustment of the significance level for each independent linkage and/or association test is required, which has led to suggested genome-wide significance cut-offs as low as 5 × 10⁻⁷. Almost no linkage and/or association study can meet such a stringent threshold with standard statistical methods, so developing new statistics with higher power is urgently needed. This dissertation proposes and explores a class of novel test statistics, applicable to both population-based and family-based genetic data, built on a completely new strategy: nonlinear transformations of the sample means are used to construct test statistics for linkage and association studies. Extensive simulation studies illustrate the properties of the nonlinear test statistics, power calculations are performed both analytically and empirically, and real data sets are analyzed with the nonlinear test statistics. The results show that the nonlinear test statistics have correct type I error rates, and most of those studied have higher power than the standard chi-square test. This dissertation introduces a new approach to designing test statistics with high power and may open new ways of mapping susceptibility genes for complex diseases.
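Where the abstract above contrasts its nonlinear statistics with the standard chi-square test, a minimal baseline sketch may help: the following Python snippet runs the standard chi-square association test on a hypothetical case-control genotype table. The counts and the 2 × 3 layout are illustrative, not the dissertation's data or its novel statistics.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2 x 3 genotype table; rows: cases, controls; columns: AA, Aa, aa
table = np.array([[120, 60, 20],
                  [100, 80, 20]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")

# In a genome-wide scan, p would be compared against a threshold near 5e-7,
# which is why higher-powered statistics are sought
print("meets genome-wide cut-off:", p < 5e-7)
```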
Abstract:
The determination of the size and power of a test is a vital part of clinical trial design. This research focuses on the simulation of clinical trial data with time-to-event as the primary outcome, and investigates the impact of different recruitment patterns and time-dependent hazard structures on the size and power of the log-rank test. A non-homogeneous Poisson process is used to simulate entry times according to the different accrual patterns, and a Weibull distribution is employed to simulate survival times according to the different hazard structures. The size of the log-rank test is estimated by simulating survival times with identical hazard rates in the treatment and control arms, giving a hazard ratio of one. The power of the log-rank test at specific hazard ratios (≠ 1) is estimated by simulating survival times with different but proportional hazard rates for the two arms. Different shapes (constant, decreasing, or increasing) of the Weibull hazard function are also considered to assess the effect of the hazard structure on the size and power of the log-rank test.
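As a rough illustration of the simulation design described above, the following Python sketch estimates the size and power of the log-rank test from Weibull survival times with proportional hazards. It simplifies the design by using fixed administrative censoring in place of the non-homogeneous Poisson accrual process, and all parameter values are illustrative.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

def logrank_chi2(time, event, group):
    """Two-sample log-rank chi-square statistic (1 df); assumes some events."""
    obs_minus_exp, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        obs_minus_exp += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return obs_minus_exp**2 / var

def rejection_rate(hr, n=100, shape=1.5, scale=2.0, follow_up=3.0, reps=500):
    """Empirical size (hr=1) or power (hr!=1) of the 5%-level log-rank test."""
    rej = 0
    for _ in range(reps):
        group = rng.integers(0, 2, 2 * n)
        # Proportional hazards under a Weibull: a hazard ratio hr corresponds
        # to multiplying the scale parameter by hr**(-1/shape)
        scl = np.where(group == 1, scale * hr ** (-1 / shape), scale)
        t = scl * rng.weibull(shape, 2 * n)
        event = (t <= follow_up).astype(int)      # administrative censoring
        time = np.minimum(t, follow_up)
        if logrank_chi2(time, event, group) > chi2.ppf(0.95, 1):
            rej += 1
    return rej / reps

print("size  ~", rejection_rate(hr=1.0))   # should be near the nominal 0.05
print("power ~", rejection_rate(hr=0.6))
```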
Abstract:
The purpose of this study was to evaluate students' lunch consumption relative to NSLP guidelines, the contribution of competitive foods to calorie intake at lunch, and differences in nutrient and food group intake between a la carte food consumers and non-consumers. In Fall 2011, 1170 elementary and 440 intermediate students were observed anonymously during school lunch; the foods eaten, their source, grade level, and gender were recorded. All a la carte offerings met the Texas School Nutrition Policy. Differences in nutrient and food group intake by grade level, and between students who consumed a la carte items and those who did not, were assessed using ANCOVA. A chi-squared analysis was conducted to evaluate differences in a la carte food consumption by grade level, gender, and the school's low-income status. Average lunch intakes were 457 calories (SD 164) for elementary students and 541 calories (SD 188) for intermediate students (p < 0.001). In total, 760 students (47%) consumed 937 a la carte foods, the most frequently consumed items being chips (32%), ice cream (22%), and snack items (18%). Mean a la carte intakes were 60 and 98 calories in elementary and intermediate schools, respectively (p < 0.001). Significantly more intermediate students (34.3%) than elementary students (27.5%) consumed a la carte items (p < 0.001). Students who consumed a la carte foods had significantly higher intakes of calories (p < 0.001), fat (p < 0.001), sodium (p < 0.002), fiber (p < 0.001), added sugar (p < 0.001), total grains (p < 0.001), dessert foods (p < 0.001), and snack chips (p < 0.001), and lower intakes of vitamin A (p < 0.001), iron (p < 0.001), fruit (p < 0.022), vegetables (p < 0.031), milk (p < 0.001), and juice (p < 0.001) than students who did not. Although previous studies have found that reducing the availability of unhealthy items at school decreases student consumption of those items, the results of this study indicate that even the strict guidelines set forth by the state of Texas are not sufficient to prevent increased caloric intake and poor nutrient intake. Strategies to improve student selection and consumption at school lunch when a la carte foods are available are warranted.
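The grade-level chi-squared comparison reported above can be sketched as follows. The counts are reconstructed approximately from the stated percentages (27.5% of 1170 elementary and 34.3% of 440 intermediate students), so the snippet is illustrative rather than a reproduction of the study's analysis.

```python
import numpy as np
from scipy.stats import chi2_contingency

consumed = np.array([round(0.275 * 1170), round(0.343 * 440)])  # ~322, ~151
total = np.array([1170, 440])
table = np.array([consumed, total - consumed]).T  # rows: elementary, intermediate

# chi2_contingency applies Yates' continuity correction on 2 x 2 tables by default
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # p well below 0.05, consistent with the report
```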
Abstract:
We report oxygen and carbon stable isotope analyses of foraminifers, primarily planktonic, sampled at low resolution in the Cretaceous and Paleogene sections from Sites 1257, 1258, and 1260. Data from two samples from Site 1259 are also reported. The very low resolution of the data only allows us to detect climate-driven isotopic events on timescales longer than 500 k.y. A several-million-year-long interval of overall increase in planktonic δ18O is seen in the Cenomanian at Site 1260. Before and after this interval, foraminifers from Cenomanian and Turonian black shales have δ18O values in the range −4.2 to −5.0 per mil, suggestive of upper ocean temperatures higher than modern tropical values. The δ18O values of upper-ocean-dwelling Paleogene planktonics exhibit a long-term increase from the early Eocene to the middle Eocene. During shipboard and postcruise processing, it proved difficult to extract well-preserved foraminifer tests from black shales by conventional techniques. Here, we report the results of a test of procedures for cleaning foraminifers in Cretaceous organic-rich mudstone sediments using various combinations of soaking in bleach, Calgon/hydrogen peroxide, or Cascade, accompanied by drying, repeat soaking, or sonication. A procedure that used 100% bleach, no detergent, and no sonication yielded the largest number of clean, whole foraminifers with the shortest preparation time. We found no significant difference in δ18O or δ13C values among sets of multiple samples of the planktonic foraminifer Whiteinella baltica extracted following each cleaning procedure.
Abstract:
A methodology is presented to measure the fiber/matrix interface shear strength in composites. The strategy is based on performing a fiber push-in test on the central fiber of highly packed fiber clusters with hexagonal symmetry, which are often found in unidirectional composites with a high fiber volume fraction. The mechanics of this test were analyzed in detail by means of three-dimensional finite element simulations. In particular, the influence of different parameters (interface shear strength, toughness, and friction, as well as fiber longitudinal elastic modulus and curing stresses) on the critical load at the onset of debonding was established. From the results of the numerical simulations, a simple relationship between the critical load and the interface shear strength is proposed. The methodology was validated on a unidirectional C/epoxy composite, and the advantages and limitations of the proposed approach are indicated.
Abstract:
The modal analysis of a structural system consists in computing its vibration modes. The experimental way to estimate these modes requires exciting the system with a measured or known input and then measuring the system output at different points using sensors; system inputs and outputs are then used to compute the modes of vibration. When the system is a large structure like a building or a bridge, the tests have to be performed in situ, so it is not possible to measure system inputs such as wind or traffic. Even if a known input is applied, the procedure is usually difficult and expensive, and uncontrolled disturbances still act on the system at the time of the test. These facts led to the idea of computing the modes of vibration using only the measured vibrations, regardless of the inputs that originated them, whether ambient vibrations (wind, earthquakes, ...) or operational loads (traffic, human loading, ...). This procedure is usually called Operational Modal Analysis (OMA) and in general consists in fitting a mathematical model to the measured data under the assumption that the unobserved excitations are realizations of a stationary stochastic process (usually white noise); the modes of vibration are then computed from the estimated model. The first issue investigated in this thesis is the performance of the Expectation-Maximization (EM) algorithm for maximum likelihood estimation of the state-space model in the field of OMA. The algorithm is described in detail, and how to apply it to vibration data is analysed. It is then compared with another well-known method, the Stochastic Subspace Identification algorithm. The maximum likelihood estimate enjoys optimal properties from a statistical point of view, which makes it very attractive in practice, but the most remarkable property of the EM algorithm is that it can be used to address a wide range of situations in OMA. In this work, three additional state-space models are proposed and estimated using the EM algorithm:
• The first model estimates the modes of vibration when several tests are performed on the same structural system. Instead of analysing record by record and then computing averages, the EM algorithm is extended for the joint estimation of the proposed state-space model using all the available data.
• The second model estimates the modes of vibration when the number of available sensors is lower than the number of points to be tested. In such cases it is usual to perform several tests, changing the position of the sensors from one test to the next (multiple sensor setups). Here, the proposed state-space model and the EM algorithm estimate the modal parameters taking the data of all setups into account.
• The third model estimates the modes of vibration in the presence of unmeasured inputs that cannot be modelled as white noise. In these cases, the frequency components of the inputs cannot be separated from the eigenfrequencies of the system, and spurious modes appear in the identification process. The idea is to measure the response of the structure under different inputs; the parameters common to all the data are then attributed to the structure (modes of vibration), while the parameters found only in a specific test are attributed to the input of that test. The problem is solved using the proposed state-space model and the EM algorithm.
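As a small illustration of the final step common to all the models above, the following Python sketch extracts natural frequencies and damping ratios from the eigenvalues of an already-estimated discrete-time state matrix A. The single-mode system and its parameters are hypothetical stand-ins for an EM-estimated model.

```python
import numpy as np
from scipy.linalg import expm

def modal_parameters(A, dt):
    """Natural frequencies (Hz) and damping ratios from a discrete-time A."""
    lam = np.linalg.eigvals(A)          # discrete-time eigenvalues
    mu = np.log(lam) / dt               # map to continuous time
    freqs = np.abs(mu) / (2 * np.pi)    # natural frequencies [Hz]
    zetas = -mu.real / np.abs(mu)       # damping ratios
    pos = mu.imag > 0                   # keep one of each conjugate pair
    return freqs[pos], zetas[pos]

# Hypothetical single-mode system: 2 Hz natural frequency, 1% damping
wn, zeta, dt = 2 * np.pi * 2.0, 0.01, 0.01
Ac = np.array([[0.0, 1.0], [-wn**2, -2 * zeta * wn]])  # continuous-time matrix
A = expm(Ac * dt)                                       # exact discretization

f, z = modal_parameters(A, dt)
print(f"frequency = {f[0]:.3f} Hz, damping ratio = {z[0]:.4f}")
```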
Abstract:
The pressuremeter test in boreholes has proven to be a useful tool in geotechnical exploration, especially when its results are compared with those obtained from a mathematical model governed by a representative soil constitutive equation. The numerical model presented in this paper is intended to be the reference framework for the interpretation of this test. The model analyses variables such as the type of response, the initial state, the drainage regime, and the constitutive equations. It is a finite element model able to work either with an undeformed mesh or with one that adapts to the deformation.
Abstract:
As sustainability reporting (SR) practices have been increasingly adopted by corporations over the last twenty years, most of the existing literature on SR has stressed the role of external determinants (such as institutional and stakeholder pressures) in explaining this uptake. However, given that recent evidence points to a broader range of motives and uses (both external and internal) of SR, we contend that its role within company-level activities deserves greater academic attention. To address this research gap, this paper provides a more detailed examination of the organizational characteristics acting as drivers and/or barriers to SR integration within corporate sustainability practices at the company level. More specifically, we suggest that substantive SR implementation can be predicted by assessing the level of fit between the organization and the SR framework being adopted. Building on this hypothesis, our theoretical model defines three forms of fit (technical, cultural, and political) and identifies the organizational characteristics associated with each. Finally, implications for academic research, businesses, and policy-makers are derived.
Abstract:
Flavonoids are secondary metabolites derived from phenylalanine and acetate metabolism that perform a variety of essential functions in higher plants. Studies over the past 30 years have supported a model in which flavonoid metabolism is catalyzed by an enzyme complex localized to the endoplasmic reticulum [Hrazdina, G. & Wagner, G. J. (1985) Arch. Biochem. Biophys. 237, 88–100]. To test this model further, we assayed for direct interactions between several key flavonoid biosynthetic enzymes in developing Arabidopsis seedlings. Two-hybrid assays indicated that chalcone synthase, chalcone isomerase (CHI), and dihydroflavonol 4-reductase interact in an orientation-dependent manner. Affinity chromatography and immunoprecipitation assays further demonstrated interactions between chalcone synthase, CHI, and flavanone 3-hydroxylase in lysates from Arabidopsis seedlings. These results support the hypothesis that the flavonoid enzymes assemble as a macromolecular complex with contacts between multiple proteins. Evidence was also found for posttranslational modification of CHI. The importance of understanding the subcellular organization of elaborate enzyme systems is discussed in the context of metabolic engineering.
Abstract:
In three experiments, electric brain waves of 19 subjects were recorded under several different experimental conditions for two purposes. One was to test how well we could recognize which sentence, from a set of 24 or 48 sentences, was being processed in the cortex. The other was to study the invariance of brain waves between subjects. As in our earlier work, the analysis consisted of averaging over trials to create prototypes and test samples, to both of which Fourier transforms were applied, followed by filtering and an inverse transformation to the time domain. A least-squares criterion of fit between prototypes and test samples was used for classification. In all three experiments, averaging over subjects improved the recognition rates. The most significant finding was the following. When brain waves were averaged separately for two nonoverlapping groups of subjects, one for prototypes and the other for test samples, we were able to recognize correctly 90% of the brain waves generated by 48 different sentences about European geography.
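The averaging, Fourier filtering, and least-squares classification pipeline described above can be sketched in a few lines of Python. The synthetic signals below stand in for the recorded brain waves, and the trial counts, filter cut-off, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_classes, n_trials, n_samples, keep = 4, 50, 256, 12

# Synthetic "brain waves": one characteristic waveform per class plus noise
templates = rng.standard_normal((n_classes, n_samples))
def trials(c):
    return templates[c] + 3.0 * rng.standard_normal((n_trials, n_samples))

def lowpass(x, keep):
    """Zero all but the lowest `keep` frequencies, then invert the FFT."""
    X = np.fft.rfft(x)
    X[keep:] = 0
    return np.fft.irfft(X, n=x.shape[-1])

# Prototypes: filtered averages over one set of trials
prototypes = np.array([lowpass(trials(c).mean(axis=0), keep)
                       for c in range(n_classes)])

correct = 0
for c in range(n_classes):
    test = lowpass(trials(c).mean(axis=0), keep)  # independent test average
    # Least-squares criterion: pick the prototype with the smallest residual
    pred = np.argmin(((prototypes - test) ** 2).sum(axis=1))
    correct += pred == c
print(f"recognized {correct}/{n_classes}")
```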
Abstract:
In two experiments, electric brain waves of 14 subjects were recorded under several different conditions to study the invariance of brain-wave representations of simple patches of colors and simple visual shapes and their names, the words blue, circle, etc. As in our earlier work, the analysis consisted of averaging over trials to create prototypes and test samples, to both of which Fourier transforms were applied, followed by filtering and an inverse transformation to the time domain. A least-squares criterion of fit between prototypes and test samples was used for classification. The most significant results were these. By averaging over different subjects, as well as trials, we created prototypes from brain waves evoked by simple visual images and test samples from brain waves evoked by auditory or visual words naming the visual images. We correctly recognized from 60% to 75% of the test-sample brain waves. The general conclusion is that simple shapes such as circles and single-color displays generate brain waves surprisingly similar to those generated by their verbal names. These results, taken together with extensive psychological studies of auditory and visual memory, strongly support the solution proposed for visual shapes, by Bishop Berkeley and David Hume in the 18th century, to the long-standing problem of how the mind represents simple abstract ideas.
Abstract:
Data from three previous experiments were analyzed to test the hypothesis that brain waves of spoken or written words can be represented by the superposition of a few sine waves. First, we averaged the data over trials and a set of subjects, and, in one case, over experimental conditions as well. Next we applied a Fourier transform to the averaged data and selected those frequencies with high energy, in no case more than nine in number. The superpositions of these selected sine waves were taken as prototypes. The averaged unfiltered data were the test samples. The prototypes were used to classify the test samples according to a least-squares criterion of fit. The results were seven of seven correct classifications for the first experiment using only three frequencies, six of eight for the second experiment using nine frequencies, and eight of eight for the third experiment using five frequencies.
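A minimal sketch of the prototype construction described above, selecting the highest-energy frequencies of an averaged signal and taking their superposition, might look as follows; the synthetic signal and the choice of five sine waves are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 512, 5                     # samples, number of sine waves kept

t = np.arange(n)
# Stand-in for averaged data: two sinusoids plus noise
signal = (np.sin(2 * np.pi * 3 * t / n) + 0.6 * np.sin(2 * np.pi * 7 * t / n)
          + 0.3 * rng.standard_normal(n))

X = np.fft.rfft(signal)
top = np.argsort(np.abs(X))[::-1][:k]    # k highest-energy frequencies
X_kept = np.zeros_like(X)
X_kept[top] = X[top]
prototype = np.fft.irfft(X_kept, n=n)    # superposition of k sine waves

residual = np.mean((signal - prototype) ** 2) / np.mean(signal ** 2)
print(f"relative squared error of {k}-sine prototype: {residual:.3f}")
```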
Abstract:
Under present conditions of global competitiveness, rapid technological advance, and scarce resources, innovation has become one of the most important strategic approaches an organization can exploit. In this context, a firm's innovation capability, its capacity to engage in the introduction of new processes, products, or ideas, is recognized as one of the main sources of sustainable growth, effectiveness, and even survival for organizations. However, only a few companies have understood in practice what it takes to innovate successfully, and most see innovation as a major challenge. The reality is no different for Brazilian companies, and in particular for small and medium-sized enterprises (SMEs). Studies indicate that SMEs in general show an even larger deficit in innovation capability. In response to the challenge of innovating, a broad literature has emerged on various aspects of innovation. Nevertheless, innovation research is still considered to offer few conclusive results or comprehensive models, given the complexity of a multifaceted phenomenon driven by numerous factors. Moreover, there is a gap between what is known from the general innovation literature and the literature on innovation in SMEs. Given the relevance of innovation capability and the slow advance in understanding it in the context of small and medium-sized firms, whose difficulties in innovating can still be observed, this study set out to identify the determinants of the innovation capability of SMEs in order to build a model of high innovation capability for this group of companies. The stated objective was addressed through a quantitative method involving binary logistic regression to analyse, from the perspective of SMEs, the 15 determinants of innovation capability identified in the literature review. To apply the logistic regression technique, the categorical dependent variable was transformed into a binary one, with group 0 labelled unremarkable innovation capability and group 1 defined as high innovation capability. The total sample was then split into two subsamples, one for analysis containing 60% of the companies and the other for validation (holdout) with the remaining 40% of cases. The overall adequacy of the model was evaluated using the pseudo-R² (McFadden), chi-square (Hosmer and Lemeshow), and hit-rate (classification matrix) measures. Once this evaluation confirmed the overall fit of the model, the coefficients of the variables included in the final model were analysed for significance level, direction, and magnitude. Finally, the final logistic model was validated through the hit rate of the validation sample. The logistic regression analysis showed that 4 variables were positively and significantly correlated with the innovation capability of SMEs and therefore differentiate companies with high innovation capability from those with unremarkable innovation capability. Based on this finding, the final model of high innovation capability for SMEs was built from 4 determinants: external knowledge base (external), project management capability (internal), internal knowledge base (internal), and strategy (internal).
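A hedged sketch of the analysis pipeline described above (binary logistic regression, a 60/40 analysis/holdout split, and a holdout hit rate) is given below in Python with scikit-learn. The data are synthetic, and the assumption that 4 of 15 predictors drive the outcome merely mirrors the thesis's finding rather than its data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n, p = 400, 15                        # firms, candidate determinants
X = rng.standard_normal((n, p))
# Assume (for illustration) that 4 determinants truly drive high capability
logit = X[:, :4] @ np.array([1.0, 0.8, 0.6, 0.5]) - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_fit, X_hold, y_fit, y_hold = train_test_split(
    X, y, test_size=0.40, random_state=0)    # 60% analysis, 40% holdout

model = LogisticRegression(max_iter=1000).fit(X_fit, y_fit)
print("holdout hit rate:", model.score(X_hold, y_hold))
print("largest coefficients:", np.argsort(-np.abs(model.coef_[0]))[:4])
```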
Abstract:
We present a modelling method to estimate the 3-D geometry and location of homogeneously magnetized sources from magnetic anomaly data. As input information, the procedure needs the parameters defining the magnetization vector (intensity, inclination, and declination) and the direction of the Earth's magnetic field. When these two vectors are expected to differ in direction, we propose estimating the magnetization direction from the magnetic map. Using this information, we then apply an inversion approach based on a genetic algorithm, which finds the geometry of the sources by seeking the optimum solution from an initial population of models over successive iterations of an evolutionary process. The evolution consists of three genetic operators (selection, crossover, and mutation), which act on each generation, and a smoothing operator, which looks for the best fit to the observed data together with a solution consisting of plausible compact sources. The method allows the use of non-gridded, non-planar, and inaccurate anomaly data and non-regular subsurface partitions. In addition, neither constraints on the depth to the top of the sources nor an initial model is necessary, although previous models can be incorporated into the process. We show the results of a test using two complex synthetic anomalies to demonstrate the efficiency of our inversion method. The application to real data is illustrated with aeromagnetic data from the volcanic island of Gran Canaria (Canary Islands).
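The evolutionary loop described above (selection, crossover, and mutation acting on a population of candidate models) can be sketched generically. The following Python snippet minimizes a toy least-squares misfit in place of a real magnetic forward model, so it illustrates only the structure of the algorithm, not the authors' implementation (the smoothing operator is omitted).

```python
import numpy as np

rng = np.random.default_rng(4)

def misfit(m, target):
    """Stand-in for the anomaly misfit between model m and observed data."""
    return np.sum((m - target) ** 2)

target = rng.uniform(-1, 1, 8)            # "true" model parameters
pop = rng.uniform(-1, 1, (60, 8))         # initial population of models

for generation in range(200):
    fit = np.array([misfit(m, target) for m in pop])
    parents = pop[np.argsort(fit)[:30]]   # selection: keep the best half
    i, j = (rng.integers(0, 30, 60) for _ in range(2))
    mask = rng.random((60, 8)) < 0.5
    pop = np.where(mask, parents[i], parents[j])          # uniform crossover
    # Mutation: perturb roughly 20% of the offspring with small Gaussian noise
    pop += 0.05 * rng.standard_normal(pop.shape) * (rng.random((60, 1)) < 0.2)

best = pop[np.argmin([misfit(m, target) for m in pop])]
print("final misfit:", misfit(best, target))
```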
Abstract:
Objective. To determine the effects of using a birth ball during labour on the duration of the dilation and expulsive stages, perineal integrity, perceived pain intensity, and safety. Method. Randomized controlled clinical trial. Participants: low-risk nulliparous women aged 18 to 35, at term. Intervention: performing movements while seated on a birth ball during labour. Outcome variables: duration of the dilation and expulsive stages; perineal integrity; pain perception, recalled pain in the puerperium, and pre-post intervention pain; type of delivery; reason for dystocia; Apgar score; admission to the neonatal ICU. Analysis: group comparison using Student's t-test for continuous variables and the chi-square test for categorical ones, with significance at p ≤ 0.05. Results. 58 participants (34 in the experimental group and 24 in the control group). The duration of the dilation and expulsive stages and perineal integrity were similar between groups. At 4 cm of dilation, the experimental group reported less pain than the control group: 6.9 points vs 8.2 (p = 0.039). The difference in pain perception recalled in the immediate puerperium was 1.48 points higher in the control group (p = 0.003). Pain in the experimental group measured 7.45 points before use of the birth ball and 6.07 points after the intervention (p < 0.001). There were no differences between groups in the safety-related variables. Conclusion. The use of birth balls reduces the perception of labour pain and is safe.
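A minimal sketch of the group comparison named in the Analysis section, Student's t-test for a continuous outcome and a chi-square test for a categorical one, is shown below. The arrays are synthetic stand-ins: the group means match the reported 6.9 vs 8.2 pain scores, but the standard deviations and the safety table are assumed.

```python
import numpy as np
from scipy.stats import ttest_ind, chi2_contingency

rng = np.random.default_rng(5)

# Synthetic pain scores (0-10 scale): experimental (n=34) vs control (n=24)
pain_exp = np.clip(rng.normal(6.9, 1.5, 34), 0, 10)
pain_ctl = np.clip(rng.normal(8.2, 1.5, 24), 0, 10)
t, p = ttest_ind(pain_exp, pain_ctl)      # Student's t, equal variances
print(f"t = {t:.2f}, p = {p:.4f}")

# Hypothetical 2 x 2 table for a categorical safety outcome (e.g. NICU admission)
table = np.array([[2, 32], [2, 22]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```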