945 results for joint hypothesis tests


Relevance:

30.00%

Publisher:

Abstract:

Impairment of cognitive performance during and after high-altitude climbing has been described in numerous studies and has mostly been attributed to cerebral hypoxia and the resulting functional and structural cerebral alterations. To investigate the hypothesis that high-altitude climbing leads to cognitive impairment, we used neuropsychological tests and measurements of eye movement (EM) performance under different stimulus conditions. The study was conducted in 32 mountaineers participating in an expedition to Muztagh Ata (7,546 m). Neuropsychological tests comprised figural fluency, line bisection, letter and number cancellation, and a modified pegboard task. Saccadic performance was evaluated under three stimulus conditions with varying degrees of cortical involvement: visually guided pro- and anti-saccades, and visuo-visual interaction. Typical saccade parameters (latency, main sequence, post-saccadic stability, and error rate) were computed off-line. Measurements were taken at a baseline of 440 m and at altitudes of 4,497 m, 5,533 m, and 6,265 m, and again at 440 m. All subjects reached 5,533 m, and 28 reached 6,265 m. The neuropsychological test results did not reveal any cognitive impairment. Complete eye movement recordings for all stimulus conditions were obtained in 24 subjects at baseline and at least two altitudes, and in 10 subjects at baseline and all altitudes. Saccade performance showed no dependence on any altitude-related parameter and was well within normal limits. Our data indicate that acclimatized climbers do not seem to suffer significant cognitive deficits during or after climbs to altitudes above 7,500 m. We also demonstrated that investigation of EMs is feasible during high-altitude expeditions.
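
The abstract does not specify how the saccade parameters were extracted; a common approach for latency is a velocity-threshold criterion. A minimal sketch under that assumption (the function name, sampling rate, and 30 deg/s threshold are illustrative, not the study's actual pipeline):

```python
import numpy as np

def saccade_latency(eye_pos_deg, t_stim_s, fs=1000.0, vel_thresh=30.0):
    """Estimate saccade latency as the time from stimulus onset until eye
    velocity first exceeds vel_thresh (deg/s). Illustrative only; the
    study does not report its exact detection criteria."""
    velocity = np.gradient(np.asarray(eye_pos_deg, dtype=float)) * fs  # deg/s
    onset = int(round(t_stim_s * fs))          # sample index of stimulus onset
    crossings = np.flatnonzero(np.abs(velocity[onset:]) > vel_thresh)
    if crossings.size == 0:
        return None                            # no saccade detected
    return crossings[0] / fs                   # latency in seconds
```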

Relevance:

30.00%

Publisher:

Abstract:

In the present article, we examine the hypothesis that high-school students' motivation to engage in cognitive endeavors (i.e., their need for cognition; NFC) is positively related to their dispositional self-control capacity. Furthermore, we test the prediction that the relation between NFC and school achievement is mediated by self-control capacity. A questionnaire study with grade ten high-school students (N = 604) revealed the expected relations between NFC, self-control capacity, and school achievement. Sobel tests showed that self-control capacity mediated the relation between NFC and school grades as well as grade retention.
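
The Sobel test used here assesses an indirect effect a*b (predictor to mediator to outcome) by dividing it by its approximate standard error, sqrt(b^2*SE_a^2 + a^2*SE_b^2), and comparing the ratio to a standard normal. A minimal sketch (the coefficients in the usage line are invented for illustration; the study's estimates are not reproduced here):

```python
import numpy as np
from scipy import stats

def sobel_test(a, se_a, b, se_b):
    """Sobel z-test for an indirect (mediated) effect.
    a:  effect of predictor (NFC) on mediator (self-control capacity)
    b:  effect of mediator on outcome (grades), controlling for the predictor
    se_a, se_b: their standard errors."""
    se_ab = np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)  # delta-method SE of a*b
    z = (a * b) / se_ab
    p = 2 * stats.norm.sf(abs(z))                     # two-sided p-value
    return z, p

# invented coefficients, purely for illustration
z, p = sobel_test(a=0.35, se_a=0.05, b=0.28, se_b=0.06)
print(f"z = {z:.2f}, p = {p:.4f}")
```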

Relevance:

30.00%

Publisher:

Abstract:

We apply the efficient unit-root tests of Elliott, Rothenberg, and Stock (1996) and Elliott (1998) to twenty-one real exchange rates, using monthly data for the G-7 countries from the post-Bretton Woods floating exchange rate period. Our results indicate that, for eighteen of the twenty-one real exchange rates, the null hypothesis of a unit root can be rejected at the 10% significance level or better using the Elliott et al. (1996) DF-GLS test. The unit-root null hypothesis is rejected for one additional real exchange rate when we allow for one endogenously determined break in the time series of the real exchange rate, as in Perron (1997). In all, we find favorable evidence for long-run purchasing power parity in nineteen of the twenty-one real exchange rates. In addition, we find no strong evidence to suggest that using non-U.S. dollar-based real exchange rates tends to produce more favorable results for long-run PPP than using U.S. dollar-based real exchange rates, as Lothian (1998) concluded.
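
As a concrete illustration, the DF-GLS test of Elliott, Rothenberg, and Stock (1996) is available in the Python arch package; a minimal sketch of how one might apply it to a monthly log real exchange rate (the series below is a random-walk placeholder, not the study's data):

```python
import numpy as np
from arch.unitroot import DFGLS

# Placeholder series: a random walk standing in for a monthly log real
# exchange rate (~28 years of post-Bretton Woods data).
rng = np.random.default_rng(0)
q = np.cumsum(rng.standard_normal(336))

# trend="c" includes a constant only; use trend="ct" to add a linear trend.
test = DFGLS(q, trend="c")
print(test.summary())   # rejecting the unit-root null supports long-run PPP
```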

Relevance:

30.00%

Publisher:

Abstract:

This paper examines the mean-reverting property of real exchange rates. Earlier studies have generally been unable to reject the null hypothesis of a unit root in real exchange rates, especially for the post-Bretton Woods floating period. Those results imply that long-run purchasing power parity does not hold. More recent studies, especially those using panel unit-root tests, have found more favorable results. However, Karlsson and Löthgren (2000) and others have recently pointed out several potential pitfalls of panel unit-root tests, so the panel results are suggestive but far from conclusive. Moreover, consistent individual-country time series evidence supporting long-run purchasing power parity remains scarce. In this paper, we test for long memory using Lo's (1991) modified rescaled range test and the rescaled variance test of Giraitis, Kokoszka, Leipus, and Teyssière (2003). Our testing procedure provides a non-parametric alternative to the parametric tests commonly used in this literature. Our data set consists of monthly observations from April 1973 to April 2001 for the G-7 countries in the OECD. The two tests give conflicting results when we use U.S. dollar real exchange rates. When non-U.S. dollar real exchange rates are used, however, we find only two cases out of fifteen where the null hypothesis of a unit root with short-term dependence can be rejected in favor of the alternative of long-term dependence using the modified rescaled range test, and only one case using the rescaled variance test. Our results therefore stand in contrast to the recent favorable panel unit-root test results.
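
Lo's (1991) modified rescaled range statistic replaces the classical R/S denominator with a long-run standard deviation that adds Bartlett-weighted autocovariances up to a truncation lag q, so that short-term dependence alone cannot trigger a rejection. A self-contained sketch (not the authors' code; the confidence interval quoted in the comment is from Lo (1991)):

```python
import numpy as np

def lo_modified_rs(x, q):
    """Lo's (1991) modified rescaled range statistic V_T.
    Under the null of short-range dependence, V_T lies in roughly
    [0.809, 1.862] with 95% probability (Lo, 1991)."""
    x = np.asarray(x, dtype=float)
    T = x.size
    d = x - x.mean()
    s = np.cumsum(d)
    R = s.max() - s.min()                    # range of the partial sums
    var = d @ d / T                          # sample variance
    for j in range(1, q + 1):                # Bartlett-weighted autocovariances
        w = 1.0 - j / (q + 1.0)
        var += 2.0 * w * (d[j:] @ d[:-j]) / T
    return (R / np.sqrt(var)) / np.sqrt(T)   # normalized statistic
```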

Relevance:

30.00%

Publisher:

Abstract:

Objectives: This study had two overarching objectives. The first, addressed through a systematic review of the literature published between 1990 and 2012, was to assess whether insuring the uninsured would result in higher costs than insuring the currently insured. Studies that quantified the actual costs associated with insuring the uninsured in the U.S. were included. The second, based on 2009 data from the Medical Expenditure Panel Survey (MEPS), was to assess and compare the self-reported health of populations with four different insurance statuses. This second part of the study involved a secondary analysis of data on both currently insured and currently uninsured individuals who participated in the MEPS in 2009. The null hypothesis was that there were no differences across the four categories of health insurance status in self-reported health status and healthcare service use; the alternative hypothesis was that there were such differences. Methods: For the systematic review, three databases were searched using search terms designed to identify studies that quantified the cost of insuring the uninsured. Thirteen studies were selected, discussed, and summarized in tables. For the secondary analysis of MEPS data, this study compared four categories of health insurance status: (1) currently uninsured persons who will become eligible for Medicaid under the Patient Protection and Affordable Care Act (PPACA) healthcare reforms in 2014; (2) currently uninsured persons who will be required to buy private insurance through the PPACA health insurance exchanges in 2014; (3) persons currently insured under Medicaid or SCHIP; and (4) persons currently insured with private insurance. The four categories were compared on the basis of demographic information, health status information, and health conditions with relatively high prevalence. Chi-square tests were run to determine whether health status differed across the four insurance-status groups. With some exceptions, the two currently insured groups had worse self-reported health status than the two currently uninsured groups. Results: The thirteen studies that met the inclusion criteria for the systematic review comprised: (1) three cost studies from 1993, 1995, and 1997; (2) four cost studies from 2001, 2003, and 2004; (3) one study of disabilities and one study of immigrants; (4) two state-specific studies of uninsured status; and (5) two current studies of healthcare reform. Of the thirteen studies reviewed, four directly addressed the question of whether insuring the uninsured is more or less expensive than insuring the currently insured. All four supported the finding that the cost of insuring the uninsured would generally not be higher than insuring those already insured. One study indicated that the uninsured would be less expensive to insure than the population currently covered by Medicaid, but more expensive to insure than the populations covered by employer-sponsored insurance and non-group private insurance. While the nine other studies included in the systematic review discussed the costs associated with insuring the uninsured population, they did not directly compare those costs with the costs of insuring the currently insured population.
For the MEPS secondary data analysis, the chi-square tests indicated that the distribution of disease status differed by health insurance status. As anticipated, with some exceptions, the uninsured reported lower rates of disease and healthcare service use. However, for attention deficit disorder, the uninsured reported higher disease rates than the two insured groups. Additionally, for high blood pressure, high cholesterol, and joint pain, the group currently insured under Medicaid or SCHIP reported a lower rate of disease than the two currently uninsured groups, a result that may reflect the lower mean age of the Medicaid/SCHIP group. Conclusion: Based on this study, with some exceptions, the costs of insuring the uninsured should not exceed the healthcare-related costs of insuring the currently insured. The results of the systematic review indicated that the U.S. is already paying some of the costs associated with insuring the uninsured. Through the individual mandate and insurance market reforms, PPACA will expand health insurance coverage to millions of Americans who are currently uninsured. Because many of the currently uninsured are relatively healthy young persons, the costs associated with expanding insurance coverage to them are anticipated to be relatively modest. However, when interpreting these results, it is important to note that once individuals obtain insurance, they are expected to use more healthcare services, which will increase costs. (Abstract shortened by UMI.)
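
The group comparisons described above are standard chi-square tests of independence on contingency tables of insurance status by condition; a minimal sketch with invented counts (illustrative only, not MEPS estimates):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: the four insurance-status groups; columns: condition present / absent.
# All counts are invented for illustration, not MEPS data.
table = np.array([
    [120, 880],   # uninsured, Medicaid-eligible under PPACA in 2014
    [ 95, 905],   # uninsured, exchange-bound under PPACA in 2014
    [210, 790],   # currently insured: Medicaid/SCHIP
    [180, 820],   # currently insured: private
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")  # reject H0 if p is small
```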

Relevance:

30.00%

Publisher:

Abstract:

Cracking of reinforced concrete can occur in certain environments due to rebar corrosion. The oxide layer growing around the bars introduces a pressure which may be enough to fracture the concrete. To study this effect, the results of accelerated corrosion tests and finite element simulations are combined in this work. In previous works, a numerical model for the expansive layer, called the expansive joint element, was programmed by the authors to reproduce the effect of the oxide on the concrete. In that model, the expansion of the oxide layer under stress-free conditions is simulated as a uniform expansion perpendicular to the steel surface. The cracking of concrete is simulated by means of finite elements with an embedded adaptable cohesive crack that follows the standard cohesive model. In the present work, further accelerated tests with imposed constant current have been carried out on the same type of specimens tested in previous works (with an embedded steel tube), while measuring, among other things, the main-crack mouth opening. The tests have then been numerically simulated using the expansive joint element, with the tube as the corroding electrode (rather than a bar). Comparing the numerical and experimental results, both for the crack mouth opening and the crack pattern, yields new insight into the behavior of the oxide layer. In particular, a quantitative assessment of the oxide expansion relation is deduced from the experiments, and a narrower interval for the shear stiffness of the oxide layer is obtained, which could not be achieved using bars as the corroding element, because in that case the numerical results were insensitive to the shear stiffness of the oxide layer over many orders of magnitude.
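
For orientation, a standard cohesive model of the kind the embedded crack elements follow relates the traction transmitted across the crack to its opening through a softening law. A minimal sketch of a linear softening variant (parameter values are illustrative, not the values calibrated in this work):

```python
def cohesive_traction(w, ft=3.0e6, Gf=100.0):
    """Linear softening cohesive law: normal traction (Pa) across a crack
    as a function of its opening w (m). ft is the tensile strength and Gf
    the fracture energy (J/m^2); for linear softening the crack becomes
    traction-free at wc = 2*Gf/ft. Values are illustrative only."""
    wc = 2.0 * Gf / ft            # critical opening (area under the law = Gf)
    if w <= 0.0:
        return ft                 # crack just initiating: strength limit
    if w >= wc:
        return 0.0                # fully open crack transmits no traction
    return ft * (1.0 - w / wc)    # linear decay between ft and 0
```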

Relevance:

30.00%

Publisher:

Abstract:

Explosions on buildings, whether accidental or intentional, are infrequent, but their effects can be catastrophic. It is desirable to be able to predict with sufficient accuracy the consequences of these dynamic actions on civil buildings, among which frame-type reinforced concrete structures are a common typology. This doctoral thesis explores different practical options for the numerical modeling and computational analysis of reinforced concrete structures subjected to explosions. Numerical finite element models with explicit time integration are employed; they prove capable of simulating the fast, highly nonlinear physical and structural phenomena involved, predicting the damage caused both by the explosion itself and by the possible progressive collapse of the structure.
The work has been carried out with the commercial finite element code LS-DYNA (Hallquist, 2006), in which several types of model were developed, classified into two main types: 1) models based on continuum finite elements, in which the continuous medium is discretized directly through nodal displacement degrees of freedom; and 2) models based on structural finite elements (beams and shells), which include kinematic hypotheses for linear or surface elements. These models are developed and discussed at three levels: 1) material behavior, 2) the response of structural elements such as columns, beams, and slabs, and 3) the response of complete buildings or significant parts of them.
Very detailed 3D continuum finite element models are developed, modeling the plain concrete and the reinforcing steel separately. Concrete is represented with the CSCM constitutive model (Murray et al., 2007), which exhibits inelastic behavior, different responses in tension and compression, hardening, cracking and compression damage, and failure. Steel is represented with a bilinear elastic-plastic constitutive model with failure. The actual geometry of the concrete is modeled with 3D continuum finite elements, and each reinforcing bar with beam-type finite elements placed at its exact position within the concrete mass. The mesh is built by superimposing the concrete continuum elements and the beam elements of the segregated reinforcement, which are constrained to follow the deformation of the solid at each point through a penalty algorithm, thus reproducing the behavior of reinforced concrete. These models are referred to here simply as continuum FE models. With them, the structural response of construction elements (columns, slabs, and frames) to explosive actions is analyzed. The results have also been compared with experiments on beams and slabs with different explosive charges, showing acceptable agreement and allowing calibration of the calculation parameters. These detailed models are not advisable for analyzing complete buildings, however, since the large number of finite elements required raises the computational cost beyond the reach of current computing resources.
In addition, structural finite element models (beams and shells) are developed which, at a reduced computational cost, are able to reproduce the global behavior of the structure with similar accuracy. Plain concrete and reinforcing steel are again modeled separately. Concrete is represented with the EC2 concrete constitutive model (Hallquist et al., 2013), which also exhibits inelastic behavior, different responses in tension and compression, hardening, cracking and compression damage, and failure, and is used in shell-type finite elements. Steel is again represented with a bilinear elastic-plastic constitutive model with failure, using beam-type finite elements. An equivalent geometry of the concrete and reinforcement is modeled, taking into account the relative position of the steel within the concrete mass. The two meshes are joined through common nodes, producing a joint response. These models are referred to here simply as structural FE models. With them, the same construction elements are simulated as with the continuum FE models, and by comparing their structural responses to explosions the structural FE models are calibrated, obtaining similar structural behavior at a reduced computational cost.
Both the continuum FE models and the structural FE models are shown to be accurate for the analysis of progressive collapse as well, and can be used to study simultaneously the damage from an explosion and the subsequent collapse. For this purpose, formulations are included that account for self-weight, imposed loads, and contact between parts of the structure. Both models are validated against a full-scale test in which a module with six columns and two stories collapses after one of its columns is removed. The computational cost of the continuum FE model for this simulation is much higher than that of the structural FE model, making it impractical for complete buildings, whereas the structural FE model captures the global response accurately enough at an affordable cost. Finally, the structural FE models are used to analyze explosions on multi-story buildings, and two explosive-charge scenarios are simulated for a complete building at moderate computational cost.

Relevance:

30.00%

Publisher:

Abstract:

Previous studies have suggested that ionizing radiation causes irreparable DNA double-strand breaks in mice and cell lines harboring mutations in any of the three subunits of DNA-dependent protein kinase (DNA-PK) (the catalytic subunit, DNA-PKcs, or one of the DNA-binding subunits, Ku70 or Ku86). In actuality, these mutants vary in their ability to resolve double-strand breaks generated during variable (diversity) joining [V(D)J] recombination. Mutant cell lines and mice with targeted deletions in Ku70 or Ku86 are severely compromised in their ability to form coding and signal joints, the products of V(D)J recombination. It is noteworthy, however, that severe combined immunodeficient (SCID) mice, which bear a nonnull mutation in DNA-PKcs, are substantially less impaired in forming signal joints than coding joints. The current view holds that the defective protein encoded by the murine SCID allele retains enough residual function to support signal joint formation. An alternative hypothesis proposes that DNA-PKcs and Ku perform different roles in V(D)J recombination, with DNA-PKcs required only for coding joint formation. To resolve this issue, we examined V(D)J recombination in DNA-PKcs-deficient (SLIP) mice. We found that the effects of this mutation on coding and signal joint formation are identical to the effects of the SCID mutation. Signal joints are formed at levels 10-fold lower than in wild type, and one-half of these joints are aberrant. These data are incompatible with the notion that signal joint formation in SCID mice results from residual DNA-PKcs function, and suggest a third possibility: that DNA-PKcs normally plays an important but nonessential role in signal joint formation.

Relevance:

30.00%

Publisher:

Abstract:

Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
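
For a fully specified forecast, test (i) amounts to comparing the observed earthquake count with a Poisson distribution whose mean is the predicted number, and test (iii) to a log-likelihood ratio between the forecast and the null rate. A minimal sketch (all rates and counts are invented for illustration):

```python
from scipy.stats import poisson

# Test (i): number test -- is the observed count consistent with the forecast?
lam_forecast = 8.0   # predicted number of events in the test window (invented)
n_observed = 14      # observed count (invented)
p_too_few  = poisson.cdf(n_observed, lam_forecast)     # P(N <= n_observed)
p_too_many = poisson.sf(n_observed - 1, lam_forecast)  # P(N >= n_observed)
# Reject the forecast if either tail probability is very small.

# Test (iii): log-likelihood ratio against a null (e.g., time-independent) rate.
lam_null = 5.0
log_lr = (poisson.logpmf(n_observed, lam_forecast)
          - poisson.logpmf(n_observed, lam_null))
print(p_too_few, p_too_many, log_lr)   # log_lr > 0 favors the forecast
```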

Relevance:

30.00%

Publisher:

Abstract:

Includes index.

Relevance:

30.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance:

30.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance:

30.00%

Publisher:

Abstract:

These tests were developed in the Detroit psychological clinic, a department of the Detroit public schools. cf. Pref.

Relevance:

30.00%

Publisher:

Abstract:

Prepared by William J. Helfrich and Richard L. Reynolds.