104 results for literacy and spatial theory
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators select a threshold p value below which they reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the observed difference, or one more extreme, given that the null hypothesis is true. Another concern is the risk that a substantial proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
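The distinction the abstract draws can be made concrete with a minimal Python sketch (the data and the SciPy two-sample t-test are illustrative assumptions, not part of the study): a p value is computed and then compared against a Type I error level chosen in advance.

```python
# Illustrative only: hypothetical data, not from the study described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=0.0, scale=1.0, size=30)   # null-like group
treated = rng.normal(loc=0.5, scale=1.0, size=30)   # group with some effect

alpha = 0.05  # Type I error level chosen *before* looking at the data

# p value: probability of a test statistic at least this extreme if the
# null hypothesis of no difference were true.
t_stat, p_value = stats.ttest_ind(treated, control)

# Neyman-Pearson-style decision: reject H0 if the statistic falls in the
# critical region, expressed here through the p value / alpha comparison.
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject the null hypothesis")
```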
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that must be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important first to define the influence of each factor. In particular, it was important to define the influence of geology, which is closely associated with indoor radon. This association was indeed observed for the Swiss data but was not proved to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate levels, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering, and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase comes along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized to cope with the stationarity conditions of geostatistical models. Common spatial modeling methods such as K Nearest Neighbors (KNN), variography, and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset. A bottom-to-top method-complexity approach was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests for data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for modeling extreme values through classification. Simulation scenarios were proposed, including an alternative proposal for reproducing the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Among the classification methods, probabilistic neural networks (PNN) proved better adapted to modeling high-threshold categorization and to automation. Support vector machines (SVM), by contrast, performed well under balanced category conditions.
In general, it was concluded that no particular prediction or estimation method is better under all conditions of scale and neighborhood definition. Simulations should be the basis, while other methods can provide complementary information to support efficient decision making on indoor radon.
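To make the exploratory use of neighborhood-based estimators more concrete, a minimal sketch follows. It assumes synthetic coordinates and lognormal-like values rather than the Swiss indoor radon data, and uses scikit-learn's KNeighborsRegressor as a stand-in for the KNN tool mentioned above.

```python
# Illustrative sketch only: a KNN spatial estimator of the kind used as an
# exploratory tool in the thesis, applied here to synthetic (not Swiss) data.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(500, 2))        # x, y locations (km)
radon = np.exp(rng.normal(4.0, 0.8, size=500))     # lognormal-like levels (Bq/m3)

# Fit a K Nearest Neighbors regressor; the neighborhood size k is the key
# "neighborhood definition" parameter discussed in the abstract.
knn = KNeighborsRegressor(n_neighbors=10, weights="distance")
knn.fit(coords, radon)

# Predict on a regular grid to produce an exploratory map.
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
estimate = knn.predict(grid).reshape(gx.shape)
print(estimate.shape, estimate.mean())
```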
Abstract:
Disparate ecological datasets are often organized into databases post hoc and then analyzed and interpreted in ways that may diverge from the purposes of the original data collections. Few studies, however, have attempted to quantify how biases inherent in these data (for example, species richness, replication, climate) affect their suitability for addressing broad scientific questions, especially in under-represented systems (for example, deserts, tropical forests) and wild communities. Here, we quantitatively compare the sensitivity of species' first flowering and leafing dates to spring warmth in two phenological databases from the Northern Hemisphere. One, PEP725, has high replication within and across sites but low species diversity, and it spans a limited climate gradient. The other, NECTAR, includes many more species and a wider range of climates but has fewer sites and low replication of species across sites. PEP725, despite its low species diversity and relatively low seasonality, accurately captures the magnitude and seasonality of warming responses at climatically similar NECTAR sites, with most species showing earlier phenological events in response to warming. In NECTAR, the prevalence of temperature responders significantly declines with increasing mean annual temperature, a pattern that cannot be detected across the limited climate gradient spanned by the PEP725 flowering and leafing data. Our results showcase broad areas of agreement between the two databases, despite significant differences in species richness and geographic coverage, while also noting areas where including data across broader climate gradients may provide added value. Such comparisons help to identify gaps in our observations and knowledge base that can be addressed by ongoing monitoring and research efforts. Resolving these issues will be critical for improving predictions in understudied and under-sampled systems outside of the temperature-seasonal mid-latitudes.
Abstract:
Simple reaction times (RTs) to auditory-somatosensory (AS) multisensory stimuli are facilitated over their unisensory counterparts both when the stimuli are delivered to the same location and when they are spatially separated. In two experiments we addressed the possibility that top-down and/or task-related influences can dynamically impact the spatial representations mediating these effects and the extent to which multisensory facilitation is observed. Participants performed a simple detection task in response to auditory, somatosensory, or simultaneous AS stimuli that were either spatially aligned or misaligned by lateralizing the stimuli. Additionally, we informed the participants that they would be retrogradely queried (on one-third of trials) about the side on which a given stimulus in a given sensory modality had been presented. In this way, we sought to have participants attend to all possible spatial locations and sensory modalities while nonetheless performing a simple detection task. Experiment 1 provided no cues prior to stimulus delivery. Experiment 2 included spatially uninformative cues (on 50% of trials). In both experiments, multisensory conditions significantly facilitated detection RTs, with no evidence for differences according to spatial alignment (though general benefits of cuing were observed in Experiment 2). Facilitated detection thus occurs even when participants are attending to spatial information. Performance with the probes, quantified using sensitivity (d'), was impaired following multisensory trials in general and significantly more so following misaligned multisensory trials. This indicates that the spatial information is not available, despite being task-relevant. The collective results support a model wherein early AS interactions may result in a loss of spatial acuity for unisensory information.
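For readers unfamiliar with the sensitivity measure mentioned above, a minimal sketch of a d' computation follows; the hit and false-alarm counts are hypothetical, and the log-linear correction is a common convention rather than the procedure reported in the study.

```python
# Illustrative sketch only: computing the sensitivity index d' used to
# quantify probe performance, with made-up hit/false-alarm counts.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a standard correction
    to avoid infinite z-scores when a rate is exactly 0 or 1."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hit_rate = (hits + 0.5) / (n_signal + 1)       # log-linear correction
    fa_rate = (false_alarms + 0.5) / (n_noise + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for probes following aligned vs. misaligned trials.
print(d_prime(hits=40, misses=10, false_alarms=12, correct_rejections=38))
print(d_prime(hits=32, misses=18, false_alarms=20, correct_rejections=30))
```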
Abstract:
Our lives and careers are becoming ever more unpredictable. The "life-design paradigm" described in detail in this ground-breaking handbook helps counselors and others meet people's increasing need to develop and manage their own lives and careers. Life-design interventions, suited to a wide variety of cultural settings, help individuals become actors in their own lives and careers by activating, stimulating, and developing their personal resources. This handbook first addresses life-design theory, then shows how to apply life designing to different age groups and with more at-risk people, and finally looks at how to train life-design counselors.
Abstract:
Cardiovascular disease is the leading cause of death worldwide. Within this subset, coronary artery disease (CAD) is the most prevalent. Magnetic resonance angiography (MRA) is an emerging technique that provides a safe, non-invasive way of assessing CAD progression. To generate contrast between tissues, MR images are weighted according to the magnetic properties of those tissues. In cardiac MRI, T2 contrast, which is governed by the rate of transverse signal loss, is often created through the use of a T2-Preparation module. T2-Preparation, or T2-Prep, is a magnetization preparation scheme used to improve blood/myocardium contrast in cardiac MRI. T2-Prep methods generally use a non-selective +90°, 180°, 180°, -90° train of radiofrequency (RF) pulses (or a variant thereof) to tip magnetization into the transverse plane, allow it to evolve, and then restore it to the longitudinal plane. A key feature of this process is the combination of a +90° and a -90° RF pulse. By changing either one of these, a mismatch occurs between signal excitation and restoration. This feature can be exploited to provide additional spectral or spatial selectivity. In this work, both of these possibilities are explored. The first, spectral selectivity, has been examined as a method of improving fat saturation in coronary MRA. The second, spatial selectivity, has been examined as a means of reducing imaging time by decreasing the field of view, and as a method of reducing artefacts originating from the tissues surrounding the heart. Two additional applications, parallel imaging and self-navigation, are also presented. This thesis is thus composed of four sections. The first, "A Fat Signal Suppression for Coronary MRA at 3T using a Water-Selective Adiabatic T2-Preparation Technique", was originally published in the journal Magnetic Resonance in Medicine (MRM) with co-authors Ruud B. van Heeswijk and Matthias Stuber. The second, "Combined T2-Preparation and 2D Pencil Beam Inner Volume Selection", again with co-authors Ruud van Heeswijk and Matthias Stuber, was also published in MRM. The third, "A cylindrical, inner volume selecting 2D-T2-Prep improves GRAPPA-accelerated image quality in MRA of the right coronary artery", written with co-authors Jerome Yerly and Matthias Stuber, has been submitted to the Journal of Cardiovascular Magnetic Resonance. The fourth, "Combined respiratory self-navigation and 'pencil-beam' 2D-T2-Prep for free-breathing, whole-heart coronary MRA", with co-authors Jerome Chaptinel, Giulia Ginami, Gabriele Bonanno, Simone Coppo, Ruud van Heeswijk, Davide Piccini, and Matthias Stuber, is undergoing internal review prior to submission to MRM.
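The role of the matched +90°/-90° pulses can be illustrated with a minimal, idealized Bloch-rotation sketch (assumed T2 values and echo time, no off-resonance or T1 effects, and not the published pulse sequences): tissues with longer T2, such as blood, retain more longitudinal magnetization after the preparation than short-T2 myocardium, which is the source of the T2 contrast.

```python
# Illustrative sketch only: rotating a magnetization vector through an
# idealized +90°, 180°, 180°, -90° T2-Prep train to show why long-T2 blood
# keeps more longitudinal signal than short-T2 myocardium. Off-resonance
# and T1 effects are ignored for simplicity.
import numpy as np

def rot_x(deg):
    """Rotation of the magnetization vector about the x axis."""
    t = np.deg2rad(deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

def t2_decay(m, dt_ms, t2_ms):
    """Exponential decay of the transverse (x, y) components only."""
    decay = np.exp(-dt_ms / t2_ms)
    return np.array([m[0] * decay, m[1] * decay, m[2]])

def t2_prep(t2_ms, te_ms=50.0):
    m = np.array([0.0, 0.0, 1.0])           # equilibrium magnetization
    m = rot_x(+90) @ m                       # tip into the transverse plane
    for _ in range(2):                       # two refocusing pulses
        m = t2_decay(m, te_ms / 4, t2_ms)
        m = rot_x(180) @ m                   # refocusing pulse
        m = t2_decay(m, te_ms / 4, t2_ms)
    m = rot_x(-90) @ m                       # restore to the longitudinal axis
    return m[2]                              # remaining longitudinal signal

# Assumed, approximate T2 values for illustration: blood ~250 ms, myocardium ~40 ms.
print("blood:     ", t2_prep(250.0))
print("myocardium:", t2_prep(40.0))
```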
Abstract:
A new metabolite profiling approach combined with an ultrarapid sample preparation procedure was used to study the temporal and spatial dynamics of the wound-induced accumulation of jasmonic acid (JA) and its oxygenated derivatives in Arabidopsis thaliana. In addition to well-known jasmonates, including hydroxyjasmonates (HOJAs), jasmonoyl-isoleucine (JA-Ile), and its 12-hydroxy derivative (12-HOJA-Ile), a new wound-induced dicarboxyjasmonate, 12-carboxyjasmonoyl-L-isoleucine (12-HOOCJA-Ile), was discovered. HOJAs and 12-HOOCJA-Ile were enriched in the midveins of wounded leaves, strongly differentiating them from the other jasmonate metabolites studied. The polarity of these oxylipins at physiological pH correlated with their appearance in midveins. When the time points of accumulation of the different jasmonates were determined, JA levels were found to increase within 2-5 min of wounding. Remarkably, these changes occurred throughout the plant and were not restricted to wounded leaves. The speed of the stimulus leading to JA accumulation in leaves distal to a wound is at least 3 cm/min. The data give new insights into the spatial and temporal accumulation of jasmonates and have implications for the understanding of long-distance wound signaling in plants.
Abstract:
OBJECTIVES: A comparison of doxorubicin uptake, leakage, and spatial regional blood flow and drug distribution was made for antegrade, retrograde, and combined antegrade and retrograde isolated lung perfusion, and for pulmonary artery infusion by endovascular inflow occlusion (blood flow occlusion), as opposed to intravenous administration, in a porcine model. METHODS: White pigs underwent single-pass lung perfusion with doxorubicin (320 µg/mL), 99mTc-labeled microspheres, and Indian ink. Visual assessment of the ink distribution and perfusion scintigraphy of the perfused lung were performed. 99mTc activity and doxorubicin levels were measured by gamma counting and high-performance liquid chromatography on 15 tissue samples from each perfused lung at predetermined localizations. RESULTS: Overall doxorubicin uptake in the perfused lung was significantly higher (P = .001) and the plasma concentration significantly lower (P < .0001) after all isolated lung perfusion techniques compared with intravenous administration, without differences between them. Pulmonary artery infusion (blood flow occlusion) showed an equally high doxorubicin uptake in the perfused lung but higher systemic leakage than surgical isolated lung perfusion (P < .0001). The geometric coefficients of variation of the doxorubicin lung tissue levels were 175%, 279%, 226%, and 151% for antegrade, retrograde, combined antegrade and retrograde isolated lung perfusion, and pulmonary artery infusion by endovascular inflow occlusion (blood flow occlusion), respectively, compared with 51% for intravenous administration (P = .09). 99mTc activity measurements of the samples paralleled the doxorubicin level measurements, indicating a trend toward more heterogeneous spatial regional blood flow and drug distribution after isolated lung perfusion and blood flow occlusion compared with intravenous administration. CONCLUSIONS: Cytostatic lung perfusion results in a high overall doxorubicin uptake, which is, however, heterogeneously distributed within the perfused lung.
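For clarity on the dispersion statistic quoted above, a minimal sketch of a geometric coefficient of variation computed from log-transformed tissue levels follows; the sample values are made up, and the formula is a standard definition rather than a detail taken from the study.

```python
# Illustrative sketch only: computing a geometric coefficient of variation
# of the kind reported above, from hypothetical tissue drug levels.
import numpy as np

def geometric_cv(values):
    """GCV = sqrt(exp(sigma_ln^2) - 1), where sigma_ln is the standard
    deviation of the log-transformed values; returned as a percentage."""
    log_vals = np.log(np.asarray(values, dtype=float))
    sigma_ln = log_vals.std(ddof=1)
    return 100.0 * np.sqrt(np.exp(sigma_ln ** 2) - 1.0)

# Hypothetical doxorubicin concentrations (ng/g) from 15 tissue samples.
rng = np.random.default_rng(2)
samples = np.exp(rng.normal(5.0, 1.0, size=15))
print(f"geometric CV = {geometric_cv(samples):.0f}%")
```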
Abstract:
Summary:
1. Measuring health literacy in Switzerland: a review of six surveys
1.1 Comparison of questionnaires
1.2 Measures of health literacy in Switzerland
1.3 Discussion of Swiss data on HL
1.4 Description of the six surveys:
1.4.1 Current health trends and health literacy in the Swiss population (gfs-UNIVOX)
1.4.2 Nutrition, physical exercise and body weight: opinions and perceptions of the Swiss population (USI)
1.4.3 Health Literacy in Switzerland (ISPMZ)
1.4.4 Swiss Health Survey (SHS)
1.4.5 Survey of Health, Ageing and Retirement in Europe (SHARE)
1.4.6 Adult Literacy and Life Skills Survey (ALL)
2. Economic costs of low health literacy in Switzerland: a rough calculation
Appendix: Screenshots of the cost model
Abstract:
Interspecific competition, life-history traits, environmental heterogeneity and spatial structure, as well as disturbance, are known to shape which dispersal strategies succeed in metacommunities. However, studies on the direction of the impact of those factors on dispersal have yielded contradictory results and have often considered only a few competing dispersal strategies at a time. We used a unifying modeling approach to contrast the combined effects of species traits (adult survival, specialization), environmental heterogeneity and structure (spatial autocorrelation, habitat availability), and disturbance on the selected, maintained, and coexisting dispersal strategies in heterogeneous metacommunities. Using a negative exponential dispersal kernel, we allowed both the species' dispersal distance and dispersal rate to vary. We showed that strong disturbance promotes species with high dispersal abilities, while low local adult survival and habitat availability select against them. Spatial autocorrelation favors species with higher dispersal ability when adult survival and disturbance rate are low, and selects against them in the opposite situation. Interestingly, several dispersal strategies coexist when disturbance and adult survival act in opposition, for example when a strong disturbance regime favors species with high dispersal abilities while low adult survival selects for species with low dispersal. Our results unify apparently contradictory previous results and demonstrate that spatial structure, disturbance, and adult survival determine the success and diversity of coexisting dispersal strategies in competing metacommunities.
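A minimal sketch of the negative exponential dispersal kernel named above follows; the dispersal rate, the mean distance alpha, and the two contrasted strategies are hypothetical parameters for illustration, not values from the model runs.

```python
# Illustrative sketch only: drawing dispersal distances from a negative
# exponential kernel, with a dispersal strategy defined by a dispersal rate
# (probability of moving at all) and a mean dispersal distance alpha.
import numpy as np

rng = np.random.default_rng(3)

def disperse(n_propagules, dispersal_rate, alpha):
    """Decide which propagules disperse (Bernoulli with the dispersal rate)
    and draw their distances from f(d) = (1/alpha) * exp(-d/alpha)."""
    moves = rng.random(n_propagules) < dispersal_rate
    distances = np.where(moves,
                         rng.exponential(scale=alpha, size=n_propagules),
                         0.0)
    return distances

# Two contrasting strategies: short-distance, low-rate vs. long-distance, high-rate.
print(disperse(10, dispersal_rate=0.2, alpha=1.0))
print(disperse(10, dispersal_rate=0.8, alpha=5.0))
```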
Abstract:
The development of statistical models for forensic fingerprint identification purposes has been the subject of increasing research attention in recent years. This can be seen partly as a response to a number of commentators who claim that the scientific basis for fingerprint identification has not been adequately demonstrated. In addition, key forensic identification bodies such as ENFSI [1] and IAI [2] have recently endorsed and acknowledged the potential benefits of using statistical models as an important tool in support of the fingerprint identification process within the ACE-V framework. In this paper, we introduce a new Likelihood Ratio (LR) model based on Support Vector Machines (SVMs) trained with features discovered via morphometric and spatial analyses of corresponding minutiae configurations for both match and close non-match populations often found in AFIS candidate lists. Computed LR values are derived from a probabilistic framework based on SVMs that discover the intrinsic spatial differences between match and close non-match populations. Lastly, experiments performed on a set of over 120,000 publicly available fingerprint images (mostly sourced from the National Institute of Standards and Technology (NIST) datasets) and a distortion set of approximately 40,000 images are presented, illustrating that the proposed LR model reliably guides towards the correct proposition in the identification assessment of match and close non-match populations. The results further indicate that the proposed model is a promising tool for fingerprint practitioners to use when analysing the spatial consistency of corresponding minutiae configurations.
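The general idea of deriving likelihood ratios from an SVM can be sketched as follows; the features, training data, equal-prior assumption, and use of scikit-learn's probability-calibrated SVC are illustrative assumptions, not the model actually proposed in the paper.

```python
# Illustrative sketch only: a generic SVM-based score-to-likelihood-ratio
# pipeline in the spirit described above, on hypothetical feature vectors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)

# Hypothetical morphometric/spatial features of corresponding minutiae
# configurations: label 1 = match, label 0 = close non-match.
X_match = rng.normal(loc=1.0, scale=1.0, size=(500, 6))
X_nonmatch = rng.normal(loc=-1.0, scale=1.0, size=(500, 6))
X = np.vstack([X_match, X_nonmatch])
y = np.array([1] * 500 + [0] * 500)

# Probability-calibrated SVM; with balanced training classes (prior odds of 1),
# the posterior odds P(match|x) / P(non-match|x) can be read as an LR.
svm = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)

def likelihood_ratio(features):
    p_match = svm.predict_proba(features.reshape(1, -1))[0, 1]
    return p_match / (1.0 - p_match)

print(likelihood_ratio(rng.normal(1.0, 1.0, size=6)))    # tends to be > 1
print(likelihood_ratio(rng.normal(-1.0, 1.0, size=6)))   # tends to be < 1
```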