211 results for spatial error
Abstract:
This paper presents general problems and approaches for spatial data analysis using machine learning algorithms. Machine learning is a very powerful approach to adaptive data analysis, modelling and visualisation. The key feature of machine learning algorithms is that they learn from empirical data and can be used in cases when the modelled environmental phenomena are hidden, nonlinear, noisy and highly variable in space and in time. Most machine learning algorithms are universal and adaptive modelling tools developed to solve the basic problems of learning from data: classification/pattern recognition, regression/mapping and probability density modelling. In the present report some of the widely used machine learning algorithms, namely artificial neural networks (ANN) of different architectures and Support Vector Machines (SVM), are adapted to the analysis and modelling of geo-spatial data. Machine learning algorithms have an important advantage over traditional models of spatial statistics when problems are considered in high-dimensional geo-feature spaces, i.e., when the dimension of the space exceeds 5. Such features are usually generated, for example, from digital elevation models, remote sensing images, etc. An important extension of the models concerns the consideration of real-space constraints such as geomorphology, networks, and other natural structures. Recent developments in semi-supervised learning can improve the modelling of environmental phenomena by taking geo-manifolds into account. An important part of the study deals with the analysis of relevant variables and model inputs. This problem is approached using different nonlinear feature selection/feature extraction tools. To demonstrate the application of machine learning algorithms, several interesting case studies are considered: digital soil mapping using SVM; automatic mapping of soil and water system pollution using ANN; natural hazard risk analysis (avalanches, landslides); assessment of renewable resources (wind fields) with SVM and ANN models; and others. The dimensionality of the spaces considered varies from 2 to more than 30. Figures 1, 2 and 3 demonstrate some results of the studies and their outputs. Finally, the results of environmental mapping are discussed and compared with traditional models of geostatistics.
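To make the modelling setting concrete, the following is a minimal sketch of SVM-based regression mapping of the kind the report applies to geo-spatial data; the synthetic coordinates, terrain features, hyperparameters, and the use of scikit-learn are illustrative assumptions, not the authors' actual setup.

# Minimal sketch: SVM regression for spatial mapping on synthetic data.
# All data, feature names, and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Hypothetical geo-feature vectors: x/y coordinates plus two terrain
# attributes (e.g., elevation and slope, as might be derived from a DEM).
X = rng.uniform(0.0, 100.0, size=(500, 4))
# Hypothetical noisy, nonlinear environmental response.
y = np.sin(X[:, 0] / 10.0) + 0.01 * X[:, 2] + rng.normal(0.0, 0.1, 500)

# RBF-kernel SVR with feature scaling, a common default for such tasks.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X[:400], y[:400])
print("held-out R^2:", round(model.score(X[400:], y[400:]), 3))

The same pipeline extends naturally to higher-dimensional geo-feature spaces by appending further DEM- or remote-sensing-derived columns to X.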
Abstract:
Plants are sessile organisms, often characterized by limited dispersal. Seeds and pollen are the critical stages for gene flow. Here we investigate spatial genetic structure, gene dispersal, and the relative contribution of pollen vs. seed in the movement of genes in a stable metapopulation of the white campion Silene latifolia within its native range. This short-lived perennial plant is dioecious, has gravity-dispersed seeds and moth-mediated pollination. Direct measures of pollen dispersal suggested that large populations receive more pollen than small isolated populations and that most gene flow occurs within tens of meters. However, these studies were performed in the newly colonized range (North America), where the specialist pollinator is absent. In the native range (Europe), gene dispersal could fall on a different spatial scale. We genotyped 258 individuals from large and small (15) subpopulations along an elongated, 60-km metapopulation in Europe using six highly variable microsatellite markers, two X-linked and four autosomal. We found substantial genetic differentiation among subpopulations (global F(ST) = 0.11) and a general pattern of isolation by distance over the whole sampled area. Spatial autocorrelation revealed high relatedness among neighboring individuals over hundreds of meters. Estimates of gene dispersal revealed gene flow at the scale of tens of meters (5-30 m), similar to the newly colonized range. Contrary to expectations, estimates of dispersal based on X and autosomal markers showed very similar ranges, suggesting similar levels of pollen and seed dispersal. This may be explained by stochastic events of extensive seed dispersal in this area and limited pollen dispersal.
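For orientation, here is a minimal sketch of how a global F(ST) of the kind reported above can be computed from subpopulation allele frequencies (the Nei G(ST) form for a single biallelic locus; the frequencies are invented, and the study's actual multilocus estimator may differ):

# Minimal sketch: global F_ST from per-subpopulation allele frequencies at
# one biallelic locus, F_ST = (H_T - H_S) / H_T. Frequencies are invented.
import numpy as np

p = np.array([0.9, 0.5, 0.3, 0.7])   # hypothetical allele frequency per subpopulation
h_s = np.mean(2.0 * p * (1.0 - p))   # mean within-subpopulation heterozygosity
p_bar = p.mean()
h_t = 2.0 * p_bar * (1.0 - p_bar)    # total expected heterozygosity
print("F_ST =", round((h_t - h_s) / h_t, 3))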
Abstract:
Episodic memories for autobiographical events that happen in unique spatiotemporal contexts are central to defining who we are. Yet, before 2 years of age, children are unable to form or store episodic memories for recall later in life, a phenomenon known as infantile amnesia. Here, we studied the development of allocentric spatial memory, a fundamental component of episodic memory, in two versions of a real-world memory task requiring 18-month- to 5-year-old children to search for rewards hidden beneath cups distributed in an open-field arena. Whereas children 25-42 months old were not capable of discriminating three reward locations among 18 possible locations in the absence of local cues marking these locations, children older than 43 months found the reward locations reliably. These results support previous findings suggesting that allocentric spatial memory, if present, is only rudimentary in children under 3.5 years of age. However, when tested with only one reward location among four possible locations, children 25-39 months old found the reward reliably in the absence of local cues, whereas 18-23-month-olds did not. Our findings thus show that the ability to form a basic allocentric representation of the environment is present by 2 years of age, and that its emergence coincides temporally with the offset of infantile amnesia. However, the ability of children to distinguish and remember closely related spatial locations improves from 2 to 3.5 years of age, a developmental period marked by persistent deficits in long-term episodic memory known as childhood amnesia. These findings support the hypothesis that the differential maturation of distinct hippocampal circuits contributes to the emergence of specific memory processes during early childhood.
Abstract:
OBJECTIVES: Doxorubicin uptake, leakage, and spatial regional blood flow and drug distribution were compared for antegrade, retrograde, and combined antegrade and retrograde isolated lung perfusion, and for pulmonary artery infusion by endovascular inflow occlusion (blood flow occlusion), as opposed to intravenous administration, in a porcine model. METHODS: White pigs underwent single-pass lung perfusion with doxorubicin (320 μg/mL), labeled 99mTc-microspheres, and Indian ink. Visual assessment of the ink distribution and perfusion scintigraphy of the perfused lung were performed. 99mTc activity and doxorubicin levels were measured by gamma counting and high-performance liquid chromatography in 15 tissue samples from each perfused lung at predetermined locations. RESULTS: Overall doxorubicin uptake in the perfused lung was significantly higher (P = .001) and the plasma concentration significantly lower (P < .0001) after all isolated lung perfusion techniques compared with intravenous administration, without differences between them. Pulmonary artery infusion (blood flow occlusion) showed an equally high doxorubicin uptake in the perfused lung but higher systemic leakage than surgical isolated lung perfusion (P < .0001). The geometric coefficients of variation of the doxorubicin lung tissue levels were 175%, 279%, 226%, and 151% for antegrade, retrograde, combined antegrade and retrograde isolated lung perfusion, and pulmonary artery infusion by endovascular inflow occlusion (blood flow occlusion), respectively, compared with 51% for intravenous administration (P = .09). 99mTc activity measurements of the samples paralleled the doxorubicin level measurements, indicating a trend toward more heterogeneous spatial regional blood flow and drug distribution after isolated lung perfusion and blood flow occlusion compared with intravenous administration. CONCLUSIONS: Cytostatic lung perfusion results in a high overall doxorubicin uptake, which is, however, heterogeneously distributed within the perfused lung.
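As a pointer for the statistic used above, here is a minimal sketch of the geometric coefficient of variation, which is derived from the standard deviation of the log-transformed tissue levels (the sample values are invented):

# Minimal sketch: geometric coefficient of variation (GCV) of tissue drug
# levels, GCV = sqrt(exp(sd_log^2) - 1), where sd_log is the standard
# deviation of the log-transformed values. Sample values are invented.
import numpy as np

levels = np.array([2.1, 0.4, 5.3, 1.2, 0.8, 3.9])  # hypothetical doxorubicin levels
sd_log = np.log(levels).std(ddof=1)
gcv = np.sqrt(np.exp(sd_log ** 2) - 1.0)
print(f"geometric CV = {gcv:.0%}")

Large GCV values such as the 151%-279% reported for the perfusion techniques indicate a strongly skewed, heterogeneous distribution of drug levels.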
Abstract:
PURPOSE: The purposes of this study were (1) to develop a high-resolution 3-T magnetic resonance angiography (MRA) technique with an in-plane resolution approximating that of multidetector coronary computed tomography (MDCT) and a voxel size of 0.35 × 0.35 × 1.5 mm³, and (2) to investigate the image quality of this technique in healthy participants and, preliminarily, in patients with known coronary artery disease (CAD). MATERIALS AND METHODS: A 3-T coronary MRA technique optimized for an image acquisition voxel as small as 0.35 × 0.35 × 1.5 mm³ (high-resolution coronary MRA [HRC]) was implemented, and the coronary arteries of 22 participants were imaged. These included 11 healthy participants (average age, 28.5 years; 5 men) and 11 participants with CAD (average age, 52.9 years; 5 women) as identified on MDCT. In addition, the 11 healthy participants were imaged using a method with a more common spatial resolution of 0.7 × 1 × 3 mm³ (regular-resolution coronary MRA [RRC]). Qualitative and quantitative comparisons were made between the 2 MRA techniques. RESULTS: Normal vessels and CAD lesions were successfully depicted at 350 × 350 μm² in-plane resolution with adequate signal-to-noise ratio (SNR) and contrast-to-noise ratio. The CAD findings were consistent between MDCT and HRC. HRC showed a 47% improvement in sharpness despite reductions in SNR (by 72%) and contrast-to-noise ratio (by 86%) compared with RRC. CONCLUSION: This study, as a first step toward substantial improvement in the resolution of coronary MRA, demonstrates the feasibility of obtaining at 3 T a spatial resolution that approximates that of MDCT, with acquisition in-plane pixel dimensions as small as 350 × 350 μm² and a 1.5-mm slice thickness. Although SNR is lower, the images have improved sharpness, resulting in image quality that allows qualitative identification of disease sites on MRA consistent with MDCT.
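As a rough plausibility check on the reported SNR penalty (a sketch resting on the standard rule of thumb that, with other acquisition parameters held fixed, SNR scales approximately with voxel volume):

# Minimal sketch: voxel-volume ratio of the two protocols described above.
# With other parameters fixed, SNR scales roughly with voxel volume, so a
# large SNR drop at the higher resolution is expected.
hrc_volume = 0.35 * 0.35 * 1.5  # mm^3, high-resolution protocol (HRC)
rrc_volume = 0.7 * 1.0 * 3.0    # mm^3, regular-resolution protocol (RRC)
print(f"HRC voxel volume: {hrc_volume:.3f} mm^3")                # ~0.184 mm^3
print(f"RRC voxel volume: {rrc_volume:.1f} mm^3")                # 2.1 mm^3
print(f"volume ratio (HRC/RRC): {hrc_volume / rrc_volume:.3f}")  # ~0.088

That the measured SNR loss (72%) is smaller than this naive volume ratio alone would suggest is consistent with the two protocols differing in more than voxel size.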
Abstract:
Knockout mice lacking alpha1b noradrenergic receptors were tested in behavioural experiments to assess a possible effect of the absence of this receptor on reaction to novelty and spatial orientation. Reaction to novelty was tested in two experiments. In the first, the mice's latency to exit the first compartment of a two-compartment set-up was measured. The knockout mice were faster to emerge than their littermate controls. They were then tested in an open field, to which new objects were added on the second trial. In the open field without objects (first trial), the knockout mice showed greater locomotor activity (path length). The same mice then showed enhanced exploration of the newly introduced objects relative to controls. The spatial orientation experiments were done on a homing board and in the water maze. The homing board did not yield a significant difference between the knockout and the control mice. Both groups showed impaired performance when the proximal (olfactory) and distal (visual) cues were disrupted by rotation of the table. In the water maze, however, the alpha1b(-/-) mice were unable to solve the task (acquisition and retention), whereas the control mice showed good acquisition and retention. The knockout mice's inability to learn to reach the submerged platform was not due to an inability to swim, as they were as good as their control littermates at reaching the platform when it was visible.
Abstract:
Pantomimes of object use require accurate representations of movements and selection of the most task-relevant gestures. Prominent models of praxis, corroborated by functional neuroimaging studies, predict a critical role for left parietal cortices in pantomime and propose that these areas store representations of tool use. In contrast, lesion data point to the involvement of left inferior frontal areas, suggesting that defective selection of movement features is the cause of pantomime errors. We conducted a large-scale voxel-based lesion-symptom mapping analysis of configural/spatial (CS) and body-part-as-object (BPO) pantomime errors in 150 left and right brain-damaged patients. Our results confirm the left-hemisphere dominance of pantomime. Both types of error were associated with damage to left inferior frontal regions in tumor and stroke patients. While CS pantomime errors were associated with left temporoparietal lesions in both stroke and tumor patients, these errors appeared less associated with parietal areas in stroke than in tumor patients, and less associated with temporal areas in tumor than in stroke patients. BPO errors were associated with left inferior frontal lesions in both tumor and stroke patients. Collectively, our results reveal a left intrahemispheric dissociation for various aspects of pantomime, but with a nonspecific role for inferior frontal regions.
Abstract:
Summary: Forensic science, both as a source of and as a remedy for errors potentially leading to judicial error, has been studied empirically in this research. The tools used were a comprehensive literature review, experimental tests on the influence of observational biases in fingermark comparison, and semi-structured interviews with heads of forensic science laboratories/units in Switzerland and abroad. For the literature review, the areas studied include: the quality of forensic science work in general, the complex interaction between science and law, and specific propositions as to error sources not directly related to the interaction between law and science. A list of potential error sources, covering the entire process from the crime scene to the writing of the report, was also established. For the empirical tests, the ACE-V (Analysis, Comparison, Evaluation, and Verification) process of fingermark comparison was selected as an area of special interest for the study of observational biases, owing to its heavy reliance on visual observation and to recent cases of misidentification. Results of the tests performed with forensic science students suggest that decision-making stages are the most vulnerable to stimuli inducing observational biases. For the semi-structured interviews, eleven senior forensic scientists answered questions on several subjects, for example potential and existing error sources in their work, the limitations of what can be done with forensic science, and the possibilities and tools for minimising errors. Training and education to improve the quality of forensic science were discussed, together with possible solutions for minimising the risk of errors in forensic science. In addition, the length of time that samples of physical evidence are kept was determined. Results show considerable agreement on most subjects among the international participants. Their opinions on possible explanations for the occurrence of such problems, and on the relative weight of such errors in the three stages of crime scene, laboratory, and report writing, disagree, however, with opinions widely represented in the existing literature. Through the present research it was therefore possible to obtain a better view of the interaction between forensic science and judicial error, and to propose practical recommendations to minimise the occurrence of such errors.
Abstract:
OBJECTIVES: This study sought to establish an accurate and reproducible T(2)-mapping cardiac magnetic resonance (CMR) methodology at 3 T and to evaluate it in healthy volunteers and in patients with myocardial infarction. BACKGROUND: Myocardial edema affects the T(2) relaxation time on CMR, and T(2)-mapping has therefore been established to characterize edema at 1.5 T. A 3-T implementation designed for longitudinal studies, and aimed at guiding and monitoring therapy, remained to be implemented, thoroughly characterized, and evaluated in vivo. METHODS: A free-breathing, navigator-gated radial CMR pulse sequence with an adiabatic T(2)-preparation module and an empirical fitting equation for T(2) quantification was optimized using numerical simulations and validated at 3 T in a phantom study. Its reproducibility for myocardial T(2) quantification was then ascertained in healthy volunteers and improved using an external reference phantom with known T(2). In a small cohort of patients with established myocardial infarction, the local T(2) value and the extent of the edematous region were determined and compared with conventional T(2)-weighted CMR and, where available, x-ray coronary angiography. RESULTS: The numerical simulations and the phantom study demonstrated that the empirical fitting equation is significantly more accurate for T(2) quantification than the more conventional exponential-decay fit. The volunteer study consistently demonstrated a reproducibility error as low as 2 ± 1% using the external reference phantom and an average myocardial T(2) of 38.5 ± 4.5 ms. Intraobserver and interobserver variability in the volunteers were -0.04 ± 0.89 ms (p = 0.86) and -0.23 ± 0.91 ms (p = 0.87), respectively. In the infarction patients, the T(2) in edema was 62.4 ± 9.2 ms and was consistent with the x-ray angiographic findings. The extent of the edematous region by T(2)-mapping also correlated well with that from the T(2)-weighted images (r = 0.91). CONCLUSIONS: The new, well-characterized methodology enables robust and accurate cardiac T(2)-mapping at 3 T with high spatial resolution, while the addition of a reference phantom improves reproducibility. This technique may be well suited for longitudinal studies in patients with suspected or established heart disease.
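The study's empirical fitting equation is not given in the abstract; for orientation, here is a minimal sketch of the conventional monoexponential fit it is compared against, S(TE) = A·exp(-TE/T2). The preparation times and signal values are invented.

# Minimal sketch: conventional monoexponential T2 fit, S = A * exp(-TE/T2).
# The study's own empirical fitting equation is not specified in the
# abstract; data points below are invented.
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(te, a, t2):
    return a * np.exp(-te / t2)

te = np.array([0.0, 24.0, 45.0, 65.0])       # hypothetical T2-prep times (ms)
signal = np.array([1.00, 0.55, 0.32, 0.20])  # hypothetical normalized signal
(a_fit, t2_fit), _ = curve_fit(mono_exp, te, signal, p0=(1.0, 40.0))
print(f"fitted T2 = {t2_fit:.1f} ms")        # ~40 ms with these invented data

A value near the reported average myocardial T2 of 38.5 ms would be expected for healthy tissue, with edema pushing T2 toward the ~62 ms reported in the infarct patients.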
Abstract:
Auditory spatial deficits occur frequently after hemispheric damage; a previous case report suggested that explicit awareness of sound positions, as in sound localisation, can be impaired while the implicit use of auditory cues for the segregation of sound objects in noisy environments remains preserved. By systematically assessing patients with a first hemispheric lesion, we have shown that (1) explicit and/or implicit use can be disturbed; (2) dissociations between impaired explicit and preserved implicit use occur rather frequently; and (3) different types of sound localisation deficits can be associated with preserved implicit use. Conceptually, the dissociation between explicit and implicit use may reflect the dual-stream dichotomy of auditory processing. Our results speak in favour of systematic assessment of auditory spatial functions in clinical settings, especially when adaptation to the auditory environment is at stake. Further, systematic studies are needed to link deficits of explicit vs. implicit use to disability in everyday activities, to design appropriate rehabilitation strategies, and to ascertain how far the explicit and implicit use of spatial cues can be retrained following brain damage.
Parts, places, and perspectives: a theory of spatial relations based on mereotopology and convexity
Abstract:
This thesis proposes to carry on the philosophical work begun in Casati and Varzi's seminal book Parts and Places, by extending their general reflections on the basic formal structure of spatial representation beyond mereotopology and absolute location to the question of perspectives and perspective-dependent spatial relations. We show how, on the basis of a conceptual analysis of such notions as perspective and direction, a mereotopological theory with convexity can express perspectival spatial relations in a strictly qualitative framework. We start by introducing a particular mereotopological theory, AKGEMT, and argue that it constitutes an adequate core for a theory of spatial relations. Two features of AKGEMT are of particular importance: AKGEMT is an extensional mereotopology, implying that sameness of proper parts is a sufficient and necessary condition for identity, and it allows for (lower-dimensional) boundary elements in its domain of quantification. We then discuss an extension of AKGEMT, AKGEMTS, which results from the addition of a binary segment operator whose interpretation is that of a straight line segment between mereotopological points. Based on existing axiom systems in standard point-set topology, we propose an axiomatic characterisation of the segment operator and show that it is strong enough to sustain complex properties of a convexity predicate and a convex hull operator. We compare our segment-based characterisation of the convex hull to Cohn et al.'s axioms for the convex hull operator, arguing that our notion of convexity is significantly stronger. The discussion of AKGEMTS defines the background theory of spatial representation on which the developments in the second part of this thesis are built. The second part deals with perspectival spatial relations in two-dimensional space, i.e., such relations as those expressed by 'in front of', 'behind', 'to the left/right of', etc., and develops a qualitative formalism for perspectival relations within the framework of AKGEMTS. Two main claims are defended in part 2: that perspectival relations in two-dimensional space are four-place relations of the kind R(x, y, z, w), to be read as x is R-related to y as z looks at w; and that these four-place structures can be satisfactorily expressed within the qualitative theory AKGEMTS. To defend these two claims, we start by arguing for a unified account of perspectival relations, thus rejecting the traditional distinction between 'relative' and 'intrinsic' perspectival relations. We present a formal theory of perspectival relations in the framework of AKGEMTS, deploying the idea that perspectival relations in two-dimensional space are four-place relations with a locational and a perspectival part, and show how this four-place structure leads to a unified framework for perspectival relations. Finally, we present a philosophical motivation for the idea that perspectival relations are four-place, cashing out the thesis that perspectives are vectorial properties and arguing that vectorial properties are relations between spatial entities. Using Fine's notion of "qua objects" for an analysis of points of view, we show at last how our four-place approach to perspectival relations compares to more traditional understandings.
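To give a flavour of the formal machinery described above, here is a minimal LaTeX sketch of two representative ingredients: the extensionality principle the abstract attributes to AKGEMT, and the four-place perspectival relation scheme. The exact axiomatization used in the thesis may differ.

% Extensionality (sketch): objects with proper parts that share all their
% proper parts are identical; PP is the proper-parthood predicate.
\forall x \, \forall y \, \bigl[ (\exists z \, PP(z,x) \lor \exists z \, PP(z,y)) \land \forall z \, (PP(z,x) \leftrightarrow PP(z,y)) \rightarrow x = y \bigr]

% Four-place perspectival relation scheme: R(x, y, z, w) reads
% "x is R-related to y as z looks at w", e.g. R = LeftOf in the plane.
R(x, y, z, w)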