924 results for spatial data analysis
Abstract:
After a rockfall event, a usual post-event survey includes qualitative volume estimation, trajectory mapping and determination of detachment zones. However, quantitative measurements are not usually made. Additional quantitative information could be useful in determining the spatial occurrence of rockfall events and in quantifying their size. Seismic measurements are suitable for detection purposes since they are non-invasive and relatively inexpensive. Moreover, seismic techniques could provide important information on rockfall size and the location of impacts. On 14 February 2007 the Avalanche Group of the University of Barcelona obtained the seismic data generated by an artificially triggered rockfall event at the Montserrat massif (near Barcelona, Spain), carried out in order to purge a slope. Two three-component seismic stations were deployed in the area, about 200 m from the explosion point that triggered the rockfall. Seismic signals and video images were obtained simultaneously. The initial volume of the rockfall was estimated at 75 m³ by laser scanner data analysis. After the explosion, dozens of boulders ranging from 10⁻⁴ to 5 m³ in volume impacted the ground at different locations. The blocks fell onto a terrace 120 m below the release zone. The impact generated a small continuous mass movement composed of a mixture of rocks, sand and dust that ran down the slope and reached the road 60 m below. Time, time-frequency evolution and particle motion analyses of the seismic records were performed, together with an estimation of the seismic energy. The results are as follows: (1) a rockfall event generates seismic signals with specific characteristics in the time domain; (2) the seismic signals generated by the mass movement show a time-frequency evolution different from that of other seismogenic sources (e.g. earthquakes, explosions or a single rock impact), a feature that could be used for detection purposes; (3) particle motion plot analysis shows that locating the rock impacts using two stations is feasible; (4) the feasibility and validity of seismic methods for the detection of rockfall events, their localization and size determination are confirmed.
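The time-frequency analysis described above can be sketched with a naive short-time Fourier transform. This is a minimal, self-contained illustration on a synthetic two-tone signal; the window length, hop and the signal itself are assumptions for the example, not the authors' actual processing chain.

```python
import math

def stft_magnitude(signal, win, hop):
    """Naive short-time Fourier transform: DFT magnitudes per frame (O(win^2) each)."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        mags = []
        for k in range(win // 2):  # keep only non-negative frequency bins
            re = sum(seg[n] * math.cos(2 * math.pi * k * n / win) for n in range(win))
            im = -sum(seg[n] * math.sin(2 * math.pi * k * n / win) for n in range(win))
            mags.append(math.hypot(re, im))
        frames.append(mags)
    return frames

# synthetic record whose dominant frequency jumps from bin 4 to bin 8 halfway through
sig = [math.sin(2 * math.pi * 4 * t / 64) for t in range(256)] + \
      [math.sin(2 * math.pi * 8 * t / 64) for t in range(256)]
spec = stft_magnitude(sig, win=64, hop=64)
peaks = [max(range(len(f)), key=f.__getitem__) for f in spec]  # [4, 4, 4, 4, 8, 8, 8, 8]
```

A real rockfall record would show a broader, more gradually evolving spectrum over time, which is the kind of time-frequency signature the study analyses for discrimination from other seismogenic sources.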
Abstract:
The present study proposes a modification of one of the most frequently applied effect size procedures in single-case data analysis: the percent of nonoverlapping data (PND). In contrast to other techniques, the calculation and interpretation of this procedure are straightforward, and it can easily be complemented by visual inspection of the graphed data. Although the percent of nonoverlapping data has been found to perform reasonably well with N = 1 data, the magnitude of effect estimates it yields can be distorted by trend and autocorrelation. Therefore, the data correction procedure focuses on removing the baseline trend from the data prior to estimating the change produced in the behavior by the intervention. A simulation study is carried out in order to compare the original and modified procedures under several experimental conditions. The results suggest that the new proposal is unaffected by trend and autocorrelation and can be used in cases of unstable baselines and sequentially related measurements.
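Since the abstract does not spell out the correction formulas, the following sketch only illustrates the general idea under simple assumptions: PND counts treatment points beyond the baseline extreme, and the correction fits an OLS trend to the baseline alone and subtracts it from all measurements before computing PND.

```python
def pnd(baseline, treatment, expect="increase"):
    """Percent of nonoverlapping data: share of treatment points beyond the baseline extreme."""
    ref = max(baseline) if expect == "increase" else min(baseline)
    if expect == "increase":
        hits = sum(1 for y in treatment if y > ref)
    else:
        hits = sum(1 for y in treatment if y < ref)
    return 100.0 * hits / len(treatment)

def detrend(baseline, treatment):
    """Fit an OLS trend to the baseline only, then subtract it from all measurements."""
    n = len(baseline)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(baseline) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, baseline))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    resid = [y - (intercept + slope * x)
             for x, y in enumerate(baseline + treatment)]
    return resid[:n], resid[n:]

base = [2, 3, 4, 5]    # steadily rising baseline
treat = [6, 7, 8, 9]   # mere continuation of the trend, no real effect
raw = pnd(base, treat)                  # 100.0: the trend alone inflates the estimate
corrected = pnd(*detrend(base, treat))  # 0.0: the apparent effect vanishes
```

The toy series shows exactly the distortion the study targets: an uncorrected PND of 100% despite there being no intervention effect beyond the pre-existing baseline trend.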
Abstract:
The present study focuses on single-case data analysis, and specifically on two procedures for quantifying differences between baseline and treatment measurements. The first technique tested is based on generalized least squares regression analysis and is compared to a proposed non-regression technique that yields similar information. The comparison is carried out on generated data representing a variety of patterns (i.e., independent measurements; different underlying serial dependence processes; constant or phase-specific autocorrelation and data variability; different types of trend; and slope and level change). The results suggest that the two techniques perform adequately over a wide range of conditions and that researchers can use both of them with certain guarantees. The regression-based procedure offers more efficient estimates, whereas the proposed non-regression procedure is more sensitive to intervention effects. Considering current and previous findings, some tentative recommendations are offered to applied researchers to help them choose among the plurality of single-case data analysis techniques.
Abstract:
The focus of my PhD research was the concept of modularity. In the last 15 years, modularity has become a classic term in different fields of biology. At the conceptual level, a module is a set of interacting elements that remain mostly independent from the elements outside the module. I used modular analysis techniques to study gene expression evolution in vertebrates. In particular, I identified ``natural'' modules of gene expression in mouse and human, and I showed that the expression of organ-specific and system-specific genes tends to be conserved between vertebrates as distant as mammals and fishes. Also with a modular approach, I studied patterns of developmental constraints on transcriptome evolution. I showed that neither of the two commonly accepted models of the evolution of embryonic development (``evo-devo'') is exclusively valid. In particular, I found that the conservation of the sequences of regulatory regions is highest during mid-development of zebrafish, which supports the ``hourglass model''. In contrast, events of gene duplication and the introduction of new genes are rarest in early development, which supports the ``early conservation model''. In addition to the biological insights into transcriptome evolution, I have also discussed in detail the advantages of modular approaches in large-scale data analysis. Moreover, I re-analyzed several studies (published in high-ranking journals) and showed that their conclusions do not hold up under detailed analysis. This demonstrates that complex analysis of high-throughput data requires cooperation between biologists, bioinformaticians, and statisticians.
Abstract:
The present research studies the spatial patterns of the distribution of the Swiss population (DSP). This description is carried out using a wide variety of global spatial structural analysis tools, such as topological, statistical and fractal measures, which enable the estimation of the degree of spatial clustering of a point pattern. Particular attention is given to the analysis of multifractality in order to characterize the spatial structure of the DSP at different scales. This is achieved by measuring the generalized q-dimensions and the singularity spectrum. This research is based on high-quality data from the Swiss Population Census of the year 2000 at a hectometric resolution (100 x 100 m grid) issued by the Swiss Federal Statistical Office (FSO).
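The generalized q-dimensions mentioned can be estimated by box counting. The sketch below is a minimal single-scale illustration on a hypothetical uniform point set in the unit square; a real multifractal analysis would regress log-sums over many scales. A uniform pattern simply recovers D_q ≈ 2 for every q (a monofractal), whereas a clustered population pattern would yield q-dependent dimensions.

```python
import math

def generalized_dimension(points, q, eps):
    """Single-scale box-counting estimate of the generalized dimension D_q."""
    counts = {}
    for x, y in points:
        key = (int(x / eps), int(y / eps))  # grid cell of side eps
        counts[key] = counts.get(key, 0) + 1
    n = len(points)
    probs = [c / n for c in counts.values()]
    if q == 1:  # information dimension (limit q -> 1)
        return sum(p * math.log(p) for p in probs) / math.log(eps)
    return (1.0 / (q - 1)) * math.log(sum(p ** q for p in probs)) / math.log(eps)

# a uniform point pattern fills the plane: every D_q should be close to 2
pts = [((i + 0.5) / 32, (j + 0.5) / 32) for i in range(32) for j in range(32)]
d0 = generalized_dimension(pts, 0, eps=1.0 / 8)  # capacity (box-counting) dimension
d2 = generalized_dimension(pts, 2, eps=1.0 / 8)  # correlation dimension
```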
Abstract:
Due to advances in sensor networks and remote sensing technologies, the acquisition and storage rates of meteorological and climatological data increase every day and call for novel and efficient processing algorithms. A fundamental problem of data analysis and modeling is the spatial prediction of meteorological variables in complex orography, which serves, among other purposes, extended climatological analyses, the assimilation of data into numerical weather prediction models, the preparation of inputs to hydrological models, and real-time monitoring and short-term forecasting of weather.

In this thesis, a new framework for spatial estimation is proposed by taking advantage of a class of algorithms emerging from statistical learning theory. Nonparametric kernel-based methods for nonlinear data classification, regression and target detection, known as support vector machines (SVM), are adapted for the mapping of meteorological variables in complex orography.

With the advent of high-resolution digital elevation models, the field of spatial prediction has met new horizons. In fact, by exploiting image processing tools along with physical heuristics, a large number of terrain features which account for the topographic conditions at multiple spatial scales can be extracted. Such features are highly relevant for the mapping of meteorological variables because they control a considerable part of the spatial variability of meteorological fields in the complex Alpine orography. For instance, patterns of orographic rainfall, wind speed and cold air pools are known to be correlated with particular terrain forms, e.g. convex/concave surfaces and upwind sides of mountain slopes.

Kernel-based methods are employed to learn the nonlinear statistical dependence which links the multidimensional space of geographical and topographic explanatory variables to the variable of interest, that is, the wind speed as measured at weather stations or the occurrence of orographic rainfall patterns as extracted from sequences of radar images. Compared to low-dimensional models integrating only the geographical coordinates, the proposed framework opens a way to regionalize meteorological variables which are multidimensional in nature and rarely show spatial autocorrelation in the original space, making the use of classical geostatistics cumbersome.

The challenges explored during the thesis are manifold. First, the complexity of the models is optimized to impose appropriate smoothness properties and reduce the impact of noisy measurements. Secondly, a multiple kernel extension of SVM is considered to select the multiscale features which explain most of the spatial variability of wind speed. Then, SVM target detection methods are implemented to describe the orographic conditions which cause persistent and stationary rainfall patterns. Finally, the optimal splitting of the data is studied to estimate realistic performances and confidence intervals characterizing the uncertainty of the predictions.

The resulting maps of average wind speeds find applications in renewable resource assessment and open a route to decreasing the temporal scale of analysis to meet hydrological requirements. Furthermore, the maps depicting the susceptibility to orographic rainfall enhancement can be used to improve current radar-based quantitative precipitation estimation and forecasting systems and to generate stochastic ensembles of precipitation fields conditioned upon the orography.
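To make the kernel idea concrete, here is a minimal self-contained sketch of kernel-based regression from terrain features. It uses plain kernel ridge regression, a close relative of SVR, rather than the thesis's SVM machinery, and the (elevation, slope) features and wind-speed targets are entirely made up for illustration.

```python
import math

def rbf(a, b, gamma):
    """Gaussian (RBF) kernel between two feature vectors."""
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit(X, y, gamma, lam):
    """Kernel ridge: alpha = (K + lam*I)^-1 y."""
    K = [[rbf(a, b, gamma) + (lam if i == j else 0.0)
          for j, b in enumerate(X)] for i, a in enumerate(X)]
    return solve(K, y)

def predict(X, alpha, x, gamma):
    """Prediction is a kernel-weighted sum over the training samples."""
    return sum(a * rbf(xi, x, gamma) for a, xi in zip(alpha, X))

# hypothetical (elevation, slope) features and wind-speed targets
X = [(0.0, 0.0), (0.5, 0.2), (1.0, 0.8), (0.2, 1.0)]
y = [1.0, 1.5, 3.0, 2.2]
alpha = fit(X, y, gamma=2.0, lam=1e-4)
fitted = [predict(X, alpha, x, gamma=2.0) for x in X]  # close to y on the training points
```

The point of the kernel trick here is that the model never works in the raw coordinate space: any number of multiscale terrain features can be appended to the input tuples without changing the algorithm.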
Abstract:
INTRODUCTION: infants hospitalised in neonatology are inevitably exposed to repeated pain. Premature infants are particularly vulnerable because they are hypersensitive to pain and demonstrate diminished behavioural responses to it. They are therefore at risk of developing short- and long-term complications if pain remains untreated. CONTEXT: compared to acute pain, there is limited evidence in the literature on prolonged pain in infants. However, its prevalence is reported to be between 20 and 40%. OBJECTIVE: this single case study aimed to identify the bio-contextual characteristics of neonates who experienced prolonged pain. METHODS: this study was carried out in the neonatal unit of a tertiary referral centre in Western Switzerland. A retrospective data analysis of the profiles of seven infants who experienced prolonged pain was performed using five different data sources. RESULTS: the mean gestational age of the seven infants was 32 weeks. The main diagnoses included prematurity and respiratory distress syndrome. The observations (N=55) showed that the participants underwent on average 21.8 (SD 6.9) painful procedures per day that were estimated to be of moderate to severe intensity. Of the 164 recorded pain scores (2.9 pain assessments/day/infant), 14.6% confirmed acute pain. Among those experiencing acute pain, analgesia was given in 16.6% of cases and 79.1% received no analgesia. CONCLUSION: this study highlighted the difficulty of managing pain in neonates who are exposed to numerous painful procedures. Pain in this population remains under-evaluated and, as a result, undertreated. Results of this study showed that nursing documentation related to pain assessment is not systematic. Regular assessment and documentation of acute and prolonged pain are recommended. This could be achieved with clear guidelines on the Assessment Intervention Reassessment (AIR) cycle, with validated measures adapted to neonates. The adequacy of pain assessment is a prerequisite for appropriate pain relief in neonates.
Abstract:
Textual autocorrelation is a broad and pervasive concept, referring to the similarity between nearby textual units: lexical repetitions along consecutive sentences, semantic association between neighbouring lexemes, persistence of discourse types (narrative, descriptive, dialogal...) and so on. Textual autocorrelation can also be negative, as illustrated by alternating phonological or morpho-syntactic categories, or the succession of word lengths. This contribution proposes a general Markov formalism for textual navigation, inspired by spatial statistics. The formalism can express well-known constructs in textual data analysis, such as term-document matrices, reference and hyperlink navigation, (web) information retrieval and, in particular, textual autocorrelation, as measured by Moran's I relative to the exchange matrix associated with neighbourhoods of various possible types. Four case studies (word length alternation, lexical repulsion, part-of-speech autocorrelation, and semantic autocorrelation) illustrate the theory. In particular, one observes a short-range repulsion between nouns together with a short-range attraction between verbs, both at the lexical and semantic levels.
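Moran's I with an exchange (neighbourhood weight) matrix can be computed directly. The sketch below uses two toy binary sequences and a simple left/right-neighbour exchange matrix, both illustrative assumptions rather than the paper's corpora; it shows how alternation yields negative autocorrelation and persistence yields positive autocorrelation.

```python
def morans_i(values, w):
    """Moran's I of `values` under the neighbourhood weight matrix w[i][j]."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(w[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    total_w = sum(w[i][j] for i in range(n) for j in range(n))
    return (n / total_w) * num / den

def chain_exchange(n):
    """Exchange matrix linking each textual unit to its immediate neighbours."""
    return [[1 if abs(i - j) == 1 else 0 for j in range(n)] for i in range(n)]

w = chain_exchange(8)
alternating = [1, 0, 1, 0, 1, 0, 1, 0]  # e.g. alternating word lengths
clustered = [1, 1, 1, 1, 0, 0, 0, 0]    # e.g. a persistent discourse type
i_alt = morans_i(alternating, w)  # ≈ -1.0: maximal short-range repulsion
i_clu = morans_i(clustered, w)    # ≈ 0.71: positive textual autocorrelation
```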
Abstract:
Commercially available instruments for road-side data collection take highly limited measurements, require extensive manual input, or are too expensive for widespread use. However, inexpensive computer vision techniques for digital video analysis can be applied to automate the monitoring of driver, vehicle, and pedestrian behaviors. These techniques can measure safety-related variables that cannot be easily measured using existing sensors. The use of these techniques will lead to an improved understanding of the decisions made by drivers at intersections. These automated techniques allow the collection of large amounts of safety-related data in a relatively short amount of time. There is a need to develop an easily deployable system to utilize these new techniques. This project implemented and tested a digital video analysis system for use at intersections. A prototype video recording system was developed for field deployment. A computer interface was implemented and served to simplify and automate the data analysis and the data review process. Driver behavior was measured at urban and rural non-signalized intersections. Recorded digital video was analyzed and used to test the system.
Abstract:
We conducted this study to determine the relative influence of various mechanical and patient-related factors on the incidence of dislocation after primary total hip arthroplasty (THA). Of 2,023 THAs, 21 patients who had at least 1 dislocation were compared with a control group of 21 patients without dislocation, matched for age, gender, pathology, and year of surgery. Implant positioning, seniority of the surgeon, American Society of Anesthesiologists (ASA) score, and diminished motor coordination were recorded. Data analysis included univariate and multivariate methods. The dislocation risk was 6.9 times higher if total anteversion was not between 40 degrees and 60 degrees and 10 times higher in patients with high ASA scores. Surgeons should pay attention to total anteversion (cup and stem) of THA. The ASA score should be part of the preoperative assessment of the dislocation risk.
Abstract:
The present paper advocates the creation of a federated, hybrid database in the cloud, integrating law data from all available public sources in one single open-access system and adding, in the process, relevant metadata to the indexed documents, including the identification of social and semantic entities and the relationships between them, using linked open data techniques and standards such as RDF. Examples of potential benefits and applications of this approach are also provided, including, among others, experiences from our previous research, in which data integration, graph databases and social and semantic network analysis were used to identify power relations, litigation dynamics and cross-reference patterns both intra- and inter-institutionally, covering most of the world's international economic courts.
Abstract:
The quality of environmental data analysis and the propagation of errors are heavily affected by the representativity of the initial sampling design [CRE 93, DEU 97, KAN 04a, LEN 06, MUL 07]. Geostatistical methods such as kriging rely on field samples, whose spatial distribution is crucial for the correct detection of the phenomena. The literature on the design of environmental monitoring networks (MN) is extensive, and several interesting books have recently been published [GRU 06, LEN 06, MUL 07] in order to clarify the basic principles of spatial sampling design (monitoring network optimization); an approach based on Support Vector Machines has also been proposed. Nonetheless, modelers often receive real data coming from environmental monitoring networks that suffer from problems of non-homogeneity (clustering). Clustering can be related to preferential sampling or to the impossibility of reaching certain regions.
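One standard remedy for clustered monitoring data, not necessarily the one adopted in this text, is cell declustering, which down-weights samples sitting in crowded grid cells so that preferentially sampled regions do not dominate subsequent estimates. A minimal sketch with hypothetical coordinates:

```python
def cell_declustering_weights(points, cell):
    """Weight each sample inversely to the occupancy of its grid cell; weights sum to n."""
    counts, keys = {}, []
    for x, y in points:
        key = (int(x // cell), int(y // cell))
        keys.append(key)
        counts[key] = counts.get(key, 0) + 1
    n, m = len(points), len(counts)  # number of samples, number of occupied cells
    return [n / (m * counts[k]) for k in keys]

# three clustered samples share one cell; one isolated sample sits far away
pts = [(0.1, 0.1), (0.2, 0.1), (0.1, 0.2), (5.0, 5.0)]
w = cell_declustering_weights(pts, cell=1.0)  # clustered points get 2/3 each, isolated one gets 2.0
```

The declustered mean `sum(wi * vi for ...) / n` then treats the isolated region on an equal footing with the oversampled one; in practice the cell size is tuned, e.g. by minimizing the declustered mean over a range of cell sizes.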
Abstract:
Linezolid is used off-label to treat multidrug-resistant tuberculosis (MDR-TB) in the absence of systematic evidence. We performed a systematic review and meta-analysis of the efficacy, safety and tolerability of linezolid-containing regimens based on individual data analysis. 12 studies (11 countries from three continents) reporting complete information on the safety, tolerability and efficacy of linezolid-containing regimens in treating MDR-TB cases were identified following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Meta-analysis was performed using the individual data of 121 patients with a definite treatment outcome (cure, completion, death or failure). Most MDR-TB cases achieved sputum smear (86 (92.5%) out of 93) and culture (100 (93.5%) out of 107) conversion after treatment with individualised regimens containing linezolid (median (interquartile range) times to smear and culture conversion were 43.5 (21-90) and 61 (29-119) days, respectively), and 99 (81.8%) out of 121 patients were successfully treated. No significant differences were detected in the subgroup efficacy analysis (daily linezolid dosage ≤600 mg versus >600 mg). Adverse events were observed in 63 (58.9%) out of 107 patients, of which 54 (68.4%) out of 79 were major adverse events, including anaemia (38.1%), peripheral neuropathy (47.1%), gastro-intestinal disorders (16.7%), optic neuritis (13.2%) and thrombocytopenia (11.8%). The proportion of adverse events was significantly higher when the linezolid daily dosage exceeded 600 mg. The results suggest excellent efficacy but also the need for caution in the prescription of linezolid.
Abstract:
Precision Viticulture (PV) is a concept that is beginning to have an impact on the wine-growing sector. Its practical implementation is dependent on various technological developments: crop sensors and yield monitors, local and remote sensors, Global Positioning Systems (GPS), Variable-Rate Application (VRA) equipment and machinery, Geographic Information Systems (GIS) and systems for data analysis and interpretation. This paper reviews a number of research lines related to PV. These areas of research have focused on four very specific fields: 1) quantification and evaluation of within-field variability, 2) delineation of zones of differential treatment at parcel level, based on the analysis and interpretation of this variability, 3) development of Variable-Rate Technologies (VRT) and, finally, 4) evaluation of the opportunities for site-specific vineyard management. Research in these fields should allow winegrowers and enologists to know and understand why yield variability exists within the same parcel, what the causes of this variability are, how the yield and its quality are interrelated and, if spatial variability exists, whether site-specific vineyard management is justifiable on a technical and economic basis.