870 results for Passing of time
Abstract:
Several countries have acquired, over the past decades, large amounts of Airborne Electromagnetic (AEM) data covering wide areas. The contribution of airborne geophysics to both groundwater resource mapping and management has increased dramatically, proving how appropriate these systems are for large-scale, efficient groundwater surveying. We start with the processing and inversion of two AEM datasets from two different systems collected over the Spiritwood Valley Aquifer area, Manitoba, Canada: the AeroTEM III dataset (commissioned by the Geological Survey of Canada in 2010) and the "Full waveform VTEM" dataset, collected and tested over the same survey area during the fall of 2011. We demonstrate that, in the presence of multiple datasets, both AEM and ground data, proper processing, inversion, post-processing, data integration and data calibration constitute the approach capable of providing reliable and consistent resistivity models. Our approach can be of interest to many end users, ranging from Geological Surveys and Universities to private companies, which often own large geophysical databases to be interpreted for geological and/or hydrogeological purposes. In this study we investigate in depth the role of integrating several complementary types of geophysical data collected over the same survey area. We show that data integration can improve inversions, reduce ambiguity and deliver high-resolution results. We further use the final, most reliable output resistivity models as a solid basis for building a knowledge-driven 3D geological voxel-based model. A voxel approach allows a quantitative understanding of the hydrogeological setting of the area, and it can be further used to estimate aquifer volumes (i.e. the potential amount of groundwater resources) as well as to predict hydrogeological flow models. In addition, we investigated the impact of an AEM dataset on hydrogeological mapping and 3D hydrogeological modeling, comparing it to having only a ground-based TEM dataset and/or only borehole data.
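One step mentioned above is data calibration between airborne and ground datasets. The following is a minimal sketch, not the authors' workflow, of levelling an AEM-derived resistivity section against colocated ground-TEM soundings with a single multiplicative factor; the function name, arrays and values are hypothetical.

```python
# Minimal sketch (not the authors' method): derive one multiplicative
# calibration factor from colocated AEM and ground-TEM resistivities and
# apply it to the whole AEM section. All values are invented.
import numpy as np

def calibration_factor(aem_rho, ground_rho):
    """Robust ratio of ground-TEM to AEM resistivities at colocated sites."""
    aem_rho = np.asarray(aem_rho, dtype=float)
    ground_rho = np.asarray(ground_rho, dtype=float)
    ok = (aem_rho > 0) & (ground_rho > 0)          # guard against missing values
    # Work in log space, since resistivity misfits are roughly log-normal.
    return np.exp(np.median(np.log(ground_rho[ok]) - np.log(aem_rho[ok])))

aem_colocated    = [28.0, 35.5, 60.2, 12.4]        # ohm.m, at shared locations/depths
ground_colocated = [31.0, 40.1, 55.0, 14.8]
k = calibration_factor(aem_colocated, ground_colocated)
calibrated_section = k * np.array([[30.0, 45.0], [10.0, 80.0]])  # whole AEM section
print(f"calibration factor: {k:.2f}")
```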
Abstract:
Management Control System (MCS) research is undergoing turbulent times. Long associated only with the cybernetic instruments of management accounting, MCS are increasingly seen as complex systems comprising not only formal accounting-driven instruments, but also informal mechanisms of control based on organizational culture. But not only have the means of MCS changed; researchers increasingly apply MCS to organizational goals other than strategy implementation.

Taking the question "How do I design a well-performing MCS?" as a starting point, this dissertation aims at providing a comprehensive and integrated overview of the current state of MCS research. Opting for a definition of MCS that is broad in terms of means (all formal as well as informal MCS instruments) but focused in terms of objectives (behavioral control only), the dissertation contributes to MCS theory by a) developing an integrated (contingency) model of MCS, describing its contingencies as well as its subcomponents, b) refining the equifinality model of Gresov/Drazin (1997), and c) synthesizing research findings from contingency and configuration research concerning MCS, taking into account case studies on research topics such as ambidexterity, equifinality and time as a contingency.
Abstract:
Time series are ubiquitous. The acquisition and processing of continuously measured data is present in all areas of the natural sciences, medicine and finance. The enormous growth of recorded data volumes, whether from automated monitoring systems or integrated sensors, calls for exceptionally fast algorithms in theory and practice. Consequently, this thesis deals with the efficient computation of subsequence alignments. Complex algorithms such as anomaly detection, motif queries or the unsupervised extraction of prototypical building blocks in time series make extensive use of these alignments, which motivates the need for fast implementations. This thesis comprises three approaches that address this challenge: four alignment algorithms and their parallelization on CUDA-capable hardware, an algorithm for the segmentation of data streams, and a unified treatment of Lie-group-valued time series.

The first contribution is a complete CUDA port of the UCR suite, the world-leading implementation of subsequence alignment. It includes a new computation scheme for determining local alignment scores under the z-normalized Euclidean distance, which can be deployed on any parallel hardware with support for fast Fourier transforms. Furthermore, we present a SIMT-compatible implementation of the UCR suite's lower-bound cascade for the efficient computation of local alignment scores under Dynamic Time Warping. Both CUDA implementations are one to two orders of magnitude faster than established methods.

Second, we investigate two linear-time approximations for the elastic alignment of subsequences. On the one hand, we treat a SIMT-compatible relaxation scheme for greedy DTW and its efficient CUDA parallelization. On the other hand, we introduce a new local distance measure, the Gliding Elastic Match (GEM), which can be computed with the same asymptotic time complexity as greedy DTW but offers a full relaxation of the penalty matrix. Further improvements include invariance to trends on the measurement axis and to uniform scaling on the time axis. In addition, an extension of GEM to multi-shape segmentation is discussed and evaluated on motion data. Both CUDA parallelizations achieve runtime improvements of up to two orders of magnitude.

The treatment of time series in the literature is usually restricted to real-valued measurements. The third contribution is a unified method for handling Lie-group-valued time series. Building on this, distance measures on the rotation group SO(3) and on the Euclidean group SE(3) are treated. Furthermore, memory-efficient representations and group-compatible extensions of elastic measures are discussed.
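To make the core quantity concrete, here is a minimal CPU sketch, not the thesis implementation, of the z-normalized Euclidean distance between a query and every subsequence of a time series; the UCR suite and its CUDA port compute the same formula, but via FFT-based and SIMT-parallel schemes. Function and variable names are illustrative.

```python
# Sliding z-normalized Euclidean distance profile (naive reference version).
import numpy as np

def znorm_distance_profile(series, query):
    series = np.asarray(series, dtype=float)
    q = np.asarray(query, dtype=float)
    m = len(q)
    q = (q - q.mean()) / q.std()                      # z-normalize the query once
    dist = np.empty(len(series) - m + 1)
    for i in range(len(dist)):                        # one subsequence per offset
        s = series[i:i + m]
        s = (s - s.mean()) / s.std()                  # z-normalize the subsequence
        dist[i] = np.sqrt(np.sum((s - q) ** 2))
    return dist

ts = np.sin(np.linspace(0, 20, 500)) + 0.05 * np.random.randn(500)
profile = znorm_distance_profile(ts, ts[100:150])
print("best match starts at", int(profile.argmin()))
```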
Abstract:
When estimating the effect of treatment on HIV using data from observational studies, standard methods may produce biased estimates due to the presence of time-dependent confounders. Such confounding can be present when a covariate, affected by past exposure, is a predictor of both future exposure and the outcome. One example is the CD4 cell count, which is a marker of disease progression in HIV patients but also a marker for treatment initiation, and is itself influenced by treatment. Fitting a marginal structural model (MSM) using inverse probability weights is one way to adjust appropriately for this type of confounding. In this paper we study a simple and intuitive approach to estimate similar treatment effects, using observational data to mimic several randomized controlled trials. Each 'trial' is constructed from individuals starting treatment in a certain time interval. An overall effect estimate for all such trials is found using composite likelihood inference. The method offers an alternative to the use of inverse probability of treatment weights, which is unstable in certain situations. The estimated parameter is not identical to that of an MSM; it is conditional on covariate values at the start of each mimicked trial. This allows the study of questions that are not easily addressed by fitting an MSM. The analysis can be performed as a stratified weighted Cox analysis on the joint data set of all the constructed trials, where each trial is one stratum. The model is applied to data from the Swiss HIV cohort study.
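The final analysis step described above can be sketched as a stratified, weighted Cox regression on the pooled data set of all constructed trials, one stratum per trial. The snippet below is an illustration under simulated, hypothetical data, not the paper's code; it assumes the lifelines package, and the column names and weights are invented.

```python
# Illustrative sketch: pool mimicked trials and fit a stratified, weighted Cox model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
rows = []
for trial in range(1, 6):                      # five mimicked trials
    for _ in range(40):                        # subjects eligible at that trial's start
        treated = int(rng.integers(0, 2))
        cd4 = rng.normal(350, 100)
        # toy event times: treatment and higher baseline CD4 delay the event
        time = rng.exponential(20 + 10 * treated + 0.02 * cd4)
        rows.append({"trial": trial, "treated": treated, "cd4_base": cd4,
                     "time": min(time, 36.0),              # administrative censoring
                     "event": int(time < 36.0),
                     "weight": 1.0})                        # analysis weight per row
trials = pd.DataFrame(rows)

cph = CoxPHFitter()
cph.fit(trials, duration_col="time", event_col="event",
        strata=["trial"], weights_col="weight", robust=True)
cph.print_summary()
```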
Abstract:
Neglect is defined as the failure to attend and to orient to the contralesional side of space. A horizontal bias towards the right visual field is a classical finding in patients who have suffered a right-hemispheric stroke. The vertical dimension of spatial attention orienting has only sparsely been investigated so far. The aim of this study was to investigate the specificity of a vertical attentional bias by means of a search task, which taps a more pronounced top-down attentional component. Eye movements and behavioural search performance were measured in thirteen patients with left-sided neglect after right-hemispheric stroke and in thirteen age-matched controls. Concerning behavioural performance, patients found significantly fewer targets than healthy controls in both the upper and the lower left quadrant. However, when targets were located in the lower left quadrant, patients needed more visual fixations (and therefore a longer search time) to find them, suggesting a time-dependent vertical bias.
Abstract:
Background and Aims: Data on the influence of calibration on the accuracy of continuous glucose monitoring (CGM) are scarce. The aim of the present study was to investigate whether the time point of calibration has an influence on sensor accuracy and whether this effect differs according to glycemic level. Subjects and Methods: Two CGM sensors were inserted simultaneously, one on each side of the abdomen, in 20 individuals with type 1 diabetes. One sensor was calibrated predominantly using preprandial glucose (calibration(PRE)). The other sensor was calibrated predominantly using postprandial glucose (calibration(POST)). A minimum of three additional glucose values per day was obtained for the analysis of accuracy. Sensor readings were divided into four categories according to the glycemic range of the reference values (low, ≤4 mmol/L; euglycemic, 4.1-7 mmol/L; hyperglycemic I, 7.1-14 mmol/L; and hyperglycemic II, >14 mmol/L). Results: The overall mean±SEM absolute relative difference (MARD) between capillary reference values and sensor readings was 18.3±0.8% for calibration(PRE) and 21.9±1.2% for calibration(POST) (P<0.001). MARD according to glycemic range was 47.4±6.5% (low), 17.4±1.3% (euglycemic), 15.0±0.8% (hyperglycemic I), and 17.7±1.9% (hyperglycemic II) for calibration(PRE) and 67.5±9.5% (low), 24.2±1.8% (euglycemic), 15.5±0.9% (hyperglycemic I), and 15.3±1.9% (hyperglycemic II) for calibration(POST). In the low and euglycemic ranges MARD was significantly lower for calibration(PRE) than for calibration(POST) (P=0.007 and P<0.001, respectively). Conclusions: Sensor calibration based predominantly on preprandial glucose resulted in significantly higher overall sensor accuracy than predominantly postprandial calibration. The difference was most pronounced in the hypo- and euglycemic reference ranges, whereas both calibration patterns were comparable in the hyperglycemic range.
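For readers unfamiliar with the accuracy metric, the following short sketch shows how MARD is computed overall and per glycemic range; the paired values are invented and do not correspond to the study data.

```python
# Mean absolute relative difference (MARD) between sensor and reference values.
import numpy as np

reference = np.array([3.8, 5.2, 6.4, 9.0, 15.2, 4.6])   # capillary values, mmol/L
sensor    = np.array([4.6, 5.0, 7.1, 8.2, 13.9, 4.9])   # paired sensor readings

def mard(ref, sens):
    """Mean absolute relative difference, in percent."""
    return 100.0 * np.mean(np.abs(sens - ref) / ref)

bins = {
    "low (<=4 mmol/L)":         reference <= 4.0,
    "euglycemic (4.1-7)":       (reference > 4.0) & (reference <= 7.0),
    "hyperglycemic I (7.1-14)": (reference > 7.0) & (reference <= 14.0),
    "hyperglycemic II (>14)":   reference > 14.0,
}

print(f"overall MARD: {mard(reference, sensor):.1f}%")
for name, mask in bins.items():
    if mask.any():
        print(f"  {name}: {mard(reference[mask], sensor[mask]):.1f}%")
```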
Abstract:
While many time-series studies of ozone and daily mortality identified positive associations, others yielded null or inconclusive results. We performed a meta-analysis of 144 effect estimates from 39 time-series studies and estimated pooled effects by lags, age groups, cause-specific mortality, and concentration metrics. We compared results to estimates from the National Morbidity, Mortality, and Air Pollution Study (NMMAPS), a time-series study of 95 large U.S. cities from 1987 to 2000. Both the meta-analysis and NMMAPS results provided strong evidence of a short-term association between ozone and mortality, with larger effects for cardiovascular and respiratory mortality, the elderly, and current-day ozone exposure as compared to other single-day lags. In both analyses, results were not sensitive to adjustment for particulate matter and model specifications. In the meta-analysis we found that a 10 ppb increase in daily ozone is associated with a 0.83% (95% confidence interval: 0.53%, 1.12%) increase in total mortality, whereas the corresponding NMMAPS estimate is 0.25% (0.12%, 0.39%). Meta-analysis results were consistently larger than those from NMMAPS, indicating publication bias. Additional publication bias is evident in the choice of lags in time-series studies and in the larger heterogeneity of posterior city-specific estimates in the meta-analysis compared with NMMAPS.
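As a simplified illustration of the pooling step, not the study's actual model, the sketch below combines study-specific effect estimates with a DerSimonian-Laird random-effects meta-analysis; the effects and variances are invented.

```python
# Random-effects (DerSimonian-Laird) pooling of study-specific effect estimates,
# expressed as % increase in mortality per 10 ppb ozone. Toy numbers only.
import numpy as np

effects   = np.array([0.9, 0.5, 1.2, 0.3, 0.8])     # study-specific estimates (%)
variances = np.array([0.04, 0.09, 0.16, 0.05, 0.10])

def dersimonian_laird(y, v):
    w = 1.0 / v
    y_fe = np.sum(w * y) / np.sum(w)                 # fixed-effect mean
    q = np.sum(w * (y - y_fe) ** 2)                  # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                          # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

est, lo, hi = dersimonian_laird(effects, variances)
print(f"pooled effect: {est:.2f}% (95% CI {lo:.2f}, {hi:.2f})")
```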
Abstract:
Knowledge of the time interval from death (post-mortem interval, PMI) has an enormous legal, criminological and psychological impact. Aiming to find an objective method for the determination of PMIs in forensic medicine, 1H-MR spectroscopy (1H-MRS) was used in a sheep head model to follow changes in brain metabolite concentrations after death. Following the characterization of newly observed metabolites (Ith et al., Magn. Reson. Med. 2002; 5: 915-920), the full set of acquired spectra was analyzed statistically to provide a quantitative estimation of PMIs with their respective confidence limits. In a first step, analytical mathematical functions are proposed to describe the time courses of 10 metabolites in the decomposing brain up to 3 weeks post-mortem. Subsequently, the inverted functions are used to predict PMIs based on the measured metabolite concentrations. Individual PMIs calculated from five different metabolites are then pooled, being weighted by their inverse variances. The predicted PMIs from all individual examinations in the sheep model are compared with known true times. In addition, four human cases with forensically estimated PMIs are compared with predictions based on single in situ MRS measurements. Interpretation of the individual sheep examinations gave a good correlation up to 250 h post-mortem, demonstrating that the predicted PMIs are consistent with the data used to generate the model. Comparison of the estimated PMIs with the forensically determined PMIs in the four human cases shows an adequate correlation. Current PMI estimations based on forensic methods typically suffer from uncertainties in the order of days to weeks without mathematically defined confidence information. In turn, a single 1H-MRS measurement of brain tissue in situ results in PMIs with defined and favorable confidence intervals in the range of hours, thus offering a quantitative and objective method for the determination of PMIs.
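The pooling step described above, individual PMI estimates weighted by their inverse variances, can be illustrated with the short sketch below; this is not the authors' code, and the per-metabolite estimates and variances are invented.

```python
# Inverse-variance-weighted pooling of per-metabolite PMI estimates.
import numpy as np

pmi_hours = np.array([52.0, 61.0, 48.0, 58.0, 55.0])   # PMI estimates from 5 metabolites
variances = np.array([30.0, 80.0, 25.0, 60.0, 40.0])   # their estimated variances

weights = 1.0 / variances
pmi_pooled = np.sum(weights * pmi_hours) / np.sum(weights)
se_pooled = np.sqrt(1.0 / np.sum(weights))

print(f"pooled PMI: {pmi_pooled:.1f} h "
      f"(95% CI {pmi_pooled - 1.96*se_pooled:.1f}, {pmi_pooled + 1.96*se_pooled:.1f})")
```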
Abstract:
Spatial tracking is one of the most challenging and important parts of Mixed Reality environments. Many applications, especially in the domain of Augmented Reality, rely on the fusion of several tracking systems in order to optimize the overall performance. While the topic of spatial tracking sensor fusion has already seen considerable interest, most results only deal with the integration of carefully arranged setups as opposed to dynamic sensor fusion setups. A crucial prerequisite for correct sensor fusion is the temporal alignment of the tracking data from several sensors. Tracking sensors typically encountered in Mixed Reality applications are, however, generally not synchronized. We present a general method to calibrate the temporal offset between different sensors using the Time Delay Estimation method, which can be used to perform on-line temporal calibration. By applying Time Delay Estimation to the tracking data, we show that the temporal offset between generic Mixed Reality spatial tracking sensors can be calibrated. To show the correctness and feasibility of this approach, we examined different variations of our method and evaluated various combinations of tracking sensors. We furthermore integrated this time-synchronization method into our UBITRACK Mixed Reality tracking framework to provide facilities for calibration and real-time data alignment.
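The principle of Time Delay Estimation can be sketched as follows: resample a comparable one-dimensional signal from both sensors (for example, position along a shared axis) to a common rate and find the lag that maximizes their cross-correlation. This is a minimal illustration, not the UBITRACK implementation; the signals and sampling rate are made up.

```python
# Estimate a constant temporal offset between two tracking streams by
# cross-correlating a shared 1-D signal sampled at a common rate.
import numpy as np

def estimate_delay(sig_a, sig_b, rate_hz):
    """Return how many seconds sig_b lags behind sig_a (positive = b is late)."""
    a = (sig_a - np.mean(sig_a)) / np.std(sig_a)
    b = (sig_b - np.mean(sig_b)) / np.std(sig_b)
    xcorr = np.correlate(a, b, mode="full")
    lag_samples = (len(b) - 1) - np.argmax(xcorr)     # shift of b relative to a
    return lag_samples / rate_hz

rate = 100.0                                          # common resampling rate (Hz)
t = np.arange(0, 5, 1 / rate)
truth = np.sin(2 * np.pi * 0.7 * t)                   # shared motion signal
sensor_a = truth + 0.02 * np.random.randn(t.size)
sensor_b = np.roll(truth, 12) + 0.02 * np.random.randn(t.size)   # ~120 ms late
print(f"estimated offset: {estimate_delay(sensor_a, sensor_b, rate) * 1000:.0f} ms")
```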
Abstract:
This study tests whether cognitive failures mediate the effects of work-related time pressure and time control on commuting accidents and near-accidents. Participants were 83 employees (56% female) who each commuted between their regular place of residence and their place of work by vehicle. The Workplace Cognitive Failure Scale (WCFS) asked about the frequency of failures in memory function, attention regulation, and action execution. Time pressure and time control at work were assessed with the Instrument for Stress Oriented Task Analysis (ISTA). Commuting accidents in the last 12 months were reported by 10% of participants, and half of the sample reported commuting near-accidents in the last 4 weeks. Cognitive failure significantly mediated the influence of time pressure at work on near-accidents even when age, gender, neuroticism, conscientiousness, commuting duration, commuting distance, and time pressure during commuting were controlled for. Time control was negatively related to cognitive failure and neuroticism, but no association with commuting accidents or near-accidents was found. Time pressure at work is likely to increase cognitive load. Time pressure might, therefore, increase cognitive failures during work and also during commuting. Hence, time pressure at work can decrease commuting safety. The results suggest that reducing time pressure at work should improve commuting safety.
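The mediation logic can be illustrated with a simple regression-based sketch, not the study's analysis: the indirect effect is the product of path a (time pressure to cognitive failures) and path b (cognitive failures to near-accidents, adjusted for time pressure). Variable names and the simulated data below are hypothetical, and covariates are omitted for brevity.

```python
# Regression-based mediation sketch with simulated data (illustration only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 83
time_pressure = rng.normal(size=n)
cog_failure = 0.5 * time_pressure + rng.normal(size=n)                    # path a
near_accidents = 0.4 * cog_failure + 0.1 * time_pressure + rng.normal(size=n)

a_fit = sm.OLS(cog_failure, sm.add_constant(time_pressure)).fit()
X = sm.add_constant(np.column_stack([cog_failure, time_pressure]))
b_fit = sm.OLS(near_accidents, X).fit()

a = a_fit.params[1]        # effect of time pressure on cognitive failures
b = b_fit.params[1]        # effect of cognitive failures, adjusted for pressure
print(f"indirect (mediated) effect a*b = {a * b:.2f}")
```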