47 results for distribution network SCADA measurement point
Abstract:
We investigated whether a single blood measurement using the minimally invasive technique of a finger prick to draw a 5 µl blood sample (yielding a dried blood spot (DBS)) is suitable for assessing the flurbiprofen (FLB) metabolic ratio (MR). Ten healthy volunteers who had been genotyped for CYP2C9 were recruited as subjects. They received FLB alone in session 1 and FLB with fluconazole in session 2. In session 3, the subjects were pretreated for 4 days with rifampicin and received FLB together with the last dose of rifampicin on day 5. Plasma and DBS samples were obtained between 0 and 8 h after FLB administration, and urine was collected during the 8 h after administration. The pharmacokinetic profiles of the drugs were comparable in DBS and plasma. The apparent clearance of FLB decreased by 35% in both plasma and DBS during session 2, and increased by 75% in plasma and by 30% in DBS during session 3. Good correlations were observed between MRs calculated from urine, plasma, and DBS samples.
Abstract:
Objectives: To compare the clinical characteristics, species distribution and antifungal susceptibility of Candida bloodstream isolates (BSI) in breakthrough (BTC) vs. non-breakthrough candidemia (NBTC), and to study the effect of prolonged vs. short fluconazole (F) exposure in BTC. Methods: Candida BSI were prospectively collected during 2004-2006 from 27 hospitals (seven university, 20 affiliated) of the FUNGINOS network. Susceptibility to F, voriconazole (V) and caspofungin (C) was tested in the FUNGINOS mycology reference laboratory by the microtitre broth dilution method with the Sensititre YeastOne™ test panel. Clinical data were collected using standardized CRFs. BTC was defined as candidemia occurring during antifungal treatment/prophylaxis of at least three days' duration prior to the candidemia. Susceptibility of BSI was defined according to 2010/2011 CLSI clinical breakpoints. Results: Out of 567 candidemia episodes, 550 Candida BSI were available. Of these, 43 (7.6%) were from BTC (37/43, 86% were isolated after F exposure). 38 BTC (88.4%) and 315 NBTC (55.6%) occurred in university hospitals (P < 0.001). The majority of patients developing BTC were immunocompromised: higher proportions of haematological malignancies (62.8% in BTC vs. 47.1% in NBTC, P < 0.001), neutropenia (37.2% vs. 11.8%, P < 0.001), acute GvHD (14% vs. 0.2%, P < 0.001), immunosuppressive drugs (74.4% vs. 7.8%, P < 0.001), and mucositis (32.6% vs. 2.3%, P < 0.001) were observed. Other differences between BTC and NBTC were higher proportions of patients with central venous catheters in the 2 weeks preceding candidemia (95.3% vs. 83.4%, P = 0.047) and receiving total parenteral nutrition (62.8% vs. 35.9%, P < 0.001), but a lower proportion of patients treated with gastric proton pump inhibitors (23.3% vs. 72.1%, P < 0.001). Overall mortality of BTC and NBTC was not different (34.9% vs. 31.7%, P = 0.73), while a trend towards higher attributable mortality in BTC was found (13.9% vs. 6.9%, P = 0.12). Species identification showed a majority of C. albicans in both groups (51.2% in BTC vs. 62.9% in NBTC, P = 0.26), followed by C. glabrata (18.6% vs. 18.5%), C. tropicalis (2.3% vs. 6.3%) and C. parapsilosis (7.0% vs. 4.7%). Significantly more C. krusei were detected in BTC than in NBTC (11.6% vs. 1.6%, P = 0.002). The geometric mean MICs of F, V and C did not differ significantly between BTC and NBTC isolates. However, in BTC there was a significant association between the duration of F exposure and the Candida spp.: >10 days of F was associated with a significant shift from susceptible Candida spp. (C. albicans, C. parapsilosis, C. tropicalis, C. famata) to non-susceptible species (C. glabrata, C. krusei, C. norvegensis). Among 21 BTC episodes occurring after ≤10 days of F, 19% of the isolates were non-susceptible, in contrast to 68.7% in 16 BTC episodes occurring after >10 days of F (P = 0.003). Conclusions: Breakthrough candidemia occurred more often in immunocompromised hosts. Fluconazole administered for >10 days was associated with a shift to non-susceptible Candida spp. The length of fluconazole exposure should be taken into consideration in the choice of empirical antifungal treatment.
Abstract:
Computed Tomography Angiography (CTA) images are the standard for assessing peripheral artery disease (PAD). This paper presents a Computer Aided Detection (CAD) and Computer Aided Measurement (CAM) system for PAD. The CAD stage detects the arterial network using a 3D region growing method and a fast 3D morphology operation. The CAM stage aims to accurately measure artery diameters from the detected vessel centerline, compensating for the partial volume effect using Expectation Maximization (EM) and a Markov Random Field (MRF). The system was evaluated on phantom data and applied to fifteen CTA datasets, where the stenosis detection accuracy was 88% and the measurement error was 8%.
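As a rough illustration of the CAD stage, the sketch below implements basic 3D region growing with a simple intensity-window homogeneity criterion. The abstract does not specify the paper's actual growing criterion, seed selection or morphology steps, so the thresholds and synthetic volume here are assumptions.

```python
# Minimal sketch of 3D region growing for vessel detection, assuming a
# simple intensity-window homogeneity criterion (not the paper's method).
from collections import deque
import numpy as np

def region_grow_3d(volume, seed, lo, hi):
    """Return a boolean mask of voxels 6-connected to `seed`
    whose intensity lies in [lo, hi]."""
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= c < s for c, s in zip(n, volume.shape)) \
                    and not mask[n] and lo <= volume[n] <= hi:
                mask[n] = True
                queue.append(n)
    return mask

# Example: grow from a bright seed voxel inside a synthetic "vessel" blob.
vol = np.random.rand(64, 64, 64) * 0.5      # dark background
vol[30:34, 30:34, 30:34] = 0.9              # bright synthetic vessel segment
vessel_mask = region_grow_3d(vol, (32, 32, 32), 0.85, 1.0)
print(vessel_mask.sum(), "voxels segmented")
```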
Abstract:
OBJECTIVES: Little is known regarding the distribution and determinants of leptin and adiponectin levels in the general population. DESIGN: Cross-sectional study. PATIENTS: 3004 women and 2552 men aged 35-74 years living in Lausanne, Switzerland. MEASUREMENTS: Plasma levels of leptin and adiponectin (measured by ELISA). RESULTS: Women had higher leptin and adiponectin levels than men. In both genders, leptin and adiponectin levels increased with age. After adjusting for fat mass, leptin levels were significantly and negatively associated with age in women: 18.1 +/- 0.3, 17.1 +/- 0.3, 16.7 +/- 0.3 and 15.5 +/- 0.4 ng/ml (adjusted mean +/- SE) for age groups 35-44, 45-54, 55-64 and 65-75 years, respectively, P < 0.001. A similar but nonsignificant trend was found in men. Conversely, the age-related increase in adiponectin was unrelated to body fat in both genders. Post-menopausal women had higher leptin and adiponectin levels than premenopausal women, independently of hormone replacement therapy. Although body fat mass was associated with leptin and adiponectin, the associations were stronger with body mass index (BMI), waist and hip circumference in both genders. Finally, after adjusting for age and anthropometry, no relationships were found between leptin or adiponectin levels and alcohol consumption, caffeine consumption or physical activity, whereas smoking and diabetes were associated with decreased leptin and adiponectin levels in women only. CONCLUSIONS: The age-related increase in leptin levels is attributable to changes in fat mass in women, and probably also in men. Leptin and adiponectin levels are more closely related to BMI than to body fat mass. The effects of smoking and diabetes appear to be gender-specific.
Abstract:
Low-pressure partial melting of basanitic and ankaramitic dykes gave rise to unusual, zebra-like migmatites in the contact aureole of a layered pyroxenite-gabbro intrusion in the root zone of an ocean island (Basal Complex, Fuerteventura, Canary Islands). These migmatites are characterised by a dense network of closely spaced, millimetre-wide leucocratic segregations. Their mineralogy consists of plagioclase (An(32-36)), diopside, biotite, oxides (magnetite, ilmenite) and ± amphibole, dominated by plagioclase in the leucosome and diopside in the melanosome. The melanosome is almost completely recrystallised, with large relict igneous diopside phenocrysts preserved in dyke centres. Comparison of whole-rock and mineral major- and trace-element data allowed us to assess the redistribution of elements between different mineral phases and generations during contact metamorphism and partial melting. Dykes within and outside the thermal aureole behaved as closed chemical systems. Nevertheless, Zr, Hf, Y and the REEs were internally redistributed, as deduced by comparing the trace element contents of the various diopside generations. Neocrystallised diopside from the migmatite zone - in the melanosome, in the leucosome and as epitaxial phenocryst rims - is enriched in Zr, Hf, Y and REEs compared to relict phenocrysts. This is attributed to the liberation of trace elements by the breakdown of enriched primary minerals, kaersutite and sphene, on entering the thermal aureole. Major and trace element compositions of minerals in migmatite melanosomes and leucosomes are almost identical, pointing to syn- or post-solidus re-equilibration during cooling of the migmatite terrain, i.e. mineral-melt equilibria were reset to mineral-mineral equilibria.
Abstract:
A wide range of modelling algorithms is used by ecologists, conservation practitioners, and others to predict species ranges from point locality data. Unfortunately, the amount of data available is limited for many taxa and regions, making it essential to quantify the sensitivity of these algorithms to sample size. This is the first study to address this need by rigorously evaluating a broad suite of algorithms with independent presence-absence data from multiple species and regions. We evaluated predictions from 12 algorithms for 46 species (from six different regions of the world) at three sample sizes (100, 30, and 10 records). We used data from natural history collections to run the models, and evaluated the quality of model predictions with the area under the receiver operating characteristic curve (AUC). With decreasing sample size, model accuracy decreased and variability increased across species and between models. Novel modelling methods that incorporate both interactions between predictor variables and complex response shapes (i.e. GBM, MARS-INT, BRUTO) performed better than most methods at large sample sizes but not at the smallest sample sizes. Other algorithms were much less sensitive to sample size, including an algorithm based on maximum entropy (MAXENT) that had among the best predictive power across all sample sizes. Relative to other algorithms, a distance metric algorithm (DOMAIN) and a genetic algorithm (OM-GARP) had intermediate performance at the largest sample size and among the best performance at the lowest sample size. No algorithm predicted consistently well with small sample sizes (n < 30); predictions based on such samples should therefore be used highly conservatively and restricted to exploratory modelling.
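The evaluation step described above can be illustrated with a minimal sketch: scoring continuous habitat-suitability predictions against independent presence-absence records with AUC. The synthetic data and the scikit-learn call are stand-ins for the study's actual models and datasets.

```python
# Illustrative sketch of AUC-based evaluation only; data are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
presence_absence = rng.integers(0, 2, size=200)          # independent test data
predicted_suitability = np.clip(
    presence_absence * 0.6 + rng.random(200) * 0.5, 0, 1)  # mock model output

auc = roc_auc_score(presence_absence, predicted_suitability)
print(f"AUC = {auc:.2f}")  # 0.5 = random discrimination, 1.0 = perfect
```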
Abstract:
The spatial resolution of hydrological models and of conceptualized images of subsurface hydrological processes often exceeds the resolution of the data collected with classical instrumentation at the field scale. In recent years, the application of hydrogeophysical methods at the field scale has made it possible to progressively narrow this inherent gap to point-like field data. Among common geophysical exploration techniques, electric and electromagnetic methods arguably have the greatest sensitivity to hydrologically relevant parameters. Of particular interest in this context are induced polarization (IP) measurements, which essentially constrain the capacity of a probed subsurface region to store an electrical charge. In the absence of metallic conductors, the IP response is largely driven by current conduction along grain surfaces. This offers the prospect of linking such measurements to the characteristics of the solid-fluid interface and thus, at least in unconsolidated sediments, should allow for first-order estimates of the permeability structure.

While the IP effect is well explored through laboratory experiments, and in part verified through field data, for clay-rich environments, the applicability of IP-based characterizations to clay-poor aquifers is not clear. For example, polarization mechanisms like membrane polarization are not applicable in the rather wide pore systems of clay-free sands, and the direct transposition of Schwarz' theory, which relates the polarization of spheres to the relaxation mechanism of polarized cells, to complex natural sediments yields ambiguous results.

In order to improve our understanding of the structural origins of IP signals in such environments, as well as of their correlation with pertinent hydrological parameters, various laboratory measurements were conducted. We consider saturated quartz samples with a grain size spectrum varying from fine sand to fine gravel, that is, grain diameters between 0.09 and 5.6 mm, as well as pertinent mixtures thereof, which can be regarded as proxies for widespread alluvial deposits. The pore space characteristics are altered by changing (i) the grain size spectra, (ii) the degree of compaction, and (iii) the level of sorting. We then examined how these changes affect the SIP response, the hydraulic conductivity, and the specific surface area of the considered samples, while keeping any electrochemical variability during the measurements as small as possible. The results do not follow simple assumptions about relationships to single parameters such as grain size. It was found that the complexity of naturally occurring media is not yet sufficiently represented when modelling IP. At the same time, a simple correlation with permeability was found to be strong and consistent. Hence, adaptations aimed at better representing the geo-structure of natural porous media were applied to the simplified model space used in Schwarz' theory of the IP effect. The resulting semi-empirical relationship was found to more accurately predict the IP effect and its relation to grain size and permeability. Combined with recent findings about the effect of pore-fluid electrochemistry, and with advanced complex resistivity tomography, these results will allow us to picture diverse aspects of the subsurface with relative certainty.
Within the framework of single measurement campaigns, hydrologists can then collect data carrying information about both the geo-structure and the geo-chemistry of the subsurface. However, additional research efforts will be necessary to further improve the understanding of the physical origins of the IP effect and to minimize the potential for false interpretations.
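For reference, the textbook form of the Schwarz relation invoked above links the relaxation time of a polarized sphere to the square of its radius. A minimal numeric sketch, assuming a typical counterion diffusion coefficient (an order-of-magnitude value, not one from this study):

```python
# Textbook Schwarz (1962) relation: tau = a**2 / (2 * D), with grain radius
# a and counterion diffusion coefficient D (assumed value, not measured here).
def schwarz_relaxation_time(grain_radius_m, diffusion_m2_s=1.0e-9):
    """Relaxation time (s) of a polarized sphere of radius `grain_radius_m`."""
    return grain_radius_m ** 2 / (2.0 * diffusion_m2_s)

# Grain diameters spanning the study's 0.09-5.6 mm range:
for d_mm in (0.09, 1.0, 5.6):
    a = d_mm * 1e-3 / 2.0                      # diameter (mm) -> radius (m)
    print(f"d = {d_mm} mm -> tau ~ {schwarz_relaxation_time(a):.3g} s")
```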
Abstract:
We propose robust estimators of the generalized log-gamma distribution and, more generally, of location-shape-scale families of distributions. A (weighted) Qτ estimator minimizes a τ-scale of the differences between empirical and theoretical quantiles. It is n^(1/2)-consistent; unfortunately, it is not asymptotically normal and is therefore inconvenient for inference. However, it is a convenient starting point for a one-step weighted likelihood estimator, where the weights are based on a disparity measure between the model density and a kernel density estimate. The one-step weighted likelihood estimator is asymptotically normal and fully efficient under the model. It is also highly robust under outlier contamination. Supplementary materials are available online.
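A heavily simplified sketch of the quantile-matching idea: fit location and scale by minimizing a robust scale of the differences between empirical and theoretical quantiles. The median-of-squares scale and the normal model below are illustrative stand-ins, not the paper's τ-scale or the generalized log-gamma family.

```python
# Simplified quantile-matching fit with a robust (median-of-squares) scale;
# NOT the paper's Q-tau estimator, just the underlying idea.
import numpy as np
from scipy import stats, optimize

def quantile_objective(params, sample, dist=stats.norm):
    loc, log_scale = params
    n = len(sample)
    probs = (np.arange(1, n + 1) - 0.5) / n            # plotting positions
    theo = dist.ppf(probs, loc=loc, scale=np.exp(log_scale))
    diffs = np.sort(sample) - theo
    return np.median(diffs ** 2)                       # robust to wild quantiles

rng = np.random.default_rng(1)
x = rng.normal(10, 2, size=100)
x[:5] = 100.0                                          # outlier contamination
res = optimize.minimize(quantile_objective, x0=[np.median(x), 0.0],
                        args=(x,), method="Nelder-Mead")
loc_hat, scale_hat = res.x[0], np.exp(res.x[1])
print(f"loc ~ {loc_hat:.2f}, scale ~ {scale_hat:.2f}")  # near 10 and 2
```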
Abstract:
The nuclear matrix, a proteinaceous network believed to be a scaffolding structure determining the higher-order organization of chromatin, is usually prepared from intact nuclei by a series of extraction steps. In most cell types investigated, the nuclear matrix does not spontaneously resist these treatments but must be stabilized before the application of extracting agents. Incubation of isolated nuclei at 37°C or 42°C in buffers containing Mg2+ has been widely employed as a stabilizing treatment. We have previously demonstrated that heat treatment induces changes in the distribution of three nuclear scaffold proteins in nuclei prepared in the absence of Mg2+ ions. Here we studied whether different concentrations of Mg2+ (2.0-5.0 mM) affect the spatial distribution of nuclear matrix proteins in nuclei isolated from K562 erythroleukemia cells and stabilized by heat at either 37°C or 42°C. Five proteins were studied: two RNA metabolism-related proteins (a 105-kD component of splicing complexes and an RNP component), a 126-kD constituent of a class of nuclear bodies, and two components of the inner matrix network. The localization of the proteins was determined by immunofluorescent staining and confocal scanning laser microscopy. Mg2+ induced significant changes in antigen distribution even at the lowest concentration employed, and these modifications were enhanced in parallel with increasing concentrations of the divalent cation. The different sensitivity of these nuclear proteins to heat stabilization and Mg2+ might reflect a different degree of association with the nuclear scaffold and can be closely related to their functional or structural role.
Abstract:
Palinspastic reconstructions offer an ideal framework for geological, geographical, oceanographic and climatological studies. As historians of the Earth, "reconstructers" try to decipher the past. Since they know that continents move, geologists have been trying to retrieve the distribution of the continents through the ages. If Wegener's view of continental motion was revolutionary at the beginning of the 20th century, we have known since the early 1960s that continents do not drift aimlessly in the oceanic realm but are part of a larger ensemble combining, all at once, oceanic and continental crust: the tectonic plates. Unfortunately, mainly due to technical and historical issues, this idea does not seem to receive a sufficient echo among the reconstruction community. However, we are intimately convinced that, by applying specific methods and principles, we can escape the traditional "Wegenerian" point of view to, at last, reach real plate tectonics. The main aim of this study is to defend this point of view by exposing, with all necessary details, our methods and tools. Starting with the paleomagnetic and paleogeographic data classically used in reconstruction studies, we developed a modern methodology placing the plates and their kinematics at the centre of the issue.
Using assemblies of continents (referred to as "key assemblies") as anchors distributed all along the scope of our study (ranging from Eocene to Cambrian times), we develop geodynamic scenarios leading from one to the next, from the past to the present. In between, lithospheric plates are progressively reconstructed by adding/removing oceanic material (symbolized by synthetic isochrons) to the major continents. Except during collisions, plates are moved as single rigid entities. The only evolving elements are the plate boundaries, which are preserved, follow a consistent geodynamic evolution through time, and form an interconnected network through space. This "dynamic plate boundaries" approach integrates plate buoyancy factors, ocean spreading rates, subsidence patterns, stratigraphic and paleobiogeographic data, as well as major tectonic and magmatic events. It offers good control on plate kinematics and provides severe constraints for the model. This multi-source approach requires efficient data management. Prior to this study, the critical mass of necessary data had become a scarcely surmountable obstacle. GIS and geodatabases are modern informatics tools specifically devoted to storing, analyzing and managing data and associated attributes spatially referenced on the Earth. By developing the PaleoDyn database in the ArcGIS software, we converted the mass of scattered data offered by the geological record into valuable geodynamic information easily accessible for the creation of reconstructions. At the same time, by programming specific tools, we both facilitated the reconstruction work (task automation) and enhanced the model (by greatly increasing the kinematic control of plate motions thanks to plate velocity models). Based on the 340 newly defined terranes, we developed a revised set of 35 reconstructions, each associated with its own velocity model. Using this unique dataset, we are now able to tackle major issues of modern geology (such as global sea-level variations and climate change). We started by studying one of the major unsolved issues of modern plate tectonics: the driving mechanism of plate motions. We observed that, all along the Earth's history, plate rotation poles (describing plate motions across the Earth's surface) tend to follow a linear distribution along a band going from the northern Pacific through northern South America, the central Atlantic, northern Africa and central Asia up to Japan. Basically, this signifies that plates tend to escape this median plane. Barring an unidentified methodological bias, we interpreted this as a potential secular influence of the Moon on plate motions. The oceanic realm is the cornerstone of our model, and we attached particular importance to reconstructing it in great detail. In this model, the oceanic crust is preserved from one reconstruction to the next. The crustal material is symbolized by synthetic isochrons of known age. We also reconstruct the margins (active or passive), mid-ocean ridges and intra-oceanic subduction zones. Using this detailed oceanic dataset, we developed unique 3-D bathymetric models offering better precision than all previously existing ones.
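The kinematic core of any such velocity model is the rotation of surface points about an Euler pole. A minimal sketch, with an invented pole, angle and point (none taken from the PaleoDyn dataset):

```python
# Rotating a surface point about an Euler pole; values are made up.
import numpy as np
from scipy.spatial.transform import Rotation

def to_cartesian(lat_deg, lon_deg):
    """Unit vector on the sphere for a (lat, lon) pair in degrees."""
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def rotate_point(point_latlon, pole_latlon, angle_deg):
    """Rotate `point_latlon` by `angle_deg` about the Euler pole axis."""
    axis = to_cartesian(*pole_latlon)
    rot = Rotation.from_rotvec(np.radians(angle_deg) * axis)
    v = rot.apply(to_cartesian(*point_latlon))
    return np.degrees(np.arcsin(v[2])), np.degrees(np.arctan2(v[1], v[0]))

# Rotate a point 20 degrees about a hypothetical Euler pole at (45N, 120E).
print(rotate_point((10.0, -30.0), (45.0, 120.0), 20.0))
```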
Abstract:
Electrical impedance tomography (EIT) is a non-invasive imaging technique that can measure cardiac-related intra-thoracic impedance changes. EIT-based cardiac output estimation relies on the assumption that the amplitude of the impedance change in the ventricular region is representative of stroke volume (SV). However, other factors such as heart motion can significantly affect this ventricular impedance change. In the present case study, a magnetic resonance imaging-based dynamic bio-impedance model fitting the morphology of a single male subject was built. Simulations were performed to evaluate the contribution of heart motion and its influence on EIT-based SV estimation. Myocardial deformation was found to be the main contributor to the ventricular impedance change (56%). However, motion-induced impedance changes showed a strong correlation (r = 0.978) with left ventricular volume. We explained this by the quasi-incompressibility of blood and myocardium. As a result, EIT achieved excellent accuracy in estimating a wide range of simulated SV values (error distribution of 0.57 ± 2.19 ml (1.02 ± 2.62%) and correlation of r = 0.996 after a two-point calibration was applied to convert impedance values to millilitres). As the model was based on one single subject, the strong correlation found between motion-induced changes and ventricular volume remains to be verified in larger datasets.
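The two-point calibration mentioned above amounts to fixing a linear map from impedance amplitude to volume using two reference pairs. A minimal sketch with invented numbers:

```python
# Two-point linear calibration from impedance amplitude to stroke volume;
# the reference pairs below are invented for illustration.
def two_point_calibration(z1, sv1, z2, sv2):
    """Return f(z) mapping impedance amplitude to stroke volume (ml)."""
    slope = (sv2 - sv1) / (z2 - z1)
    return lambda z: sv1 + slope * (z - z1)

to_ml = two_point_calibration(z1=0.8, sv1=50.0, z2=1.2, sv2=90.0)
print(to_ml(1.0))  # -> 70.0 ml for an intermediate impedance amplitude
```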
Abstract:
Geophysical tomography captures the spatial distribution of the underlying geophysical property at a relatively high resolution, but the tomographic images tend to be blurred representations of reality and generally fail to reproduce sharp interfaces. Such models may cause significant bias when taken as a basis for predictive flow and transport modeling and are unsuitable for uncertainty assessment. We present a methodology in which tomograms are used to condition multiple-point statistics (MPS) simulations. A large set of geologically reasonable facies realizations and their corresponding synthetically calculated cross-hole radar tomograms are used as a training image. The training image is scanned with a direct sampling algorithm for patterns in the conditioning tomogram, while accounting for the spatially varying resolution of the tomograms. In a post-processing step, only those conditional simulations that predicted the radar traveltimes within the expected data error levels are accepted. The methodology is demonstrated on a two-facies example featuring channels and an aquifer analog of alluvial sedimentary structures with five facies. For both cases, MPS simulations exhibit the sharp interfaces and the geological patterns found in the training image. Compared to unconditioned MPS simulations, the uncertainty in transport predictions is markedly decreased for simulations conditioned to tomograms. As an improvement to other approaches relying on classical smoothness-constrained geophysical tomography, the proposed method allows for: (1) reproduction of sharp interfaces, (2) incorporation of realistic geological constraints and (3) generation of multiple realizations that enables uncertainty assessment.
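The post-processing acceptance step can be sketched as follows; `forward_traveltimes` is a placeholder for an actual radar forward solver, and the RMS-misfit tolerance rule is an assumption rather than the paper's exact criterion.

```python
# Sketch of the accept/reject post-processing step for conditioned
# simulations; the forward model and tolerance rule are placeholders.
import numpy as np

def accept_realizations(realizations, observed, forward_traveltimes,
                        noise_std, tol=1.0):
    """Keep realizations whose traveltime RMS misfit is <= tol * noise_std."""
    accepted = []
    for real in realizations:
        misfit = np.sqrt(np.mean((forward_traveltimes(real) - observed) ** 2))
        if misfit <= tol * noise_std:
            accepted.append(real)
    return accepted

# Dummy usage with a trivial "forward model" for illustration only:
fwd = lambda m: m.ravel()[:10]
obs = np.zeros(10)
sims = [np.random.default_rng(i).normal(0, 0.4, (5, 5)) for i in range(20)]
print(len(accept_realizations(sims, obs, fwd, noise_std=0.5)), "accepted")
```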
Abstract:
This study shows how a new generation of terrestrial laser scanners can be used to investigate glacier surface ablation and other elements of glacial hydrodynamics at exceptionally high spatial and temporal resolution. The study area is an Alpine valley glacier, Haut Glacier d'Arolla, Switzerland. Here we use an ultra-long-range RIEGL VZ-6000 lidar scanner, whose laser is specifically designed for measuring snow- and ice-covered surfaces. We focus on two timescales: seasonal and daily. Our results show that a near-infrared scanning laser system can provide high-precision elevation change and ablation data from long ranges and over relatively large sections of the glacier surface. We use it to quantify spatial variations in the patterns of surface melt at the seasonal scale, as controlled by both aspect and differential debris cover. At the daily scale, we quantify the effects of ogive-related differences in ice surface debris content on spatial patterns of ablation. The daily-scale measurements point to possible hydraulic jacking of the glacier associated with short-term water pressure rises. This demonstrates that this type of lidar may be used to address subglacial hydrologic questions, in addition to motion and ablation measurements.
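At its core, the elevation-change estimate rests on differencing co-registered surface grids from repeat scans. A toy sketch with a synthetic melt gradient (co-registration, filtering and georeferencing are omitted):

```python
# DEM differencing between two repeat-scan surface grids; data are synthetic.
import numpy as np

scan_t0 = np.full((50, 50), 2800.0)              # glacier surface (m a.s.l.)
melt = np.linspace(0.02, 0.08, 50)[None, :]      # daily melt gradient (m)
scan_t1 = scan_t0 - np.repeat(melt, 50, axis=0)  # surface after one day

elevation_change = scan_t1 - scan_t0             # negative = surface lowering
print(f"mean daily ablation: {-elevation_change.mean():.3f} m")
```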
Abstract:
The analysis of rockfall characteristics and spatial distribution is fundamental to understanding and modelling the main factors that predispose to failure. In our study we analysed LiDAR point clouds aiming to: (1) detect and characterise single rockfalls; and (2) investigate their spatial distribution. To this end, different clustering algorithms were applied: (1a) Nearest Neighbour Clutter Removal (NNCR) in combination with Expectation-Maximization (EM) to separate feature points from clutter; (1b) a density-based algorithm (DBSCAN) to isolate the single clusters (i.e. the rockfall events); and (2) finally, we computed Ripley's K-function to investigate the global spatial pattern of the extracted rockfalls. The method allowed proper identification and characterization of more than 600 rockfalls that occurred on a cliff located in Puigcercos (Catalonia, Spain) during a time span of six months. The spatial distribution of these events showed that the rockfalls were clustered within a well-defined distance range. Computations were carried out using the free R software for statistical computing and graphics. Understanding the spatial distribution of precursory rockfalls may shed light on the forecasting of future failures.
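Step (1b) can be illustrated with scikit-learn's DBSCAN on a synthetic 3-D point cloud; `eps` and `min_samples` are arbitrary choices here, since the study's parameters are not given in the abstract (the study itself used R, not Python).

```python
# Density-based clustering of a synthetic change-detection point cloud;
# eps and min_samples are illustrative, not the study's values.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)
# Two synthetic rockfall clusters plus scattered clutter points (x, y, z):
cloud = np.vstack([rng.normal([0, 0, 0], 0.2, (40, 3)),
                   rng.normal([5, 1, 2], 0.2, (60, 3)),
                   rng.uniform(-2, 7, (30, 3))])

labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(cloud)
n_events = len(set(labels)) - (1 if -1 in labels else 0)   # -1 marks clutter
print(f"{n_events} rockfall events, {np.sum(labels == -1)} clutter points")
```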
Abstract:
Chromosome 22q11.2 deletion syndrome (22q11DS) is a genetic disease known to lead to cerebral structural alterations, which we study using the framework of the macroscopic white-matter connectome. We create weighted connectomes of 44 patients with 22q11DS and 44 healthy controls using diffusion tensor magnetic resonance imaging, and perform a weighted graph theoretical analysis. After confirming global network integration deficits in 22q11DS (previously identified using binary connectomes), we identify the spatial distribution of the regions responsible for the global deficits. Next, we further characterize the dysconnectivity of the deficient regions in terms of sub-network properties, and investigate their relevance with respect to clinical profiles. We define the subset of regions with decreased nodal integration (evaluated using the closeness centrality measure) as the affected core (A-core) of the 22q11DS structural connectome. A-core regions are broadly bilaterally symmetric and include numerous network hubs - chiefly parietal and frontal cortical regions, as well as subcortical ones. Using a simulated lesion approach, we demonstrate that these core regions and their connections are particularly important for efficient network communication. Moreover, these regions are generally densely connected, but less so in 22q11DS. These specific disturbances are associated with a rerouting of shortest network paths that circumvent the A-core in 22q11DS, "de-centralizing" the network. Finally, the efficiency and mean connectivity strength of an orbito-frontal/cingulate circuit, included in the affected regions, correlate negatively with the extent of negative symptoms in 22q11DS patients, revealing the clinical relevance of the present findings. The identified A-core overlaps numerous regions previously found to be affected in 22q11DS as well as in schizophrenia, which approximately 30-40% of 22q11DS patients develop.
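The nodal integration measure used above, closeness centrality on a weighted graph, can be sketched with networkx. Converting connection weights to lengths (their inverses) is one common convention, assumed here, so that stronger connections imply shorter paths; the toy graph is not connectome data.

```python
# Weighted closeness centrality on a toy graph; edge weights are inverted
# to path lengths under an assumed strong-edge-means-short-path convention.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 2.0), ("B", "C", 1.0),
                           ("C", "D", 0.5), ("A", "D", 0.2)])
for u, v, d in G.edges(data=True):
    d["length"] = 1.0 / d["weight"]       # strong connection -> short distance

closeness = nx.closeness_centrality(G, distance="length")
print(closeness)                          # higher = better integrated node
```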