Abstract:
This study aimed to use a plantar pressure insole for estimating the three-dimensional ground reaction force (GRF) as well as the frictional torque (T(F)) during walking. Eleven subjects, six healthy subjects and five patients with ankle disease, participated in the study, wearing pressure insoles during several walking trials on a force plate. The plantar pressure distribution was analyzed, and 10 principal components of 24 regional pressure values, together with the stance time percentage (STP), were considered for GRF and T(F) estimation. Both linear and nonlinear approximators were used for estimating the GRF and T(F), based on two learning strategies using intra-subject and inter-subject data. The RMS error and the correlation coefficient between the approximators and the actual patterns obtained from the force plate were calculated. Our results showed better performance for the nonlinear approximation, especially when the STP was included as an input. The smallest errors were observed for the vertical force (4%) and anterior-posterior force (7.3%), while the medial-lateral force (11.3%) and frictional torque (14.7%) had higher errors. The results obtained for the patients showed higher errors; nevertheless, when data from the same patient were used for learning, the results improved and, in general, only slight differences from healthy subjects were observed. In conclusion, this study showed that an ambulatory pressure insole with data normalization, an optimal choice of inputs and a well-trained nonlinear mapping function can efficiently estimate the three-dimensional ground reaction force and frictional torque over consecutive gait cycles without requiring a force plate.
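The estimation pipeline described above (principal components of the regional pressures plus the stance-time percentage feeding a nonlinear approximator, evaluated by RMS error and correlation) can be sketched roughly as follows. All data, network sizes and targets here are synthetic stand-ins, not the study's measurements:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in data: 500 stance-phase samples of 24 regional pressures.
pressures = rng.random((500, 24))
stp = np.linspace(0, 100, 500)[:, None]                    # stance-time percentage
grf_vertical = pressures.sum(axis=1) + 0.01 * stp.ravel()  # toy vertical-GRF target

# 10 principal components of the 24 regional pressures, plus STP, as inputs.
pca = PCA(n_components=10)
pcs = pca.fit_transform(pressures)
X = np.hstack([pcs, stp])

# Nonlinear approximator (a small MLP stands in for the study's mapping function).
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                                   random_state=0))
model.fit(X, grf_vertical)
pred = model.predict(X)

# Evaluation metrics used in the study: RMS error and correlation coefficient.
rms = np.sqrt(np.mean((pred - grf_vertical) ** 2))
r = np.corrcoef(pred, grf_vertical)[0, 1]
```

In practice the model would be trained on force-plate trials (intra-subject or inter-subject) and then applied to insole data alone.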
Abstract:
Due to advances in sensor networks and remote sensing technologies, the acquisition and storage rates of meteorological and climatological data increase every day and call for novel and efficient processing algorithms. A fundamental problem of data analysis and modeling is the spatial prediction of meteorological variables in complex orography, which serves, among other purposes, extended climatological analyses, the assimilation of data into numerical weather prediction models, the preparation of inputs to hydrological models, and real-time monitoring and short-term forecasting of weather. In this thesis, a new framework for spatial estimation is proposed by taking advantage of a class of algorithms emerging from statistical learning theory. Nonparametric kernel-based methods for nonlinear data classification, regression and target detection, known as support vector machines (SVM), are adapted for the mapping of meteorological variables in complex orography. With the advent of high-resolution digital elevation models, the field of spatial prediction has met new horizons. By exploiting image processing tools along with physical heuristics, a large number of terrain features accounting for the topographic conditions at multiple spatial scales can be extracted. Such features are highly relevant for the mapping of meteorological variables because they control a considerable part of the spatial variability of meteorological fields in the complex Alpine orography. For instance, patterns of orographic rainfall, wind speed and cold-air pools are known to be correlated with particular terrain forms, e.g. convex/concave surfaces and upwind sides of mountain slopes. Kernel-based methods are employed to learn the nonlinear statistical dependence which links the multidimensional space of geographical and topographic explanatory variables to the variable of interest, that is, the wind speed as measured at the weather stations or the occurrence of orographic rainfall patterns as extracted from sequences of radar images. Compared to low-dimensional models integrating only the geographical coordinates, the proposed framework opens a way to regionalize meteorological variables which are multidimensional in nature and rarely show spatial autocorrelation in the original space, which makes the use of classical geostatistics impractical. The challenges explored during the thesis are manifold. First, the complexity of the models is optimized to impose appropriate smoothness properties and reduce the impact of noisy measurements. Secondly, a multiple-kernel extension of SVM is considered to select the multiscale features which explain most of the spatial variability of wind speed. Then, SVM target detection methods are implemented to describe the orographic conditions which cause persistent and stationary rainfall patterns. Finally, the optimal splitting of the data is studied to estimate realistic performances and confidence intervals characterizing the uncertainty of predictions. The resulting maps of average wind speed find applications in renewable resource assessment and open a route to decreasing the temporal scale of analysis to meet hydrological requirements. Furthermore, the maps depicting the susceptibility to orographic rainfall enhancement can be used to improve current radar-based quantitative precipitation estimation and forecasting systems and to generate stochastic ensembles of precipitation fields conditioned upon the orography.
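A minimal sketch of the kind of kernel-based regression described above, mapping station coordinates plus terrain features to wind speed with an RBF-kernel support vector machine. All data, feature names and hyperparameter values are invented placeholders:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Hypothetical training set: station coordinates plus multiscale terrain
# features (e.g. curvature, slope exposure) as explanatory variables.
n = 300
X = np.column_stack([
    rng.uniform(0, 100, n),   # easting (km)
    rng.uniform(0, 100, n),   # northing (km)
    rng.normal(0, 1, n),      # terrain curvature (toy feature)
    rng.normal(0, 1, n),      # slope exposure (toy feature)
])
y = 5.0 + 0.5 * X[:, 2] + rng.normal(0, 0.2, n)   # toy wind speed (m/s)

# Kernel-based nonparametric regression: the C and epsilon hyperparameters
# control model complexity and tolerance to noisy measurements.
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
svr.fit(X, y)

# Predict at a grid cell described by its coordinates and terrain features.
grid_point = np.array([[50.0, 50.0, 1.0, 0.0]])
wind_estimate = svr.predict(grid_point)
```

A multiple-kernel extension, as studied in the thesis, would additionally learn a weight per feature group; plain scikit-learn does not provide that directly, so this sketch uses a single RBF kernel over all inputs.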
Abstract:
Although various foot models have been proposed for kinematics assessment using skin markers, no objective justification exists for the foot segmentations. This study proposed objective kinematic criteria to define which foot joints are relevant (dominant) in skin-marker assessments. Among the studied joints, the shank-hindfoot, hindfoot-midfoot and medial-lateral forefoot joints were found to have greater mobility than the flexibility of their neighbouring bone sets. The amplitude and pattern consistency of these joint angles confirmed their dominance. Nevertheless, the consistency of the medial-lateral forefoot joint amplitude was lower. These three joints also showed acceptable sensitivity to experimental errors, which supported their dominance. This study concluded that, to be reliable for assessments using skin markers, the foot and ankle complex can be divided into shank, hindfoot, medial forefoot, lateral forefoot and toes. Kinematics from foot models with more segments must be used more cautiously.
Abstract:
BACKGROUND: Infliximab (IFX), adalimumab (ADA), and certolizumab pegol (CZP) have similar efficacy in induction and maintenance of clinical remission in Crohn's disease (CD). Given the comparable nature of these drugs, patient preferences may influence the choice of the product. We aimed to identify factors that may contribute to CD patients' decision in selecting one anti-tumor necrosis factor (TNF) agent over the others. METHODS: A prospective survey was performed among anti-TNF-naïve CD patients. Prior to completion of a questionnaire, patients were provided with a written description of the three anti-TNF agents, focusing on indications, mode of administration, side effects, and scientific evidence of efficacy and safety for each drug. RESULTS: One hundred patients (47 females, mean age 45 ± 16 years, range 19-81) with an ileal, colonic, or ileocolonic (33%, 40%, and 27%, respectively) disease location completed the questionnaire. Based on the information provided, 36% of patients preferred ADA, 28% CZP, and 25% IFX, whereas 11% were undecided. The patients' decision in selecting a specific anti-TNF drug was influenced by the following factors: ease of use (69%), time required for therapy (34%), time interval between application of the drug (31%), scientific evidence for efficacy (19%), and fear of syringes (10%). CONCLUSIONS: The majority of patients preferred anti-TNF medications that were administered by subcutaneous injection rather than by intravenous infusion. Ease of use and time required for therapy were two major factors influencing the patients' selection of a specific anti-TNF drug. Patients' individual preferences should be taken into account when prescribing anti-TNF drugs. (Inflamm Bowel Dis 2012).
Abstract:
Background: Johanson-Blizzard syndrome (JBS; OMIM 243800) is an autosomal recessive disorder that includes congenital exocrine pancreatic insufficiency, facial dysmorphism with characteristic nasal wing hypoplasia, multiple malformations, and frequent mental retardation. Our previous work has shown that JBS is caused by mutations in human UBR1, which encodes one of the E3 ubiquitin ligases of the N-end rule pathway. The N-end rule relates the regulation of the in vivo half-life of a protein to the identity of its N-terminal residue. One class of degradation signals (degrons) recognized by UBR1 comprises the destabilizing N-terminal residues of protein substrates. Methodology/Principal Findings: Most JBS-causing alterations of UBR1 are nonsense, frameshift or splice-site mutations that abolish UBR1 activity. We report here missense mutations of human UBR1 in patients with milder variants of JBS. These single-residue changes, including a previously reported missense mutation, involve positions in the RING-H2 and UBR domains of UBR1 that are conserved among eukaryotes. Taking advantage of this conservation, we constructed alleles of the yeast Saccharomyces cerevisiae UBR1 that were counterparts of the missense JBS-UBR1 alleles. Among these yeast Ubr1 mutants, one (H160R) was inactive in yeast-based activity assays, another (Q1224E) had detectable but weak activity, and the third (V146L) exhibited decreased but significant activity, in agreement with the manifestations of JBS in the corresponding patients. Conclusions/Significance: These results, made possible by modeling defects of a human ubiquitin ligase in its yeast counterpart, confirmed the relevance of specific missense UBR1 alleles to JBS and suggested that the residual activity of a missense allele is causally associated with milder variants of JBS.
Abstract:
The interaction of tunneling with groundwater is a problem from both an environmental and an engineering point of view. Tunnel drilling may cause a drawdown of piezometric levels and water inflows into tunnels, which can create problems during excavation. While the influence of tunneling on regional groundwater systems may be adequately predicted in porous media using analytical solutions, such an approach is difficult to apply in fractured rocks. Numerical solutions are preferable, and various conceptual approaches have been proposed to describe and model groundwater flow through fractured rock masses, ranging from equivalent continuum models to discrete fracture network simulation models. However, their application requires extensive preliminary investigation of the behavior of the groundwater system based on hydrochemical and structural data. To study large-scale flow systems in the fractured rocks of mountainous terrains, a comprehensive study was conducted in southern Switzerland, using as case studies two infrastructures currently under construction: (i) the Monte Ceneri base railway tunnel (Ticino) and (ii) the San Fedele highway tunnel (Roveredo, Graubünden). The chosen approach combines the temporal and spatial variation of geochemical and geophysical measurements. About 60 localities, at the surface and in the underlying tunnels, were monitored temporally and spatially for more than one year. At first, the project focused on the collection of hydrochemical and structural data. A number of springs, selected in the area surrounding the infrastructures, were monitored for discharge, electrical conductivity, pH, and temperature. Water samples (springs, tunnel inflows and rains) were taken for isotopic analysis; in particular, the stable isotope composition (δ2H, δ18O values) can reflect the origin of the water, because of spatial (recharge altitude, topography, etc.)
and temporal (seasonal) effects on precipitation, which in turn strongly influence the isotopic composition of groundwater. Tunnel inflows in the accessible parts of the tunnels were also sampled and, where possible, monitored over time. Noble-gas concentrations and their isotope ratios were used at selected locations to better understand the origin and circulation of the groundwater. In addition, electrical resistivity and VLF-type electromagnetic surveys were performed to identify water-bearing fractures and/or weathered areas that could be intersected at depth during tunnel construction. The main goal of this work was to demonstrate that these hydrogeological data and geophysical methods, combined with structural and hydrogeological information, can be successfully used to develop hydrogeological conceptual models of groundwater flow in regions to be exploited for tunnels. The main results of the project are: (i) to have successfully tested the application of electrical resistivity and VLF-electromagnetic surveys to assess water-bearing zones during tunnel drilling; (ii) to have verified the usefulness of noble-gas, major-ion and stable-isotope compositions as proxies for the detection of faults and for understanding the origin of the groundwater and its flow regimes (direct rainwater infiltration or groundwater of long residence time); and (iii) to have convincingly tested the combined application of a geochemical and geophysical approach to assess and predict the vulnerability of springs to tunnel drilling.
- "The NLFA (New Rail Link through the Alps) on the Gotthard axis is the most important construction project in Switzerland. By building the new Gotthard line, Switzerland is carrying out one of the largest environmental protection projects in Europe." This sentence, used to present the AlpTransit project, is particularly eloquent in explaining the usefulness of the new trans-European railway lines for sustainable development. However, like all large infrastructures, the construction of new tunnels has unavoidable impacts on the environment. In particular, the drainage of groundwater by a tunnel can lower piezometric levels. Moreover, water flowing into the tunnel often leads to engineering problems: large inflows can complicate excavation, delay progress and, in the worst case, endanger worker safety, and infiltration can remain a serious problem during tunnel operation. From a scientific point of view, access to underground infrastructure is a unique opportunity to obtain geological information at depth and to sample otherwise inaccessible waters. In this work we used a multidisciplinary approach that integrates hydrogeochemical measurements on surface waters with indirect geophysical investigations, namely electrical resistivity tomography (ERT) and VLF electromagnetic measurements. The study was carried out in southern Switzerland on two large infrastructures currently under construction: the Monte Ceneri base railway tunnel, part of the above-mentioned AlpTransit project and located entirely in the canton of Ticino, and the San Fedele road tunnel at Roveredo in the canton of Graubünden. The main objective was to show how the geophysical and geochemical approaches can be integrated to answer the question of the possible effects of drainage caused by underground works; access to the tunnels allowed adequate validation of the investigations, confirming the proposed hypotheses in each case. To this end, about 50 geophysical profiles (28 two-dimensional electrical images and 23 electromagnetic profiles) were acquired in the zones potentially influenced by the tunnels, in order to identify fractures and discontinuities in which groundwater can circulate. In addition, waters were sampled at 60 localities at the surface and in the underlying tunnels, with monthly monitoring lasting more than one year; the main physical and chemical parameters (discharge, electrical conductivity, pH and temperature) were measured, and samples were taken for monthly analysis of the stable isotopes of hydrogen and oxygen (δ2H, δ18O). Together with measurements of dissolved noble-gas concentrations and their isotope ratios in selected cases, these analyses made it possible to explain the origin of the different groundwaters, the modes of aquifer recharge, the presence of possible mixing phenomena and, more generally, groundwater circulation in the subsurface. Although only a partial answer to a very complex question, this work achieved several important objectives. We successfully tested the applicability of indirect geophysical methods (ERT and VLF electromagnetics) for predicting the presence of groundwater in rock massifs, and demonstrated the usefulness of noble-gas, stable-isotope and major-ion analyses for detecting faults and for understanding the origin of groundwater (rainwater infiltrating from above or water rising from depth). Finally, this research showed that integrating such geophysical and geochemical information allows the development of appropriate conceptual models of how groundwater circulates; these models make it possible to predict water inflows into tunnels and the vulnerability of springs and other water resources during tunnel construction.
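The altitude effect on stable isotopes invoked above can be used quantitatively: a local δ18O-altitude gradient fitted on calibration samples lets one invert a measured spring or tunnel-inflow composition for its mean recharge altitude. A sketch with purely illustrative numbers (the gradient here, about -0.2 ‰ per 100 m, is only a typical Alpine order of magnitude, not a value from this study):

```python
import numpy as np

# Illustrative calibration samples: mean recharge altitude (m a.s.l.)
# versus d18O (per mil). Hypothetical values; a real study fits station data.
altitude = np.array([500.0, 800.0, 1200.0, 1600.0, 2000.0])
d18o = np.array([-9.0, -9.6, -10.4, -11.2, -12.0])

# Fit the local altitude gradient d18O = a * altitude + b by least squares.
a, b = np.polyfit(altitude, d18o, 1)   # here a = -0.002 per mil per metre

# Invert a measured tunnel-inflow composition for its mean recharge altitude.
sample_d18o = -10.8
recharge_altitude = (sample_d18o - b) / a
```

With these numbers the inferred mean recharge altitude is 1400 m a.s.l.; comparing such estimates with the local topography helps distinguish direct rain infiltration from deeper, long-residence-time circulation.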
Abstract:
The aim of this paper is to describe the process and challenges in building exposure scenarios for engineered nanomaterials (ENM), using an exposure scenario format similar to that used for the European Chemicals regulation (REACH). Over 60 exposure scenarios were developed based on information from publicly available sources (literature, books, and reports), publicly available exposure estimation models, occupational sampling campaign data from partnering institutions, and industrial partners regarding their own facilities. The primary focus was on carbon-based nanomaterials, nano-silver (nano-Ag) and nano-titanium dioxide (nano-TiO2), and included occupational and consumer uses of these materials with consideration of the associated environmental release. The process of building exposure scenarios illustrated the availability and limitations of existing information and exposure assessment tools for characterizing exposure to ENM, particularly as it relates to risk assessment. This article describes the gaps in the information reviewed, recommends future areas of ENM exposure research, and proposes types of information that should, at a minimum, be included when reporting the results of such research, so that the information is useful in a wider context.
Abstract:
We investigated the association between exposure to radio-frequency electromagnetic fields (RF-EMFs) from broadcast transmitters and childhood cancer. First, we conducted a time-to-event analysis including children under age 16 years living in Switzerland on December 5, 2000. Follow-up lasted until December 31, 2008. Second, all children living in Switzerland for some time between 1985 and 2008 were included in an incidence density cohort. RF-EMF exposure from broadcast transmitters was modeled. Based on 997 cancer cases, adjusted hazard ratios in the time-to-event analysis for the highest exposure category (>0.2 V/m) as compared with the reference category (<0.05 V/m) were 1.03 (95% confidence interval (CI): 0.74, 1.43) for all cancers, 0.55 (95% CI: 0.26, 1.19) for childhood leukemia, and 1.68 (95% CI: 0.98, 2.91) for childhood central nervous system (CNS) tumors. Results of the incidence density analysis, based on 4,246 cancer cases, were similar for all types of cancer and leukemia but did not indicate a CNS tumor risk (incidence rate ratio = 1.03, 95% CI: 0.73, 1.46). This large census-based cohort study did not suggest an association between predicted RF-EMF exposure from broadcasting and childhood leukemia. Results for CNS tumors were less consistent, but the most comprehensive analysis did not suggest an association.
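The incidence rate ratio reported in the incidence density analysis is, in general, a ratio of event rates per person-time, with a Wald confidence interval computed on the log scale. A sketch with invented counts (not the study's data):

```python
import math

# Hypothetical counts: cancer cases and person-years in the highest-exposure
# category versus the reference category. Illustrative numbers only.
events_exposed, pyears_exposed = 30, 25_000.0
events_ref, pyears_ref = 290, 250_000.0

rate_exposed = events_exposed / pyears_exposed
rate_ref = events_ref / pyears_ref
irr = rate_exposed / rate_ref

# Wald 95% CI on the log scale: SE(log IRR) = sqrt(1/a + 1/b) for Poisson counts.
se = math.sqrt(1 / events_exposed + 1 / events_ref)
ci_low = math.exp(math.log(irr) - 1.96 * se)
ci_high = math.exp(math.log(irr) + 1.96 * se)
```

An IRR near 1 whose confidence interval straddles 1, as in the study's result of 1.03 (0.73, 1.46), is consistent with no association.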
Abstract:
Asphalt pavements suffer various failures due to insufficient quality within their design lives. The American Association of State Highway and Transportation Officials (AASHTO) Mechanistic-Empirical Pavement Design Guide (MEPDG) has been proposed to improve pavement quality through quantitative performance prediction. Evaluation of the actual performance (quality) of pavements requires in situ nondestructive testing (NDT) techniques that can accurately measure the most critical, objective, and sensitive properties of pavement systems. The purpose of this study is to assess existing as well as promising new NDT technologies for quality control/quality assurance (QC/QA) of asphalt mixtures. Specifically, this study examined field measurements of density via the PaveTracker electromagnetic gage, shear-wave velocity via surface-wave testing methods, and dynamic stiffness via the Humboldt GeoGauge for five representative paving projects covering a range of mixes and traffic loads. The in situ tests were compared against laboratory measurements of core density and dynamic modulus. The in situ PaveTracker density had a low correlation with laboratory density and was not sensitive to variations in temperature or asphalt mix type. The in situ shear-wave velocity measured by surface-wave methods was most sensitive to variations in temperature and asphalt mix type. The in situ density and in situ shear-wave velocity were combined to calculate an in situ dynamic modulus, which is a performance-based quality measurement. The in situ GeoGauge stiffness measured on hot asphalt mixtures several hours after paving had a high correlation with the in situ dynamic modulus and the laboratory density, whereas the stiffness measurement of asphalt mixtures cooled with dry ice or at ambient temperature one or more days after paving had a very low correlation with the other measurements. 
To transform the in situ moduli from surface-wave testing into quantitative quality measurements, a QC/QA procedure was developed to first correct the in situ moduli measured at different field temperatures to the moduli at a common reference temperature, based on master curves from laboratory dynamic modulus tests. The corrected in situ moduli can then be compared against the design moduli for an assessment of the actual pavement performance. A preliminary study of microelectromechanical systems (MEMS)-based sensors for QC/QA and health monitoring of asphalt pavements was also performed.
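The two steps above, combining density and shear-wave velocity into a dynamic modulus and shifting a field-temperature measurement to a reference temperature via the master curve, can be sketched as follows. The numeric values, the Poisson's ratio and the Arrhenius constant are assumptions for illustration, not the study's calibration:

```python
# Combine in situ density and shear-wave velocity into a small-strain modulus.
density = 2400.0   # kg/m^3, e.g. from an electromagnetic density gage
vs = 1500.0        # m/s, e.g. from surface-wave testing
nu = 0.35          # assumed Poisson's ratio for asphalt concrete

shear_modulus = density * vs ** 2                  # G = rho * Vs^2   (Pa)
youngs_modulus = 2.0 * (1.0 + nu) * shear_modulus  # E = 2 (1 + nu) G (Pa)

# Temperature correction sketch: shift a modulus measured at field temperature
# T to the reference temperature Tref using an Arrhenius-type time-temperature
# shift factor; C is an assumed constant fitted from the laboratory master curve.
def log_shift_factor(temp_c, ref_c=21.0, C=10920.0):
    """log10 a_T = C * (1/T - 1/Tref) with temperatures in kelvin."""
    return C * (1.0 / (temp_c + 273.15) - 1.0 / (ref_c + 273.15))

a_t = 10.0 ** log_shift_factor(35.0)   # field measurement at 35 C
```

The shift factor `a_t` rescales the loading time (or frequency) at which the master curve is read, so the field modulus can be compared with the laboratory curve at the common reference temperature.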
Abstract:
Both Bayesian networks and probabilistic evaluation are gaining increasingly widespread use within many professional branches, including forensic science. Nevertheless, they are subtle topics with definitional details that require careful study. While many sophisticated developments of probabilistic approaches to the evaluation of forensic findings may readily be found in the published literature, there remains a gap with respect to writings that focus on foundational aspects and on how these may be acquired by interested scientists new to these topics. This paper takes this as a starting point to report on how a class of master's students in forensic science learned about Bayesian networks for likelihood-ratio-based probabilistic inference procedures. The presentation uses an example that relies on a casework scenario drawn from the published literature, involving a questioned signature. A complicating aspect of that case study, proposed to the students in a teaching scenario, is the need to consider multiple competing propositions, a setting that may not readily be approached within a likelihood-ratio-based framework without attention to some additional technical details. Using generic Bayesian network fragments from the existing literature on the topic, course participants were able to track the probabilistic underpinnings of the proposed scenario correctly, both in terms of likelihood ratios and of posterior probabilities. In addition, further study of the example allowed the students to derive an alternative Bayesian network structure with a computational output equivalent to existing probabilistic solutions. This practical experience underlines the potential of Bayesian networks to support and clarify foundational principles of probabilistic procedures for forensic evaluation.
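The complication the abstract points to, likelihood-ratio reasoning with more than two competing propositions, can be illustrated with a toy calculation: Bayes' theorem normalizes over all propositions at once, while any likelihood ratio remains a pairwise comparison. The propositions, priors and likelihoods below are invented for a hypothetical questioned-signature setting:

```python
# Invented priors P(H) and likelihoods P(findings | H) for three competing
# propositions about a questioned signature (hypothetical numbers).
priors = {"genuine": 0.5, "simulation": 0.3, "disguise": 0.2}
likelihoods = {"genuine": 0.8, "simulation": 0.1, "disguise": 0.3}

# Posterior probabilities by Bayes' theorem, normalizing over all propositions.
norm = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / norm for h in priors}

# A likelihood ratio still compares exactly two propositions at a time.
lr_genuine_vs_simulation = likelihoods["genuine"] / likelihoods["simulation"]
```

With these numbers the posterior for "genuine" is 0.4/0.49 ≈ 0.82, while the pairwise LR of genuine versus simulation is 8; a Bayesian network encodes the same computation graphically once the propositions and findings are made explicit.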
Resumo:
BACKGROUND: Exposure to combination antiretroviral therapy (cART) can lead to important metabolic changes and increased risk of coronary heart disease (CHD). Computerized clinical decision support systems have been advocated to improve the management of patients at risk for CHD, but it is unclear whether such systems reduce patients' risk for CHD. METHODS: We conducted a cluster trial within the Swiss HIV Cohort Study (SHCS) of HIV-infected patients, aged 18 years or older, not pregnant and receiving cART for >3 months. We randomized 165 physicians to either guidelines for CHD risk factor management alone or guidelines plus CHD risk profiles. Risk profiles included the Framingham risk score, CHD drug prescriptions and CHD events based on biannual assessments, and were continuously updated by the SHCS data centre and integrated into patient charts by study nurses. Outcome measures were total cholesterol, systolic and diastolic blood pressure and Framingham risk score. RESULTS: A total of 3,266 patients (80% of those eligible) had a final assessment of the primary outcome at least 12 months after the start of the trial. Mean (95% confidence interval) patient differences where physicians received CHD risk profiles and guidelines, rather than guidelines alone, were total cholesterol -0.02 mmol/l (-0.09 to 0.06), systolic blood pressure -0.4 mmHg (-1.6 to 0.8), diastolic blood pressure -0.4 mmHg (-1.5 to 0.7) and Framingham 10-year risk score -0.2% (-0.5 to 0.1). CONCLUSIONS: Systematic computerized routine provision of CHD risk profiles in addition to guidelines does not significantly improve risk factors for CHD in patients on cART.
Resumo:
This work aimed at assessing the doses delivered in Switzerland to paediatric patients during computed tomography (CT) examinations of the brain, chest and abdomen, and at establishing diagnostic reference levels (DRLs) for various age groups. Forms were sent to the ten centres performing CT on children, addressing the demographics, the indication and the scanning parameters: number of series, kilovoltage, tube current, rotation time, reconstruction slice thickness and pitch, volume CT dose index (CTDI(vol)) and dose length product (DLP). Per age group, the proposed DRLs are, in terms of CTDI(vol): brain 20, 30, 40, 60 mGy; chest 5, 8, 10, 12 mGy; abdomen 7, 9, 13, 16 mGy; and in terms of DLP: brain 270, 420, 560, 1,000 mGy cm; chest 110, 200, 220, 460 mGy cm; abdomen 130, 300, 380, 500 mGy cm. An optimisation process should be initiated to reduce the spread in dose recorded in this study. A major element of this process should be the use of DRLs.
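The proposed DRLs lend themselves to a simple lookup for flagging examinations that exceed the reference level. The numbers below are the values quoted in the abstract; the age-group labels are assumed, since the abstract lists four groups per region without naming them.

```python
AGE_GROUPS = ("<1 y", "1-5 y", "5-10 y", "10-15 y")  # assumed labels

DRL_CTDI_VOL = {  # mGy, per age group
    "brain":   (20, 30, 40, 60),
    "chest":   (5, 8, 10, 12),
    "abdomen": (7, 9, 13, 16),
}
DRL_DLP = {  # mGy cm, per age group
    "brain":   (270, 420, 560, 1000),
    "chest":   (110, 200, 220, 460),
    "abdomen": (130, 300, 380, 500),
}

def exceeds_drl(region, age_index, ctdi_vol, dlp):
    """True if either dose descriptor of an examination exceeds the
    proposed DRL for its region and age group."""
    return (ctdi_vol > DRL_CTDI_VOL[region][age_index]
            or dlp > DRL_DLP[region][age_index])

# Example: chest CT of a 1-5-year-old, CTDIvol 9 mGy, DLP 150 mGy cm;
# the CTDIvol is above the 8 mGy DRL, so the exam is flagged
flagged = exceeds_drl("chest", 1, 9.0, 150.0)
```

DRLs are investigation thresholds, not dose limits: a flagged examination calls for a review of protocol and justification, which is the optimisation process the abstract recommends.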
Resumo:
Screening people without symptoms of disease is an attractive idea. Screening allows early detection of disease or elevated risk of disease, and has the potential for improved treatment and reduction of mortality. The list of future screening opportunities is set to grow because of the refinement of screening techniques, the increasing frequency of degenerative and chronic diseases, and the steadily growing body of evidence on genetic predispositions for various diseases. But how should we decide on the diseases for which screening should be done and on recommendations for how it should be implemented? We use the examples of prostate cancer and genetic screening to show the importance of considering screening as an ongoing population-based intervention with beneficial and harmful effects, and not simply the use of a test. Assessing whether screening should be recommended and implemented for any named disease is therefore a multi-dimensional task in health technology assessment. There are several countries that already use established processes and criteria to assess the appropriateness of screening. We argue that the Swiss healthcare system needs a nationwide screening commission mandated to conduct appropriate evidence-based evaluation of the impact of proposed screening interventions, to issue evidence-based recommendations, and to monitor the performance of screening programmes introduced. Without explicit processes there is a danger that beneficial screening programmes could be neglected and that ineffective, and potentially harmful, screening procedures could be introduced.
Resumo:
Summary: Due to their conical shape and the reduction of area with increasing elevation, mountain ecosystems were identified early as potentially very sensitive to global warming. Moreover, mountain systems may experience unprecedented rates of warming during the next century, two to three times higher than the rates recorded during the 20th century. In this context, species distribution models (SDM) have become important tools for rapid assessment of the impact of accelerated land use and climate change on the distribution of plant species. In my study, I developed and tested new predictor variables for species distribution models (SDM), specific to current and future geographic projections of plant species in a mountain system, using the Western Swiss Alps as model region. Since meso- and micro-topography are relevant to explaining geographic patterns of plant species in mountain environments, I assessed the effect of scale on predictor variables and on geographic projections of SDM. I also developed a methodological framework of space-for-time evaluation to test the robustness of SDM when projected into a future changing climate. Finally, I used a cellular automaton to run dynamic simulations of plant migration under climate change in a mountain landscape, including realistic seed dispersal distances. Results of future projections for the 21st century were also discussed in the perspective of vegetation changes monitored during the 20th century. Overall, I showed in this study that, based on the most severe A1 climate change scenario and realistic simulations of plant dispersal, species extinctions in the Western Swiss Alps could affect nearly one third (28.5%) of the 284 species modeled by 2100. With the less severe B1 scenario, only 4.6% of species are predicted to become extinct. However, even with B1, 54% (153 species) may still lose more than 80% of their initial surface.
Results of monitoring of past vegetation changes suggested that plant species can react quickly to warmer conditions as long as competition is low. However, in subalpine grasslands, competition from already established species is probably important and limits the establishment of newly arrived species. Results from future simulations also showed that heavy extinctions of alpine plants may start as early as 2040, and by 2080 at the latest. My study also highlighted the importance of fine-scale and regional assessments of climate change impact on mountain vegetation, using more direct predictor variables. Indeed, predictions at the continental scale may fail to predict local refugia or local extinctions, as well as loss of connectivity between local populations. On the other hand, migrations of low-elevation species to higher altitudes may be difficult to predict at the local scale. Résumé: The conical shape of mountains, together with the reduction of surface area at high elevations, is known to make mountain ecosystems particularly sensitive to global warming. Moreover, mountain systems will probably undergo warming during the 21st century that is two to three times faster than that measured during the 20th century. In this context, predictive models of the geographic distribution of vegetation have become powerful tools for rapid assessment of the impact of climate change and of human transformation of the landscape on vegetation. In my study, I developed new predictor variables for distribution models, specific to present and future geographic projections of plants in a mountain system, using the Préalpes vaudoises as the sampling area.
Since meso- and microtopography are particularly well suited to explaining the geographic distribution patterns of plants in a mountain environment, I tested the effects of scale on the predictor variables and on the projections of the distribution models. I also developed a methodological framework to test the potential robustness of the models when projected into the future. Finally, I used a cellular automaton to simulate dynamically the future migration of plants across the landscape under four climate change scenarios for the 21st century. I integrated more realistic seed dispersal mechanisms and distances into these simulations. With the most realistic simulations, I showed that nearly one third of the 284 species considered (28.5%) could be threatened with extinction by 2100 under the most severe climate change scenario, A1. Under the least severe scenario, B1, only 4.6% of species are threatened with extinction, but 54% (153 species) risk losing more than 80% of their initial habitat. The results of monitoring past vegetation changes show that plants can react quickly to climate warming if competition is low. In subalpine grasslands, the species already present certainly limit the arrival of new species through competition. Simulation results for the future predict the onset of massive extinctions in the Préalpes from 2040 onwards, and by 2080 at the latest. My work also demonstrates the importance of fine-scale regional studies for assessing the impact of climate change on vegetation, integrating more direct variables. Indeed, continental-scale studies account neither for micro-refugia and local extinctions nor for losses of connectivity between local populations.
Despite this, the migration of low-elevation plants remains difficult to predict at the local scale without broader-scale modelling.
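The cellular automaton approach described in the summary can be illustrated with a deliberately minimal one-dimensional sketch: a climatic suitability band shifts upslope as the climate warms, and a species persists only if its seed dispersal distance lets it track the band. All parameters below are invented for illustration and bear no relation to the thesis model or its scenarios.

```python
# One-dimensional cellular automaton: cells run from low elevation (index 0)
# to high elevation (index 9).

def step(occupied, suitable, dispersal=1):
    """One time step: seeds colonise every cell within `dispersal` cells of
    an occupied cell, then populations outside the suitable band die out."""
    n = len(occupied)
    colonised = [
        any(occupied[j] for j in range(max(0, i - dispersal),
                                       min(n, i + dispersal + 1)))
        for i in range(n)
    ]
    return [c and s for c, s in zip(colonised, suitable)]

occupied = [False] * 10
occupied[3] = True                 # initial population at mid elevation
for t in range(5):
    lo, hi = 3 + t, 6 + t          # suitable band shifts upslope each step
    suitable = [lo <= i <= hi for i in range(10)]
    occupied = step(occupied, suitable, dispersal=1)

print([i for i, o in enumerate(occupied) if o])   # prints [7, 8]
```

With `dispersal=1` the species tracks the warming and survives at higher cells; rerunning with `dispersal=0` drives it extinct, which is the mechanism behind the dispersal-limited extinctions discussed above.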
Resumo:
INTRODUCTION: In this study we evaluated the validity of garment-based quadriceps stimulation (GQS) for assessment of muscle inactivation in comparison with femoral nerve stimulation (FNS). METHODS: Inactivation estimates (superimposed doublet torque), self-reported discomfort, and twitch and doublet contractile properties were compared between GQS and FNS in 15 healthy subjects. RESULTS: Superimposed doublet torque was significantly lower for GQS than for FNS at 20% and 40% maximum voluntary contraction (MVC) (P < 0.01), but not at 60%, 80%, and 100% MVC. Discomfort scores were systematically lower for GQS than for FNS (P < 0.05). Resting twitch and doublet peak torque were lower for GQS, and time to peak torque was shorter for GQS than for FNS (P < 0.01). CONCLUSIONS: GQS can be used with confidence for straightforward evaluation of quadriceps muscle inactivation, whereas its validity for assessment of contractile properties remains to be determined. Muscle Nerve 51: 117-124, 2015.