926 results for ASSESSMENT MODELS
Abstract:
Homology modeling is the most commonly used technique to build a three-dimensional model for a protein sequence. It relies heavily on the quality of the sequence alignment between the protein to model and related proteins with a known three-dimensional structure. Alignment quality can be assessed according to the physico-chemical properties of the three-dimensional models it produces. In this work, we introduce fifteen predictors designed to evaluate the properties of the models obtained for various alignments. They consist of an energy value obtained from different force fields (CHARMM, ProsaII or ANOLEA) computed on residues selected around misaligned regions. These predictors were evaluated on ten challenging test cases. For each target, all possible ungapped alignments are generated and their corresponding models are computed and evaluated. The best predictor, retrieving the structural alignment for 9 out of 10 test cases, is based on the ANOLEA atomistic mean force potential and takes into account residues around misaligned secondary structure elements. The performance of the other predictors is significantly lower. This work shows that substantial improvement in local alignments can be obtained by careful assessment of the local structure of the resulting models.
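The alignment search loop described above lends itself to a simple illustration. The sketch below (Python, not the authors' pipeline) enumerates every ungapped placement of a target sequence along a template and ranks the placements with a pluggable scoring callable; building the 3D model and evaluating it with a force field such as ANOLEA is replaced by a hypothetical stand-in.

```python
# Toy sketch of the search strategy: enumerate every ungapped alignment
# (i.e., every offset of the target along the template) and rank the
# resulting models by a pluggable quality score. Building the 3D model and
# computing its force-field energy (ANOLEA, CHARMM, ProsaII) are replaced
# here by a hypothetical callable, so this only illustrates the outer loop.

from typing import Callable, List, Tuple

def rank_ungapped_alignments(
    target: str,
    template: str,
    model_energy: Callable[[int], float],   # lower energy = better model
) -> List[Tuple[float, int]]:
    offsets = range(-len(target) + 1, len(template))   # all ungapped placements
    return sorted((model_energy(off), off) for off in offsets)

# Dummy energy: pretend the model built at offset 3 has the lowest energy.
ranked = rank_ungapped_alignments("ACDEFG", "MKACDEFGHIKL", lambda off: abs(off - 3))
print("best offset:", ranked[0][1])
```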
Abstract:
Characterizing the risks posed by nanomaterials is extraordinarily complex because these materials can have a wide range of sizes, shapes, chemical compositions and surface modifications, all of which may affect toxicity. There is an urgent need for a testing strategy that can rapidly and efficiently provide a screening approach for evaluating the potential hazard of nanomaterials and inform the prioritization of additional toxicological testing where necessary. Predictive toxicity models could form an integral component of such an approach by predicting which nanomaterials, as a result of their physico-chemical characteristics, have potentially hazardous properties. Strategies for directing research towards predictive models and the ancillary benefits of such research are presented here.
Abstract:
BACKGROUND: Accurate assessment of glenoid inclination is of interest for a variety of conditions and procedures. The purpose of this study was to develop an accurate and reproducible measurement for glenoid inclination on standardized anterior-posterior (AP) radiographs and on computed tomography (CT) images. MATERIALS AND METHODS: Three consistently identifiable angles were defined: angle α by line AB connecting the superior and inferior glenoid tubercle (glenoid fossa) and the line identifying the scapular spine; angle β by line AB and the floor of the supraspinatus fossa; angle γ by line AB and the lateral margin of the scapula. Experimental study: these 3 angles were measured as a function of the scapular position to test their resistance to rotation. Conventional AP radiographs and CT scans were acquired in extension/flexion and internal/external rotation over a range of up to ±40°. Clinical study: the inter-rater reliability of all angles was assessed on AP radiographs and CT scans of 60 patients (30 with proximal humeral fractures, 30 with osteoarthritis) by 2 independent observers. RESULTS: The experimental study showed that angles α and β have a resistance to rotation of up to ±20°; the deviation from the neutral position was not more than ±10°. The inter-rater reliability results for angle β, analyzed by Bland-Altman plots, were (mean ± standard deviation) -0.1 ± 4.2 for radiographs and -0.3 ± 3.3 for CT scans in the fracture group, and -1.2 ± 3.8 for radiographs and -3.0 ± 3.6 for CT scans in the osteoarthritis group. CONCLUSION: Angle β is the most reproducible measurement for glenoid inclination on conventional AP radiographs, providing resistance to positional variability of the scapula and good inter-rater reliability.
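For illustration, a Bland-Altman comparison of two observers reduces to the bias (mean difference) and the 95% limits of agreement. The short sketch below computes these quantities on invented angle measurements; it is not the study's analysis code.

```python
# Minimal sketch of a Bland-Altman inter-rater comparison: bias (mean
# difference) and 95% limits of agreement between two observers' angle
# measurements. The sample values are invented.

import numpy as np

def bland_altman(observer_a, observer_b):
    a, b = np.asarray(observer_a, float), np.asarray(observer_b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

angles_obs1 = [78.0, 81.5, 75.2, 80.1, 77.4]   # hypothetical beta angles (degrees)
angles_obs2 = [77.6, 82.0, 74.8, 80.9, 76.9]
bias, sd, loa = bland_altman(angles_obs1, angles_obs2)
print(f"bias = {bias:.2f}°, SD = {sd:.2f}°, 95% LoA = {loa[0]:.2f}° to {loa[1]:.2f}°")
```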
Abstract:
Static process simulation has traditionally been used to model complex processes for various purposes. However, using static process simulators to prepare holistic examinations aimed at improving profit-making capability requires a lot of work, because producing results demands assessing the applicability of detailed data that may be irrelevant to the objective. The data relevant to the total assessment gets buried by irrelevant data. Furthermore, the models do not include an examination of maintenance or risk management, and economic examination is often an extra property added to them that can be performed with a spreadsheet program. A process model applicable to holistic economic examinations has been developed in this work. The model is based on the life cycle profit philosophy developed by Hagberg and Henriksson in 1996. The construction of the model has utilized life cycle assessment and life cycle costing methodologies with a view to developing, above all, a model applicable to the economic examination of complete wholes that restricts the information required to aspects essential to the objectives. Life cycle assessment and costing differ from each other in their modeling principles, but features of both methodologies can be used in developing economic process modeling. Methods applicable to the modeling of complex processes can be examined from the viewpoint of life cycle methodologies, because they too involve collecting and managing large bodies of information and producing information for the needs of decision-makers. The results of the study show that, on the basis of the principles of life cycle modeling, a process model can be created that produces holistic efficiency examinations of the profit-making capability of a production line with fewer resources than traditional methods. The calculations of the model rely to the maximum extent on the information system of the factory, which means that the accuracy of the results can be improved by developing information systems so that they provide the best possible information for this kind of examination.
Abstract:
Occupational exposure modeling is widely used in the context of the E.U. regulation on the registration, evaluation, authorization, and restriction of chemicals (REACH). First-tier tools, such as the European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC) targeted risk assessment (TRA) or Stoffenmanager, are used to screen a wide range of substances. Those of concern are investigated further using second-tier tools, e.g., the Advanced REACH Tool (ART). Local sensitivity analysis (SA) methods are used here to determine dominant factors for three models commonly used within the REACH framework: ECETOC TRA v3, Stoffenmanager 4.5, and ART 1.5. Based on the results of the SA, the robustness of the models is assessed. For ECETOC, the process category (PROC) is the most important factor; a failure to identify the correct PROC has severe consequences for the exposure estimate. Stoffenmanager is the most balanced model, and decision-making uncertainties in one modifying factor are less severe in Stoffenmanager. ART requires a careful evaluation of the decisions in the source compartment, since it constitutes ∼75% of the total exposure range, corresponding to 20-22 orders of magnitude in the exposure estimate. Our results indicate that there is a trade-off between the accuracy and the precision of the models. Previous studies suggested that ART may lead to more accurate results in well-documented exposure situations. However, the choice of the adequate model should ultimately be determined by the quality of the available exposure data: if the practitioner is uncertain concerning two or more decisions in the entry parameters, Stoffenmanager may be more robust than ART.
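As a rough illustration of a local, one-at-a-time sensitivity analysis, the sketch below varies one modifying factor of a generic multiplicative exposure model at a time and reports the resulting spread of the estimate. The baseline and factor multipliers are hypothetical and do not reproduce the ECETOC TRA, Stoffenmanager or ART equations.

```python
# Hedged sketch of a one-at-a-time (local) sensitivity analysis on a generic
# multiplicative exposure model. All factor levels are hypothetical.

factors = {
    "substance_emission": [0.3, 1.0, 3.0],   # multipliers per decision option
    "local_controls":     [0.1, 0.3, 1.0],
    "duration":           [0.5, 1.0],
}
baseline = 1.0  # mg/m3, arbitrary reference exposure

def exposure(choice):
    """Exposure estimate = baseline times the product of all chosen multipliers."""
    estimate = baseline
    for value in choice.values():
        estimate *= value
    return estimate

# Influence of a factor = max/min exposure ratio while it alone is varied.
for name, levels in factors.items():
    fixed = {k: v[0] for k, v in factors.items() if k != name}
    estimates = [exposure({**fixed, name: level}) for level in levels]
    print(f"{name}: spread factor {max(estimates) / min(estimates):.1f}")
```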
Abstract:
We analyzed 42 models from 14 brands of refill liquids for e-cigarettes for the presence of micro-organisms, diethylene glycol, ethylene glycol, hydrocarbons, ethanol, aldehydes, tobacco-specific nitrosamines, and solvents. All the liquids under scrutiny complied with norms for the absence of yeast, mold, aerobic microbes, Staphylococcus aureus, and Pseudomonas aeruginosa. Diethylene glycol, ethylene glycol and ethanol were detected, but remained within the limits authorized for food and pharmaceutical products. Terpenic compounds and aldehydes were found in the products, in particular formaldehyde and acrolein. No sample contained nitrosamines at levels above the limit of detection (1 μg/g). Residual solvents such as 1,3-butadiene, cyclohexane and acetone, to name a few, were found in some products. None of the products under scrutiny was entirely free of potentially toxic compounds. However, apart from nicotine, the acute oral toxicity of the e-liquids tested seems to be of minor concern. Nevertheless, a minority of liquids, especially those with flavorings, showed particularly high levels of some chemicals, raising concerns about their potential toxicity in case of chronic oral exposure.
Abstract:
(13)C magnetic resonance spectroscopy (MRS) combined with the administration of (13)C-labeled substrates uniquely allows metabolic fluxes to be measured in vivo in the brain of humans and rats. The extension to mouse models may provide a unique opportunity for the investigation of models of human diseases. In the present study, the short-echo-time (TE), full-sensitivity (1)H-[(13)C] MRS sequence combined with a high magnetic field (14.1 T) and infusion of [U-(13)C6] glucose was used to enhance the experimental sensitivity in vivo in the mouse brain, and the (13)C turnover curves of glutamate C4, glutamine C4, glutamate+glutamine C3, aspartate C2, lactate C3, alanine C3, and γ-aminobutyric acid C2, C3 and C4 were obtained. A one-compartment model was used to fit the (13)C turnover curves, yielding values for metabolic fluxes including the tricarboxylic acid (TCA) cycle flux VTCA (1.05 ± 0.04 μmol/g per minute), the exchange flux between 2-oxoglutarate and glutamate Vx (0.48 ± 0.02 μmol/g per minute), the glutamate-glutamine exchange rate V(gln) (0.20 ± 0.02 μmol/g per minute), the pyruvate dilution factor K(dil) (0.82 ± 0.01), and the ratio of the lactate conversion rate to the alanine conversion rate V(Lac)/V(Ala) (10 ± 2). This study opens the prospect of studying transgenic mouse models of brain pathologies.
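As a much simplified illustration of fitting a turnover curve, the sketch below fits a mono-exponential rise to steady state to invented enrichment data; the actual one-compartment analysis couples several metabolite pools, so the rate constant and assumed pool size here are purely illustrative.

```python
# Very simplified sketch of fitting a (13)C turnover curve with a
# mono-exponential rise to steady state. The data points, the rate constant
# and the assumed pool size are illustrative only.

import numpy as np
from scipy.optimize import curve_fit

def turnover(t, e_max, k):
    """Fractional enrichment approaching e_max with rate constant k (1/min)."""
    return e_max * (1.0 - np.exp(-k * t))

t_min = np.array([0, 10, 20, 40, 60, 90, 120], float)
enrichment = np.array([0.0, 0.12, 0.21, 0.32, 0.38, 0.42, 0.44])  # hypothetical

(e_max, k), _ = curve_fit(turnover, t_min, enrichment, p0=(0.5, 0.02))
pool_umol_per_g = 10.0  # assumed metabolite pool size
print(f"k = {k:.3f} /min, apparent flux ≈ {k * pool_umol_per_g:.2f} µmol/g/min")
```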
Abstract:
Combining economic accounting with life cycle assessment (LCA) has recently begun to attract interest across industries worldwide. Several LCA software packages include cost-accounting features, and individual projects have combined environmental and economic accounting methods. This project investigates the suitability of such combinations for the Finnish pulp and paper industry, as well as the addition of a cost-accounting feature to KCL's LCA software, KCL-ECO 3.0. All methods found during the study that combine LCA with economic accounting are presented in this work. Many of them use life cycle cost assessment (LCCA). In principle, the life cycle is defined differently in LCCA and in LCA, which creates challenges for combining the two methods. A suitable life cycle must be defined according to the goals of the calculation. The work presents a recommended method that starts from the special characteristics of the Finnish pulp and paper industry. A basic requirement is compatibility with the life cycle conventionally used in paper LCA. The integration of the method into KCL-ECO 3.0 is discussed in detail.
Abstract:
Healthcare accreditation models generally include indicators related to healthcare employees' perceptions (e.g. satisfaction, career development, and health safety). During the accreditation process, organizations are asked to demonstrate the methods with which assessments are made. However, none of the models provides a standardized system for the assessment of employees. In this study, we analyzed the psychometric properties of an instrument for the assessment of nurses' perceptions as indicators of human capital quality in healthcare organizations. The Human Capital Questionnaire was applied to a sample of 902 nurses in four European countries (Spain, Portugal, Poland, and the UK). Exploratory factor analysis identified six factors: satisfaction with leadership, identification and commitment, satisfaction with participation, staff well-being, career development opportunities, and motivation. The results showed the validity and reliability of the questionnaire, which, when applied to healthcare organizations, provides a better understanding of nurses' perceptions and is a parsimonious instrument for assessment and organizational accreditation. From a practical point of view, improving the quality of human capital by analyzing nurses' and other healthcare employees' perceptions is related to workforce empowerment.
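A minimal sketch of the exploratory-factor-analysis step, assuming a plain items-by-respondents matrix, might look as follows; the data are random placeholders rather than the actual questionnaire responses.

```python
# Minimal sketch of an exploratory factor analysis on questionnaire items,
# analogous to the six-factor solution reported above. The responses here are
# random placeholders; with real survey data the loading matrix shows which
# items group into factors such as "satisfaction with leadership".

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(902, 30)).astype(float)  # 902 nurses, 30 items

fa = FactorAnalysis(n_components=6, random_state=0)
scores = fa.fit_transform(responses)   # per-respondent factor scores
loadings = fa.components_.T            # item-by-factor loading matrix
print(loadings.shape)                  # (30, 6)
```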
Abstract:
Experimental models have demonstrated that therapeutic induction of CD8 T cell responses may offer protection against tumors or infectious diseases, provided that T cells have sufficiently high TCR/CD8:pMHC avidity for efficient Ag recognition and consequently strong immune functions. However, comprehensive characterization of TCR/CD8:pMHC avidity in clinically relevant situations has remained elusive. In this study, using the novel NTA-His tag-containing multimer technology, we quantified the TCR:pMHC dissociation rates (koff) of tumor-specific vaccine-induced CD8 T cell clones (n = 139) derived from seven melanoma patients vaccinated with IFA, CpG, and the native/EAA or analog/ELA Melan-A(MART-1)(26-35) peptide, which bind to MHC with low or high affinity, respectively. We observed substantial correlations between koff and Ca(2+) mobilization (p = 0.016) and target cell recognition (p < 0.0001), the latter independently of the T cell differentiation state. Our strategy successfully demonstrated that the type of peptide impacted TCR/CD8:pMHC avidity, as tumor-reactive T cell clones derived from patients vaccinated with the low-affinity (native) peptide expressed slower koff rates than those derived from patients vaccinated with the high-affinity (analog) peptide (p < 0.0001). Furthermore, we observed that the low-affinity peptide promoted the selective differentiation of tumor-specific T cells bearing TCRs with high TCR/CD8:pMHC avidity (p < 0.0001). Altogether, TCR:pMHC interaction kinetics correlated strongly with T cell functions. Our study demonstrates the feasibility and usefulness of assessing the TCR/CD8:pMHC avidity of naturally occurring polyclonal T cell responses with NTA-His tag-containing multimers, which represents a strong asset for the development of immunotherapy.
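If the multimer dissociation signal is assumed to decay mono-exponentially, koff can be estimated from a simple curve fit, as sketched below on invented data; this is only an illustration of the principle, not the NTA-His multimer analysis used in the study.

```python
# Hedged sketch: estimating a TCR:pMHC dissociation rate (koff) by fitting a
# mono-exponential decay to a normalized multimer signal measured after
# dissociation is triggered. The decay model and data points are illustrative.

import numpy as np
from scipy.optimize import curve_fit

def decay(t, f0, koff):
    return f0 * np.exp(-koff * t)

t_s = np.array([0, 30, 60, 120, 240, 480], float)          # seconds
signal = np.array([1.00, 0.86, 0.74, 0.55, 0.31, 0.10])    # hypothetical, normalized

(f0, koff), _ = curve_fit(decay, t_s, signal, p0=(1.0, 0.005))
print(f"koff ≈ {koff:.4f} 1/s, half-life ≈ {np.log(2) / koff:.0f} s")
```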
Abstract:
1. Species distribution models (SDMs) have become a standard tool in ecology and applied conservation biology. Modelling rare and threatened species is particularly important for conservation purposes. However, modelling rare species is difficult because the combination of few occurrences and many predictor variables easily leads to model overfitting. A new strategy using ensembles of small models was recently developed in an attempt to overcome this limitation of rare species modelling, but has so far been tested successfully for only a single species. Here, we aim to test the approach more comprehensively on a large number of species, including a transferability assessment. 2. For each species, numerous small (here bivariate) models were calibrated, evaluated and averaged to an ensemble weighted by AUC scores. These 'ensembles of small models' (ESMs) were compared to standard SDMs using three commonly used modelling techniques (GLM, GBM, Maxent) and their ensemble prediction. We tested 107 rare and under-sampled plant species of conservation concern in Switzerland. 3. We show that ESMs performed significantly better than standard SDMs. The rarer the species, the more pronounced the effects were. ESMs were also superior to standard SDMs and their ensemble when they were independently evaluated using a transferability assessment. 4. By averaging simple small models to an ensemble, ESMs avoid overfitting without losing explanatory power through reducing the number of predictor variables. They further improve the reliability of species distribution models, especially for rare species, and thus help to overcome limitations of modelling rare species.
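The ESM recipe summarized in point 2 can be illustrated with a small sketch: fit one bivariate model per predictor pair, weight each by its AUC, and average the predictions. Logistic regression stands in for the GLM/GBM/Maxent techniques, and the presence-absence data are synthetic.

```python
# Sketch of the 'ensembles of small models' (ESM) idea: fit one simple model
# per predictor pair, weight each by its AUC, and average the predictions.
# Logistic regression stands in for GLM/GBM/Maxent; data are synthetic.

from itertools import combinations
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))                          # 6 environmental predictors
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=300) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

weights, preds = [], []
for i, j in combinations(range(X.shape[1]), 2):        # every bivariate model
    model = LogisticRegression().fit(X_tr[:, [i, j]], y_tr)
    p = model.predict_proba(X_te[:, [i, j]])[:, 1]
    auc = roc_auc_score(y_te, p)
    if auc > 0.5:                                      # keep informative models only
        weights.append(auc)
        preds.append(p)

esm_prediction = np.average(np.column_stack(preds), axis=1, weights=weights)
print("ESM AUC:", round(roc_auc_score(y_te, esm_prediction), 3))
```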
Abstract:
Characterizing the geological features and structures in three dimensions over inaccessible rock cliffs is needed to assess natural hazards such as rockfalls and rockslides, and also to perform investigations aimed at mapping geological contacts and building stratigraphic and fold models. Indeed, detailed 3D data, such as LiDAR point clouds, make it possible to study accurately the hazard processes and the structure of geologic features, in particular in vertical and overhanging rock slopes. Thus, 3D geological models have great potential to be applied to a wide range of geological investigations both in research and in applied geology projects, such as mines, tunnels and reservoirs. Recent developments in ground-based remote sensing techniques (LiDAR, photogrammetry and multispectral/hyperspectral imaging) are revolutionizing the acquisition of morphological and geological information. As a consequence, there is great potential for improving the modeling of geological bodies as well as failure mechanisms and stability conditions by integrating detailed remote data. During the past ten years, several large rockfall events occurred along important transportation corridors where millions of people travel every year (Switzerland: Gotthard motorway and railway; Canada: Sea to Sky Highway between Vancouver and Whistler). These events show that there is still a lack of knowledge concerning the detection of potential rockfalls, leaving mountain settlements and roads at high risk. It is necessary to understand the main factors that destabilize rocky outcrops even if inventories are lacking and no clear morphological evidence of rockfall activity is observed. In order to increase the possibilities of forecasting potential future landslides, it is crucial to understand the evolution of rock slope stability. Defining the areas theoretically most prone to rockfalls can be particularly useful to simulate trajectory profiles and to generate hazard maps, which are the basis for land use planning in mountainous regions. The most important questions to address in order to assess rockfall hazard are: Where are the most probable sources for future rockfalls located? What are the frequencies of occurrence of these rockfalls? I characterized the fracturing patterns in the field and with LiDAR point clouds. Afterwards, I developed a model to compute the failure mechanisms on terrestrial point clouds in order to assess the susceptibility to rockfalls at the cliff scale. Similar procedures were already available to evaluate the susceptibility to rockfalls based on aerial digital elevation models. This new model makes it possible to detect the most susceptible rockfall sources with unprecedented detail in vertical and overhanging areas. The results of the computation of the most probable rockfall source areas in the granitic cliffs of Yosemite Valley and the Mont-Blanc massif were then compared to the inventoried rockfall events to validate the calculation methods. Yosemite Valley was chosen as a test area because it has particularly strong rockfall activity (about one rockfall every week), which leads to a high rockfall hazard. The west face of the Dru was also chosen for its significant rockfall activity and especially because it was affected by some of the largest rockfalls that occurred in the Alps during the last 10 years. Moreover, both areas were suitable because of their huge vertical and overhanging cliffs that are difficult to study with classical methods.
Limit equilibrium models have been applied to several case studies to evaluate the effects of different parameters on the stability of rock slope areas. The impact of the degradation of rock bridges on the stability of large compartments in the west face of the Dru was assessed using finite element modeling. In particular, I conducted a back-analysis of the large rockfall event of 2005 (265'000 m3) by integrating field observations of joint conditions, characteristics of the fracturing pattern and results of geomechanical tests on the intact rock. These analyses improved our understanding of the factors that influence the stability of rock compartments and were used to define the most probable future rockfall volumes at the Dru. Terrestrial laser scanning point clouds were also successfully employed to perform geological mapping in 3D, using the intensity of the backscattered signal. Another technique to obtain vertical geological maps is to combine a triangulated TLS mesh with 2D geological maps. At El Capitan (Yosemite Valley) we built a georeferenced vertical map of the main plutonic rocks that was used to investigate the reasons for the preferential rockwall retreat rate. Additional efforts to characterize the erosion rate were made at Monte Generoso (Ticino, southern Switzerland), where I attempted to improve the estimation of long-term erosion by also taking into account the volumes of the unstable rock compartments. Finally, the following points summarize the main outputs of my research: The new model to compute the failure mechanisms and the rockfall susceptibility with 3D point clouds makes it possible to define accurately the most probable rockfall source areas at the cliff scale. The analysis of the rock bridges at the Dru shows the potential of integrating detailed measurements of the fractures into geomechanical models of rock mass stability. The correction of the LiDAR intensity signal makes it possible to classify a point cloud according to the rock type and then use this information to model complex geologic structures. The integration of these results, on rock mass fracturing and composition, with existing methods can improve rockfall hazard assessments and enhance the interpretation of the evolution of steep rock slopes.
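As an illustration of how discontinuity orientations can be turned into a susceptibility flag, the sketch below implements a generic, textbook-style kinematic test for planar sliding; it is not the thesis's point-cloud model, and all orientations and the friction angle are example values.

```python
# Simplified sketch of a kinematic planar-sliding feasibility test (a
# Markland-type criterion), illustrating how discontinuity orientations can
# flag potential rockfall sources. Generic textbook test; example values only.

def planar_sliding_possible(joint_dip, joint_dipdir, slope_dip, slope_dipdir,
                            friction_angle=30.0, lateral_limit=20.0):
    """Return True if a joint plane can daylight and slide out of the slope."""
    daylighting = joint_dip < slope_dip
    steeper_than_friction = joint_dip > friction_angle
    delta = abs(joint_dipdir - slope_dipdir) % 360      # dip-direction difference
    aligned = min(delta, 360 - delta) <= lateral_limit
    return daylighting and steeper_than_friction and aligned

# Example: a 55°-dipping joint facing almost the same direction as a 75° cliff
print(planar_sliding_possible(55, 128, 75, 120))   # True
print(planar_sliding_possible(55, 200, 75, 120))   # False (not aligned)
```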
Abstract:
The indisputable evidence of climate change and its link to greenhouse gas emissions makes a change in the energy production infrastructure necessary during the coming decades. Through political conventions and restrictions, the energy industry is being pushed toward a larger share of renewable energy sources in its energy supply. In addition to climate change, a sustainable energy supply is another major issue for future development plans, but neither should come at an unbearable price. All power production types have environmental effects as well as strengths and weaknesses. Although each change comes with a price, the right track towards minimising environmental impacts and ensuring energy supply security can be found by combining all possible low-carbon technologies and by improving energy efficiency in all sectors, creating a new power production infrastructure with a tolerable energy price and minor environmental effects. GEMIS (Global Emission Model for Integrated Systems) is a life-cycle analysis program, which was used in this thesis to build indicative energy models for Finland's future energy supply. The results indicate that the energy supply must comprise both high-capacity nuclear power and a wide variety of renewable energy sources in order to minimise all environmental effects while keeping the energy price reasonable.
Abstract:
There is growing concern that flooding is becoming more frequent and severe in Europe. A better understanding of flood regime changes and their drivers is therefore needed. The paper reviews the current knowledge on flood regime changes in European rivers that has traditionally been obtained through two alternative research approaches. The first approach is the data-based detection of changes in observed flood events. Current methods are reviewed together with their challenges and opportunities, for example observation biases, the merging of different data sources, and accounting for nonlinear drivers and responses. The second approach consists of modelled scenarios of future floods. Challenges and opportunities associated with flood change scenarios are discussed, such as fully accounting for uncertainties in the modelling cascade and feedbacks. To make progress in flood change research, we suggest that a synthesis of these two approaches is needed. This can be achieved by focusing on long-duration records and on flood-rich and flood-poor periods rather than on short-duration flood trends only, by formally attributing causes of observed flood changes, by validating scenarios against observed flood regime dynamics, and by developing low-dimensional models of flood changes and feedbacks. The paper finishes with a call for a joint European flood change research network.
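As an example of the data-based detection approach, a common first step is a Mann-Kendall trend test on a series of annual flood peaks, sketched below on a synthetic series (the tie correction is omitted).

```python
# Illustrative sketch of data-based flood change detection: a Mann-Kendall
# trend test applied to annual flood peaks. The series is synthetic, and the
# statistic below ignores ties, which is acceptable for continuous discharge.

import numpy as np
from scipy.stats import norm

def mann_kendall(series):
    x = np.asarray(series, float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return z, 2 * (1 - norm.cdf(abs(z)))               # two-sided p-value

rng = np.random.default_rng(42)
peaks = 300 + 2.0 * np.arange(60) + rng.normal(0, 40, 60)   # m3/s, weak upward trend
z, p = mann_kendall(peaks)
print(f"Mann-Kendall Z = {z:.2f}, p = {p:.3f}")
```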