974 results for: hybrid zone polymorphism long
Abstract:
A new 3D implementation of a hybrid model based on the analogy with two-phase hydrodynamics has been developed for the simulation of liquids at the microscale. The idea of the method is to smoothly combine the atomistic description in the molecular dynamics zone with the Landau-Lifshitz fluctuating hydrodynamics representation in the rest of the system, within the framework of macroscopic conservation laws, through the use of a single user-defined "zoom-in" function s, which has the meaning of a partial concentration in the two-phase analogy model. Compared with our previous work, the implementation has been extended to full 3D simulations for a range of atomistic models in GROMACS, from argon to water, under equilibrium conditions with a constant or spatially variable function s. Preliminary results of simulating the diffusion of a small peptide in water are also reported.
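The coupling idea can be sketched in one dimension: a smooth weight s(x) close to 1 in the molecular dynamics core and close to 0 in the fluctuating-hydrodynamics region, blending the two descriptions of any conserved flux. This is a minimal illustration, not the paper's actual scheme; the tanh profile and its parameters are assumptions, one common smooth choice for such a blending function.

```python
import math

def s_blend(x, x0, w):
    """Smooth 'zoom-in' weight s(x): ~1 inside the MD core (|x| < x0),
    ~0 far outside. A tanh profile of width w is a common smooth choice
    (hypothetical here; the paper leaves s user-defined)."""
    return 0.5 * (1.0 - math.tanh((abs(x) - x0) / w))

def blended_flux(j_md, j_fh, s):
    """Couple the two descriptions through the partial-concentration weight:
    the hybrid flux is s * (MD flux) + (1 - s) * (hydrodynamic flux)."""
    return s * j_md + (1.0 - s) * j_fh
```

With x0 = 2 and w = 0.3, `s_blend(0.0, 2.0, 0.3)` is essentially 1 (pure MD) and `s_blend(10.0, 2.0, 0.3)` is essentially 0 (pure fluctuating hydrodynamics), so conserved quantities transition smoothly between the zones.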
Abstract:
A thulium-doped all-fiber laser, passively mode-locked by the combined action of nonlinear polarization evolution and single-walled carbon nanotubes and operating in the 1860-1980 nm wavelength band, is demonstrated. Pumped by a single-mode laser diode at 1.55 μm, the laser generates near-500-fs soliton pulses at repetition rates ranging from 6.3 to 72.5 MHz in the single-pulse operation regime. With a 3-m-long cavity, the average output power reached 300 mW, giving a peak power of 4.88 kW and a pulse energy of 2.93 nJ, with a slope efficiency higher than 30%. With a 21.6-m-long ring cavity, an average output power of 117 mW is obtained, corresponding to a pulse energy of up to 10.87 nJ and a pulse peak power of 21.7 kW, leading to higher-order soliton generation.
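The quoted pulse energies and peak powers follow from the average power, repetition rate, and pulse duration. A minimal sketch of the standard relations (the 0.88 deconvolution factor assumes an ideal sech² soliton shape and is not taken from the abstract):

```python
def pulse_energy_nJ(avg_power_W, rep_rate_Hz):
    """Pulse energy E = P_avg / f_rep, returned in nanojoules."""
    return avg_power_W / rep_rate_Hz * 1e9

def sech2_peak_power_kW(energy_nJ, fwhm_fs):
    """Peak power of a sech^2 pulse: P_pk ~ 0.88 * E / tau_FWHM,
    returned in kilowatts."""
    return 0.88 * (energy_nJ * 1e-9) / (fwhm_fs * 1e-15) / 1e3
```

For example, 100 mW at a 50 MHz repetition rate corresponds to 2 nJ per pulse; with a 500 fs sech² pulse that gives roughly 3.5 kW of peak power. The exact factor relating energy, duration, and peak power depends on the assumed pulse shape, which is why such estimates can differ slightly from reported values.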
Abstract:
Permeable reactive barriers (PRBs) are constructed from soil solid amendments to support the growth of bacteria capable of degrading organic contaminants. The objective of this study was to identify low-cost soil solid amendments that could retard the movement of trichloroethylene (TCE) while serving as long-lived carbon sources to foster its biodegradation in shallow groundwater through the use of a PRB. Natural amendments high in organic carbon, such as eucalyptus mulch, compost, wetland peat, and organic humus, were compared based on their geophysical characteristics, such as pHw, porosity, and total organic carbon (TOC), as well as their TCE sorption potentials. The pHw values were within the neutral range except for pine bark mulch and wetland peat. All other geophysical characteristics of the amendments showed suitability for use in a PRB. While the Freundlich model showed a better fit for compost and pine bark mulch, the linear sorption model was adequate for eucalyptus mulch, wetland peat, and Everglades muck within the concentration range studied (0.2-0.8 mg/L TCE). Based on these results, two composts and eucalyptus mulch were selected for laboratory column experiments to evaluate their effectiveness at creating and maintaining conditions suitable for anaerobic TCE dechlorination. The columns were monitored for pH, ORP, TCE degradation, and the longevity of nutrients and soluble TOC available to support TCE dechlorination. Native bacteria in the columns were able to convert TCE to DCEs; however, inoculation with a TCE-degrading culture greatly increased the rate of biodegradation. This caused a significant increase in by-product concentrations, mostly in the form of DCEs and VC, followed by slow degradation to ethylene. Of the tested amendments, eucalyptus mulch was the most effective at supporting TCE dechlorination.
The experimental results of sequential TCE dechlorination in the eucalyptus mulch and commercial Savannah River Site compost columns were then simulated using the Hydrus-1D model. The simulations showed a good fit with the experimental data. The results suggested that sorption and degradation were the dominant fate-and-transport mechanisms for TCE and DCEs in the columns, supporting the use of these amendments in a permeable reactive barrier to remediate TCE.
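The two sorption models compared above can be fitted with ordinary least squares: the linear isotherm q = Kd·C directly, and the Freundlich isotherm q = Kf·C^(1/n) as a straight line in log-log space. The sketch below uses synthetic, hypothetical data in the studied concentration range (0.2-0.8 mg/L TCE), not the study's measurements:

```python
import math

def fit_linear_kd(C, q):
    """Linear isotherm q = Kd * C: least-squares slope through the origin."""
    return sum(c * s for c, s in zip(C, q)) / sum(c * c for c in C)

def fit_freundlich(C, q):
    """Freundlich isotherm q = Kf * C**(1/n), fitted as a line in
    log-log space: log q = log Kf + (1/n) log C."""
    x = [math.log(c) for c in C]
    y = [math.log(s) for s in q]
    m = len(x)
    xbar, ybar = sum(x) / m, sum(y) / m
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    kf = math.exp(ybar - slope * xbar)
    return kf, 1.0 / slope  # Kf and the Freundlich exponent n

# Hypothetical isotherm data (mg/L vs mg/g), chosen for illustration only
C = [0.2, 0.4, 0.6, 0.8]
kd = fit_linear_kd(C, [0.5 * c for c in C])          # recovers Kd = 0.5
kf, n_f = fit_freundlich(C, [1.2 * c ** 0.5 for c in C])  # recovers Kf = 1.2, n = 2
```

Comparing the goodness of fit of the two models over the studied range is how one decides, as in the abstract, whether the linear form is adequate for a given amendment.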
Abstract:
Recent events such as Hurricane Sandy and Hurricane Katrina have demonstrated that local food supplies must last as long as possible. Current recommendations are to dispose of all refrigerated food four hours after power is lost. The purpose of this study was to determine whether it is possible to safely hold food longer than four hours without power. The results indicate that food can be held for up to six hours if the refrigerator door is not opened. If ice is added to the refrigerator, it will take the food approximately 10 hours to reach 5°C (41°F).
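The warming of an unpowered refrigerator toward room temperature is commonly approximated by Newton's law of cooling, T(t) = T_amb + (T0 − T_amb)·e^(−kt), which can be solved for the time to reach the 5°C safety threshold. This is a back-of-envelope sketch, not the study's model; the rate constant k below is a hypothetical value, not measured data:

```python
import math

def time_to_temp(T0, T_amb, T_target, k):
    """Newton's law of warming: T(t) = T_amb + (T0 - T_amb) * exp(-k*t).
    Returns the time at which T(t) = T_target (in hours if k is per hour)."""
    return -math.log((T_target - T_amb) / (T0 - T_amb)) / k

# Hypothetical scenario: fridge at 3 C, room at 22 C, safety limit 5 C.
# k = 0.0185 per hour is an assumed warming constant (closed door).
hours = time_to_temp(3.0, 22.0, 5.0, 0.0185)  # ~6 h, matching the closed-door finding
```

Adding ice effectively lowers the ambient-equivalent temperature seen by the food, which is why the time to reach 5°C stretches to roughly 10 hours in the study.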
Abstract:
The large intrinsic bandgap of NiO hinders its potential application as a photocatalyst under visible-light irradiation. In this study, we have performed first-principles screened-exchange hybrid density functional theory calculations with the HSE06 functional on N- and C-doped NiO to investigate the effect of doping on the electronic structure of NiO. C-doping at an oxygen site induces dopant-derived gap states whose positions suggest that the top of the valence band is made up primarily of C 2p-derived states with some Ni 3d contributions, while the lowest-energy empty state lies in the middle of the gap. This leads to an effective bandgap of 1.7 eV, which is of potential interest for photocatalytic applications. N-doping induces comparatively little dopant-Ni 3d interaction but results in similar positions of the dopant-induced states, i.e., the top of the valence band is made up of dopant 2p states and the lowest unoccupied state is the empty gap state derived from the dopant, leading to bandgap narrowing. With the hybrid density functional theory (DFT) results available, we discuss issues with the DFT+U (on-site Coulomb-corrected) description of these systems.
Abstract:
Long chain diols are lipids that have gained interest in recent years due to their high potential to serve as biomarkers, and diol indices have been proposed to reconstruct upwelling conditions and sea surface temperature (SST). However, little is known about the sources of the diols and the mechanisms affecting their distribution. Here we studied the factors controlling diol distributions in the Iberian Atlantic margin, which is characterized by a dynamic continental shelf under the influence of upwelling of nutrient-rich cold deep waters, and by fluvial input. We analyzed suspended particulate matter (SPM) of the Tagus river, marine SPM, and marine surface sediments along five transects off the Iberian margin, as well as riverbank sediments and soil from the catchment area of the Tagus river. Relatively high fractional abundances of the C32 1,15-diol (normalized with respect to the 1,13- and 1,15-diols) were observed in surface sediments in front of major river mouths, and this abundance correlates strongly with the BIT index, a tracer for continental input of organic carbon. Together with an even higher fractional abundance of the C32 1,15-diol in the Tagus river SPM, and the absence of long chain diols in the watershed riverbank sediments and soils, this suggests that this long chain diol is produced in situ in the river. Further support for this hypothesis comes from the small but distinct stable carbon isotopic difference of 1.3‰ with the marine C28 1,13-diol. The 1,14-diols are relatively abundant in surface sediments directly along the northern part of the coast, close to the upwelling zone, suggesting that Diol Indices based on 1,14-diols would work well as upwelling tracers in this region. Strikingly, we observed a significant difference in stable carbon isotopic composition between the monounsaturated C30:1 1,14-diol and the saturated C28 1,14-diol (3.8±0.7‰), suggesting different sources, in accordance with their different distributions.
In addition, the Long chain Diol Index (LDI), a proxy for sea surface temperature, was applied for the surface sediments. The results correlate well with satellite SSTs offshore but reveal a significant discrepancy with satellite-derived SSTs in front of the Tagus and Sado rivers. This suggests that river outflow might compromise the applicability of this proxy.
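The LDI proxy applied above is, as commonly cited from Rampen et al. (2012), the fractional abundance of the C30 1,15-diol relative to the 1,13- and 1,15-diols, converted to SST by a linear calibration. A minimal sketch (the calibration constants are the published ones, but confirm against the original source before use):

```python
def ldi(c28_13, c30_13, c30_15):
    """Long chain Diol Index: fractional abundance of the C30 1,15-diol
    relative to the C28 1,13-, C30 1,13-, and C30 1,15-diols."""
    return c30_15 / (c28_13 + c30_13 + c30_15)

def ldi_to_sst(ldi_value):
    """Linear LDI-SST calibration, LDI = 0.033 * SST + 0.095
    (Rampen et al., 2012), inverted to give SST in degrees C."""
    return (ldi_value - 0.095) / 0.033
```

For example, an LDI of 0.755 corresponds to an SST of about 20°C. A riverine overprint on the diol pool, as found in front of the Tagus and Sado rivers, shifts the measured fractional abundances and hence biases the reconstructed SST.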
Abstract:
Anthropogenic climate change is causing unprecedented rapid responses in marine communities, with species across many different taxonomic groups showing faster shifts in biogeographic ranges than in any other ecosystem. Spatial and temporal trends for many marine species are difficult to quantify, however, due to the lack of long-term datasets across complete geographical distributions and the occurrence of small-scale variability from both natural and anthropogenic drivers. Understanding these changes requires a multidisciplinary approach to bring together patterns identified within long-term datasets and the processes driving those patterns using biologically relevant mechanistic information to accurately attribute cause and effect. This must include likely future biological responses, and detection of the underlying mechanisms in order to scale up from the organismal level to determine how communities and ecosystems are likely to respond across a range of future climate change scenarios. Using this multidisciplinary approach will improve the use of robust science to inform the development of fit-for-purpose policy to effectively manage marine environments in this rapidly changing world.
Abstract:
Over the past few decades, work on infrared sensor applications has progressed considerably worldwide. A difficulty remains, however: objects are not always clear enough, or cannot always be easily distinguished, in the image obtained of the observed scene. Infrared image enhancement has played an important role in the development of infrared computer vision, image processing, non-destructive testing, and related technologies. This thesis addresses infrared image enhancement techniques in two respects: the processing of a single infrared image in the hybrid space-frequency domain, and the fusion of infrared and visible images using the nonsubsampled contourlet transform (NSCT). Image fusion can be seen as a continuation of single-infrared-image enhancement: it combines infrared and visible images into a single image that represents and enhances all the useful information and features of the source images, since a single image cannot contain all the relevant or available information, owing to the limitations of any single imaging sensor. We review the development of infrared image enhancement techniques, then focus on single-infrared-image enhancement and propose a hybrid-domain enhancement scheme with an improved fuzzy threshold evaluation method, which yields higher image quality and improves human visual perception. Infrared/visible image fusion techniques are built on accurate registration of the source images acquired by the different sensors.
The SURF-RANSAC algorithm is applied for registration throughout this research, yielding very accurately registered images and greater benefits for the fusion processing. For infrared/visible image fusion, a series of advanced and efficient approaches is proposed. A standard multi-channel NSCT-based fusion method is presented as a reference for the subsequent proposed fusion approaches. A joint fusion approach involving the Adaptive-Gaussian NSCT and the wavelet transform (WT) is proposed, which leads to fusion results better than those obtained with general non-adaptive methods. An NSCT-based fusion approach employing compressed sensing (CS) and total variation (TV) on sparsely sampled coefficients, with accurate reconstruction of the fused coefficients, is proposed; it achieves much better fusion results through pre-enhancement of the infrared image and by reducing redundant information in the fusion coefficients. Finally, an NSCT-based fusion procedure using a fast iterative-shrinking compressed sensing (FISCS) technique is proposed to compress the decomposed coefficients and reconstruct the fused coefficients during the fusion process, leading to better results obtained faster and more efficiently.
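Multiscale fusion methods of this kind share a common core rule once the images are decomposed: average (or otherwise combine) the low-frequency approximation coefficients, and keep the larger-magnitude detail coefficient at each position. The sketch below illustrates that generic rule on plain coefficient lists; it is a stand-in for the NSCT case, not the thesis's actual pipeline:

```python
def fuse_coefficients(low_a, low_b, high_a, high_b):
    """Generic multiscale fusion rule:
    - low-frequency (approximation) coefficients are averaged;
    - high-frequency (detail) coefficients take the larger magnitude,
      preserving the sharpest edge response from either source image."""
    low = [(x + y) / 2.0 for x, y in zip(low_a, low_b)]
    high = [x if abs(x) >= abs(y) else y for x, y in zip(high_a, high_b)]
    return low, high
```

In a full NSCT pipeline the same rule would be applied per scale and per direction subband before the inverse transform reconstructs the fused image; the thesis's contributions (adaptive weighting, CS/TV reconstruction, FISCS) replace or refine steps of this baseline.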
Abstract:
Detailed characterization of vast territories is a major challenge and is often limited by the available resources and time. This master's work is part of the ParaChute project, which concerns the development of a Québec rockfall hazard rating method (Méthode québécoise d'Évaluation du Danger des Chutes de Pierres, MEDCP) along linear infrastructure. To optimize the use of resources and time, a partially automated method facilitating the planning of field work has been developed. It relies mainly on 3D rockfall trajectory modeling to better target potentially problematic natural rock faces. Automation tools were developed so that the models can be run over vast territories. The sectors where the infrastructure is most likely to be reached by potential rockfalls are identified from the portions of the infrastructure most often crossed by the simulated trajectories. The method was applied along the railway of ArcelorMittal Infrastructures Canada. The study area begins about ten kilometres north of Port-Cartier (Québec) and extends 260 km northward to the Monts Groulx. The topography obtained from airborne LiDAR surveys is used to model the 3D trajectories with the Rockyfor3D software. This thesis presents an approach that facilitates the characterization of rockfalls along a linear corridor. Preliminary trajectory studies are carried out before the field work. The information drawn from these models makes it possible to target potentially problematic sectors and to eliminate those unlikely to generate rockfalls capable of reaching the elements at risk along the linear infrastructure.
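The screening criterion above, ranking portions of the infrastructure by how many simulated trajectories cross them, can be sketched as a simple tally. The representation of each trajectory as a list of crossed segment ids is hypothetical, for illustration only; the actual workflow extracts crossings from Rockyfor3D raster outputs:

```python
def crossing_counts(trajectories, n_segments):
    """Count, per infrastructure segment, how many simulated rockfall
    trajectories cross it. Each trajectory is given as a collection of
    crossed segment ids (hypothetical representation)."""
    counts = [0] * n_segments
    for path in trajectories:
        for seg in set(path):  # count each trajectory at most once per segment
            counts[seg] += 1
    return counts
```

Segments with the highest counts are the candidates for detailed field characterization; segments never crossed can be screened out before the field campaign.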
Abstract:
Incidental findings on low-dose CT images obtained during hybrid imaging are an increasing phenomenon as CT technology advances. Understanding the diagnostic value of incidental findings along with the technical limitations is important when reporting image results and recommending follow-up, which may result in an additional radiation dose from further diagnostic imaging and an increase in patient anxiety. This study assessed lesions incidentally detected on CT images acquired for attenuation correction on two SPECT/CT systems. Methods: An anthropomorphic chest phantom containing simulated lesions of varying size and density was imaged on an Infinia Hawkeye 4 and a Symbia T6 using the low-dose CT settings applied for attenuation correction acquisitions in myocardial perfusion imaging. Twenty-two interpreters assessed 46 images from each SPECT/CT system (15 normal images and 31 abnormal images; 41 lesions). Data were evaluated using a jackknife alternative free-response receiver-operating-characteristic analysis (JAFROC). Results: JAFROC analysis showed a significant difference (P < 0.0001) in lesion detection, with the figures of merit being 0.599 (95% confidence interval, 0.568, 0.631) and 0.810 (95% confidence interval, 0.781, 0.839) for the Infinia Hawkeye 4 and Symbia T6, respectively. Lesion detection on the Infinia Hawkeye 4 was generally limited to larger, higher-density lesions. The Symbia T6 allowed improved detection rates for midsized lesions and some lower-density lesions. However, interpreters struggled to detect small (5 mm) lesions on both image sets, irrespective of density. Conclusion: Lesion detection is more reliable on low-dose CT images from the Symbia T6 than from the Infinia Hawkeye 4. This phantom-based study gives an indication of potential lesion detection in the clinical context as shown by two commonly used SPECT/CT systems, which may assist the clinician in determining whether further diagnostic imaging is justified.
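The JAFROC figure of merit reported above is commonly defined (following Chakraborty) as the probability that a lesion's rating exceeds the highest false-positive rating on a normal image, with ties scoring one half. A minimal sketch of that definition, leaving out the jackknife resampling and significance testing of the full analysis:

```python
def jafroc_fom(lesion_ratings, normal_image_max_ratings):
    """JAFROC figure of merit: the probability that a lesion's rating
    exceeds the highest false-positive rating on a normal image
    (ties count 0.5). 1.0 = perfect discrimination, 0.5 = chance."""
    wins = 0.0
    for lesion in lesion_ratings:
        for noise in normal_image_max_ratings:
            if lesion > noise:
                wins += 1.0
            elif lesion == noise:
                wins += 0.5
    return wins / (len(lesion_ratings) * len(normal_image_max_ratings))
```

On this scale the reported figures of merit, 0.599 for the Infinia Hawkeye 4 versus 0.810 for the Symbia T6, quantify how much more reliably interpreters localized the simulated lesions on the second system.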
Abstract:
This work examines the role of prices in the trade-balance adjustment process of the CFA-zone countries and tests the validity of the Marshall-Lerner condition. We derived import and export functions from a dynamic optimization problem drawn from Reinhart (1994) and used cointegration tests. The study covers Cameroon, Côte d'Ivoire, Gabon, and Togo. The results suggest that there is no long-run relationship between prices, exports, and imports. The Marshall-Lerner condition proved invalid in the short run. A review of the assumptions underlying this condition highlighted the role of import elasticities in the prospects of success of such a devaluation.
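The condition being tested is the textbook one: starting from balanced trade, a devaluation improves the trade balance only if the price elasticities of export and import demand sum (in absolute value) to more than one. A minimal sketch:

```python
def marshall_lerner_holds(export_elasticity, import_elasticity):
    """Marshall-Lerner condition: assuming initially balanced trade,
    a devaluation improves the trade balance iff |e_x| + |e_m| > 1."""
    return abs(export_elasticity) + abs(import_elasticity) > 1.0
```

With short-run elasticities of, say, -0.3 and -0.4 the condition fails, which is consistent with the finding above that devaluation does not improve the trade balance in the short run for the countries studied.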
Abstract:
Background and Objectives: Schizophrenia is a severe chronic disease. Endpoint variables lack objectivity, and the diagnostic criteria have evolved over time. To guide the development of new drugs, the European Medicines Agency (EMA) issued a guideline on the clinical investigation of medicinal products for the treatment of schizophrenia. Methods: The authors reviewed and discussed the efficacy-trial part of the Guideline. Results: The Guideline divides clinical efficacy trials into short-term and long-term trials. The short-term three-arm trial is recommended to replace the short-term two-arm active-controlled non-inferiority trial because the latter has sensitivity issues. The Guideline ultimately makes that three-arm trial a superiority trial. The Guideline discusses four types of long-term trial designs. The randomized withdrawal trial design has some disadvantages. The long-term two-arm active-controlled non-inferiority trial is not recommended due to the sensitivity issue. Extension of the short-term trial is suitable only for extension of the short-term two-arm active-controlled superiority trial. The Guideline suggests that a hybrid design, with a randomized withdrawal trial incorporated into a long-term parallel trial, might be optimal. However, such a design has some disadvantages and might be too complex to carry out. The authors suggest instead a three-group long-term trial design, which could provide a comparison between the test drug and an active comparator along with a comparison between the test drug and placebo. This alternative could arguably be much easier to carry out than the hybrid design. Conclusions: The three-group long-term design merits further discussion and evaluation.
Abstract:
The analysis of steel and composite frames has traditionally been carried out by idealizing beam-to-column connections as either rigid or pinned. Although some advanced analysis methods have been proposed to account for semi-rigid connections, the performance of these methods strongly depends on the proper modeling of connection behavior. The primary challenge of modeling beam-to-column connections is their inelastic response and continuously varying stiffness, strength, and ductility. In this dissertation, two distinct approaches—mathematical models and informational models—are proposed to account for the complex hysteretic behavior of beam-to-column connections. The performance of the two approaches is examined and is then followed by a discussion of their merits and deficiencies. To capitalize on the merits of both mathematical and informational representations, a new approach, a hybrid modeling framework, is developed and demonstrated through modeling beam-to-column connections. Component-based modeling is a compromise spanning two extremes in the field of mathematical modeling: simplified global models and finite element models. In the component-based modeling of angle connections, the five critical components of excessive deformation are identified. Constitutive relationships of angles, column panel zones, and contact between angles and column flanges, are derived by using only material and geometric properties and theoretical mechanics considerations. Those of slip and bolt hole ovalization are simplified by empirically-suggested mathematical representation and expert opinions. A mathematical model is then assembled as a macro-element by combining rigid bars and springs that represent the constitutive relationship of components. Lastly, the moment-rotation curves of the mathematical models are compared with those of experimental tests. 
In the case of a top-and-seat angle connection with double web angles, a pinched hysteretic response is predicted quite well by complete mechanical models, which take advantage of only material and geometric properties. On the other hand, to exhibit the highly pinched behavior of a top-and-seat angle connection without web angles, a mathematical model requires components of slip and bolt hole ovalization, which are more amenable to informational modeling. An alternative method is informational modeling, which constitutes a fundamental shift from mathematical equations to data that contain the required information about underlying mechanics. The information is extracted from observed data and stored in neural networks. Two different training data sets, analytically-generated and experimental data, are tested to examine the performance of informational models. Both informational models show acceptable agreement with the moment-rotation curves of the experiments. Adding a degradation parameter improves the informational models when modeling highly pinched hysteretic behavior. However, informational models cannot represent the contribution of individual components and therefore do not provide an insight into the underlying mechanics of components. In this study, a new hybrid modeling framework is proposed. In the hybrid framework, a conventional mathematical model is complemented by the informational methods. The basic premise of the proposed hybrid methodology is that not all features of system response are amenable to mathematical modeling, hence considering informational alternatives. This may be because (i) the underlying theory is not available or not sufficiently developed, or (ii) the existing theory is too complex and therefore not suitable for modeling within building frame analysis. The role of informational methods is to model aspects that the mathematical model leaves out. 
Autoprogressive algorithm and self-learning simulation extract the missing aspects from a system response. In a hybrid framework, experimental data is an integral part of modeling, rather than being used strictly for validation processes. The potential of the hybrid methodology is illustrated through modeling complex hysteretic behavior of beam-to-column connections. Mechanics-based components of deformation such as angles, flange-plates, and column panel zone, are idealized to a mathematical model by using a complete mechanical approach. Although the mathematical model represents envelope curves in terms of initial stiffness and yielding strength, it is not capable of capturing the pinching effects. Pinching is caused mainly by separation between angles and column flanges as well as slip between angles/flange-plates and beam flanges. These components of deformation are suitable for informational modeling. Finally, the moment-rotation curves of the hybrid models are validated with those of the experimental tests. The comparison shows that the hybrid models are capable of representing the highly pinched hysteretic behavior of beam-to-column connections. In addition, the developed hybrid model is successfully used to predict the behavior of a newly-designed connection.
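The component-based idea running through this work, individual deformation components (angles, panel zone, slip, bolt hole ovalization) modeled as springs and assembled into a macro-element, can be sketched for the linear initial-stiffness case, where springs in series add rotations at a given moment. This is an illustration of the assembly principle only, not the dissertation's nonlinear hysteretic model:

```python
def series_rotation(moment, stiffnesses):
    """Component-based macro-element, linear range: components act as
    rotational springs in series, so their rotations add at a given moment."""
    return sum(moment / k for k in stiffnesses)

def effective_stiffness(stiffnesses):
    """Initial rotational stiffness of the assembled connection:
    the series combination 1/K = sum(1/k_i)."""
    return 1.0 / sum(1.0 / k for k in stiffnesses)
```

The softest component dominates the assembled stiffness, which is why identifying the critical components of excessive deformation matters: in the hybrid framework above, the components with well-understood mechanics keep their mathematical spring laws, while poorly understood ones (slip, ovalization) are replaced by informationally trained counterparts inside the same assembly.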
Abstract:
Latin America is characterized as the region with the worst distribution of wealth, and Mexico is no exception. Although the past decade brought economic stabilization and trade liberalization, the gap between rich and poor continues to grow. For some experts, the main cause of this situation lies in the effects of globalization. While these contribute to destabilizing local economies, other factors present in Mexico threaten the sustainable development of Mexican communities just as much: fragile democracies, weak financial institutions, histories of corruption and drug trafficking, social exclusion, and environmental degradation. Several socioeconomic development programs have been put in place by successive Mexican governments. Whether health and education programs, food and agriculture programs, or infrastructure construction, they essentially aim to reduce poverty in rural areas. Social problems in urban areas are not among the current priorities on the federal government's political agenda. Urban communities must therefore turn to other means to ensure their development, and microfinance is one solution that has long proven itself in mobilizing disadvantaged populations. It enables populations excluded from traditional financial systems to gain greater control over their future through self-employment and the endogenous development of their community. It thus introduces a dynamic of autonomy and aims at long-term economic and social change. One of the greatest mistakes, however, is to claim that microfinance is the remedy for every ill.

The needs of less well-off urban populations are not limited to financing. The poor also need decent housing, drinking water, electricity, health care, schools, and infrastructure, all of which every human being is entitled to. Moreover, sustainable development is not only a matter of solving poverty; it concerns all city dwellers. Quality of life also means available jobs, neighbourhood revitalization, green spaces, and sports and cultural centres, to name only a few. In the absence of political will or means in this regard, can a savings and credit cooperative be a lever of local development for an urban Mexican community? That is the question I have been examining in recent months by analyzing the socioeconomic context of the city of Querétaro, Mexico. To do so, I first carried out an intervention in a large savings and credit cooperative, and then documented my work through formal and informal interviews, observations, various conferences, and the local and international literature. After presenting, in the first chapter, the socio-political-economic context of Mexico and in particular of the municipality of Querétaro, I describe, in Chapter 2, the various problems city dwellers face daily. Chapter 3 is devoted to the environment and resources of Mexican savings and credit cooperatives: their importance, principles, legislation, strengths and weaknesses, threats, and opportunities. The next chapter defines local development in urban areas: its principles, the process that accompanies it, the actors involved, and its purpose.

Finally, Chapter 5 brings us to the heart of the matter: assessing whether the savings and credit cooperative has the potential to be a major actor in local development in Mexican urban areas.