835 results for critically important antimicrobials

Relevance: 100.00%

Abstract:

This thesis presents phenotypic AMR evaluation and whole-genome sequencing analysis of 288 Escherichia coli strains isolated from different sources (livestock, companion animals, wildlife, food and humans) in Italy. Our data reflect general resistance trends in Europe, with resistance to tetracycline, ampicillin, sulfisoxazole and aminoglycosides being the most common phenotypic AMR profile among livestock, pets, wildlife and humans. The identification of human and animal (livestock and companion animal) AMR profiles in niches with rare (fishery, mollusc) or no (vegetable, wild animal, wild boar) direct exposure to antimicrobials suggests widespread environmental pollution with ARGs conferring resistance to these antimicrobials. Phenotypic resistance to highest-priority critically important antimicrobials was mainly observed in food-producing animals and related food, such as rabbit, poultry, beef and swine. Discrepancies between the phenotypic AMR pattern and the genetic profile were observed: in several cases, phenotypic aminoglycoside, cephalosporin, meropenem and colistin resistance and ESBL profiles had no genetic explanation. These data could suggest the diffusion of new genetic variants of ARGs associated with these antimicrobial classes. Overall, our collection shows a virulence profile typical of the extraintestinal pathogenic Escherichia coli (ExPEC) pathotype. Different pandemic and emerging ExPEC lineages were identified, particularly in poultry meat (ST10, ST23, ST69, ST117, ST131). Rabbit was suggested as a source of potential ST20-ST40 hybrid pathogens. Wildlife carried a high average number (10) of VAGs (mostly associated with the ExPEC pathotype) and different predominant ExPEC lineages (ST23, ST117, ST648), suggesting its possible involvement in the maintenance and diffusion of virulence determinants. In conclusion, our study provides important knowledge about the phenotypic/genetic AMR and virulence profiles circulating in E. coli in Italy. The role of different niches in AMR dynamics is discussed; in particular, food-producing animals merit continued investigation as a source of potential zoonotic pathogens, while wildlife might contribute to the spread of VAGs.
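
The abstract does not describe how phenotype-genotype discrepancies were tallied; as a rough illustration only, the Python sketch below flags isolates that are phenotypically resistant to a class with no known ARG detected. The column names and the four-row toy table are invented placeholders, not data from the thesis.

```python
import pandas as pd

# Illustrative layout: one row per isolate-class pair, with boolean
# phenotypic resistance and presence/absence of a known ARG.
isolates = pd.DataFrame({
    "class":     ["aminoglycoside", "cephalosporin", "colistin", "colistin"],
    "phenotype": [True, True, True, False],   # resistant in vitro?
    "genotype":  [True, False, False, False], # known ARG detected?
})

# Discrepant cases: phenotypic resistance with no known ARG; these are
# the cases that hint at undetected or novel resistance determinants.
discrepant = isolates[isolates["phenotype"] & ~isolates["genotype"]]
rate = discrepant.groupby("class").size() / isolates.groupby("class").size()
print(rate.fillna(0))
```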

Relevance: 80.00%

Abstract:

OBJECTIVE: The aim of this article is to describe the anatomy of the cavernous sinus and to provide a guide for use when performing surgery in this complex area. Clinical cases are used to illustrate routes to the cavernous sinus and its contents and to demonstrate how the cavernous sinus can be used as a pathway for exposure of deeper structures. METHODS: Thirty cadaveric cavernous sinuses were examined using ×3 to ×40 magnification after the arteries and veins were injected with colored silicone. Distances between the entrance of the oculomotor and trochlear nerves and the posterior clinoid process were recorded. Stepwise dissections of the cavernous sinuses, performed to demonstrate the intradural and extradural routes, are accompanied by intraoperative photographs of those approaches. RESULTS: The anatomy of the cavernous sinus is complex because of the high density of critically important neural and vascular structures. Selected cases demonstrate how a detailed knowledge of cavernous sinus anatomy can provide for safer surgery with low morbidity. CONCLUSION: A precise understanding of the bony relationships and neurovascular contents of the cavernous sinus, together with the use of cranial base and microsurgical techniques, has allowed neurosurgeons to approach the cavernous sinus with reduced morbidity and mortality, changing the natural history of selected lesions in this region. Complete resection of cavernous sinus meningiomas has proven to be difficult and, in many cases, impossible without causing significant morbidity. However, surgical reduction of such lesions enhances the chances for success of subsequent therapy.

Relevance: 80.00%

Abstract:

Interactions between Eph receptors and their ligands, the ephrin proteins, are critically important in many key developmental processes. Emerging evidence also supports a role for these molecules in postembryonic tissues, particularly in pathological processes including tissue injury and tumor metastasis. We review the signaling mechanisms that allow the 14 Eph and nine ephrin proteins to deliver intracellular signals that regulate cell shape and movement. What emerges is that the initiation of these signals depends critically on which Eph and ephrin proteins are expressed, on their expression levels, and, in some cases, on which splice variants are expressed. Diversity at the level of the initial interaction, and in the downstream signaling processes regulated by Eph-ephrin signaling, provides a subtle and versatile system for regulating intercellular adhesion, cell shape, and cell motility.

Relevance: 80.00%

Abstract:

Ensuring sustainable development conditions is now widely recognized worldwide as a critically important goal, which makes electricity generation technologies based on renewable energy sources highly relevant. Developing countries depend on an adequate availability of electrical energy to ensure economic progress and are usually characterized by rapid growth in electricity consumption. This makes sustainable development a huge challenge, but it can also be taken as an opportunity, especially for countries that lack fossil resources. This paper presents a study concerning the expansion of an existing wind farm located in Praia, the capital of the Republic of Cape Verde. The paper includes results from simulation studies undertaken using the PSCAD software, together with some economic considerations.

Relevance: 80.00%

Abstract:

Significant research efforts are being devoted to Body Area Networks (BANs) due to their potential for revolutionizing healthcare practices. Energy efficiency and communication reliability are critically important for these networks. In an experimental study with three different mote platforms, we show that changes in human body shadowing, as well as changes in the relative distance and orientation of nodes caused by common human body movements, can result in significant fluctuations in the received signal strength within a BAN. Furthermore, regular movements, such as walking, typically manifest as approximately periodic variations in signal strength. We present an algorithm that predicts the signal strength peaks and evaluate it on real-world data. We also present the design of an opportunistic MAC protocol, named BANMAC, that takes advantage of the periodic fluctuations of the signal strength to achieve high reliability even with low transmission power.
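
The abstract does not detail the peak-prediction algorithm; as a minimal sketch of the underlying idea (not BANMAC itself), the Python snippet below estimates the dominant period of a quasi-periodic RSSI trace via autocorrelation and extrapolates the time of the next peak. The function name, sampling rate, and sinusoid-plus-noise signal model are illustrative assumptions.

```python
import numpy as np

def predict_next_peak(rssi, fs):
    """Estimate the dominant period of an approximately periodic RSSI
    trace via autocorrelation and predict the time of the next peak.

    rssi : 1-D array of received-signal-strength samples
    fs   : sampling rate in Hz
    Returns (period_s, next_peak_s) in seconds from the trace start.
    """
    x = rssi - np.mean(rssi)
    # Autocorrelation for non-negative lags; the first strong
    # non-zero-lag maximum gives the movement period.
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    min_lag = int(0.25 * fs)  # skip the zero-lag peak (< 0.25 s)
    lag = min_lag + int(np.argmax(ac[min_lag:]))
    period = lag / fs

    # Phase: time of the last observed peak within the final period.
    tail = rssi[-lag:]
    last_peak = (len(rssi) - lag + int(np.argmax(tail))) / fs
    return period, last_peak + period

# Example: noisy 1 Hz gait-induced RSSI fluctuation sampled at 50 Hz.
fs = 50.0
t = np.arange(0, 10, 1 / fs)
rssi = -60 + 6 * np.sin(2 * np.pi * 1.0 * t) + np.random.randn(t.size)
period, nxt = predict_next_peak(rssi, fs)
print(f"estimated period {period:.2f} s, next peak near t = {nxt:.2f} s")
```

Scheduling transmissions near the predicted peaks is what lets an opportunistic MAC trade a small wait for a much better channel at the same transmit power.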

Relevance: 80.00%

Abstract:

Dissertation to obtain the degree of Doctor of Philosophy in Biomedical Engineering

Relevance: 80.00%

Abstract:

Estuaries and other transitional waters are complex ecosystems that are critically important as nursery and shelter areas for organisms. Humans also depend on estuaries for multiple socio-economic activities such as urbanism, tourism, heavy industry (taking advantage of shipping), fisheries and aquaculture, whose development has exerted strong historical pressures, most notably pollution. The degradation of estuarine environmental quality entails ecological, economic and social harm, hence the importance of evaluating environmental quality through the identification of stressors and impacts. The Sado Estuary (SW Portugal) has the characteristics of an industrialized estuary, which results in multiple adverse impacts; still, it has recently been considered only moderately contaminated. Many studies have been conducted in the past few years, albeit scattered ones, owing to the absence of true biomonitoring programmes, so the information needs to be integrated in order to obtain a holistic perspective of the area able to assist management and decision-making. To this end, a geographical information system (GIS) was created based on sediment contamination and biomarker data collected from a decade-long series of publications. Four impacted areas and a reference area were identified, characterized by distinct sediment contamination patterns related to different hot spots and diffuse sources of toxicants. The potential risk of sediment-bound toxicants was determined by contrasting the levels of pollutants with available sediment quality guidelines, followed by their integration through the Sediment Quality Guideline Quotient (SQG-Q). The SQG-Q estimates per toxicant or class were then georeferenced and subjected to statistical analyses across the five distinct areas and the seasons. Biomarker responses were integrated through the Biomarker Consistency Index and likewise georeferenced through the GIS. Overall, despite the multiple biological traits surveyed, the biomarker data (from several organisms) are consistent with the sediment contamination. The most impacted areas were the shipyard area and the adjacent industrial belt, followed by urban and agricultural grounds. It is evident that the estuary, although globally moderately impacted, is very heterogeneous and affected by a cocktail of contaminants, especially metals and polycyclic aromatic hydrocarbons. Although some elements (such as copper, zinc and even arsenic) may originate from the geology of the hydrographic basin of the Sado River, the majority of the remaining contaminants result from human activities. The present work revealed that the estuary should be divided into distinct biogeographic units in order to implement effective measures to safeguard environmental quality.
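
The abstract does not reproduce the SQG-Q formula; in its commonly used form it is the mean of the probable-effects-level (PEL) quotients across the toxicants measured in a sample. The Python sketch below is a minimal illustration under that assumption; the guideline values and the sample concentrations are illustrative placeholders, not data from the Sado study.

```python
# Illustrative PEL values (mg/kg dry weight); actual guideline values
# should be taken from the sediment-quality-guideline literature.
PEL = {"Cu": 108.0, "Zn": 271.0, "As": 41.6, "PAHs": 16.8}

def sqg_q(concentrations, pel=PEL):
    """Mean PEL quotient across the toxicants measured in one sample."""
    quotients = [concentrations[k] / pel[k] for k in concentrations]
    return sum(quotients) / len(quotients)

sample = {"Cu": 55.0, "Zn": 300.0, "As": 12.0}  # made-up sediment sample
q = sqg_q(sample)
# Commonly cited classification (hedged): < 0.1 unimpacted,
# 0.1-1 moderately impacted, > 1 highly impacted.
print(f"SQG-Q = {q:.2f}")
```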

Relevance: 80.00%

Abstract:

Integrated master's dissertation in Biomedical Engineering (specialization in Clinical Engineering)

Relevance: 80.00%

Abstract:

Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches beyond the local scale still represents a major challenge, yet it is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive, low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is capable of adequately capturing both the small-scale heterogeneity and the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges.

In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proves to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
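
The abstract does not give the perturbation formula. The classical gradual-deformation move, of which the thesis develops a variant, combines the current model with an independent prior realization so that Gaussian prior statistics are preserved; the deformation angle controls the perturbation strength. The Python sketch below is a minimal illustration with assumed names and an uncorrelated prior, not the thesis implementation.

```python
import numpy as np

def gradual_deformation_proposal(m_current, sample_prior, theta):
    """Propose a new model by gradually deforming the current one:
    m_new = m_current * cos(theta) + m_indep * sin(theta).
    Since cos^2 + sin^2 = 1, this combination preserves the mean and
    covariance of a zero-mean Gaussian prior, so the MCMC proposal
    stays prior-consistent. Small theta -> small perturbation.
    """
    m_indep = sample_prior()  # independent realization from the prior
    return m_current * np.cos(theta) + m_indep * np.sin(theta)

# Example with an uncorrelated standard-normal prior on a 2-D grid; in
# practice sample_prior() would draw from a geostatistical model (e.g.,
# sequential Gaussian simulation honoring a variogram).
rng = np.random.default_rng(0)
prior = lambda: rng.standard_normal((64, 64))
m = prior()
m_new = gradual_deformation_proposal(m, prior, theta=0.1)  # gentle move
```

Tuning theta trades acceptance rate against step size, which is what makes the scheme flexible with respect to perturbation strength.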

Relevance: 80.00%

Abstract:

Executive Summary

The first essay of this dissertation investigates whether greater exchange rate uncertainty (i.e., variation over time in the exchange rate) fosters or depresses the foreign investment of multinational firms. In addition to the direct capital financing it supplies, foreign investment can be a source of valuable technology and know-how, which can have substantial positive effects on a host country's economic growth. Thus, it is critically important for policy makers and central bankers, among others, to understand how multinationals base their investment decisions on the characteristics of foreign exchange markets. In this essay, I first develop a theoretical framework to improve our knowledge regarding how the aggregate level of foreign investment responds to exchange rate uncertainty when an economy consists of many firms, each of which makes its own investment decision. The analysis predicts a U-shaped effect of exchange rate uncertainty on the total level of foreign investment of the economy. That is, the effect is negative for low levels of uncertainty and positive for higher levels of uncertainty. This pattern emerges because the relationship between exchange rate volatility and the probability of investment is negative for firms with low productivity at home (i.e., firms that find it profitable to invest abroad) and positive for firms with high productivity at home (i.e., firms that prefer exporting their product). This finding stands in sharp contrast to predictions in the existing literature that consider a single firm's decision to invest in a unique project. The main contribution of this research is to show that aggregation over many firms produces a U-shaped pattern between exchange rate uncertainty and the probability of investment. Using data from industrialized countries for the period 1982-2002, this essay offers a comprehensive empirical analysis that provides evidence in support of the theoretical prediction. In the second essay, I aim to explain the time variation in sovereign credit risk, which captures the risk that a government may be unable to repay its debt. The importance of correctly evaluating such a risk is illustrated by the central role of sovereign debt in previous international lending crises. In addition, sovereign debt is the largest asset class in emerging markets. In this essay, I provide a pricing formula for the evaluation of sovereign credit risk in which the decision to default on sovereign debt is made by the government. The pricing formula explains the variation across time in daily credit spreads - a widely used measure of credit risk - to a degree not offered by existing theoretical and empirical models. I use information on a country's stock market to compute the prevailing sovereign credit spread in that country. The pricing formula explains a substantial fraction of the time variation in daily credit spread changes for Brazil, Mexico, Peru, and Russia for the 1998-2008 period, particularly during the recent subprime crisis. I also show that when a government's incentive to default is allowed to depend on current economic conditions, one can best explain the level of credit spreads, especially during the recent period of financial distress. In the third essay, I show that the risk of sovereign default abroad can produce adverse consequences for the U.S. equity market through a decrease in returns and an increase in volatility.
The risk of sovereign default, which is no longer limited to emerging economies, has recently become a major concern for financial markets. While sovereign debt plays an increasing role in today's financial environment, the effects of sovereign credit risk on the U.S. financial markets have been largely ignored in the literature. In this essay, I develop a theoretical framework that explores how the risk of sovereign default abroad helps explain the level and the volatility of U.S. equity returns. The intuition for this effect is that negative economic shocks deteriorate the fiscal situation of foreign governments, thereby increasing the risk of a sovereign default that would trigger a local contraction in economic growth. The increased risk of an economic slowdown abroad amplifies the direct effect of these shocks on the level and the volatility of equity returns in the U.S. through two channels. The first channel involves a decrease in the future earnings of U.S. exporters resulting from unfavorable adjustments to the exchange rate. The second channel involves investors' incentives to rebalance their portfolios toward safer assets, which depresses U.S. equity prices. An empirical estimation of the model with monthly data for the 1994-2008 period provides evidence that the risk of sovereign default abroad generates a strong leverage effect during economic downturns, which helps to substantially explain the level and the volatility of U.S. equity returns.

Relevance: 80.00%

Abstract:

The aim of the present report is to outline, in concise form, the changes in vascular structure that accompany hypertension. Consideration will be given to their potential contribution to hypertensive end-organ damage. In so doing, it is important to consider both the macrovascular and microvascular levels, because interactions between them are presently believed to be critically important. The links between hypertension and the pathogenesis of arteriosclerosis fall outside the scope of this short review.

Relevance: 80.00%

Abstract:

For a wide range of environmental, hydrological, and engineering applications, there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based rather than ray-based approaches is approximately one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While waveform tomographic imaging has become well established in exploration seismology over the past two decades, it remains comparatively underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments. To this end, the general motivation of my thesis is the evaluation of the robustness and limitations of waveform inversion algorithms for crosshole georadar data, in order to apply such schemes to a wide range of real-world problems.

One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in practice. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem. Therefore, accurate knowledge of the source wavelet is critically important for the successful application of such schemes, and relatively small differences in the estimated source-wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source-wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and the electrical conductivity, as well as significant ambient noise in the recorded data.
Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when directly incorporating the wavelet estimation into the inverse problem.

Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters. This is crucial because, in reality, these parameters are known to be frequency-dependent and complex, and thus recorded georadar data may show significant dispersive behaviour. In particular, in the presence of water, a wide body of evidence shows that the dielectric permittivity can be significantly frequency-dependent over the GPR frequency range, due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to evaluating the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source-wavelet estimation procedure, which is partially able to account for the frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and is able to provide adequate tomographic reconstructions.
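
The abstract does not spell out the deconvolution step. A common stabilized frequency-domain form of such a source-wavelet update is sketched below, under the assumption that a trace simulated with the current model serves as a proxy for the earth's impulse response; the function and parameter names are illustrative, not the thesis's actual implementation.

```python
import numpy as np

def estimate_wavelet(observed, simulated, water_level=1e-2):
    """One deconvolution-based source-wavelet update.

    observed  : recorded trace d(t) (1-D array)
    simulated : trace g(t) simulated with the current model, used here
                as a proxy for the earth's impulse response
    Solves D(f) ~ W(f) G(f) for the wavelet W in a least-squares sense,
    stabilized with a water level on |G(f)|^2 to avoid division by
    near-zero spectral values.
    """
    D = np.fft.rfft(observed)
    G = np.fft.rfft(simulated)
    eps = water_level * np.max(np.abs(G) ** 2)
    W = D * np.conj(G) / (np.abs(G) ** 2 + eps)
    return np.fft.irfft(W, n=len(observed))
```

In an iterative scheme of this kind, the update would typically be averaged over all source-receiver pairs and alternated with forward simulations until the wavelet estimate stabilizes.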

Relevance: 80.00%

Abstract:

Changes in vascular structure that accompany hypertension may contribute to hypertensive end-organ damage. Both the macrovascular and microvascular levels should be considered, as interactions between them are believed to be critically important. Regarding the macrocirculation, the article first reviews basic concepts of vascular biomechanics, such as arterial compliance, arterial distensibility, and stress-strain relationships of arterial wall material, and then reviews how hypertension affects the properties of conduit arteries, particularly examining evidence that it accelerates the progressive stiffening that normally occurs with advancing age. High arterial stiffness may increase central systolic and pulse pressure by two different mechanisms: 1) Abnormally high pulse wave velocity may cause pressure waves reflected in the periphery to reach the central aorta in systole, thus augmenting systolic pressure; 2) In the elderly, the interaction of the forward pressure wave with high arterial stiffness is mostly responsible for abnormally high pulse pressure. At the microvascular level, hypertensive disease is characterized by inward eutrophic or hypertrophic arteriolar remodeling and capillary rarefaction. These abnormalities may depend in part on the abnormal transmission of highly pulsatile blood pressure into microvascular networks, especially in highly perfused organs with low vascular resistance, such as the kidney, heart, and brain, where it contributes to hypertensive end-organ damage.
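
As a back-of-the-envelope illustration of why wall stiffening shifts the reflected wave into systole, the hedged Python sketch below combines the Moens-Korteweg relation for pulse wave velocity with the round-trip travel time from an assumed effective reflection site. All parameter values are illustrative order-of-magnitude numbers, not data from the article.

```python
import math

def moens_korteweg_pwv(E, h, r, rho=1050.0):
    """Moens-Korteweg estimate of pulse wave velocity (m/s):
    PWV = sqrt(E * h / (2 * rho * r))
    E: wall elastic modulus (Pa), h: wall thickness (m),
    r: inner radius (m), rho: blood density (kg/m^3)."""
    return math.sqrt(E * h / (2 * rho * r))

# Illustrative (not patient-specific) wall properties:
compliant = moens_korteweg_pwv(E=0.3e6, h=2e-3, r=12e-3)  # ~ young aorta
stiff     = moens_korteweg_pwv(E=3.0e6, h=2e-3, r=12e-3)  # ~ stiff aorta

# Round-trip time of a wave reflected from an effective site ~0.75 m
# away; if it is shorter than systolic ejection (~0.3 s), the reflected
# wave lands in systole and augments central systolic pressure.
for name, pwv in [("compliant", compliant), ("stiff", stiff)]:
    t_return = 2 * 0.75 / pwv
    print(f"{name}: PWV = {pwv:.1f} m/s, reflection returns after {t_return:.2f} s")
```

With these numbers the compliant wall returns the reflection after roughly 0.3 s (diastole), whereas the stiff wall returns it within about 0.1 s (early systole), matching the mechanism described above.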