Abstract:
This thesis covers sampling and analytical procedures for isocyanates (R-NCO) and amines (R-NH2), two kinds of chemicals frequently used in association with the polymeric material polyurethane (PUR). Exposure to isocyanates may result in respiratory disorders and dermal sensitisation, and they are one of the main causes of occupational asthma. Several of the aromatic diamines associated with PUR production are classified as suspected carcinogens. Hence, the presence of these chemicals in different exposure situations must be monitored. For the determination of isocyanates in air, the methodologies included derivatisation with the reagent di-n-butylamine (DBA) upon collection and subsequent determination using liquid chromatography (LC) with mass spectrometric (MS) detection. A user-friendly solvent-free sampler for the collection of airborne isocyanates was developed as an alternative to the more cumbersome impinger-filter sampling technique. The combination of the DBA reagent with MS detection revealed several new exposure situations for isocyanates, such as isocyanic acid during thermal degradation of PUR and urea-based resins. Further, a method for characterising isocyanates in technical products used in PUR production was developed. This enabled the determination of isocyanates in air for which pure analytical standards are missing. Tandem MS (MS/MS) determination of isocyanates in air below 10^-6 of the threshold limit values was achieved. For the determination of amines, the analytical methods included derivatisation into pentafluoropropionic amide or ethyl carbamate ester derivatives and subsequent MS analysis. Several amines in biological fluids, as markers of exposure to either the amines themselves or the corresponding isocyanates, were determined by LC-MS/MS at the amol level. In aqueous extraction solutions of flexible PUR foam products, toluene diamine and related compounds were found.
In conclusion, this thesis demonstrates the usefulness of well characterised analytical procedures and techniques for determination of hazardous compounds. Without reliable and robust methodologies there is a risk that exposure levels will be underestimated or, even worse, that relevant compounds will be completely missed.
Abstract:
Thanks to the Chandra and XMM-Newton surveys, the hard X-ray sky is now probed down to a flux limit where the bulk of the X-ray background is almost completely resolved into discrete sources, at least in the 2-8 keV band. Extensive programs of multiwavelength follow-up observations showed that the large majority of hard X-ray selected sources are identified with Active Galactic Nuclei (AGN) spanning a broad range of redshifts, luminosities and optical properties. A sizable fraction of relatively luminous X-ray sources hosting an active, presumably obscured, nucleus would not have been easily recognized as such on the basis of optical observations, because they are characterized by “peculiar” optical properties. In my PhD thesis, I focus on the nature of two classes of hard X-ray selected “elusive” sources: those characterized by high X-ray-to-optical flux ratios and red optical-to-near-infrared colors, a fraction of which are associated with Type 2 quasars, and the X-ray bright optically normal galaxies, also known as XBONGs. In order to characterize the properties of these classes of elusive AGN, the datasets of several deep and large-area surveys have been fully exploited. The first class of “elusive” sources is characterized by X-ray-to-optical flux ratios (X/O) significantly higher than those generally observed for unobscured quasars and Seyfert galaxies. The properties of well-defined samples of high X/O sources detected at bright X-ray fluxes suggest that X/O selection is highly efficient in sampling high-redshift obscured quasars. At the limits of deep Chandra surveys (~10^-16 erg cm^-2 s^-1), high X/O sources are generally characterized by extremely faint optical magnitudes, hence their spectroscopic identification is hardly feasible even with the largest telescopes. In this framework, a detailed investigation of their X-ray properties may provide useful information on the nature of this important component of the X-ray source population.
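The X/O selection described above rests on a simple quantity. As a minimal sketch, assuming the commonly used R-band convention (the zero-point constant is a typical literature value, not necessarily the exact definition adopted in the thesis):

```python
import math

def log_x_over_o(fx_cgs, r_mag, const=5.5):
    """log10 of the X-ray-to-optical flux ratio X/O.

    fx_cgs : X-ray flux in erg cm^-2 s^-1
    r_mag  : R-band magnitude of the optical counterpart
    const  : band-dependent zero point (5.5 is a commonly used
             R-band value; treated here as an assumption)
    """
    return math.log10(fx_cgs) + r_mag / 2.5 + const

# A faint optical counterpart of a moderately bright X-ray source
# yields a high X/O, i.e. the "elusive" regime discussed above:
print(log_x_over_o(1e-14, 25.0))  # log(X/O) = 1.5
```

Sources with log(X/O) well above 0 would then be flagged as candidates for obscured, high-redshift AGN.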
The X-ray data of the deepest X-ray observations ever performed, the Chandra deep fields, allow us to characterize the average X-ray properties of the high X/O population. The results of spectral analysis clearly indicate that the high X/O sources represent the most obscured component of the X-ray background. Their spectra are harder (Γ ≈ 1) than those of any other class of sources in the deep fields, and also harder than the XRB spectrum (Γ ≈ 1.4). In order to better understand AGN physics and evolution, a much better knowledge of the redshift, luminosity and spectral energy distributions (SEDs) of elusive AGN is of paramount importance. The recent COSMOS survey provides the necessary multiwavelength database to characterize the SEDs of a statistically robust sample of obscured sources. The combination of high X/O and red colors offers a powerful tool to select obscured luminous objects at high redshift. A large sample of X-ray emitting extremely red objects (R − K > 5) has been collected and their optical-infrared properties have been studied. In particular, using an appropriate SED fitting procedure, the nuclear and host galaxy components have been deconvolved over a large range of wavelengths, and optical nuclear extinctions, black hole masses and Eddington ratios have been estimated. It is important to remark that the combination of hard X-ray selection and extremely red colors is highly efficient in picking up highly obscured, luminous sources at high redshift. Although the XBONGs do not represent a new source population, they have attracted renewed attention after the discovery of several examples in recent Chandra and XMM-Newton surveys. Even though several possibilities have been proposed in the recent literature to explain why a relatively luminous (L_X = 10^42-10^43 erg s^-1) hard X-ray source does not leave any significant signature of its presence in terms of optical emission lines, the very nature of XBONGs is still a subject of debate.
Good-quality photometric near-infrared data (ISAAC/VLT) for 4 low-redshift XBONGs from the HELLAS2XMM survey have been used to search for the presence of the putative nucleus, applying the surface-brightness decomposition technique. In two out of the four sources, the presence of a weak nuclear component hosted by a bright galaxy has been revealed. The results indicate that moderate amounts of gas and dust, covering a large solid angle (possibly 4π) at the nuclear source, may explain the lack of optical emission lines. A weak nucleus unable to produce sufficient UV photons may provide an alternative or additional explanation. On the basis of an admittedly small sample, we conclude that XBONGs constitute a mixed bag rather than a new source population. When the presence of a nucleus is revealed, it turns out to be mildly absorbed and hosted by a bright galaxy.
Abstract:
Proper functioning of ion channels is a prerequisite for a normal cell, and disorders involving ion channels, or channelopathies, underlie many human diseases. Long QT syndromes (LQTS), for example, may arise from malfunctioning of the hERG channel, caused either by the binding of drugs or by mutations in the HERG gene. In the first part of this thesis I present a framework to investigate the mechanism of ion conduction through the hERG channel. The free energy profile governing the elementary steps of ion translocation in the pore was computed by means of umbrella sampling simulations. Compared to previous studies, we detected a different dynamic behavior: according to our data, hERG is more likely to mediate a conduction mechanism of the kind referred to as “single-vacancy-like” by Roux and coworkers (2001), rather than a “knock-on” mechanism. The same protocol was applied to a model of hERG carrying the Gly628Ser mutation, found to be a cause of congenital LQTS. The results provided interesting insights into why the mutant channel malfunctions. Since they have critical functions in the viral life cycle, viral ion channels, such as the M2 proton channel, are considered attractive targets for antiviral therapy. A deep knowledge of the mechanisms that the virus employs to survive in the host cell is of primary importance for the identification of new antiviral strategies. In the second part of this thesis I shed light on the role that M2 plays in the control of the electrical potential inside the virus, charge equilibration being a condition required to allow proton influx. Ion conduction through M2 was simulated using the metadynamics technique. Based on our results, we suggest that an anion-mediated cation-proton exchange, as well as a direct anion-proton exchange, could both contribute to explaining the activity of the M2 channel.
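The core post-processing step of umbrella sampling is removing the known harmonic bias from each window's sampled histogram before the windows are stitched together (e.g. with WHAM). A minimal single-window sketch, with synthetic data standing in for MD output and illustrative values for the force constant and window center:

```python
import numpy as np

kT = 0.596          # kcal/mol at ~300 K
k_spring = 20.0     # harmonic bias force constant (illustrative)
x0 = 1.0            # window center along the translocation coordinate

# Synthetic "biased trajectory" samples standing in for MD output
rng = np.random.default_rng(1)
samples = rng.normal(loc=1.05, scale=0.1, size=20000)

hist, edges = np.histogram(samples, bins=40, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0  # only bins that were actually visited

# Unbias the window: F(x) = -kT ln p_biased(x) - U_bias(x) + const
F_biased = -kT * np.log(hist[mask])
U_bias = 0.5 * k_spring * (centers[mask] - x0) ** 2
F_unbiased = F_biased - U_bias
F_unbiased -= F_unbiased.min()   # fix the arbitrary additive constant
```

A full free energy profile then combines many such overlapping windows, matching their unknown constants; this sketch shows only the per-window unbiasing step.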
Abstract:
An extensive sample (2%) of private vehicles in Italy is equipped with a GPS device that periodically measures position and dynamical state for insurance purposes. Access to this type of data makes it possible to develop theoretical and practical applications of great interest: the real-time reconstruction of the traffic state in a given region, the development of accurate models of vehicle dynamics, and the study of the cognitive dynamics of drivers. For these applications to be possible, we first need the ability to reconstruct the paths taken by vehicles on the road network from the raw GPS data. These data are affected by positioning errors and are often widely spaced (~2 km apart), so the task of path identification is not straightforward. This thesis describes the approach we followed to reliably identify vehicle paths from this kind of low-sampling-rate data. The problem of matching data points to roads is solved with a Bayesian maximum-likelihood approach, while the path taken between two consecutive GPS measurements is identified with a purpose-built optimal routing algorithm based on the A* algorithm. The procedure was applied to an off-line urban data sample and proved to be robust and accurate. Future developments will extend the procedure to real-time execution and nation-wide coverage.
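The routing step between consecutive GPS fixes can be sketched with a textbook A* search on a toy graph. This is an illustration of the algorithm, not the thesis's actual routing code; a straight-line (Euclidean) heuristic stands in for the admissible travel-cost bound a road network would use:

```python
import heapq

def a_star(graph, coords, start, goal):
    """A* shortest path. graph: node -> [(neighbor, cost)];
    coords: node -> (x, y), used for the Euclidean heuristic."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    # Heap entries: (g + h, g, node, path-so-far)
    open_heap = [(h(start), 0.0, start, [start])]
    best = {}                      # cheapest g seen per node
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return g, path
        if node in best and best[node] <= g:
            continue               # stale entry, already expanded cheaper
        best[node] = g
        for nbr, cost in graph.get(node, []):
            heapq.heappush(open_heap,
                           (g + cost + h(nbr), g + cost, nbr, path + [nbr]))
    return float("inf"), []

# Toy road network: two routes from A to D, the upper one cheaper.
graph = {"A": [("B", 1.0), ("C", 1.0)], "B": [("D", 1.0)], "C": [("D", 3.0)]}
coords = {"A": (0, 0), "B": (1, 0), "C": (0, 1), "D": (1, 1)}
cost, path = a_star(graph, coords, "A", "D")
print(cost, path)  # 2.0 ['A', 'B', 'D']
```

In the map-matching setting, `start` and `goal` would be the road-network nodes selected for two consecutive GPS fixes, and edge costs would reflect travel time or distance.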
Abstract:
Summary PhD Thesis Jan Pollmann: This thesis focuses on global-scale measurements of light, reactive non-methane hydrocarbons (NMHCs) in the volatility range from ethane to toluene, with a special focus on ethane, propane, isobutane, butane, isopentane and pentane. Even though they occur only at the ppt level (pmol mol-1) in the remote troposphere, these species can yield insight into key atmospheric processes. An analytical method was developed and subsequently evaluated to analyze NMHCs from the NOAA ESRL cooperative air sampling network. Potential analytical interferences from other atmospheric trace gases (water vapor and ozone) were carefully examined. The analytical parameters accuracy and precision were analyzed in detail. More than 90% of the data points were shown to meet the Global Atmosphere Watch (GAW) data quality objective. Trace gas measurements from 28 measurement stations were used to derive the global atmospheric distribution profile for four NMHCs (ethane, propane, isobutane, butane). A close comparison of the derived ethane data with previously published reports showed that the northern hemispheric ethane background mixing ratio has declined by approximately 30% since 1990. No such change was observed for southern hemispheric ethane. The NMHC data and trace gas data supplied by NOAA ESRL were used to estimate local diurnally averaged hydroxyl radical (OH) mixing ratios by variability analysis. The variability-derived OH was found to be in good agreement with directly measured and modeled OH mixing ratios outside the tropics. Tropical OH was on average two times higher than predicted by the model. Variability analysis was also used to assess the effect of chlorine radicals on atmospheric oxidation chemistry. It was found that Cl is probably not of significant relevance on a global scale.
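Variability analysis of this kind typically relies on an empirical relation between the variability of a trace gas and its atmospheric lifetime. A minimal sketch, assuming a Jobson-type power law s_lnX = A·τ^(−b); the values of A and b below are purely illustrative placeholders, not the fitted values from the thesis:

```python
def oh_from_variability(s_lnx, k_oh, a=2.5, b=0.45):
    """Estimate [OH] (molecules cm^-3) from trace-gas variability.

    Assumes the empirical variability-lifetime relation
    s_lnX = A * tau**(-b) (A, b fitted from many species; the
    defaults here are illustrative only). k_oh is the OH rate
    constant in cm^3 molecule^-1 s^-1; tau comes out in the
    time unit implied by the fit (seconds here).
    """
    tau = (a / s_lnx) ** (1.0 / b)   # lifetime implied by the variability
    return 1.0 / (k_oh * tau)        # from tau = 1 / (k_OH * [OH])
```

Given the standard deviation of ln(mixing ratio) at a station and the species' OH rate constant, this inverts the variability into an effective local OH level, which is how the diurnally averaged OH estimates above are obtained in spirit.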
Abstract:
In this study a new, fully non-linear approach to Local Earthquake Tomography is presented. Local Earthquake Tomography (LET) is a non-linear inversion problem that allows the joint determination of earthquake parameters and velocity structure from arrival times of waves generated by local sources. Since the early developments of seismic tomography, several inversion methods have been developed to solve this problem in a linearized way. In the framework of Monte Carlo sampling, we developed a new code based on the Reversible Jump Markov chain Monte Carlo sampling method (rj-MCMC). It is a trans-dimensional approach in which the number of unknowns, and thus the model parameterization, is treated as one of the unknowns. I show that our new code allows us to overcome major limitations of linearized tomography, opening a new perspective in seismic imaging. Synthetic tests demonstrate that our algorithm is able to produce a robust and reliable tomography without the need to make subjective a-priori assumptions about starting models and parameterization. Moreover, it provides a more accurate estimate of the uncertainties in the model parameters. It is therefore very suitable for investigating the velocity structure in regions that lack accurate a-priori information. Synthetic tests also reveal that the absence of regularization constraints allows more information to be extracted from the observed data, and that the velocity structure can be detected even in regions where the density of rays is low and standard linearized codes fail. I also present high-resolution Vp and Vp/Vs models in two widely investigated regions: the Parkfield segment of the San Andreas Fault (California, USA) and the area around the Alto Tiberina fault (Umbria-Marche, Italy). In both cases, the models obtained with our code show a substantial improvement in the data fit compared with the models obtained from the same data set with linearized inversion codes.
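The trans-dimensional idea can be illustrated with a deliberately tiny rj-MCMC sketch: a 1-D "velocity" profile parameterized by a variable number of nodes, with birth, death and perturb moves. This is a toy stand-in for the thesis's tomography code; it uses the common simplification that when birth values are drawn from the prior, the acceptance ratio reduces to the likelihood ratio (as in Bodin & Sambridge's formulation):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: a noisy step profile the sampler should recover
x = np.linspace(0.0, 1.0, 50)
y = np.where(x < 0.5, 1.0, 2.0) + rng.normal(0.0, 0.1, x.size)
sigma = 0.1
K_MAX = 10

def predict(nodes):
    # 1-D Voronoi model: each datum takes the value of its nearest node
    pos = np.array([p for p, _ in nodes])
    val = np.array([v for _, v in nodes])
    return val[np.abs(x[:, None] - pos[None, :]).argmin(axis=1)]

def log_like(nodes):
    r = y - predict(nodes)
    return -0.5 * np.sum((r / sigma) ** 2)

nodes = [(0.5, 1.5)]            # start from a single node
ll = log_like(nodes)
k_trace = []                    # posterior samples of the dimension k
for _ in range(3000):
    move = rng.integers(3)
    prop = list(nodes)
    if move == 0 and len(prop) < K_MAX:      # birth: new node from the prior
        prop.append((rng.uniform(0.0, 1.0), rng.uniform(0.0, 3.0)))
    elif move == 1 and len(prop) > 1:        # death: remove a random node
        prop.pop(rng.integers(len(prop)))
    else:                                    # perturb one node's value
        i = rng.integers(len(prop))
        p, v = prop[i]
        prop[i] = (p, v + rng.normal(0.0, 0.2))
    ll_prop = log_like(prop)
    # Birth values drawn from the prior => acceptance = likelihood ratio
    if np.log(rng.uniform()) < ll_prop - ll:
        nodes, ll = prop, ll_prop
    k_trace.append(len(nodes))
```

The histogram of `k_trace` approximates the posterior over the number of model parameters: the data themselves decide the parameterization, which is the point made in the abstract.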
Abstract:
For the detection of hidden objects by low-frequency electromagnetic imaging, the Linear Sampling Method works remarkably well despite the fact that its rigorous mathematical justification is still incomplete. In this work, we give an explanation for this good performance by showing that in the low-frequency limit the measurement operator fulfills the assumptions for the fully justified variant of the Linear Sampling Method, the so-called Factorization Method. We also show how the method has to be modified in the physically relevant case of electromagnetic imaging with divergence-free currents. We present numerical results to illustrate our findings, and to show that similar performance can be expected for the case of conducting objects and layered backgrounds.
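The range test underlying the Factorization Method can be stated schematically as follows. This is a generic sketch after Kirsch's formulation; the precise operators and test functions for this electromagnetic set-up differ in detail:

```latex
% A sampling point z lies inside the scatterer D if and only if the
% test function \phi_z lies in the range of F_{\#}^{1/2}, which is
% checked through a Picard-type series criterion.
\[
  F_{\#} = \lvert \operatorname{Re} F \rvert + \lvert \operatorname{Im} F \rvert,
  \qquad
  z \in D \iff
  \sum_{j} \frac{\lvert \langle \phi_z, \psi_j \rangle \rvert^2}{\lambda_j}
  < \infty ,
\]
% where (\lambda_j, \psi_j) is an eigensystem of the self-adjoint,
% positive operator F_{\#} built from the measurement operator F,
% and \phi_z is the problem-specific test function for the point z.
```

In practice the series is truncated and its reciprocal plotted over a sampling grid, so that points inside the object light up as large indicator values.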
Abstract:
We consider a simple (but fully three-dimensional) mathematical model for the electromagnetic exploration of buried, perfect electrically conducting objects within the soil underground. Moving an electric device parallel to the ground at constant height in order to generate a magnetic field, we measure the induced magnetic field within the device, and factor the underlying mathematics into a product of three operations which correspond to the primary excitation, some kind of reflection on the surface of the buried object(s) and the corresponding secondary excitation, respectively. Using this factorization we are able to give a justification of the so-called sampling method from inverse scattering theory for this particular set-up.
Abstract:
Many age-related neurodegenerative disorders such as Alzheimer’s disease, Parkinson’s disease, amyotrophic lateral sclerosis and polyglutamine disorders, including Huntington’s disease, are associated with the aberrant formation of protein aggregates. These protein aggregates and/or their precursors are believed to be causally linked to the pathogenesis of such protein conformation disorders, also referred to as proteinopathies. The accumulation of protein aggregates, frequently under conditions of an age-related increase in oxidative stress, implies the failure of protein quality control and the resulting proteome instability as an upstream event of proteinopathies. As aging is a main risk factor of many proteinopathies, potential alterations of protein quality control pathways that accompany the biological aging process could be a crucial factor for the onset of these disorders.

The focus of this dissertation lies on age-related alterations of protein quality control mechanisms that are regulated by the co-chaperones of the BAG (Bcl-2-associated athanogene) family. BAG proteins are thought to promote nucleotide exchange on Hsc/Hsp70 and to couple the release of chaperone-bound substrates to distinct down-stream cellular processes. The present study demonstrates that BAG1 and BAG3 are reciprocally regulated during aging leading to an increased BAG3 to BAG1 ratio in cellular models of replicative senescence as well as in neurons of the aging rodent brain. Furthermore, BAG1 and BAG3 were identified as key regulators of protein degradation pathways. BAG1 was found to be essential for effective degradation of polyubiquitinated proteins by the ubiquitin/proteasome system, possibly by promoting Hsc/Hsp70 substrate transfer to the 26S proteasome. In contrast, BAG3 was identified to stimulate the turnover of polyubiquitinated proteins by macroautophagy, a catabolic process mediated by lysosomal hydrolases.
BAG3-regulated protein degradation was found to depend on the function of the ubiquitin-receptor protein SQSTM1 which is known to sequester polyubiquitinated proteins for macroautophagic degradation. It could be further demonstrated that SQSTM1 expression is tightly coupled to BAG3 expression and that BAG3 can physically interact with SQSTM1. Moreover, immunofluorescence-based microscopic analyses revealed that BAG3 co-localizes with SQSTM1 in protein sequestration structures suggesting a direct role of BAG3 in substrate delivery to SQSTM1 for macroautophagic degradation. Consistent with these findings, the age-related switch from BAG1 to BAG3 was found to determine that aged cells use the macroautophagic system more intensely for the turnover of polyubiquitinated proteins, in particular of insoluble, aggregated quality control substrates. Finally, in vivo expression analysis of macroautophagy markers in young and old mice as well as analysis of the lysosomal enzymatic activity strongly indicated that the macroautophagy pathway is also recruited in the nervous system during the organismal aging process.

Together these findings suggest that protein turnover by macroautophagy is gaining importance during the aging process as insoluble quality control substrates are increasingly produced that cannot be degraded by the proteasomal system. For this reason, a switch from the proteasome regulator BAG1 to the macroautophagy stimulator BAG3 occurs during cell aging. Hence, it can be concluded that the BAG3-mediated recruitment of the macroautophagy pathway is an important adaptation of the protein quality control system to maintain protein homeostasis in the presence of an enhanced pro-oxidant and aggregation-prone milieu characteristic of aging. Future studies will explore whether an impairment of this adaptation process may contribute to age-related proteinopathies.
Abstract:
We study the numerical solution of the inverse scattering problem of reconstructing the shape, position and number of finitely many perfectly conducting objects from near-field measurements of time-harmonic electromagnetic waves using metal detectors. We assume that the objects lie entirely in the lower half-space of an unbounded two-layered background medium, with the upper half-space filled with air and the lower half-space filled with soil. We first consider the physical foundations of electromagnetic waves, from which we derive a simplified mathematical model in which the electromagnetic field itself is measured directly. We then extend this model to the measurement of the electromagnetic field of transmitting coils by means of receiving coils. For the simplified model, using the theory of the associated direct scattering problem, we develop a non-iterative reconstruction method based on the idea of the so-called Factorization Method, and then carry this method over to the extended model. We propose an implementation of the reconstruction method and demonstrate its applicability in a series of numerical experiments. Furthermore, we investigate several variants of the method aimed at improving the reconstructions and reducing the computation time.
Abstract:
In this work, a coarse-grained (CG) simulation model for peptides in aqueous solution is developed. In a CG approach, the number of degrees of freedom of the system is reduced, so that larger systems can be studied on longer time scales. The interaction potentials of the CG model are constructed so that the peptide conformations of a higher-resolution (atomistic) model are reproduced. This work examines the influence of different bonded interaction potentials in the CG simulation, in particular the extent to which the conformational equilibrium of the atomistic simulation can be reproduced. By construction, a CG approach loses microscopic structural details of the peptide, for example correlations between degrees of freedom along the peptide chain. The dissertation shows that these “lost” properties can be restored by a back-mapping procedure in which the atomistic degrees of freedom are reinserted into the CG structures. This succeeds as long as the conformations of the CG model agree well with those at the atomistic level. The correlations mentioned play a major role in the formation of secondary structures and are therefore of decisive importance for a realistic ensemble of peptide conformations. It is shown that good agreement between CG and atomistic chain conformations requires special bonded interactions such as 1-5 bond and 1,3,5 angle potentials. The intramolecular parameters (i.e. bonds, angles, torsions) parameterized for short oligopeptides are transferable to longer peptide sequences.
However, these bonded interactions can only be used together with the non-bonded interaction potentials employed during parameterization; they cannot, for example, simply be combined with a different water model. Since the energy landscape in CG simulations is smoother than in the atomistic model, the dynamics are accelerated. This acceleration differs between dynamical processes, for example between different types of motion (rotation and translation). This is an important aspect in studying the kinetics of structure formation processes, for example peptide aggregation.
Abstract:
Generic object recognition is an important function of the human visual system, and one that is highly useful in everyday life. For an artificial vision system it is a hard and challenging task, because instances of the same object category can generate very different images, depending on variables such as illumination conditions, the pose of the object, the viewpoint of the camera, partial occlusions, and unrelated background clutter. The purpose of this thesis is to develop a system that is able to classify objects in 2D images based on context, identifying the category to which each object belongs. Given an image, the system can classify it and decide the correct category of the object. A further objective of this thesis is to test the performance and precision of different supervised Machine Learning algorithms on this specific task of object image categorization. Across different experiments, the implemented application shows good categorization performance despite the difficulty of the problem. The project remains open to future improvement: new algorithms can be implemented, or other feature-extraction techniques can be used, to make the system more reliable. The application can be installed inside an embedded system and, after training (performed outside the system), can classify objects in real time. The information given by a 3D stereo camera, developed in the Department of Computer Engineering of the University of Bologna, can be used to improve the accuracy of the classification task. The idea is to segment a single object in a scene using the depth information from the stereo camera, and in this way make the classification more accurate.
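Comparing supervised classifiers on a categorization task can be sketched with two of the simplest baselines, nearest centroid and 1-nearest-neighbor, on synthetic feature vectors. The data below are a hypothetical stand-in for image descriptors (e.g. histograms of visual words), not the thesis's actual features or algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature vectors for two well-separated object categories
X_train = np.vstack([rng.normal(0.0, 0.5, (50, 8)),
                     rng.normal(5.0, 0.5, (50, 8))])
y_train = np.array([0] * 50 + [1] * 50)
X_test = np.vstack([rng.normal(0.0, 0.5, (20, 8)),
                    rng.normal(5.0, 0.5, (20, 8))])
y_test = np.array([0] * 20 + [1] * 20)

def nearest_centroid(X_tr, y_tr, X_te):
    # Predict the class whose mean feature vector is closest
    cents = np.array([X_tr[y_tr == c].mean(axis=0) for c in np.unique(y_tr)])
    d = ((X_te[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def one_nn(X_tr, y_tr, X_te):
    # Predict the label of the single closest training example
    d = ((X_te[:, None, :] - X_tr[None, :, :]) ** 2).sum(axis=2)
    return y_tr[d.argmin(axis=1)]

for name, pred in [("nearest centroid", nearest_centroid(X_train, y_train, X_test)),
                   ("1-NN", one_nn(X_train, y_train, X_test))]:
    print(name, "accuracy:", (pred == y_test).mean())
```

Evaluating several such classifiers on a held-out test split, as done here, is the experimental pattern the abstract describes, just with real image features and stronger learners.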
Abstract:
During a two-stage revision for prosthetic joint infection (PJI), joint aspirations, open tissue sampling and serum inflammatory markers are performed before re-implantation to exclude ongoing silent infection. We investigated the performance of these diagnostic procedures with respect to the risk of recurrence of PJI among asymptomatic patients undergoing a two-stage revision. A total of 62 PJIs were found in 58 patients. All patients had intra-operative surgical exploration during re-implantation, and 48 of them had intra-operative microbiological swabs. Additionally, 18 joint aspirations and one open biopsy were performed before second-stage re-implantation. Recurrence or persistence of PJI occurred in 12 cases, with a mean delay of 218 days after re-implantation, but only four pre- or intra-operative invasive joint samples had grown a pathogen in culture. In at least seven recurrent PJIs (58%), patients had a normal C-reactive protein (CRP, < 10 mg/l) level before re-implantation. For the prediction of PJI recurrence, the sensitivity, specificity, positive predictive and negative predictive values were 0.58, 0.88, 0.50 and 0.84 for pre-operative invasive joint aspiration, and 0.17, 0.81, 0.13 and 0.86 for CRP, respectively. In conclusion, pre-operative joint aspiration, intra-operative bacterial sampling, surgical exploration and serum inflammatory markers are poor predictors of PJI recurrence. The onset of reinfection usually occurs long after re-implantation.
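The four reported test statistics all come from a 2x2 confusion table. A minimal sketch of the computation; the counts below are illustrative only, since the abstract does not give the raw tables:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # recurrences correctly flagged
        "specificity": tn / (tn + fp),   # non-recurrences correctly cleared
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Illustrative counts: 12 recurrences of which 7 were test-positive,
# and 50 non-recurrent joints of which 6 were test-positive.
m = diagnostic_metrics(tp=7, fp=6, fn=5, tn=44)
print({k: round(v, 2) for k, v in m.items()})
```

With these hypothetical counts, sensitivity and specificity come out near the aspiration values quoted above (0.58 and 0.88), which shows how few recurrences the pre-operative work-up actually catches.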
Abstract:
Conventional MRI may still be an inaccurate method for the non-invasive detection of a microadenoma in adrenocorticotropin (ACTH)-dependent Cushing's syndrome (CS). Bilateral inferior petrosal sinus sampling (BIPSS) with ovine corticotropin-releasing hormone (oCRH) stimulation is an invasive, but accurate, intervention in the diagnostic armamentarium surrounding CS. There is still an ongoing controversial debate regarding lateralization data in detecting a microadenoma. Using BIPSS, we evaluated whether a highly selective placement of microcatheters without diversion of venous outflow might improve detection of pituitary microadenomas.