942 results for Energy quality
Abstract:
Thanks to the Chandra and XMM-Newton surveys, the hard X-ray sky is now probed down to a flux limit where the bulk of the X-ray background is almost completely resolved into discrete sources, at least in the 2–8 keV band. Extensive programs of multiwavelength follow-up observations showed that the large majority of hard X-ray selected sources are identified with Active Galactic Nuclei (AGN) spanning a broad range of redshifts, luminosities, and optical properties. A sizable fraction of relatively luminous X-ray sources hosting an active, presumably obscured, nucleus would not have been easily recognized as such on the basis of optical observations, because they are characterized by "peculiar" optical properties. In my PhD thesis, I focus on the nature of two classes of hard X-ray selected "elusive" sources: those characterized by high X-ray-to-optical flux ratios and red optical-to-near-infrared colors, a fraction of which are associated with Type 2 quasars, and the X-ray bright optically normal galaxies, also known as XBONGs. In order to characterize the properties of these classes of elusive AGN, the datasets of several deep and large-area surveys have been fully exploited. The first class of "elusive" sources is characterized by X-ray-to-optical flux ratios (X/O) significantly higher than those generally observed for unobscured quasars and Seyfert galaxies. The properties of well-defined samples of high X/O sources detected at bright X-ray fluxes suggest that X/O selection is highly efficient in sampling high-redshift obscured quasars. At the limits of deep Chandra surveys (∼10⁻¹⁶ erg cm⁻² s⁻¹), high X/O sources are generally characterized by extremely faint optical magnitudes; hence their spectroscopic identification is hardly feasible even with the largest telescopes. In this framework, a detailed investigation of their X-ray properties may provide useful information on the nature of this important component of the X-ray source population. The X-ray data of the deepest X-ray observations ever performed, the Chandra deep fields, allow us to characterize the average X-ray properties of the high X/O population. The results of spectral analysis clearly indicate that the high X/O sources represent the most obscured component of the X-ray background. Their spectra are harder (Γ ∼ 1) than those of any other class of sources in the deep fields, and also harder than the XRB spectrum (Γ ≈ 1.4). In order to better understand AGN physics and evolution, a much better knowledge of the redshift, luminosity, and spectral energy distributions (SEDs) of elusive AGN is of paramount importance. The recent COSMOS survey provides the necessary multiwavelength database to characterize the SEDs of a statistically robust sample of obscured sources. The combination of high X/O and red colors offers a powerful tool to select obscured luminous objects at high redshift. A large sample of X-ray emitting extremely red objects (R − K > 5) has been collected and their optical-infrared properties have been studied. In particular, using an appropriate SED fitting procedure, the nuclear and host galaxy components have been deconvolved over a large range of wavelengths, and optical nuclear extinctions, black hole masses, and Eddington ratios have been estimated. It is important to remark that the combination of hard X-ray selection and extreme red colors is highly efficient in picking up highly obscured, luminous sources at high redshift.
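The X/O ratio used for this selection is conventionally the logarithmic ratio between the X-ray flux and the flux implied by the optical magnitude of the counterpart. A minimal sketch follows, using the commonly adopted R-band form of the relation; the zero-point constant is indicative, since its exact value varies in the literature with the X-ray band and optical filter. Unobscured AGN typically fall in the band -1 ≲ X/O ≲ 1, so the sources discussed here lie above it.

```python
import numpy as np

def x_over_o(f_x, r_mag, const=5.5):
    """X-ray-to-optical flux ratio, X/O = log10(f_X / f_R).

    f_x   : X-ray flux in erg cm^-2 s^-1 (e.g., 2-10 keV)
    r_mag : R-band magnitude of the optical counterpart
    const : zero-point tying the R magnitude to a flux; 5.5 is a
            commonly used value, but it depends on band and filter.
    """
    return np.log10(f_x) + r_mag / 2.5 + const

# A source near the deep-survey flux limit with a faint optical counterpart:
print(x_over_o(1e-15, 25.0))  # 0.5, i.e. already at the top of the "normal AGN" band
```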
Although XBONGs do not represent a new source population, interest in the nature of these sources has been renewed after the discovery of several examples in recent Chandra and XMM-Newton surveys. Even though several possibilities have been proposed in the recent literature to explain why a relatively luminous (LX = 10⁴²–10⁴³ erg s⁻¹) hard X-ray source does not leave any significant signature of its presence in terms of optical emission lines, the very nature of XBONGs is still a subject of debate. Good-quality photometric near-infrared data (ISAAC/VLT) of four low-redshift XBONGs from the HELLAS2XMM survey have been used to search for the presence of the putative nucleus, applying the surface-brightness decomposition technique. In two out of the four sources, the presence of a weak nuclear component hosted by a bright galaxy has been revealed. The results indicate that moderate amounts of gas and dust, covering a large solid angle (possibly 4π) at the nuclear source, may explain the lack of optical emission lines. A weak nucleus not able to produce sufficient UV photons may provide an alternative or additional explanation. On the basis of an admittedly small sample, we conclude that XBONGs constitute a mixed bag rather than a new source population. When the presence of a nucleus is revealed, it turns out to be mildly absorbed and hosted by a bright galaxy.
Abstract:
The research activity carried out during the PhD course in Electrical Engineering belongs to the field of electric and electronic measurements. The main subject of the present thesis is a distributed measurement system to be installed in Medium Voltage power networks, together with the method developed to analyze the data acquired by the measurement system itself and to monitor power quality. Chapter 2 illustrates the increasing interest in power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines issued. The issue of the quality of the voltage provided by utilities, and influenced by customers at the various points of a network, emerged only in recent years, in particular as a consequence of the liberalization of the energy market. Traditionally, the concept of quality of the delivered energy has been associated mostly with its continuity; hence reliability was the main characteristic to be ensured in power systems. Nowadays, the number and duration of interruptions are the "quality indicators" most commonly perceived by customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context it should be noted that, although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can be used to improve system reliability too. Given the vast scenario of power-quality-degrading phenomena that can occur in distribution networks, the study has been focused on electromagnetic transients affecting line voltages. The outcome of this study has been the design and realization of a distributed measurement system which continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component, and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system and must be detected before the protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowledge of the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by technical staff to remedy the abnormal condition and/or the damage. The part of the thesis presenting the results of this study is structured as follows: chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. The state of the art concerning methods to detect and locate faults in distribution networks is then presented. Finally, attention is paid to the particular technique adopted for this purpose in the thesis, and to the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks on which the fault location method has been applied by means of simulations, as well as the results obtained case by case.
In this way the performance of the location procedure is tested, first under ideal and then under realistic operating conditions. Chapter 5 presents the measurement system designed to implement the transient detection and fault location method. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. The global measurement system is then characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty on the estimated position of the fault in the network under test. Finally, this parameter is computed according to the Guide to the Expression of Uncertainty in Measurement, by means of a numerical procedure. The last chapter describes a device, designed and realized during the PhD activity, aimed at replacing the commercial capacitive voltage divider belonging to the conditioning block of the measurement chain. This study has been carried out with the aim of providing an alternative to the transducer in use that could offer equivalent performance at lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the application of the method much more feasible.
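The core of the fault location idea is that a transient wavefront launched at the fault reaches time-synchronized remote stations at different instants. As a hedged illustration of the principle (the classic two-end arrival-time method, not the thesis's actual multi-station algorithm):

```python
def locate_fault(t_a, t_b, line_length, v=2.9e8):
    """Two-end traveling-wave fault location.

    t_a, t_b    : GPS-synchronized arrival times (s) of the transient
                  wavefront at the two line ends A and B
    line_length : line length (m)
    v           : propagation speed (m/s), close to the speed of light
                  on overhead lines (assumed value)

    A wave leaving the fault at t0 reaches A after x/v and B after
    (L - x)/v, so t_a - t_b = (2x - L)/v and x = (L + v*(t_a - t_b))/2.
    """
    return (line_length + v * (t_a - t_b)) / 2.0

# Example: 10 km line, wavefront seen 10 us earlier at end A than at end B
x = locate_fault(100.0e-6, 110.0e-6, 10_000)
print(f"fault at {x:.0f} m from end A")  # 3550 m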
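The core of the fault location idea is that a transient wavefront launched at the fault reaches time-synchronized remote stations at different instants. As a hedged illustration of the principle (the classic two-end arrival-time method, not the thesis's actual multi-station algorithm):

```python
def locate_fault(t_a, t_b, line_length, v=2.9e8):
    """Two-end traveling-wave fault location.

    t_a, t_b    : GPS-synchronized arrival times (s) of the transient
                  wavefront at the two line ends A and B
    line_length : line length (m)
    v           : propagation speed (m/s), close to the speed of light
                  on overhead lines (assumed value)

    A wave leaving the fault at t0 reaches A after x/v and B after
    (L - x)/v, so t_a - t_b = (2x - L)/v and x = (L + v*(t_a - t_b))/2.
    """
    return (line_length + v * (t_a - t_b)) / 2.0

# Example: 10 km line, wavefront seen 10 us earlier at end A than at end B
x = locate_fault(100.0e-6, 110.0e-6, 10_000)
print(f"fault at {x:.0f} m from end A")  # 3550 m
```

The combined uncertainty computed per the GUM then propagates the timing, synchronization, and transducer uncertainties through this relation.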
Abstract:
Due to the growing attention of consumers towards their food, the improvement of the quality of animal products has become one of the main focuses of research. To this aim, the application of modern molecular genetics approaches has proved extremely useful and effective. This innovative drive includes all livestock species, including pigs. The Italian pig breeding industry is unique because it needs heavy pigs slaughtered at about 160 kg for the production of high quality processed products; for this reason, it requires precise meat quality and carcass characteristics. Two aspects have been considered in this thesis: the application of transcriptome analysis in post mortem pig muscles as a possible method to evaluate meat quality parameters related to the pre mortem status of the animals, including health, nutrition, and welfare, with potential applications for product traceability (chapters 3 and 4); and the study of candidate genes for obesity-related traits in order to identify markers associated with fatness in pigs that could be applied to improve carcass quality (chapters 5, 6, and 7). Chapter three addresses the first issue from a methodological point of view. When we considered this issue, it was not obvious that post mortem skeletal muscle could be useful for transcriptomic analysis. We therefore demonstrated that the quality of RNA extracted from skeletal muscle of pigs sampled at different post mortem intervals (20 minutes, 2 hours, 6 hours, and 24 hours) is good for downstream applications. Degradation occurred starting from 48 h post mortem, even though at this time point it is still possible to use some RNA products. In the fourth chapter, in order to demonstrate the potential use of RNA obtained up to 24 hours post mortem, we present the results of RNA analysis with the Affymetrix microarray platform, which made it possible to assess the expression level of more than 24,000 mRNAs. We did not identify any significant differences between the different post mortem times, suggesting that this technique could be applied to retrieve information from the transcriptome of skeletal muscle samples not collected immediately after slaughtering. This study represents the first contribution of this kind applied to pork. In the fifth chapter, we investigated the TBC1D1 [TBC1 (tre-2/USP6, BUB2, cdc16) domain family, member 1] gene as a candidate for fat deposition. This gene is involved in mechanisms regulating energy homeostasis in skeletal muscle and is associated with predisposition to obesity in humans. By resequencing a fragment of the TBC1D1 gene we identified three synonymous mutations localized in exon 2 (g.40A>G, g.151C>T, and g.172T>C) and two polymorphisms localized in intron 2 (g.219G>A and g.252G>A). One of these polymorphisms (g.219G>A) was genotyped by high resolution melting (HRM) analysis and PCR-RFLP. Moreover, this gene sequence was mapped by radiation hybrid analysis on porcine chromosome 8. The association study was conducted in 756 performance-tested pigs of the Italian Large White and Italian Duroc breeds. Significant results were obtained for lean meat content, back fat thickness, visible intermuscular fat, and ham weight. In chapter six, a second candidate gene (tribbles homolog 3, TRIB3) is analyzed in a study of association with carcass and meat quality traits. The TRIB3 gene is involved in the energy metabolism of skeletal muscle and plays a role as a suppressor of adipocyte differentiation.
We identified two polymorphisms in the first coding exon of the porcine TRIB3 gene: one is a synonymous SNP (c.132T>C), the other a missense mutation (c.146C>T, p.P49L). The two polymorphisms appear to be in complete linkage disequilibrium between and within breeds. The in silico analysis of the p.P49L substitution suggests that it might have a functional effect. The association study in about 650 pigs indicates that this marker is associated with back fat thickness in the Italian Large White and Italian Duroc breeds in two different experimental designs. This polymorphism is also associated with the lactate content of muscle semimembranosus in Italian Large White pigs. Expression analysis indicated that this gene is transcribed in skeletal muscle and adipose tissue as well as in other tissues. In the seventh chapter, we report the genotyping results for 677 SNPs in divergent groups of pigs chosen according to extreme estimated breeding values for back fat thickness. The SNPs were identified in 60 candidate genes for obesity by resequencing, literature mining, and in silico database mining. Genotyping was carried out using the GoldenGate (Illumina) platform. Of the analyzed SNPs, more than 300 were polymorphic in the genotyped population and had a minor allele frequency (MAF) > 0.05. Of these SNPs, 65 were associated (P < 0.10) with back fat thickness. One of the most significant gene markers was the same TBC1D1 SNP reported in chapter 5, confirming the role of this gene in fat deposition in the pig. These results could be important to better define the pig as a model for human obesity, as well as for marker-assisted selection to improve carcass characteristics.
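The MAF > 0.05 filter mentioned above is a simple allele-counting computation. A minimal sketch (the marker name comes from the text; the genotype counts are hypothetical):

```python
from collections import Counter

def minor_allele_frequency(genotypes):
    """Minor allele frequency from diploid genotype calls.

    genotypes: iterable of 2-character strings, e.g. "GA" for a
    g.219G>A heterozygote.
    """
    alleles = Counter(a for g in genotypes for a in g)
    total = sum(alleles.values())
    return min(alleles.values()) / total

# Hypothetical counts: 400 GG, 300 GA, 56 AA animals
calls = ["GG"] * 400 + ["GA"] * 300 + ["AA"] * 56
print(f"MAF = {minor_allele_frequency(calls):.3f}")  # 0.272 > 0.05, kept for testing
```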
Abstract:
In the present work, high quality PMMA opals with different sphere sizes, silica opals from large spheres, multilayer opals, and inverse opals were fabricated. Highly monodisperse PMMA spheres were synthesized by surfactant-free emulsion polymerization (polydispersity ~2%). Large-area and well-ordered PMMA crystalline films with a homogeneous thickness were produced by the vertical deposition method using a drawing device. Optical experiments have confirmed the high quality of these PMMA photonic crystals; e.g., well resolved high-energy bands were observed in the transmission and reflectance spectra of the opaline films. For the fabrication of high quality opaline photonic crystals from large silica spheres (890 nm diameter) self-assembled on patterned Si substrates, a novel technique has been developed in which the crystallization was performed using a drawing apparatus in combination with stirring. The achievements comprise spatial selectivity of opal crystallization without special treatment of the wafer surface; an opal lattice that matches the pattern precisely in width as well as depth; in particular, an absence of cracks within the size of the trenches; and, finally, a good three-dimensional order of the opal lattice even in trenches with a complex confined geometry. Multilayer opals from opaline films with different sphere sizes or different materials were produced by a sequential crystallization procedure. Studies of the transmission in a triple-layer hetero-opal revealed that its optical properties cannot be considered simply as the linear superposition of two independent photonic bandgaps. The remarkable interface effect is the narrowing of the transmission minima. Large-area, high-quality, and robust photonic opal replicas from silicate-based inorganic-organic hybrid polymers (ORMOCER®s) were prepared by the template-directed method, in which a high quality PMMA opal template was infiltrated with a neat inorganic-organic ORMOCER® oligomer that can be photopolymerized within the opaline voids, leading to a fully developed replica structure with a filling factor of nearly 100%. This opal replica is structurally homogeneous, thermally and mechanically stable, and the large-scale (cm² size) replica films can be handled easily as free films with a pair of tweezers.
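The spectral position of the stop bands probed in such transmission and reflectance experiments can be estimated with the standard Bragg-Snell relation for the (111) planes of an fcc opal. A minimal sketch with textbook values (PMMA index ~1.49, fcc filling fraction 0.74; the 300 nm diameter is just an example):

```python
import numpy as np

def bragg_peak(D, n_sphere, n_medium=1.0, theta_deg=0.0, f=0.74):
    """Bragg-Snell estimate of the (111) stop-band position of an opal.

    D        : sphere diameter (nm)
    n_sphere : refractive index of the spheres (PMMA ~ 1.49)
    n_medium : index of the material filling the voids (air = 1.0)
    theta_deg: angle of incidence from the surface normal
    f        : filling fraction of an fcc lattice (~0.74)
    """
    d111 = np.sqrt(2.0 / 3.0) * D                      # fcc (111) plane spacing
    n_eff = np.sqrt(f * n_sphere**2 + (1 - f) * n_medium**2)
    return 2.0 * d111 * np.sqrt(n_eff**2 - np.sin(np.radians(theta_deg))**2)

print(bragg_peak(300, 1.49))   # ~676 nm for 300 nm PMMA spheres in air
```

Infiltrating the voids (as in the ORMOCER® replicas) raises n_medium and shifts the stop band to longer wavelengths, which is one way such replicas are verified optically.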
Abstract:
The present study concerns the acoustical characterisation of Italian historical theatres. It started from ISO 3382, which provides the guidelines for the measurement of a well established set of room acoustic parameters inside performance spaces. Nevertheless, the peculiarity of Italian historical theatres calls for a more specific approach. The Charter of Ferrara goes in this direction, aiming at qualifying the sound field in this kind of hall, and the present work pursues this way forward. To understand how the acoustical qualification should be done, the Bonci Theatre in Cesena was taken as a case study. In September 2012 acoustical measurements were carried out in the theatre, recording monaural and binaural impulse responses at each seat in the hall. The values of the time criteria, energy criteria, and psycho-acoustical and spatial criteria were extracted according to ISO 3382. Statistics were performed and a 3D model of the theatre was realised and tuned. Statistical investigations were carried out on the whole set of measurement positions and on carefully chosen reduced subsets; it turned out that these subsets are representative only of the "average" acoustics of the hall. Normality tests were carried out to verify whether EDT, T30 and C80 could be described with some degree of reliability by a theoretical distribution. Different results were found, according to the varying assumptions underlying each test. An attempt was then made to relate the numerical results emerging from the statistical analysis to the perceptual sphere. Looking for "acoustically equivalent areas", relative difference limens were considered as threshold values. No rule of thumb emerged. Finally, the significance of the usual representation through mean values and standard deviations, which may be meaningful for normally distributed data, was investigated.
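Of the parameters named above, the clarity index C80 illustrates how ISO 3382 values are extracted from a measured impulse response: it is the early-to-late energy ratio with an 80 ms boundary. A minimal sketch on a synthetic decay (not the Bonci Theatre data):

```python
import numpy as np

def clarity_c80(ir, fs):
    """Early-to-late energy ratio C80 (dB) as defined in ISO 3382.

    ir : impulse response, assumed to start at the direct sound
    fs : sampling rate (Hz)
    """
    n80 = int(0.080 * fs)                 # 80 ms boundary in samples
    energy = ir.astype(float) ** 2
    early = energy[:n80].sum()
    late = energy[n80:].sum()
    return 10.0 * np.log10(early / late)

# Synthetic exponential decay with T ~ 1.5 s as a stand-in for a measured IR
fs = 48_000
t = np.arange(0, 2.0, 1 / fs)
ir = np.random.randn(t.size) * np.exp(-6.91 * t / 1.5)
print(f"C80 = {clarity_c80(ir, fs):.1f} dB")
```

EDT and T30 are obtained from the same impulse response via the backward-integrated (Schroeder) decay curve rather than the early/late split.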
Abstract:
The wide diffusion of cheap, small, and portable sensors integrated in an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly and promptly analyzed, can be exploited to build new intelligent and pervasive services with the potential to improve people's quality of life in a variety of cross-cutting domains such as entertainment, health-care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality-of-service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and we present Quasit, its prototype implementation, offering a scalable and extensible platform that can be used by researchers to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, by performing a large experimental study on the prototype of our novel LAAR dynamic replication technique. Our modeling, prototyping, and experimental work demonstrates that, by providing the data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
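The application-level QoS knowledge referred to above can be pictured as per-stream annotations that the middleware consults when choosing distribution and fault-tolerance policies. The sketch below is purely illustrative: the class and field names are hypothetical and are not Quasit's actual API.

```python
from dataclasses import dataclass
from enum import Enum

class Delivery(Enum):          # hypothetical QoS classes, not Quasit's real API
    BEST_EFFORT = 0            # losses allowed (cheap, e.g. an entertainment feed)
    AT_LEAST_ONCE = 1          # no losses, duplicates possible
    EXACTLY_ONCE = 2           # full fault tolerance (costly, e.g. health-care)

@dataclass
class StreamQoS:
    delivery: Delivery
    max_latency_ms: int        # end-to-end processing deadline
    loss_budget: float = 0.0   # tolerated fraction of dropped tuples: a
                               # "weaker guarantee" in the spirit of LAAR

# A health-care stream demands strict guarantees; an ambient sensor feed does not.
ecg = StreamQoS(Delivery.EXACTLY_ONCE, max_latency_ms=100)
room_temp = StreamQoS(Delivery.BEST_EFFORT, max_latency_ms=5000, loss_budget=0.05)
```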
Abstract:
Thermal effects are rapidly gaining importance in nanometer heterogeneous integrated systems. Increased power density, coupled with the spatio-temporal variability of chip workload, causes lateral and vertical temperature non-uniformities (variations) in the chip structure. The assumption of a uniform temperature for a large circuit leads to inaccurate determination of key design parameters. To improve design quality, we need precise estimation of temperature at detailed spatial resolution, which is computationally very intensive. Consequently, thermal analysis of designs needs to be done at multiple levels of granularity. To further investigate the flow of chip/package thermal analysis, we exploit the Intel Single Chip Cloud Computer (SCC) and propose a methodology for calibrating the SCC on-die temperature sensors. We also develop an infrastructure for online monitoring of the SCC temperature sensor readings and SCC power consumption. With the thermal simulation tool in hand, we propose MiMAPT, an approach for analyzing delay, power, and temperature in digital integrated circuits. MiMAPT integrates seamlessly into industrial front-end and back-end chip design flows. It accounts for temperature non-uniformities and self-heating while performing analysis. Furthermore, we extend the temperature-variation-aware analysis of designs to 3D MPSoCs with Wide-I/O DRAM. We reduce the DRAM refresh power by considering the lateral and vertical temperature variations in the 3D structure and adapting the per-DRAM-bank refresh period accordingly. We develop an advanced virtual platform which models the performance, power, and thermal behavior of a 3D-integrated MPSoC with Wide-I/O DRAMs in detail. Moving towards real-world multi-core heterogeneous SoC designs, a reconfigurable heterogeneous platform (ZYNQ) is exploited to further study the performance and energy efficiency of various CPU-accelerator data sharing methods in heterogeneous hardware architectures. A complete hardware accelerator featuring clusters of OpenRISC CPUs with dynamic address remapping capability is built and verified on real hardware.
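The per-bank refresh adaptation exploits the fact that DRAM cell retention time grows rapidly as temperature drops, roughly doubling for every 10 °C or so (a common rule of thumb, not a vendor specification). A hedged sketch of such a policy, with illustrative bank temperatures:

```python
def refresh_period_ms(bank_temp_c, base_ms=64.0, ref_temp_c=85.0, slope_c=10.0):
    """Per-bank refresh period from the bank's estimated temperature.

    Retention time is assumed to double for every `slope_c` degrees below
    the reference temperature (rule of thumb), so cooler banks can be
    refreshed less often, saving refresh power. base_ms is the standard
    64 ms period at ref_temp_c.
    """
    return base_ms * 2.0 ** ((ref_temp_c - bank_temp_c) / slope_c)

# Vertically stacked Wide-I/O banks see different temperatures:
for bank, temp in enumerate([88.0, 75.0, 62.0, 55.0]):
    print(f"bank {bank}: {temp:5.1f} C -> refresh every {refresh_period_ms(temp):7.1f} ms")
```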
Abstract:
In the last decade, the near-surface mounted (NSM) strengthening technique using carbon fibre reinforced polymers (CFRP) has been increasingly used to improve the load carrying capacity of concrete members. Compared to externally bonded reinforcement (EBR), the NSM system presents considerable advantages. This technique consists of inserting carbon fibre reinforced polymer laminate strips into pre-cut slits opened in the concrete cover of the elements to be strengthened. The CFRP reinforcement is bonded to the concrete with an appropriate groove filler, typically epoxy adhesive or cement grout. Up to now, research efforts have mainly focused on several structural aspects, such as bond behaviour, flexural and/or shear strengthening effectiveness, and the energy dissipation capacity of beam-column joints. In such research works, as well as in field applications, the most widespread adhesives used to bond reinforcements to concrete are epoxy resins. It is largely accepted that the performance of the whole NSM application strongly depends on the mechanical properties of the epoxy resins, for which proper curing conditions must be assured. Therefore, non-destructive methods that allow monitoring the curing process of epoxy resins in NSM CFRP systems are desirable, in view of obtaining continuous information that can indicate the effectiveness of curing and the expected bond behaviour of CFRP/adhesive/concrete systems. The experimental research was developed at the Laboratory of the Structural Division of the Civil Engineering Department of the University of Minho in Guimarães, Portugal (LEST). The main objective was to develop and propose a new method for the continuous quality control of the curing of epoxy resins applied in NSM CFRP strengthening systems. This objective is pursued through the adaptation of an existing technique, termed EMM-ARM (Elasticity Modulus Monitoring through Ambient Response Method), which had been developed for monitoring the early stiffness evolution of cement-based materials. The experimental program was composed of two parts: (i) direct pull-out tests on concrete specimens strengthened with NSM CFRP laminate strips, conducted to assess the evolution of the bond behaviour between CFRP and concrete from early ages; and (ii) EMM-ARM tests, carried out to monitor the progressive stiffness development of the structural adhesive used in CFRP applications. In order to verify the capability of the proposed method to evaluate the elastic modulus of the epoxy, the static E-modulus was determined through tension tests. The results of the two series of tests were then combined and compared to evaluate the feasibility of a new method for the continuous monitoring and quality control of NSM CFRP applications.
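EMM-ARM infers stiffness from the resonance of a mould filled with the curing material, identified from its ambient vibration response: as the adhesive stiffens, the first natural frequency rises. As a hedged illustration of the underlying relation, assuming a pinned-pinned Euler-Bernoulli beam (the geometry and numbers below are hypothetical, not the actual test setup):

```python
import numpy as np

def e_modulus_from_f1(f1, L, m_lin, I):
    """Composite-beam flexural modulus from the first resonance frequency.

    Pinned-pinned Euler-Bernoulli beam: f1 = (pi / (2 L^2)) * sqrt(E I / m),
    inverted for E. In EMM-ARM the beam is the mould plus the curing
    adhesive, so this E is a composite stiffness; the resin contribution is
    then separated using the mould's known properties.

    f1    : first natural frequency (Hz)
    L     : span (m)
    m_lin : mass per unit length (kg/m)
    I     : second moment of area of the composite section (m^4)
    """
    EI = m_lin * (2.0 * L**2 * f1 / np.pi) ** 2
    return EI / I

# Illustrative numbers only (hypothetical mould geometry):
print(e_modulus_from_f1(f1=13.0, L=0.45, m_lin=0.35, I=2.0e-10) / 1e9, "GPa")  # ~4.9
```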
Abstract:
Besides the traditional paradigm of "centralized" power generation, a new concept of "distributed" generation is emerging, in which the user becomes a prosumer. During this transition, Energy Storage Systems (ESS) can provide multiple services and features which are necessary for a higher quality of the electrical system and for the optimization of non-programmable Renewable Energy Source (RES) power plants. An ESS prototype was designed, developed, and integrated into a renewable energy production system in order to create a smart microgrid and consequently manage the energy flow efficiently and intelligently as a function of the power demand. The produced energy can be fed into the grid, supplied directly to the load, or stored in batteries. The microgrid is composed of a 7 kW wind turbine (WT) and a 17 kW photovoltaic (PV) plant. The load is given by the electrical utilities of a cheese factory. The ESS is composed of two subsystems: a Battery Energy Storage System (BESS) and a Power Control System (PCS). With the aim of sizing the ESS, a Remote Grid Analyzer (RGA) was designed, realized, and connected to the wind turbine, the photovoltaic plant, and the switchboard. Afterwards, different electrochemical storage technologies were studied and, taking into account the load requirements of the cheese factory, the most suitable solution was identified in the high-temperature Na-NiCl2 battery technology. The data acquisition from all electrical utilities provided a detailed load analysis, indicating an optimal storage size of 30 kW. Moreover, a container was designed and realized to house the BESS and PCS, meeting all the requirements and safety conditions. Furthermore, a smart control system was implemented in order to handle the different applications of the ESS, such as peak shaving or load levelling.
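Peak shaving and load levelling both reduce the residual demand seen by the grid by storing renewable surplus and discharging above a demand threshold. A minimal sketch of such a dispatch rule (the 30 kW battery, 7 kW WT, and 17 kW PV come from the text; the threshold, capacity, and time step are assumed):

```python
def dispatch(load_kw, pv_kw, wind_kw, soc_kwh, *,
             peak_kw=50.0, cap_kwh=60.0, max_batt_kw=30.0, dt_h=0.25):
    """One step of a simple peak-shaving / load-levelling rule.

    Returns (battery_kw, new_soc_kwh): battery_kw > 0 discharges to the
    load, battery_kw < 0 stores surplus renewable energy.
    """
    net = load_kw - pv_kw - wind_kw          # residual demand on the grid
    if net > peak_kw:                        # shave the peak
        p = min(net - peak_kw, max_batt_kw, soc_kwh / dt_h)
    elif net < 0:                            # store renewable surplus
        p = -min(-net, max_batt_kw, (cap_kwh - soc_kwh) / dt_h)
    else:
        p = 0.0
    return p, soc_kwh - p * dt_h

p, soc = dispatch(load_kw=70, pv_kw=10, wind_kw=4, soc_kwh=40)
print(p, soc)   # discharge 6 kW to hold the grid draw at the 50 kW threshold
```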
Abstract:
Graphene, the thinnest two-dimensional material possible, is considered a realistic candidate for numerous applications in electronics and in energy storage and conversion devices due to its unique properties, such as high optical transmittance, high conductivity, and excellent chemical and thermal stability. However, the electronic and chemical properties of graphene are highly dependent on the preparation method. Therefore, the development of novel chemical exfoliation processes aiming at high-yield synthesis of high quality graphene, while maintaining good solution processability, is of great concern. This thesis focuses on the solution production of high-quality graphene by wet-chemical exfoliation methods and addresses the applications of the chemically exfoliated graphene in organic electronics and energy storage devices.

Platinum is the most commonly used catalyst for fuel cells, but it suffers from sluggish electron transfer kinetics. On the other hand, heteroatom-doped graphene is known to enhance not only electrical conductivity but also long-term operation stability. In this regard, a simple synthetic method is developed for the preparation of nitrogen-doped graphene (NG). Moreover, iron (Fe) can be incorporated into the synthetic process. As-prepared NG, with and without Fe, shows excellent catalytic activity and stability compared to that of Pt-based catalysts.

High electrical conductivity is one of the most important requirements for the application of graphene in electronic devices. Therefore, for the fabrication of electrically conductive graphene films, a novel methane-plasma-assisted reduction of GO is developed. The high electrical conductivity of the plasma-reduced GO films revealed excellent electrochemical performance in terms of high power and energy densities when used as an electrode in micro-supercapacitors.

Although GO can be prepared in bulk scale, the large defect density and low electrical conductivity are major drawbacks. To overcome the intrinsic limitation of the poor quality of GO and/or reduced GO, a novel protocol is established for the mass production of high-quality graphene by means of electrochemical exfoliation of graphite. The prepared graphene shows high electrical conductivity, low defect density, and good solution processability. Furthermore, when used as electrodes in organic field-effect transistors and/or in supercapacitors, the electrochemically exfoliated graphene shows excellent device performance. The low-cost and environmentally friendly production of such high-quality graphene is of great importance for future-generation electronics and energy storage devices.
Abstract:
Hybrid electrode materials (HEM) are key to fundamental advances in energy storage and energy conversion systems, including lithium-ion batteries (LiBs), supercapacitors (SCs), and fuel cells (FCs). The fascinating properties of graphene make it a good starting material for the preparation of HEM. However, traditional methods for producing graphene HEM (GHEM) often fail due to the lack of control over the morphology and its uniformity, which leads to insufficient interfacial interactions and poor material performance. This work focuses on the preparation of GHEM via controlled synthesis methods and addresses the use of well-defined GHEM for energy storage and conversion. Large volume expansion is the main drawback of prospective lithium-storage materials. First, a three-dimensional graphene foam hybrid is prepared to reinforce the framework and improve the electrochemical performance of the Fe3O4 anode material; the use of graphene shells and graphene networks realizes a double protection against the volume fluctuation of Fe3O4 during the electrochemical process. The performance of SCs and FCs depends on the pore structure and the accessible surface area, and on the catalytic sites of the electrode materials, respectively. We show that controlling the porosity via graphene-based carbon nanosheets (HPCN) increases the accessible surface area and the ion transport/charge storage for SC applications. Furthermore, nitrogen-doped carbon nanosheets (NDCN) were prepared for the cathodic oxygen reduction reaction (ORR): tailored mesoporosity combined with heteroatom doping (nitrogen) promotes the exposure of active sites and the ORR performance of metal-free catalysts. High-quality electrochemically exfoliated graphene (EEG) is a promising candidate for the preparation of GHEM; however, the controlled preparation of EEG hybrids remains a major challenge. Finally, a bottom-up strategy is presented for the preparation of EEG sheets with a series of functional nanoparticles (Si, Fe3O4, and Pt NPs). This work demonstrates a promising route for the economical synthesis of EEG and EEG-based materials.
Abstract:
To assess the sensitivity and image quality of chest radiography (CXR) with or without dual-energy subtracted (ES) bone images in the detection of rib fractures.
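Dual-energy bone images are conventionally formed by weighted log subtraction of the high- and low-kVp exposures, which cancels soft-tissue contrast and leaves bone (and hence rib fractures) enhanced. A minimal sketch; the weight is a placeholder, since the tissue-cancellation value comes from system-specific calibration:

```python
import numpy as np

def bone_image(i_high, i_low, w=0.5):
    """Weighted log subtraction for dual-energy (ES) bone imaging.

    i_high, i_low : co-registered intensity images from the high- and
                    low-energy acquisitions (positive values)
    w             : soft-tissue cancellation weight; 0.5 is a placeholder,
                    the real weight comes from calibration
    """
    return np.log(i_high) - w * np.log(i_low)
```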
Abstract:
There are two main types of bone in the human body, trabecular and cortical bone. Cortical bone is primarily found on the outer surface of most bones in the body, while trabecular bone is found in vertebrae and at the ends of long bones (Ross 2007). Osteoporosis is a condition that compromises the structural integrity of trabecular bone, greatly reducing the ability of the bone to absorb energy from falls. The current method for diagnosing osteoporosis and predicting fracture risk is measurement of bone mineral density. Limitations of this method include dependence on the bone density measurement device and dependence on the type of test and measurement location (Rubin 2005). Each year there are approximately 250,000 hip fractures in the United States due to osteoporosis (Kleerekoper 2006). Currently, the most common method for repairing a hip fracture is hip fixation surgery. During surgery, a temporary guide wire is inserted to guide the permanent screw into place and is then removed. It is believed that directly measuring this screw pullout force may result in a better assessment of bone quality than current indirect measurement techniques (T. Bowen 2008-2010, pers. comm.). The objective of this project is to design a device that can measure the force required to extract this guide wire. It is believed that this would give the surgeon a direct, quantitative measurement of bone quality at the site of the fixation. A first-generation device was designed by a Bucknell Biomedical Engineering Senior Design team during the 2008-2009 academic year. The first step of this project was to examine the device, conduct a thorough design analysis, and brainstorm new concepts. The concept selected uses a translational screw to extract the guide wire. The device was fabricated and underwent validation testing to ensure that it was functional and met the required engineering specifications. Two tests were conducted: one to test the functionality of the device by checking whether it gave repeatable results, and the other to test the sensitivity of the device to misalignment. Guide wires were extracted from three materials, low-density polyethylene, ultra-high molecular weight polyethylene, and polypropylene, and the force of extraction was measured. During testing, it was discovered that the spring in the device did not have a high enough spring constant to reach the high forces necessary for extracting the wires without excessive deflection of the spring. The test procedure was therefore modified slightly so that the wires were not fully threaded into the material. The testing results indicate that there is significant variation in the screw pullout force, up to 30% of the average value. This variation was attributed to problems in the testing and data collection, and a revised set of tests was proposed to better evaluate the performance of the device. The fabricated device is a fully functioning prototype, and further refinement and testing may lead to a third-generation version capable of measuring the screw pullout force during hip fixation surgery.
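The 30% figure above is a coefficient of variation, i.e. the sample standard deviation of the extraction forces relative to their mean. A minimal sketch with hypothetical force readings (not the project's data):

```python
import numpy as np

# Hypothetical extraction forces (N) from repeated pulls in one material:
forces = np.array([412.0, 388.0, 505.0, 451.0, 367.0])
cv = forces.std(ddof=1) / forces.mean()   # sample std relative to the mean
print(f"mean = {forces.mean():.0f} N, CV = {100 * cv:.0f}%")  # ~13% here
```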
Abstract:
Tandem mass spectral libraries are gaining more and more importance for the identification of unknowns in different fields of research, including metabolomics, forensics, toxicology, and environmental analysis. In particular, the recent introduction of reliable, robust, and transferable libraries has increased the general acceptance of these tools. Herein, we report results obtained from a thorough evaluation of the match reliabilities of two tandem mass spectral libraries: the MSforID library established by the Oberacher group in Innsbruck and the Weinmann library established by the Weinmann group in Freiburg. Three different experiments were performed: (1) spectra of the libraries were searched against their corresponding library after excluding either this single compound-specific spectrum or all compound-specific spectra prior to searching; (2) the libraries were searched against each other, using either library as reference set or sample set; (3) spectra acquired on different mass spectrometric instruments were matched to both libraries. Almost 13,000 tandem mass spectra were included in this study. The MSforID search algorithm was used for spectral matching. Statistical evaluation of the library search results revealed that both libraries principally enable the sensitive and specific identification of compounds. Due to the higher mass accuracy of the QqTOF compared with the QTrap instrument, matches to the MSforID library were more reliable when comparing spectra with both libraries. Furthermore, only the MSforID library was shown to be efficiently transferable to different kinds of tandem mass spectrometers, including "tandem-in-time" instruments; this is due to the coverage of a large range of collision energy settings, including the very low range, which is an outstanding characteristic of the MSforID library.
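At its core, library searching of this kind reduces to scoring a query spectrum against each reference spectrum. The sketch below shows the generic dot-product (spectral contrast) score on a fixed m/z grid; the MSforID algorithm uses its own, more elaborate matching criteria that are not reproduced here, and the peak lists are invented:

```python
import numpy as np

def cosine_match(spec_a, spec_b, bin_width=0.01, mz_max=2000.0):
    """Dot-product (cosine) similarity of two centroided MS/MS spectra.

    Peaks are accumulated on a fixed m/z grid of width `bin_width`; a
    score of 1.0 means identical normalized spectra.
    """
    n = int(mz_max / bin_width)
    va = np.zeros(n); vb = np.zeros(n)
    for mz, inten in spec_a:
        va[int(mz / bin_width)] += inten
    for mz, inten in spec_b:
        vb[int(mz / bin_width)] += inten
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    return float(va @ vb / denom) if denom else 0.0

lib = [(91.05, 100.0), (119.08, 45.0), (163.11, 20.0)]   # invented reference
qry = [(91.05, 95.0), (119.08, 50.0)]                    # invented query
print(f"match = {cosine_match(lib, qry):.2f}")           # ~0.98
```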
Abstract:
The low-energy β− emitter 161Tb is very similar to 177Lu with respect to half-life, beta energy, and chemical properties. However, 161Tb also emits a significant amount of conversion and Auger electrons; a greater therapeutic effect can therefore be expected in comparison to 177Lu. It also emits low-energy photons that are useful for gamma camera imaging. The 160Gd(n,γ)161Gd→161Tb production route was used to produce 161Tb by neutron irradiation of massive 160Gd targets (up to 40 mg) in nuclear reactors. A semiautomated procedure based on cation exchange chromatography was developed and applied to isolate no-carrier-added (n.c.a.) 161Tb from the bulk of the 160Gd target and from its stable decay product 161Dy. 161Tb was used for radiolabeling DOTA-Tyr3-octreotate; the radiolabeling profile was compared to that of commercially available n.c.a. 177Lu. A 161Tb Derenzo phantom was imaged using a small-animal single-photon emission computed tomography camera. Up to 15 GBq of 161Tb was produced by long-term irradiation of Gd targets. Using a cation exchange resin, we obtained 80%–90% of the available 161Tb with high specific activity, radionuclidic and chemical purity, and in quantities sufficient for therapeutic applications. The 161Tb obtained was of the quality required to prepare 161Tb–DOTA-Tyr3-octreotate. We were able to produce 161Tb in n.c.a. form by irradiating highly enriched 160Gd targets; it can be obtained in the quantity and quality required for the preparation of 161Tb-labeled therapeutic agents.
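The activity reachable through this route follows standard activation kinetics: the intermediate 161Gd decays within minutes, so on the time scale of the irradiation the chain behaves like direct production of 161Tb at rate R = Nσφ. A hedged sketch (the 161Tb half-life is a nominal literature value; the cross section and flux are assumed round numbers, not the values used in the work):

```python
import numpy as np

HALF_LIFE_TB161_D = 6.89          # days (nominal value; check nuclear data)

def tb161_activity(n_targets, sigma_cm2, flux, t_irr_d):
    """Saturation buildup of 161Tb during reactor irradiation.

    Because 161Gd (T1/2 of a few minutes) decays almost instantly on the
    irradiation time scale, 160Gd(n,g)161Gd -> 161Tb behaves like direct
    production at rate R = N * sigma * phi, giving A(t) = R*(1 - exp(-lambda*t)).

    n_targets : number of 160Gd atoms in the target
    sigma_cm2 : effective (n,gamma) cross section (cm^2, assumed)
    flux      : neutron flux (n cm^-2 s^-1, assumed)
    t_irr_d   : irradiation time (days)
    """
    lam = np.log(2) / (HALF_LIFE_TB161_D * 86400.0)          # decay constant, 1/s
    rate = n_targets * sigma_cm2 * flux                      # production, atoms/s
    return rate * (1.0 - np.exp(-lam * t_irr_d * 86400.0))   # activity, Bq

# 40 mg of enriched 160Gd (from the text), sigma ~1.4 b and flux 1e14 assumed:
n = 0.040 / 160.0 * 6.022e23
print(f"{tb161_activity(n, 1.4e-24, 1e14, 14) / 1e9:.1f} GBq")  # ~16 GBq
```

With these assumed inputs a two-week irradiation happens to land near the ~15 GBq scale reported above, which is only meant to show that the kinetics are of the right order.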