697 results for rosin sizing
Abstract:
Background: Particulate systems are well known to deliver drugs with high efficiency and fewer adverse side effects, possibly through endocytosis of the drug carriers. In addition, cationic compounds and assemblies exhibit a general antimicrobial action. In this work, cationic nanoparticles built from drug, cationic lipid and polyelectrolytes are shown to be excellent and active carriers of amphotericin B against C. albicans. Results: Assemblies of amphotericin B and cationic lipid at extreme drug-to-lipid molar ratios were wrapped by polyelectrolytes, forming cationic nanoparticles of high colloidal stability and fungicidal activity against Candida albicans. The experimental strategy involved dynamic light scattering for particle sizing, zeta-potential analysis, colloid stability assays, determination of the AmB aggregation state from optical spectra, and determination of activity against Candida albicans in vitro from CFU counts. Conclusion: These novel and effective cationic particles delivered amphotericin B to C. albicans in vitro with an efficiency seldom achieved by the drug, the cationic lipid or the cationic polyelectrolyte separately. The assembly of antibiotic, cationic lipid and cationic polyelectrolyte, consecutively nanostructured in each particle, produced a strategic and effective attack on the fungal cells.
Abstract:
This thesis analyses the main sources of aircraft noise and the state of the art from the regulatory, technological and procedural points of view. The state of the art regarding the classification of aircraft is also examined, and a new performance index is proposed as an alternative to the one indicated by the certification methodology (AC36-ICAO). In order to reduce the acoustic impact of aircraft during the landing phase, the INM program is used to analyse the benefits of 3° CDA (Continuous Descent Approach) procedures compared with traditional procedures and, subsequently, of CDA procedures at steeper angles, in terms of the reduction in length and area of the SEL85, SEL80 and SEL75 noise contours.
Abstract:
We need a large amount of energy to keep our homes pleasantly warm in winter and cool in summer. If we also consider the energy losses that occur through roofs, perimeter walls and windows, it would be more appropriate to speak of waste rather than consumption. A solution is to build passive houses, i.e. buildings that are more efficient and environmentally friendly and able to ensure a drastic reduction of electricity and heating bills. Recently, increased public awareness of global warming and environmental pollution has finally opened wide possibilities in the field of sustainable construction, encouraging new renewable methods for space heating and cooling. Shallow geothermal energy makes it possible to exploit the renewable heat reservoir present in the soil at depths between 15 and 20 m for the air-conditioning of buildings, using a ground-source heat pump. This thesis focuses on the design of an air-conditioning system with a geothermal heat pump coupled to energy piles, i.e. piles with internal heat exchangers, for a building housing a typical Italian family, on the basis of a geological-technical report on a plot in Bologna's plain provided by Geo-Net s.r.l. The study involved a preliminary static sizing of the piles in order to calculate their length and number; the project was then completed with the energy sizing, verifying whether the building's energy needs were met by the static solution obtained. Finally, attention was focused on the technical and economic viability compared with a traditional system (cost-benefit analysis) and on the problem of uncertainty in the design data and its effects on the operating and initial costs of the system (sensitivity analysis). The PILESIM2 software, developed by Dr. Pahud of the SUPSI school, was also used to evaluate the performance of the thermal system and the potential use of the piles.
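As a hedged illustration of the first-pass energy sizing step described above, the sketch below estimates the total energy-pile length from a peak heating demand and a specific heat extraction rate per metre of pile. All numerical values (loads, COP, extraction rate, pile length) are assumptions for the example, not the Bologna design data used in the thesis.

```python
# First-pass energy sizing of pile heat exchangers (illustrative values only).
peak_heating_kw = 8.0        # assumed building peak heating demand
cop = 4.0                    # assumed heat-pump COP in heating mode
ground_load_kw = peak_heating_kw * (cop - 1) / cop   # heat drawn from the ground
q_specific_w_per_m = 40.0    # assumed extraction rate for the local soil (W/m)

required_length_m = ground_load_kw * 1000 / q_specific_w_per_m
pile_length_m = 15.0         # assumed length of each energy pile
n_piles = -(-required_length_m // pile_length_m)     # ceiling division

print(required_length_m, n_piles)   # 150 m of exchanger -> 10 piles for these inputs
```

In practice this rough estimate would then be checked against the static design and refined with a dynamic simulation such as PILESIM2, as done in the thesis.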
Abstract:
Aerosol particles and water vapour are two important constituents of the atmosphere. Their interaction, i.e. the condensation of water vapour on particles, brings about the formation of cloud, fog, and raindrops, driving the water cycle on Earth and influencing climate. Understanding the roles of water vapour and aerosol particles in this interaction has become an essential part of understanding the atmosphere. In this work, the heterogeneous nucleation on pre-existing aerosol particles by the condensation of water vapour in the flow of a capillary nozzle was investigated, including theoretical and numerical modelling as well as experiments on this condensation process. Based on reasonable results from the theoretical and numerical modelling, an idea for a new nozzle condensation nucleus counter (Nozzle-CNC), which utilises the capillary nozzle to create an expanding, water-saturated air flow, was then put forward, and various experiments were carried out with this Nozzle-CNC under different experimental conditions. Firstly, the air stream in the long capillary nozzle with an inner diameter of 1.0 mm was modelled as a steady, compressible and heat-conducting turbulent flow with the CFX-FLOW3D computational program. An adiabatic and isentropic cooling in the nozzle was found. A supersaturation can be created in the nozzle if the inlet flow is water saturated, and its value depends principally on the flow velocity or flow rate through the nozzle. Secondly, a particle condensational growth model in the air stream was developed. An extended Mason diffusion growth equation was given, with a size correction for particles beyond the continuum regime and a correction for a finite particle Reynolds number in an accelerating flow. The modelling results show rapid condensational growth of aerosol particles, especially fine particles, in the nozzle stream. On the one hand, this may induce evident 'over-sizing' and 'over-numbering' effects in aerosol measurements, since nozzle designs are widely employed to produce accelerating and focused aerosol beams in instruments such as the optical particle counter (OPC) and the aerodynamic particle sizer (APS). On the other hand, it can be exploited in constructing the Nozzle-CNC. Thirdly, based on the optimisation of the theoretical and numerical results, the new Nozzle-CNC was built, and experiments were carried out under various conditions such as flow rate, ambient temperature, and the fraction of aerosol in the total flow. An interesting exponential relation was found between the saturation in the nozzle and the number concentration of atmospheric nuclei, including hygroscopic nuclei (HN), cloud condensation nuclei (CCN), and traditionally measured atmospheric condensation nuclei (CN). This relation differs from the relation for the number concentration of CCN obtained by other researchers. The minimum detectable size of this Nozzle-CNC is 0.04 µm. Although further improvements are still needed, this Nozzle-CNC has several advantages over other CNCs: no condensation delay, as particles larger than the critical size grow simultaneously; low diffusion losses of particles; little water condensation on the inner wall of the instrument; adjustable saturation and therefore a wide counting region; and no need for calibration against non-water condensing substances.
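The extended growth equation itself is not reproduced in the abstract; for orientation, the classical Mason diffusional growth equation on which such models are based reads (the size and Reynolds-number corrections developed in the thesis modify the denominator terms and are not shown here):

$$ r\,\frac{dr}{dt} = \frac{S-1}{F_K + F_D}, \qquad
F_K = \left(\frac{L_v}{R_v T} - 1\right)\frac{L_v \rho_w}{K\,T}, \qquad
F_D = \frac{\rho_w R_v T}{D\,e_s(T)} $$

where $r$ is the droplet radius, $S$ the saturation ratio, $L_v$ the latent heat of vaporisation, $R_v$ the specific gas constant of water vapour, $\rho_w$ the density of liquid water, $K$ the thermal conductivity of air, $D$ the diffusivity of water vapour, and $e_s(T)$ the saturation vapour pressure. $F_K$ accounts for latent-heat conduction and $F_D$ for vapour diffusion to the droplet.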
Abstract:
The Aerodyne Time-of-Flight Aerosol Mass Spectrometer (ToF-AMS) is a further development of the Aerodyne aerosol mass spectrometer (Q-AMS), which is well characterized and deployed worldwide. Both instruments use an aerodynamic lens, aerodynamic particle sizing, thermal vaporization and electron impact ionization. In contrast to the Q-AMS, where a quadrupole mass spectrometer is used to analyse the ions, the ToF-AMS employs a time-of-flight mass spectrometer. In the present work, laboratory experiments and field campaigns are used to show that the ToF-AMS is suitable for quantitative measurement of the chemical composition of aerosol particles with high time and size resolution. In addition, a complete scheme for ToF-AMS data analysis is presented, developed to obtain quantitative and meaningful results from the recorded raw data of both field campaigns and laboratory experiments. This scheme is based on the characterization experiments carried out within this work; it comprises the corrections that have to be applied and the calibrations that have to be performed in order to extract reliable results from the raw data. Considerable effort was also invested in the development of a reliable and user-friendly data analysis program, which can be used for automatic and systematic ToF-AMS data analysis and correction.
Abstract:
Due to its practical importance and inherent complexity, the optimisation of distribution networks for supplying drinking water has been the subject of extensive study for the past 30 years. The optimisation may concern the sizing of the pipes in the water distribution network (WDN), specific parts of the network such as pumps and tanks, or the analysis and optimisation of the reliability of the WDN. In this thesis, the author has analysed two different WDNs (the Anytown and Cabrera networks), solving and optimising a multi-objective optimisation problem (MOOP). The two main objectives in both cases were the minimisation of the energy cost (€) or energy consumption (kWh), together with the total number of pump switches (TNps) during a day. For this purpose, GANetXL, a decision support system generator for multi-objective optimisation developed by the Centre for Water Systems at the University of Exeter, was used. GANetXL works by calling the EPANET hydraulic solver each time a hydraulic analysis is required. The main algorithm used was NSGA-II, a second-generation algorithm for multi-objective optimisation, which produced the Pareto fronts for each configuration. The first experiment carried out concerned the Anytown network: a large network with a pumping station of four fixed-speed parallel pumps boosting the water supply. The main intervention was to replace these pumps with variable-speed-driven pumps (VSDPs) by installing inverters capable of varying their speed during the day. Great energy and cost savings were thereby achieved, along with a reduction in the number of pump switches. The results are thoroughly illustrated in Chapter 7, with comments and a variety of graphs and configurations. The second experiment concerned the Cabrera network, a smaller WDN with a single fixed-speed pump. The optimisation problem was the same, namely the minimisation of energy consumption and, in parallel, of TNps, using the same optimisation tool (GANetXL). The main scope was to carry out several different experiments covering a wide variety of configurations, using different pumps (this time keeping the fixed-speed mode), different tank levels, different pipe diameters and different emitter coefficients. All these configurations produced a large number of results, which are compared in Chapter 8. In conclusion, the optimisation of WDNs is a very interesting field with a vast space of options: a large number of algorithms to choose from, different techniques and configurations, and different decision support system generators. The researcher has to be ready to "roam" among these choices until a satisfactory result shows that a good optimisation point has been reached.
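As a hedged illustration of the two objective functions, the sketch below evaluates the daily energy cost and the total number of pump switches for a candidate binary hourly schedule. The pump power and tariff values are invented; in the thesis the hydraulics are evaluated by EPANET through GANetXL and the schedules are evolved by NSGA-II, not scored against fixed per-hour figures.

```python
# Evaluate the two objectives (energy cost, pump switches) for an hourly schedule.
def evaluate_schedule(schedule, pump_kw=75.0, tariff=None):
    """schedule: 24 binary values, 1 = pump running during that hour."""
    if tariff is None:
        # assumed night/day tariff in EUR/kWh
        tariff = [0.08 if h < 7 or h >= 22 else 0.15 for h in range(24)]
    energy_cost = sum(pump_kw * on * tariff[h] for h, on in enumerate(schedule))
    switches = sum(1 for h in range(1, 24) if schedule[h] != schedule[h - 1])
    return energy_cost, switches   # the two objectives to be minimised

# Example: pump on 06:00-10:00 and 14:00-20:00
print(evaluate_schedule([0]*6 + [1]*4 + [0]*4 + [1]*6 + [0]*4))   # (107.25, 4)
```

A multi-objective optimiser such as NSGA-II would trade these two values off against each other, subject to the hydraulic constraints checked by the solver, to build the Pareto front.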
Abstract:
Alongside the traditional paradigm of "centralized" power generation, a new concept of "distributed" generation is emerging, in which the end user becomes a prosumer. During this transition, Energy Storage Systems (ESS) can provide multiple services and features that are necessary for a higher quality of the electrical system and for the optimization of non-programmable Renewable Energy Source (RES) power plants. An ESS prototype was designed, developed and integrated into a renewable energy production system in order to create a smart microgrid and consequently manage the energy flow efficiently and intelligently as a function of the power demand. The produced energy can be fed into the grid, supplied directly to the load, or stored in batteries. The microgrid comprises a 7 kW wind turbine (WT) and a 17 kW photovoltaic (PV) plant; the load is given by the electrical utilities of a cheese factory. The ESS consists of two subsystems: a Battery Energy Storage System (BESS) and a Power Control System (PCS). With the aim of sizing the ESS, a Remote Grid Analyzer (RGA) was designed, built and connected to the wind turbine, the photovoltaic plant and the switchboard. Different electrochemical storage technologies were then studied and, taking into account the load requirements of the cheese factory, the most suitable solution was identified in the high-temperature sodium-nickel chloride (Na-NiCl2) battery technology. The data acquired from all electrical utilities provided a detailed load analysis, indicating an optimal storage size of a 30 kW battery system. Moreover, a container was designed and built to house the BESS and PCS, meeting all requirements and safety conditions. Furthermore, a smart control system was implemented in order to handle the different applications of the ESS, such as peak shaving and load levelling.
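As a hedged illustration of the peak-shaving sizing logic, the sketch below derives the battery power and daily energy needed to cap grid import at a chosen threshold from a load and generation profile. The profiles and threshold are invented; the 30 kW figure in the thesis comes from the RGA measurement campaign.

```python
# Peak-shaving sizing from an hourly load/generation profile (invented data, kW).
load_kw = [12, 11, 10, 10, 12, 20, 35, 48, 52, 50, 45, 40,
           38, 42, 47, 55, 58, 50, 40, 30, 22, 18, 15, 13]   # assumed factory load
pv_wt_kw = [0, 0, 0, 0, 0, 2, 5, 9, 14, 18, 20, 21,
            21, 19, 16, 12, 8, 4, 1, 0, 0, 0, 0, 0]          # assumed PV + WT output

threshold_kw = 30.0                                           # desired grid-import cap
net = [max(l - g, 0.0) for l, g in zip(load_kw, pv_wt_kw)]    # residual load after RES
excess = [max(n - threshold_kw, 0.0) for n in net]            # power the battery must cover

battery_power_kw = max(excess)        # minimum discharge power rating
battery_energy_kwh = sum(excess)      # energy discharged over the day (1 h steps)
print(battery_power_kw, battery_energy_kwh)                   # 20.0 kW, 78.0 kWh here
```

The same profile data would also drive the load-levelling strategy, with the control system deciding hour by hour whether to charge, discharge or stand by.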
Abstract:
Polar stratospheric clouds (PSC), which occur in polar regions at temperatures below about -78 °C, exert a strong influence on the stratospheric ozone layer. This influence is largely exerted through heterogeneous chemical reactions that take place on the surfaces of the cloud particles; the reactions occurring there are a prerequisite for the subsequent ozone depletion. Furthermore, the sedimentation of the cloud particles changes the chemical composition and the vertical distribution of trace gases in the stratosphere. For ozone chemistry, the removal of reactive nitrogen through the sedimentation of nitric-acid-containing cloud particles (denitrification) plays an important role. The same sedimentation process of PSC elements also removes water vapour from the stratosphere (dehydration). Both processes favour prolonged stratospheric ozone depletion in the polar spring.

With particular regard to denitrification by the sedimentation of larger PSC particles, this work presents new results from in-situ measurements carried out on board the high-altitude research aircraft M-55 Geophysica during the RECONCILE campaign in the winter of 2010. In five flights, particle size distributions in the range between 0.5 and 35 µm were measured using cloud particle spectrometers based on light scattering. Since polar stratospheric clouds occur at altitudes between 17 and 30 km, in-situ measurements are comparatively rare, so that a number of open questions remain. In particular, particles with optical diameters of up to 35 µm, detected during the new measurements, have to be reconciled with theoretical constraints: the size of the particles is limited by the availability of the trace gases involved (water vapour and nitric acid), the sedimentation velocity, the time available for growth, and the ambient temperature. These factors are discussed in the present work. From the measured particle volume, for example, the equivalent gas-phase HNO3 concentration is calculated under the assumption of a NAT (nitric acid trihydrate) composition; the result exceeds the nitric acid concentration available in the stratosphere. Hypotheses are then discussed as to how the measured particle volume could have been overestimated, which would be possible, for example, in the case of strongly aspherical particles. Furthermore, a particle mode below 2-3 µm in diameter was identified as STS (supercooled ternary solution) droplets on the basis of its temperature behaviour.

In order to calculate the concentration of the cloud particles from the measurements as accurately as possible, the measurement volume, i.e. the effective sampling area of the instruments, must be known. To determine this sampling area, a droplet generator was built and used to calibrate three instruments; the calibration with the droplet generator concentrated on the Cloud Combination Probe (CCP). In addition to the sampling area and the particle sizing, further sources of measurement error are investigated with the aid of measurements in tropospheric clouds and at a cloud simulation chamber. Among other things, the statistical analysis of inter-arrival times of individual measurement events, recorded in newer probes, was used for this purpose. The latter makes it possible to identify measurement artefacts such as noise, coincidence errors or shattering.
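As a hedged illustration of the equivalent gas-phase HNO3 estimate mentioned above, the sketch below converts a measured NAT particle volume concentration into the HNO3 mixing ratio needed to form it. The NAT density and the example inputs are assumed values, not campaign data.

```python
# Equivalent gas-phase HNO3 from a NAT particle volume concentration (illustrative).
K_B = 1.380649e-23        # Boltzmann constant, J/K
N_A = 6.02214e23          # Avogadro constant, 1/mol
RHO_NAT = 1.62            # NAT density, g/cm^3 (commonly used value)
M_NAT = 117.1             # molar mass of HNO3*3H2O, g/mol (one HNO3 per formula unit)

def equivalent_hno3_ppbv(volume_um3_per_cm3, p_pa, t_k):
    v_cm3 = volume_um3_per_cm3 * 1e-12          # um^3 -> cm^3 of NAT per cm^3 of air
    n_hno3 = v_cm3 * RHO_NAT / M_NAT * N_A      # HNO3 molecules per cm^3 of air
    n_air = p_pa / (K_B * t_k) * 1e-6           # air molecules per cm^3
    return n_hno3 / n_air * 1e9

# e.g. 5 um^3/cm^3 of NAT at 55 hPa and 195 K -> roughly 20 ppbv, already more than
# the few-to-ten ppbv of HNO3 typically available in the polar winter stratosphere.
print(equivalent_hno3_ppbv(5.0, 5500.0, 195.0))
```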
Abstract:
Viscous dampers are known as very effective devices for seismic design and retrofitting. The objective of this thesis is to apply the Five-Step Procedure, developed by a research group at the University of Bologna, to size the viscous dampers to be installed in an existing precast RC structure. The idea is to apply the viscous damping devices in different positions in the structure and then to identify and compare the performance of each placement configuration.
Abstract:
Restriction fragment length polymorphism (RFLP) analysis is an economical and fast technique for molecular typing but has the drawback of difficulties in accurately sizing DNA fragments and comparing banding patterns on agarose gels. We aimed to improve RFLP for typing of the important human pathogen Streptococcus pneumoniae and to compare the results with the commonly used typing techniques of pulsed-field gel electrophoresis and multilocus sequence typing. We designed primers to amplify a noncoding region adjacent to the pneumolysin gene. The PCR product was digested separately with six restriction endonucleases, and the DNA fragments were analyzed using an Agilent 2100 bioanalyzer for accurate sizing. The combined RFLP results for all enzymes allowed us to assign each of the 47 clinical isolates of S. pneumoniae tested to one of 33 RFLP types. RFLP analyzed using the bioanalyzer allowed discrimination between strains similar to that obtained by the more commonly used techniques of pulsed-field gel electrophoresis, which discriminated between 34 types, and multilocus sequence typing, which discriminated between 35 types, but more quickly and with less expense. RFLP of a noncoding region using the Agilent 2100 bioanalyzer could be a useful addition to the molecular typing techniques in current use for S. pneumoniae, especially as a first screen of a local population.
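As a hedged illustration of how the combined six-enzyme digests can be collapsed into RFLP types, the sketch below rounds fragment sizes to a sizing tolerance so that near-identical banding patterns compare equal, then assigns each unique combined pattern a sequential type number. The isolate names, enzymes and fragment sizes are invented for the example.

```python
# Assign composite RFLP types from per-enzyme fragment-size lists (illustrative).
def normalise(fragments, tolerance_bp=10):
    """Round fragment sizes so bands within the sizing tolerance match."""
    return tuple(sorted(round(f / tolerance_bp) * tolerance_bp for f in fragments))

def assign_rflp_types(digests):
    """digests: {isolate: {enzyme: [fragment sizes in bp]}} -> {isolate: type number}."""
    patterns = {}
    types = {}
    for isolate, per_enzyme in digests.items():
        key = tuple((enz, normalise(frags)) for enz, frags in sorted(per_enzyme.items()))
        types[isolate] = patterns.setdefault(key, len(patterns) + 1)
    return types

example = {
    "isolate_01": {"HinfI": [410, 230, 150], "RsaI": [520, 270]},
    "isolate_02": {"HinfI": [412, 228, 151], "RsaI": [518, 273]},   # same pattern as 01
    "isolate_03": {"HinfI": [600, 190],      "RsaI": [520, 270]},   # different pattern
}
print(assign_rflp_types(example))   # {'isolate_01': 1, 'isolate_02': 1, 'isolate_03': 2}
```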
Abstract:
Virtualization has become a common abstraction layer in modern data centers. By multiplexing hardware resources into multiple virtual machines (VMs), and thus enabling several operating systems to run on the same physical platform simultaneously, it can effectively reduce power consumption and building size or improve security by isolating VMs. In a virtualized system, memory resource management plays a critical role in achieving high resource utilization and performance. Insufficient memory allocation to a VM will degrade its performance dramatically; conversely, over-allocation wastes memory resources. Meanwhile, a VM's memory demand may vary significantly. As a result, effective memory resource management calls for a dynamic memory balancer, which, ideally, can adjust memory allocation in a timely manner for each VM based on its current memory demand and thus achieve the best memory utilization and the optimal overall performance. In order to estimate the memory demand of each VM and to arbitrate possible memory resource contention, a widely proposed approach is to construct an LRU-based miss ratio curve (MRC), which provides not only the current working set size (WSS) but also the correlation between performance and the target memory allocation size. Unfortunately, the cost of constructing an MRC is nontrivial. In this dissertation, we first present a low-overhead LRU-based memory demand tracking scheme, which includes three orthogonal optimizations: AVL-based LRU organization, dynamic hot set sizing and intermittent memory tracking. Our evaluation results show that, for the whole SPEC CPU 2006 benchmark suite, after applying the three optimizing techniques, the mean overhead of MRC construction is lowered from 173% to only 2%. Based on the current WSS, we then predict its trend in the near future and take different strategies for different prediction results. When there is a sufficient amount of physical memory on the host, the balancer locally redistributes memory among the VMs. Once the local memory resource is insufficient and the memory pressure is predicted to last for a sufficiently long time, a relatively expensive solution, VM live migration, is used to move one or more VMs from the hot host to other host(s). Finally, for transient memory pressure, a remote cache is used to alleviate the temporary performance penalty. Our experimental results show that this design achieves a 49% center-wide speedup.
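As a baseline for the low-overhead tracking scheme described above, the sketch below builds an LRU miss ratio curve from a page access trace with the classic stack-distance (Mattson) algorithm; the linear stack scan per access is precisely the cost that the dissertation's AVL-based organization, dynamic hot set sizing and intermittent tracking are designed to reduce. The trace and cache sizes are arbitrary.

```python
# Naive LRU miss-ratio-curve construction via stack distances (Mattson's algorithm).
def miss_ratio_curve(trace, max_size):
    stack = []                       # most-recently-used page at index 0
    hist = [0] * (max_size + 1)      # hist[d] = hits at stack distance d
    misses = 0
    for page in trace:
        if page in stack:
            depth = stack.index(page) + 1     # stack distance (1 = MRU)
            if depth <= max_size:
                hist[depth] += 1
            stack.remove(page)                # O(n) scan: the expensive part
        else:
            misses += 1                       # cold miss at every cache size
        stack.insert(0, page)                 # page becomes MRU
    total = len(trace)
    mrc, hits = [], 0
    for size in range(1, max_size + 1):
        hits += hist[size]
        mrc.append((size, 1.0 - hits / total))  # miss ratio if the VM held `size` pages
    return mrc

trace = [1, 2, 3, 1, 2, 4, 1, 2, 3, 4]
for size, ratio in miss_ratio_curve(trace, max_size=4):
    print(size, round(ratio, 2))   # miss ratio falls from 1.0 (size 2) to 0.4 (size 4)
```

The WSS can then be read off the curve as the smallest allocation beyond which the miss ratio stops improving appreciably.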
Abstract:
This report is a PhD dissertation proposal to study the in-cylinder temperature and heat flux distributions within a gasoline turbocharged direct injection (GTDI) engine. Recent regulations requiring automotive manufacturers to increase the fuel efficiency of their vehicles have led to great technological achievements in internal combustion engines. These achievements have increased the power density of gasoline engines dramatically in the last two decades. Engine technologies such as variable valve timing (VVT), direct injection (DI), and turbocharging have significantly improved engine power-to-weight and power-to-displacement ratios. A popular trend for increasing vehicle fuel economy in recent years has been to downsize the engine and add VVT, DI, and turbocharging technologies so that a lighter, more efficient engine can replace a larger, heavier one. With the added power density, thermal management of the engine becomes a more important issue: engine components are being pushed to their temperature limits. It has therefore become increasingly important to gain a greater understanding of the parameters that affect in-cylinder temperatures and heat transfer. The proposed research will analyze the effects of engine speed, load, relative air-fuel ratio (AFR), and exhaust gas recirculation (EGR) on both in-cylinder and global temperature and heat transfer distributions. Additionally, the effects of knocking combustion and fuel spray impingement will be investigated. The proposed research will be conducted on a 3.5 L six-cylinder GTDI engine. The research engine will be instrumented with a large number of sensors to measure in-cylinder temperatures and pressures, as well as the temperature, pressure, and flow rates of energy streams into and out of the engine. One of the goals of this research is to create a model that will predict the energy distribution to the crankshaft, exhaust, and cooling system based on normalized values for engine speed, load, AFR, and EGR. The results could be used to aid the engine design phase for turbocharger and cooling system sizing. Additionally, the data collected can be used for validation of engine simulation models, since in-cylinder temperature and heat flux data are not readily available in the literature.
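As a hedged illustration of the global energy-distribution model the proposal aims at, the sketch below splits the fuel chemical energy at one operating point into crankshaft, exhaust, coolant and unaccounted fractions. All numbers are invented operating-point values, not measurements from the 3.5 L GTDI engine.

```python
# First-law energy balance at a single operating point (illustrative values only).
fuel_flow_kg_s = 0.0040          # assumed fuel mass flow
lhv_mj_kg = 43.5                 # lower heating value of gasoline, approx.
brake_power_kw = 60.0            # assumed measured crankshaft power
exhaust_power_kw = 55.0          # assumed exhaust enthalpy flow above ambient
coolant_power_kw = 45.0          # assumed heat rejected to the coolant

fuel_power_kw = fuel_flow_kg_s * lhv_mj_kg * 1000.0
fractions = {
    "crankshaft": brake_power_kw / fuel_power_kw,
    "exhaust": exhaust_power_kw / fuel_power_kw,
    "coolant": coolant_power_kw / fuel_power_kw,
}
fractions["unaccounted"] = 1.0 - sum(fractions.values())   # friction, radiation, error
print(fuel_power_kw, fractions)   # 174 kW in; roughly 34% / 32% / 26% / 8% here
```

The proposed model would express these fractions as functions of normalized speed, load, AFR and EGR, so that turbocharger and cooling-system sizing can be checked across the operating map.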
Abstract:
BACKGROUND: Peak oxygen uptake (peak VO2) is an established integrative measurement of maximal exercise capacity in cardiovascular disease. After heart transplantation (HTx), peak VO2 remains reduced despite normal systolic left ventricular function, which highlights the relevance of diastolic function. In this study we aim to characterize the predictive significance of cardiac allograft diastolic function for peak VO2. METHODS: Peak VO2 was measured using a ramp protocol on a bicycle ergometer. Left ventricular (LV) diastolic function was assessed with tissue Doppler imaging, measuring the velocity of the early (Ea) and late (Aa) apical movement of the mitral annulus, and with conventional Doppler, measuring early (E) and late (A) diastolic transmitral flow propagation. Correlation coefficients were calculated and linear regression models fitted. RESULTS: The post-transplant time interval of the 39 HTx recipients ranged from 0.4 to 20.1 years. The mean age of the recipients was 55 ± 14 years and body mass index (BMI) was 25.4 ± 3.9 kg/m². Mean LV ejection fraction was 62 ± 4%, mean LV mass index 108 ± 22 g/m², and mean peak VO2 20.1 ± 6.3 ml/kg/min. Peak VO2 was reduced in patients with more severe diastolic dysfunction (pseudonormal or restrictive transmitral inflow pattern), or when E/Ea was ≥10. Peak VO2 correlated with recipient age (r = -0.643, p < 0.001), peak heart rate (r = 0.616, p < 0.001) and BMI (r = -0.417, p = 0.008). Of all echocardiographic measurements, Ea (r = 0.561, p < 0.001) and Ea/Aa (r = 0.495, p = 0.002) correlated best. Multivariate analysis identified age, heart rate, BMI and Ea/Aa as independent predictors of peak VO2. CONCLUSIONS: Diastolic dysfunction is relevant for the limitation of maximal exercise capacity after HTx.
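As a hedged illustration of the multivariate analysis reported above, the sketch below fits an ordinary least-squares model predicting peak VO2 from age, peak heart rate, BMI and Ea/Aa. The data rows are fabricated for illustration; the study's coefficients are not reproduced.

```python
# OLS fit of peak VO2 on age, peak HR, BMI and Ea/Aa (fabricated example data).
import numpy as np

# columns: age (years), peak HR (bpm), BMI (kg/m^2), Ea/Aa
X = np.array([[45, 150, 24.0, 1.3],
              [62, 120, 27.5, 0.8],
              [55, 135, 25.0, 1.0],
              [38, 160, 23.0, 1.5],
              [68, 110, 29.0, 0.7],
              [50, 140, 26.0, 1.1]], dtype=float)
y = np.array([26.0, 15.5, 20.0, 29.0, 13.0, 21.5])   # peak VO2, ml/kg/min (invented)

X1 = np.column_stack([np.ones(len(X)), X])            # add intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
print(dict(zip(["intercept", "age", "peak_hr", "bmi", "ea_aa"], coef.round(3))))
```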
Abstract:
OBJECTIVE: In search of an optimal compression therapy for venous leg ulcers, a systematic review and meta-analysis was performed of randomized controlled trials (RCT) comparing compression systems based on stockings (MCS) with diverse bandages. METHODS: RCT were retrieved from six sources and reviewed independently. The primary endpoint, completion of healing within a defined time frame, and the secondary endpoints, time to healing and pain, were entered into a meta-analysis using the tools of the Cochrane Collaboration. Additional subjective endpoints were summarized. RESULTS: Eight RCT (published 1985-2008) fulfilled the predefined criteria. Data presentation was adequate and showed moderate heterogeneity. The studies included 692 patients (21-178/study, mean age 61 years, 56% women). A total of 688 ulcerated legs were analyzed, with ulcers present for 1 week to 9 years and sizing 1 to 210 cm². The observation period ranged from 12 to 78 weeks. Patient and ulcer characteristics were evenly distributed in three studies, favored the stocking groups in four, and the bandage group in one. Data on the pressure exerted by stockings and bandages were reported in seven and two studies, amounting to 31-56 and 27-49 mm Hg, respectively. The proportion of ulcers healed was greater with stockings than with bandages (62.7% vs 46.6%; P < .00001). The average time to healing (seven studies, 535 patients) was 3 weeks shorter with stockings (P = .0002). In no study did bandages perform better than MCS. Pain was assessed in three studies (219 patients), revealing an important advantage of stockings (P < .0001). Other subjective parameters and issues of nursing also revealed an advantage of MCS. CONCLUSIONS: Leg compression with stockings is clearly better than compression with bandages, has a positive impact on pain, and is easier to use.
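As a hedged illustration of the meta-analytic pooling step, the sketch below combines per-study healing proportions into a fixed-effect Mantel-Haenszel risk ratio. The per-study counts are invented; the review's actual data and its Cochrane-tool analysis are not reproduced.

```python
# Fixed-effect Mantel-Haenszel pooled risk ratio for healing (invented counts).
def mh_pooled_rr(studies):
    """studies: list of (healed_mcs, total_mcs, healed_bandage, total_bandage)."""
    num = den = 0.0
    for a, n1, c, n2 in studies:
        big_n = n1 + n2
        num += a * n2 / big_n    # stocking events weighted by bandage group size
        den += c * n1 / big_n    # bandage events weighted by stocking group size
    return num / den

studies = [
    (30, 45, 22, 44),   # hypothetical trial 1
    (51, 80, 40, 82),   # hypothetical trial 2
    (18, 25, 14, 26),   # hypothetical trial 3
]
print(round(mh_pooled_rr(studies), 2))   # pooled RR ~1.3 in favour of stockings here
```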