883 results for LO-LD PHASE SEPARATION
Abstract:
In a previous study, we reported that short-term treatment with celecoxib, a nonsteroidal anti-inflammatory drug (NSAID), attenuates the activation of brain structures related to nociception and does not interfere with orthodontic incisor separation in rats. The conclusion was that celecoxib could possibly be prescribed for pain in orthodontic patients. However, we did not analyze the effects of this drug on the periodontium. The aim of this follow-up study was to analyze the effects of celecoxib treatment on the recruitment and activation of osteoclasts and on alveolar bone resorption after insertion of an activated orthodontic appliance between the incisors in our rat model. Twenty rats (400-420 g) were pretreated by oral gavage with celecoxib (50 mg/kg) or vehicle (0.4% carboxymethylcellulose). After 30 min, they received an activated (30 g) orthodontic appliance, set not to cause any palatal disjunction. In sham animals, the appliance was removed immediately after insertion. All animals received ground food and, every 12 h, celecoxib or vehicle. After 48 h, they were anesthetized and transcardially perfused through the aorta with 4% formaldehyde. Subsequently, the maxillae were removed, post-fixed and processed for histomorphometric or immunohistochemical analyses. As expected, incisor distalization induced an inflammatory response with characteristic histological changes, including an increase in the number of active osteoclasts on the compression side in the groups treated with vehicle (appliance: 32.2 +/- 2.49 vs sham: 4.8 +/- 1.79, P<0.05) and celecoxib (appliance: 31.0 +/- 1.45 vs sham: 4.6 +/- 1.82, P<0.05). Treatment with celecoxib did not substantially modify the histological alterations or the number of active osteoclasts after activation of the orthodontic appliance. Moreover, we did not observe any difference between the groups with respect to the percentage of bone resorption area. Taken together with our previous results, we conclude that short-term treatment with celecoxib can indeed be a therapeutic alternative for pain relief during orthodontic procedures.
Abstract:
Here, we present a method for measuring barbiturates (butalbital, secobarbital, pentobarbital, and phenobarbital) in whole blood samples. To accomplish these measurements, analytes were extracted by means of hollow-fiber liquid-phase microextraction in the three-phase mode. Hollow-fiber pores were filled with decanol, and a solution of sodium hydroxide (pH 13) was introduced into the lumen of the fiber (acceptor phase). The fiber was submersed in the acidified blood sample, and the system was subjected to an ultrasonic bath. After a 5 min extraction, the acceptor phase was withdrawn from the fiber and dried under a nitrogen stream. The residue was reconstituted with ethyl acetate and trimethylanilinium hydroxide. An aliquot of 1.0 μL of this solution was injected into the gas chromatograph/mass spectrometer, with the derivatization reaction occurring in the hot injector port (flash methylation). The method proved to be simple and rapid, and only a small amount of organic solvent (decanol) was needed for extraction. The detection limit was 0.5 μg/mL for all the analyzed barbiturates. The calibration curves were linear over the specified range (1.0 to 10.0 μg/mL). This method was successfully applied to postmortem samples (heart blood and femoral blood) collected from three deceased persons previously exposed to barbiturates.
Abstract:
The oil industry uses gas separators in production wells, because the free gas present at the pump suction reduces pumping efficiency and pump lifetime. Free gas is therefore one of the most important variables in the design of pumping systems. However, in the literature there is little information on these separators. This is the case for the inverted-shroud gravitational gas separator, which has an annular geometry owing to the installation of a cylindrical container between the well casing and the production pipe (tubing). The purpose of the present study is to understand the phenomenology and behavior of the inverted-shroud separator. Experimental tests were performed in a 10.5-m-long inclinable glass tube with air and water as working fluids. The water flow rate was in the range of 8.265-26.117 l/min and the average inlet air mass flow rate was 1.1041 kg/h, with inclination angles of 15, 30, 45, 60, 75, 80 and 85 degrees. One of the findings is that the length between the inner annular level and the production pipe inlet is one of the most important design parameters, and based on this a new criterion for total gas separation is proposed. We also found that the phenomenology of the studied separator does not depend directly on the gas flow rate, but on the average velocity of the free-surface flow generated inside the separator. Maps of gas separation efficiency were plotted and showed that liquid flow rate, inclination angle and the pressure difference between the casing and the production pipe outlet are the main variables related to the gas separation phenomenon. The new data can be used for the development of design tools aimed at the optimized design of pumping systems for oil production in directional wells. (C) 2012 Elsevier Inc. All rights reserved.
Abstract:
Travelling wave ion mobility mass spectrometry (TWIM-MS) with post-TWIM and pre-TWIM collision-induced dissociation (CID) experiments was used to form, separate and characterize protomers sampled directly from solutions or generated in the gas phase via CID. When present in solution equilibria, these species were transferred to the gas phase via electrospray ionization and then separated by TWIM-MS. CID performed after TWIM separation (post-TWIM) allowed the characterization of both protomers via structurally diagnostic fragments. Protonated aniline (1) sampled from solution was found to consist of a ca. 5:1 mixture of two gaseous protomers, the N-protonated (1a) and ring-protonated (1b) molecules, respectively. When dissociated, 1a almost exclusively loses NH3, whereas 1b displays a much more diverse set of fragments. When formed via CID, varying populations of 1a and 1b were detected. Two co-existing protomers of two isomeric porphyrins were also separated and characterized via post-TWIM CID. A deprotonated porphyrin sampled from a basic methanolic solution was found to consist predominantly of the protomer arising from deprotonation at the carboxyl group, which dissociates promptly by CO2 loss, but a CID-resistant protomer arising from deprotonation at a porphyrinic ring NH was also detected and characterized. The doubly deprotonated porphyrin was found to consist predominantly of a single protomer arising from deprotonation of two carboxyl groups. Copyright (C) 2012 John Wiley & Sons, Ltd.
Abstract:
Dendritic cells (DCs) derived from human blood monocytes that have been cultured in GM-CSF and IL-4, followed by maturation in a monocyte-conditioned medium, are the most potent APCs known. These DCs have many features of primary DCs, including the expression of molecules that enhance antigen capture and selective receptors that guide DCs to and from several sites in the body, where they elicit the T cell-mediated immune response. For these reasons, immature DCs (iDCs) loaded with tumor antigen and matured (mDCs) with a standard cytokine cocktail are used for therapeutic vaccination in clinical trials of different cancers. However, the efficacy of DCs in the development of immunocompetence is critically influenced by the type (whole lysate, proteins, peptides, mRNA), the amount and the time of exposure of the tumor antigens used for loading in the presentation phase. The aim of the present study was to create instruments to acquire more information about DC antigen uptake and presentation mechanisms, in order to improve the clinical efficacy of DC-based vaccines. In particular, two different tumor antigens were studied: the monoclonal immunoglobulin (IgG or IgA) produced in Multiple Myeloma, and the whole lysate obtained from melanoma tissues. These proteins were conjugated with a fluorescent probe (FITC) to evaluate the kinetics of the tumor antigen capture process and its localization within DCs, by cytofluorimetric and fluorescence microscopy analysis, respectively. iDCs pulsed with 100 μg of IgG-FITC/10⁶ cells were monitored from 2 to 22 hours after loading. Cytofluorimetric analysis showed that the monoclonal antibody was completely captured within 2 hours from pulsing, and decreased in mDCs within 5 hours after the maturation stimulus. To monitor lysate uptake, iDCs were pulsed with 80 μg of tumor lysate/10⁶ cells and monitored in the 2 to 22 hour interval after loading. Then, to reveal differences between increasing lysate concentrations, iDCs were loaded with 20-40-80-100-200-400 μg of tumor lysate/10⁶ cells and monitored at 2-4-8-13 h from pulsing. Cytofluorimetric analysis showed that uptake of the 20-40-80-100 μg doses was complete after 8 hours of loading, reaching a plateau phase. For 200 and 400 μg, the mean fluorescence of the cells kept increasing up to 13 h from pulsing. The localization of the lysate within iDCs was evaluated by conventional and confocal fluorescence microscopy. In the 2 h to 8 h interval after loading, an intense and diffuse fluorescence was observed within the cytoplasmic compartment. Moreover, after 8 h, the lysate fluorescence appeared to be organized in a restricted, cloud-like area with a typically polarized aspect. In addition, small fluorescent spots clearly appeared, increasing in number and fluorescence intensity. The nature of these spot-like formations and of the cloud-like area is now being investigated by detecting the colocalization of the lysate fluorescence with specific markers for lysosomes, autophagosomes, the endoplasmic reticulum and MHC II-positive vesicles.
Abstract:
Nowadays, it is clear that the goal of creating a sustainable future for the next generations requires re-thinking the industrial application of chemistry. It is also evident that more sustainable chemical processes may be economically convenient compared with conventional ones, because fewer by-products mean lower costs for raw materials, separation and disposal treatments; it also implies an increase in productivity and, as a consequence, smaller reactors can be used. In addition, an indirect gain could derive from the better public image of a company marketing sustainable products or processes. In this context, oxidation reactions play a major role, being the tool for the production of huge quantities of chemical intermediates and specialties. The impact of these productions on the environment could have been much worse than it is, had continuous efforts not been spent on improving the technologies employed. Substantial technological innovations have driven the development of new catalytic systems and the improvement of reaction and process technologies, contributing to moving the chemical industry towards a more sustainable and ecological approach. The roadmap for the application of these concepts includes new synthetic strategies, alternative reactants, catalyst heterogenisation and innovative reactor configurations and process design. In order to implement all these ideas in real projects, the development of more efficient reactions is a primary target. Yield, selectivity and space-time yield are the right metrics for evaluating reaction efficiency. In the case of catalytic selective oxidation, the control of selectivity has always been the principal issue, because the formation of total oxidation products (carbon oxides) is thermodynamically more favoured than the formation of the desired, partially oxidized compound. As a matter of fact, only in a few oxidation reactions is total, or close to total, conversion achieved, and usually the selectivity is limited by the formation of by-products or co-products, which often implies unfavourable process economics; moreover, sometimes the cost of the oxidant further penalizes the process. During my PhD work, I investigated four reactions that are emblematic of the new approaches used in the chemical industry. In Part A of my thesis, a new process aimed at a more sustainable production of menadione (vitamin K3) is described. The "greener" approach includes the use of hydrogen peroxide in place of chromate (moving from a stoichiometric to a catalytic oxidation), also avoiding the production of dangerous waste. Moreover, I studied the possibility of using a heterogeneous catalytic system able to efficiently activate hydrogen peroxide. The overall process would be carried out in two steps: the first is the methylation of 1-naphthol with methanol to yield 2-methyl-1-naphthol; the second is the oxidation of the latter compound to menadione. The catalyst for this latter step, the reaction that was the object of my investigation, consists of Nb2O5-SiO2 prepared by the sol-gel technique. The catalytic tests were first carried out under conditions that simulate the in-situ generation of hydrogen peroxide, that is, using a low concentration of the oxidant. Then, experiments were carried out using higher hydrogen peroxide concentrations.
The study of the reaction mechanism was fundamental to obtaining indications about the best operating conditions and to improving the selectivity to menadione. In Part B, I explored the direct oxidation of benzene to phenol with hydrogen peroxide. The industrial process for phenol is the oxidation of cumene with oxygen, which also co-produces acetone. This can be considered a case of how economics could drive the sustainability issue; in fact, the new process, allowing phenol to be obtained directly and avoiding the co-production of acetone (a burden for phenol, because the market requirements for the two products are quite different), might be economically convenient with respect to the conventional process if a high selectivity to phenol were obtained. Titanium silicalite-1 (TS-1) was the catalyst chosen for this reaction. By comparing the reactivity results obtained with TS-1 samples having different chemical-physical properties, and by analyzing in detail the effect of the most important reaction parameters, we could formulate some hypotheses concerning the reaction network and mechanism. Part C of my thesis deals with the hydroxylation of phenol to hydroquinone and catechol. This reaction is already applied industrially but, for economic reasons, an improvement of the selectivity to the para-dihydroxylated compound and a decrease of the selectivity to the ortho isomer would be desirable. Also in this case, the catalyst used was TS-1. The aim of my research was to find a method to control the selectivity ratio between the two isomers and, finally, to make the industrial process more flexible, in order to adapt the process performance to fluctuations in market requirements. The reaction was carried out both in a batch stirred reactor and in a re-circulating fixed-bed reactor. In the first system, the effect of various reaction parameters on catalytic behaviour was investigated: type of solvent or co-solvent, and particle size. With the second reactor type, I investigated the possibility of using a continuous system, with the catalyst shaped into extrudates (instead of powder), in order to avoid the catalyst filtration step. Finally, Part D deals with the study of a new process for the valorisation of glycerol by means of its transformation into valuable chemicals. This molecule is nowadays produced in large amounts, being a co-product of biodiesel synthesis; therefore, it is considered a raw material from renewable resources (a bio-platform molecule). Initially, we tested the oxidation of glycerol in the liquid phase with hydrogen peroxide and TS-1. However, the results achieved were not satisfactory. We then investigated the gas-phase transformation of glycerol into acrylic acid, with the intermediate formation of acrolein; the latter can be obtained by dehydration of glycerol and then oxidized to acrylic acid. The oxidation step from acrolein to acrylic acid is already optimized at the industrial level; therefore, we decided to investigate in depth the first step of the process. I studied the reactivity of heterogeneous acid catalysts based on sulphated zirconia. Tests were carried out under both aerobic and anaerobic conditions, in order to investigate the effect of oxygen on the catalyst deactivation rate (one of the main problems usually met in glycerol dehydration).
Finally, I studied the reactivity of bifunctional systems made of Keggin-type polyoxometalates, either alone or supported on sulphated zirconia, thereby combining the acid functionality (necessary for the dehydration step) with the redox one (necessary for the oxidation step). In conclusion, during my PhD work I investigated reactions that apply the "green chemistry" rules and strategies; in particular, I studied new greener approaches for the synthesis of chemicals (Parts A and B), the optimisation of reaction parameters to make an oxidation process more flexible (Part C), and the use of a bio-platform molecule for the synthesis of a chemical intermediate (Part D).
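For reference, the three efficiency metrics invoked in the abstract above (yield, selectivity and space-time yield) can be written in their standard textbook form; the notation below is assumed for illustration and is not quoted from the thesis:

% Assumed notation: n_A^0 / n_A = moles of reactant A fed / left unconverted,
% n_P = moles of desired product P, \nu_P = stoichiometric coefficient of P,
% m_P = mass of P produced, V_R = reactor (or catalyst bed) volume, t = time on stream.
\begin{align*}
  X_A &= \frac{n_A^0 - n_A}{n_A^0} && \text{(conversion)}\\
  S_P &= \frac{n_P}{\nu_P\,(n_A^0 - n_A)} && \text{(selectivity)}\\
  Y_P &= X_A\, S_P = \frac{n_P}{\nu_P\, n_A^0} && \text{(yield)}\\
  \mathrm{STY} &= \frac{m_P}{V_R\, t} && \text{(space-time yield)}
\end{align*}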
From the envelope to the enclosed volume. The central-plan space in the architectural work of Adalberto Libera (Dall'involucro all'invaso. Lo spazio a pianta centrale nell'opera architettonica di Adalberto Libera)
Abstract:
An archetype selected over the centuries. Adalberto Libera wrote little, being more inclined to use the project as his only means of verification. This study surveys his projects from a purely compositional point of view, in relation to the motif that recurs more than any other with continuity and consistency throughout his work. "The fruit of a type selected over centuries", in Libera's own words, the centrally planned space is one of the most widely used and repeated spatial archetypes in the history of architecture. Defined by a few consolidated elements and marked by geometric precision and absoluteness, the central space has lent itself, over the course of the evolution of architecture and of its constructive as well as symbolic aspects, to the most varied uses: from historical periods in which it coincided with the sacred space par excellence, to others in which it served many different, more "secular" expressive possibilities. The central space was born from constructive premises, and the same constructive logic determined its structural transformations over the centuries, pushing it, with each advance in technology, towards the maximum possible span and towards different applications, which almost always coincided with monumental space. It is in the Roman world that the motif of the central space was defined through a series of achievements that fixed its character once and for all. The Pantheon is at once its highest result and its indispensable archetype, to the point that it becomes difficult to sustain any discussion of the central space that excludes it. In ancient Rome, however, the centrally planned space also served, in equally exemplary ways, monuments, public spaces and buildings with very different implications. The Renaissance, for which Wittkower established once and for all the nature and interpretation of the centrally planned sacred space, and thus the symbolic significance underlying its humanist readings, fixed the theme by drawing it, through the study and direct observation that the fifteenth- and sixteenth-century masters devoted to the ruins which, in those years of renewed interest in the classical world, the first great excavations of ancient Rome were bringing to light, to everyone's surprise. It is therefore no accident that Libera's architectural work is investigated here through the motif of the central space. Examining his projects and built works, this motif emerges with particular clarity from the earliest to the latest work, crossing the war period that for many authors marks, in different ways, the distinction between one phase and another. The theme and the occasion, always kept distinct by Libera, are precisely the key through which to investigate his work: the former, in this case the central plan, turns out to be the constant underlying all of his output, while the latter is the variable that, different each time and each time returning, reveals the same Libera, formed on the major works surviving from antiquity and on their constructive method, consciously aware that the characters of architectural works, when valid, outlast time and survive contingent use and function. As for the devices through which the theme is formalized, they are themselves purely contingent, and therefore available to be transferred from one work to another, from one project to another, even by way of loan.
Using these same two terms, theme and occasion, it becomes clear how the subject of this study is Libera's method, and how the occasion for studying it is the central space in his work. There is, however, one aspect that evolves as Libera's work on the archetype progresses, and it is the motif that underlies the whole path, precisely because this is a space built entirely on a centric logic: it is the "center" of the space that ultimately reveals the real progression and the awareness that Libera matured and transformed over the years. In the first phase, heavily laden with symbolic superstructure, even if used almost reluctantly by Libera, who was always ill-disposed to sacrifice the idea of architecture to an emblem, the center of the space is simply the figure that identifies it, the icon that represents the space itself: the cross, the flame or the statue are different representations of the same idea of a center built around an icon. In the second part of his work on the central space, as the scale of the commissions and the demands of the patrons grew, the centric spatial image expands, takes on a celebratory character and becomes, in a different way, itself the symbol. One sees, for instance, how the project for the "Civiltà Italiana" or the symbolic arch exemplify this different attitude. From the same point of view, the two projects formulated for the reuse of the Mausoleum of Augustus are the key to the passage from the first to the second phase: in the second project the Ara Pacis, by making itself the center of the composition, "breaks" the scheme of the symbolic figure at the center, because it is itself a work of architecture. In doing so, the transition takes place whereby the building itself, the central space, becomes the center of the space that it creates and determines, extending the potential and the expressiveness of the enclosure (or cover) that defines the centered volume. In this second series of projects, which finds its apex and its point of "crisis" in the Palazzo dei Congressi at E42, the symbol no longer lies in the very geometry of the space; it is the space itself, and the "action" that takes place within it, that determines it: an action that may be a movement, as in the case of the Arco simbolico and the "Civiltà Italiana", or, more frequently, a celebration, as in the great Sala dei Ricevimenti at E42, which, in the first project proposal, is represented as a large hall populated by people in formal dress at a reception. In other words, in this second phase the architecture is no longer a mere container, but represents the shape of the space, representing that which it "contains". The next step, decisive for the awareness that matures in the transition to the post-war period, radically changes the way of conceiving centric space, even though formally and compositionally Libera continues to work on the same elements, composed and related in a different way. In this last phase Libera places man at the center: human beings, who in the two previous phases were already, latently, at the center of the composition, even if relegated to the role of spectators in the first period and of supporting actors in the second, now become the heart of the space. And it is, as we shall see, the very form of being together, the form of the "assembly" in its different shades (up to the sacred one), that determines the shape of the space and the way the parts that compose it relate to one another.
The reconstruction of the birth, evolution and development of the central-plan motif in Libera's work, born from the study of the monuments of ancient Rome, intersecting fifty years of recent history, and honed by the constancy of a method practiced over a lifetime, thus becomes itself a project, employing the same mechanisms adopted by Libera: decomposition and recomposition, the search for synthesis and for unity of form are in fact the structure of this research. The road taken by Libera is above all a lesson in clarity and rationality, and this work aims to uncover at least a fragment of it.
Abstract:
Nano(bio)science and nano(bio)technology attract tremendous and growing interest from both academia and industry. They are undergoing rapid development on many fronts, such as genomics, proteomics, systems biology and medical applications. However, the lack of characterization tools for nano(bio)systems is currently considered a major limiting factor for the final establishment of nano(bio)technologies. Flow Field-Flow Fractionation (FlFFF) is a separation technique that is definitely emerging in the bioanalytical field, and the number of applications to nano(bio)analytes such as high molar-mass proteins and protein complexes, sub-cellular units, viruses and functionalized nanoparticles is constantly increasing. This can be ascribed to the intrinsic advantages of FlFFF for the separation of nano(bio)analytes. FlFFF is ideally suited to separating particles over a broad size range (1 nm-1 μm) according to their hydrodynamic radius (rh). The fractionation is carried out in an empty channel by a flow stream of a mobile phase of any composition. For these reasons, fractionation proceeds without surface interaction of the analyte with packing or gel media, and there is no stationary phase able to induce mechanical or shear stress on nanosized analytes, which are therefore kept in their native state. Characterization of nano(bio)analytes is made possible after fractionation by interfacing the FlFFF system with detection techniques for morphological, optical or mass characterization. For instance, FlFFF coupling with multi-angle light scattering (MALS) detection allows absolute molecular weight and size determination, and coupling with mass spectrometry has brought FlFFF into the field of proteomics. The potential of FlFFF couplings with multi-detection systems is discussed in the first section of this dissertation. The second and third sections are dedicated to new methods developed for the analysis and characterization of different samples of interest in the fields of diagnostics, pharmaceutics and nanomedicine. The second section focuses on biological samples such as protein complexes and protein aggregates. In particular, it focuses on FlFFF methods developed to give new insights into: a) the chemical composition and morphological features of blood serum lipoprotein classes, b) the time-dependent aggregation pattern of the amyloid protein Aβ1-42, and c) the aggregation state of antibody therapeutics in their formulation buffers. The third section is dedicated to the analysis and characterization of structured nanoparticles designed for nanomedicine applications. The results discussed indicate that FlFFF with on-line MALS and fluorescence detection (FD) may become an unparalleled methodology for the analysis and characterization of new, structured, fluorescent nanomaterials.
Abstract:
Satellite SAR (Synthetic Aperture Radar) interferometry is a valid technique for digital elevation model (DEM) generation, providing metric accuracy even without ancillary data of good quality. Depending on the situation, the interferometric phase can be interpreted both as topography and as a displacement that may have occurred between the two acquisitions. Once these two components have been separated, it is possible to produce a DEM from the first or a displacement map from the second. InSAR DEM generation in the cryosphere is not a straightforward operation, because almost every interferometric pair also contains a displacement component which, even if small, can introduce large errors in the final product when interpreted as topography during the phase-to-height conversion step. Considering a glacier, and assuming the linearity of its velocity flow, it is therefore necessary to differentiate at least two pairs in order to isolate the topographic residue only. In the case of an ice shelf, the displacement component in the interferometric phase is determined not only by the flow of the glacier but also by the different tide heights at the two acquisitions. In fact, even if the two scenes of the interferometric pair are acquired at the same time of day, only the main terms of the tide cancel in the interferogram, while the other, smaller terms do not cancel completely and therefore appear as displacement fringes. Given the availability of tide gauges (or, as an alternative, of an accurate tidal model), it is possible to calculate a tidal correction to be applied to the differential interferogram. It is important to be aware that the tidal correction is applicable only if the position of the grounding line is known, which is often a controversial matter. This thesis describes the methodology applied for generating the DEM of the Drygalski ice tongue in Northern Victoria Land, Antarctica. The displacement has been determined both interferometrically and from the coregistration offsets of the two scenes. Particular attention has been devoted to investigating the role of parameters such as timing annotations and orbit reliability. Results have been validated in a GIS environment by comparison with GPS displacement vectors (displacement map and InSAR DEM) and ICESat GLAS points (InSAR DEM).
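The double-differencing step mentioned above can be summarized with the standard differential-InSAR phase decomposition; the expressions below are the usual textbook form with assumed notation, given only to clarify the reasoning and not quoted from the thesis:

% Assumed notation: \lambda = radar wavelength, B_{\perp,i} = perpendicular baseline of pair i,
% R = slant range, \theta = incidence angle, h = topographic height,
% v = steady line-of-sight velocity, \Delta t_i = temporal baseline of pair i.
\Delta\phi_i = \frac{4\pi}{\lambda}\,\frac{B_{\perp,i}}{R\sin\theta}\,h
             + \frac{4\pi}{\lambda}\,v\,\Delta t_i
             + \phi_{\mathrm{tide},i} + \phi_{\mathrm{noise},i}, \qquad i = 1,2

\Delta\phi_1 - \Delta\phi_2 = \frac{4\pi}{\lambda}\,\frac{B_{\perp,1}-B_{\perp,2}}{R\sin\theta}\,h
             + \bigl(\phi_{\mathrm{tide},1}-\phi_{\mathrm{tide},2}\bigr) + \phi_{\mathrm{noise}}

With equal temporal baselines and steady flow, the motion term cancels in the difference, leaving the topographic residue plus the residual tidal signal that the tide-gauge correction is meant to remove.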
Abstract:
The excavation of the church of Santa Maria Maggiore has provided important new information on the history of the city of Trento, on the late antique city and on the process of Christianization. The first church building, dated to after the mid-fifth century AD, rises on an earlier bath complex built around the second century AD and shows a markedly monumental character. The church, with three naves, had a raised presbytery decorated in a first phase with an opus sectile floor, replaced in the sixth century by a polychrome mosaic. Substantial parts of the late eighth-century architectural decoration belonging to this same building were also recovered; the building underwent no major changes until the construction of the subsequent medieval place of worship, smaller and decidedly less monumental, characterized by an extensive cemetery found north of the church. This building was succeeded by a third, probably with two naves and rich painted decoration, demolished in the late Renaissance for the construction of the present church.
Abstract:
The quest for universal memory is driving the rapid development of memories with superior all-round capabilities in non-volatility, high speed, high endurance and low power. The memory subsystem accounts for a significant share of the cost and power budget of a computer system. Current DRAM-based main memory systems are starting to hit the power and cost limit. To resolve this issue, the industry is improving existing technologies such as Flash and exploring new ones. Among those new technologies is Phase Change Memory (PCM), which overcomes some of the shortcomings of Flash such as durability and scalability. This alternative non-volatile memory technology, which uses resistance contrast in phase-change materials, offers higher density relative to DRAM and can help increase the main memory capacity of future systems while remaining within the cost and power constraints. Chalcogenide materials can suitably be exploited for manufacturing phase-change memory devices. Charge transport in the amorphous chalcogenide GST used for memory devices is modeled using two contributions: hopping of trapped electrons and motion of band electrons in extended states. Crystalline GST exhibits an almost Ohmic I(V) curve. In contrast, amorphous GST shows a high resistance at low biases while, above a threshold voltage, a transition takes place from a highly resistive to a conductive state, characterized by a negative differential-resistance behavior. A clear and complete understanding of the threshold behavior of the amorphous phase is fundamental for exploiting such materials in the fabrication of innovative non-volatile memories. The type of feedback that produces the snapback phenomenon is described as a filamentation in energy that is controlled by electron-electron interactions between trapped electrons and band electrons. The model thus derived is implemented within a state-of-the-art simulator. An analytical version of the model is also derived and is useful for discussing the snapback behavior and the scaling properties of the device.
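As a pointer to the kind of two-contribution transport model described above, a widely used analytical picture of subthreshold conduction in amorphous chalcogenides is sketched below; this is a generic textbook-style expression with assumed notation, not the exact model implemented in the thesis's simulator:

% Assumed notation: N_T = trap density, \Delta z = mean inter-trap distance,
% \tau_0 = attempt-to-escape time, E_C - E_F = activation energy, F = electric field,
% n_b and \mu_b = density and mobility of band electrons in extended states.
J \simeq \underbrace{2\,q\,N_T\,\frac{\Delta z}{\tau_0}\,
         e^{-\frac{E_C - E_F}{k_B T}}\,
         \sinh\!\Bigl(\frac{q\,F\,\Delta z}{2\,k_B T}\Bigr)}_{\text{hopping of trapped electrons}}
   \;+\; \underbrace{q\,n_b\,\mu_b\,F}_{\text{band electrons}}

The sinh term gives the strongly non-Ohmic field dependence of the amorphous phase at low bias, while the nearly Ohmic band term dominates in the crystalline phase; the snapback itself requires the additional electron-electron feedback between the two populations described in the abstract.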
Abstract:
More efficient water treatment technologies would decrease the pollution of water bodies and the actual consumption of water resources. The aim of this thesis is an in-depth analysis of the magnetic separation of pollutants from water by means of a continuous-flow magnetic filter subjected to a field gradient produced by permanent magnets. This technique has the potential to improve the times and efficiencies of both urban wastewater treatment plants and drinking water treatment plants. It might also replace some industrial wastewater treatments. The technique combines a physico-chemical adsorption phase with a magnetic filtration phase, and has the potential to bind magnetite to any conventional adsorbent powder. The removal of both Magnetic Activated Carbons (MACs) and of a zeolite-magnetite mix with the addition of a coagulant was investigated. Adsorption tests of different pollutants (surfactants, endocrine disruptors, Fe(III), Mn(II), Ca(II)) on these adsorbents were also performed, achieving good results. The numerical results concerning adsorbent removal reproduced well the experimental results obtained from two different experimental setups. In real situations the treatable flow rates are up to 90 m³/h (about 2000 m³/d).
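The role of the permanent-magnet field gradient can be made explicit with the standard magnetophoretic force on a small magnetizable particle (here, a magnetite-loaded adsorbent floc); the expression below is the usual first-order form with assumed notation, not taken from the thesis:

% Assumed notation: V_p = particle volume, \Delta\chi = magnetic susceptibility contrast
% between particle and water, \mu_0 = vacuum permeability,
% \mathbf{B} = magnetic flux density produced by the permanent magnets.
\mathbf{F}_m = \frac{V_p\,\Delta\chi}{\mu_0}\,(\mathbf{B}\cdot\nabla)\,\mathbf{B}

A uniform field exerts no net force, which is why the filter relies on the gradient term (B·∇)B across the flow channel to capture the magnetite-bearing adsorbent from the water stream.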
Abstract:
The goal of this thesis is the acceleration of numerical calculations of QCD observables, both at leading order and at next-to-leading order in the coupling constant. In particular, the optimization of helicity and spin summation in the context of VEGAS Monte Carlo algorithms is investigated. In the literature, two such methods are mentioned, but without detailed analyses. Only one of these methods can be used at next-to-leading order. This work presents a total of five different methods that replace the helicity sums with a Monte Carlo integration. This integration can be combined with the existing phase-space integral, in the hope that this causes less overhead than the complete summation. For three of these methods, an extension to existing subtraction terms is developed, which is required to enable next-to-leading-order calculations. All methods are analyzed with respect to efficiency, accuracy and ease of implementation before being compared with each other. In this process, one method shows clear advantages over all the others.
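To make the idea of replacing the helicity sum with a Monte Carlo integration concrete, here is a minimal, purely illustrative Python sketch with toy stand-ins for the squared amplitude and the phase-space sampler; none of the names correspond to the thesis's actual code, and the real methods (and their NLO subtraction-term extensions) are considerably more involved.

import math
import random

N_HEL = 4  # toy number of helicity configurations (e.g. 2^n for n external legs)

def matrix_element2(x, hel):
    """Toy squared helicity amplitude |M_hel(x)|^2 at phase-space point x."""
    return (1.0 + hel) * math.sin(math.pi * x) ** 2

def sample_phase_space(rng):
    """Toy one-dimensional phase space: uniform point on [0, 1], unit weight."""
    return rng.random(), 1.0

def estimate_full_sum(n_events, seed=0):
    """Reference estimator: explicit sum over all helicities at every point."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_events):
        x, w = sample_phase_space(rng)
        acc += w * sum(matrix_element2(x, h) for h in range(N_HEL))
    return acc / n_events

def estimate_sampled_helicity(n_events, seed=0):
    """Monte Carlo estimator: draw one helicity per phase-space point and
    reweight by N_HEL, so the expectation value equals the full sum while the
    helicity sum is absorbed into the existing phase-space integration."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_events):
        x, w = sample_phase_space(rng)
        h = rng.randrange(N_HEL)
        acc += w * N_HEL * matrix_element2(x, h)
    return acc / n_events

if __name__ == "__main__":
    # Both estimators converge to the same value; the sampled version trades
    # fewer amplitude evaluations per event for a larger statistical variance.
    print(estimate_full_sum(200_000))
    print(estimate_sampled_helicity(200_000))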
Abstract:
Microfluidic devices can be used for many applications, including the formation of well-controlled emulsions. In this study, the capability to continuously create monodisperse droplets in a microfluidic device was used to form calcium-alginate capsules. Calcium-alginate capsules have many potential uses, such as immunoisolation of cells and microencapsulation of active drug ingredients or bitter agents in food or beverage products. The gelation of calcium-alginate capsules is achieved by crosslinking sodium alginate with calcium ions. Calcium ions dissociated from calcium carbonate due to diffusion of acetic acid from a sunflower oil phase into an aqueous droplet containing sodium alginate and calcium carbonate. After gelation, the capsules were separated from the continuous oil phase into an aqueous solution for use in biological applications. Typically, capsules are separated by centrifugation, which can damage both the capsules and the encapsulated material. A passive method achieves separation without exposing the encapsulated material or the capsules to large mechanical forces, thereby preventing damage. To achieve passive separation, a microfluidic device with opposing channel walls of different hydrophobicity was used to stabilize co-laminar flow of immiscible phases. The difference in hydrophobicity is accomplished by defining one length of the channel with a hydrogel. The chosen hydrogel was poly(ethylene glycol) diacrylate, which adheres to the glass surface through the use of a self-assembled monolayer of 3-(trichlorosilyl)propyl methacrylate. Due to the difference in surface energy within the channel, the aqueous stream is stabilized near the hydrogel and the oil stream is stabilized near the thiolene-based optical adhesive defining the opposing length of the channel. Passive separation with co-laminar flow has shown success in continuously separating calcium-alginate capsules from an oil phase into an aqueous phase. In addition to successful formation and separation of calcium-alginate capsules, encapsulation of latex micro-beads and viable mammalian cells has been achieved. The viability of encapsulated mammalian cells was determined using a live/dead stain. The co-laminar flow device has also been demonstrated as a means of separating liquid-liquid emulsions.
Abstract:
Conventional liquid-liquid extraction (LLE) methods require large volumes of fluids to achieve the desired mass transfer of a solute, which is unsuitable for systems dealing with a low-volume or high-value product. An alternative to these methods is to scale down the process. Millifluidic devices share many of the benefits of microfluidic systems, including low fluid volumes, increased interfacial area-to-volume ratio, and predictability. A robust millifluidic device was created from acrylic, glass, and aluminum. The channel is lined with a hydrogel cured in the bottom half of the device channel. This hydrogel stabilizes co-current laminar flow of immiscible organic and aqueous phases. Mass transfer of the solute occurs across the interface of these contacting phases. Using a y-junction, an aqueous emulsion is created in an organic phase. The emulsion travels through a length of tubing and then enters the co-current laminar flow device, where the emulsion is broken and each phase can be collected separately. The inclusion of this emulsion formation and separation increases the contact area between the organic and aqueous phases, therefore increasing the area over which mass transfer can occur. Using this design, 95% extraction efficiency was obtained, where 100% is represented by equilibrium. By continuing to explore this LLE process, it can be optimized and, with better understanding, more accurately modeled. This system has the potential to scale up to the industrial level and provide the efficient extraction required with low fluid volumes and a well-behaved system.
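For clarity on how the 95% figure is normalized, a common way of expressing an equilibrium-referenced (Murphree-type) extraction efficiency is given below; the notation is assumed for illustration and is not quoted from the abstract above:

% Assumed notation: C_in and C_out = solute concentration in the feed phase entering
% and leaving the device, C_eq = the concentration that phase would reach at
% thermodynamic equilibrium with the contacting phase.
E = \frac{C_{\mathrm{in}} - C_{\mathrm{out}}}{C_{\mathrm{in}} - C_{\mathrm{eq}}} \times 100\%

With this definition, E = 100% corresponds to the two phases leaving the device at equilibrium, consistent with the statement above.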