11 results for uniformity of deposition
in Digital Commons - Michigan Tech
Abstract:
Amperometric electrodeposition has been used to obtain uniform, conductive, and repeatable polyaniline (PANi) thin films for use in nanoscale biochemical sensors. This report describes the characterization of these films. Ellipsometry was used to test the repeatability of the deposition and the uniformity of the deposited thin films. Raman spectroscopy was used to confirm the composition of the deposited PANi thin films. Fluorescence microscopy was used to determine the immobilization of antibodies on the PANi thin films via biotin-avidin interactions, as well as the density of active binding sites. Ellipsometry results demonstrated that biomolecules could be immobilized on PANi films as thin as 9 nm. Raman spectroscopy provided evidence of the conductive nature of the PANi films. Fluorescence microscopy demonstrated that antibodies could be immobilized on PANi films, although it also revealed a low density of binding sites. The characterization demonstrates the utility of the PANi thin films as a conductive interface between the inorganic sensor platform and biochemical molecules.
Abstract:
An extrusion die is used to continuously produce parts with a constant cross section, such as sheets, pipes, tire components, and more complex shapes such as window seals. When polymers are used, the die is fed by a screw extruder, which melts, mixes, and pressurizes the material through the rotation of either a single or a double screw. The polymer can then be continuously forced through the die, producing a long part in the shape of the die outlet, which is then cut to the desired length. Generally, the primary target of a well-designed die is a uniform outlet velocity without excessively raising the pressure required to extrude the polymer through the die. Other properties such as temperature uniformity and residence time are also important but are not directly considered in this work. Designing dies for optimal outlet velocity variation using simple analytical equations is feasible for basic die geometries or simple channels. Due to the complexity of die geometry and of polymer material properties, design of complex dies by analytical methods is difficult; iterative methods must be used instead, and an automated iterative method is therefore desired for die optimization. To automate the design and optimization of an extrusion die, two issues must be addressed. The first is how to generate a new mesh for each iteration. In this work, this is approached by modifying a Parasolid file that describes a CAD part, which is then used in commercial meshing software; skewing the initial mesh to produce a new geometry was also employed as a second option. The second issue is an optimization problem in the presence of noise stemming from variations in the mesh and cumulative truncation errors. In this work, a simplex method and a modified trust region method were employed for automated optimization of die geometries. For the trust region, a discrete derivative and a BFGS Hessian approximation were used. To deal with the noise in the function, the trust region method was modified to automatically adjust the discrete-derivative step size and the trust region based on changes in noise and in the function contour. Generally, uniformity of the velocity at the exit of the extrusion die can be improved by increasing resistance across the die, but this is limited by the pressure capabilities of the extruder. In the optimization, a penalty factor that increases exponentially from the pressure limit is applied. This penalty can be applied in two different ways: the first only to designs that exceed the pressure limit, the second to designs both above and below the pressure limit. Both of these methods were tested and compared in this work.
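As a rough illustration (not code from the thesis) of how an exponential pressure penalty can be attached to a velocity-uniformity objective and driven by a derivative-free simplex search, the following sketch uses toy surrogate functions for the outlet-velocity variation and the die pressure; the surrogates, the pressure limit, and the exact penalty form are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

P_LIMIT = 30.0e6  # assumed extruder pressure limit in Pa (illustrative)

# Toy surrogates standing in for the CFD evaluation of a die design vector x;
# in the actual work these values would come from remeshing the Parasolid
# geometry and running the flow solver.
def outlet_velocity_cv(x):
    return 0.05 + 0.5 * np.sum((x - 1.0) ** 2)   # coefficient of variation of outlet velocity

def die_pressure(x):
    return 25.0e6 + 4.0e6 * np.sum(np.abs(x))    # pressure drop across the die, Pa

def penalized_objective(x, one_sided=True):
    """Velocity non-uniformity plus an exponential penalty keyed to the pressure limit."""
    excess = (die_pressure(x) - P_LIMIT) / P_LIMIT
    if one_sided:
        penalty = np.expm1(10.0 * max(excess, 0.0))  # penalize only designs above the limit
    else:
        penalty = np.exp(10.0 * excess)              # penalty grows on both sides of the limit
    return outlet_velocity_cv(x) + penalty

# Derivative-free simplex search, tolerant of mesh-induced noise in the objective.
x0 = np.array([1.2, 0.8, 1.1])
result = minimize(penalized_objective, x0, method="Nelder-Mead")
print(result.x, result.fun)
```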
Abstract:
Eutrophication is a persistent problem in many freshwater lakes. A delay in lake recovery following reductions in external loading of phosphorus, the limiting nutrient in freshwater ecosystems, is often observed. Models have been created to assist with lake remediation efforts; however, the application of management tools to sediment diagenesis is often neglected due to conceptual and mathematical complexity. SED2K (Chapra et al. 2012) is proposed as a "middle way", offering engineering rigor while remaining accessible to users. An objective of this research is to further support the development and application of SED2K for sediment phosphorus diagenesis and release to the water column of Onondaga Lake. SED2K has previously been applied to eutrophic Lake Alice in Minnesota. The more homogeneous sediment characteristics of Lake Alice, compared with the industrially polluted sediment layers of Onondaga Lake, allowed an invariant rate coefficient to be applied to describe first-order decay kinetics of phosphorus. When a similar approach was attempted on Onondaga Lake, an invariant rate coefficient failed to simulate the sediment phosphorus profile. Therefore, labile P was accounted for by progressive preservation after burial, and a rate coefficient that gradually decreased with depth was applied. In this study, profile sediment samples were chemically extracted into five operationally defined fractions: CaCO3-P, Fe/Al-P, Biogenic-P, Ca Mineral-P, and Residual-P. Chemical fractionation data from this study showed that preservation is not the only mechanism by which phosphorus may be maintained in a non-reactive state in the profile; sorption has been shown to contribute substantially to P burial within the profile. A new kinetic approach involving partitioning of P into process-based fractions is applied here. Results from this approach indicate that labile P (Ca Mineral and Organic P) is contributing to internal P loading to Onondaga Lake, through diagenesis and diffusion to the water column, while the sorbed P fraction (Fe/Al-P and CaCO3-P) remains constant. Sediment profile concentrations of labile and total phosphorus at the time of deposition were also modeled and compared with current labile and total phosphorus, to quantify the extent to which the remaining phosphorus will continue to contribute to internal P loading and influence the trophic status of Onondaga Lake. The results presented here also allowed estimation of the depth of the active sediment layer and the attendant response time, as well as the sediment burden of labile P and the associated efflux.
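The contrast between an invariant rate coefficient and one that decreases with depth can be sketched as follows. This is a minimal illustration with assumed parameter values and an assumed exponential form for the depth dependence, not the SED2K formulation itself.

```python
import numpy as np

# First-order decay of labile P along the burial trajectory z = w*t, comparing
# an invariant rate coefficient with one that decreases with depth
# ("progressive preservation after burial"). All values are illustrative.
w = 0.5          # burial velocity, cm/yr
k0 = 0.05        # surface first-order decay rate, 1/yr
beta = 0.1       # assumed attenuation of k with depth, 1/cm
P0 = 1.0         # labile P at time of deposition (normalized)

z = np.linspace(0.0, 30.0, 61)   # depth below the sediment-water interface, cm
age = z / w                      # time since deposition, yr

# Invariant rate coefficient: simple exponential loss with age.
P_const_k = P0 * np.exp(-k0 * age)

# Depth-dependent coefficient k(z) = k0*exp(-beta*z); integrating dP/dt = -k(z(t))*P
# along z = w*t gives a closed form.
P_var_k = P0 * np.exp(-(k0 / (beta * w)) * (1.0 - np.exp(-beta * z)))

for zi, a, b in zip(z[::10], P_const_k[::10], P_var_k[::10]):
    print(f"z = {zi:4.1f} cm   constant k: {a:.3f}   depth-decreasing k: {b:.3f}")
```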
Abstract:
Traditionally, densities of newly built roadways are checked by direct sampling (cores) or by nuclear density gauge measurements. For roadway engineers, the density of asphalt pavement surfaces is essential to determining pavement quality. Unfortunately, field measurements of density by direct sampling or by nuclear measurement are slow processes. Therefore, I have explored the use of rapidly deployed ground penetrating radar (GPR) as an alternative means of determining pavement quality. The dielectric constant of the pavement surface may be a parameter that correlates with pavement density and can be used as a proxy when the density of the asphalt is not known from nuclear or destructive methods. The dielectric constant of the asphalt can be determined using GPR. In order to use GPR for evaluation of road surface quality, the relationship between the dielectric constants of asphalts and their densities must be established. Field GPR measurements were taken at four highway sites in Houghton and Keweenaw Counties, Michigan, where density values were also obtained using nuclear methods in the field. Laboratory studies involved asphalt samples taken from the field sites and samples created in the laboratory; these were tested in various ways, including density, thickness, and time domain reflectometry (TDR). In the field, GPR data were acquired using a 1000 MHz air-launched unit and a ground-coupled unit at 200 and 500 MHz. The equipment was owned and operated by the Michigan Department of Transportation (MDOT) and was available for this study for a total of four days during summer 2005 and spring 2006. The analysis of the reflected waveforms included “routine” processing for velocity using commercial software and direct evaluation of reflection coefficients to determine a dielectric constant. The dielectric constants computed from velocities do not agree well with those obtained from reflection coefficients. Perhaps due to the limited range of asphalt types studied, no correlation between density and dielectric constant was evident. Laboratory measurements were taken with samples removed from the field and samples created for this study. Samples from the field were studied using TDR in order to obtain the dielectric constant directly, and these values correlated well with the estimates made from reflection coefficients. Samples created in the laboratory were measured using 1000 MHz air-launched GPR and 400 MHz ground-coupled GPR, each under both wet and dry conditions. On the basis of these observations, I conclude that the dielectric constant of asphalt can be reliably measured by waveform amplitude analysis of GPR data, based on the consistent agreement with values obtained in the laboratory using TDR. Because of the uniformity of the asphalts studied here, any correlation between dielectric constant and density is not yet apparent.
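The two routes to the dielectric constant mentioned above follow standard GPR relations: from the layer velocity, ε = (c/v)², and from the surface reflection coefficient of an air-launched antenna, ε = ((1 + R)/(1 − R))², where R is the ratio of the surface-reflection amplitude to a metal-plate (perfect reflector) calibration amplitude. The sketch below applies them to illustrative numbers, not data from the study.

```python
C = 0.2998  # speed of light, m/ns

def eps_from_velocity(thickness_m, two_way_time_ns):
    """Dielectric constant from the layer velocity: eps = (c / v)^2."""
    v = 2.0 * thickness_m / two_way_time_ns   # two-way travel through the lift
    return (C / v) ** 2

def eps_from_reflection(surface_amp, metal_plate_amp):
    """Dielectric constant from the air-launched surface reflection:
    R = A_surface / A_metal_plate, eps = ((1 + R) / (1 - R))^2."""
    r = surface_amp / metal_plate_amp
    return ((1.0 + r) / (1.0 - r)) ** 2

print(eps_from_velocity(0.05, 0.75))     # 5 cm lift, 0.75 ns two-way time
print(eps_from_reflection(0.35, 1.00))   # amplitudes in arbitrary units
```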
Abstract:
The particulate matter (PM) distribution trends that exist in catalyzed particulate filters (CPFs) after loading, passive oxidation, active regeneration, and post-loading conditions are not clearly understood. These data are required to optimize the operation of CPFs, prevent damage to CPFs caused by non-uniform distributions, and develop accurate CPF models. To develop an understanding of PM distribution trends, multiple tests were conducted and the PM distribution was measured in three dimensions using a terahertz wave scanner. The results of this work indicate that loading, passive oxidation, active regeneration, and post loading can all cause non-uniform PM distributions. The density of the PM in the substrate after loading and the amount of PM oxidized during passive oxidations and active regenerations affect the uniformity of the distribution. Post loading that occurs after active regenerations results in distributions that are less uniform than post loading that occurs after passive oxidations.
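The abstract does not state how uniformity of the 3-D PM distribution is quantified; one simple hedged choice, shown below purely as an illustration, is the coefficient of variation of the local PM density over the scanned voxels, where a lower value indicates a more uniform loading.

```python
import numpy as np

def pm_uniformity_cv(pm_density):
    """Coefficient of variation of a 3-D array of local PM loading (e.g., g/L)
    such as might be reconstructed from a terahertz scan of the CPF substrate."""
    pm = np.asarray(pm_density, dtype=float)
    return pm.std() / pm.mean()

# Synthetic comparison: a nearly uniform loading versus one with an axial
# gradient such as might remain after a partial regeneration.
rng = np.random.default_rng(0)
uniform = 3.0 + 0.1 * rng.standard_normal((20, 20, 40))
graded = uniform * np.linspace(0.5, 1.5, 40)   # gradient along the axial direction
print(pm_uniformity_cv(uniform), pm_uniformity_cv(graded))
```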
Abstract:
The time course of lake recovery after a reduction in external nutrient loading is often controlled by conditions in the sediment. Remediation of eutrophication is hindered by the presence of legacy organic carbon deposits that exert a demand on the terminal electron acceptors of the lake and contribute to problems such as internal nutrient recycling, absence of sediment macrofauna, and flux of toxic metal species into the water column. Quantifying the timing of a lake’s response requires determination of the magnitude and lability, i.e., the susceptibility to biodegradation, of the organic carbon within the legacy deposit. This characterization is problematic for organic carbon in sediments because of the presence of different fractions of carbon, which vary from highly labile to refractory. The lability of carbon under varied conditions was tested with a bioassay approach. It was found that the majority of the organic material in the sediments is conditionally labile, with mineralization potential dependent on prevailing conditions. High labilities were noted under oxygenated conditions and a favorable temperature of 30 °C. Lability decreased when oxygen was removed, and was further reduced when the temperature was dropped to the hypolimnetic average of 8 °C. These results indicate that reversible preservation mechanisms exist in the sediment and are able to protect otherwise labile material from being mineralized under in situ conditions. The concept of an active sediment layer, a region in the sediments in which diagenetic reactions occur (with none occurring below it), was examined through three lines of evidence. First, porewater profiles of oxygen, nitrate, sulfate/total sulfide, ETSA (electron transport system activity: the activity of oxygen, nitrate, iron/manganese, and sulfate), and methane were considered; examination of these profiles showed that the edge of diagenesis occurred at around 15-20 cm. Second, historical and contemporary TOC profiles were compared to find the point at which the profiles coincide, indicating the depth at which no change has occurred over the 13-year interval between core collections. This analysis suggested that no diagenesis has occurred in Onondaga Lake sediment below a depth of 15 cm. Finally, the time to 99% mineralization, t99, was estimated using a literature value of the kinetic rate constant for diagenesis. A t99 of 34 years, corresponding to approximately 30 cm of sediment depth, resulted for the slowly decaying carbon fraction. Based on these three lines of evidence, an active sediment layer of 15-20 cm is proposed for Onondaga Lake, corresponding to a time since deposition of 15-20 years. While a large legacy deposit of conditionally labile organic material remains in the sediments of Onondaga Lake, it is clear that preservation, i.e., mechanisms that act to shield labile organic carbon from degradation, protects this material from being mineralized and exerting a demand on the terminal electron acceptors of the lake. This has major implications for management of the lake, as it defines the time course of lake recovery following a reduction in nutrient loading.
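The t99 line of evidence follows directly from first-order kinetics, t99 = ln(100)/k. The short sketch below uses illustrative values of the rate constant and burial velocity, chosen only to roughly reproduce the ~34 yr and ~30 cm figures quoted above, not necessarily the literature values used in the study.

```python
import math

k_slow = 0.135         # 1/yr, assumed first-order rate for the slowly decaying C fraction
burial_velocity = 0.9  # cm/yr, assumed net sediment accumulation

t99 = math.log(100.0) / k_slow      # years to 99% mineralization
depth_t99 = burial_velocity * t99   # cm of sediment deposited over t99

print(f"t99 = {t99:.0f} yr, corresponding to roughly {depth_t99:.0f} cm of sediment")
# ~34 yr and ~31 cm, consistent with the values cited in the abstract
```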
Abstract:
Silicon has long been considered one of the most promising anode materials for lithium-ion batteries. However, the poor cycle life caused by stress during charge/discharge cycling has been a major concern for its practical application. In this report, novel Si-metal nanocomposites have been explored to accommodate the stress generated in the intercalation process. Several approaches have been studied with the aim of achieving uniform mixing, good mechanical stability, and high Si content. Among the three approaches investigated, a Si-Galinstan nanocomposite based on electrophoretic deposition showed the best promise, supporting a theoretical Si weight percentage of at least 32.3%; in current experiments we have already achieved 13% silicon by weight, which gives an anode material with 46% more capacity than the current commercial product.
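One hedged way to rationalize the reported ~46% gain, shown below, is to assume the silicon delivers its textbook theoretical capacity, the Galinstan matrix contributes negligibly, and the commercial baseline is a graphite anode; these specific capacities are standard literature values, not numbers taken from the report.

```python
CAP_SI = 4200.0        # mAh/g, theoretical capacity of Si (Li22Si5), textbook value
CAP_GRAPHITE = 372.0   # mAh/g, theoretical capacity of graphite, textbook value

def composite_capacity(si_weight_fraction, matrix_capacity=0.0):
    """Rule-of-mixtures estimate; the matrix is assumed electrochemically inactive."""
    return si_weight_fraction * CAP_SI + (1.0 - si_weight_fraction) * matrix_capacity

cap_13pct = composite_capacity(0.13)
print(cap_13pct, cap_13pct / CAP_GRAPHITE - 1.0)   # ~546 mAh/g, ~47% above graphite

cap_32pct = composite_capacity(0.323)
print(cap_32pct, cap_32pct / CAP_GRAPHITE - 1.0)   # the 32.3% Si ceiling
```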
Abstract:
During the past decades, nanoparticles have attracted tremendous research interest due to their promising catalytic, magnetic, and optical properties. In this thesis, two novel methods of nanoparticle fabrication are introduced and their basic formation mechanisms are studied. Metal nanoparticles and polyurethane nanoparticles were fabricated separately by a short-distance sputter deposition technique and a reactive ion etching (RIE) process. First, a sputter deposition method with a very short target-substrate distance is found to generate metal nanoparticles on a glass substrate inside an RIE chamber. The distribution and morphology of the nanoparticles are affected by the distance, the ion concentration, and the process time. Densely distributed nanoparticles of various compositions are deposited on the substrate surface when the target-substrate distance is smaller than 130 mm, much less than the atoms’ mean free path, which was the threshold for nanoparticle formation in previous research. Island structures are formed when the distance is increased to 510 mm, indicating a tendency to form a continuous thin film. This trend differs from previously reported sputtering methods for nanoparticle fabrication, in which a longer distance between the target and the substrate facilitates the formation of nanoparticles. A mechanism based on the seeding effect of the substrate is proposed to interpret the experimental results. Second, for the fabrication of polyurethane nanoparticles, a mechanism is put forward based on the microphase separation phenomenon in block copolymer thin films. The synthesized polymers form dispersed and continuous phases because of the different properties of their segments. Being mechanically harder, the dispersed phase remains after the RIE process while the continuous phase is etched away, leading to the formation of nanoparticles on the substrate. The nanoparticle distribution is found to be affected by the heating effect, the process time, and the plasma power. Superhydrophilic behavior is found on samples with both types of nanoparticles. The relationship between the nanostructure and the hydrophilicity is studied for further potential applications.
Abstract:
The single-electron transistor (SET) is one of the best candidates for future nanoelectronic circuits because of its ultralow power consumption, small size, and unique functionality. SET devices operate on the principle of Coulomb blockade, which is more prominent at dimensions of a few nanometers. Typically, the SET device consists of two capacitively coupled ultra-small tunnel junctions with a nano-island between them. In order to observe Coulomb blockade effects in a SET device, the charging energy of the device has to be greater than the thermal energy. This condition limits the operation of most existing SET devices to cryogenic temperatures. Room-temperature operation of SET devices requires sub-10 nm nano-islands because of the inverse dependence of the charging energy on the radius of the conducting nano-island. Fabrication of sub-10 nm structures using lithography processes is still a technological challenge. In the present investigation, focused ion beam (FIB) based etch and deposition technology is used to fabricate single-electron transistor devices operating at room temperature. The SET device incorporates an array of tungsten nano-islands with an average diameter of 8 nm. The fabricated devices were characterized at room temperature, and clear Coulomb blockade and Coulomb oscillations were observed. An improvement in the resolution limit of the FIB etching process is demonstrated by optimizing the thickness of the active layer. SET devices with structural and topological variations are developed to explore their impact on device behavior. The threshold voltage of the device was minimized to ~500 mV by reducing the source-drain gap of the device to 17 nm. Vertical source and drain terminals are fabricated to realize a single-dot based SET device. A unique process flow is developed to fabricate Si-dot based SET devices for better gate controllability in the device characteristics. The device parameters of the fabricated devices are extracted using a conductance model. Finally, the characteristics of these devices are validated against simulated data from theoretical modeling.
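The inverse dependence of the charging energy on island size can be illustrated with a rough estimate, not taken from the thesis: treating the island as an isolated sphere in vacuum (ignoring the junction and gate capacitances of the real device), Ec = e²/(2C) with C = 4πε₀r, and Ec must well exceed kT at 300 K for room-temperature operation.

```python
import math

E = 1.602e-19        # elementary charge, C
EPS0 = 8.854e-12     # vacuum permittivity, F/m
KB = 1.381e-23       # Boltzmann constant, J/K

def charging_energy_eV(diameter_nm):
    """Ec = e^2 / (2C) for an isolated sphere of self-capacitance C = 4*pi*eps0*r."""
    r = 0.5 * diameter_nm * 1e-9
    c_self = 4.0 * math.pi * EPS0 * r
    return E / (2.0 * c_self)        # e/(2C) in volts equals Ec in eV

kT_300K_eV = KB * 300.0 / E
for d in (8.0, 20.0, 50.0):
    ec = charging_energy_eV(d)
    print(f"{d:5.1f} nm island: Ec = {ec * 1e3:6.1f} meV, Ec/kT = {ec / kT_300K_eV:5.1f}")
# An 8 nm island gives Ec of roughly 180 meV, several times kT at room temperature,
# while larger islands quickly fall toward the thermal energy.
```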
Abstract:
This technical report discusses the application of the Lattice Boltzmann Method (LBM) and Cellular Automata (CA) to the simulation of fluid flow and particle deposition. The current work focuses on the simulation of incompressible flow past cylinders, in which the D2Q9 LBM and CA techniques are used to simulate the fluid flow and the particle loading, respectively. For the LBM part, the theory of the boundary conditions is studied and verified using a Poiseuille flow test. For the CA part, several models for simulating the particles are explained, and a new Digital Differential Analyzer (DDA) algorithm is introduced to simulate particle motion in the Boolean model. The numerical results are compared with a previous probability velocity model by Masselot [Masselot 2000] and show satisfactory agreement.
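The report does not spell out the DDA variant, so the following is a generic sketch under that caveat: a 2-D digital differential analyzer steps a particle from its current position along its velocity vector and returns, in order, the lattice cells it enters, so that a Boolean CA rule can test each cell for deposition onto the cylinder or onto previously deposited particles.

```python
import math

def dda_cells(x0, y0, vx, vy, dt, n_steps=None):
    """Return the lattice cells entered by a particle moving from (x0, y0)
    with velocity (vx, vy) over one time step dt, sampled DDA-style with at
    most one unit of travel along the faster axis per sub-step."""
    steps = n_steps or int(math.ceil(max(abs(vx), abs(vy)) * dt)) or 1
    dx, dy = vx * dt / steps, vy * dt / steps
    cells, x, y = [], x0, y0
    for _ in range(steps):
        x, y = x + dx, y + dy
        cell = (int(math.floor(x)), int(math.floor(y)))
        if not cells or cells[-1] != cell:
            cells.append(cell)   # record each newly entered lattice cell
    return cells

# A particle at (2.3, 4.7) advected by the local LBM fluid velocity for one
# time step; each returned cell would be checked against the solid/deposit map.
print(dda_cells(2.3, 4.7, vx=3.0, vy=1.5, dt=1.0))
```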
Abstract:
Because it is highly toxic and accumulates in organisms, particularly in fish, mercury is a very important pollutant and one of the most studied. Concern over the toxicity and human health risks of mercury has prompted efforts to regulate anthropogenic emissions. As the mercury pollution problem becomes increasingly serious, two questions arise: how serious will the problem be in the future, and how will future climate change affect mercury concentrations in the atmosphere? We therefore investigate the impact of climate change on atmospheric mercury concentrations, focusing on a comparison between mercury data for the year 2000 and for the year 2050. The GEOS-Chem model shows that the concentrations of all three mercury tracers, elemental mercury (Hg(0)), divalent mercury (Hg(II)), and primary particulate mercury (Hg(P)), differ between 2000 and 2050 in most regions of the world. From the model results, climate change from 2000 to 2050 would decrease Hg(0) surface concentrations in most of the world; the driving factors of these changes are natural emissions (ocean and vegetation) and the transformation reactions between Hg(0) and Hg(II). Climate change from 2000 to 2050 would increase Hg(II) surface concentrations in most mid-latitude continental parts of the world while decreasing them in most high-latitude parts of the world; the driving factors are the change in deposition (mainly wet deposition) from 2000 to 2050 and the transformation reactions between Hg(0) and Hg(II). Climate change would increase Hg(P) concentrations in most mid-latitude areas of the world and decrease them in most high-latitude regions; for the Hg(P) changes, the major driving factor is the change in deposition (mainly wet deposition) from 2000 to 2050.