Abstract:
Optical microscopy is an essential tool in biological science and one of the gold standards for medical examinations. Miniaturization of microscopes can be a crucial stepping stone towards realizing compact, cost-effective and portable platforms for biomedical research and healthcare. This thesis reports on implementations of bright-field and fluorescence chip-scale microscopes for a variety of biological imaging applications. The term “chip-scale microscopy” refers to lensless imaging techniques realized in the form of mass-producible semiconductor devices, which transforms the fundamental design of optical microscopes.
Our strategy for chip-scale microscopy involves the use of low-cost complementary metal-oxide-semiconductor (CMOS) image sensors, computational image processing, and micro-fabricated structural components. First, the sub-pixel resolving optofluidic microscope (SROFM) is presented, which combines microfluidics and pixel super-resolution image reconstruction to perform high-throughput imaging of fluidic samples, such as blood cells. We discuss the design parameters and construction of the device, as well as the resulting images and the resolution of the device, which reached 0.66 µm at its highest acuity. The potential application of SROFM to the clinical diagnosis of malaria in resource-limited settings is also discussed.
Next, implementations of ePetri, a self-imaging Petri dish platform with microscopy resolution, are presented. Here, we simply place the sample of interest on the surface of the image sensor and capture its direct shadow images under illumination. By taking advantage of the inherent motion of the microorganisms, we achieve high-resolution (~1 µm) imaging and long-term culture of motile microorganisms over an ultra-large field of view (5.7 mm × 4.4 mm) in a specialized ePetri platform. We apply pixel super-resolution reconstruction to a set of low-resolution shadow images of the microorganisms as they move across the sensing area of the image sensor chip and render an improved-resolution image. We perform a longitudinal study of Euglena gracilis cultured in an ePetri platform and image-based analysis of the motion and morphology of the cells. An ePetri device for imaging non-motile cells is also demonstrated, using the sweeping illumination of a light-emitting diode (LED) matrix for pixel super-resolution reconstruction of sub-pixel-shifted shadow images. Using this prototype device, we demonstrate the detection of waterborne parasites for the effective diagnosis of enteric parasite infection in resource-limited settings.
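The pixel super-resolution step used in both the SROFM and ePetri platforms can be illustrated with a minimal shift-and-add reconstruction: each low-resolution shadow frame is placed onto a finer grid at its known sub-pixel shift, and overlapping samples are averaged. The Python sketch below is only an illustration of this principle under simplifying assumptions (known shifts, nearest-neighbor placement); it is not the reconstruction code used in the thesis, and all names are hypothetical.

```python
import numpy as np

def shift_and_add(frames, shifts, factor=4):
    """Naive pixel super-resolution: place each low-resolution frame onto a
    finer grid at its (known) sub-pixel shift and average overlapping samples.

    frames : list of 2D arrays (low-resolution shadow images)
    shifts : list of (dy, dx) sub-pixel shifts, in low-resolution pixel units
    factor : upsampling factor of the high-resolution grid
    """
    h, w = frames[0].shape
    hr_sum = np.zeros((h * factor, w * factor))
    hr_cnt = np.zeros_like(hr_sum)
    for frame, (dy, dx) in zip(frames, shifts):
        # Nearest high-resolution bin for each low-resolution sample.
        rows = (np.arange(h)[:, None] * factor + int(round(dy * factor))) % (h * factor)
        cols = (np.arange(w)[None, :] * factor + int(round(dx * factor))) % (w * factor)
        hr_sum[rows, cols] += frame
        hr_cnt[rows, cols] += 1
    # Average only where at least one low-resolution sample landed.
    return np.divide(hr_sum, hr_cnt, out=np.zeros_like(hr_sum), where=hr_cnt > 0)
```

In practice, the sub-pixel shifts come from the sample's own motion across the sensor (flow in SROFM, microorganism motility or swept LED illumination in ePetri), and the full reconstruction involves more than this nearest-neighbor placement.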
Then, we demonstrate the adaptation of a smartphone's camera to function as a compact lensless microscope, which uses ambient illumination as its light source and therefore does not require a dedicated light source. The method is likewise based on image reconstruction with the sweeping-illumination technique, where a sequence of images is captured while the user manually tilts the device around any ambient light source, such as the sun or a lamp. Image acquisition and reconstruction are performed on the device using a custom-built Android application, making it a stand-alone imaging device for field applications. We discuss the construction of the device from a commercial smartphone and demonstrate the imaging capabilities of our system.
Finally, we report on the implementation of a fluorescence chip-scale microscope based on a silo-filter structure fabricated on the pixel array of a CMOS image sensor. The extruded pixel design, with metal walls between neighboring pixels, guides fluorescence emission through the thick absorptive filter to the photodiode layer of each pixel. Our silo-filter CMOS image sensor prototype achieves 13-µm resolution for fluorescence imaging over a wide field of view (4.8 mm × 4.4 mm). Here, we demonstrate bright-field and fluorescence longitudinal imaging of living cells in a compact, low-cost configuration.
Abstract:
This thesis consists of three parts. Chapter 2 deals with the dynamic buckling behavior of steel braces under cyclic axial end displacement. Braces under such a loading condition belong to a class of "acceleration-magnifying" structural components, in which a small motion at the loading points can cause large internal acceleration and inertia. This member-level inertia is frequently ignored in current studies of braces and braced structures. This chapter shows that, under certain conditions, including the member-level inertia can lead to brace behavior fundamentally different from that predicted by the quasi-static method. This result has significance for the correct use of the quasi-static, pseudo-dynamic, and static condensation methods in the simulation of braces or braced structures under dynamic loading. The strain magnitude and distribution in the braces are also studied in this chapter.
Chapter 3 examines the effect of column uplift on the earthquake response of braced steel frames and explores the feasibility of flexible column-base anchoring. It is found that fully anchored braced-bay columns can induce extremely large internal forces in the braced-bay members and their connections, increasing the risk of failures of the kind observed in recent earthquakes. Flexible braced-bay column anchoring can significantly reduce braced-bay member forces, but it also introduces large story drift and column uplift. The pounding of an uplifting column with its support can result in very high compressive axial force.
Chapter 4 conducts a comparative study on the effectiveness of a proposed non-buckling bracing system and several conventional bracing systems. The non-buckling bracing system eliminates buckling and thus can be composed of small individual braces distributed widely in a structure to reduce bracing force concentration and increase redundancy. The elimination of buckling results in a significantly more effective bracing system compared with the conventional systems. Among the conventional systems, the bracing configuration and the end conditions of the bracing members affect their effectiveness.
The studies in Chapter 3 and Chapter 4 also indicate that code-designed conventionally braced steel frames can experience unacceptably severe response under the strong ground motions recorded during the recent Northridge and Kobe earthquakes.
Abstract:
This thesis presents a civil engineering approach to active control for civil structures. The proposed control technique, termed Active Interaction Control (AIC), utilizes dynamic interactions between different structures, or components of the same structure, to reduce the resonance response of the controlled or primary structure under earthquake excitations. The primary control objective of AIC is to minimize the maximum story drift of the primary structure. This is accomplished by timing the controlled interactions so as to withdraw the maximum possible vibrational energy from the primary structure to an auxiliary structure, where the energy is stored and eventually dissipated as the external excitation decreases. One of the important advantages of AIC over most conventional active control approaches is the very low external power required.
In this thesis, the AIC concept is introduced and a new AIC algorithm, termed the Optimal Connection Strategy (OCS) algorithm, is proposed. The efficiency of the OCS algorithm is demonstrated and compared with two existing AIC algorithms, Active Interface Damping (AID) and Active Variable Stiffness (AVS), through idealized examples and numerical simulations of single- and multi-degree-of-freedom systems under earthquake excitations. It is found that the OCS algorithm is capable of significantly reducing the story drift response of the primary structure. The effects of the mass, damping, and stiffness of the auxiliary structure on system performance are investigated in parametric studies. Practical issues such as the sampling interval and time delay are also examined. A simple but effective predictive time-delay compensation scheme is developed.
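The central idea of timing the controlled interaction so that it withdraws energy from the primary structure can be caricatured with a simple on/off switching rule for a two-mass system: engage the connection only while the link force opposes the primary structure's velocity. The sketch below is only an illustrative caricature of that switching idea, not the OCS algorithm itself; all masses, stiffnesses, the switching rule, and the ground motion are assumptions chosen for demonstration.

```python
import numpy as np

def simulate_aic(m1=1e5, k1=4e6, c1=2e4,     # primary structure: mass (kg), stiffness (N/m), damping (N*s/m)
                 m2=1e4, k2=4e5, c2=2e3,     # auxiliary structure
                 k_link=2e6,                 # stiffness of the controllable connection
                 dt=0.002, t_end=20.0):
    """Caricature of interaction control: the connection between the primary
    mass (x1) and the auxiliary mass (x2) is engaged only when the link force
    opposes the primary's velocity, i.e., only when engagement withdraws
    energy from the primary structure."""
    def ground_acc(t):                        # illustrative harmonic ground acceleration
        return 0.3 * 9.81 * np.sin(2.0 * np.pi * 1.0 * t)

    x1 = v1 = x2 = v2 = 0.0
    peak_drift = 0.0
    for i in range(int(t_end / dt)):
        ag = ground_acc(i * dt)
        f_link = k_link * (x2 - x1)           # force the link would exert on the primary mass
        engage = f_link * v1 < 0.0            # engage only if that force decelerates the primary
        f1 = -k1 * x1 - c1 * v1 - m1 * ag + (f_link if engage else 0.0)
        f2 = -k2 * x2 - c2 * v2 - m2 * ag - (f_link if engage else 0.0)
        v1 += dt * f1 / m1; x1 += dt * v1     # semi-implicit Euler update
        v2 += dt * f2 / m2; x2 += dt * v2
        peak_drift = max(peak_drift, abs(x1))
    return peak_drift

print(simulate_aic())
```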
Abstract:
The negative impacts of ambient aerosol particles, or particulate matter (PM), on human health and climate are well recognized. However, owing to the complexity of aerosol particle formation and chemical evolution, emissions control strategies remain difficult to develop in a cost-effective manner. In this work, three studies are presented to address several key issues currently stymieing California's efforts to continue improving its air quality.
Gas-phase organic mass (GPOM) and CO emission factors are used in conjunction with measured enhancements in oxygenated organic aerosol (OOA) relative to CO to quantify the significant lack of closure between expected and observed organic aerosol concentrations attributable to fossil-fuel emissions. Two possible conclusions emerge from the analysis to yield consistency with the ambient organic data: (1) vehicular emissions are not a dominant source of anthropogenic fossil SOA in the Los Angeles Basin, or (2) the ambient SOA mass yields used to determine the SOA formation potential of vehicular emissions are substantially higher than those derived from laboratory chamber studies. Additional laboratory chamber studies confirm that, owing to vapor-phase wall loss, the SOA mass yields currently used in virtually all 3D chemical transport models are biased low by as much as a factor of 4. Furthermore, predictions from the Statistical Oxidation Model suggest that this bias could be as high as a factor of 8 if the influence of the chamber walls could be removed entirely.
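Schematically, the closure test described above compares the observed enhancement of oxygenated organic aerosol relative to CO with the SOA expected from co-emitted gas-phase organics; in common (illustrative, not necessarily the thesis's) notation:

\[
\left(\frac{\Delta\mathrm{OOA}}{\Delta\mathrm{CO}}\right)_{\mathrm{observed}}
\quad\text{vs.}\quad
\left(\frac{\Delta\mathrm{SOA}}{\Delta\mathrm{CO}}\right)_{\mathrm{expected}}
=\frac{\mathrm{EF}_{\mathrm{GPOM}}}{\mathrm{EF}_{\mathrm{CO}}}\;Y_{\mathrm{SOA}},
\]

where EF denotes an emission factor and Y_SOA the assumed SOA mass yield; a low bias in Y_SOA caused by vapor-phase wall loss (up to a factor of 4, or as much as 8 absent any wall influence) propagates directly into the expected term.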
Once vapor-phase wall loss has been accounted for in a new suite of laboratory chamber experiments, the SOA parameterizations within atmospheric chemical transport models should also be updated. To address the numerical challenges of implementing the next generation of SOA models in atmospheric chemical transport models, a novel mathematical framework, termed the Moment Method, is designed and presented. Assessment of the Moment Method's strengths and weaknesses provides valuable insight that can guide future development of SOA modules for atmospheric CTMs.
Finally, regional inorganic aerosol formation and evolution is investigated via detailed comparison of predictions from the Community Multiscale Air Quality (CMAQ, version 4.7.1) model against a suite of airborne and ground-based meteorological measurements, gas- and aerosol-phase inorganic measurements, and black carbon (BC) measurements over Southern California during the CalNex field campaign in May/June 2010. Results suggest that continuing to target sulfur emissions in the hope of reducing ambient PM concentrations may not be the most effective strategy for Southern California. Instead, targeting dairy emissions is likely to be an effective strategy for substantially reducing ammonium nitrate concentrations in the eastern part of the Los Angeles Basin.
Abstract:
The epidemic of HIV/AIDS in the United States is constantly changing and evolving, from patient zero to the estimated 650,000 to 900,000 Americans now infected. The nature and course of HIV changed dramatically with the introduction of antiretrovirals. This discourse examines many different facets of HIV, from the beginning, when there was no treatment, to the present era of highly active antiretroviral therapy (HAART). Using statistical analysis of clinical data, this paper examines where we were, where we are, and projections as to where the treatment of HIV/AIDS is headed.
Chapter Two describes the datasets used for the analyses. The primary database was collected by the author from an outpatient HIV clinic and includes data from 1984 to the present. The second database is the Multicenter AIDS Cohort Study (MACS) public dataset, which covers the period from 1984 to October 1992. Comparisons are made between the two datasets.
Chapter Three discusses where we were. Before the first anti-HIV drugs (called antiretrovirals) were approved, there was no treatment to slow the progression of HIV. The first generation of antiretrovirals, reverse transcriptase inhibitors such as AZT (zidovudine), DDI (didanosine), DDC (zalcitabine), and D4T (stavudine), provided the first treatment for HIV. The first clinical trials showed that these antiretrovirals had a significant impact on increasing patient survival. The trials also showed that patients on these drugs had increased CD4+ T cell counts. Chapter Three examines the distributions of CD4 T cell counts. The results show that the estimated distributions of CD4 T cell counts are distinctly non-Gaussian; thus, distributional assumptions regarding CD4 T cell counts must be taken into account when performing analyses with this marker. The results also show that the estimated CD4 T cell distributions for each disease stage (asymptomatic, symptomatic, and AIDS) are non-Gaussian. Interestingly, the distribution of CD4 T cell counts for the asymptomatic period is significantly below the CD4 T cell distribution for the uninfected population, suggesting that even patients with no outward symptoms of HIV infection exhibit high levels of immunosuppression.
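The claim that CD4 T cell counts are distinctly non-Gaussian is the kind of statement that can be checked with a standard normality test, on the raw counts and on a variance-stabilizing transform. The sketch below shows one such check on synthetic, right-skewed counts; the data, variable names, and choice of test are illustrative assumptions, not the clinic or MACS analyses from the thesis.

```python
import numpy as np
from scipy import stats

def check_cd4_normality(cd4_counts):
    """Shapiro-Wilk test of normality for CD4 counts, before and after a
    square-root transform (a transform often applied to CD4 counts)."""
    cd4 = np.asarray(cd4_counts, dtype=float)
    w_raw, p_raw = stats.shapiro(cd4)
    w_sqrt, p_sqrt = stats.shapiro(np.sqrt(cd4))
    return {"raw": (w_raw, p_raw), "sqrt": (w_sqrt, p_sqrt)}

# Illustrative use with synthetic, right-skewed counts (cells/mm^3):
rng = np.random.default_rng(0)
print(check_cd4_normality(rng.gamma(shape=2.0, scale=250.0, size=200)))
```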
Chapter Four discusses where we are at present. HIV quickly grew resistant to reverse transcriptase inhibitors, which were given sequentially as mono or dual therapy. As resistance grew, the positive effects of the reverse transcriptase inhibitors on CD4 T cell counts and survival dissipated. As the old era faded, a new era characterized by a new class of drugs and new technology changed the way we treat HIV-infected patients. Viral load assays were able to quantify the levels of HIV RNA in the blood. By quantifying the viral load, one now had a faster, more direct way to test the efficacy of an antiretroviral regimen. Protease inhibitors, which attack a different region of HIV than reverse transcriptase inhibitors, were found, when used in combination with other antiretroviral agents, to dramatically and significantly reduce HIV RNA levels in the blood. Patients also experienced significant increases in CD4 T cell counts. For the first time in the epidemic, there was hope. It was hypothesized that with HAART, viral levels could be kept so low that the immune system, as measured by CD4 T cell counts, would be able to recover. If these viral levels could be kept low enough, it would be possible for the immune system to eradicate the virus. The hypothesis of immune reconstitution, that is, bringing CD4 T cell counts up to levels seen in uninfected patients, is tested in Chapter Four. It was found that for these patients there was not enough of a CD4 T cell increase to be consistent with the hypothesis of immune reconstitution.
In Chapter Five, the effectiveness of long-term HAART is analyzed. Survival analysis was conducted on 213 patients on long-term HAART. The primary endpoint was presence of an AIDS defining illness. A high level of clinical failure, or progression to an endpoint, was found.
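A survival analysis of this kind is commonly summarized with a Kaplan-Meier estimator of the probability of remaining free of an AIDS-defining illness over time on HAART. The sketch below, using the lifelines library, is a generic illustration with invented data and column names; it does not reproduce the thesis's 213-patient analysis.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical columns: follow-up time in months and a 0/1 event indicator
# (1 = AIDS-defining illness observed, 0 = censored).
df = pd.DataFrame({
    "months_on_haart": [6, 14, 22, 30, 35, 41, 48, 52, 60, 66],
    "aids_event":      [1,  0,  1,  1,  0,  1,  0,  1,  0,  0],
})

kmf = KaplanMeierFitter()
kmf.fit(durations=df["months_on_haart"], event_observed=df["aids_event"])
print(kmf.survival_function_)       # estimated probability of remaining event-free
print(kmf.median_survival_time_)    # median time to an AIDS-defining illness
```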
Chapter Six yields insights into where we are going. New technology such as viral genotypic testing, which examines the genetic structure of HIV and determines where mutations have occurred, has shown that HIV is capable of producing resistance mutations that confer multiple drug resistance. This section looks at resistance issues and speculates, ceteris paribus, where the state of HIV is going. It first addresses viral genotype and the correlates of viral load and disease progression. A second analysis looks at patients who have failed their primary attempts at HAART and subsequent salvage therapy. It was found that salvage regimens, efforts to control viral replication through the administration of different combinations of antiretrovirals, failed to control viral replication in 90 percent of the population. Thus, primary attempts at therapy offer the best chance of viral suppression and delay of disease progression. Documentation of transmission of drug-resistant virus suggests that the public health crisis of HIV is far from over. Drug-resistant HIV can sustain the epidemic and hamper our efforts to treat HIV infection. The data presented suggest that the decrease in morbidity and mortality due to HIV/AIDS is transient. Deaths due to HIV will increase, and public health officials must prepare for this eventuality unless new treatments become available. These results also underscore the importance of the vaccine effort.
The final chapter looks at the economic issues related to HIV. The direct and indirect costs of treating HIV/AIDS are very high. For the first time in the epidemic, there exists treatment that can actually slow disease progression. The direct costs of HAART are estimated. The direct lifetime cost of treating each HIV-infected patient with HAART is estimated at between $353,000 and $598,000, depending on how long HAART prolongs life. The incremental cost per year of life saved is only about $101,000, which is comparable to the incremental cost per year of life saved by coronary artery bypass surgery.
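The quoted figures imply a simple incremental cost-effectiveness calculation: lifetime treatment cost divided by life-years gained. The sketch below only reproduces that arithmetic; the implied life-year figures are back-calculated from the quoted numbers, not values reported in the thesis.

```python
def cost_per_life_year(lifetime_cost, years_gained):
    """Incremental cost per year of life saved (a simple cost-effectiveness ratio)."""
    return lifetime_cost / years_gained

# Back-calculate the life-years implied by ~$101,000 per life-year saved:
for lifetime_cost in (353_000, 598_000):
    implied_years = lifetime_cost / 101_000
    print(f"${lifetime_cost:,} lifetime cost -> about {implied_years:.1f} life-years gained")
    # Check: the ratio recovers the quoted incremental cost.
    assert round(cost_per_life_year(lifetime_cost, implied_years), -3) == 101_000
```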
Policy makers need to be aware that although HAART can delay disease progression, it is not a cure and HIV is not over. The results presented here suggest that the decreases in the morbidity and mortality due to HIV are transient. Policymakers need to be prepared for the eventual increase in AIDS incidence and mortality. Costs associated with HIV/AIDS are also projected to increase. The cost savings seen recently have been from the dramatic decreases in the incidence of AIDS defining opportunistic infections. As patients who have been on HAART the longest start to progress to AIDS, policymakers and insurance companies will find that the cost of treating HIV/AIDS will increase.
Abstract:
Theoretical and experimental investigations of charge-carrier dynamics at semiconductor/liquid interfaces, specifically with respect to interfacial electron transfer and surface recombination, are presented.
Fermi's golden rule has been used to formulate rate expressions for charge transfer of delocalized carriers in a nondegenerately doped semiconducting electrode to localized, outer-sphere redox acceptors in an electrolyte phase. The treatment allows comparison between charge-transfer kinetic data at metallic, semimetallic, and semiconducting electrodes in terms of parameters such as the electronic coupling to the electrode, the attenuation of coupling with distance into the electrolyte, and the reorganization energy of the charge-transfer event. Within this framework, rate constant values expected at representative semiconducting electrodes have been determined from experimental data for charge transfer at metallic electrodes. The maximum rate constant (i.e., at optimal exoergicity) for outer-sphere processes at semiconducting electrodes is computed to be in the range of 10⁻¹⁷–10⁻¹⁶ cm⁴ s⁻¹, which is in excellent agreement with prior theoretical models and experimental results for charge-transfer kinetics at semiconductor/liquid interfaces.
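For orientation, the golden-rule treatment leads, in the standard nonadiabatic (Marcus-type) picture, to a rate constant of the general form below; the notation is the conventional one and is given here only as a schematic, not as the thesis's exact expression:

\[
k_{\mathrm{et}} \;=\; k_{\max}\,
\exp\!\left[-\frac{\left(\Delta G^{\circ}+\lambda\right)^{2}}{4\lambda k_{B}T}\right],
\qquad
k_{\max}\;\approx\;10^{-17}\text{--}10^{-16}\ \mathrm{cm^{4}\,s^{-1}},
\]

where \(\lambda\) is the reorganization energy and \(k_{\max}\), attained at optimal exoergicity (\(-\Delta G^{\circ}=\lambda\)), collects the electronic coupling to the electrode and the density of acceptor states.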
Double-layer corrections have been evaluated for semiconductor electrodes under both depletion and accumulation conditions. In conjunction with the Gouy-Chapman-Stern model, a finite-difference approach has been used to calculate potential drops at a representative solid/liquid interface. Under all conditions simulated, the correction to the driving force used to evaluate the interfacial rate constant was determined to be less than 2% of the uncorrected interfacial rate constant.
Photoconductivity decay lifetimes have been obtained for Si(111) in contact with solutions of CH3OH or tetrahydrofuran containing one-electron oxidants. Silicon surfaces in contact with electrolyte solutions having Nernstian redox potentials > 0 V vs. SCE exhibited low effective surface recombination velocities regardless of the different surface chemistries. The formation of an inversion layer, and not a reduced density of electrical trap sites on the surface, is shown to be responsible for the long charge-carrier lifetimes observed for these systems. In addition, a method for preparing an air-stable, low surface recombination velocity Si surface through a two-step, chlorination/alkylation reaction is described.
Abstract:
This work concerns itself with the possibility of solutions, both cooperative and market based, to pollution abatement problems. In particular, we are interested in pollutant emissions in Southern California and possible solutions to the abatement problems enumerated in the 1990 Clean Air Act. A tradable pollution permit program has been implemented to reduce emissions, creating property rights associated with various pollutants.
Before we discuss the performance of market-based solutions to LA's pollution woes, we consider the existence of cooperative solutions. In Chapter 2, we examine pollutant emissions as a transboundary public bad. We show that for a class of environments in which pollution moves in a bi-directional, acyclic manner, there exists a sustainable coalition structure and associated levels of emissions. We do so via a new core concept, one more appropriate to modeling cooperative emissions agreements (and potential defection from them) than the standard definitions.
However, this leaves the question of implementing pollution abatement programs unanswered. While the existence of a cost-effective permit market equilibrium has long been understood, the implementation of such programs has been difficult. The design of Los Angeles' REgional CLean Air Incentives Market (RECLAIM) alleviated some of the implementation problems and in part exacerbated others. For example, it created two overlapping cycles of permits and two zones of permits for different geographic regions. While these design features create a market that allows some measure of regulatory control, they establish a very difficult trading environment, with the potential for inefficiency arising from transaction costs and from the illiquidity induced by the myriad assets and relatively few participants in this market.
It was with these concerns in mind that the ACE market (Automated Credit Exchange) was designed. The ACE market utilizes an iterated combined-value call market (CV Market). Before discussing the performance of the RECLAIM program in general and the ACE mechanism in particular, we test experimentally whether a portfolio trading mechanism can overcome market illiquidity. Chapter 3 experimentally demonstrates the ability of a portfolio trading mechanism to overcome portfolio rebalancing problems, thereby inducing sufficient liquidity for markets to fully equilibrate.
With experimental evidence in hand, we consider the CV Market's performance in the real world. We find that as the allocation of permits falls to the level of historical emissions, prices increase. As of April of this year, prices are roughly equal to the cost of the Best Available Control Technology (BACT). This took longer than expected, owing both to tendencies to mis-report emissions under the old regime and to abatement technology advances encouraged by the program. We also find that the ACE market provides liquidity where needed to encourage long-term planning on behalf of polluting facilities.
Abstract:
The Earth is very heterogeneous, especially in the region close to its surface and in regions close to the core-mantle boundary (CMB). The lowermost mantle (the bottom 300 km of the mantle) hosts fast anomalies (S velocity 3% faster than PREM, modeled from Scd), slow anomalies (S velocity 3% slower than PREM, modeled from S and ScS), and extremely anomalous structure (ultra-low velocity zones, 30% lower in S velocity and 10% lower in P velocity). Strong anomalies of larger dimension are also observed beneath Africa and the Pacific, originally modeled from the travel times of S, SKS, and ScS. Given the heterogeneous nature of the Earth, a more accurate approach than travel-time analysis must be applied to study the details of these anomalous structures, and matching waveforms with synthetic seismograms has proven effective in constraining velocity structures. However, it is difficult to compute synthetic seismograms for models beyond 1D, where no exact analytical solution is possible, and numerical methods such as finite differences or finite elements are too time-consuming for modeling body waveforms. We developed a 2D synthetic algorithm, extended from 1D generalized ray theory (GRT), to compute synthetic seismograms efficiently (on the order of minutes per seismogram). This 2D algorithm is related to the WKB approximation but is based on different principles; it is thus named WKM, for WKB modified. WKM has been applied to study the variation of the fast D" structure beneath the Caribbean Sea and the plume beneath Africa. WKM is also applied to study PKP precursors, a very important seismic phase for modeling lower-mantle heterogeneity. By matching WKM synthetic seismograms with various data, we discovered and confirmed that (a) the D" beneath the Caribbean varies laterally, and the variation is best revealed with Scd+Sab beyond 88 degrees, where Scd overruns Sab; (b) the low-velocity structure beneath Africa is about 1500 km in height and at least 1000 km in width, and features 3% reduced S velocity; it is a combination of a relatively thin low-velocity layer (200 km thick or less) beneath the Atlantic rising very sharply into the mid-mantle towards Africa; and (c) at the edges of this huge African low-velocity structure, ULVZs are found by modeling the large separation between S and ScS beyond 100 degrees. The ULVZ at the eastern boundary was discovered with SKPdS data and later confirmed by PKP precursor data. This is the first time a ULVZ has been verified with distinct seismic phases.
Abstract:
Partial differential equations (PDEs) with multiscale coefficients are very difficult to solve because of the wide range of scales in their solutions. In this thesis, we propose efficient numerical methods for both deterministic and stochastic PDEs based on model reduction techniques.
For deterministic PDEs, the main purpose of our method is to derive an effective equation for the multiscale problem. An essential ingredient is to decompose the harmonic coordinate into a smooth part and a highly oscillatory part whose magnitude is small. This decomposition plays a key role in our construction of the effective equation. We show that the solution to the effective equation is smooth and can be resolved on a regular coarse mesh grid. Furthermore, we provide error analysis showing that the solution to the effective equation, plus a correction term, is close to the original multiscale solution.
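Schematically, and in common homogenization notation rather than the thesis's exact formulation, the construction can be summarized as follows:

\[
-\nabla\!\cdot\!\big(a^{\varepsilon}(x)\,\nabla u^{\varepsilon}\big)=f \ \ \text{in } D,
\qquad
F^{\varepsilon}=\bar{F}+\theta^{\varepsilon},\ \ \theta^{\varepsilon}\ \text{small},
\qquad
u^{\varepsilon}\;\approx\;\bar{u}\circ\bar{F}\;+\;\text{correction},
\]

where \(F^{\varepsilon}\) denotes the harmonic coordinate, \(\bar{F}\) its smooth part, \(\theta^{\varepsilon}\) the small highly oscillatory remainder, and \(\bar{u}\) the smooth solution of the effective equation that can be resolved on a regular coarse mesh.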
For stochastic PDEs, we propose a model-reduction-based data-driven stochastic method and a multilevel Monte Carlo method. In the multi-query setting, and under the assumption that the ratio of the smallest scale to the largest scale is not too small, we propose the multiscale data-driven stochastic method: we construct a data-driven stochastic basis and solve the coupled deterministic PDEs to obtain the solutions. For more challenging problems, we propose the multiscale multilevel Monte Carlo method: we apply the multilevel scheme to the effective equations and assemble the stiffness matrices efficiently on each coarse mesh grid. In both methods, the Karhunen-Loève (KL) expansion plays an important role in extracting the main parts of certain stochastic quantities.
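The multilevel Monte Carlo component rests on the standard telescoping estimator, written here in generic notation (not specific to the thesis) for a quantity of interest \(Q_{\ell}\) computed on mesh level \(\ell\):

\[
\mathbb{E}\!\left[Q_{L}\right]
=\mathbb{E}\!\left[Q_{0}\right]+\sum_{\ell=1}^{L}\mathbb{E}\!\left[Q_{\ell}-Q_{\ell-1}\right],
\qquad
\widehat{Q}^{\,\mathrm{MLMC}}
=\sum_{\ell=0}^{L}\frac{1}{N_{\ell}}\sum_{i=1}^{N_{\ell}}
\left(Q_{\ell}^{(i)}-Q_{\ell-1}^{(i)}\right),\quad Q_{-1}\equiv 0.
\]

Because the variance of \(Q_{\ell}-Q_{\ell-1}\) decays as the meshes refine, most samples can be taken on the cheap coarse levels, which is what makes assembling the effective-equation problems on each coarse mesh grid affordable.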
For both deterministic and stochastic PDEs, numerical results are presented to demonstrate the accuracy and robustness of the methods. We also show the reduction in computational cost in the numerical examples.
Abstract:
Adsorption of aqueous Pb(II) and Cu(II) on α-quartz was studied as a function of time, system surface area, and chemical speciation. Experimental systems contained sodium as a major cation, hydroxide, carbonate, and chloride as major anions, and covered the pH range 4 to 8. In some cases citrate and EDTA were added as representative organic complexing agents. The adsorption equilibria were reached quickly, regardless of the system surface area. The positions of the adsorption equilibria were found to be strongly dependent on pH, ionic strength and concentration of citrate and EDTA. The addition of these non-adsorbing ligands resulted in a competition between chelation and adsorption. The experimental work also included the examination of the adsorption behavior of the doubly charged major cations Ca(II) and Mg(II) as a function of pH.
The theoretical description of the experimental systems was obtained by means of chemical equilibrium-plus-adsorption computations using two adsorption models: one mainly electrostatic (the James-Healy Model), and the other mainly chemical (the Ion Exchange-Surface Complex Formation Model). Comparisons were made between these two models.
The main difficulty in the theoretical prediction of the adsorption behavior of Cu(II) was the lack of reliable data for the second hydrolysis constant (*β₂). The choice of this constant was made on the basis of potentiometric titrations of Cu²⁺.
The experimental data obtained and the resulting theoretical observations were applied in models of the chemical behavior of trace metals in fresh oxic waters, with emphasis on Pb(II) and Cu(II).
Abstract:
A comprehensive study was made of the flocculation of dispersed E. coli bacterial cells by the cationic polymer polyethyleneimine (PEI). The three objectives of this study were to determine the primary mechanism involved in the flocculation of a colloid with an oppositely charged polymer, to determine quantitative correlations between four commonly-used measurements of the extent of flocculation, and to record the effect of varying selected system parameters on the degree of flocculation. The quantitative relationships derived for the four measurements of the extent of flocculation should be of direct assistance to the sanitary engineer in evaluating the effectiveness of specific coagulation processes.
A review of prior statistical mechanical treatments of adsorbed polymer configurations revealed that at low degrees of surface site coverage, an oppositely charged polymer molecule is strongly adsorbed to the colloidal surface, with only short loops or end sequences extending into the solution phase. Even for high-molecular-weight PEI species, these extensions from the surface are theorized to be less than 50 Å in length. Although the radii of gyration of the five PEI species investigated were found to be large enough to form interparticle bridges, the low surface site coverage at optimum flocculation doses indicates that the predominant mechanism of flocculation is adsorption coagulation.
The effectiveness of the high-molecular-weight PEI species in producing rapid flocculation at small doses is attributed to the formation of a charge mosaic on the oppositely charged E. coli surfaces. The large adsorbed PEI molecules not only neutralize the surface charge at the adsorption sites, but also cause charge reversal with excess cationic segments. The alignment of these positive surface patches with negative patches on approaching cells results in strong electrostatic attraction in addition to a reduction of the double-layer interaction energies. The comparative ineffectiveness of low-molecular-weight PEI species in producing E. coli flocculation is caused by the size of the individual molecules, which is insufficient to both neutralize and reverse the negative E. coli surface charge. Consequently, coagulation produced by low-molecular-weight species is attributed solely to the reduction of double-layer interaction energies via adsorption.
Electrophoretic mobility experiments supported the above conclusions, since only the high-molecular-weight species were able to reverse the mobility of the E. coli cells. In addition, electron microscope examination of the seam of agglutination between E. coli cells flocculated by PEI revealed tightly bound cells, with intercellular separation distances of less than 100-200 Å in most instances. This intercellular separation is partially due to cell shrinkage during preparation of the electron micrographs.
The extent of flocculation was measured as a function of PEI molecular weight, PEI dose, and the intensity of reactor chamber mixing. Neither the intensity of mixing, within the common treatment practice limits, nor the time of mixing for up to four hours appeared to play any significant role in either the size or number of E. coli aggregates formed. The extent of flocculation was highly molecular-weight dependent: the high-molecular-weight PEI species produced the larger aggregates, the greater turbidity reductions, and the higher filtration flow rates. The PEI dose required for optimum flocculation decreased as the species molecular weight increased. At large doses of high-molecular-weight species, redispersion of the macroflocs occurred, caused by excess adsorption of cationic molecules. The excess adsorption reversed the surface charge on the E. coli cells, as recorded by electrophoretic mobility measurements.
Successful quantitative comparisons were made between changes in suspension turbidity with flocculation and corresponding changes in aggregate size distribution. E. coli aggregates were treated as coalesced spheres, with Mie scattering coefficients determined for spheres in the anomalous diffraction regime. Good quantitative comparisons were also found between the reduction in refiltration time and the reduction of the total colloid surface area caused by flocculation. As with the turbidity measurements, a coalesced-sphere model was used, since the equivalent spherical volume is the only information available from the Coulter particle counter. However, the coalesced-sphere model was not applicable to electrophoretic mobility measurements: the aggregates produced at each PEI dose moved at approximately the same velocity, almost independently of particle size.
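For reference, the anomalous diffraction regime invoked above admits a closed-form extinction efficiency for a non-absorbing sphere (van de Hulst's approximation); with phase-shift parameter ρ = 2x(m − 1), where x is the size parameter and m the relative refractive index close to 1:

\[
Q_{\mathrm{ext}}(\rho)\;=\;2\;-\;\frac{4}{\rho}\,\sin\rho\;+\;\frac{4}{\rho^{2}}\,\big(1-\cos\rho\big).
\]

This is the standard form of the approximation; whether the thesis used exactly this expression or a related coalesced-sphere variant is not stated in the abstract.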
PEI was found to be an effective flocculant of E. coli cells at weight ratios of 1 mg PEI : 100 mg E. coli. While PEI itself is toxic to E. coli at these levels, similar cationic polymers could be effectively applied to water and wastewater treatment facilities to enhance sedimentation and filtration characteristics.
Abstract:
Arid and semiarid landscapes comprise nearly a third of the Earth's total land surface. These areas are coming under increasing land use pressures. Despite their low productivity these lands are not barren. Rather, they consist of fragile ecosystems vulnerable to anthropogenic disturbance.
The purpose of this thesis is threefold: (I) to develop and test a process model of wind-driven desertification, (II) to evaluate next-generation process-relevant remote monitoring strategies for use in arid and semiarid regions, and (III) to identify elements for effective management of the world's drylands.
In developing the process model of wind-driven desertification in arid and semiarid lands, field, remote sensing, and modeling observations from a degraded Mojave Desert shrubland are used. This model focuses on aeolian removal and transport of dust, sand, and litter as the primary mechanisms of degradation: killing plants by burial and abrasion, interrupting natural processes of nutrient accumulation, and allowing the loss of soil resources by abiotic transport. This model is tested in field sampling experiments at two sites and is extended by Fourier Transform and geostatistical analysis of high-resolution imagery from one site.
Next, the use of hyperspectral remote sensing data is evaluated as a substantive input to dryland remote monitoring strategies. In particular, the efficacy of spectral mixture analysis (SMA) in discriminating vegetation and soil types and determining vegetation cover is investigated. The results indicate that hyperspectral data may be less useful than often thought in determining vegetation parameters. Its usefulness in determining soil parameters, however, may be leveraged by developing simple multispectral classification tools that can be used to monitor desertification.
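Spectral mixture analysis of the kind evaluated here models each pixel spectrum as a linear combination of endmember spectra with non-negative fractions that sum to approximately one. The sketch below shows a minimal per-pixel unmixing using non-negative least squares; the array names and endmember choices are illustrative assumptions, not the thesis's processing chain.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel_spectrum, endmembers):
    """Estimate endmember fractions for one pixel by non-negative least squares.

    pixel_spectrum : (n_bands,) reflectance of the pixel
    endmembers     : (n_bands, n_endmembers) array; columns are endmember spectra
                     (e.g., green vegetation, dry litter, bare soil)
    Returns fractions normalized to sum to one, plus the fit residual norm.
    """
    fractions, residual = nnls(endmembers, pixel_spectrum)
    total = fractions.sum()
    if total > 0:
        fractions = fractions / total   # enforce approximate sum-to-one
    return fractions, residual

# Illustrative use with random stand-in spectra (6 bands, 3 endmembers):
rng = np.random.default_rng(1)
E = rng.uniform(0.0, 1.0, size=(6, 3))
pixel = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]
print(unmix_pixel(pixel, E))
```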
Finally, the elements required for effective monitoring and management of arid and semiarid lands are discussed. Several large-scale, multi-site field experiments are proposed to clarify the role of wind as a landscape and degradation process in drylands. The role of remote sensing in monitoring the world's drylands is discussed in terms of optimal remote sensing platform characteristics and surface phenomena that may be monitored in order to identify areas at risk of desertification. A desertification indicator is proposed that unifies consideration of environmental and human variables.
Abstract:
The ability to interface with and program cellular function remains a challenging research frontier in biotechnology. Although the emerging field of synthetic biology has recently generated a variety of gene-regulatory strategies based on synthetic RNA molecules, few strategies exist through which to control such regulatory effects in response to specific exogenous or endogenous molecular signals. Here, we present the development of an engineered RNA-based device platform to detect and act on endogenous protein signals, linking these signals to the regulation of genes and thus cellular function.
We describe efforts to develop an RNA-based device framework for regulating endogenous genes in human cells. Previously developed RNA control devices have demonstrated programmable ligand-responsive genetic regulation in diverse cell types, and we attempted to adapt this class of cis-acting control elements to function in trans. We divided the device into two strands that reconstitute activity upon hybridization. Device function was optimized using an in vivo model system, and we found that device sequence is not as flexible as previously reported. After verifying the in vitro activity of our optimized design, we attempted to establish gene regulation in a human cell line using additional elements to direct device stability, structure, and localization. The significant limitations of our platform prevented endogenous gene regulation.
We next describe the development of a protein-responsive RNA-based regulatory platform. Employing various design strategies, we demonstrated functional devices that both up- and downregulate gene expression in response to a heterologous protein in a human cell line. The activity of our platform exceeded that of a similar, small-molecule-responsive platform. We demonstrated the ability of our devices to respond to both cytoplasmic- and nuclear-localized protein, providing insight into the mechanism of action and distinguishing our platform from previously described devices with more restrictive ligand localization requirements. Finally, we demonstrated the versatility of our device platform by developing a regulatory device that responds to an endogenous signaling protein.
The foundational tool we present here possesses unique advantages over previously described RNA-based gene-regulatory platforms. This genetically encoded technology may find future applications in the development of more effective diagnostic tools and targeted molecular therapy strategies.