8 results for computation- and data-intensive applications

in CaltechTHESIS


Relevance: 100.00%

Abstract:

Techniques are developed for estimating activity profiles in fixed bed reactors and catalyst deactivation parameters from operating reactor data. These techniques are applicable, in general, to most industrial catalytic processes. The catalytic reforming of naphthas is taken as a broad example to illustrate the estimation schemes and to clarify the physical meaning of the kinetic parameters in the estimation equations. The work is described in two parts. Part I deals with the modeling of kinetic rate expressions and the derivation of the working equations for estimation. Part II concentrates on developing various estimation techniques.

Part I: The reactions used to describe naphtha reforming are dehydrogenation and dehydroisomerization of cycloparaffins; isomerization, dehydrocyclization and hydrocracking of paraffins; and the catalyst deactivation reactions, namely coking on alumina sites and sintering of platinum crystallites. The rate expressions for the above reactions are formulated, and the effects of transport limitations on the overall reaction rates are discussed in the appendices. Moreover, various types of interaction between the metallic and acidic active centers of reforming catalysts are discussed as characterizing the different types of reforming reactions.

Part II: In catalytic reactor operation, the activity distribution along the reactor determines the kinetics of the main reaction and is needed for predicting the effect of changes in the feed state and the operating conditions on the reactor output. In the case of a monofunctional catalyst and of bifunctional catalysts in limiting conditions, the cumulative activity is sufficient for predicting steady reactor output. The estimation of this cumulative activity can be carried out easily from measurements at the reactor exit. For a general bifunctional catalytic system, the detailed activity distribution is needed for describing the reactor operation, and some approximation must be made to obtain practicable estimation schemes. This is accomplished by parametrization techniques using measurements at a few points along the reactor. Such parametrization techniques are illustrated numerically with a simplified model of naphtha reforming.

To determine long-term catalyst utilization and regeneration policies, it is necessary to estimate catalyst deactivation parameters from current operating data. For a first order deactivation model with a monofunctional catalyst, or with a bifunctional catalyst in special limiting circumstances, analytical techniques are presented to transform the partial differential equations into ordinary differential equations, which admit more feasible estimation schemes. Numerical examples include the catalytic oxidation of butene to butadiene and a simplified model of naphtha reforming. For a general bifunctional system, or in the case of a monofunctional catalyst subject to general power-law deactivation, the estimation can only be accomplished approximately. The basic feature of an appropriate estimation scheme involves approximating the activity profile by certain polynomials and then estimating the deactivation parameters from the integrated form of the deactivation equation by regression techniques. Different bifunctional systems must be treated by different estimation algorithms, which are illustrated by several cases of naphtha reforming with different feed or catalyst compositions.
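To make the flavor of such parameter estimation concrete, here is a minimal toy sketch (not the thesis's scheme, and with illustrative names and data): for a first-order deactivation model a(t) = a0·exp(-kd·t), the deactivation constant kd can be recovered by linear regression on log-activity.

```python
import numpy as np

def estimate_deactivation_constant(times, activities):
    """Fit a(t) = a0 * exp(-kd * t) by linear least squares on
    ln(a) versus t; returns (kd, a0)."""
    t = np.asarray(times, dtype=float)
    y = np.log(np.asarray(activities, dtype=float))
    slope, intercept = np.polyfit(t, y, 1)  # ln(a) = ln(a0) - kd*t
    return -slope, float(np.exp(intercept))

# Synthetic noise-free data: a0 = 1.0, kd = 0.05 (illustrative units)
t = np.linspace(0.0, 100.0, 21)
a = np.exp(-0.05 * t)
kd, a0 = estimate_deactivation_constant(t, a)
print(round(kd, 6), round(a0, 6))  # → 0.05 1.0
```

Real operating data would of course require the integrated deactivation equation and measurements along the reactor, as the abstract describes; the log-linear fit is only the simplest instance of the regression idea.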

Relevance: 100.00%

Abstract:

This thesis studies three classes of randomized numerical linear algebra algorithms, namely: (i) randomized matrix sparsification algorithms, (ii) low-rank approximation algorithms that use randomized unitary transformations, and (iii) low-rank approximation algorithms for symmetric positive-semidefinite (SPSD) matrices.

Randomized matrix sparsification algorithms set randomly chosen entries of the input matrix to zero. When the approximant is substituted for the original matrix in computations, its sparsity allows one to employ faster sparsity-exploiting algorithms. This thesis contributes bounds on the approximation error of nonuniform randomized sparsification schemes, measured in the spectral norm and two NP-hard norms that are of interest in computational graph theory and subset selection applications.
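As a rough illustration of the idea (a common recipe in this literature, not necessarily the exact schemes analyzed in the thesis): keep entry (i, j) with probability p_ij proportional to |A_ij|, capped at 1, and rescale retained entries by 1/p_ij so the sparsifier is unbiased in expectation. The `budget` parameter controlling the expected number of retained entries is an assumption of this sketch.

```python
import numpy as np

def sparsify(A, budget, seed=None):
    """Zero out entries of A at random: entry (i, j) is kept with
    probability p_ij = min(1, budget * |A_ij| / sum|A|) and rescaled
    by 1/p_ij, so the sparsifier has expectation A (unbiased)."""
    rng = np.random.default_rng(seed)
    p = np.clip(budget * np.abs(A) / np.abs(A).sum(), 0.0, 1.0)
    keep = rng.random(A.shape) < p
    S = np.zeros_like(A)
    S[keep] = A[keep] / p[keep]
    return S

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
S = sparsify(A, budget=20000, seed=1)
nnz = np.count_nonzero(S)
err = np.linalg.norm(A - S, 2) / np.linalg.norm(A, 2)
print(nnz < A.size)  # → True: roughly half the entries survive
```

The quantity `err` is the relative spectral-norm error of the sparsifier, the first of the three norms the thesis bounds.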

Low-rank approximations based on randomized unitary transformations have several desirable properties: they have low communication costs, are amenable to parallel implementation, and exploit the existence of fast transform algorithms. This thesis investigates the tradeoff between the accuracy and cost of generating such approximations. State-of-the-art spectral and Frobenius-norm error bounds are provided.
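A minimal sketch of one such transform-based scheme, the subsampled randomized Fourier transform (SRFT) as commonly described in the randomized linear algebra literature (the exact transforms studied in the thesis may differ): random signs mix the columns, an FFT spreads their information, and a small random subset of transformed columns captures the range of A.

```python
import numpy as np

def srft_lowrank(A, rank, oversample=10, seed=None):
    """Rank-`rank` approximation of A via an SRFT-style sketch:
    random signs, an FFT along the rows, and column subsampling."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    ell = rank + oversample
    signs = rng.choice([-1.0, 1.0], size=n)
    cols = rng.choice(n, size=ell, replace=False)
    Y = np.fft.fft(A * signs, axis=1)[:, cols]   # sketch of range(A)
    Q, _ = np.linalg.qr(Y)                       # orthonormal basis
    return (Q @ (Q.conj().T @ A)).real           # project A onto it

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((100, 100)))
V, _ = np.linalg.qr(rng.standard_normal((100, 100)))
A = (U * 2.0 ** -np.arange(100)) @ V.T           # fast spectral decay
A10 = srft_lowrank(A, rank=10, seed=1)
err = np.linalg.norm(A - A10, 2)
print(err < 1e-2)  # small spectral-norm error for a decaying spectrum
```

The FFT costs O(mn log n) regardless of the target rank, which is the communication/operation-count advantage the abstract alludes to.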

The last class of algorithms considered is that of SPSD "sketching" algorithms. Such sketches can be computed faster than approximations based on projecting onto mixtures of the columns of the matrix. The performance of several such sketching schemes is empirically evaluated using a suite of canonical matrices drawn from machine learning and data analysis applications, and a framework is developed for establishing theoretical error bounds.
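As an illustration (assuming the familiar Nyström-type column-sampling sketch, one member of this family, not the thesis's full framework): sample ℓ columns via a sampling matrix S, form C = AS and W = SᵀAS, and approximate A ≈ C W⁺ Cᵀ. The test matrix below is deliberately low-rank, for which this sketch is exact whenever the sampled columns span range(A).

```python
import numpy as np

def spsd_sketch(A, ell, seed=None):
    """Nystrom-type SPSD sketch: C = A S, W = S^T A S with S a
    column-sampling matrix; returns C pinv(W) C^T."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(A.shape[0], size=ell, replace=False)
    C = A[:, idx]
    W = A[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(0)
B = rng.standard_normal((300, 20))
A = B @ B.T                                  # SPSD, rank 20
A_hat = spsd_sketch(A, ell=60, seed=1)
rel = np.linalg.norm(A - A_hat, "fro") / np.linalg.norm(A, "fro")
print(rel < 1e-6)  # → True: exact up to roundoff since ell > rank(A)
```

Because C and W involve only ℓ columns of A, the sketch never multiplies A against a dense mixing matrix, which is why it can be computed faster than projection-based approximations.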

In addition to studying these algorithms, this thesis extends the Matrix Laplace Transform framework to derive Chernoff and Bernstein inequalities that apply to all the eigenvalues of certain classes of random matrices. These inequalities are used to investigate the behavior of the singular values of a matrix under random sampling, and to derive convergence rates for each individual eigenvalue of a sample covariance matrix.
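For context, one representative inequality of this family (the standard matrix Chernoff bound for the largest eigenvalue, quoted from the general literature rather than from the thesis, whose results extend such bounds to all the eigenvalues):

```latex
% Matrix Chernoff (largest-eigenvalue form). Let X_1, ..., X_m be
% independent random positive-semidefinite d x d matrices with
% \lambda_{\max}(X_k) \le R almost surely, and let
% \mu_{\max} = \lambda_{\max}\big( \sum_k \mathbb{E}[X_k] \big).
% Then for all \delta \ge 0:
\Pr\!\left\{ \lambda_{\max}\!\Big(\sum_{k=1}^{m} X_k\Big)
             \ge (1+\delta)\,\mu_{\max} \right\}
\;\le\; d \cdot \left[ \frac{e^{\delta}}{(1+\delta)^{1+\delta}}
        \right]^{\mu_{\max}/R}.
```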

Relevance: 100.00%

Abstract:

The scalability of CMOS technology has driven computation into a diverse range of applications across the power consumption, performance and size spectra. Communication is a necessary adjunct to computation, and whether this is to push data from node-to-node in a high-performance computing cluster or from the receiver of a wireless link to a neural stimulator in a biomedical implant, interconnect can take up a significant portion of the overall system power budget. Although a single interconnect methodology cannot address such a broad range of systems efficiently, there are a number of key design concepts that enable good interconnect design in the age of highly-scaled CMOS: an emphasis on highly-digital approaches to solving ‘analog’ problems, hardware sharing between links as well as between different functions (such as equalization and synchronization) in the same link, and adaptive hardware that changes its operating parameters to mitigate not only variation in the fabrication of the link, but also link conditions that change over time. These concepts are demonstrated through the use of two design examples, at the extremes of the power and performance spectra.

A novel all-digital clock and data recovery technique for high-performance, high-density interconnect has been developed. Two independently adjustable clock phases are generated from a delay line calibrated to 2 UI. One clock phase is placed in the middle of the eye to recover the data, while the other is swept across the delay line. The samples produced by the two clocks are compared to generate eye information, which is used to determine the best phase for data recovery. The functions of the two clocks are swapped after the data phase is updated; this ping-pong action allows an infinite delay range without the use of a PLL or DLL. The scheme's generalized sampling and retiming architecture is used in a sharing technique that saves power and area in high-density interconnect. The eye information generated is also useful for tuning an adaptive equalizer, circumventing the need for dedicated adaptation hardware.
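The phase-sweeping idea can be caricatured in software (a toy behavioral model under simplifying assumptions, not the implemented circuit): oversample an NRZ waveform, slice it at each candidate phase within one UI, and count bit errors per phase; the error-vs-phase profile is the "eye information", and the data clock belongs where the eye is open.

```python
import numpy as np

def eye_scan(wave, bits, spb):
    """Slice the oversampled waveform at each phase within one UI and
    count bit errors against the transmitted pattern; the resulting
    error-vs-phase profile is the eye information."""
    return np.array([np.count_nonzero((wave[ph::spb][:len(bits)] > 0) != bits)
                     for ph in range(spb)])

rng = np.random.default_rng(0)
spb = 16                                   # samples per UI
bits = rng.random(2000) > 0.5
nrz = np.repeat(np.where(bits, 1.0, -1.0), spb)
# Crude channel: low-pass smearing plus noise partially closes the eye
wave = np.convolve(nrz, np.ones(12) / 12, mode="same")
wave += 0.2 * rng.standard_normal(wave.size)
errs = eye_scan(wave, bits, spb)
best = int(np.argmin(errs))                # place the data clock here
print(errs[best] == 0)  # → True: the eye is open away from transitions
```

In the hardware scheme the transmitted bits are of course unknown; the scanned clock is instead compared against the data clock's samples, but the resulting profile plays the same role.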

On the other side of the performance/power spectrum, a capacitive proximity interconnect has been developed to support 3D integration of biomedical implants. In order to integrate more functionality while staying within size limits, implant electronics can be embedded onto a foldable parylene (‘origami’) substrate. Many of the ICs in an origami implant will be placed face-to-face with each other, so wireless proximity interconnect can be used to increase communication density while decreasing implant size, as well as facilitate a modular approach to implant design, where pre-fabricated parylene-and-IC modules are assembled together on-demand to make custom implants. Such an interconnect needs to be able to sense and adapt to changes in alignment. The proposed array uses a TDC-like structure to realize both communication and alignment sensing within the same set of plates, increasing communication density and eliminating the need to infer link quality from a separate alignment block. In order to distinguish the communication plates from the nearby ground plane, a stimulus is applied to the transmitter plate, which is rectified at the receiver to bias a delay generation block. This delay is in turn converted into a digital word using a TDC, providing alignment information.

Relevance: 100.00%

Abstract:

Noncommutative geometry is a source of particle physics models with matter Lagrangians coupled to gravity. One may associate to any noncommutative space (A, H, D) its spectral action, which is defined in terms of the Dirac spectrum of its Dirac operator D. When viewing a spin manifold as a noncommutative space, D is the usual Dirac operator. In this paper, we give nonperturbative computations of the spectral action for quotients of SU(2), Bieberbach manifolds, and SU(3) equipped with a variety of geometries. Along the way we will compute several Dirac spectra and refer to applications of this computation.

Relevance: 100.00%

Abstract:

Plate tectonics shapes our dynamic planet through the creation and destruction of lithosphere. This work focuses on increasing our understanding of the processes at convergent and divergent boundaries through geologic and geophysical observations at modern plate boundaries. Recent work has shown that the subducting slab in central Mexico is most likely the flattest on Earth, yet there was no consensus about what caused it to originate. The first chapter of this thesis sets out to systematically test all previously proposed mechanisms for slab flattening on the Mexican case. We have discovered that there is only one model for which we can find no contradictory evidence. The lack of applicability of the standard mechanisms used to explain flat subduction in the Mexican example led us to question their application globally.

The second chapter expands the search for a cause of flat subduction, in both space and time. We focus on the historical record of flat slabs in South America and look for a correlation between the shallowing and steepening of slab segments and the inferred thickness of the subducting oceanic crust. Using plate reconstructions and the assumption that a crustal anomaly formed on a spreading ridge will produce two conjugate features, we recreate the history of subduction along the South American margin and find that there is no correlation between the subduction of bathymetric highs and shallow subduction. These studies show that a subducting crustal anomaly is neither a sufficient nor a necessary condition for flat slab subduction.

The final chapter of this thesis looks at the divergent plate boundary in the Gulf of California. Through geologic reconnaissance mapping and an intensive paleomagnetic sampling campaign, we try to constrain the location and orientation of a widespread volcanic marker unit, the Tuff of San Felipe. Although the resolution of the applied magnetic susceptibility technique proved inadequate to constrain the direction of the pyroclastic flow with high precision, we have been able to detect the tectonic rotation of coherent blocks as well as rotation within blocks.

Relevance: 100.00%

Abstract:

Experimental work was performed to delineate the system of digested sludge particles and associated trace metals and also to measure the interactions of sludge with seawater. Particle-size and particle number distributions were measured with a Coulter Counter. Number counts in excess of 10^12 particles per liter were found in both the City of Los Angeles Hyperion mesophilic digested sludge and the Los Angeles County Sanitation Districts (LACSD) digested primary sludge. More than 90 percent of the particles had diameters less than 10 microns.

Total and dissolved trace metals (Ag, Cd, Cr, Cu, Fe, Mn, Ni, Pb, and Zn) were measured in LACSD sludge. Manganese was the only metal whose dissolved fraction exceeded one percent of the total metal. Sedimentation experiments for several dilutions of LACSD sludge in seawater showed that the sedimentation velocities of the sludge particles decreased as the dilution factor increased. A tenfold increase in dilution shifted the sedimentation velocity distribution by an order of magnitude. Chromium, Cu, Fe, Ni, Pb, and Zn were also followed during sedimentation. To a first approximation these metals behaved like the particles.

Solids and selected trace metals (Cr, Cu, Fe, Ni, Pb, and Zn) were monitored in oxic mixtures of both Hyperion and LACSD sludges for periods of 10 to 28 days. Less than 10 percent of the filterable solids dissolved or were oxidized. Only Ni was mobilized away from the particles. The majority of the mobilization was complete in less than one day.

The experimental data of this work were combined with oceanographic, biological, and geochemical information to propose and model the discharge of digested sludge to the San Pedro and Santa Monica Basins. A hydraulic computer simulation of a round buoyant jet in a density stratified medium showed that discharges of sludge effluent mixture at depths of 730 m would rise no more than 120 m. Initial jet mixing provided dilution estimates of 450 to 2600. Sedimentation analyses indicated that the solids would reach the sediments within 10 km of the discharge point.

Mass balances on the oxidizable chemical constituents in sludge indicated that the nearly anoxic waters of the basins would become wholly anoxic as a result of proposed discharges. From chemical-equilibrium computer modeling of the sludge digester and dilutions of sludge in anoxic seawater, it was predicted that the chemistry of all trace metals except Cr and Mn will be controlled by the precipitation of metal sulfide solids. This metal speciation held for dilutions up to 3000.

The net environmental impacts of this scheme should be salutary. The trace metals in the sludge should be immobilized in the anaerobic bottom sediments of the basins. Apparently no lifeforms higher than bacteria are there to be disrupted. The proposed deep-water discharges would remove the need for potentially expensive and energy-intensive land disposal alternatives and would end the discharge to the highly productive water near the ocean surface.

Relevance: 100.00%

Abstract:

The Laser Interferometer Gravitational-Wave Observatory (LIGO) consists of two complex large-scale laser interferometers designed for direct detection of gravitational waves from distant astrophysical sources in the frequency range 10 Hz to 5 kHz. Direct detection of these space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into the physics of the Universe.

The initial phase of LIGO started in 2002, and since then data have been collected during six science runs. Instrument sensitivity improved from run to run through the efforts of the commissioning team, and initial LIGO reached its design sensitivity during the last science run, which ended in October 2010.

In parallel with commissioning and data analysis on the initial detector, the LIGO group pursued research and development of the next generation of detectors. The major instrument upgrade from initial to Advanced LIGO started in 2010 and lasted until 2014.

This thesis describes the results of commissioning work done at the LIGO Livingston site from 2013 until 2015, in parallel with and after the installation of the instrument. It also discusses new techniques and tools developed at the 40m prototype, including adaptive filtering, estimation of quantization noise in digital filters, and the design of isolation kits for ground seismometers.

The first part of this thesis is devoted to methods for bringing the interferometer into the linear regime, where data collection becomes possible. The states of the longitudinal and angular controls of the interferometer degrees of freedom during the lock acquisition process and in the low-noise configuration are discussed in detail.

Once the interferometer is locked and transitioned to the low-noise regime, the instrument produces astrophysical data that must be calibrated to units of meters or strain. The second part of this thesis describes the online calibration technique set up at both observatories to monitor the quality of the collected data in real time. Sensitivity analyses were performed to understand and eliminate noise sources of the instrument.

Coupling of noise sources into the gravitational wave channel can be reduced if robust feedforward and optimal feedback control loops are implemented. The last part of this thesis describes static and adaptive feedforward noise cancellation techniques applied to the Advanced LIGO interferometers and tested at the 40m prototype. Applications of optimal time-domain feedback control techniques and estimators to aLIGO control loops are also discussed.
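The feedforward idea can be sketched in a few lines (a generic LMS noise-cancellation demo under toy assumptions, not the aLIGO implementation): an adaptive FIR filter learns the coupling from a witness channel (e.g. a seismometer) to the target channel and subtracts its prediction, leaving the residual.

```python
import numpy as np

def lms_cancel(witness, target, taps=8, mu=0.01):
    """LMS adaptive feedforward: learn an FIR filter mapping the
    witness channel to the target and output the residual."""
    w = np.zeros(taps)
    out = np.zeros(len(target))
    for n in range(taps - 1, len(target)):
        x = witness[n - taps + 1:n + 1][::-1]  # most recent sample first
        e = target[n] - w @ x                  # prediction error
        w += 2 * mu * e * x                    # stochastic-gradient update
        out[n] = e
    return out

rng = np.random.default_rng(0)
n = 20000
witness = rng.standard_normal(n)               # e.g. a seismometer signal
coupling = np.array([0.5, -0.3, 0.2])          # toy unknown coupling path
noise = np.convolve(witness, coupling)[:n]     # witness-coupled noise
signal = 0.01 * rng.standard_normal(n)         # underlying channel content
target = signal + noise
residual = lms_cancel(witness, target)
ratio = np.var(residual[1000:]) / np.var(target[1000:])
print(ratio < 0.01)  # → True: the coupled noise is largely cancelled
```

A static feedforward filter would freeze the learned coefficients, while the adaptive version keeps updating them as the coupling drifts, which is the distinction the abstract draws.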

Commissioning work is still ongoing at the sites. The first science run of Advanced LIGO is planned for September 2015 and will last for 3-4 months. It will be followed by a set of small instrument upgrades installed on a time scale of a few months, and a second science run will start in spring 2016 and last for about 6 months. Since the current sensitivity of Advanced LIGO is already more than a factor of 3 better than that of the initial detectors, and keeps improving on a monthly basis, the upcoming science runs have a good chance of making the first direct detection of gravitational waves.

Relevance: 100.00%

Abstract:

Photovoltaic energy conversion represents an economically viable technology for harvesting the largest energy resource known to the Earth -- the sun. Energy conversion efficiency is the most leveraging factor in the price of energy derived from this process. This thesis focuses on two routes to high-efficiency, low-cost devices: first, the use of Group IV semiconductor alloy wire array bottom cells with epitaxially grown Group III-V compound semiconductor alloy top cells in a tandem configuration, and second, GaP growth on planar Si for heterojunction and tandem cell applications.

Metal-catalyzed vapor-liquid-solid grown microwire arrays are an intriguing alternative for wafer-free Si and SiGe materials, which can be removed as flexible membranes. Selected-area Cu-catalyzed vapor-liquid-solid growth of SiGe microwires is achieved using chlorosilane and chlorogermane precursors. The composition can be tuned up to 12% Ge, with a simultaneous decrease in the growth rate from 7 to 1 μm/min. Significant changes to the morphology were observed, including tapering and faceting on the sidewalls and along the lengths of the wires. Characterization of axial and radial cross sections with transmission electron microscopy revealed no evidence of defects at facet corners and edges, and the tapering is shown to be due to in-situ removal of catalyst material during growth. X-ray diffraction and transmission electron microscopy reveal a Ge-rich crystal at the tip of the wires, strongly suggesting that the Ge incorporation is limited by the crystallization rate.

Tandem Ga1-xInxP/Si microwire array solar cells are a route towards a high efficiency, low cost, flexible, wafer-free solar technology. Realizing tandem Group III-V compound semiconductor/Si wire array devices requires optimization of materials growth and device performance. GaP and Ga1-xInxP layers were grown heteroepitaxially with metalorganic chemical vapor deposition on Si microwire array substrates. The layer morphology and crystalline quality have been studied with scanning electron microscopy and transmission electron microscopy, and they provide a baseline for the growth and characterization of a full device stack. Ultimately, the complexity of the substrates and the prevalence of defects resulted in material without detectable photoluminescence, unsuitable for optoelectronic applications.

Coupled full-field optical and device physics simulations of a Ga0.51In0.49P/Si wire array tandem are used to predict device performance. A 500 nm thick, highly doped "buffer" layer between the bottom cell and tunnel junction is assumed to harbor a high density of lattice mismatch and heteroepitaxial defects. Under simulated AM1.5G illumination, the device structure explored in this work has a simulated efficiency of 23.84% with realistic top cell SRH lifetimes and surface recombination velocities. The relative insensitivity to surface recombination is likely due to optical generation further away from the free surfaces and interfaces of the device structure.

Finally, GaP has been grown free of antiphase domains on Si (112) oriented substrates using metalorganic chemical vapor deposition. Low temperature pulsed nucleation is followed by high temperature continuous growth, yielding smooth, specular thin films. Atomic force microscopy topography mapping showed very smooth surfaces (4-6 Å RMS roughness) with small depressions in the surface. Thin films (~ 50 nm) were pseudomorphic, as confirmed by high resolution x-ray diffraction reciprocal space mapping, and 200 nm thick films showed full relaxation. Transmission electron microscopy showed no evidence of antiphase domain formation, but there is a population of microtwin and stacking fault defects.