8 results for Enterprise Modeling Ontology
in CaltechTHESIS
Abstract:
Using neuromorphic analog VLSI techniques for modeling large neural systems has several advantages over software techniques. Because they are built from massively parallel analog circuit arrays, much like the circuitry ubiquitous in neural systems, analog VLSI models are extremely fast, particularly when local interactions are important in the computation. While analog VLSI circuits are not as flexible as software methods, the constraints posed by this approach are often very similar to the constraints faced by biological systems. As a result, these constraints can offer many insights into the solutions found by evolution. This dissertation describes a hardware modeling effort to mimic the primate oculomotor system, which requires both fast sensory processing and fast motor control. A one-dimensional hardware model of the primate eye has been built which simulates the physical dynamics of the biological system. It is driven by analog VLSI circuits mimicking the brainstem and cortical circuits that control eye movements. In this framework, a visually-triggered saccadic system is demonstrated which generates averaging saccades. In addition, an auditory localization system, based on the neural circuits of the barn owl, is used to trigger saccades to acoustic targets in parallel with visual targets. Two different types of learning are also demonstrated on the saccadic system using floating-gate technology, which allows non-volatile storage of analog parameters directly on the chip. Finally, a model of visual attention is used to select and track moving targets against textured backgrounds, driving both saccadic and smooth-pursuit eye movements to maintain the image of the target in the center of the field of view. This system represents one of the few efforts in this field to integrate both neuromorphic sensory processing and motor control in a closed-loop fashion.
Abstract:
In the damaging range of response, the force-displacement relationship of a structure is highly nonlinear and history-dependent. For satisfactory analysis of such behavior, it is important to be able to characterize and model the phenomenon of hysteresis accurately. A number of models have been proposed for response studies of hysteretic structures, some of which are examined in detail in this thesis. Two classes of models are widely used in the analysis of curvilinear hysteretic systems. The first is of the distributed element or assemblage type, which models the physical behavior of the system using well-known building blocks. The second class is of the differential equation type, which is based on the introduction of an extra variable to describe the history dependence of the system.
Owing to their mathematical simplicity, the latter models have been used extensively for various applications in structural dynamics, most notably in the estimation of the response statistics of hysteretic systems subjected to stochastic excitation. But the fundamental characteristics of these models are still not clearly understood. A response analysis of systems using both the Distributed Element model and the differential equation model when subjected to a variety of quasi-static and dynamic loading conditions leads to the following conclusion: Caution must be exercised when employing the models belonging to the second class in structural response studies as they can produce misleading results.
Masing's hypothesis, originally proposed for steady-state loading, can be extended to general transient loading as well, leading to considerable simplification in the analysis of Distributed Element models. A simple, nonparametric identification technique is also outlined, by means of which an optimal model representation involving one additional state variable is determined for hysteretic systems.
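The differential-equation class discussed above introduces one extra state variable whose evolution encodes the loading history. As a hedged illustration of that idea (not necessarily the specific model examined in the thesis), the sketch below integrates a Bouc-Wen-type evolution law under quasi-static cyclic loading; all parameter values are arbitrary.

```python
import numpy as np

# Illustrative sketch of a differential-equation hysteresis model (Bouc-Wen
# form): the auxiliary variable z carries the loading history. Parameter
# values are arbitrary and chosen only for demonstration.
def hysteretic_force(x_hist, k=1.0, alpha=0.1, beta=0.5, gamma=0.5, n=1.0):
    """Restoring force for a sampled displacement history x_hist."""
    z = 0.0                      # extra state variable (history dependence)
    forces = []
    x_prev = x_hist[0]
    for x in x_hist:
        dx = x - x_prev
        # Incremental Bouc-Wen evolution law for z
        z += dx * (1.0 - abs(z) ** n * (gamma + beta * np.sign(dx * z)))
        forces.append(alpha * k * x + (1.0 - alpha) * k * z)
        x_prev = x
    return np.array(forces)

# Quasi-static cyclic loading: plotting f against x traces a hysteresis loop
t = np.linspace(0.0, 4.0 * np.pi, 2000)
x = 1.5 * np.sin(t)
f = hysteretic_force(x)
```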
Abstract:
The Earth is very heterogeneous, especially in the region close to its surface and in regions close to the core-mantle boundary (CMB). The lowermost mantle (the bottom 300 km of the mantle) hosts fast anomalies (S velocity 3% faster than PREM, modeled from Scd), slow anomalies (S velocity 3% slower than PREM, modeled from S and ScS), and extremely anomalous structure (ultra-low velocity zones, with S velocity lower by 30% and P velocity lower by 10%). Strong anomalies of larger dimension are also observed beneath Africa and the Pacific, originally modeled from the travel times of S, SKS, and ScS. Given the heterogeneous nature of the Earth, approaches more accurate than travel-time analysis have to be applied to study the details of these anomalous structures, and matching waveforms with synthetic seismograms has proven effective in constraining velocity structure. However, it is difficult to compute synthetic seismograms in more than one dimension, where no exact analytical solution is possible, and numerical methods such as finite differences or finite elements are too time-consuming for modeling body waveforms. We developed a 2D synthetic algorithm, extended from 1D generalized ray theory (GRT), to compute synthetic seismograms efficiently (each seismogram in minutes). This 2D algorithm is related to the WKB approximation but is based on different principles; it is thus named WKM, for WKB modified. WKM has been applied to study the variation of the fast D" structure beneath the Caribbean Sea and the plume beneath Africa. WKM is also applied to study PKP precursors, which are important seismic phases for modeling lower mantle heterogeneity. By matching WKM synthetic seismograms with various data, we discovered and confirmed that (a) the D" beneath the Caribbean varies laterally, and the variation is best revealed with Scd+Sab beyond 88 degrees, where Scd overruns Sab; (b) the low velocity structure beneath Africa is about 1500 km in height and at least 1000 km in width, and features a 3% reduction in S velocity; it consists of a relatively thin low velocity layer (200 km thick or less) beneath the Atlantic that rises very sharply into the mid mantle towards Africa; and (c) at the edges of this huge African low velocity structure, ULVZs are found by modeling the large separation between S and ScS beyond 100 degrees. The ULVZ at the eastern boundary was discovered with SKPdS data and later confirmed by PKP precursor data. This is the first time that a ULVZ has been verified with distinct seismic phases.
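The WKM algorithm itself is specific to this work, but the waveform-matching step it feeds can be illustrated generically. The sketch below scores a synthetic seismogram against an observed trace with a normalized cross-correlation over a range of lags; it is a minimal stand-in for that comparison, not the thesis's procedure.

```python
import numpy as np

# Generic sketch (not the WKM algorithm): score a synthetic seismogram against
# an observed one with a normalized cross-correlation, the usual way a candidate
# velocity model is judged when matching waveforms rather than travel times.
def waveform_fit(observed, synthetic, max_lag):
    """Return (best_lag, best_correlation) over lags in [-max_lag, max_lag] samples."""
    obs = (observed - observed.mean()) / observed.std()
    syn = (synthetic - synthetic.mean()) / synthetic.std()
    best = (0, -np.inf)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = obs[lag:], syn[:len(syn) - lag]
        else:
            a, b = obs[:lag], syn[-lag:]
        n = min(len(a), len(b))
        c = np.dot(a[:n], b[:n]) / n
        if c > best[1]:
            best = (lag, c)
    return best
```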
Abstract:
Experimental work was performed to delineate the system of digested sludge particles and associated trace metals and also to measure the interactions of sludge with seawater. Particle-size and particle number distributions were measured with a Coulter Counter. Number counts in excess of 10^12 particles per liter were found in both the City of Los Angeles Hyperion mesophilic digested sludge and the Los Angeles County Sanitation Districts (LACSD) digested primary sludge. More than 90 percent of the particles had diameters less than 10 microns.
Total and dissolved trace metals (Ag, Cd, Cr, Cu, Fe, Mn, Ni, Pb, and Zn) were measured in LACSD sludge. Manganese was the only metal whose dissolved fraction exceeded one percent of the total metal. Sedimentation experiments for several dilutions of LACSD sludge in seawater showed that the sedimentation velocities of the sludge particles decreased as the dilution factor increased. A tenfold increase in dilution shifted the sedimentation velocity distribution by an order of magnitude. Chromium, Cu, Fe, Ni, Pb, and Zn were also followed during sedimentation. To a first approximation these metals behaved like the particles.
Solids and selected trace metals (Cr, Cu, Fe, Ni, Pb, and Zn) were monitored in oxic mixtures of both Hyperion and LACSD sludges for periods of 10 to 28 days. Less than 10 percent of the filterable solids dissolved or were oxidized. Only Ni was mobilized away from the particles. The majority of the mobilization was complete in less than one day.
The experimental data of this work were combined with oceanographic, biological, and geochemical information to propose and model the discharge of digested sludge to the San Pedro and Santa Monica Basins. A hydraulic computer simulation of a round buoyant jet in a density-stratified medium showed that discharges of the sludge-effluent mixture at depths of 730 m would rise no more than 120 m. Initial jet mixing provided dilution estimates of 450 to 2600. Sedimentation analyses indicated that the solids would reach the sediments within 10 km of the discharge point.
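As a rough, hedged companion to the sedimentation results above, the sketch below uses Stokes' law to show how the settling velocity of the sub-10-micron particles scales with diameter; the density and viscosity values are assumptions for illustration, not the parameters used in the thesis.

```python
# Minimal Stokes-law sketch relating particle diameter to settling velocity in
# seawater; density and viscosity values are illustrative assumptions.
G = 9.81                 # m/s^2
RHO_SEAWATER = 1025.0    # kg/m^3
RHO_PARTICLE = 1250.0    # kg/m^3, assumed wet density of a sludge particle
MU = 1.1e-3              # Pa*s, assumed dynamic viscosity of seawater

def stokes_velocity(diameter_m):
    """Settling velocity (m/s) of a small sphere in the creeping-flow regime."""
    return (RHO_PARTICLE - RHO_SEAWATER) * G * diameter_m ** 2 / (18.0 * MU)

for d_um in (1, 5, 10):
    v = stokes_velocity(d_um * 1e-6)
    print(f"{d_um:2d} um particle: {v * 86400:.3f} m/day")
```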
Mass balances on the oxidizable chemical constituents in sludge indicated that the nearly anoxic waters of the basins would become wholly anoxic as a result of proposed discharges. From chemical-equilibrium computer modeling of the sludge digester and dilutions of sludge in anoxic seawater, it was predicted that the chemistry of all trace metals except Cr and Mn will be controlled by the precipitation of metal sulfide solids. This metal speciation held for dilutions up to 3000.
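The solubility-product logic behind sulfide control of the trace metals can be sketched in a few lines; the equilibrium constant and sulfide activity below are illustrative assumptions, not the inputs to the chemical-equilibrium modeling reported here.

```python
import math

# Hedged sketch of the reasoning behind "metal sulfide solids control the
# trace-metal chemistry": for MS(s) = M2+ + S2-, the dissolved metal activity
# is capped near Ksp / {S2-}. Values below are illustrative assumptions.
K_SP_MS = 1.0e-27     # assumed solubility product for a divalent metal sulfide
SULFIDE = 1.0e-6      # assumed free sulfide activity in anoxic seawater (mol/L)

metal_activity = K_SP_MS / SULFIDE
print(f"dissolved M2+ activity ~ {metal_activity:.1e} mol/L "
      f"(pM ~ {-math.log10(metal_activity):.1f})")
```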
The net environmental impacts of this scheme should be salutary. The trace metals in the sludge should be immobilized in the anaerobic bottom sediments of the basins. Apparently no lifeforms higher than bacteria are there to be disrupted. The proposed deep-water discharges would remove the need for potentially expensive and energy-intensive land disposal alternatives and would end the discharge to the highly productive water near the ocean surface.
Abstract:
Arid and semiarid landscapes comprise nearly a third of the Earth's total land surface. These areas are coming under increasing land use pressures. Despite their low productivity these lands are not barren. Rather, they consist of fragile ecosystems vulnerable to anthropogenic disturbance.
The purpose of this thesis is threefold: (I) to develop and test a process model of wind-driven desertification, (II) to evaluate next-generation process-relevant remote monitoring strategies for use in arid and semiarid regions, and (III) to identify elements for effective management of the world's drylands.
In developing the process model of wind-driven desertification in arid and semiarid lands, field, remote sensing, and modeling observations from a degraded Mojave Desert shrubland are used. This model focuses on aeolian removal and transport of dust, sand, and litter as the primary mechanisms of degradation: killing plants by burial and abrasion, interrupting natural processes of nutrient accumulation, and allowing the loss of soil resources by abiotic transport. This model is tested in field sampling experiments at two sites and is extended by Fourier Transform and geostatistical analysis of high-resolution imagery from one site.
Next, the use of hyperspectral remote sensing data is evaluated as a substantive input to dryland remote monitoring strategies. In particular, the efficacy of spectral mixture analysis (SMA) in discriminating vegetation and soil types and determining vegetation cover is investigated. The results indicate that hyperspectral data may be less useful than often thought in determining vegetation parameters. Its usefulness in determining soil parameters, however, may be leveraged by developing simple multispectral classification tools that can be used to monitor desertification.
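Spectral mixture analysis, as referenced above, models each pixel spectrum as a linear combination of endmember spectra and solves for the cover fractions. The sketch below shows that step with nonnegative least squares; the endmember and pixel values are placeholders, not spectra from this work.

```python
import numpy as np
from scipy.optimize import nnls

# Minimal sketch of linear spectral mixture analysis (SMA): model each pixel
# spectrum as a nonnegative combination of endmember spectra (e.g., green
# vegetation, soil, dry litter). Values here are placeholders.
endmembers = np.array([            # rows: bands, columns: endmembers
    [0.05, 0.30, 0.20],
    [0.08, 0.35, 0.25],
    [0.45, 0.40, 0.30],
    [0.50, 0.45, 0.35],
], dtype=float)

pixel = np.array([0.25, 0.30, 0.38, 0.43])   # observed reflectance in 4 bands

fractions, residual_norm = nnls(endmembers, pixel)
fractions /= fractions.sum()                  # optional sum-to-one normalization
print("estimated cover fractions:", fractions, "residual norm:", residual_norm)
```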
Finally, the elements required for effective monitoring and management of arid and semiarid lands are discussed. Several large-scale, multi-site field experiments are proposed to clarify the role of wind as a landscape-shaping and degradation process in drylands. The role of remote sensing in monitoring the world's drylands is discussed in terms of optimal remote sensing platform characteristics and the surface phenomena that may be monitored to identify areas at risk of desertification. A desertification indicator is proposed that unifies consideration of environmental and human variables.
Abstract:
This thesis addresses a series of topics related to the question of how people find foreground objects in complex scenes. With computer vision modeling as well as psychophysical analyses, we explore the computational principles of low- and mid-level vision.
We first explore the computational methods of generating saliency maps from images and image sequences. We propose an extremely fast algorithm called Image Signature that detects the locations in the image that attract human eye gazes. With a series of experimental validations based on human behavioral data collected from various psychophysical experiments, we conclude that the Image Signature and its spatial-temporal extension, the Phase Discrepancy, are among the most accurate algorithms for saliency detection under various conditions.
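For reference, a minimal sketch of the image-signature idea (saliency from the sign of the DCT of the image) is given below on a single grayscale channel; the smoothing width and test image are illustrative choices rather than the exact published configuration.

```python
import numpy as np
from scipy.fftpack import dct, idct
from scipy.ndimage import gaussian_filter

def dct2(a):
    return dct(dct(a, norm='ortho', axis=0), norm='ortho', axis=1)

def idct2(a):
    return idct(idct(a, norm='ortho', axis=0), norm='ortho', axis=1)

def image_signature_saliency(gray, sigma=3.0):
    """gray: 2-D float array. Returns a saliency map of the same shape."""
    signature = np.sign(dct2(gray))          # the "image signature"
    reconstruction = idct2(signature)        # back to the spatial domain
    saliency = gaussian_filter(reconstruction ** 2, sigma)
    return saliency / saliency.max()

# Example: a bright patch on a noisy background should light up in the map.
img = np.random.rand(64, 64) * 0.1
img[20:30, 35:45] += 1.0
sal = image_signature_saliency(img)
```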
In the second part, we bridge the gap between fixation prediction and salient object segmentation with two efforts. First, we propose a new dataset that contains both fixation and object segmentation information. By presenting the two types of human data simultaneously in the same dataset, we are able to analyze their intrinsic connection and to understand the drawbacks of today’s “standard” but inappropriately labeled salient object segmentation dataset. Second, we propose an algorithm for salient object segmentation. Based on our novel discoveries about the connection between fixation data and salient object segmentation data, our model significantly outperforms all existing models on all three datasets by large margins.
In the third part of the thesis, we discuss topics around the human factors of boundary analysis. Closely related to salient object segmentation, boundary analysis focuses on delimiting the local contours of an object. We identify potential pitfalls of algorithm evaluation for the problem of boundary detection. Our analysis indicates that today’s popular boundary detection datasets contain a significant level of noise, which may severely influence benchmarking results. To give further insight into the labeling process, we propose a model that characterizes the human factors at work during labeling.
The analyses reported in this thesis offer new perspectives on a series of interrelated issues in low- and mid-level vision. They raise warning signs about some of today’s “standard” procedures, while proposing new directions to encourage future research.
Abstract:
The Madden-Julian Oscillation (MJO) is a pattern of intense rainfall and associated planetary-scale circulations in the tropical atmosphere, with a recurrence interval of 30-90 days. Although the MJO was first discovered 40 years ago, it is still a challenge to simulate the MJO in general circulation models (GCMs), and even with simple models it is difficult to agree on the basic mechanisms. This deficiency is mainly due to our poor understanding of moist convection—deep cumulus clouds and thunderstorms, which occur at scales that are smaller than the resolution elements of the GCMs. Moist convection is the most important mechanism for transporting energy from the ocean to the atmosphere. Success in simulating the MJO will improve our understanding of moist convection and thereby improve weather and climate forecasting.
We address this fundamental subject by analyzing observational datasets, constructing a hierarchy of numerical models, and developing theories. Parameters of the models are taken from observation, and the simulated MJO fits the data without further adjustments. The major findings include: 1) the MJO may be an ensemble of convection events linked together by small-scale high-frequency inertia-gravity waves; 2) the eastward propagation of the MJO is determined by the difference between the eastward and westward phase speeds of the waves; 3) the planetary scale of the MJO is the length over which temperature anomalies can be effectively smoothed by gravity waves; 4) the strength of the MJO increases with the typical strength of convection, which increases in a warming climate; 5) the horizontal scale of the MJO increases with the spatial frequency of convection; and 6) triggered convection, where potential energy accumulates until a threshold is reached, is important in simulating the MJO. Our findings challenge previous paradigms, which consider the MJO as a large-scale mode, and point to ways for improving the climate models.
Abstract:
Progress is made on the numerical modeling of both laminar and turbulent non-premixed flames. Instead of solving the transport equations for the numerous species involved in the combustion process, the present study proposes reduced-order combustion models based on local flame structures.
For laminar non-premixed flames, curvature and multi-dimensional diffusion effects are found critical for the accurate prediction of sooting tendencies. A new numerical model based on modified flamelet equations is proposed. Sooting tendencies are calculated numerically using the proposed model for a wide range of species. These first numerically-computed sooting tendencies are in good agreement with experimental data. To further quantify curvature and multi-dimensional effects, a general flamelet formulation is derived mathematically. A budget analysis of the general flamelet equations is performed on an axisymmetric laminar diffusion flame. A new chemistry tabulation method based on the general flamelet formulation is proposed. This new tabulation method is applied to the same flame and demonstrates significant improvement compared to previous techniques.
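Flamelet-based chemistry tabulation, the family of methods examined above, amounts to pre-computing a quantity on a grid of flamelet coordinates and interpolating it at run time. The sketch below shows that lookup pattern with a placeholder table; the coordinates, grid, and tabulated quantity are assumptions for illustration, not the formulation proposed in the thesis.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Generic sketch of flamelet-style chemistry tabulation: pre-tabulate a
# quantity (e.g., a species mass fraction or source term) over flamelet
# coordinates, then interpolate at run time instead of solving detailed
# chemistry. Grid and values here are placeholders.
Z = np.linspace(0.0, 1.0, 51)              # mixture fraction
C = np.linspace(0.0, 1.0, 41)              # normalized progress variable
table = np.outer(4.0 * Z * (1.0 - Z), C)   # placeholder tabulated quantity

lookup = RegularGridInterpolator((Z, C), table, bounds_error=False, fill_value=0.0)

# Run-time query at the local (Z, C) of a computational cell:
value = lookup([[0.3, 0.6]])[0]
```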
For turbulent non-premixed flames, a new model to account for chemistry-turbulence interactions is proposed. It is found that these interactions are not important for radicals and small species, but substantial for aromatic species. The validity of various existing flamelet-based chemistry tabulation methods is examined, and a new linear relaxation model is proposed for aromatic species. The proposed relaxation model is validated against full chemistry calculations. To further quantify the importance of aromatic chemistry-turbulence interactions, Large-Eddy Simulations (LES) have been performed on a turbulent sooting jet flame. The aforementioned relaxation model is used to provide closure for the chemical source terms of transported aromatic species. The effects of turbulent unsteadiness on soot are highlighted by comparing the LES results with a separate LES using fully-tabulated chemistry. It is shown that turbulent unsteady effects are of critical importance for the accurate prediction of not only the inception locations, but also the magnitude and fluctuations of soot.
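The exact linear relaxation model is specific to this work, but the generic form of such a closure (a transported quantity relaxing toward a tabulated flamelet value at a finite time scale) can be sketched as follows; names and values are illustrative assumptions.

```python
# Generic sketch of a linear relaxation closure: a transported aromatic mass
# fraction Y relaxes toward a tabulated flamelet value Y_flamelet at a finite
# time scale tau, rather than being assumed in instantaneous equilibrium with
# the flamelet solution. Values are illustrative assumptions.
def relax_step(Y, Y_flamelet, tau, dt):
    """One explicit step of dY/dt = (Y_flamelet - Y) / tau."""
    return Y + dt * (Y_flamelet - Y) / tau

Y = 0.0          # initial transported aromatic mass fraction
Y_fl = 2.0e-4    # tabulated flamelet value at the local mixture fraction
tau = 5.0e-3     # relaxation time scale [s]
dt = 1.0e-4      # time step [s]
history = []
for _ in range(200):
    Y = relax_step(Y, Y_fl, tau, dt)
    history.append(Y)
# After roughly 3*tau the transported value approaches the flamelet value.
```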