873 results for Multi-scale modeling
Abstract:
The seminal multiple-view stereo benchmark evaluations from Middlebury and by Strecha et al. have played a major role in propelling the development of multi-view stereopsis methodology. Although seminal, these benchmark datasets are limited in scope, with few reference scenes. Here, we take these works a step further by proposing a new multi-view stereo dataset that is an order of magnitude larger in the number of scenes and significantly more diverse. Specifically, we propose a dataset containing 80 scenes of large variability. Each scene consists of 49 or 64 accurate camera positions and reference structured-light scans, all acquired by a 6-axis industrial robot. For this dataset we propose an extension of the evaluation protocol from the Middlebury evaluation, reflecting the more complex geometry of some of our scenes. The proposed dataset is used to evaluate the state-of-the-art multi-view stereo algorithms of Tola et al., Campbell et al. and Furukawa et al. We thereby demonstrate the usability of the dataset and gain insight into the workings and challenges of multi-view stereopsis. Through these experiments we empirically validate some of the central hypotheses of multi-view stereopsis, and determine and reaffirm some of its central challenges.
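As an illustration of the accuracy/completeness style of evaluation that the Middlebury protocol established and that an extended protocol of this kind builds on, the following sketch computes both metrics as nearest-neighbor distance percentiles between a reconstruction and a reference scan. The function name and the percentile choice are illustrative assumptions, not the benchmark's reference implementation.

```python
# Hedged sketch of accuracy/completeness MVS metrics (illustrative, not the
# benchmark's reference code). Accuracy: distance from each reconstructed
# point to the nearest reference point; completeness: the reverse.
import numpy as np
from scipy.spatial import cKDTree

def accuracy_completeness(recon_pts, ref_pts, percentile=90):
    """recon_pts, ref_pts: (N, 3) arrays of 3D points."""
    d_recon_to_ref, _ = cKDTree(ref_pts).query(recon_pts)   # accuracy distances
    d_ref_to_recon, _ = cKDTree(recon_pts).query(ref_pts)   # completeness distances
    return (np.percentile(d_recon_to_ref, percentile),
            np.percentile(d_ref_to_recon, percentile))

# Example with synthetic point clouds:
rng = np.random.default_rng(0)
ref = rng.uniform(size=(1000, 3))
recon = ref + rng.normal(scale=0.01, size=ref.shape)
acc, comp = accuracy_completeness(recon, ref)
print(f"accuracy (90th pct): {acc:.4f}, completeness (90th pct): {comp:.4f}")
```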
Abstract:
Removal of dissolved salts and toxic chemicals from water, especially at levels of a few parts per million (ppm), is one of the most difficult problems in water treatment. Several methods are used for water purification; the choice of method depends mainly on the level of feed water salinity, the source of energy and the type of contaminants present. Distillation is an age-old method that can remove all types of dissolved impurities from contaminated water. In multiple effect distillation (MED), the latent heat of steam is recycled several times to produce many units of distilled water from one unit of primary steam input. This is already used in large-capacity plants for treating sea water. The challenge lies in designing a system for small-scale operations that can treat a few cubic meters of water per day, especially suitable for rural communities where the available water is brackish. A small-scale MED unit with an extendable number of effects has been designed and analyzed for optimum yield in terms of total distillate produced.
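As a rough illustration of the latent-heat recycling described above, the sketch below estimates total distillate as a function of the number of effects, assuming a hypothetical per-effect thermal efficiency; the 0.85 factor is an assumption for illustration, not a figure from the study.

```python
# Back-of-the-envelope MED yield estimate (illustrative assumptions only).
def med_distillate(steam_kg_per_day, n_effects, per_effect_eff=0.85):
    """Approximate total distillate: each effect re-uses the latent heat of
    the vapor from the previous effect, so output scales roughly with the
    number of effects, degraded by thermal losses at each reuse."""
    gor = sum(per_effect_eff ** k for k in range(n_effects))  # gained output ratio
    return gor * steam_kg_per_day

for n in (1, 3, 6):
    print(f"{n} effects: {med_distillate(1000.0, n):,.0f} kg/day per 1000 kg/day steam")
```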
Abstract:
Sol-gel-synthesized bioactive glasses may be formed via a hydrolysis-condensation reaction, with silica introduced in the form of tetraethyl orthosilicate (TEOS) and calcium typically added in the form of calcium nitrate. The synthesis reaction proceeds in an aqueous environment; the resultant gel is dried before stabilization by heat treatment. These materials, being amorphous, are complex at the level of their atomic-scale structure, and their bulk properties may only be properly understood on the basis of that structural insight. Thus, a full understanding of their structure-property relationship may only be achieved through the application of a coherent suite of leading-edge experimental probes, coupled with the cogent use of advanced computer simulation methods. Using as an exemplar a calcia-silica sol-gel glass of the kind developed by Larry Hench, to whose memory this paper is dedicated, we illustrate the successful use of high-energy X-ray and neutron scattering (diffraction) methods, magic-angle spinning solid-state NMR, and molecular dynamics simulation as components of a powerful methodology for the study of amorphous materials.
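To make the link between simulation and diffraction concrete, the sketch below computes a radial pair distribution function from a periodic box of atomic coordinates, the kind of quantity through which MD models of amorphous glasses are compared against X-ray and neutron data. The coordinates here are synthetic and the routine is illustrative, not the paper's analysis code.

```python
# Minimal g(r) from coordinates in a cubic periodic box (synthetic data).
import numpy as np

def g_of_r(coords, box, r_max, n_bins=100):
    n = len(coords)
    d = coords[:, None, :] - coords[None, :, :]
    d -= box * np.round(d / box)                        # minimum-image convention
    r = np.linalg.norm(d, axis=-1)[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(r, bins=n_bins, range=(0, r_max))
    rho = n / box**3
    shell = (4 / 3) * np.pi * (edges[1:]**3 - edges[:-1]**3)  # shell volumes
    ideal = rho * shell * n / 2                         # expected counts, ideal gas
    return 0.5 * (edges[1:] + edges[:-1]), hist / ideal

rng = np.random.default_rng(1)
coords = rng.uniform(0, 20.0, size=(500, 3))            # random "atoms", 20 A box
r, g = g_of_r(coords, box=20.0, r_max=8.0)
print(np.round(g[:5], 2))  # ~1 everywhere for an uncorrelated configuration
```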
Abstract:
In the past two decades, multi-agent systems (MAS) have emerged as a new paradigm for conceptualizing large and complex distributed software systems. A multi-agent system view provides a natural abstraction for both the structure and the behavior of modern-day software systems. Although many conceptual frameworks existed for using multi-agent systems, there was no well-established and widely accepted method for modeling them. This dissertation research addressed the representation and analysis of multi-agent systems based on model-oriented formal methods. The objective was to provide a systematic approach for studying MAS at an early stage of system development to ensure the quality of design. Given that there was no well-defined formal model directly supporting agent-oriented modeling, this study was centered on three main topics: (1) adapting a well-known formal model, predicate transition nets (PrT nets), to support MAS modeling; (2) formulating a modeling methodology to ease the construction of formal MAS models; and (3) developing a technique to support machine analysis of formal MAS models using model checking technology. PrT nets were extended to include the notions of dynamic structure, agent communication and coordination to support agent-oriented modeling. An aspect-oriented technique was developed to address the modularity of agent models and the compositionality of incremental analysis. A set of translation rules was defined to systematically translate formal MAS models into concrete models that can be verified with the model checker SPIN (Simple Promela Interpreter). This dissertation presents the framework developed for modeling and analyzing MAS, including a well-defined process model based on nested PrT nets, and a comprehensive methodology to guide the construction and analysis of formal MAS models.
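For readers unfamiliar with the underlying formalism, the sketch below implements the basic place/transition firing semantics that predicate transition nets enrich with structured tokens and predicates. It is a toy illustration (two agents contending for a shared resource), not the dissertation's extended PrT model.

```python
# Minimal place/transition net interpreter (illustrative only).
from dataclasses import dataclass

@dataclass
class Transition:
    name: str
    inputs: dict   # place -> tokens consumed
    outputs: dict  # place -> tokens produced

def enabled(marking, t):
    return all(marking.get(p, 0) >= k for p, k in t.inputs.items())

def fire(marking, t):
    m = dict(marking)
    for p, k in t.inputs.items():
        m[p] -= k
    for p, k in t.outputs.items():
        m[p] = m.get(p, 0) + k
    return m

# Two agents contending for one shared resource (mutual exclusion):
request = Transition("request", {"idle": 1, "resource": 1}, {"busy": 1})
release = Transition("release", {"busy": 1}, {"idle": 1, "resource": 1})
m = {"idle": 2, "resource": 1, "busy": 0}
for t in (request, request, release):       # second request is blocked
    if enabled(m, t):
        m = fire(m, t)
    print(t.name, "->", m)
```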
Abstract:
The dissertation reports on two studies. The purpose of Study I was to develop and evaluate a measure of cognitive competence (the Critical Problem Solving Skills Scale – Qualitative Extension) using Relational Data Analysis (RDA) with a multi-ethnic adolescent sample. My study builds on previous work conducted to provide evidence for the reliability and validity of the RDA framework in evaluating youth development programs (Kurtines et al., 2008). Inter-coder percent agreement between the TOC and TCC coders was moderate to high for each of the category levels, ranging from .76 to .94. Fleiss' kappa across all category levels indicated substantial to almost perfect agreement, ranging from .72 to .91. The correlations between the TOC and the TCC were medium to high, ranging from r(40) = .68, p < .001 to r(40) = .79, p < .001. Study II reports an investigation of a positive youth development program using an Outcome Mediation Cascade (OMC) evaluation model, an integrated model for evaluating the empirical intersection between intervention and developmental processes. The Changing Lives Program (CLP) is a community-supported positive youth development intervention implemented in a practice setting as a selective/indicated program for multi-ethnic, multi-problem at-risk youth in urban alternative high schools in the Miami-Dade County Public Schools (M-DCPS). The 259 participants for this study were drawn from the CLP's archival data file. The study used a structural equation modeling approach to construct and evaluate the hypothesized model. Findings indicated that the hypothesized model fit the data (χ2 (7) = 5.651, p = .83; RMSEA = .00; CFI = 1.00; WRMR = .319). My study built on previous research using the OMC evaluation model (Eichas, 2010), and the findings are consistent with the hypothesis that, in addition to having effects on targeted positive outcomes, PYD interventions are likely to have progressive cascading effects on untargeted problem outcomes that operate through effects on positive outcomes.
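As a pointer to how the reported agreement statistic is computed, the sketch below evaluates Fleiss' kappa from a raters-by-items coding table using the standard formula; the data are synthetic, not the study's codings.

```python
# Fleiss' kappa from an items-by-categories count matrix (synthetic data).
import numpy as np

def fleiss_kappa(counts):
    """counts: (n_items, n_categories); counts[i, j] = number of raters who
    assigned item i to category j. All rows must sum to the same rater count."""
    n_raters = counts.sum(axis=1)[0]
    p_j = counts.sum(axis=0) / counts.sum()               # category proportions
    p_i = (np.sum(counts**2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar, p_e = p_i.mean(), np.sum(p_j**2)               # observed vs chance
    return (p_bar - p_e) / (1 - p_e)

# 6 items coded by 3 raters into 2 categories:
counts = np.array([[3, 0], [2, 1], [3, 0], [0, 3], [1, 2], [3, 0]])
print(f"Fleiss' kappa: {fleiss_kappa(counts):.2f}")
```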
Abstract:
Network simulation is an indispensable tool for studying Internet-scale networks due to their heterogeneous structure, immense size and changing properties. It is crucial for network simulators to generate representative traffic, which is necessary for effectively evaluating next-generation network protocols and applications. In network simulation, we can distinguish between foreground traffic, which is generated by the target applications the researchers intend to study and therefore must be simulated with high fidelity, and background traffic, which represents the network traffic generated by other applications and does not require the same accuracy. Background traffic has a significant impact on foreground traffic, since it competes with the foreground traffic for network resources and can therefore drastically affect the behavior of the applications that produce the foreground traffic. This dissertation aims to provide a solution for meaningfully generating background traffic in three respects. First is realism. Realistic traffic characterization plays an important role in determining the correct outcome of simulation studies. This work starts by enhancing an existing fluid background traffic model, removing two of its unrealistic assumptions. The improved model can correctly reflect the network conditions in the reverse direction of the data traffic and can reproduce the traffic burstiness observed in measurements. Second is scalability. The trade-off between accuracy and scalability is a constant theme in background traffic modeling. This work presents a fast rate-based TCP (RTCP) traffic model, which uses analytical models to represent TCP congestion control behavior. This model outperforms other existing traffic models in that it correctly captures overall TCP behavior while achieving a speedup of more than two orders of magnitude over the corresponding packet-oriented simulation. Third is network-wide traffic generation. Regardless of how detailed or scalable the models are, they mainly focus on generating traffic on a single link, which cannot easily be extended to studies of more complicated network scenarios. This work presents a cluster-based spatio-temporal background traffic generation model that considers spatial and temporal traffic characteristics as well as their correlations. The resulting model can be used effectively for evaluation work in network studies.
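To illustrate the rate-based, analytical style of TCP modeling (as opposed to packet-level simulation), the sketch below uses the classic Mathis et al. square-root throughput formula; the dissertation's RTCP model is its own formulation and is not reproduced here.

```python
# Analytical, rate-based TCP throughput (Mathis et al. square-root formula):
# rate ~ (MSS / RTT) * sqrt(3 / (2p)) for steady-state loss probability p.
from math import sqrt

def tcp_rate_bps(mss_bytes, rtt_s, loss_prob):
    """Steady-state TCP throughput estimate in bits per second."""
    return (mss_bytes * 8 / rtt_s) * sqrt(3 / (2 * loss_prob))

for p in (1e-2, 1e-3, 1e-4):
    print(f"loss={p:g}: {tcp_rate_bps(1460, 0.05, p) / 1e6:.1f} Mb/s")
```

A rate-based model evaluates expressions like this once per epoch instead of simulating every packet, which is where the orders-of-magnitude speedup over packet-oriented simulation comes from.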
Abstract:
A two-dimensional (2D) finite-difference time-domain (FDTD) method is used to analyze two different models of multi-conductor transmission lines (MTLs): the first is a two-conductor MTL and the second a three-conductor MTL. In addition to the MTLs, a three-dimensional (3D) FDTD method is used to analyze a three-patch microstrip parasitic array. While the MTL analysis is carried out entirely in the time domain, the microstrip parasitic array is studied through the scattering parameter S11 in the frequency domain. The results clearly indicate that FDTD is an efficient and accurate tool for modeling and analyzing multi-conductor transmission lines as well as microstrip antennas and arrays.
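The essence of the FDTD method is a leapfrog update of staggered electric and magnetic field grids. The minimal one-dimensional sketch below illustrates that update loop in normalized units; it is a free-space toy problem, not the MTL or microstrip models analyzed above.

```python
# Minimal 1D FDTD (Yee) leapfrog update, normalized units with Courant number 1.
import numpy as np

nz, nt = 200, 400
ez = np.zeros(nz)          # electric field samples
hy = np.zeros(nz - 1)      # magnetic field, staggered half a cell
for n in range(nt):
    hy += np.diff(ez)                                # H update from curl of E
    ez[1:-1] += np.diff(hy)                          # E update from curl of H
    ez[nz // 2] += np.exp(-((n - 30) / 10) ** 2)     # soft Gaussian source
print(f"peak |Ez| after {nt} steps: {np.abs(ez).max():.3f}")
```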
Abstract:
The full-scale base-isolated structure studied in this dissertation is the only base-isolated building in the South Island of New Zealand. It experienced hundreds of earthquake ground motions from September 2010 well into 2012. Several large earthquake responses were recorded in December 2011 by NEES@UCLA and by a GeoNet recording station near Christchurch Women's Hospital. The primary focus of this dissertation is to advance the state of the art in methods to evaluate the performance of seismically isolated structures and the effects of soil-structure interaction, by developing new data processing methodologies to overcome current limitations and by implementing advanced numerical modeling in OpenSees for direct analysis of soil-structure interaction.
This dissertation presents a novel method for recovering force-displacement relations within the isolators of building structures with unknown nonlinearities from sparse seismic-response measurements of floor accelerations. The method requires only direct matrix calculations (factorizations and multiplications); no iterative trial-and-error methods are required. The method requires a mass matrix, or at least an estimate of the floor masses. A stiffness matrix may be used, but is not necessary. Essentially, the method operates on a matrix of incomplete measurements of floor accelerations. In the special case of complete floor measurements of systems with linear dynamics, real modes, and equal floor masses, the principal components of this matrix are the modal responses. In the more general case of partial measurements and nonlinear dynamics, the method extracts a number of linearly-dependent components from Hankel matrices of measured horizontal response accelerations, assembles these components row-wise and extracts principal components from the singular value decomposition of this large matrix of linearly-dependent components. These principal components are then interpolated between floors in a way that minimizes the curvature energy of the interpolation. This interpolation step can make use of a reduced-order stiffness matrix, a backward difference matrix or a central difference matrix. The measured and interpolated floor acceleration components at all floors are then assembled and multiplied by a mass matrix. The recovered in-service force-displacement relations are then incorporated into the OpenSees soil structure interaction model.
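A hedged sketch of the Hankel/SVD step described above: delayed copies of each measured acceleration record are stacked into Hankel matrices, assembled row-wise, and decomposed by the singular value decomposition. The signals below are synthetic, and the interpolation and mass-matrix steps are omitted.

```python
# Hankel-matrix assembly and SVD of measured responses (synthetic signals).
import numpy as np

def hankel_blocks(signals, n_delays=20):
    """signals: (n_channels, n_samples). Stack a Hankel matrix per channel
    and assemble them row-wise."""
    blocks = []
    for x in signals:
        n_cols = len(x) - n_delays + 1
        blocks.append(np.stack([x[i:i + n_cols] for i in range(n_delays)]))
    return np.vstack(blocks)

t = np.linspace(0, 10, 2000)
rng = np.random.default_rng(2)
modes = np.sin(2 * np.pi * 1.2 * t), np.sin(2 * np.pi * 3.4 * t)
signals = np.array([0.8 * modes[0] + 0.3 * modes[1],
                    0.5 * modes[0] - 0.6 * modes[1]]) + 0.05 * rng.normal(size=(2, t.size))
H = hankel_blocks(signals)
U, s, Vt = np.linalg.svd(H, full_matrices=False)
# Each noisy sinusoid spans two delay-coordinate directions, so two modes
# appear as two dominant pairs of singular values:
print("leading singular values:", np.round(s[:6], 1))
```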
Numerical simulations of soil-structure interaction involving non-uniform soil behavior are conducted following the development of the complete soil-structure interaction model of Christchurch Women's Hospital in OpenSees. In these 2D OpenSees models, the superstructure is modeled as two-dimensional frames in the short-span and long-span directions, respectively. The lead rubber bearings are modeled as elastomeric bearing (Bouc-Wen) elements. The soil underlying the concrete raft foundation is modeled with linear elastic plane-strain quadrilateral elements. The non-uniformity of the soil profile is incorporated by extracting and interpolating the shear wave velocity profile from the Canterbury Geotechnical Database. The validity of the complete two-dimensional soil-structure interaction OpenSees model of the hospital is checked by comparing the peak floor responses and the force-displacement relations within the isolation system obtained from the OpenSees simulations against the recorded measurements. General explanations and implications, supported by story drifts, floor acceleration and displacement responses, and force-displacement relations, are presented to address the effects of soil-structure interaction.
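For intuition about the bearing elements, the sketch below integrates a generic Bouc-Wen hysteresis law under an imposed cyclic displacement. The parameters are illustrative assumptions, not the calibrated values of the hospital's OpenSees model.

```python
# Toy integration of a Bouc-Wen hysteresis law (illustrative parameters).
import numpy as np

def bouc_wen_force(x, k=1.0, alpha=0.1, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """x: displacement history. Restoring force = alpha*k*x + (1-alpha)*k*z,
    where z evolves by dz = (A - (beta*sgn(z*dx) + gamma)*|z|^n) dx."""
    z = np.zeros_like(x)
    for i in range(1, len(x)):
        dx = x[i] - x[i - 1]
        z[i] = z[i - 1] + (A - (beta * np.sign(z[i - 1] * dx) + gamma)
                           * abs(z[i - 1]) ** n) * dx
    return alpha * k * x + (1 - alpha) * k * z

t = np.linspace(0, 4 * np.pi, 2000)
x = 2.0 * np.sin(t)                    # imposed cyclic displacement
f = bouc_wen_force(x)
print(f"force range: [{f.min():.2f}, {f.max():.2f}]")  # traces a hysteresis loop in (x, f)
```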
Abstract:
Terrestrial ecosystems, occupying more than 25% of the Earth's surface, can serve as 'biological valves' in regulating the anthropogenic emissions of atmospheric aerosol particles and greenhouse gases (GHGs) as responses to their surrounding environments. While the significance of quantifying the exchange rates of GHGs and atmospheric aerosol particles between the terrestrial biosphere and the atmosphere is hardly questioned in many scientific fields, the progress in improving model predictability, data interpretation or the combination of the two remains impeded by the lack of a precise framework elucidating their dynamic transport processes over a wide range of spatiotemporal scales. The difficulty in developing prognostic modeling tools to quantify the source or sink strength of these atmospheric substances can be further magnified by the fact that the climate system is also sensitive to the feedback from terrestrial ecosystems, forming the so-called 'feedback cycle'. Hence, the emergent need is to reduce uncertainties when assessing this complex and dynamic feedback cycle, which is necessary to support the decisions of mitigation and adaptation policies associated with human activities (e.g., anthropogenic emission controls and land use management) under current and future climate regimes.
With the goal of improving the predictions for the biosphere-atmosphere exchange of biologically active gases and atmospheric aerosol particles, the main focus of this dissertation is on revising and up-scaling the biotic and abiotic transport processes from leaf to canopy scales. The validity of previous modeling studies in determining the exchange rate of gases and particles is evaluated with detailed descriptions of their limitations. Mechanistic modeling approaches along with empirical studies across different scales are employed to refine the mathematical descriptions of surface conductance responsible for gas and particle exchanges as commonly adopted by all operational models. Specifically, how variation in horizontal leaf area density within the vegetated medium, leaf size and leaf microroughness impact the aerodynamic attributes and thereby the ultrafine particle collection efficiency at the leaf/branch scale is explored using wind tunnel experiments with interpretations by a porous media model and a scaling analysis. A multi-layered and size-resolved second-order closure model combined with particle flux and concentration measurements within and above a forest is used to explore the particle transport processes within the canopy sub-layer and the partitioning of particle deposition onto the canopy medium and the forest floor. For gases, a modeling framework accounting for the leaf-level boundary layer effects on the stomatal pathway for gas exchange is proposed and combined with sap flux measurements in a wind tunnel to assess how leaf-level transpiration varies with increasing wind speed. How exogenous environmental conditions and endogenous soil-root-stem-leaf hydraulic and eco-physiological properties impact the above- and below-ground water dynamics in the soil-plant system and shape plant responses to droughts is assessed by a porous media model that accommodates the transient water flow within the plant vascular system and is coupled with the aforementioned leaf-level gas exchange model and a soil-root interaction model. It should be noted that tackling all aspects of the potential issues causing uncertainties in forecasting the feedback cycle between terrestrial ecosystems and the climate is unrealistic in a single dissertation, but further research questions and opportunities based on the foundation derived from this dissertation are also briefly discussed.
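As a concrete example of the leaf-level boundary-layer effect discussed above, the sketch below applies the textbook laminar flat-plate correlation Sh = 0.664 Re^(1/2) Sc^(1/3) to show how boundary-layer conductance scales with wind speed and leaf size. This is a generic scaling argument under standard-air assumptions, not the dissertation's porous-media model.

```python
# Leaf boundary-layer conductance from the laminar flat-plate correlation.
nu = 1.5e-5      # kinematic viscosity of air, m^2/s
D_v = 2.4e-5     # diffusivity of water vapor in air, m^2/s

def boundary_layer_conductance(u, leaf_size):
    """Mass-transfer conductance (m/s) over a leaf of characteristic length
    leaf_size (m) in wind speed u (m/s); grows roughly as sqrt(u / size)."""
    re = u * leaf_size / nu                  # Reynolds number
    sc = nu / D_v                            # Schmidt number
    sh = 0.664 * re**0.5 * sc**(1 / 3)       # Sherwood number
    return sh * D_v / leaf_size

for u in (0.5, 2.0, 8.0):
    print(f"u={u:>4} m/s: g_bl = {boundary_layer_conductance(u, 0.05):.4f} m/s")
```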
Abstract:
While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks and gene regulatory networks, there are few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications have been restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications has remained unknown.
In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications of photonics and optoelectronics. Different approaches to using RET networks exist with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples such as 1) fluorescent taggants and 2) stochastic computing.
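A hedged sketch of the CTMC picture described above: simulate trajectories of a small continuous-time Markov chain until absorption and collect the absorption times, which follow a phase-type distribution. The rate matrix is a toy example, not the rates of any actual RET network.

```python
# Sampling absorption times of a small CTMC (a phase-type distribution).
import numpy as np

rng = np.random.default_rng(3)
# Transient states 0, 1; absorbing state 2 (e.g., photon emission).
Q = np.array([[-3.0,  2.0, 1.0],
              [ 1.0, -2.5, 1.5],
              [ 0.0,  0.0, 0.0]])

def sample_absorption_time(Q, start=0, absorbing=2):
    s, t = start, 0.0
    while s != absorbing:
        rate = -Q[s, s]
        t += rng.exponential(1.0 / rate)            # exponential holding time
        probs = np.clip(Q[s], 0, None) / rate       # jump distribution
        s = rng.choice(len(Q), p=probs / probs.sum())
    return t

samples = [sample_absorption_time(Q) for _ in range(10000)]
print(f"mean absorption time: {np.mean(samples):.3f}")
```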
By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and the Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.
Meanwhile, RET-based sampling units (RSU) can be constructed to accelerate probabilistic algorithms for wide applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware traditional computers use, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor / GPU as specialized functional units or organized as a discrete accelerator to bring substantial speedups and power savings.
Abstract:
Topographic variation, the spatial variation in elevation and terrain features, underpins a myriad of patterns and processes in geography and ecology and is key to understanding the variation of life on the planet. The characterization of this variation is scale-dependent, i.e., it varies with the distance over which features are assessed and with the spatial grain (grid cell resolution) of analysis. A fully standardized, global, multivariate product of different terrain features has the potential to support many large-scale basic research and analytical applications; however, to date, no such product has been available. Here we used the global 250 m GMTED and near-global 90 m SRTM digital elevation model products to derive a suite of topographic variables: elevation, slope, aspect, eastness, northness, roughness, terrain roughness index, topographic position index, vector ruggedness measure, profile and tangential curvature, and 10 geomorphological landform classes. We aggregated each variable to 1, 5, 10, 50 and 100 km spatial grains using several aggregation approaches (median, average, minimum, maximum, standard deviation, percent cover, count, majority, Shannon Index, entropy, uniformity). While a global cross-correlation underlines the high similarity of many variables, a more detailed view of four mountain regions reveals local differences, as well as scale variations in the aggregated variables at different spatial grains. All newly developed variables are available for download at http://www.earthenv.org and can serve as a basis for standardized hydrological, environmental and biodiversity modeling at a global extent.
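The sketch below illustrates the derive-then-aggregate workflow described above on a synthetic DEM: slope from finite differences, roughness as a block standard deviation, and mean aggregation to a coarser grain. It is a generic illustration, not the GMTED/SRTM processing chain itself.

```python
# Terrain derivation and block aggregation on a synthetic DEM.
import numpy as np

rng = np.random.default_rng(4)
dem = rng.normal(scale=50.0, size=(120, 120)).cumsum(axis=0)  # synthetic terrain, m
cell = 250.0                                                  # grid spacing, m

dz_dy, dz_dx = np.gradient(dem, cell)
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

def block_aggregate(arr, factor, func=np.mean):
    """Aggregate to a coarser grain by applying func over factor x factor blocks."""
    h, w = arr.shape
    blocks = arr[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor)
    return func(blocks, axis=(1, 3))

slope_5km = block_aggregate(slope_deg, 20)           # 250 m -> 5 km, block mean
rough_5km = block_aggregate(dem, 20, func=np.std)    # roughness as block std dev
print(slope_5km.shape, f"mean slope {slope_5km.mean():.1f} deg")
```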
Abstract:
The focus of this thesis is to explore and quantify the response of satellite-based gravity observations to large-scale solid mass transfer events. The gravity signature of large-scale solid mass transfers has not yet been deeply explored, mainly due to the lack of significant events during the lifespans of dedicated satellite gravity missions. In light of the next generation of gravity missions, the feasibility of employing satellite gravity observations to detect submarine and surface mass transfers is of importance for geoscience (improving the understanding of geodynamic processes) and for geodesy (improving the understanding of the dynamic gravity field). The aim of this thesis is twofold, focusing on assessing the feasibility of using satellite gravity observations for detecting large-scale solid mass transfers and on modeling the impact of these events on the gravity field. A methodology that employs 3D forward modeling simulations and 2D wavelet multiresolution analysis is suggested to estimate the impact of solid mass transfers on satellite gravity observations. The gravity signatures of various past submarine and subaerial events were estimated. Case studies were conducted to assess the sensitivity and resolvability required to observe gravity differences caused by solid mass transfers. Simulation studies were also employed to assess the expected contribution of the Next Generation of Gravity Missions to this application.
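The simplest form of the forward modeling mentioned above is the point-mass approximation of Newtonian attraction. The sketch below computes the surface gravity change from a hypothetical landslide-scale mass transfer; the magnitudes are purely illustrative, not a reconstruction of any studied event.

```python
# Point-mass forward model of the gravity change from a solid mass transfer.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def delta_g_uGal(mass_kg, depth_m, offset_m=0.0):
    """Vertical gravity effect at the surface of a point-mass anomaly, in microGal."""
    r2 = depth_m**2 + offset_m**2
    g = G * mass_kg * depth_m / r2**1.5      # vertical component of attraction
    return g * 1e8                            # m/s^2 -> microGal

# A hypothetical transfer: ~1 km^3 of rock (rho ~ 2700 kg/m^3) at 2 km depth.
mass = 1e9 * 2700.0
for offset_km in (0, 5, 20):
    print(f"{offset_km:>2} km offset: {delta_g_uGal(mass, 2000.0, offset_km * 1e3):.1f} uGal")
```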
Abstract:
Artisanal mining is a global phenomenon that poses threats to environmental health and safety. Ambiguities in the manner in which ore-processing facilities operate hinder the mining capacity of artisanal miners in Ghana. These problems are reviewed on the basis of current socio-economic, health and safety, and environmental conditions, and of the use of rudimentary technologies that limits fair-trade deals for miners. This research sought to use an established, data-driven, geographic information system (GIS)-based approach employing spatial analysis to locate a centralized processing facility within the Wassa Amenfi-Prestea Mining Area (WAPMA) in the Western region of Ghana. A spatial analysis technique that utilizes ModelBuilder within the ArcGIS geoprocessing environment was used to systematically and simultaneously analyze a geographical dataset of selected criteria through suitability modeling. The spatial overlay analysis methodology and the multi-criteria decision analysis approach were selected to identify the most preferred locations for siting a processing facility. For an optimal site selection, seven major criteria were considered: proximity to settlements, water resources, artisanal mining sites, roads, railways, tectonic zones, and slopes. Site characterizations and environmental considerations incorporated identified constraints, such as proximity to large-scale mines, forest reserves and state lands, on siting an appropriate position. The analysis was limited to criteria that were selected as relevant to the area under investigation. Saaty's analytic hierarchy process was utilized to derive relative importance weights for the criteria, and a weighted linear combination technique was then applied to combine the factors and determine the degree of potential site suitability. The final map output indicates the potential sites identified for establishing a facility centre. The results obtained provide intuitive areas suitable for consideration.
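A hedged sketch of the weighting scheme named above: criterion weights derived as the principal eigenvector of a Saaty pairwise-comparison matrix, then combined over normalized criterion layers by weighted linear combination. The judgments and scores below are hypothetical, not the study's values.

```python
# AHP weights (principal eigenvector) + weighted linear combination.
import numpy as np

# Hypothetical pairwise comparisons among three criteria
# (e.g., proximity to roads, water resources, settlements):
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()                                   # normalized AHP weights
print("weights:", np.round(w, 3))

# Weighted linear combination over normalized (0-1) criterion grids:
rng = np.random.default_rng(5)
criteria = rng.uniform(size=(3, 50, 50))          # stand-in suitability layers
suitability = np.tensordot(w, criteria, axes=1)
print("best cell:", np.unravel_index(suitability.argmax(), suitability.shape))
```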