947 results for modeling and model calibration
Abstract:
While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks, and gene regulatory networks, there have been few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications are restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents, and difficult readout limit them to proof-of-concept purposes. To date, it remains unknown whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications.
In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons, which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications in photonics and optoelectronics. RET networks can be used in different ways, with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples, such as 1) fluorescent taggants and 2) stochastic computing.
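A minimal sketch of the sampling mechanism described above, assuming a hypothetical three-chromophore network with made-up transfer rates (the rate matrix is illustrative, not taken from the dissertation): simulating the CTMC until it hits the absorbing "photon emitted" state produces one sample from the configured phase-type distribution.

```python
import random

# Hypothetical CTMC for a 3-chromophore RET network (rates are illustrative).
# States 0..2 are transient (exciton on chromophore i); state 3 is absorbing
# (fluorescence photon emitted). Q[i][j] is the transition rate i -> j.
Q = [
    [0.0, 2.0, 0.5, 1.0],
    [1.5, 0.0, 2.5, 0.5],
    [0.2, 1.0, 0.0, 3.0],
]

def sample_phase_type(start=0):
    """Simulate the CTMC from `start` until absorption; return the hitting time."""
    t, state = 0.0, start
    while state != 3:
        rates = Q[state]
        total = sum(rates)
        t += random.expovariate(total)        # exponential holding time
        # Jump to the next state with probability proportional to its rate.
        r, acc = random.uniform(0.0, total), 0.0
        for j, rate in enumerate(rates):
            acc += rate
            if r <= acc:
                state = j
                break
    return t

samples = [sample_phase_type() for _ in range(10000)]
print(sum(samples) / len(samples))  # empirical mean photon-arrival time
```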
By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime-coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and Maximum Likelihood Estimation (MLE)-based taggant identification achieves high accuracy even with only a few hundred detected photons.
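A hedged sketch of the MLE identification step, with a toy taggant library in which each temporal signature is reduced to a single exponential decay rate (real signatures would be full phase-type distributions; all names and rates are invented):

```python
import math

# Hypothetical taggant library: each taggant is summarized here by a single
# exponential decay rate (a crude stand-in for its full phase-type signature).
TAGGANTS = {"A": 0.8, "B": 1.6, "C": 3.2}

def log_likelihood(times, rate):
    # Exponential density: f(t) = rate * exp(-rate * t)
    return sum(math.log(rate) - rate * t for t in times)

def identify(times):
    """Return the taggant whose model maximizes the likelihood of the photon times."""
    return max(TAGGANTS, key=lambda name: log_likelihood(times, TAGGANTS[name]))

# A few hundred detected photon arrival times would go here.
print(identify([0.4, 0.9, 0.2, 1.1, 0.6]))
```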
Meanwhile, RET-based sampling units (RSUs) can be constructed to accelerate probabilistic algorithms for a wide range of applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware traditional computers use, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor/GPU as a specialized functional unit or organized as a discrete accelerator to bring substantial speedups and power savings.
Abstract:
Petri Nets are a formal, graphical, and executable modeling technique for the specification and analysis of concurrent and distributed systems, and they have been widely applied in computer science and many other engineering disciplines. Low-level Petri nets are simple and useful for modeling control flows but not powerful enough to define data and system functionality. High-level Petri nets (HLPNs) have been developed to support data and functionality definitions, such as using complex structured data as tokens and algebraic expressions as transition formulas. Compared to low-level Petri nets, HLPNs result in compact system models that are easier to understand. Therefore, HLPNs are more useful in modeling complex systems. There are two issues in using HLPNs: modeling and analysis. Modeling concerns abstracting and representing the systems under consideration using HLPNs, and analysis deals with effective ways to study the behaviors and properties of the resulting HLPN models. In this dissertation, several modeling and analysis techniques for HLPNs are studied and integrated into a framework that is supported by a tool. For modeling, this framework integrates two formal languages: a type of HLPN called Predicate Transition Nets (PrT Nets) is used to model a system's behavior, and a first-order linear-time temporal logic (FOLTL) is used to specify the system's properties. The main contribution of this dissertation with regard to modeling is the development of a software tool to support the formal modeling capabilities in this framework. For analysis, this framework combines three complementary techniques: simulation, explicit-state model checking, and bounded model checking (BMC). Simulation is a straightforward and speedy method, but it only covers some execution paths in an HLPN model. Explicit-state model checking covers all the execution paths but suffers from the state explosion problem. BMC is a tradeoff, as it provides a certain level of coverage while being more efficient than explicit-state model checking. The main contribution of this dissertation with regard to analysis is adapting BMC to analyze HLPN models and integrating the three complementary analysis techniques in a software tool to support the formal analysis capabilities in this framework. The SAMTools developed for this framework integrates three tools: PIPE+ for HLPN behavioral modeling and simulation, SAMAT for hierarchical structural modeling and property specification, and PIPE+Verifier for behavioral verification.
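As a minimal illustration of the token-game semantics such simulation executes, the sketch below implements a toy low-level place/transition net (PrT nets additionally carry structured tokens and algebraic transition formulas; all names here are invented):

```python
# Illustrative place/transition net: a transition fires when every input
# place holds enough tokens, consuming and producing tokens atomically.
marking = {"ready": 2, "resource": 1, "done": 0}

# name -> (input arcs, output arcs); arc weights are all 1 in this toy net.
transitions = {
    "start": ({"ready": 1, "resource": 1}, {"done": 1, "resource": 1}),
}

def enabled(name):
    inputs, _ = transitions[name]
    return all(marking[p] >= w for p, w in inputs.items())

def fire(name):
    inputs, outputs = transitions[name]
    for p, w in inputs.items():
        marking[p] -= w
    for p, w in outputs.items():
        marking[p] += w

while enabled("start"):      # simulate until no transition is enabled
    fire("start")
print(marking)               # {'ready': 0, 'resource': 1, 'done': 2}
```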
Abstract:
Bulk gallium nitride (GaN) power semiconductor devices have gained significant interest in recent years, creating the need for technology computer-aided design (TCAD) simulation to accurately model and optimize these devices. This paper comprehensively reviews and compares different GaN physical models and model parameters in the literature and discusses the appropriate selection of these models and parameters for TCAD simulation. 2-D drift-diffusion semi-classical simulation is carried out for 2.6 kV and 3.7 kV bulk GaN vertical PN diodes. The simulated forward current-voltage and reverse breakdown characteristics are in good agreement with the measurement data, even over a wide temperature range.
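For reference, the drift-diffusion system such a simulation solves couples Poisson's equation for the electrostatic potential \(\psi\) with continuity equations for the electron and hole densities \(n\) and \(p\); this is the standard textbook form, not notation taken from the paper:

```latex
\nabla \cdot (\varepsilon \nabla \psi) = -q\,(p - n + N_D^+ - N_A^-)
\frac{\partial n}{\partial t} = \frac{1}{q}\,\nabla \cdot \mathbf{J}_n + G - R,
\qquad \mathbf{J}_n = q\,\mu_n n\,\mathbf{E} + q\,D_n \nabla n
\frac{\partial p}{\partial t} = -\frac{1}{q}\,\nabla \cdot \mathbf{J}_p + G - R,
\qquad \mathbf{J}_p = q\,\mu_p p\,\mathbf{E} - q\,D_p \nabla p
```

The reviewed physical models and parameters enter through the mobilities \(\mu_n, \mu_p\), the generation and recombination terms \(G, R\), and the impact-ionization model used for breakdown.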
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
This work provides a holistic investigation into the realm of feature modeling within software product lines. The work presented identifies limitations and challenges within current feature modeling approaches. These limitations include, but are not limited to, the lack of satisfactory cognitive presentation, inconvenience in scalable systems, inflexibility in adapting to changes, the absence of predictability of model behavior, and the lack of probabilistic quantification of a model's implications and of decision support for reasoning under uncertainty. The work in this thesis addresses these challenges by proposing a series of solutions. The first solution is the construction of a Bayesian Belief Feature Model, a novel modeling approach capable of quantifying the uncertainty in model parameters by incorporating probabilistic modeling into a conventional modeling approach. The Bayesian Belief Feature Model presents a new, enhanced feature modeling approach in terms of truth quantification and visual expressiveness. The second solution takes into consideration the unclear support for reasoning under uncertainty and the challenging constraint satisfaction problem in software product lines. This has been done through the development of a mathematical reasoner, designed to satisfy the model constraints by considering probability weights for all involved parameters and quantifying the actual implications of the problem constraints. The developed Uncertain Constraint Satisfaction Problem approach has been tested and validated through a set of designated experiments. In summary, the main contributions of this thesis include the following:
• Develop a framework for probabilistic graphical modeling to build the purported Bayesian Belief Feature Model.
• Extend the model to enhance visual expressiveness through the integration of colour degree variation, in which the colour varies with respect to predefined probabilistic weights.
• Enhance the constraint satisfaction problem with uncertainty measures of the parameters' truth assumptions.
• Validate the developed approach against different experimental settings to determine its functionality and performance.
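A minimal sketch of the probabilistic-quantification idea behind a Bayesian belief feature model, assuming a toy two-feature model with invented probabilities (not the thesis's actual parameters): a child feature's inclusion probability is conditioned on its parent, and queries are answered by marginalization.

```python
# Toy Bayesian belief feature model (all probabilities are illustrative).
P_ROOT = 0.9                       # P(parent feature selected)
P_CHILD = {True: 0.7, False: 0.1}  # P(child selected | parent state)

def joint(parent, child):
    """Joint probability of one parent/child feature configuration."""
    p = P_ROOT if parent else 1 - P_ROOT
    pc = P_CHILD[parent]
    return p * (pc if child else 1 - pc)

# Probability that the child feature ends up in a product (marginalization):
p_child = sum(joint(par, True) for par in (True, False))
print(p_child)  # 0.9*0.7 + 0.1*0.1 = 0.64
```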
Abstract:
When something unfamiliar emerges, or when something familiar does something unexpected, people need to make sense of what is emerging or going on in order to act. Social representations theory suggests how individuals and society make sense of the unfamiliar and hence how the resultant social representations (SRs) cognitively, emotionally, and actively orient people and enable communication. SRs are social constructions that emerge through individual and collective engagement with media and with everyday conversations among people. Recent developments in text analysis techniques, and in particular topic modeling, provide a potentially powerful analytical method to examine the structure and content of SRs using large samples of narrative or text. In this paper I describe the methods and results of applying topic modeling to 660 micronarratives collected from Australian academics/researchers, government employees, and members of the public in 2010-2011. The narrative fragments focused on adaptation to climate change (CC) and hence provide an example of Australian society making sense of an emerging and conflict-ridden phenomenon. The results of the topic modeling reflect elements of SRs of adaptation to CC that are consistent with findings in the literature, as well as being reasonably robust predictors of classes of action in response to CC. Bayesian Network (BN) modeling was used to identify relationships among the topics (SR elements), and in particular relationships among topics, sentiment, and action. Finally, the resulting model and topic modeling results are used to highlight differences in the salience of SR elements among social groups. The approach of linking topic modeling and BN modeling offers a new and encouraging approach to analysis for ongoing research on SRs.
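A hedged sketch of a generic topic modeling pipeline of this kind, using scikit-learn's LDA on placeholder texts (this is not the author's pipeline; the study's 660 micronarratives are stood in for by three invented fragments):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder micronarratives (invented); the study used 660 collected in 2010-2011.
docs = [
    "drought water restrictions farming adaptation",
    "government policy carbon emissions planning",
    "coastal flooding insurance risk households",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Top words per topic approximate candidate social-representation elements.
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = weights.argsort()[::-1][:4]
    print(f"topic {k}:", [terms[i] for i in top])
```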
Abstract:
This dissertation contains four essays that all share a common purpose: developing new methodologies to exploit the potential of high-frequency data for the measurement, modeling, and forecasting of financial asset volatility and correlations. The first two chapters provide useful tools for univariate applications, while the last two chapters develop multivariate methodologies. In chapter 1, we introduce a new class of univariate volatility models named FloGARCH models. FloGARCH models provide a parsimonious joint model for low-frequency returns and realized measures, and are sufficiently flexible to capture long memory as well as asymmetries related to leverage effects. We analyze the performance of the models in a realistic numerical study and on the basis of a data set composed of 65 equities. Using more than 10 years of high-frequency transactions, we document significant statistical gains related to the FloGARCH models in terms of in-sample fit, out-of-sample fit, and forecasting accuracy compared to classical and Realized GARCH models. In chapter 2, using 12 years of high-frequency transactions for 55 U.S. stocks, we argue that combining low-frequency exogenous economic indicators with high-frequency financial data improves the ability of conditionally heteroskedastic models to forecast the volatility of returns, their full multi-step-ahead conditional distribution, and the multi-period Value-at-Risk. Using a refined version of the Realized LGARCH model allowing for a time-varying intercept and implemented with realized kernels, we document that nominal corporate profits and term spreads have strong long-run predictive ability and generate accurate risk-measure forecasts over long horizons. The results are based on several loss functions and tests, including the Model Confidence Set. Chapter 3 is a joint work with David Veredas. We study the class of disentangled realized estimators for the integrated covariance matrix of Brownian semimartingales with finite-activity jumps. These estimators separate correlations and volatilities. We analyze different combinations of quantile- and median-based realized volatilities, and four estimators of realized correlations with three synchronization schemes. Their finite-sample properties are studied under four data-generating processes, in the presence or absence of microstructure noise, and under synchronous and asynchronous trading. The main finding is that the pre-averaged version of disentangled estimators based on Gaussian ranks (for the correlations) and median deviations (for the volatilities) provides a precise, computationally efficient, and easy alternative for measuring integrated covariances on the basis of noisy and asynchronous prices. Along these lines, a minimum-variance portfolio application shows the superiority of this disentangled realized estimator in terms of numerous performance metrics. Chapter 4 is co-authored with Niels S. Hansen, Asger Lunde, and Kasper V. Olesen, all affiliated with CREATES at Aarhus University. We propose to use the Realized Beta GARCH model to exploit the potential of high-frequency data in commodity markets. The model produces high-quality forecasts of pairwise correlations between commodities, which can be used to construct a composite covariance matrix. We evaluate the quality of this matrix in a portfolio context and compare it to models used in the industry. We demonstrate significant economic gains in a realistic setting including short-selling constraints and transaction costs.
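As a point of reference for the conditional-variance machinery these models extend, here is a plain GARCH(1,1) recursion with illustrative parameters (FloGARCH itself, which adds realized measures and long memory, is not reproduced):

```python
# Plain GARCH(1,1) recursion (parameters are illustrative), the baseline that
# realized-measure extensions such as FloGARCH build on:
#   sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]
omega, alpha, beta = 0.02, 0.08, 0.90

def garch_variances(returns, sigma2_0):
    """Conditional variance path given a return series and a starting variance."""
    sigma2 = [sigma2_0]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

rets = [0.5, -1.2, 0.3, 0.8, -0.4]
print(garch_variances(rets, sigma2_0=1.0))
```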
Abstract:
This dissertation presents work done in the design, modeling, and fabrication of magnetically actuated microrobot legs. Novel fabrication processes for manufacturing multi-material compliant mechanisms have been used to fabricate effective legged robots at both the meso and micro scales, where the meso scale refers to the transition between the macro and micro scales. This work discusses the development of a novel mesoscale manufacturing process, Laser Cut Elastomer Refill (LaCER), for prototyping millimeter-scale multi-material compliant mechanisms with elastomer hinges. Additionally discussed is an extension of previous work on the development of a microscale manufacturing process for fabricating micrometer-scale multi-material compliant mechanisms with elastomer hinges, with the added contribution of a method for incorporating magnetic materials for mechanism actuation using externally applied fields. As both of the fabrication processes outlined make significant use of highly compliant elastomer hinges, a fast, accurate modeling method for these hinges was desired for mechanism characterization and design. An analytical model was developed for this purpose, making use of the pseudo-rigid-body (PRB) model and extending its utility to hinges with a significant stretch component, such as those fabricated from elastomer materials. This model includes three springs with stiffnesses relating to material stiffness and hinge geometry, with additional correction factors for aspects particular to common multi-material hinge geometry. This model has been verified against a finite element analysis (FEA) model, which in turn was matched to experimental data on mesoscale hinges manufactured using LaCER. These modeling methods have additionally been verified against experimental data from microscale hinges manufactured using the Si/elastomer/magnetics MEMS process. The development of several mechanisms is also discussed, including: a mesoscale LaCER-fabricated hexapedal millirobot capable of walking at 2.4 body lengths per second; prototyped mesoscale LaCER-fabricated underactuated legs with asymmetrical features for improved performance; one-cubic-centimeter LaCER-fabricated magnetically actuated hexapods that use the best-performing underactuated leg design to locomote at up to 10.6 body lengths per second; five microfabricated magnetically actuated single-hinge mechanisms; a 14-hinge, 11-link microfabricated gripper mechanism; a microfabricated robot leg mechanism demonstrated clearing a step height of 100 micrometers; and a 4 mm x 4 mm x 5 mm, 25 mg microfabricated magnetically actuated hexapod, demonstrated walking at up to 2.25 body lengths per second.
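A minimal sketch of the pseudo-rigid-body idea in its simplest form, replacing a short flexure with a pin joint plus a single torsional spring (the dissertation's three-spring model with stretch terms and correction factors is not reproduced; all dimensions below are invented):

```python
# Simplest pseudo-rigid-body idealization of a short flexure hinge: a pin
# joint with torsional spring K = E*I/L. The dissertation's three-spring
# model adds stretch/shear springs and correction factors not shown here.
def torsional_stiffness(E, width, thickness, length):
    I = width * thickness**3 / 12.0   # second moment of area of the hinge
    return E * I / length             # N*m per radian

# Illustrative elastomer hinge: E ~ 1 MPa, 1 mm wide, 0.1 mm thick, 0.5 mm long.
K = torsional_stiffness(E=1e6, width=1e-3, thickness=1e-4, length=5e-4)
print(K, "N*m/rad")   # restoring torque = K * bend angle
```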
Abstract:
Crop models are simplified mathematical representations of the interacting biological and environmental components of the dynamic soil–plant–environment system. Sorghum crop modeling has evolved in parallel with crop modeling capability in general since its origins in the 1960s and 1970s. Here we briefly review the trajectory in sorghum crop modeling leading to the development of advanced models. We then (i) overview the structure and function of the sorghum model in the Agricultural Production Systems sIMulator (APSIM) to exemplify advanced modeling concepts that suit both agronomic and breeding applications, (ii) review an example of the use of sorghum modeling in supporting agronomic management decisions, (iii) review an example of the use of sorghum modeling in plant breeding, and (iv) consider implications for future roles of sorghum crop modeling. Modeling and simulation provide an avenue to explore the consequences of crop management decision options in situations confronted with risks associated with seasonal climate uncertainties. Here we consider the possibility of manipulating planting configuration and density in sorghum as a means to manage the productivity–risk trade-off. A simulation analysis of decision options is presented, and avenues for its use with decision-makers are discussed. Modeling and simulation also provide opportunities to improve breeding efficiency, either by dissecting complex traits into more amenable targets for genetics and breeding, or by trait evaluation via phenotypic prediction in target production regions to help prioritize effort and assess breeding strategies. Here we consider studies on the stay-green trait in sorghum, which confers a yield advantage in water-limited situations, to exemplify both aspects. The possible future roles of sorghum modeling in agronomy and breeding are discussed, as are opportunities related to their synergistic interaction. The potential to add significant value to the revolution in plant breeding associated with genomic technologies is identified as the new modeling frontier.
Abstract:
Water regimes in the Brazilian Cerrados are sensitive to climatological disturbances and human intervention. The risk that critical water-table levels are exceeded over long periods of time can be estimated by applying stochastic methods to model the dynamic relationship between water levels and driving forces such as precipitation and evapotranspiration. In this study, a transfer function-noise model, the so-called PIRFICT model, is applied to estimate the dynamic relationship between water-table depth and precipitation surplus/deficit in a watershed with a groundwater monitoring scheme in the Brazilian Cerrados. Critical limits were defined for a period in the Cerrados agricultural calendar, the end of the rainy season, when extremely shallow levels (< 0.5-m depth) can pose a risk to plant health and machinery before harvesting. By simulating time-series models, the risk of exceeding critical thresholds during a continuous period of time (e.g., 10 days) is described by probability levels. These simulated probabilities were interpolated spatially using universal kriging, incorporating information related to the drainage basin from a digital elevation model, which reduced the uncertainty of the resulting map. Three areas were identified as presenting potential risk at the end of the rainy season. These areas deserve attention with respect to water management and land-use planning.
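A heavily simplified sketch of the Monte Carlo risk estimate described above, using a toy transfer function-noise recursion rather than the calibrated PIRFICT model (all parameters and the threshold logic are invented stand-ins):

```python
import random

# Toy transfer function-noise simulation: water-table depth responds to
# precipitation surplus through an exponentially decaying impulse response,
# plus AR(1) noise. We estimate by Monte Carlo the probability of at least
# 10 consecutive days shallower than the 0.5 m critical depth.
def simulate_depths(days=120, base=0.7, gain=0.004, memory=0.95, phi=0.8):
    response, noise, depths = 0.0, 0.0, []
    for _ in range(days):
        surplus = random.gauss(2.0, 8.0)        # precipitation surplus, mm/day
        response = memory * response + gain * surplus
        noise = phi * noise + random.gauss(0.0, 0.02)
        depths.append(base - response + noise)  # water table rises with surplus
    return depths

def critical_run(depths, threshold=0.5, run=10):
    streak = 0
    for d in depths:
        streak = streak + 1 if d < threshold else 0
        if streak >= run:
            return True
    return False

risk = sum(critical_run(simulate_depths()) for _ in range(2000)) / 2000
print("P(>= 10 consecutive days shallower than 0.5 m) ~", risk)
```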
Abstract:
A primary goal of this dissertation is to understand the links between mathematical models that describe crystal surfaces at three fundamental length scales: the scale of individual atoms, the scale of collections of atoms forming crystal defects, and the macroscopic scale. Characterizing connections between different classes of models is a critical task for gaining insight into the physics they describe, a long-standing objective in applied analysis, and also highly relevant in engineering applications. The key concept I use in each problem addressed in this thesis is coarse graining, a strategy for connecting fine representations or models with coarser representations. Often this idea is invoked to reduce a large discrete system to an appropriate continuum description, e.g., individual particles are represented by a continuous density. While there is no general theory of coarse graining, one closely related mathematical approach is asymptotic analysis, i.e., the description of limiting behavior as some parameter becomes very large or very small. In the case of crystalline solids, it is natural to consider cases where the number of particles is large or where the lattice spacing is small. Limits such as these often make explicit the nature of the links between models capturing different scales and, once established, provide a means of improving our understanding, or the models themselves. Finding appropriate variables whose limits illustrate the important connections between models is no easy task, however. This is one area where computer simulation is extremely helpful, as it allows us to see the results of complex dynamics and gather clues regarding the roles of different physical quantities. On the other hand, connections between models enable the development of novel multiscale computational schemes, so understanding can assist computation and vice versa. Some of these ideas are demonstrated in this thesis. The important outcomes of this thesis include: (1) a systematic derivation of the step-flow model of Burton, Cabrera, and Frank, with corrections, from an atomistic solid-on-solid-type model in 1+1 dimensions; (2) the inclusion of an atomistically motivated transport mechanism in an island dynamics model, allowing for a more detailed account of mound evolution; and (3) the development of a hybrid discrete-continuum scheme for simulating the relaxation of a faceted crystal mound. Central to all of these modeling and simulation efforts is the presence of steps composed of individual layers of atoms on vicinal crystal surfaces. Consequently, a recurring theme in this research is the observation that mesoscale defects play a crucial role in crystal morphological evolution.
Abstract:
Agroforestry has large potential for carbon (C) sequestration while providing many economic, social, and ecological benefits via its diversified products. Airborne lidar is considered the most accurate technology for mapping aboveground biomass (AGB) at landscape levels. However, little research has been done in the past to study the AGB of agroforestry systems using airborne lidar data. Focusing on an agroforestry system in the Brazilian Amazon, this study first predicted plot-level AGB using fixed-effects regression models that assumed the regression coefficients to be constants. The model prediction errors were then analyzed from the perspectives of tree DBH (diameter at breast height)–height relationships and plot-level wood density, which suggested the need to stratify agroforestry fields to improve plot-level AGB modeling. We separated teak plantations from other agroforestry types and predicted AGB using mixed-effects models that can incorporate the variation of the AGB–height relationship across agroforestry types. We found that, at the plot scale, mixed-effects models led to better model prediction performance (based on leave-one-out cross-validation) than the fixed-effects models, with the coefficient of determination (R2) increasing from 0.38 to 0.64. At the landscape level, the difference between the AGB densities from the two types of models was ~10% on average and up to ~30% at the pixel level. This study suggests the importance of stratification based on tree AGB allometry and the utility of mixed-effects models in modeling and mapping the AGB of agroforestry systems.
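A hedged sketch of the mixed-effects formulation, with a random intercept per agroforestry type, using statsmodels on made-up plot data (column names and values are illustrative, not the study's data):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative plot-level data (values invented): lidar canopy height vs. AGB,
# grouped by agroforestry type so intercepts can vary across types.
df = pd.DataFrame({
    "agb":    [45, 60, 52, 66, 120, 140, 133, 112, 80, 95, 88, 72],
    "height": [8, 10, 9, 11, 15, 17, 16, 14, 11, 13, 12, 10],
    "type":   ["teak"]*4 + ["mixed"]*4 + ["palm"]*4,
})

# Random intercept per agroforestry type; fixed effect of lidar height.
model = smf.mixedlm("agb ~ height", df, groups=df["type"]).fit()
print(model.summary())
```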
Abstract:
The purpose of this project is to develop a three-dimensional block model for a garnet deposit in Alder Gulch, Madison County, Montana. Garnets occur in the Precambrian metamorphic Red Wash gneiss and similar rocks in the vicinity. This project seeks to model the percentage of garnet in a deposit called the Section 25 deposit using the Surpac software. The data available for this work are drillhole, trench, and grab-sample data obtained from previous exploration of the deposit. The creation of the block model involves validating the data, creating composites of assayed garnet percentages, and computing basic statistics on the composites using Surpac's statistical tools. Variogram analysis will be conducted on the composites to quantify the continuity of the garnet mineralization. A three-dimensional block model will be created and filled with estimates of garnet percentage using different methods of reserve estimation, and the results will be compared.
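As an illustration of one of the simpler reserve-estimation methods such software offers, the sketch below computes an inverse-distance-weighted block estimate from invented composite data (kriging, which uses the fitted variogram, is the more rigorous alternative):

```python
import math

# Inverse-distance-weighted (IDW) estimate of garnet percentage for one block
# centroid from nearby composites (all coordinates and grades are invented).
composites = [  # (x, y, z, garnet %)
    (10.0, 20.0, 5.0, 12.5),
    (14.0, 18.0, 6.0, 10.8),
    (9.0, 25.0, 4.0, 15.1),
]

def idw(block, samples, power=2.0):
    num = den = 0.0
    for x, y, z, grade in samples:
        d = math.dist(block, (x, y, z))
        if d == 0.0:
            return grade           # block centroid coincides with a sample
        w = 1.0 / d**power
        num += w * grade
        den += w
    return num / den

print(idw((11.0, 21.0, 5.0), composites))
```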
Abstract:
Conventional vehicles create pollution problems, contribute to global warming, and deplete high-density fuels. To address these problems, automotive companies and universities are researching hybrid electric vehicles, in which two different power devices are used to propel a vehicle. This research studies the development and testing of a dynamic model for the Prius 2010 Hybrid Synergy Drive (HSD), a power-split device. The device was modeled and integrated with a hybrid vehicle model. To add an electric-only mode for vehicle propulsion, the hybrid synergy drive was modified by adding a clutch to carrier 1. The performance of the integrated vehicle model was tested on the UDDS drive cycle using a rule-based control strategy. The dSPACE Hardware-in-the-Loop (HIL) simulator was used for the HIL simulation test. The HIL simulation result shows that the integration of the developed HSD dynamic model with a hybrid vehicle model was successful. The HSD model was able to split power and isolate engine speed from vehicle speed in hybrid mode.
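A minimal sketch of the kinematic constraint at the heart of a power-split device like the HSD, the planetary-gear (Willis) equation; the tooth counts and speeds below are illustrative, not taken from this research:

```python
# Kinematic constraint of a planetary power-split gear set (Willis equation):
#   Zs * w_sun + Zr * w_ring = (Zs + Zr) * w_carrier
# In the HSD arrangement, the engine drives the carrier, MG1 sits on the sun
# gear, and the ring connects toward the wheels. Tooth counts are illustrative.
ZS, ZR = 30, 78

def mg1_speed(engine_rpm, ring_rpm):
    """Sun-gear (MG1) speed implied by given engine (carrier) and ring speeds."""
    return ((ZS + ZR) * engine_rpm - ZR * ring_rpm) / ZS

# Engine held near an efficient speed while vehicle (ring) speed varies:
print(mg1_speed(engine_rpm=2000, ring_rpm=1500))  # MG1 absorbs the difference
print(mg1_speed(engine_rpm=0, ring_rpm=1500))     # electric-only mode: engine stopped
```

This is how the device isolates engine speed from vehicle speed: MG1 on the sun gear takes up whatever speed difference the constraint requires.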
Abstract:
For the past three decades, the automotive industry has faced two main conflicting challenges: improving fuel economy and meeting emissions standards. This has driven engineers and researchers around the world to develop engines and powertrains that can meet these two daunting challenges. Focusing on internal combustion engines, there are very few options to enhance their performance beyond the current standards without increasing the price considerably. Homogeneous Charge Compression Ignition (HCCI) engine technology is one of the combustion techniques with the potential to partially meet the current critical challenges, including CAFE standards and stringent EPA emissions standards. HCCI works on very lean mixtures compared to current SI engines, resulting in very low combustion temperatures and ultra-low NOx emissions. When controlled accurately, these engines also produce ultra-low soot. On the other hand, HCCI engines suffer from high unburnt hydrocarbon and carbon monoxide emissions. The technology also faces an acute combustion control problem which, if not dealt with properly, yields highly unfavorable operating conditions and exhaust emissions. This thesis contains two main parts. One part deals with developing an HCCI experimental setup, and the other focuses on developing a grey-box modeling technique to control HCCI exhaust gas emissions. The experimental part gives complete details of the modifications made to the stock engine to run it in HCCI mode. This part also comprises details and specifications of all the sensors, actuators, and other auxiliary parts attached to the conventional SI engine in order to run and monitor the engine in SI mode and in future SI-HCCI mode-switching studies. In the latter part, around 600 data points from two different HCCI setups for two different engines are studied, and a grey-box model for emission prediction is developed. The grey-box model is trained with 75% of the data, and the remaining data are used for validation. An average 70% increase in accuracy for predicting engine performance is found when using the grey-box model over an empirical (black-box) model. The grey-box model provides a solution to the difficulty of real-time control of an HCCI engine, and it is the first study in the literature to develop a control-oriented model for predicting HCCI engine emissions for control.
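A toy illustration of the grey-box idea: keep a physics-motivated model structure and fit only its parameters to data (the structure, data, and fitting method below are invented stand-ins, not the thesis's model):

```python
import math
import random

# Grey-box modeling in miniature: assume CO emissions follow an
# Arrhenius-like dependence on exhaust temperature T, then fit the
# unknown coefficients (a, b) from data instead of deriving them.
def co_model(T, a, b):
    return a * math.exp(-b / T)

# Made-up training data: (exhaust temperature [K], measured CO [g/kWh]),
# generated from "true" values a=5.0, b=1200 plus small noise.
data = [(T, 5.0 * math.exp(-1200.0 / T) + random.gauss(0, 0.01))
        for T in range(600, 1001, 50)]

# Crude grid-search fit of (a, b); a real study would use nonlinear least squares.
best = min(((a / 10, b) for a in range(10, 100) for b in range(800, 1601, 25)),
           key=lambda p: sum((co - co_model(T, *p))**2 for T, co in data))
print("fitted a, b:", best)  # should land near (5.0, 1200)
```

The physics-motivated structure is what distinguishes a grey-box model from a purely empirical (black-box) fit, and it is what makes the fitted model cheap enough to evaluate for real-time control.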