903 results for Modeling and simulation
Abstract:
While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks and gene regulatory networks, there have been few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications have been restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications has remained unknown.
In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which maps directly onto the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons, which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications in photonics and optoelectronics. Different approaches to using RET networks exist, with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples, such as 1) fluorescent taggants and 2) stochastic computing.
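To make the sampling mechanism concrete, the sketch below simulates a small CTMC whose time-to-absorption is a phase-type random variable, mimicking an exciton hopping between chromophores until photon emission; the three-state network and all rates are hypothetical, not taken from the dissertation.

```python
# Minimal sketch (hypothetical 3-state network): sampling from the phase-type
# distribution implemented by a CTMC, mimicking an exciton hopping between
# chromophores until it reaches the absorbing "emit" state (photon emission).

import random

# rates[i][j]: transition rate from transient state i to state j.
rates = {
    0: {1: 2.0, 2: 0.5},
    1: {0: 1.0, "emit": 3.0},
    2: {1: 0.8, "emit": 0.4},
}

def sample_emission_time(start: int = 0) -> float:
    """One sample of the time to absorption (a phase-type random variate)."""
    state, t = start, 0.0
    while state != "emit":
        out = rates[state]
        total = sum(out.values())
        t += random.expovariate(total)            # exponential holding time
        r, acc = random.uniform(0, total), 0.0    # jump proportional to rates
        for nxt, rate in out.items():
            acc += rate
            if r <= acc:
                state = nxt
                break
    return t

samples = [sample_emission_time() for _ in range(10000)]
print("mean emission time:", sum(samples) / len(samples))
```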
By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by the number of resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime-coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.
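As an illustration of MLE-based identification, the following sketch matches detected photon delay times against a small library of candidate taggants, each modeled here by a two-phase hyperexponential density (a special case of phase-type); the library and all parameters are hypothetical.

```python
# Minimal sketch (hypothetical library): ML identification of a taggant from
# detected photon delay times. Each taggant is a two-phase hyperexponential:
# f(t) = p*l1*exp(-l1 t) + (1-p)*l2*exp(-l2 t).

import math
import random

library = {
    "taggant_A": (0.3, 5.0, 0.8),   # (mixing weight p, rate l1, rate l2)
    "taggant_B": (0.7, 4.0, 1.5),
    "taggant_C": (0.5, 9.0, 0.4),
}

def logpdf(t: float, p: float, l1: float, l2: float) -> float:
    return math.log(p * l1 * math.exp(-l1 * t) + (1 - p) * l2 * math.exp(-l2 * t))

def identify(times):
    """Pick the taggant maximizing the log-likelihood of the photon delays."""
    return max(library, key=lambda k: sum(logpdf(t, *library[k]) for t in times))

# Simulate a few hundred photons from taggant_B, then identify it.
p, l1, l2 = library["taggant_B"]
times = [random.expovariate(l1 if random.random() < p else l2) for _ in range(300)]
print("identified:", identify(times))
```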
In addition, RET-based sampling units (RSUs) can be constructed to accelerate probabilistic algorithms with wide applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware traditional computers use, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor or GPU as a specialized functional unit, or organized as a discrete accelerator, to bring substantial speedups and power savings.
Abstract:
Purpose: The purpose of the study is to review studies published from 2007 to 2015 on tourism and hotel demand modeling and forecasting, with a view to identifying the emerging topics and methods studied and pointing out future research directions in the field.
Design/methodology/approach: Articles on tourism and hotel demand modeling and forecasting published in both Science Citation Index (SCI) and Social Science Citation Index (SSCI) journals were identified and analyzed.
Findings: The review found that studies focused on hotel demand are relatively fewer than those on tourism demand. It is also observed that more and more studies have moved away from aggregate tourism demand analysis, while disaggregate markets and niche products have attracted increasing attention. Some studies have gone beyond neoclassical economic theory to seek additional explanations of the dynamics of tourism and hotel demand, such as environmental factors, tourist online behavior and consumer confidence indicators, among others. More sophisticated techniques such as nonlinear smooth transition regression, mixed-frequency modeling and nonparametric singular spectrum analysis have also been introduced to this research area.
Research limitations/implications: The main limitation of this review is that the articles included cover only the English-language literature; future reviews of this kind should also include articles published in other languages. The review provides a useful guide for researchers interested in future research on tourism and hotel demand modeling and forecasting.
Practical implications: This review provides suggestions and recommendations for improving the efficiency of tourism and hospitality management practices.
Originality/value: The value of this review is that it identifies the current trends in tourism and hotel demand modeling and forecasting research and points out future research directions.
Abstract:
Different types of serious games have been used in teaching computer science topics, including computer games, mobile games, Lego-based games, virtual worlds and web-based games, and different evaluation techniques have been employed, such as questionnaires, interviews, discussions and tests. Simulation has been widely used in computer science as a motivational and interactive learning tool. This paper aims to evaluate the possibility of successful implementation of simulation in computer programming modules. A framework is proposed to measure the impact of serious games on enhancing students' understanding of key computer science concepts. Experiments will be conducted with students of the School of Electronics, Electrical Engineering and Computer Science (EEECS) at Queen's University Belfast to test the framework and obtain results.
Abstract:
The application of custom classification techniques and posterior probability modeling (PPM) to archaeological field survey, using Worldview-2 multispectral imagery, is presented in this paper. Research focuses on the identification of Neolithic felsite stone tool workshops in the North Mavine region of the Shetland Islands in Northern Scotland. Sample data from known workshops surveyed using differential GPS are used alongside known non-sites to train a linear discriminant analysis (LDA) classifier based on a combination of datasets including Worldview-2 bands, band difference ratios (BDR) and topographical derivatives. Principal components analysis is further used to test for and reduce dimensionality caused by redundant datasets. Probability models were generated by LDA using principal components and tested against sites identified through geological field survey. Testing shows the prospective ability of this technique, with significance between 0.05 and 0.01 and gain statistics between 0.90 and 0.94, higher than those obtained using maximum likelihood and random forest classifiers. Results suggest that this approach is best suited to relatively homogeneous site types and performs better with correlated data sources. Finally, by combining posterior probability models and least-cost analysis, a survey least-cost efficacy model is generated, showing the utility of such approaches to archaeological field survey.
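A minimal sketch of the described pipeline, using synthetic data in place of Worldview-2 imagery: PCA reduces the correlated predictors, then an LDA classifier outputs posterior probabilities of "site" versus "non-site". All data and dimensions are illustrative, not the paper's datasets.

```python
# Minimal sketch (synthetic data): PCA to reduce redundant, correlated
# predictors, followed by an LDA classifier producing posterior probabilities.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 200
# 12 correlated predictors standing in for spectral bands, band-difference
# ratios and topographic derivatives (purely synthetic).
base = rng.normal(size=(n, 4))
X = np.hstack([base, base @ rng.normal(size=(4, 8)) + 0.1 * rng.normal(size=(n, 8))])
y = (base[:, 0] + base[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int)  # site / non-site

clf = make_pipeline(PCA(n_components=4), LinearDiscriminantAnalysis())
clf.fit(X[:150], y[:150])

post = clf.predict_proba(X[150:])[:, 1]     # posterior probability of "site"
acc = (clf.predict(X[150:]) == y[150:]).mean()
print(f"hold-out accuracy: {acc:.2f}; first posteriors: {np.round(post[:5], 2)}")
```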
Abstract:
This Licentiate Thesis presents and discusses new contributions in applied mathematics directed towards scientific computing in sports engineering. It considers inverse problems of biomechanical simulations with rigid-body musculoskeletal systems, especially in cross-country skiing. This contrasts with the main body of research on cross-country skiing biomechanics, which is based mainly on experimental testing alone. The thesis consists of an introduction and five papers. The introduction motivates the context of the papers and puts them into a more general framework. Two papers (D and E) model and simulate real questions in cross-country skiing. The results give some interesting indications concerning these challenging questions, which can be used as a basis for further research; however, the measurements are not accurate enough to give final answers. Paper C is a simulation study, more extensive than papers D and E, that is compared against electromyography measurements from the literature. Validation in biomechanical simulations is difficult, and reducing mathematical errors is one way of getting closer to realistic results. Paper A examines well-posedness for forward dynamics with full muscle dynamics, and paper B is a technical report describing the problem formulation, mathematical models and simulation of paper A in more detail. The new modelling, together with the simulations, enables new possibilities, but, as with simulations in other engineering fields, must be handled with care in order to achieve reliable results. The results of this thesis indicate that mathematical modelling and numerical simulation can be very useful in describing cross-country skiing biomechanics. Hence, the thesis contributes to the possibility of beginning to use and develop such modelling and simulation techniques in this context as well.
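As a toy stand-in for the forward-dynamics problems studied in papers A and B, the sketch below integrates a single rigid segment driven by first-order muscle activation dynamics; the model, parameters and excitation signal are hypothetical simplifications, not the thesis's musculoskeletal models.

```python
# Minimal sketch (hypothetical single-joint model): forward dynamics of a
# rigid segment actuated by a muscle with first-order activation dynamics.

import numpy as np
from scipy.integrate import solve_ivp

I, M, G, L = 0.5, 3.0, 9.81, 0.4   # inertia, mass, gravity, CoM lever arm
TAU_ACT = 0.05                      # activation time constant [s]
F_MAX, R = 300.0, 0.05              # max muscle force [N], moment arm [m]

def excitation(t: float) -> float:
    return 1.0 if 0.2 <= t <= 0.6 else 0.1    # prescribed neural input

def rhs(t, y):
    theta, omega, a = y
    da = (excitation(t) - a) / TAU_ACT                 # activation dynamics
    torque = a * F_MAX * R - M * G * L * np.sin(theta) # muscle vs gravity
    return [omega, torque / I, da]

sol = solve_ivp(rhs, (0.0, 1.0), [0.1, 0.0, 0.1], max_step=1e-3)
print(f"final angle: {sol.y[0, -1]:.3f} rad, final activation: {sol.y[2, -1]:.2f}")
```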
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
This thesis describes the design principles of an industrial feeding machine. The system is to be installed between two industrial machines: the apparatus must pace incoming products and synchronize them with the downstream machine, ordering the items by means of a series of variable-speed conveyor belts. Development was carried out at the Liam Laboratory at the request of the company Sitma. Sitma already produced a system of the kind described in this thesis; its goal is to modernize the earlier application, since the device that performed the product pacing was a Siemens PLC that is no longer on the market. The thesis covers the study of the application and its modeling in Matlab-Simulink, followed by an implementation, albeit not conclusive, in TwinCAT 3.
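A minimal sketch of the pacing idea, independent of the Simulink and TwinCAT implementations: each product's belt speed is corrected proportionally so that its predicted arrival time locks onto the nearest downstream machine cycle slot. All parameters are hypothetical.

```python
# Minimal sketch (hypothetical parameters): pacing a product on a
# variable-speed conveyor so it reaches the hand-off point in phase with a
# downstream machine, via a proportional correction of the belt speed.

DT = 0.001          # control period [s]
BELT_LEN = 1.2      # distance to the hand-off point [m]
V_NOM = 0.8         # nominal belt speed [m/s]
CYCLE = 0.5         # downstream machine cycle time [s]
KP = 5.0            # proportional gain [m/s^2 per second of error]
V_MIN, V_MAX = 0.2, 1.5

def pace_product(x0: float, t0: float) -> float:
    """Drive one product to the hand-off point; return its arrival time."""
    x, t, v = x0, t0, V_NOM
    while x < BELT_LEN:
        t_arrive = t + (BELT_LEN - x) / v           # predicted arrival time
        t_slot = CYCLE * round(t_arrive / CYCLE)    # nearest downstream slot
        # Arriving after the slot -> positive error -> speed up, and vice versa.
        v = min(V_MAX, max(V_MIN, v + KP * (t_arrive - t_slot) * DT))
        x += v * DT
        t += DT
    return t

for t0 in (0.00, 0.13, 0.31):                       # staggered input products
    t_arr = pace_product(0.0, t0)
    off = t_arr % CYCLE
    off = min(off, CYCLE - off)
    print(f"in at {t0:.2f}s -> arrives {t_arr:.3f}s (slot offset {off:.3f}s)")
```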
Abstract:
Renovation of existing buildings is a central part of the work towards increased energy efficiency. In order to plan and perform effective energy renovation of the buildings, it is necessary to have adequate information on their current status in terms of architectural features and energy needs. Unfortunately, the official statistics do not include all of the needed information for the whole building stock. This paper aims to fill the gaps in the statistics by gathering data from studies, projects and national energy agencies, and by calibrating TRNSYS models against the existing data to complete, through simulation, the missing energy demand data for countries with similar climate. The survey was limited to residential and office buildings in the EU member states (before July 2013). This work was carried out as part of the EU FP7 project iNSPiRe. The building stock survey revealed that over 70% of the residential and office floor area is concentrated in the six most populated countries. The total energy consumption in the residential sector is 14 times that of the office sector. In the residential sector, single-family houses represent 60% of the heated floor area, albeit with different shares across countries, indicating that retrofit solutions cannot focus only on multi-family houses. The simulation results indicate that residential buildings in central and southern European countries are not always heated to 20 °C, but are kept at a lower temperature during at least part of the day. Improving the energy performance of these houses through renovation could allow the occupants to increase the room temperature and improve their thermal comfort, even though the potential for energy savings would then be reduced.
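As a toy illustration of calibrating a model against reported statistics (not the paper's TRNSYS workflow), the sketch below fits a simple heating-degree-day model to illustrative annual demand figures.

```python
# Minimal sketch (illustrative numbers, not survey data): calibrating a
# linear heating-degree-day model against reported annual heating demand.

import numpy as np
from scipy.optimize import curve_fit

hdd = np.array([1800.0, 2400.0, 3000.0, 3600.0])   # heating degree days [K·d]
demand = np.array([55.0, 80.0, 102.0, 128.0])      # reported demand [kWh/m^2]

def model(hdd, slope, base):
    """Linear degree-day model: demand = slope * HDD + base."""
    return slope * hdd + base

(slope, base), _ = curve_fit(model, hdd, demand)
print(f"calibrated: slope = {slope:.4f} kWh/m^2 per K·d, base = {base:.1f} kWh/m^2")
print("predicted demand at 2100 K·d:", model(2100.0, slope, base))
```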
Abstract:
This work provides a holistic investigation into feature modeling within software product lines. It identifies limitations and challenges of current feature modeling approaches, including, but not limited to, the lack of a satisfactory cognitive presentation, inconvenience in scalable systems, inflexibility in adapting to changes, the absence of predictability of model behavior, and the lack of probabilistic quantification of a model's implications and of decision support for reasoning under uncertainty. The thesis addresses these challenges by proposing a series of solutions. The first solution is the construction of a Bayesian Belief Feature Model, a novel modeling approach capable of quantifying the uncertainty in model parameters by incorporating probabilistic modeling into a conventional modeling approach. The Bayesian Belief Feature Model presents a new, enhanced feature modeling approach in terms of truth quantification and visual expressiveness. The second solution takes into consideration the unclear support for reasoning under uncertainty and the challenging constraint satisfaction problem in software product lines. This has been done through the development of a mathematical reasoner, designed to satisfy the model constraints by considering probability weights for all involved parameters and to quantify the actual implications of the problem constraints. The developed Uncertain Constraint Satisfaction Problem approach has been tested and validated through a set of designated experiments. The main contributions of this thesis include the following:
• Developing a framework for probabilistic graphical modeling to build the proposed Bayesian Belief Feature Model.
• Extending the model to enhance visual expressiveness through the integration of colour-degree variation, in which the colour varies with respect to the predefined probabilistic weights.
• Enhancing the constraint satisfaction problem by measuring the uncertainty of the parameters' truth assumptions.
• Validating the developed approach against different experimental settings to determine its functionality and performance.
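A minimal sketch of the Bayesian-belief idea on a two-feature model, where the cross-tree constraint "B requires A" is encoded as a conditional probability table and beliefs are computed by enumeration; the features and probabilities are hypothetical.

```python
# Minimal sketch (hypothetical feature model): a two-feature Bayesian belief
# feature model where feature B requires feature A, evaluated by enumeration.

from itertools import product

P_A = 0.7                                # prior belief that feature A is selected
P_B_GIVEN_A = {True: 0.5, False: 0.0}    # "B requires A" encoded as a CPT

def joint(a: bool, b: bool) -> float:
    """Joint probability of one full feature configuration."""
    pa = P_A if a else 1 - P_A
    pb = P_B_GIVEN_A[a] if b else 1 - P_B_GIVEN_A[a]
    return pa * pb

# Belief in each configuration, plus the marginal belief in feature B.
for a, b in product([False, True], repeat=2):
    print(f"A={a!s:5} B={b!s:5} P={joint(a, b):.3f}")
print("P(B) =", sum(joint(a, True) for a in (False, True)))
```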
Abstract:
Thesis (Master's)--University of Washington, 2016-08
Abstract:
When something unfamiliar emerges, or when something familiar does something unexpected, people need to make sense of what is going on in order to act. Social representations theory suggests how individuals and society make sense of the unfamiliar, and hence how the resultant social representations (SRs) cognitively, emotionally, and actively orient people and enable communication. SRs are social constructions that emerge through individual and collective engagement with media and through everyday conversations among people. Recent developments in text analysis techniques, in particular topic modeling, provide a potentially powerful analytical method to examine the structure and content of SRs using large samples of narrative or text. In this paper I describe the methods and results of applying topic modeling to 660 micronarratives collected from Australian academics, researchers, government employees, and members of the public in 2010-2011. The narrative fragments focused on adaptation to climate change (CC), and hence provide an example of Australian society making sense of an emerging and conflict-ridden phenomenon. The results of the topic modeling reflect elements of SRs of adaptation to CC that are consistent with findings in the literature, as well as being reasonably robust predictors of classes of action in response to CC. Bayesian Network (BN) modeling was used to identify relationships among the topics (SR elements), and in particular among topics, sentiment, and action. Finally, the resulting model and topic modeling results are used to highlight differences in the salience of SR elements among social groups. Linking topic modeling and BN modeling offers a new and encouraging approach for ongoing research on SRs.
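A minimal sketch of the topic modeling step on a toy corpus (not the 660 micronarratives), using latent Dirichlet allocation to recover themes that would stand in for SR elements.

```python
# Minimal sketch (toy corpus): extracting topics from short narratives with
# LDA, analogous to mining SR elements from micronarratives.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "drought water restrictions farming adaptation",
    "government policy carbon tax emissions",
    "flood insurance coastal homes risk",
    "farming water drought crops irrigation",
    "policy emissions government targets",
    "coastal erosion flood risk planning",
]

vec = CountVectorizer()
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

# Print the top terms of each recovered topic.
terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = comp.argsort()[-4:][::-1]
    print(f"topic {k}:", ", ".join(terms[i] for i in top))
```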
Abstract:
This dissertation contains four essays that share a common purpose: developing new methodologies to exploit the potential of high-frequency data for the measurement, modeling and forecasting of financial asset volatility and correlations. The first two chapters provide useful tools for univariate applications, while the last two develop multivariate methodologies. In chapter 1, we introduce a new class of univariate volatility models named FloGARCH models. FloGARCH models provide a parsimonious joint model for low-frequency returns and realized measures, and are sufficiently flexible to capture long memory as well as asymmetries related to leverage effects. We analyze the performance of the models in a realistic numerical study and on the basis of a data set composed of 65 equities. Using more than 10 years of high-frequency transactions, we document significant statistical gains related to the FloGARCH models in terms of in-sample fit, out-of-sample fit and forecasting accuracy compared to classical and Realized GARCH models. In chapter 2, using 12 years of high-frequency transactions for 55 U.S. stocks, we argue that combining low-frequency exogenous economic indicators with high-frequency financial data improves the ability of conditionally heteroskedastic models to forecast the volatility of returns, their full multi-step-ahead conditional distribution and the multi-period Value-at-Risk. Using a refined version of the Realized LGARCH model allowing for a time-varying intercept and implemented with realized kernels, we document that nominal corporate profits and term spreads have strong long-run predictive ability and generate accurate risk measure forecasts over long horizons. The results are based on several loss functions and tests, including the Model Confidence Set. Chapter 3 is joint work with David Veredas. We study the class of disentangled realized estimators for the integrated covariance matrix of Brownian semimartingales with finite-activity jumps. These estimators separate correlations and volatilities. We analyze different combinations of quantile- and median-based realized volatilities, and four estimators of realized correlations with three synchronization schemes. Their finite-sample properties are studied under four data generating processes, in the presence or absence of microstructure noise, and under synchronous and asynchronous trading. The main finding is that the pre-averaged version of disentangled estimators based on Gaussian ranks (for the correlations) and median deviations (for the volatilities) provides a precise, computationally efficient, and easy alternative for measuring integrated covariances on the basis of noisy and asynchronous prices. Along these lines, a minimum variance portfolio application shows the superiority of this disentangled realized estimator in terms of numerous performance metrics. Chapter 4 is co-authored with Niels S. Hansen, Asger Lunde and Kasper V. Olesen, all affiliated with CREATES at Aarhus University. We propose to use the Realized Beta GARCH model to exploit the potential of high-frequency data in commodity markets. The model produces high-quality forecasts of pairwise correlations between commodities, which can be used to construct a composite covariance matrix. We evaluate the quality of this matrix in a portfolio context and compare it to models used in the industry. We demonstrate significant economic gains in a realistic setting including short-selling constraints and transaction costs.
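To illustrate the shared mechanic of the Realized GARCH family discussed above (not the FloGARCH specification itself), the sketch below runs a conditional-variance recursion driven by a simulated realized measure; all parameters are hypothetical.

```python
# Minimal sketch (simulated data): a variance recursion driven by a realized
# measure, in the spirit of the Realized GARCH family. This is only the basic
# shared idea h_t = omega + beta * h_{t-1} + gamma * RM_{t-1}, not FloGARCH.

import numpy as np

rng = np.random.default_rng(0)
T = 1000
omega, beta, gamma = 0.02, 0.7, 0.25    # hypothetical parameters

# A noisy realized measure around a slowly varying "true" variance.
true_var = 0.5 + 0.4 * np.sin(np.arange(T) / 50.0)
rm = true_var * rng.chisquare(10, T) / 10.0

h = np.empty(T)
h[0] = rm[0]
for t in range(1, T):
    h[t] = omega + beta * h[t - 1] + gamma * rm[t - 1]

r = rng.standard_normal(T) * np.sqrt(h)   # returns with conditional variance h
print("mean fitted variance:", h.mean(), "vs mean realized measure:", rm.mean())
```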
Abstract:
The PhD project addresses the potential of using concentrating solar power (CSP) plants as a viable alternative energy-producing system in Libya. Exergetic, energetic, economic and environmental analyses are carried out for a particular type of CSP plant. Although the study addresses a specific configuration, a 50 MW parabolic trough CSP plant, it is sufficiently general to be applied to other configurations. The novelty of the study, in addition to modeling and analyzing the selected configuration, lies in the use of a state-of-the-art exergetic analysis combined with Life Cycle Assessment (LCA). The modeling and simulation of the plant, carried out in chapter three, is conducted in two parts: the power cycle and the solar field. The computer model developed for the analysis of the plant is based on algebraic equations describing the power cycle and the solar field. The model was solved using the Engineering Equation Solver (EES) software and is designed to define the properties at each state point of the plant and then, sequentially, to determine energy, efficiency and irreversibility for each component. The developed model has the potential to be used in the preliminary design of CSP plants and, in particular, for the configuration of the solar field based on existing commercial plants. Moreover, it is able to analyze the energetic, economic and environmental feasibility of using CSP plants in different regions of the world, which is illustrated for the Libyan region in this study. The overall feasibility scenario is completed through an hourly analysis on an annual basis in chapter four. This analysis allows the comparison of different systems and, eventually, a particular selection; it includes both the economic and energetic components, using the "greenius" software, and also examines the impact of project financing and incentives on the cost of energy. The main technological finding of this analysis is higher performance and a lower levelized cost of electricity (LCE) for Libya as compared to Southern Europe (Spain). Therefore, Libya has the potential to become attractive for the establishment of CSP plants in its territory and, in this way, to facilitate the target of several European initiatives that aim to import electricity generated by renewable sources from North African and Middle East countries. The analysis also presents a brief review of the current cost of energy and the potential for reducing the cost of electricity from parabolic trough CSP plants. Exergetic and environmental life cycle assessment analyses are conducted for the selected plant in chapter five; the objectives are 1) to assess the environmental impact and cost, in terms of exergy, of the life cycle of the plant; 2) to find the points of weakness in terms of irreversibility of the process; and 3) to verify whether solar power plants can reduce the environmental impact and cost of electricity generation, by comparing them with fossil fuel plants, in particular a Natural Gas Combined Cycle (NGCC) plant and an oil thermal power plant. The analysis also includes a thermoeconomic analysis using the specific exergy costing (SPECO) method to evaluate the level of cost caused by exergy destruction. The main technological findings are that the most important impact contribution lies with the solar field, which reports a value of 79%, and that the materials with the highest impact are steel (47%), molten salt (25%) and synthetic oil (21%).
The “Human Health” damage category presents the highest impact (69%), followed by the “Resource” damage category (24%). In addition, the highest exergy demand is linked to the steel (47%), and there is a considerable exergetic demand related to the molten salt and synthetic oil, with values of 25% and 19%, respectively. Finally, in the comparison with fossil fuel power plants (NGCC and oil), the CSP plant presents the lowest environmental impact, while the worst environmental performance is reported for the oil power plant, followed by the NGCC plant. The solar field presents the largest cost rate, and the boiler is the component with the highest cost rate among the power cycle components. Thermal storage allows CSP plants to overcome solar irradiation transients, to respond to electricity demand independent of weather conditions, and to extend electricity production beyond the availability of daylight. A numerical analysis of the thermal transient response of a thermocline storage tank is carried out for the charging phase. The system of equations describing the numerical model is solved using time-implicit and space-backward finite differences, and is encoded within the Matlab environment. The analysis yields the following findings: the predictions agree well with the experiments for the time evolution of the thermocline region, particularly for regions away from the top inlet; the deviations observed near the inlet are most likely due to the high level of turbulence resulting from localized mixing in this region. A simple analytical model taking this increased turbulence level into consideration was developed and leads to some improvement of the predictions; this approach requires practically no additional computational effort and relates the effective thermal diffusivity to the mean effective velocity of the fluid at each height of the system. Altogether, the study indicates that the selected parabolic trough CSP plant has the edge over competing technologies for locations where DNI is high and where land usage is not an issue, such as the shoreline of Libya.
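A minimal sketch of the thermocline discretization described above, using time-implicit and space-backward (upwind) finite differences on a 1-D advection-diffusion model of the tank during charging; the geometry, velocity and diffusivity are hypothetical, and the original model was implemented in Matlab rather than Python.

```python
# Minimal sketch (hypothetical parameters): 1-D advection-diffusion model of a
# thermocline tank during charging, discretized with time-implicit and
# space-backward (upwind) finite differences, as described above.

import numpy as np

N, L = 100, 2.0                  # grid points, tank height [m]
dz = L / (N - 1)
dt = 1.0                         # time step [s]
u = 2e-4                         # mean downward fluid velocity [m/s]
alpha = 2e-6                     # effective thermal diffusivity [m^2/s]
T_in, T_init = 560.0, 290.0      # inlet and initial temperatures [K]

# Implicit scheme: T_i^{n+1}(1 + u dt/dz + 2 a dt/dz^2)
#                  - T_{i-1}^{n+1}(u dt/dz + a dt/dz^2)
#                  - T_{i+1}^{n+1}(a dt/dz^2) = T_i^n
a_lo = -dt * (u / dz + alpha / dz**2)           # coefficient of T_{i-1}
a_di = 1 + dt * (u / dz + 2 * alpha / dz**2)    # coefficient of T_i
a_hi = -dt * alpha / dz**2                      # coefficient of T_{i+1}

A = np.zeros((N, N))
A[0, 0] = 1.0                                   # Dirichlet inlet at the top
for i in range(1, N - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = a_lo, a_di, a_hi
A[-1, -1], A[-1, -2] = 1.0, -1.0                # zero-gradient outlet

T = np.full(N, T_init)
for step in range(3600):                        # one hour of charging
    rhs = T.copy()
    rhs[0] = T_in
    rhs[-1] = 0.0
    T = np.linalg.solve(A, rhs)

z = np.linspace(0, L, N)
mid = np.argmin(np.abs(T - 0.5 * (T_in + T_init)))
print(f"thermocline mid-point after 1 h: z = {z[mid]:.2f} m")
```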
Abstract:
Transient power dissipation profiles in handheld electronic devices alternate between high- and low-power states depending on usage. Capacitive thermal management based on phase change materials potentially offers fan-less thermal management for such transient profiles. However, capacitive management becomes feasible only if there is a significant enhancement in the enthalpy change per unit volume of the phase change material, since existing bulk materials such as paraffin fall short of the requirements. In this thesis I propose novel nanostructured thin-film materials that can potentially exhibit significantly enhanced volumetric enthalpy change. Using the fundamental thermodynamics of phase transitions, calculations of the enhancement resulting from superheating in such thin-film systems are conducted. Furthermore, the design of a microfabricated calorimeter to measure such enhancements is explained in detail. This work advances the state of the art of phase change materials for capacitive cooling of handheld devices.
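As a back-of-envelope illustration of why volumetric enthalpy change is the limiting figure of merit, the sketch below sizes a paraffin layer for a hypothetical transient power burst, using typical order-of-magnitude property values rather than measured data.

```python
# Minimal sketch (order-of-magnitude values): energy budget for capacitive
# thermal management with a PCM, showing why volumetric enthalpy matters.

RHO_PARAFFIN = 800.0     # density [kg/m^3], typical order of magnitude
H_FUS_PARAFFIN = 200e3   # latent heat of fusion [J/kg], typical order of magnitude

def pcm_volume_cm3(power_w: float, duration_s: float,
                   rho: float, h_fus: float) -> float:
    """PCM volume needed to absorb a transient power burst via melting."""
    energy = power_w * duration_s          # J to absorb
    return energy / (rho * h_fus) * 1e6    # m^3 -> cm^3

# A 5 W burst for 60 s in a handheld device (hypothetical transient profile):
v = pcm_volume_cm3(5.0, 60.0, RHO_PARAFFIN, H_FUS_PARAFFIN)
print(f"paraffin volume needed: {v:.2f} cm^3")
# Doubling the volumetric enthalpy (rho * h_fus) halves the required volume,
# which is the motivation for the enhanced thin-film materials above.
```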