981 results for Simplified Models.
Abstract:
Summary: Reduction of winter flows by means of watershed models
Abstract:
Physics teachers are in a key position to form the attitudes and conceptions of future generations toward science and technology, as well as to educate future generations of scientists. Therefore, good teacher education is one of the key areas of a physics department's education program. This dissertation is a contribution to the research-based development of high-quality physics teacher education, designed to meet three central challenges of good teaching. The first challenge relates to the organization of physics content knowledge. The second challenge, connected to the first, is to understand the role of experiments and models in (re)constructing the content knowledge of physics for purposes of teaching. The third challenge is to provide pre-service physics teachers with opportunities and resources for reflecting on and assessing their knowledge of and experience with physics and physics education. This dissertation demonstrates how these challenges can be met when the content knowledge of physics, the relevant epistemological aspects of physics and the pedagogical knowledge of teaching and learning physics are combined. The theoretical part of this dissertation is concerned with designing two didactical reconstructions for purposes of physics teacher education: the didactical reconstruction of processes (DRoP) and the didactical reconstruction of structures (DRoS). This part starts by taking into account the required professional competencies of physics teachers, the pedagogical aspects of teaching and learning, and the benefits of graphical ways of representing knowledge. It then continues with the conceptual and philosophical analysis of physics, especially an analysis of the role of experiments and models in constructing knowledge. This analysis is condensed in the form of the epistemological reconstruction of knowledge justification. Finally, these two parts are combined in the design and production of the DRoP and DRoS. The DRoP captures the knowledge formation of physical concepts and laws in a concise and simplified form while retaining the authenticity of the processes by which the concepts were formed. The DRoS is used for representing the structural knowledge of physics, the connections between physical concepts, quantities and laws, to varying extents. Both DRoP and DRoS are represented in graphical form by means of flow charts consisting of nodes and directed links connecting the nodes. The empirical part discusses two case studies that show how the three challenges are met through the use of DRoP and DRoS and how the outcomes of teaching solutions based on them are evaluated. The research approach is qualitative; it aims at an in-depth evaluation and understanding of the usefulness of the didactical reconstructions. The data, which were collected from the advanced course for prospective physics teachers during 2001-2006, consisted of DRoP and DRoS flow charts made by students and of student interviews. The first case study discusses how student teachers used DRoP flow charts to understand the process of forming knowledge about the law of electromagnetic induction. The second case study discusses how student teachers learned to understand the development of physical quantities related to the temperature concept by using DRoS flow charts. In both studies, attention is focused on the use of DRoP and DRoS to organize knowledge and on the role of experiments and models in this organization process.
The results show that the students' understanding of physics knowledge production improved and that their knowledge became more organized and coherent. It is shown that the flow charts, and the didactical reconstructions behind them, played an important role in achieving these positive learning results. On the basis of the results reported here, the designed learning tools have been adopted as a standard part of the teaching solutions used in the physics teacher education courses in the Department of Physics, University of Helsinki.
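Both reconstructions are flow charts, i.e. directed graphs of nodes and labelled links. As a minimal illustration (not the dissertation's actual charts or tooling), such a chart could be encoded as below; the node names and link labels are hypothetical stand-ins:

```python
# Minimal sketch: a DRoP-style flow chart as a directed graph.
# Node names and link labels are hypothetical illustrations,
# not taken from the dissertation's actual charts.
from dataclasses import dataclass, field

@dataclass
class FlowChart:
    nodes: set = field(default_factory=set)
    links: list = field(default_factory=list)  # (source, target, label)

    def add_link(self, source, target, label):
        self.nodes.update({source, target})
        self.links.append((source, target, label))

drop = FlowChart()
drop.add_link("qualitative experiment", "preliminary law", "suggests")
drop.add_link("preliminary law", "quantitative experiment", "guides")
drop.add_link("quantitative experiment", "law of induction", "justifies")

for source, target, label in drop.links:
    print(f"{source} --{label}--> {target}")
```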
Abstract:
We examine the stability of hadron resonance gas models by extending them to include undiscovered resonances through the Hagedorn formula. We find that the influence of unknown resonances on thermodynamics is large but bounded. We model the decays of resonances and investigate the ratios of particle yields in heavy-ion collisions. We find that observables relevant to hydrodynamics, as well as hadron yield ratios, change little upon extending the model. As a result, heavy-ion collisions at RHIC and the LHC are insensitive to a possible exponential rise in the hadronic density of states, which increases the stability of the predictions of hadron resonance gas models in this context. Hadron resonance gases are internally consistent up to a temperature above the QCD crossover temperature, but by examining quark number susceptibilities we find that their region of applicability ends below the QCD crossover.
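For reference, the Hagedorn formula used to populate the spectrum with undiscovered states is, in a common parametrization, an exponentially rising density of states; the constants C, a and m_0, and the Hagedorn temperature T_H, are fit parameters, and the paper's exact choice may differ:

```latex
\rho(m) \;=\; \frac{C}{\left(m^{2}+m_{0}^{2}\right)^{a/2}}\, e^{m/T_{H}}
```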
Abstract:
In this article, the problem of two Unmanned Aerial Vehicles (UAVs) cooperatively searching an unknown region is addressed. The search region is discretized into hexagonal cells, each of which is assumed to carry an uncertainty value. The UAVs must search these cells cooperatively while taking limited endurance, sensor range and communication range constraints into account. Because of limited endurance, the UAVs need to return to a base station for refuelling, and must also select a base station when multiple base stations are present. This article proposes a route planning algorithm that takes the endurance time constraint into account and uses game-theoretic strategies to reduce the uncertainty. The route planning algorithm selects only those cells that ensure the agent can return to at least one of the available bases. A set of paths is formed from these cells, and the game-theoretic strategies then select the path that yields the maximum uncertainty reduction. We explore non-cooperative Nash, cooperative and security strategies from game theory to enhance the search effectiveness. Monte Carlo simulations show the superiority of the game-theoretic strategies over a greedy strategy for paths of different look-ahead step lengths. Among the game-theoretic strategies, the non-cooperative Nash and cooperative strategies perform similarly in the ideal case, but the Nash strategy performs better than the cooperative strategy when the perceived information differs. We also propose a heuristic based on partitioning the search space into sectors, which reduces the computational overhead without degrading performance.
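As a minimal, hedged sketch of the route-planning idea (not the article's algorithm), the endurance check and look-ahead path enumeration on hexagonal cells could look as follows; the grid, fuel model and look-ahead depth are illustrative assumptions:

```python
# Minimal sketch of look-ahead path selection on a hexagonal grid.
# Cells use axial hex coordinates; uncertainty values, look-ahead
# depth, and the fuel check are illustrative assumptions.
from itertools import product

HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def hex_distance(a, b):
    dq, dr = a[0] - b[0], a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

def best_path(pos, uncertainty, base, fuel, depth=3):
    """Enumerate depth-step paths, keep only those from which the
    UAV can still reach the base, and pick the one that removes
    the most uncertainty."""
    best, best_gain = None, -1.0
    for moves in product(HEX_DIRS, repeat=depth):
        path, cur = [], pos
        for dq, dr in moves:
            cur = (cur[0] + dq, cur[1] + dr)
            path.append(cur)
        # Endurance constraint: path length plus return leg within fuel.
        if depth + hex_distance(cur, base) > fuel:
            continue
        gain = sum(uncertainty.get(c, 0.0) for c in set(path))
        if gain > best_gain:
            best, best_gain = path, gain
    return best

uncertainty = {(q, r): 1.0 for q, r in product(range(-3, 4), repeat=2)}
print(best_path(pos=(0, 0), uncertainty=uncertainty, base=(0, 0), fuel=8))
```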
Abstract:
In the thesis we consider inference for cointegration in vector autoregressive (VAR) models. The thesis consists of an introduction and four papers. The first paper proposes a new test for cointegration in VAR models that is based directly on the eigenvalues of the least squares (LS) estimate of the autoregressive matrix. In the second paper we compare a small-sample correction for the likelihood ratio (LR) test of cointegrating rank with the bootstrap. The simulation experiments show that the bootstrap works very well in practice and dominates the correction factor. The tests are applied to international stock price data, and the finite-sample performance of the tests is investigated by simulating the data. The third paper studies the demand for money in Sweden 1970-2000 using the I(2) model. In the fourth paper we re-examine the evidence of cointegration between international stock prices. The paper shows that some of the previous empirical results can be explained by the small-sample bias and size distortion of Johansen's LR tests for cointegration. In all papers we work with two data sets. The first is a Swedish money demand data set with observations on the money stock, the consumer price index, gross domestic product (GDP), the short-term interest rate and the long-term interest rate; the data are quarterly and cover 1970(1)-2000(1). The second consists of month-end stock market index observations for Finland, France, Germany, Sweden, the United Kingdom and the United States from 1980(1) to 1997(2). Both data sets are typical of the sample sizes encountered in economic data, and the applications illustrate the usefulness of the models and tests discussed in the thesis.
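As a hedged illustration of the rank tests at issue, Johansen's LR (trace) test can be run on a simulated cointegrated pair with statsmodels; the simulated two-variable system below is an illustrative stand-in for the stock-index data:

```python
# Minimal sketch of Johansen's LR (trace) test for cointegrating rank.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(0)
n = 200
common = np.cumsum(rng.normal(size=n))   # shared stochastic trend
y1 = common + rng.normal(size=n)
y2 = 0.8 * common + rng.normal(size=n)   # cointegrated with y1
data = np.column_stack([y1, y2])

res = coint_johansen(data, det_order=0, k_ar_diff=1)
for r, (stat, crit) in enumerate(zip(res.lr1, res.cvt[:, 1])):
    print(f"rank <= {r}: trace stat {stat:.2f}, 5% critical value {crit:.2f}")
```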
Abstract:
Non-Gaussianity of signals or noise often results in significant performance degradation for systems designed under the Gaussian assumption, so non-Gaussian signals and noise require a different modelling and processing approach. In this paper, we discuss a new Bayesian estimation technique for non-Gaussian signals corrupted by colored non-Gaussian noise. The method is based on zero-mean finite Gaussian mixture models (GMMs) for both signal and noise. The estimation is done using an adaptive non-causal nonlinear filtering technique: an estimator is derived in terms of the GMM parameters, which are in turn estimated using the EM algorithm. The proposed filter is of finite length and is computationally feasible. Simulations show that the proposed method gives a significant improvement over the linear filter for a wide variety of noise conditions, including impulsive noise. We also find that estimating the signal using its correlation with both past and future samples yields a lower mean squared error than estimation based on past samples alone.
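The estimator is built on GMM parameters fitted with the EM algorithm. As a minimal sketch of that fitting step only (not the paper's non-causal filter), one could fit a two-component mixture to impulsive noise samples; the component count and noise model are illustrative assumptions:

```python
# Minimal sketch: fitting a Gaussian mixture to noise samples via EM.
# Note: sklearn also estimates the component means, so the zero-mean
# constraint of the paper's model is only approximate here.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Impulsive noise: mostly low-variance samples with occasional outliers.
noise = np.where(rng.random(5000) < 0.95,
                 rng.normal(0.0, 1.0, 5000),
                 rng.normal(0.0, 10.0, 5000))

gmm = GaussianMixture(n_components=2, covariance_type="full")
gmm.fit(noise.reshape(-1, 1))
print("weights:  ", gmm.weights_)
print("variances:", gmm.covariances_.ravel())
```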
Abstract:
This study examined the Greeks of options and the trading results of delta hedging strategies under three different time units, or option-pricing models: calendar time, trading time and continuous time using discrete approximation (CTDA). The CTDA time model is a pricing model that, among other things, accounts for intraday and weekend patterns in volatility. For the CTDA time model, some additional theta measures believed to be usable in trading were developed. The study appears to verify that the Greeks differ between the time units, and that these differences influence the delta hedging of options and portfolios. Although it is difficult to say which of the time models is the most usable, since this depends largely on the trader's view of the passing of time, on market conditions and on the portfolio at hand, the CTDA time model can be viewed as an attractive alternative.
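Since the Greeks enter only through the time-to-expiry argument, the effect of switching time units can be illustrated with the standard Black-Scholes delta and theta for a European call. This is a generic sketch, not the study's CTDA model, and all parameter values are illustrative:

```python
# Black-Scholes delta and theta for a European call; the
# time-to-expiry t is exactly where calendar, trading and CTDA
# time diverge, which is why the Greeks differ between time units.
from math import exp, log, sqrt, pi, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_delta_theta(s, k, r, sigma, t):
    d1 = (log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    pdf_d1 = exp(-0.5 * d1**2) / sqrt(2.0 * pi)
    delta = norm_cdf(d1)
    theta = (-s * pdf_d1 * sigma / (2.0 * sqrt(t))
             - r * k * exp(-r * t) * norm_cdf(d2))
    return delta, theta

# Same option measured in calendar time (30/365) vs trading time (21/252):
for label, t in [("calendar", 30 / 365), ("trading", 21 / 252)]:
    d, th = call_delta_theta(s=100, k=100, r=0.03, sigma=0.2, t=t)
    print(f"{label:8s} t={t:.4f}  delta={d:.4f}  theta={th:.4f}")
```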
Abstract:
This paper examines how volatility in financial markets can best be modeled. It investigates how well linear and nonlinear volatility models absorb skewness and kurtosis. The examination covers the Nordic stock markets of Finland, Sweden, Norway and Denmark. Several linear and nonlinear models are applied, and the results indicate that a linear model can almost always be used for modeling the series under investigation, even though nonlinear models perform slightly better in some cases. These results indicate that the markets under study exhibit asymmetric patterns only to a certain degree: negative shocks generally have a more prominent effect on the markets, but the effects are not particularly strong. In terms of absorbing skewness and kurtosis, however, nonlinear models outperform linear ones.
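The paper does not name particular models, but a common linear/nonlinear pairing of this kind is GARCH(1,1) against the asymmetric GJR-GARCH; that choice, and the simulated fat-tailed returns standing in for the Nordic series, are assumptions in this minimal sketch using the Python arch package:

```python
# Minimal sketch comparing a linear (GARCH) and a nonlinear,
# asymmetric (GJR-GARCH) volatility model; model choice and data
# are illustrative assumptions, not the paper's specification.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(2)
returns = rng.standard_t(df=6, size=1500)  # fat-tailed stand-in for daily % returns

linear = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
nonlinear = arch_model(returns, vol="GARCH", p=1, o=1, q=1).fit(disp="off")  # o=1 adds the GJR asymmetry term

# Lower information criteria favor the asymmetric model only if the
# leverage effect in the data is strong enough.
print("GARCH(1,1)  BIC:", linear.bic)
print("GJR-GARCH   BIC:", nonlinear.bic)
```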
Abstract:
This study evaluates three different time units in option pricing: trading time, calendar time and continuous time using discrete approximation (CTDA). The CTDA time model partitions the trading day into 30-minute intervals, each given a weight corresponding to the historical volatility of that interval. Furthermore, non-trading volatility, both overnight and over weekends, is included in the first interval of the trading day in the CTDA model. The three models are tested on market prices. The results indicate that the trading-time model gives the best fit to market prices, in line with the results of previous studies but contrary to expectations under no-arbitrage option pricing, according to which the option premium should reflect the cost of hedging the expected volatility during the option's remaining life. The study concludes that the historical patterns in volatility are not fully accounted for by the market; rather, the market prices options closer to trading time.
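A minimal sketch of the CTDA weighting scheme described above; the half-hour variance profile and the overnight variance are illustrative numbers, not the study's estimates:

```python
# CTDA-style time weighting: the trading day is cut into 30-minute
# intervals, each weighted by its share of historical variance, and
# non-trading (overnight/weekend) variance is folded into the first
# interval. All variance numbers are illustrative.
import numpy as np

intraday_var = np.array([2.0, 1.2, 0.8, 0.6, 0.5, 0.5, 0.6, 0.7,
                         0.6, 0.7, 0.9, 1.1, 1.4])  # 13 half-hours
overnight_var = 3.0
intraday_var[0] += overnight_var      # fold non-trading variance in

weights = intraday_var / intraday_var.sum()

def ctda_elapsed(interval_index):
    """Fraction of a 'day' elapsed after a given 30-min interval."""
    return weights[: interval_index + 1].sum()

print("weight of first interval:", round(weights[0], 3))
print("CTDA time after 4 intervals:", round(ctda_elapsed(3), 3))
```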
Abstract:
We provide a survey of some of our recent results ([9], [13], [4], [6], [7]) on the analytical performance modeling of IEEE 802.11 wireless local area networks (WLANs). We first present extensions of the decoupling approach of Bianchi ([1]) to the saturation analysis of IEEE 802.11e networks with multiple traffic classes. We have found that, even when analysing WLANs with unsaturated nodes, the following state-dependent service model works well: when a certain set of nodes is nonempty, their channel attempt behaviour is obtained from the corresponding fixed-point analysis of the saturated system. We present our experiences in using this approximation to model multimedia traffic over an IEEE 802.11e network using the enhanced DCF channel access (EDCA) mechanism. We have found that we can model TCP-controlled file transfers, VoIP packet telephony, and streaming video in the IEEE 802.11e setting by this simple approximation.
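Bianchi's decoupling approach reduces the saturated system to a two-equation fixed point between the per-node attempt probability tau and the collision probability p. A minimal sketch of that fixed point, with illustrative backoff parameters W and m and simple damped iteration, is:

```python
# Bianchi's saturation fixed point: each of n saturated nodes
# attempts with probability tau, an attempt collides with
# probability p, and the two relations are iterated jointly.
# W (min contention window) and m (backoff stages) are illustrative.

def bianchi_fixed_point(n, W=16, m=6, iters=500, damping=0.5):
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)           # collision probability
        new_tau = 2.0 * (1.0 - 2.0 * p) / (
            (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)
        )
        tau = damping * tau + (1.0 - damping) * new_tau  # damped update
    return tau, p

tau, p = bianchi_fixed_point(n=10)
print(f"attempt prob tau={tau:.4f}, collision prob p={p:.4f}")
```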
Abstract:
Processor architects face the challenging task of evaluating a large design space consisting of several interacting parameters and optimizations. To assist architects in making crucial design decisions, we build linear regression models that relate processor performance to micro-architectural parameters, using simulation-based experiments. We obtain good approximate models through an iterative process in which Akaike's information criterion is used to extract a good linear model from a small set of simulations, and limited further simulation is guided by the model using D-optimal experimental designs. The iterative process is repeated until the desired error bounds are achieved. We used this procedure to establish the relationship of the CPI performance response to 26 key micro-architectural parameters using a detailed cycle-by-cycle superscalar processor simulator. The resulting models provide a significance ordering of all micro-architectural parameters and their interactions, and explain the performance variations of micro-architectural techniques.
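A hedged sketch of the AIC-guided selection step only: fit candidate linear CPI models over subsets of parameters and interactions and keep the lowest-AIC one. The parameter names and synthetic data are illustrative, and the D-optimal design step is omitted:

```python
# AIC-guided subset selection for a linear CPI model; parameter
# names and synthetic response are illustrative assumptions.
import numpy as np
import statsmodels.api as sm
from itertools import combinations

rng = np.random.default_rng(3)
n = 64
X = rng.uniform(size=(n, 3))   # e.g. scaled issue width, ROB size, L2 size
cpi = 1.0 + 0.8 * X[:, 0] + 0.3 * X[:, 0] * X[:, 1] + 0.05 * rng.normal(size=n)

columns = {"issue": X[:, 0], "rob": X[:, 1], "l2": X[:, 2],
           "issue:rob": X[:, 0] * X[:, 1]}

best_aic, best_terms = np.inf, None
for k in range(1, len(columns) + 1):
    for terms in combinations(columns, k):
        design = sm.add_constant(np.column_stack([columns[t] for t in terms]))
        model = sm.OLS(cpi, design).fit()
        if model.aic < best_aic:
            best_aic, best_terms = model.aic, terms

print("selected terms:", best_terms, " AIC:", round(best_aic, 1))
```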
Abstract:
The momentum balance of the linear-combination integral model for the transition zone is investigated for constant-pressure flows. The imbalance is found to be small enough to be negligible for all practical purposes.
Abstract:
This paper reviews computational reliability, computer algebra, stochastic stability and rotating frame turbulence (RFT) in the context of predicting the stability of the blade in-plane mode, a mode which is at best weakly damped. Computational reliability can be built into routine Floquet analysis involving trim analysis and eigenanalysis, and a highly portable special-purpose processor restricted to rotorcraft dynamics analysis is found to be more economical than a multipurpose processor. While RFT effects are dominant in turbulence modeling, the finding that turbulence stabilizes the in-plane mode rests on the assumption that the turbulence is white noise.
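A minimal sketch of the Floquet machinery referred to here: integrate the state-transition matrix of a periodic linear system over one period and check whether the eigenvalues of the monodromy matrix (the Floquet multipliers) lie inside the unit circle. The damped Mathieu-type system below is an illustrative stand-in for the rotor in-plane dynamics, not the paper's model:

```python
# Floquet stability check: build the monodromy matrix of a periodic
# linear system and examine its eigenvalues (Floquet multipliers);
# the mode is stable if all multipliers lie inside the unit circle.
import numpy as np
from scipy.integrate import solve_ivp

T = 2.0 * np.pi                          # period (illustrative)

def system_matrix(t):
    # Lightly damped oscillator with periodic stiffness (Mathieu-type).
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.2 * np.cos(t)), -0.02]])

def rhs(t, phi_flat):
    phi = phi_flat.reshape(2, 2)
    return (system_matrix(t) @ phi).ravel()

sol = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(), rtol=1e-9, atol=1e-9)
monodromy = sol.y[:, -1].reshape(2, 2)
multipliers = np.linalg.eigvals(monodromy)
print("Floquet multipliers:", multipliers)
print("stable:", np.all(np.abs(multipliers) < 1.0))
```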
Abstract:
A conceptually unifying and flexible approach to the ABC and FGH segments of the nortriterpenoid rubrifloradilactone C, each embodying a furo[3,2-b]furanone moiety, from the appropriate Morita-Baylis-Hillman adducts is delineated.