17 results for C30 - General-Sectional Models

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

100.00%

Publisher:

Abstract:

This thesis presents the development of a modified matrix-geometric technique for a more general queueing system. The queueing system consists of several queues with finite capacities. The thesis also examines PH-type distributions and their behaviour when split. The structure corresponds to the resulting Markov chain, composed of independent matrices with a QBD (quasi-birth-death) structure. Certain finite state spaces are treated as well. The result of this thesis is the presentation of the system in matrix-geometric form, obtained by modifying the matrix-geometric solution.
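As background for the matrix-geometric method named above, here is a minimal sketch (not the thesis's modified technique): for a quasi-birth-death process with scalar 1x1 blocks, the rate factor R solves R = A0 + A1·R + A2·R², and its minimal solution can be found by fixed-point iteration. All numbers are invented for illustration.

```python
# Hedged sketch of the matrix-geometric idea behind QBD queues, reduced to
# scalar (1x1) blocks so it stays self-contained. a0, a1, a2 are the
# probabilities of the level going up, staying put, and going down per step.
def qbd_r_factor(a0, a1, a2, tol=1e-12, max_iter=100_000):
    """Minimal solution of R = a0 + a1*R + a2*R**2 by fixed-point iteration."""
    r = 0.0
    for _ in range(max_iter):
        r_next = a0 + a1 * r + a2 * r * r
        if abs(r_next - r) < tol:
            return r_next
        r = r_next
    return r

# Stationary level probabilities are then geometric: pi_n proportional to r**n.
r = qbd_r_factor(0.3, 0.2, 0.5)   # stable queue, since a0 < a2
```

For a stable queue the iteration converges to the minimal root from below; the full matrix case replaces the scalars with block matrices and the products with matrix products.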

Relevance:

100.00%

Publisher:

Abstract:

We present a brief résumé of the history of solidification research and the key factors affecting the solidification of fusion welds. There is general agreement on the basic solidification theory, albeit differing - even confusing - nomenclatures exist, and Cases 2 and 3 (the Chalmers basic boundary conditions for solidification, categorized by Savage as Cases) are variably emphasized. Model Frame, a tool that helps to model the continuum of fusion weld solidification from start to end, is proposed. It incorporates the general solidification models, from which the pertinent ones are selected for the actual modeling. The basic models are the main solidification Cases 1…4. These discrete Cases are joined by Sub-Cases: models of Pfann, Flemings and others, bringing the needed Sub-Case variables into the model. Model Frame depicts a grain growing from the weld interface to the weld centerline. Besides modeling, Model Frame supports education and academic debate. New mathematical modeling techniques will extend its use into multi-dimensional modeling, introducing new variables and increasing the modeling accuracy. We propose a melting/solidification model (M/S-model) predicting the solute profile at the start of the solidification of a fusion weld. This Case 3-based Sub-Case takes into account the melting stage, the solute back-diffusion in the solid, and the growth rate acceleration typical of fusion welds. We propose - based on the works of Rutter & Chalmers, David & Vitek and our experimental results on copper - that the NEGS-EGS transition is not associated only with the cellular-dendritic transition. Solidification was studied experimentally on pure and doped copper at welding speeds from 0 to 200 cm/min, with one test at 3000 cm/min. Only planar and cellular structures were found; no dendrites, columnar or equiaxed.
Cell substructures - rows of cubic elements we call "cubelettes", "cell-bands" and "micro-cells" - were detected, as well as an anomalous crack morphology, the "crack-eye", and microscopic hot-crack nuclei we call "grain-lag cracks", caused by a grain slightly lagging behind its neighbors in arriving at the weld centerline. The Varestraint test and R-test revealed a change of crack morphologies from centerline cracks to grain- and cell-boundary cracks with increasing welding speed. High speed made the cracks invisible to the naked eye and hardly detectable with a light microscope, while an electron microscope often revealed networks of fine micro-cracks.
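The M/S-model itself is not reproduced here, but the textbook starting point that such solute-profile Sub-Cases refine is the classic Scheil relation for solidification with no back-diffusion in the solid. A background sketch with invented alloy numbers:

```python
# Background sketch only: the classic Scheil equation for the solute content
# of solid forming at solid fraction fs, Cs(fs) = k*C0*(1 - fs)**(k - 1),
# valid when there is no back-diffusion in the solid. This is not the
# thesis's M/S-model, which adds melting, back-diffusion and growth-rate
# acceleration on top of this kind of baseline.
def scheil_solid_concentration(c0, k, fs):
    """Solute content of solid forming at solid fraction fs (0 <= fs < 1)."""
    return k * c0 * (1.0 - fs) ** (k - 1.0)

# Example: dilute alloy, C0 = 1.0 wt%, partition coefficient k = 0.2.
profile = [scheil_solid_concentration(1.0, 0.2, fs / 10) for fs in range(10)]
```

The first solid forms at k·C0 and the solute content rises toward the end of solidification, which is why centerline regions are crack-prone.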

Relevance:

30.00%

Publisher:

Abstract:

The present thesis comprises two study populations. The first study sample (SS1) consisted of 411 adults examined and interviewed at three annual visits. The second study sample (SS2) consisted of 1720 adults who filled in a mailed questionnaire about secondary otalgia, tinnitus and fullness of the ears. In the second phase of SS2, 100 subjects with otalgia were examined and interviewed by specialists in stomatognathic physiology and otorhinolaryngology. In the third phase, 36 subjects participated in a randomized, controlled and blinded trial of the effectiveness of an occlusal appliance on secondary otalgia, facial pain, headache and the treatment need of temporomandibular disorders (TMD). The standardized prevalence of recurrent secondary otalgia was 6%, of tinnitus 15% and of fullness of the ears 8%. Aural symptoms were more frequent among young than old subjects. They were associated with other simultaneous aural symptoms, TMD pain, pain in the head and neck region, and visits to a physician. Subjects with aural symptoms more often had tenderness on palpation of the masticatory muscles and clinical signs of the temporomandibular joint than subjects without them. Of the subjects reporting secondary otalgia, 85% had a cervical spine disorder, a temporomandibular disorder, or both. In SS1, the final model of secondary otalgia included active treatment need for TMD, an elevated level of stress symptoms, and bruxism. In SS2, the final models of aural symptoms included associated aural symptoms, young age, TMD pain, headache and shoulder ache. A stabilization splint alleviated secondary otalgia and active treatment need for TMD more effectively than a palatal control splint. In patients with aural pain, tinnitus or fullness of the ears, it is important first to rule out otologic and nasopharyngeal diseases that may cause the symptoms. If no explanation for the aural symptoms is found, temporomandibular and cervical spine disorders should be ruled out to minimize unnecessary visits to a physician.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis is to determine which risk factors affect stock returns. Six portfolios sorted by market capitalization are used as the assets, and the sample period runs from the beginning of 1987 to the end of 2004. The models employed are the capital asset pricing model (CAPM), the arbitrage pricing theory (APT), and the consumption-based capital asset pricing model. For the first two models, market risk and macroeconomic risk factors are used as the risk factors. In the consumption-based model, the focus is on estimating consumers' risk preferences and the discount factor with which consumers value future consumption. The thesis introduces a method-of-moments framework with which both linear and non-linear equations can be estimated, and this method is used for all the models tested. In summary, the results show that the market beta is still the most important risk factor, but support is also found for the macroeconomic risk factors. The consumption-based model performs fairly well, yielding theoretically acceptable values.
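To illustrate the kind of moment-based estimation described, here is a hedged sketch with synthetic data (the thesis's portfolios, factors and sample are not reproduced): the consumption Euler equation E[β·g^(−γ)·R − 1] = 0 is estimated by minimizing squared sample moments over a parameter grid. The data are constructed so that β = 0.97 and γ = 2.0 satisfy the condition exactly.

```python
# Hypothetical sketch of estimating consumption-based preference parameters
# from the Euler equation E[beta * g**(-gamma) * R - 1] = 0 by a grid-search
# method of moments. The data are synthetic: returns are built so the true
# values (beta = 0.97, gamma = 2.0) satisfy the moment condition exactly.
g = [1.01, 1.02, 0.99, 1.03, 1.00]            # gross consumption growth
R = [x ** 2.0 / 0.97 for x in g]              # gross returns implied by truth

def gmm_objective(beta, gamma):
    """Sum of squared sample moments with instruments 1 and g."""
    m = [beta * gt ** (-gamma) * rt - 1.0 for gt, rt in zip(g, R)]
    m1 = sum(m) / len(m)
    m2 = sum(mi * gt for mi, gt in zip(m, g)) / len(m)
    return m1 * m1 + m2 * m2

best = min(
    ((b / 1000, gam / 4) for b in range(900, 1001) for gam in range(0, 21)),
    key=lambda p: gmm_objective(*p),
)
```

With real data the objective would not reach zero, and a weighting matrix and standard errors would be needed; the grid search stands in for a proper non-linear optimizer.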

Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis is to determine which factors affect the yield spread between corporate and government bonds. According to structural credit risk pricing models, the factors affecting credit risk are the firm's leverage, its volatility, and the risk-free interest rate. A particular goal is to examine how well these theoretical factors explain yield spreads and whether other important explanatory factors exist. Credit default swap quotes are used to measure the spreads. The explanatory variables comprise both firm-specific and market-wide variables. The credit default swap and firm-specific data were collected for a total of 50 firms from euro-area countries. The data set consists of monthly observations from 1 January 2003 to 31 December 2006. The empirical results show that the factors implied by structural models explain only a small part of the changes in spreads over time. On the other hand, these theoretical factors explain the cross-sectional variation of spreads considerably better. Factors other than the theoretical ones are able to explain a large part of the variation in spreads; the general risk premium in the bond market turned out to be a particularly important explanatory factor. The results indicate that credit risk pricing models should be developed further so that they account for market-wide factors in addition to firm-specific ones.
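The regressions behind such results can be sketched in a few lines. The following is an illustration only, with invented coefficients and synthetic data (the thesis's actual variables and panel structure are not reproduced): spread changes are regressed on two structural determinants by ordinary least squares via the normal equations.

```python
# Illustrative sketch with synthetic data: regressing CDS spread changes on
# structural determinants (here a leverage change and a volatility change)
# with plain OLS. The coefficient values are invented for the example.
def ols(rows, y):
    """Solve (X'X) b = X'y by Gaussian elimination; each row starts with 1."""
    k = len(rows[0])
    a = [[sum(r[i] * r[j] for r in rows) for j in range(k)] +
         [sum(r[i] * yi for r, yi in zip(rows, y))] for i in range(k)]
    for c in range(k):                       # forward elimination with pivoting
        p = max(range(c, k), key=lambda i: abs(a[i][c]))
        a[c], a[p] = a[p], a[c]
        for i in range(c + 1, k):
            f = a[i][c] / a[c][c]
            a[i] = [x - f * z for x, z in zip(a[i], a[c])]
    b = [0.0] * k
    for c in reversed(range(k)):             # back substitution
        b[c] = (a[c][k] - sum(a[c][j] * b[j] for j in range(c + 1, k))) / a[c][c]
    return b

# Exact synthetic relation: d_spread = 2 + 30*d_leverage + 1.5*d_volatility.
X = [[1.0, dl, dv] for dl, dv in [(0.01, 2.0), (-0.02, 1.0), (0.03, 4.0), (0.0, 3.0)]]
y = [2 + 30 * r[1] + 1.5 * r[2] for r in X]
beta = ols(X, y)
```

A real study would add market-wide regressors (such as a bond-market risk premium) and compare the explanatory power of the two groups, as the abstract describes.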

Relevance:

30.00%

Publisher:

Abstract:

Over 70% of the total costs of an end product are consequences of decisions made during the design process. A search for optimal cross-sections will often have only a marginal effect on the amount of material used if the geometry of a structure is fixed and if the cross-sectional characteristics of its elements are properly designed by conventional methods. In recent years, optimal geometry has become a central area of research in the automated design of structures. It is generally accepted that no single optimisation algorithm is suitable for all engineering design problems. An appropriate algorithm must therefore be selected individually for each optimisation situation. Modelling is the most time-consuming phase in the optimisation of steel and metal structures. In this research, the goal was to develop a method and computer program which reduce the modelling and optimisation time in structural design. The program needed an optimisation algorithm suitable for various engineering design problems. Because finite element modelling is commonly used in the design of steel and metal structures, the interaction between a finite element tool and an optimisation tool needed a practical solution. The developed method and computer programs were tested with standard optimisation tests and practical design optimisation cases. Three generations of computer programs were developed. The programs combine an optimisation problem modelling tool and an FE-modelling program using three alternative methods. The modelling and optimisation were demonstrated in the design of a new boom construction and the steel structures of flat and ridge roofs. This thesis demonstrates that the time-consuming modelling phase is significantly shortened, modelling errors are reduced, and the results are more reliable.
A new selection rule for the evolution algorithm, which eliminates the need for constraint weight factors, is tested with optimisation cases of steel structures that include hundreds of constraints. The tested algorithm can be used nearly as a black box, without parameter settings or penalty factors for the constraints.
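The thesis's exact selection rule is not reproduced here, but the general idea of penalty-free constraint handling can be sketched with Deb's feasibility rule (a well-known stand-in, not the author's rule) inside a (1+1) evolution strategy on a toy constrained problem:

```python
# Illustration only, not the thesis's algorithm: penalty-free constraint
# handling via Deb's feasibility rule in a (1+1) evolution strategy.
# Feasible solutions always beat infeasible ones; ties are broken by the
# objective (both feasible) or by total violation (both infeasible).
import random

def violation(x):                 # constraint: x[0] + x[1] >= 1
    return max(0.0, 1.0 - (x[0] + x[1]))

def objective(x):                 # minimise x^2 + y^2; optimum at (0.5, 0.5)
    return x[0] ** 2 + x[1] ** 2

def better(a, b):
    """Deb's rule: no weight factors or penalty coefficients needed."""
    if violation(a) == 0.0 and violation(b) == 0.0:
        return objective(a) < objective(b)
    return violation(a) < violation(b)

random.seed(1)
best = [2.0, 2.0]                 # feasible start
sigma = 0.5                       # mutation step, slowly decaying
for _ in range(4000):
    child = [v + random.gauss(0.0, sigma) for v in best]
    if better(child, best):
        best = child
    sigma *= 0.999
```

The comparison function is the only place constraints enter, which is what lets such algorithms run "nearly as a black box".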

Relevance:

30.00%

Publisher:

Abstract:

It is generally accepted that between 70 and 80% of manufacturing costs can be attributed to design. Nevertheless, it is difficult for the designer to estimate manufacturing costs accurately, especially when alternative constructions are compared at the conceptual design phase, because of the lack of cost information and appropriate tools. In general, previous reports concerning the optimisation of welded structures have used the mass of the product as the basis for cost comparison. However, a simple example easily shows that using product mass as the sole manufacturing cost estimator is unsatisfactory. This study describes a method of formulating welding time models for cost calculation, and presents the results of the models for particular sections, based on typical costs in Finland. This was achieved by collecting information concerning welded products from different companies. The data included 71 different welded assemblies taken from the mechanical engineering and construction industries. The welded assemblies contained in total 1 589 welded parts, 4 257 separate welds, and a total welded length of 3 188 metres. The data were modelled for statistical calculations, and models of welding time were derived by linear regression analysis. The models were tested with appropriate statistical methods and found to be accurate. General welding time models, valid for welding in Finland, have been developed, as well as specific, more accurate models for particular companies. The models are presented in a form that a designer can use easily, enabling the cost calculation to be automated.
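The simplest instance of such a model can be shown in a few lines. All numbers below are invented for illustration (the study's fitted coefficients are not reproduced): welding time is regressed on welded length with closed-form simple linear regression, and the fitted model then drives a cost estimate.

```python
# Invented numbers, for illustration only: fit a welding-time model of the
# simplest form T = b0 + b1 * L (time vs. welded length) by closed-form
# simple linear regression, then use it for an automated cost estimate.
lengths = [0.5, 1.0, 2.0, 3.5, 5.0]            # welded length per assembly, m
times = [5 + 2 * L for L in lengths]           # synthetic data: T = 5 + 2*L min

n = len(lengths)
mean_l = sum(lengths) / n
mean_t = sum(times) / n
b1 = sum((l - mean_l) * (t - mean_t) for l, t in zip(lengths, times)) / \
     sum((l - mean_l) ** 2 for l in lengths)
b0 = mean_t - b1 * mean_l

def welding_cost(length_m, rate_per_min=1.2):
    """Predicted welding time times a hypothetical shop rate per minute."""
    return (b0 + b1 * length_m) * rate_per_min
```

The study's actual models use more predictors (parts, welds, plate thickness and so on), but the designer-facing form is the same: plug in geometry, get time and cost.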

Relevance:

30.00%

Publisher:

Abstract:

In general, models of ecological systems can be broadly categorized as 'top-down' or 'bottom-up' models, based on the hierarchical level on which the model processes are formulated. The structure of a top-down, also known as phenomenological, population model can be interpreted in terms of population characteristics, but it typically lacks an interpretation on a more basic level. In contrast, bottom-up, also known as mechanistic, population models are derived from assumptions and processes on a more basic level, which allows interpretation of the model parameters in terms of individual behavior. Both approaches, phenomenological and mechanistic modelling, have their advantages and disadvantages in different situations. However, mechanistically derived models may be better at capturing the properties of the system at hand, and thus give more accurate predictions. In particular, when models are used for evolutionary studies, mechanistic models are more appropriate, since natural selection takes place on the individual level, and in mechanistic models the direct connection between model parameters and individual properties has already been established. The purpose of this thesis is twofold. Firstly, a systematic way to derive mechanistic discrete-time population models is presented. The derivation is based on combining explicitly modelled, continuous processes on the individual level within a reproductive period with a discrete-time maturation process between reproductive periods. Secondly, as an example of how evolutionary studies can be carried out with mechanistic models, the evolution of the timing of reproduction is investigated. Thus, these two lines of research, the derivation of mechanistic population models and evolutionary studies, complement each other.
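The derivation style described (continuous within-season dynamics composed with a discrete between-season step) can be sketched with a minimal example; the specific processes and parameters below are invented, not the thesis's models. Quadratic competition mortality within the season composed with discrete reproduction yields a mechanistic Beverton-Holt map:

```python
# Minimal sketch of a mechanistic discrete-time model: a continuous
# within-season process (dN/dt = -mu*N**2, integrated here by Euler)
# composed with a discrete between-season reproduction step. All
# parameter values are invented for illustration.
def within_season(n0, mu=0.1, T=1.0, dt=1e-3):
    """Euler-integrate dN/dt = -mu*N^2 over one season of length T."""
    n = n0
    for _ in range(int(T / dt)):
        n += dt * (-mu * n * n)
    return n

def next_generation(n, b=2.0):
    """Discrete map: each survivor of the season produces b offspring."""
    return b * within_season(n)

# The composed map is a mechanistic Beverton-Holt model,
# N' = b*N/(1 + mu*T*N), with equilibrium N* = (b - 1)/(mu*T) = 10 here,
# and its parameters inherit individual-level interpretations.
n = 3.0
for _ in range(50):
    n = next_generation(n)
```

The point of the construction is exactly the one the abstract makes: b and mu are individual-level quantities, so selection acting on individuals can be analysed directly in the population model.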

Relevance:

30.00%

Publisher:

Abstract:

Systems biology is a new, emerging and rapidly developing multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology: to comprehend the function of complex biological systems. Systems biology combines various methods that originate from scientific disciplines such as molecular biology, chemistry, engineering sciences, mathematics, computer science and systems theory. Systems biology, unlike "traditional" biology, focuses on high-level concepts such as network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization and concurrency, among many others. The very terminology of systems biology is "foreign" to "traditional" biology; it marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes, tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods, originating in the fields of computer science and mathematics, for the construction and analysis of computational models in systems biology. In particular, the research is set in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton.
The range of presented approaches spans from heuristic, through numerical and statistical, to analytical methods applied in the effort to formally describe and analyse the two biological processes. We note, however, that although applied to certain case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms as well as complex systems in general. The full range of developed and applied modelling techniques, together with the model analysis methodologies, constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies and the discussions concerning their potentials and limitations point to the difficulties and challenges one encounters in the computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, the choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.
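The basic modelling tool mentioned above, mass-action kinetics tightly linked to data, can be illustrated generically. The reaction and rate constants below are invented and are not either of the thesis's case-study models:

```python
# Generic illustration of mass-action ODE modelling (not the heat shock
# response or intermediate filament models of the thesis): a reversible
# dimerisation 2 A <-> A2 with invented rate constants, Euler-integrated.
def simulate(a0=1.0, d0=0.0, kf=1.0, kr=0.2, dt=1e-3, steps=10_000):
    """Return (A, A2) after integrating dA/dt = -2*kf*A^2 + 2*kr*A2."""
    a, d = a0, d0
    for _ in range(steps):
        flux = kf * a * a - kr * d        # net dimerisation rate
        a += dt * (-2.0 * flux)           # two monomers consumed per event
        d += dt * flux                    # one dimer produced per event
    return a, d

a, d = simulate()
```

Even this toy model exhibits the properties the thesis scrutinises at scale: a conserved mass invariant (A + 2·A2), an equilibrium set by kf/kr, and parameters that must be identified from data.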

Relevance:

30.00%

Publisher:

Abstract:

In any decision making under uncertainty, the goal is usually to minimize the expected cost. The minimization of cost under uncertainty is usually done by optimization. For simple models, the optimization can easily be done using deterministic methods. However, many models in practice contain complex and varying parameters that cannot easily be taken into account by the usual deterministic methods of optimization. Thus, it is important to look for other methods that can give insight into such models. The Markov chain Monte Carlo (MCMC) method is one of the practical methods that can be used for the optimization of stochastic models under uncertainty. The method is based on simulation, which provides a general methodology applicable to non-linear and non-Gaussian state models. The MCMC method is important for practical applications because it is a unified estimation procedure that simultaneously estimates both parameters and state variables; it computes the distribution of the state variables and parameters given the data measurements, and can be faster in terms of computing time than other optimization methods. This thesis discusses the use of MCMC methods for the optimization of stochastic models under uncertainty. The thesis begins with a short discussion of Bayesian inference, MCMC and stochastic optimization methods. An example is then given of how MCMC can be applied to maximize production at minimum cost in a chemical reaction process. It is observed that the method performs well in optimizing the given cost function, with very high certainty.
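A hedged toy sketch of the approach (not the thesis's chemical-reaction case, and with all numbers invented): a Metropolis chain over exp(−E(x)/T) explores a noisy expected cost, where each evaluation averages repeated noisy measurements.

```python
# Toy sketch of MCMC-style stochastic optimization: sample from a Boltzmann
# density exp(-E(x)/T) with a Metropolis chain, where E(x) is a Monte Carlo
# estimate of a noisy expected cost. The cost function, noise level and
# temperature are all invented for this illustration.
import math
import random

random.seed(7)

def noisy_cost(x, n=50):
    """Monte Carlo estimate of E[(x - 2)^2 + noise]."""
    return sum((x - 2.0) ** 2 + random.gauss(0.0, 0.1) for _ in range(n)) / n

x = 0.0
fx = noisy_cost(x)
best_x, best_f = x, fx
T = 0.5                                   # temperature of the target density
for _ in range(2000):
    cand = x + random.gauss(0.0, 0.5)     # random-walk proposal
    fc = noisy_cost(cand)
    if fc < fx or random.random() < math.exp(-(fc - fx) / T):
        x, fx = cand, fc                  # Metropolis accept
    if fx < best_f:
        best_x, best_f = x, fx            # track the incumbent optimum
```

Because uphill moves are sometimes accepted, the chain both explores the uncertainty in the cost estimates and avoids getting stuck, which is the advantage over purely deterministic descent.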

Relevance:

30.00%

Publisher:

Abstract:

The ongoing global financial crisis has demonstrated the importance of a system-wide, or macroprudential, approach to safeguarding financial stability. An essential part of macroprudential oversight concerns the tasks of early identification and assessment of risks and vulnerabilities that may eventually lead to a systemic financial crisis. Sound tools are crucial, as they allow early policy actions to decrease or prevent the further build-up of risks, or to otherwise enhance the shock absorption capacity of the financial system. In the literature, three types of systemic risk can be identified: i) build-up of widespread imbalances, ii) exogenous aggregate shocks, and iii) contagion. Accordingly, these systemic risks are matched by three categories of analytical methods for decision support: i) early-warning, ii) macro stress-testing, and iii) contagion models. Stimulated by the prolonged global financial crisis, today's toolbox of analytical methods includes a wide range of innovative solutions to the two tasks of risk identification and risk assessment. Yet the literature lacks a focus on the task of risk communication. This thesis discusses macroprudential oversight from the viewpoint of all three tasks: within analytical tools for risk identification and risk assessment, the focus is on a tight integration of means for risk communication. Data and dimension reduction methods, and their combinations, hold promise for representing multivariate data structures in easily understandable formats. The overall task of this thesis is to represent high-dimensional data concerning financial entities on low-dimensional displays. The low-dimensional representations have two subtasks: i) to function as a display for individual data concerning entities and their time series, and ii) to serve as a basis to which additional information can be linked. The final nuance of the task is, however, set by the needs of the domain, data and methods.
The following five questions comprise the subsequent steps addressed in this thesis: 1. What are the needs for macroprudential oversight? 2. What form do macroprudential data take? 3. Which data and dimension reduction methods hold most promise for the task? 4. How should the methods be extended and enhanced for the task? 5. How should the methods and their extensions be applied to the task? Based upon the Self-Organizing Map (SOM), this thesis not only creates the Self-Organizing Financial Stability Map (SOFSM), but also lays out a general framework for mapping the state of financial stability. The thesis also introduces three extensions to the standard SOM for enhancing the visualization and extraction of information: i) fuzzifications, ii) transition probabilities, and iii) network analysis. Thus, the SOFSM functions as a display for risk identification, on top of which risk assessments can be illustrated. In addition, this thesis puts forward the Self-Organizing Time Map (SOTM) to provide means for visual dynamic clustering, which in the context of macroprudential oversight concerns the identification of cross-sectional changes in risks and vulnerabilities over time. Rather than automated analysis, the aim of the visual means for identifying and assessing risks is to support disciplined and structured judgmental analysis based upon policymakers' experience and domain intelligence, as well as external risk communication.
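The standard SOM underlying the SOFSM and SOTM can be sketched in miniature. Everything below is invented for illustration (map size, data, schedules); it is the plain SOM, not the thesis's extensions:

```python
# Deliberately tiny sketch of the standard Self-Organizing Map: a 1-D map
# of 4 units trained on 2-D toy data from two clusters. All sizes, data
# and schedules are invented; the thesis's SOFSM/SOTM extensions
# (fuzzifications, transition probabilities, network analysis) are not shown.
import math
import random

random.seed(0)
data = ([(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(40)] +
        [(random.gauss(5, 0.3), random.gauss(5, 0.3)) for _ in range(40)])
units = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(4)]

def bmu(x):
    """Index of the best-matching unit for sample x."""
    return min(range(len(units)),
               key=lambda i: (units[i][0] - x[0]) ** 2 + (units[i][1] - x[1]) ** 2)

for epoch in range(30):
    random.shuffle(data)
    lr = 0.5 * (1 - epoch / 30)                  # decaying learning rate
    radius = max(0.1, 2.0 * (1 - epoch / 30))    # decaying neighbourhood width
    for x in data:
        b = bmu(x)
        for i in range(len(units)):
            h = math.exp(-((i - b) ** 2) / (2 * radius ** 2))
            units[i] = tuple(w + lr * h * (xj - w) for w, xj in zip(units[i], x))
```

After training, nearby units respond to similar inputs, which is exactly the property that lets a financial-stability map place similar country-quarters next to each other on a 2-D display.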

Relevance:

30.00%

Publisher:

Abstract:

Panel at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

30.00%

Publisher:

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

30.00%

Publisher:

Abstract:

This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied in many different application areas, such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to the computation of the posterior probability density function. Except for a very restricted class of models, it is impossible to compute this density function in closed form; hence, approximation methods are needed. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear, non-Gaussian state space models via Monte Carlo. The performance of a particle filter depends heavily on the chosen importance distribution; for instance, an inappropriate choice of importance distribution can lead to the failure of convergence of the particle filter algorithm. In this thesis, we analyze the theoretical Lᵖ particle filter convergence with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, the estimation of parameters can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, the MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution.
In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, where the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends heavily on the chosen proposal distribution. A commonly used proposal distribution is Gaussian; for this kind of proposal, the covariance matrix must be well tuned, and adaptive MCMC methods can be used to tune it. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
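As a compact, generic sketch of the particle filtering discussed above (not the thesis's algorithms or convergence analysis), the following runs a bootstrap filter on a linear-Gaussian toy model, chosen precisely so its behaviour is easy to sanity-check. All model parameters are invented.

```python
# Generic bootstrap particle filter sketch on a toy model x' = 0.9x + w,
# y = x + v. The importance distribution here is the transition density
# (the defining choice of the bootstrap filter), so the weights are just
# measurement likelihoods.
import math
import random

random.seed(3)
Q, R_VAR, N = 0.5 ** 2, 0.5 ** 2, 500     # process var, obs var, particles

# Simulate a ground-truth trajectory and noisy measurements.
xs, ys = [], []
x = 0.0
for _ in range(50):
    x = 0.9 * x + random.gauss(0.0, math.sqrt(Q))
    xs.append(x)
    ys.append(x + random.gauss(0.0, math.sqrt(R_VAR)))

# Propagate with the dynamics, weight by the likelihood, then resample.
particles = [0.0] * N
estimates = []
for y in ys:
    particles = [0.9 * p + random.gauss(0.0, math.sqrt(Q)) for p in particles]
    weights = [math.exp(-0.5 * (y - p) ** 2 / R_VAR) for p in particles]
    estimates.append(sum(w * p for w, p in zip(weights, particles)) / sum(weights))
    particles = random.choices(particles, weights=weights, k=N)

rmse = math.sqrt(sum((e - t) ** 2 for e, t in zip(estimates, xs)) / len(xs))
```

On this linear-Gaussian model the Kalman filter is exact, so the particle filter's RMSE should approach the Kalman solution as N grows; with a non-linear model the structure of the loop is unchanged, which is the filter's main appeal.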

Relevance:

30.00%

Publisher:

Abstract:

The cosmological standard view is based on the assumptions of homogeneity, isotropy and general relativistic gravitational interaction. These alone are not sufficient for describing the current cosmological observations of the accelerated expansion of space. Although general relativity is tested to extreme accuracy in describing local gravitational phenomena, there is a strong demand for modifying either the energy content of the universe or the gravitational interaction itself to account for the accelerated expansion. By adding a non-luminous matter component and a constant energy component with negative pressure, the observations can be explained within general relativity. Gravitation, cosmological models and their observational phenomenology are discussed in this thesis. Several classes of dark energy models motivated by theories outside the standard formulation of physics were studied, with emphasis on the observational interpretation. All cosmological models that seek to explain the cosmological observations must also conform to the local phenomena, which poses stringent conditions on physically viable cosmological models. Predictions from a supergravity quintessence model were compared to Type Ia supernova data, and several metric gravity models were tested against local experimental results. Polytropic stellar configurations of solar-type, white dwarf and neutron stars were studied numerically with modified gravity models, the main interest being the spacetime around the stars. The results shed light on the viability of the studied cosmological models.
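The Newtonian starting point for the polytropic configurations mentioned above is the Lane-Emden equation; the thesis's modified-gravity structure equations are not reproduced here, but the standard case can be sketched and checked against its known analytic solutions:

```python
# Hedged background sketch (standard gravity, not the thesis's modified
# models): integrate the Lane-Emden equation for a polytrope of index n,
#   theta'' + (2/xi)*theta' + theta**n = 0,  theta(0) = 1, theta'(0) = 0,
# and return the first zero of theta, which marks the stellar surface.
# For n = 1 the analytic solution is sin(xi)/xi, vanishing at xi = pi.
import math

def lane_emden_first_zero(n=1.0, dt=1e-4):
    """Symplectic-Euler integration from a series start near the centre."""
    xi = 1e-6
    theta = 1.0 - xi * xi / 6.0          # series expansion avoids the 1/xi
    dtheta = -xi / 3.0                   # singularity at the exact centre
    while theta > 0.0:
        ddtheta = -theta ** n - 2.0 * dtheta / xi
        dtheta += dt * ddtheta
        theta += dt * dtheta
        xi += dt
    return xi

xi1 = lane_emden_first_zero()            # should be close to pi for n = 1
```

The same loop with n = 0 reproduces the other closed-form case, theta = 1 − xi²/6 with surface at √6, which gives a cheap correctness check before moving to realistic indices such as n = 1.5 or 3.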