34 results for computer prediction


Relevance: 20.00%

Publisher:

Abstract:

Numerical weather prediction (NWP) models provide the basis for weather forecasting by simulating the evolution of the atmospheric state. A good forecast requires that the initial state of the atmosphere is known accurately, and that the NWP model is a realistic representation of the atmosphere. Data assimilation methods are used to produce initial conditions for NWP models: the NWP model background field, typically a short-range forecast, is updated with observations in a statistically optimal way. The objective of this thesis has been to develop methods that allow data assimilation of Doppler radar radial wind observations. The work has been carried out in the High Resolution Limited Area Model (HIRLAM) 3-dimensional variational data assimilation framework. Observation modelling is a key element in exploiting indirect observations of the model variables. In radar radial wind observation modelling, the vertical model wind profile is interpolated to the observation location, and the projection of the model wind vector on the radar pulse path is calculated. The vertical broadening of the radar pulse volume and the bending of the radar pulse path due to atmospheric conditions are taken into account. The errors of radar radial wind observations consist of instrumental, modelling, and representativeness errors. Systematic and random modelling errors can be minimized by accurate observation modelling, and the impact of the random part of the instrumental and representativeness errors can be decreased by calculating spatial averages from the raw observations. Model experiments indicate that spatial averaging clearly improves the fit of the radial wind observations to the model in terms of observation minus model background (OmB) standard deviation. Monitoring the quality of the observations is an important aspect, especially when a new observation type is introduced into a data assimilation system.
Calculating the bias for radial wind observations in the conventional way can yield zero even when there are systematic differences in wind speed and/or direction. A bias estimation method designed for this observation type is introduced in the thesis. Doppler radar radial wind observation modelling, together with the bias estimation method, also enables the exploitation of radial wind observations for NWP model validation. One-month experiments performed with HIRLAM model versions differing only in a detail of the surface stress parameterization indicate that the use of radar wind observations in NWP model validation is very beneficial.
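The projection of the model wind vector onto the radar pulse path described above can be sketched with standard radar geometry. The function below is an illustrative simplification (a hypothetical helper, not the HIRLAM observation operator), ignoring the broadening of the pulse volume and the bending of the pulse path:

```python
import math

def radial_wind(u, v, w, azimuth_deg, elevation_deg):
    """Project a model wind vector (u, v, w) onto the radar beam direction.

    u, v, w are the eastward, northward, and vertical wind components;
    azimuth is measured clockwise from north, elevation from the horizontal.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    # horizontal wind projected on the beam azimuth, scaled by cos(elevation),
    # plus the vertical component scaled by sin(elevation)
    return (u * math.sin(az) + v * math.cos(az)) * math.cos(el) + w * math.sin(el)
```

In a full observation operator the elevation angle itself would be corrected for the atmospheric bending of the pulse path before the projection is computed.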

Relevance: 20.00%

Publisher:

Abstract:

Data assimilation provides an initial atmospheric state, called the analysis, for Numerical Weather Prediction (NWP). This analysis consists of pressure, temperature, wind, and humidity on a three-dimensional NWP model grid. Data assimilation blends meteorological observations with the NWP model in a statistically optimal way. The objective of this thesis is to describe methodological development carried out in order to allow data assimilation of ground-based measurements of the Global Positioning System (GPS) into the High Resolution Limited Area Model (HIRLAM) NWP system. Geodetic processing produces observations of tropospheric delay. These observations can be processed either for vertical columns at each GPS receiver station, or for the individual propagation paths of the microwave signals. These alternative processing methods result in Zenith Total Delay (ZTD) and Slant Delay (SD) observations, respectively. ZTD and SD observations are of use in the analysis of atmospheric humidity. A method is introduced for estimating the horizontal error covariance of ZTD observations. The method makes use of observation minus model background (OmB) sequences of ZTD and conventional observations. It is demonstrated that the ZTD observation error covariance is relatively large at station separations shorter than 200 km, but non-zero covariances also appear at considerably larger station separations. The relatively low density of radiosonde observing stations limits the ability of the proposed estimation method to resolve the shortest length scales of the error covariance. SD observations are shown to contain a statistically significant signal on the asymmetry of the atmospheric humidity field. However, the asymmetric component of SD is found to be nearly always smaller than the standard deviation of the SD observation error. SD observation modelling is described in detail, and other issues relating to SD data assimilation are also discussed.
These include the determination of error statistics, the tuning of observation quality control, and means of taking local observation error correlations into account. The experiments show that the data assimilation system is able to retrieve the asymmetric information content of hypothetical SD observations at a single receiver station. Moreover, the impact of real SD observations on the humidity analysis is comparable to that of other observing systems.
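The covariance-estimation idea, comparing OmB departures pairwise as a function of station separation, can be sketched as follows. This is a minimal illustration assuming flat-earth coordinates and complete, per-station bias-corrected OmB series; it is not the thesis's method in detail:

```python
import numpy as np

def binned_omb_covariance(positions_km, omb, bin_edges_km):
    """Estimate horizontal error covariance of ZTD observations from
    observation-minus-background (OmB) time series.

    positions_km : (n_stations, 2) array of station coordinates in km
    omb          : (n_times, n_stations) array of OmB departures
    bin_edges_km : edges of the separation-distance bins
    """
    anom = omb - omb.mean(axis=0)               # remove per-station mean (bias)
    cov = anom.T @ anom / (omb.shape[0] - 1)    # station-pair covariance matrix
    # pairwise separations between stations
    d = np.linalg.norm(positions_km[:, None, :] - positions_km[None, :, :], axis=-1)
    iu = np.triu_indices(len(positions_km), k=1)
    dist, c = d[iu], cov[iu]
    binned = []
    for lo, hi in zip(bin_edges_km[:-1], bin_edges_km[1:]):
        sel = (dist >= lo) & (dist < hi)
        binned.append(c[sel].mean() if sel.any() else np.nan)
    return np.array(binned)
```

Averaging the pair covariances within separation bins is what makes the estimate usable despite the noisiness of individual station pairs; the sparse radiosonde network mentioned above limits how short a separation bin can still be populated.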

Relevance: 20.00%

Publisher:

Abstract:

Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the intended use and resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach scales of 1–4 km operationally. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For clear-sky longwave radiation parameterization, schemes used in NWP models provide much better results than simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are competitive at producing fairly accurate surface fluxes.
Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent for both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is caused by an inertial oscillation mechanism when the large-scale flow is from the south-east or west. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment, with a 7.7 km grid size, is able to generate an LLJ flow structure similar to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is of great importance, especially if the inner meso-scale model domain is small.
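The inertial oscillation mechanism invoked for the LLJ can be sketched with the textbook f-plane solution: after frictional decoupling, the ageostrophic wind vector rotates at the inertial frequency, producing supergeostrophic speeds half an inertial period later. This is a simplified stand-in for the thesis's meso-scale model experiments, with illustrative inputs:

```python
import math

def inertial_oscillation(u0, v0, ug, vg, lat_deg, t_hours):
    """Wind after frictional decoupling on an f-plane: the ageostrophic
    component (u0-ug, v0-vg) rotates clockwise at the inertial frequency f,
    the mechanism behind low-level jet formation."""
    f = 2.0 * 7.292e-5 * math.sin(math.radians(lat_deg))  # Coriolis parameter, 1/s
    ft = f * t_hours * 3600.0
    du, dv = u0 - ug, v0 - vg
    u = ug + du * math.cos(ft) + dv * math.sin(ft)
    v = vg - du * math.sin(ft) + dv * math.cos(ft)
    return u, v
```

At 60° N the inertial period is about 13.8 hours, so a flow decoupled around midday can reach its supergeostrophic maximum in the afternoon or evening, consistent with the afternoon LLJ discussed above.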

Relevance: 20.00%

Publisher:

Abstract:

Numerical models, used for atmospheric research, weather prediction and climate simulation, describe the state of the atmosphere over the heterogeneous surface of the Earth. Several fundamental properties of atmospheric models depend on orography, i.e. on the average elevation of land over a model area. The higher the model resolution, the more directly the details of orography influence the simulated atmospheric processes. This sets new requirements for the accuracy of the model formulations with respect to the spatially varying orography. Orography is always averaged, representing the surface elevation within the horizontal resolution of the model. In order to remove the smallest scales and steepest slopes, the continuous spectrum of orography is normally filtered (truncated) even further, typically beyond a few gridlengths of the model. This means that in numerical weather prediction (NWP) models there will always be subgrid-scale orography effects, which cannot be explicitly resolved by numerical integration of the basic equations but require parametrization. At the subgrid scale, different physical processes contribute at different scales, and the parametrized processes interact with the resolved-scale processes and with each other. This study contributes to the building of a consistent, scale-dependent system of orography-related parametrizations for the High Resolution Limited Area Model (HIRLAM). The system comprises schemes for handling the effects of mesoscale (MSO) and small-scale (SSO) orography on the simulated flow, and a scheme for the effects of orography on surface-level radiation fluxes. Representation of orography, scale-dependencies of the simulated processes, and interactions between the parametrized and resolved processes are discussed. From high-resolution digital elevation data, orographic parameters are derived for both the momentum and radiation flux parametrizations. Tools for diagnostics and validation are developed and presented.
The parametrization schemes applied, developed, and validated in this study are currently being implemented into the reference version of HIRLAM.
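Deriving orographic parameters from high-resolution elevation data amounts to aggregating the digital elevation model over each model grid cell. The sketch below is a hypothetical illustration of that aggregation (mean elevation, subgrid standard deviation, mean slope); the actual HIRLAM derivation distinguishes the MSO and SSO spectral ranges and computes further parameters:

```python
import numpy as np

def subgrid_orography(elev, block, dx):
    """Aggregate a high-resolution DEM into model-grid orography fields.

    elev  : 2-D array of surface heights (m) at DEM resolution
    block : number of DEM cells per model grid cell (per dimension)
    dx    : DEM cell size in metres
    Returns mean elevation, subgrid standard deviation, and mean slope
    magnitude per model grid cell.
    """
    ny, nx = (s // block for s in elev.shape)
    e = elev[:ny * block, :nx * block].reshape(ny, block, nx, block)
    mean_h = e.mean(axis=(1, 3))     # resolved orography
    sigma_h = e.std(axis=(1, 3))     # subgrid elevation variability
    gy, gx = np.gradient(elev, dx)   # slope components at DEM resolution
    slope = np.hypot(gx, gy)
    mean_slope = slope[:ny * block, :nx * block].reshape(
        ny, block, nx, block).mean(axis=(1, 3))
    return mean_h, sigma_h, mean_slope
```

The subgrid standard deviation is the kind of parameter typically fed to momentum (gravity-wave drag and blocking) schemes, while slope and aspect information feeds the radiation-flux corrections.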

Relevance: 20.00%

Publisher:

Abstract:

Fusion power is an appealing source of clean and abundant energy. The radiation resistance of reactor materials is one of the greatest obstacles on the path towards commercial fusion power. These materials are subject to a harsh radiation environment, and must neither fail mechanically nor contaminate the fusion plasma. Moreover, for a power plant to be economically viable, the reactor materials must withstand long operation times with little maintenance. Fusion reactor materials will contain hydrogen and helium, due to deposition from the plasma and to nuclear reactions induced by energetic neutron irradiation. The first-wall and divertor materials, carbon and tungsten in existing and planned test reactors, will be subject to intense bombardment by low-energy deuterium and helium, which erodes and modifies the surface. All reactor materials, including the structural steel, will suffer irradiation by high-energy neutrons, causing displacement cascade damage. Molecular dynamics simulation is a valuable tool for studying irradiation phenomena, such as surface bombardment and the onset of primary damage due to displacement cascades. The governing mechanisms operate on the atomic level, and hence are not easily studied experimentally. In order to model materials, interatomic potentials are needed to describe the interaction between the atoms. In this thesis, new interatomic potentials were developed for the tungsten-carbon-hydrogen system and for iron-helium and chromium-helium. Thus, the study of previously inaccessible systems was made possible, in particular the effect of H and He on radiation damage. The potentials were based on experimental and ab initio data from the literature, as well as density-functional theory calculations performed in this work. As a model for ferritic steel, iron-chromium with 10% Cr was studied. The difference between Fe and FeCr was shown to be negligible for threshold displacement energies.
The properties of small He and He-vacancy clusters in Fe and FeCr were also investigated. The clusters were found to be more mobile and dissociate more rapidly than previously assumed, and the effect of Cr was small. The primary damage formed by displacement cascades was found to be heavily influenced by the presence of He, both in FeCr and W. Many important issues with fusion reactor materials remain poorly understood, and will require a huge effort by the international community. The development of potential models for new materials and the simulations performed in this thesis reveal many interesting features, but also serve as a platform for further studies.
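At its core, molecular dynamics as used above integrates Newton's equations with forces derived from an interatomic potential. The toy sketch below uses a Lennard-Jones pair potential purely as a stand-in for the analytical bond-order potentials developed in the thesis, together with the standard velocity Verlet integrator:

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Lennard-Jones pair forces (toy stand-in for a bond-order potential)."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            d2 = r @ r
            s6 = (sigma ** 2 / d2) ** 3
            mag = 24.0 * eps * (2.0 * s6 ** 2 - s6) / d2  # (-dV/dr)/r
            f[i] += mag * r   # repulsive at short range, attractive beyond r_min
            f[j] -= mag * r
    return f

def velocity_verlet(pos, vel, mass, dt, steps):
    """Advance positions and velocities with the velocity Verlet integrator."""
    f = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass   # half kick
        pos += dt * vel              # drift
        f = lj_forces(pos)
        vel += 0.5 * dt * f / mass   # half kick with new forces
    return pos, vel
```

A production cascade simulation differs mainly in scale and physics (many-body potentials, electronic stopping, adaptive time steps, millions of atoms), not in this basic loop.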

Relevance: 20.00%

Publisher:

Abstract:

"Trust and Collectives" is a compilation of articles: (I) "On Rational Trust" (in Meggle, G. (ed.) Social Facts & Collective Intentionality, Dr. Hänsel-Hohenhausen AG (currently Ontos), 2002), (II) "Simulating Rational Social Normative Trust, Predictive Trust, and Predictive Reliance Between Agents" (M.Tuomela and S. Hofmann, Ethics and Information Technology 5, 2003), (III) "A Collective's Trust in a Collective's action" (Protosociology, 18-19, 2003), and (IV) "Cooperation and Trust in Group Contexts" (R. Tuomela and M.Tuomela, Mind and Society 4/1, 2005 ). The articles are tied together by an introduction that dwells deeply on the topic of trust. (I) presents a somewhat general version of (RSNTR) and some basic arguments. (II) offers an application of (RSNTR) for a computer simulation of trust.(III) applies (RSNTR) to Raimo Tuomela's "we-mode"collectives (i.e. The Philosophy of Social Practices, Cambridge University Press, 2002). (IV) analyzes cooperation and trust in the context of acting as a member of a collective. Thus, (IV) elaborates on the topic of collective agency in (III) and puts the trust account (RSNTR) to work in a framework of cooperation. The central aim of this work is to construct a well-argued conceptual and theoretical account of rational trust, viz. a person's subjectively rational trust in another person vis-à-vis his performance of an action, seen from a first-person point of view. The main method is conceptual and theoretical analysis understood along the lines of reflective equilibrium. The account of rational social normative trust (RSNTR), which is argued and defended against other views, is the result of the quest. The introduction stands on its own legs as an argued presentation of an analysis of the concept of rational trust and an analysis of trust itself (RSNTR). It is claimed that (RSNTR) is "genuine" trust and embedded in a relationship of mutual respect for the rights of the other party. 
This relationship is the growing site for trust, a causal and conceptual ground, but it is not taken as a reason for trusting (viz. predictive "trust"). Relevant themes such as risk, decision, rationality, control, and cooperation are discussed, and the topics of the articles are briefly presented. In this work it is argued that genuine trust is to be kept apart from predictive "trust." When we trust a person vis-à-vis his future action that concerns ourselves, on the basis of his personal traits and/or features of the specific situation, we have a prediction-like attitude. Genuine trust, by contrast, develops in a relationship of mutual respect for the rights of the other party. Such a relationship is formed through interaction in which the parties gradually find harmony concerning "the rules of the game." The trust account stands as a contribution to philosophical research on central social notions, and it could be used as a theoretical model in social psychology, economics, and political science, where interactions between persons and groups are in focus. The analysis could also serve as a model for a trust component in computer simulations of human action. In the context of everyday life, the account clarifies the difference between predictive "trust" and genuine trust. There are no fast shortcuts to trust: experiences of mutual respect for mutual rights cannot be had unless there is respect.

Relevance: 20.00%

Publisher:

Abstract:

Layering is a widely used method for structuring data in CAD models. During the last few years national standardisation organisations, professional associations, user groups for particular CAD systems, individual companies etc. have issued numerous standards and guidelines for the naming and structuring of layers in building design. In order to increase the integration of CAD data in the industry as a whole, ISO recently decided to define an international standard for layer usage. The resulting standard proposal, ISO 13567, is a rather complex framework standard which strives to be more of a union than the least common denominator of the capabilities of existing guidelines. A number of principles have been followed in the design of the proposal. The first is the separation of the conceptual organisation of information (semantics) from the way this information is coded (syntax). The second is orthogonality: many ways of classifying information are independent of each other and can be applied in combination. The third overriding principle is the reuse of existing national or international standards whenever appropriate. The fourth principle allows users to apply well-defined subsets of the overall superset of possible layer names. This article describes the semantic organisation of the standard proposal as well as its default syntax. Important information categories deal with the party responsible for the information, the type of building element shown, and whether a layer contains the direct graphical description of a building part or additional information needed in an output drawing. Non-mandatory information categories facilitate the structuring of information in rebuilding projects, the use of layers for spatial grouping in large multi-storey projects, and the storing of multiple representations intended for different drawing scales in the same model.
Pilot testing of ISO 13567 is currently being carried out in a number of countries which have been involved in the definition of the standard. In the article two implementations, carried out independently in Sweden and Finland, are described. The article concludes with a discussion of the benefits and possible drawbacks of the standard. Incremental development within the industry (where "best practice" can become "common practice" via a standard such as ISO 13567) is contrasted with the more idealistic scenario of building product models. The relationship between CAD layering, document management, product modelling and building element classification is also discussed.
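The fixed-position default syntax mentioned above makes layer names machine-parseable. The toy parser below assumes the commonly cited mandatory field widths (agent responsible: 2 characters, element: 6, presentation: 2); these widths and the example layer name are illustrative assumptions to be verified against the standard text:

```python
# Assumed field layout: agent (2 chars), element (6), presentation (2);
# optional fields (status, sector, phase, etc.) trail after the mandatory ones.
FIELDS = (("agent", 2), ("element", 6), ("presentation", 2))

def parse_layer_name(name):
    """Split an ISO 13567-style fixed-position layer name into its fields."""
    out, pos = {}, 0
    for field, width in FIELDS:
        part = name[pos:pos + width]
        if len(part) < width:
            raise ValueError(f"layer name too short for field {field!r}")
        out[field] = part
        pos += width
    out["optional"] = name[pos:]  # any trailing optional fields, unparsed here
    return out
```

Because every field has a fixed position and width, subsets of the standard (the fourth principle above) remain parseable by the same routine: unused fields are simply filled with placeholder characters.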

Relevance: 20.00%

Publisher:

Abstract:

In smaller countries, where the key players in construction IT development tend to know each other personally and where public R&D funding is concentrated in a few channels, IT roadmaps and strategies would seem to have a better chance of influencing development than in the bigger industrial countries. In this paper Finland and the RATAS project are presented as a historical case illustrating such impact. RATAS was initiated as a construction IT roadmap project in 1985, involving many of the key organisations and companies active in construction sector development. Several of the individuals who took an active part in the project have played an important role in later developments, both in Finland and on the international scene. The central result of RATAS was the identification of what is nowadays called Building Information Modelling (BIM) technology as the central issue in getting IT into efficient use in the construction sector. BIM, earlier referred to as building product modelling, has been a key ingredient in many roadmaps since, and the subject of international standardisation efforts such as STEP and IAI/IFCs. In hindsight, the RATAS project can be seen as a forerunner whose impact also transcended national borders.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

One of the most fundamental and widely accepted ideas in finance is that investors are compensated through higher returns for taking on non-diversifiable risk. Hence the quantification, modeling and prediction of risk have been, and still are, one of the most prolific research areas in financial economics. It was recognized early on that there are predictable patterns in the variance of speculative prices. Later research has shown that there may also be systematic variation in the skewness and kurtosis of financial returns. Lacking in the literature so far is an out-of-sample forecast evaluation of the potential benefits of these new, more complicated models with time-varying higher moments. Such an evaluation is the topic of this dissertation. Essay 1 investigates the forecast performance of the GARCH(1,1) model when estimated with 9 different error distributions on Standard and Poor's 500 Index futures returns. By utilizing the theory of realized variance to construct an appropriate ex post measure of variance from intra-day data, it is shown that allowing for a leptokurtic error distribution leads to significant improvements in variance forecasts compared to using the normal distribution. This result holds for daily, weekly as well as monthly forecast horizons. It is also found that allowing for skewness and time variation in the higher moments of the distribution does not further improve forecasts. In Essay 2, using 20 years of daily Standard and Poor's 500 index returns, it is found that density forecasts are much improved by allowing for constant excess kurtosis but not by allowing for skewness. Allowing the kurtosis and skewness to be time varying does not further improve the density forecasts but, on the contrary, makes them slightly worse. In Essay 3 a new model incorporating conditional variance, skewness and kurtosis, based on the Normal Inverse Gaussian (NIG) distribution, is proposed.
The new model and two previously used NIG models are evaluated by their Value at Risk (VaR) forecasts on a long series of daily Standard and Poor's 500 returns. The results show that only the new model produces satisfactory VaR forecasts at both the 1% and 5% levels. Taken together, the results of the thesis show that kurtosis appears not to exhibit predictable time variation, whereas some predictability is found in the skewness. However, the dynamic properties of the skewness are not completely captured by any of the models.
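The GARCH(1,1) recursion underlying Essay 1, and its multi-step variance forecast, can be written in a few lines. This is a minimal sketch with illustrative parameter values, not the estimation code of the dissertation (which fits nine error distributions and evaluates against realized variance):

```python
def garch_variance_path(returns, omega, alpha, beta, var0):
    """Conditional variance recursion of a GARCH(1,1) model:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    sigma2 = [var0]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

def garch_forecast(omega, alpha, beta, last_var, last_ret, horizon):
    """h-step-ahead variance forecasts; they converge geometrically to the
    unconditional variance omega / (1 - alpha - beta) as the horizon grows."""
    f = [omega + alpha * last_ret ** 2 + beta * last_var]
    for _ in range(horizon - 1):
        f.append(omega + (alpha + beta) * f[-1])
    return f
```

The error distribution enters at estimation time (through the likelihood) and in density or VaR forecasts; the variance recursion itself is the same whether the innovations are normal, Student-t, or NIG.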

Relevance: 20.00%

Publisher:

Abstract:

The thesis presents a state-space model for a basketball league and a Kalman filter algorithm for the estimation of the state of the league. In the state-space model, each basketball team is associated with a rating that represents its strength relative to the other teams. The ratings are assumed to evolve in time following a stochastic process with independent Gaussian increments. The estimation of the team ratings is based on the observed game scores, which are assumed to depend linearly on the true strengths of the teams plus independent Gaussian noise. The team ratings are estimated using a recursive Kalman filter algorithm that produces least-squares optimal estimates for the team strengths and predictions for the scores of future games. Additionally, if the Gaussianity assumption holds, the predictions given by the Kalman filter maximize the likelihood of the observed scores. The team ratings allow probabilistic inference about the ranking of the teams and their relative strengths, as well as about the teams' winning probabilities in future games. The predictions about the winners of the games are correct 65-70% of the time, and the team ratings explain 16% of the random variation observed in the game scores. Furthermore, the winning probabilities given by the model are consistent with the observed scores. The state-space model includes four independent parameters that involve the variances of the noise terms and the home-court advantage observed in the scores. The thesis presents the estimation of these parameters using the maximum likelihood method as well as other techniques. The thesis also gives various example analyses related to the American professional basketball league, i.e., the National Basketball Association (NBA), and regular seasons played in the years 2005 through 2010. The season 2009-2010 is discussed in full detail, including the playoffs.
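The recursive Kalman filter step described above can be sketched for a single game. The function below is a minimal illustration with hypothetical parameter names (process variance q, observation variance r, home advantage), not the thesis's full algorithm, which also estimates those parameters:

```python
import numpy as np

def kalman_update(ratings, P, home, away, margin, home_adv, q, r):
    """One Kalman filter step for a team-rating state-space model.

    State: team ratings following a random walk (process variance q).
    Observation: score margin = rating[home] - rating[away] + home_adv + noise(r).
    """
    n = len(ratings)
    # time update: random-walk dynamics inflate the state covariance
    P = P + q * np.eye(n)
    # observation matrix picks out the two teams playing
    H = np.zeros((1, n))
    H[0, home], H[0, away] = 1.0, -1.0
    pred = ratings[home] - ratings[away] + home_adv
    S = H @ P @ H.T + r              # innovation variance
    K = P @ H.T / S                  # Kalman gain
    ratings = ratings + K[:, 0] * (margin - pred)
    P = P - K @ H @ P                # measurement update of the covariance
    return ratings, P
```

Only the two teams that played have non-zero entries in the gain (given a diagonal prior covariance), so a game moves their ratings in opposite directions and leaves the rest of the league untouched; the predicted margin `pred` for a future game is what drives the winning-probability inference.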