916 results for "flood forecasting model"

Relevance: 80.00%

Abstract:

Moist convection is well known to be generally more intense over continental than maritime regions, with larger updraft velocities, graupel, and lightning production. This study explores the transition from maritime to continental convection by comparing the trends in Tropical Rainfall Measuring Mission (TRMM) radar and microwave (37 and 85 GHz) observations over islands of increasing size to those simulated by a cloud-resolving model. The observed storms were essentially maritime over islands of <100 km2 and continental over islands >10 000 km2, with a gradual transition in between. Equivalent radar and microwave quantities were simulated from cloud-resolving runs of the Weather Research and Forecasting model via offline radiation codes. The model configuration was idealized, with islands represented by regions of uniform surface heat flux without orography, using a range of initial sounding conditions without strong horizontal winds or aerosols. Simulated storm strength varied with initial sounding, as expected, but also increased sharply with island size in a manner similar to observations. Stronger simulated storms were associated with higher concentrations of large hydrometeors. Although biases varied with different ice microphysical schemes, the trend was similar for all three schemes tested and was also seen in 2D and 3D model configurations. The successful reproduction of the trend with such idealized forcing supports previous suggestions that mesoscale variation in surface heating—rather than any difference in humidity, aerosol, or other aspects of the atmospheric state—is the main reason that convection is more intense over continents and large islands than over oceans. Some dynamical storm aspects, notably the peak rainfall and minimum surface pressure low, were more sensitive to surface forcing than to the atmospheric sounding or ice scheme. 
Large hydrometeor concentrations and simulated microwave and radar signatures, however, were at least as sensitive to initial humidity levels as to surface forcing and were more sensitive to the ice scheme. Issues with running the TRMM simulator on 2D simulations are discussed, but they appear to be less serious than sensitivities to model microphysics, which were similar in 2D and 3D. This supports the further use of 2D simulations to economically explore modeling uncertainties.

Relevance: 80.00%

Abstract:

In situ high-resolution aircraft measurements of cloud microphysical properties were made in coordination with ground-based radar and lidar remote sensing observations of a line of small cumulus clouds, as part of the Aerosol Properties, PRocesses And InfluenceS on the Earth's climate (APPRAISE) project. A narrow but extensive line (~100 km long) of shallow convective clouds over the southern UK was studied. Cloud-top temperatures were observed to be higher than −8 °C, but the clouds were seen to consist of supercooled droplets and varying concentrations of ice particles. No ice particles were observed to be falling into the cloud tops from above. Current parameterisations of ice nuclei (IN) numbers predict that too few particles will be active as ice nuclei to account for ice particle concentrations at the observed near-cloud-top temperatures (−7.5 °C). The role of mineral dust particles, at concentrations consistent with those observed near the surface, acting as high-temperature IN is considered important in this case. It was found that very high concentrations of ice particles (up to 100 L−1) could be produced by secondary ice particle production, provided the observed small amount of primary ice (about 0.01 L−1) was present to initiate it. This emphasises the need to understand primary ice formation in slightly supercooled clouds. It is shown using simple calculations that the Hallett-Mossop process (HM) is the likely source of the secondary ice. Model simulations of the case study were performed with the Aerosol Cloud and Precipitation Interactions Model (ACPIM). These parcel-model investigations confirmed the HM process to be a very important mechanism for producing the observed high ice concentrations. A key step in generating the high concentrations was collision and coalescence of rain drops, which, once formed, fell rapidly through the cloud, collecting ice particles that caused them to freeze and instantly form large rimed particles.
The broadening of the droplet size distribution by collision-coalescence was therefore a vital step in this process, as it was required to generate the large number of ice crystals observed in the time available. Simulations were also performed with the WRF (Weather Research and Forecasting) model. The results showed that while HM does act to increase the mass and number concentration of ice particles in these simulations, it was not found to be critical for the formation of precipitation. However, the WRF simulations produced a cloud top that was too cold, and this, combined with the assumption of continual replenishment of ice nuclei removed by ice crystal formation, resulted in too many ice crystals forming by primary nucleation compared with the observations and parcel modelling.
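The multiplication implied by the concentrations quoted above can be checked with back-of-envelope arithmetic. The sketch below assumes the commonly cited Hallett-Mossop yield of roughly 350 splinters per milligram of rime accreted near −5 °C; that yield is a literature value and an assumption here, not a number from this study:

```python
# Hedged back-of-envelope check: the ice multiplication implied by the
# observed concentrations (0.01 L^-1 primary vs up to 100 L^-1 total),
# and the rime mass the HM process would need to supply per litre of air,
# assuming ~350 splinters per mg of rime (literature value, not from
# this study).

primary_ice = 0.01    # L^-1, observed primary ice concentration
total_ice = 100.0     # L^-1, observed peak ice concentration
hm_yield = 350.0      # splinters per mg of rime near -5 degC (assumed)

multiplication_factor = total_ice / primary_ice
rime_needed_mg_per_litre = (total_ice - primary_ice) / hm_yield

print(f"multiplication factor: {multiplication_factor:.0f}")
print(f"rime required: {rime_needed_mg_per_litre:.3f} mg per litre of air")
```

A factor of 10^4 from secondary production is consistent with the abstract's emphasis that only a tiny amount of primary ice is needed to initiate the process.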

Relevance: 80.00%

Abstract:

The parameterization of surface heat-flux variability in urban areas relies on adequate representation of surface characteristics. Given the horizontal resolutions (≈0.1–1 km) currently used in numerical weather prediction (NWP) models, properties of the urban surface (e.g. vegetated/built surfaces, street-canyon geometries) often have large spatial variability. Here, a new approach based on Urban Zones to characterize Energy partitioning (UZE) is tested within an NWP model (Weather Research and Forecasting model; WRF v3.2.1) for Greater London. The urban land-surface scheme is the Noah/Single-Layer Urban Canopy Model (SLUCM). Detailed surface information (horizontal resolution 1 km) in central London shows that the UZE offers better characterization of surface properties and their variability than the default WRF-SLUCM input parameters. In situ observations of the surface energy fluxes and near-surface meteorological variables are used to select the radiation and turbulence parameterization schemes and to evaluate the land-surface scheme.

Relevance: 80.00%

Abstract:

The impact of 1973–2005 land use–land cover (LULC) changes on near-surface air temperatures during four recent summer extreme heat events (EHEs) is investigated for the arid Phoenix, Arizona, metropolitan area using the Weather Research and Forecasting Model (WRF) in conjunction with the Noah Urban Canopy Model. WRF simulations were carried out for each EHE using LULC for the years 1973, 1985, 1998, and 2005. Comparison of measured near-surface air temperatures and wind speeds for 18 surface stations in the region shows good agreement between observed and simulated data for all simulation periods. The results indicate consistent, significant contributions of urban development and the accompanying LULC changes to extreme temperatures for the four EHEs. Simulations suggest new urban developments caused an intensification and expansion of the area experiencing extreme temperatures but mainly influenced nighttime temperatures, with increases of up to 10 K. Nighttime temperatures in the existing urban core showed changes of up to 2 K with the ongoing LULC changes. Daytime temperatures were not significantly affected where urban development replaced desert land (an increase of 1 K); however, maximum temperatures increased by 2–4 K where irrigated agricultural land was converted to suburban development. According to the model simulations, urban landscaping irrigation contributed a cooling of 0.5–1 K in maximum daytime as well as minimum nighttime 2-m air temperatures in most parts of the urban region. Furthermore, urban development led to a reduction of the already relatively weak nighttime winds and therefore a reduction in the advection of cooler air into the city.

Relevance: 80.00%

Abstract:

The CWRF is developed as a climate extension of the Weather Research and Forecasting model (WRF) by incorporating numerous improvements in the representation of physical processes and the integration of external (top, surface, lateral) forcings that are crucial on climate scales, including interactions between land, atmosphere, and ocean; convection and microphysics; cloud, aerosol, and radiation; and system consistency throughout all process modules. This extension inherits all WRF functionalities for numerical weather prediction while enhancing the capability for climate modeling. As such, CWRF can be applied seamlessly to weather forecasting and climate prediction. The CWRF is built with a comprehensive ensemble of alternative parameterization schemes for each of the key physical processes, including surface (land, ocean), planetary boundary layer, cumulus (deep, shallow), microphysics, cloud, aerosol, and radiation, and their interactions. This facilitates the use of an optimized physics-ensemble approach to improve weather or climate prediction along with a reliable uncertainty estimate. The CWRF also emphasizes the capability to provide impact-relevant information to society by coupling with detailed models of terrestrial hydrology, coastal ocean, crop growth, air quality, and a recently expanded interactive water quality and ecosystem model. This study provides a general CWRF description and a basic skill evaluation based on a continuous integration for the period 1979–2009, compared with that of WRF, using a 30-km grid spacing over a domain that includes the contiguous United States plus southern Canada and northern Mexico. In addition to the advantages of greater application capability, CWRF improves performance in radiation and terrestrial hydrology over WRF and other regional models. Precipitation simulation, however, remains a challenge for all of the tested models.

Relevance: 80.00%

Abstract:

This paper examines the lead–lag relationship between the FTSE 100 index and index futures prices employing a number of time series models. Using 10-minute observations from June 1996–1997, it is found that lagged changes in the futures price can help to predict changes in the spot price. The best forecasting model is of the error-correction type, allowing for the theoretical difference between spot and futures prices according to the cost-of-carry relationship. This predictive ability is in turn utilised to derive a trading strategy, which is tested under real-world conditions to search for systematically profitable trading opportunities. It is revealed that although the model forecasts produced significantly higher returns than a passive benchmark, the model was unable to outperform the benchmark after allowing for transaction costs.
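The error-correction idea described above can be sketched on synthetic data: changes in the spot price are regressed on the lagged spot-futures basis (the error-correction term implied by cost of carry) plus a lagged futures change. The variable names and single-lag layout are illustrative assumptions, not the paper's actual specification:

```python
# Minimal error-correction regression sketch (illustrative, not the
# paper's model): a negative coefficient on the lagged basis means the
# spot price reverts toward the futures price.

import numpy as np

rng = np.random.default_rng(0)
n = 500
f = np.cumsum(rng.normal(0, 1, n))        # synthetic futures price (random walk)
s = f + rng.normal(0, 0.5, n)             # spot price cointegrated with futures

ds = np.diff(s)                           # spot price changes
df = np.diff(f)                           # futures price changes
basis = (s - f)[:-1]                      # lagged error-correction term

# regress ds_t on a constant, basis_{t-1}, and df_{t-1}
X = np.column_stack([np.ones(n - 2), basis[1:], df[:-1]])
y = ds[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print("error-correction coefficient:", beta[1])  # expected to be negative
```

With cointegrated series the error-correction coefficient is negative, which is what makes lagged futures information useful for predicting spot changes.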

Relevance: 80.00%

Abstract:

The Weather Research and Forecasting model was applied to analyze variations in the planetary boundary layer (PBL) structure over Southeast England, including central and suburban London. The parameterizations and predictive skills of two nonlocal-mixing PBL schemes, YSU and ACM2, and two local-mixing PBL schemes, MYJ and MYNN2, were evaluated over a variety of stability conditions, with model predictions at a 3 km grid spacing. The PBL height predictions, which are critical for scaling turbulence and diffusion in meteorological and air quality models, show significant inter-scheme variance (>20%), and the reasons are presented. ACM2 diagnoses the PBL height thermodynamically using the bulk Richardson number method, which leads to good agreement with the lidar data for both unstable and stable conditions. The modeled vertical profiles in the PBL, such as wind speed, turbulent kinetic energy (TKE), and heat flux, exhibit large spreads across the PBL schemes. The TKE predicted by MYJ was found to be too small and showed much less diurnal variation than observations over London. MYNN2 produces better TKE predictions at low levels than MYJ, but its turbulent length scale increases with height in the upper part of the strongly convective PBL, where it should decrease. The local PBL schemes considerably underestimate the entrainment heat fluxes for convective cases. The nonlocal PBL schemes exhibit stronger mixing in the mean wind fields under convective conditions than the local PBL schemes and agree better with large-eddy simulation (LES) studies.
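The bulk Richardson number diagnosis mentioned above can be sketched as follows. The critical value of 0.25 and the profile values are assumptions for illustration, not the ACM2 implementation itself:

```python
# Sketch of a bulk Richardson number PBL height diagnosis: the PBL top
# is taken as the first model level where Ri_b exceeds an assumed
# critical value of 0.25. Profile values are synthetic.

import numpy as np

g = 9.81
ric = 0.25                                # assumed critical Richardson number

z = np.array([10., 100., 300., 600., 900., 1200.])             # heights (m)
theta_v = np.array([300., 300.1, 300.2, 300.3, 302.0, 304.0])  # virtual pot. temp. (K)
u = np.array([2., 4., 6., 7., 8., 9.])                         # wind components (m/s)
v = np.zeros_like(u)

# Ri_b relative to the lowest level
rib = g * (theta_v - theta_v[0]) * (z - z[0]) / (theta_v[0] * (u**2 + v**2))

# first level where Ri_b exceeds the critical value
pbl_height = z[np.argmax(rib > ric)]
print("diagnosed PBL height:", pbl_height, "m")
```

With this synthetic sounding the stable layer above 600 m caps the boundary layer, so the diagnosis lands at 900 m.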

Relevance: 80.00%

Abstract:

This paper describes a fast and reliable method for redistributing a computational mesh in three dimensions, which can generate a complex three-dimensional mesh without problems due to mesh tangling. The method relies on a three-dimensional implementation of the parabolic Monge–Ampère (PMA) technique for finding an optimally transported mesh. The implementation of PMA is described in detail and applied to both static and dynamic mesh redistribution problems, studying both the convergence and the computational cost of the algorithm. The algorithm is applied to a series of problems of increasing complexity. In particular, very regular meshes are generated to resolve real meteorological features (derived from a weather forecasting model covering the UK area) in grids with over 2×10⁷ degrees of freedom. The PMA method computes these grids in times commensurate with those required for operational weather forecasting.
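A full 3-D parabolic Monge–Ampère solver is beyond a short snippet, but the underlying idea, moving mesh points so that a monitor function is equidistributed, can be sketched in one dimension. The monitor function and resolution below are illustrative assumptions, not those of the paper:

```python
# 1-D equidistribution sketch of mesh redistribution: place mesh points
# so each cell holds equal "monitor mass", concentrating resolution where
# the monitor (a proxy for solution features) is large.

import numpy as np

def equidistribute(monitor, a=0.0, b=1.0, n=21, samples=2001):
    """Place n mesh points on [a, b] so each cell holds equal monitor mass."""
    x = np.linspace(a, b, samples)
    m = monitor(x)
    # cumulative monitor integral via the trapezoid rule
    cum = np.concatenate([[0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, cum[-1], n)
    return np.interp(targets, cum, x)     # invert the cumulative monitor

# concentrate points near a sharp feature at x = 0.5 (illustrative monitor)
mesh = equidistribute(lambda x: 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2))
print(np.round(mesh, 3))
```

The PMA approach generalizes this one-dimensional inversion to higher dimensions via an optimal-transport map, which is what avoids mesh tangling.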

Relevance: 80.00%

Abstract:

Flood forecasting increasingly relies on numerical weather prediction forecasts to achieve longer lead times. One of the key difficulties emerging in constructing a decision framework for these flood forecasts is what to do when consecutive forecasts are so different that they lead to different conclusions regarding the issuing of warnings or the triggering of other action. In this opinion paper we explore some of the issues surrounding such forecast inconsistency (also known as "jumpiness", "turning points", "continuity", or number of "swings"). We define forecast inconsistency; discuss the reasons why forecasts might be inconsistent; consider how we should analyse inconsistency, what we should do about it, and how we should communicate it; and ask whether it is a totally undesirable property. Consistency is increasingly emerging as a hot topic in many forecasting environments.
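One simple way to quantify the "jumpiness" discussed above is to count how often consecutive forecasts for the same event flip across a warning threshold. The threshold and forecast values below are illustrative, not from the paper:

```python
# Count threshold-crossing "swings" between consecutive forecasts issued
# for the same flood event (illustrative inconsistency measure).

def count_flips(forecasts, threshold):
    """Number of consecutive-forecast pairs that disagree about exceeding the threshold."""
    above = [f >= threshold for f in forecasts]
    return sum(a != b for a, b in zip(above, above[1:]))

# successive forecasts of peak river level (m) for the same event
runs = [4.8, 5.3, 4.9, 5.6, 5.4]
print(count_flips(runs, threshold=5.0))   # -> 3 swings across the warning level
```

Each swing is exactly the situation the paper highlights: consecutive forecasts leading to different warning decisions.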

Relevance: 80.00%

Abstract:

Mathematical relationships between Scoring Parameters can be used in Economic Scoring Formulas (ESFs) in tendering to distribute the score among bidders in the economic part of a proposal. Each contracting authority must set an ESF when publishing tender specifications, and the strategy of each bidder will differ depending on the ESF selected and its weight in the overall proposal scoring. This paper introduces the various mathematical relationships and density distributions that describe and inter-relate not only the main Scoring Parameters but also the main Forecasting Parameters in any capped tender (one whose price is upper-limited). Forecasting Parameters, as variables that can be known before the deadline of a tender is reached, together with Scoring Parameters constitute the basis of a future Bid Tender Forecasting Model.
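As a concrete illustration of one common family of Economic Scoring Formulas, the sketch below scores each bid in proportion to the lowest valid bid, with the tender price acting as the cap. The formula and weights are illustrative assumptions, not those proposed in the paper:

```python
# Illustrative proportional Economic Scoring Formula for a capped tender:
# bids above the tender price are excluded; the lowest valid bid gets the
# maximum score and other bids scale inversely with their price.

def proportional_scores(bids, tender_price, max_score=100.0):
    """Return {bid: score} for all valid (capped) bids."""
    valid = [b for b in bids if b <= tender_price]   # capped tender: drop over-limit bids
    best = min(valid)
    return {b: max_score * best / b for b in valid}

scores = proportional_scores([90.0, 95.0, 100.0, 120.0], tender_price=110.0)
print(scores)   # the 120.0 bid exceeds the cap and is excluded
```

Under such a formula a bidder's optimal discount depends on what others are expected to bid, which is exactly why forecasting the distribution of Scoring Parameters matters.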

Relevance: 80.00%

Abstract:

This thesis consists of a summary and five self-contained papers addressing the dynamics of firms in the Swedish wholesale trade sector. Paper [1] focuses on determinants of new firm formation in the Swedish wholesale trade sector, using two definitions of firms' relevant markets: markets defined as administrative areas and markets based on the cost-minimizing behavior of retailers. The paper shows that newly entering firms tend to avoid regions with an already high concentration of other firms in the same branch of wholesaling, while right-of-the-center local government and quality of the infrastructure have positive impacts on the entry of new firms. The signs of the estimated coefficients remain the same regardless of which definition of the relevant market is used, while the sizes of the coefficients are generally higher when relevant markets are delineated on the cost-minimizing assumption of retailers. Paper [2] analyses determinants of firm relocation, distinguishing between the roles of factors in in-migration and out-migration municipalities. The results indicate that firm-specific factors, such as profits, age, and size of the firm, are negatively related to the firm's decision to relocate. Furthermore, firms seem to avoid municipalities with an already high concentration of firms operating in the same industrial branch of wholesaling and to be more reluctant to leave municipalities governed by right-of-the-center parties. Lastly, firms seem to avoid moving to municipalities characterized by high population density. Paper [3] addresses determinants of firm growth, adopting OLS and a quantile regression technique. The results indicate that very little of firm growth can be explained by the firm-, industry-, and region-specific factors controlled for in the estimated models. Instead, firm growth seems to be driven by internal characteristics of firms, factors difficult to capture in conventional statistics.
This result supports Penrose's (1959) suggestion that internal resources such as firm culture, brand loyalty, entrepreneurial skills, and so on are important determinants of firm growth rates. Paper [4] formulates a forecasting model for firm entry into local markets and tests this model using data from the Swedish wholesale industry. The empirical analysis is based on directly estimating the profit function of wholesale firms and identifying low- and high-return local markets. The results indicate that 19 of 30 estimated models have more net entry in high-return municipalities, but the estimated parameter is statistically significant at the conventional level in only one of the estimated models, and then with an unexpected negative sign. Paper [5] studies the effects of relocation on the profits of relocating firms, employing difference-in-difference propensity score matching. Using propensity score matching, the pre-relocation differences between relocating and non-relocating firms are balanced, while the difference-in-difference estimator controls for all time-invariant unobserved heterogeneity among firms. The results suggest that firms that relocate increase their profits significantly compared with what the profits would have been had the firms not relocated. This effect is estimated to vary between 3 and 11 percentage points, depending on the length of the analyzed period.
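The difference-in-difference matching estimator used in Paper [5] can be sketched on synthetic data: each "relocating" firm is matched to the non-relocating firm with the nearest propensity score, and the before/after profit changes are differenced. The data and the crude propensity proxy are synthetic illustrations, not the thesis's dataset or model:

```python
# Difference-in-difference with nearest-neighbour propensity matching
# (synthetic illustration): the true relocation effect is built in as 2.0.

import numpy as np

rng = np.random.default_rng(1)
n = 200
size = rng.normal(0, 1, n)                          # pre-period covariate
treated = rng.random(n) < 1 / (1 + np.exp(-size))   # selection into relocation

profit_pre = 10 + size + rng.normal(0, 1, n)
profit_post = profit_pre + rng.normal(0, 1, n) + 2.0 * treated  # true effect = 2

# crude propensity proxy: the covariate itself (monotone in P(treated))
ps = size
t_idx = np.where(treated)[0]
c_idx = np.where(~treated)[0]

did = []
for i in t_idx:
    j = c_idx[np.argmin(np.abs(ps[c_idx] - ps[i]))]  # nearest-neighbour match
    did.append((profit_post[i] - profit_pre[i]) - (profit_post[j] - profit_pre[j]))

print("estimated relocation effect:", np.mean(did))
```

Matching balances the pre-relocation differences, while differencing over time removes time-invariant unobserved firm heterogeneity, mirroring the two layers of the estimator described above.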

Relevance: 80.00%

Abstract:

Applying microeconomic theory, we develop a forecasting model for firm entry into local markets and test this model using data from the Swedish wholesale industry. The empirical analysis is based on directly estimating the profit function of wholesale firms. As in previous entry studies, profits are assumed to depend on firm- and location-specific factors, and the profit equation is estimated using panel data econometric techniques. Using the residuals from the profit equation estimations, we identify local markets in Sweden where firm profits are abnormally high given the level of all independent variables included in the profit function. From microeconomic theory, we then know that these local markets should have higher net entry than other markets, all else being equal, and we investigate this in a second step, also using a panel data econometric model. The results of estimating the net-entry equation indicate that four of five estimated models have more net entry in high-return municipalities, but the estimated parameter is statistically significant at conventional levels in only one of our estimated models.
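The two-step idea above can be sketched in a few lines: estimate a profit equation, average the residuals by local market, and flag markets with abnormally high residual profits as candidates for above-normal net entry. The profit model and data are synthetic illustrations, not the paper's panel specification:

```python
# Step 1: regress profits on observable firm factors; step 2: flag the
# local markets whose mean residual profit is abnormally high
# (synthetic illustration of the paper's two-step approach).

import numpy as np

rng = np.random.default_rng(2)
n_firms, n_markets = 300, 10
market = rng.integers(0, n_markets, n_firms)
firm_size = rng.normal(0, 1, n_firms)
market_premium = np.where(market < 2, 1.5, 0.0)   # markets 0 and 1 are high-return
profit = 5 + 0.8 * firm_size + market_premium + rng.normal(0, 1, n_firms)

# step 1: profit equation on observable firm-specific factors
X = np.column_stack([np.ones(n_firms), firm_size])
beta, *_ = np.linalg.lstsq(X, profit, rcond=None)
resid = profit - X @ beta

# step 2: markets with abnormally high mean residual profit
mean_resid = np.array([resid[market == m].mean() for m in range(n_markets)])
high_return = np.argsort(mean_resid)[-2:]
print("flagged high-return markets:", sorted(high_return.tolist()))
```

Under free entry, such residual-profit markets should attract more net entry, which is the hypothesis the paper's second-step net-entry equation tests.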

Relevance: 80.00%

Abstract:

Flood forecasting systems can be used effectively when the lead time is sufficient compared with the time needed for preventive or corrective actions. In addition, the reliability and precision of the forecasts are fundamentally important. Flood level forecasts are always approximations, and confidence intervals are not always applicable, especially under high degrees of uncertainty, which produce very wide confidence intervals. Such intervals are problematic in the presence of very high or very low river levels. In this study, flood level forecasts are produced both in the traditional numerical form and in the form of categories, using a rule-based expert system with fuzzy inference. Methodologies and computational procedures for learning, simulation, and querying are devised and then implemented as an application (SELF – Sistema Especialista com uso de Lógica "Fuzzy") for research and operational purposes. Comparisons of fuzzy expert systems and linear empirical models, from the standpoint of their use for forecasting, reveal strong analogies despite their fundamental theoretical differences. The methodologies are applied to forecasting in the Camaquã river basin (15543 km²) for lead times between 10 and 48 hours. Practical difficulties in the application are identified, and the resulting solutions constitute advances in knowledge and technique. Forecasts in both numerical and categorical form are carried out successfully with the new resources. The forecasts are evaluated and compared using a new set of statistics derived from the simultaneous frequencies of occurrence of observed and predicted values in the same category during simulation. The effects of varying the network density are analyzed, showing that real-time rainfall-hydrometric forecasting systems are feasible, even with a small number of rain gauge stations, for forecasts in the form of fuzzy categories.
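The categorical side of such a fuzzy forecasting system can be sketched as follows: a forecast river level is mapped onto fuzzy categories via triangular membership functions, and the category with the highest membership is reported. The category names and breakpoints are illustrative assumptions, not those of the SELF system:

```python
# Fuzzy categorisation sketch: triangular membership functions over river
# level (m), with the reported category taken as the maximum membership.
# Breakpoints are illustrative, not from the SELF system.

def tri(x, a, b, c):
    """Triangular membership function peaking at b on support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

CATEGORIES = {
    "low":    (0.0, 1.0, 3.0),
    "medium": (2.0, 4.0, 6.0),
    "high":   (5.0, 7.0, 9.0),
}

def classify(level_m):
    memberships = {k: tri(level_m, *p) for k, p in CATEGORIES.items()}
    return max(memberships, key=memberships.get), memberships

category, mu = classify(6.0)
print(category, mu)
```

Reporting the full membership vector, rather than only the winning category, is one way such systems convey graded uncertainty without resorting to very wide numerical confidence intervals.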

Relevance: 80.00%

Abstract:

This study presents a model for forecasting the price and traded volume in the transoceanic iron ore market. To this end, a VAR model was developed using, in addition to the endogenous variables with one lag of differencing, the Brent oil price and an industrial production index. After unit root tests showed that none of the variables was stationary, a cointegration test confirmed a stationary long-run relationship among them, ruling out a spurious regression. As a result, the VAR approach yielded a consistent model with a close fit for forecasting the price and traded volume of iron ore in the transoceanic market, although it showed some imprecision in the short run.
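The VAR mechanics described above can be sketched with plain least squares: today's (price, volume) vector is regressed on yesterday's values plus a constant. The two-variable lag-1 system and synthetic data below are illustrative; the study's model also included Brent oil and industrial production as exogenous variables:

```python
# Two-variable VAR(1) fitted by least squares on synthetic data
# (illustrative sketch of the forecasting mechanics, not the study's model).

import numpy as np

rng = np.random.default_rng(3)
n = 400
A = np.array([[0.8, 0.1],
              [0.05, 0.9]])                 # true lag-1 coefficient matrix
y = np.zeros((n, 2))
for t in range(1, n):
    y[t] = A @ y[t - 1] + rng.normal(0, 1, 2)

# stack lagged values with a constant and solve the least-squares system
X = np.column_stack([np.ones(n - 1), y[:-1]])
B, *_ = np.linalg.lstsq(X, y[1:], rcond=None)

one_step_forecast = np.array([1.0, *y[-1]]) @ B
print("estimated lag matrix:\n", B[1:].T)
print("one-step forecast:", one_step_forecast)
```

Multi-step forecasts come from iterating the fitted system forward, which is also why forecast uncertainty compounds at short horizons, consistent with the short-run imprecision noted above.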

Relevance: 80.00%

Abstract:

This thesis elaborates the creation of value in private equity and in particular analyzes value creation in 3G Capital's acquisition of Burger King. A specific model is applied that decomposes value creation into several drivers, in order to answer the question of how value creation can be addressed in private equity investments. Although previous research by Achleitner et al. (2010) introduced a specific model that addresses value creation in private equity, that model was neither applied to an individual company nor linked to the indirect drivers that explain the dynamics and rationales behind value creation. This paper, in turn, applies the quantitative model to an ongoing private equity investment and thereby provides several extensions to turn it into a better forecasting model for ongoing investments, instead of only analyzing, from an ex post perspective, a deal that has already been divested. The chosen research approach is a case study of the Burger King buyout that includes, first, an extensive review of the current state of the academic literature; second, a quantitative calculation and qualitative interpretation of different direct value drivers; third, a qualitative breakdown of indirect drivers; and lastly, a recapitulating discussion of value creation and value drivers. Presenting a very successful private equity investment and elaborately demonstrating the dynamics and mechanisms that drive value creation in this case provides important implications for other private equity firms, as well as public firms, in developing their own approaches to value creation.