978 results for Advanced Transaction Models


Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Biliary tract cancer is an uncommon cancer with a poor outcome. We assembled data from the National Cancer Research Institute (UK) ABC-02 study and 10 international studies to determine prognostic outcome characteristics for patients with advanced disease. METHODS: Multivariable analyses of the final dataset from the ABC-02 study were carried out. All variables were simultaneously included in a Cox proportional hazards model, and backward elimination was used to produce the final model (using a significance level of 10%), in which the selected variables were associated independently with outcome. The resulting prognostic score was validated externally by receiver operating characteristic (ROC) curve analysis using the independent international dataset. RESULTS: A total of 410 patients were included from the ABC-02 study and 753 from the international dataset. Overall survival (OS) and progression-free survival (PFS) Cox models were derived from the ABC-02 study. White blood cells, haemoglobin, disease status, bilirubin, neutrophils, gender, and performance status were considered prognostic for survival (all with P < 0.10). Patients with metastatic disease had worse survival {hazard ratio (HR) 1.56 [95% confidence interval (CI) 1.20-2.02]}, as did patients with Eastern Cooperative Oncology Group performance status (ECOG PS) 2 [HR 2.24 (95% CI 1.53-3.28)]. In a dataset restricted to patients who received cisplatin and gemcitabine with ECOG PS 0 and 1, only haemoglobin, disease status, bilirubin, and neutrophils were associated with PFS and OS. ROC analysis suggested the models generated from the ABC-02 study had limited prognostic value [6-month PFS: area under the curve (AUC) 62% (95% CI 57-68); 1-year OS: AUC 64% (95% CI 58-69)]. CONCLUSION: These data define a set of prognostic criteria for outcome in advanced biliary tract cancer derived from the ABC-02 study and validated in an international dataset. Although these findings establish the benchmark for the prognostic evaluation of patients with advanced biliary tract cancer and confirm the value of long-held clinical observations, the ability of the model to correctly predict prognosis is limited and needs to be improved through identification of additional clinical and molecular markers.
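The core of the described method is a multivariable Cox proportional hazards fit whose linear predictor is then checked for discrimination via ROC/AUC. A minimal sketch of that workflow is below; the variable names, synthetic data and binary 6-month endpoint are illustrative assumptions, not the actual ABC-02 dataset:

```python
# Hedged sketch: multivariable Cox proportional hazards fit plus ROC/AUC
# discrimination check, in the spirit of the analysis above. The variables,
# synthetic data and 6-month endpoint are illustrative only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 300
cohort = pd.DataFrame({
    "haemoglobin": rng.normal(12, 1.5, n),
    "bilirubin": rng.lognormal(1.0, 0.5, n),
    "metastatic": rng.integers(0, 2, n),
    "os_months": rng.exponential(12, n),      # survival time
    "os_event": rng.integers(0, 2, n),        # 1 = death observed
})

cph = CoxPHFitter()
cph.fit(cohort, duration_col="os_months", event_col="os_event")
cph.print_summary()                            # hazard ratios with 95% CIs

# Validation idea: the linear predictor acts as a prognostic score, and its
# discrimination for a binary 6-month endpoint is summarised by the AUC
# (in the study this step used the independent international cohort).
score = cph.predict_partial_hazard(cohort)
died_by_6m = (cohort["os_months"] <= 6).astype(int)
print("AUC:", round(roc_auc_score(died_by_6m, score), 2))
```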

Relevance:

30.00%

Publisher:

Abstract:

Seaports play an important part in the well-being of a nation. Many nations are highly dependent on foreign trade, and most trade is carried by sea vessels. This study is part of a larger research project in which a simulation model is required in order to support further analyses of Finnish macro-logistical networks. The objective of this study is to create a system dynamics simulation model that gives an accurate forecast of the development of demand for Finnish seaports up to 2030. The emphasis of this study is on showing how a detailed harbor-demand system dynamics model can be created with the help of statistical methods. The forecasting methods used were ARIMA (autoregressive integrated moving average) and regression models. The resulting simulation model produces a forecast with confidence intervals and allows different scenarios to be studied. The building process was found to be a useful one, and the model can be expanded to a greater level of detail. Required capacity for other parts of the Finnish logistical system could easily be included in the model.
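As an illustration of the statistical layer feeding such a forecast, here is a minimal ARIMA sketch producing point forecasts with confidence intervals; the throughput series and the (p, d, q) order are invented placeholders, not the study's data:

```python
# Hedged sketch: ARIMA forecast with confidence intervals, of the kind fed into
# a port-demand system dynamics model. Series and model order are illustrative.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical annual cargo throughput for one seaport (million tonnes).
cargo = np.array([31.2, 32.5, 33.1, 35.0, 34.2, 36.8, 38.1, 39.5, 40.2,
                  41.0, 42.3, 41.8, 43.5, 44.1, 45.0])

model = ARIMA(cargo, order=(1, 1, 1)).fit()
forecast = model.get_forecast(steps=15)          # e.g. project forward to 2030
print(forecast.predicted_mean)                   # point forecast
print(forecast.conf_int(alpha=0.05))             # 95% confidence intervals
```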

Relevance:

30.00%

Publisher:

Abstract:

Recent years have produced great advances in instrumentation technology. The amount of available data has been increasing due to the simplicity, speed and accuracy of current spectroscopic instruments. Most of these data are, however, meaningless without proper analysis. This has been one of the reasons for the growing success of multivariate handling of such data. Industrial data are commonly not designed data; in other words, there is no exact experimental design, but rather the data have been collected as a routine procedure during an industrial process. This places certain demands on the multivariate modeling, as the selection of samples and variables can have an enormous effect. Common approaches in the modeling of industrial data are PCA (principal component analysis) and PLS (projection to latent structures, or partial least squares), but there are also other methods that should be considered. The more advanced methods include multi-block modeling and nonlinear modeling. In this thesis it is shown that the results of data analysis vary according to the modeling approach used, making the selection of the modeling approach dependent on the purpose of the model. If the model is intended to provide accurate predictions, the approach should differ from the case where the purpose of modeling is mostly to obtain information about the variables and the process. For industrial applicability it is essential that the methods are robust and sufficiently simple to apply. In this way the methods and the results can be compared and an approach selected that is suitable for the intended purpose. Differences in data analysis methods are compared with data from different fields of industry in this thesis. In the first two papers, the multi-block method is considered for data originating from the oil and fertilizer industries. The results are compared to those from PLS and priority PLS. The third paper considers the applicability of multivariate models to process control for a reactive crystallization process. In the fourth paper, nonlinear modeling is examined with a data set from the oil industry. The response has a nonlinear relation to the descriptor matrix, and the results are compared between linear modeling, polynomial PLS and nonlinear modeling using nonlinear score vectors.
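The contrast drawn above between exploratory and predictive use of multivariate models can be made concrete with a short sketch: PCA to inspect the variable structure, PLS with cross-validation when prediction is the goal. The data below are synthetic stand-ins, not the industrial datasets of the thesis:

```python
# Hedged sketch: PCA for exploratory structure and PLS for prediction, the two
# baseline approaches named above. The process data here are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))                 # e.g. spectra / process variables
y = X[:, :3] @ np.array([0.8, -0.5, 0.3]) + rng.normal(scale=0.1, size=200)

# Exploratory view: information about the variables and the process.
pca = PCA(n_components=5).fit(X)
print("explained variance ratio:", pca.explained_variance_ratio_.round(3))

# Predictive view: accuracy of predictions as the purpose of the model.
pls = PLSRegression(n_components=5)
r2 = cross_val_score(pls, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", r2.round(3))
```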

Relevance:

30.00%

Publisher:

Abstract:

Electricity distribution network operation (NO) models are challenged, as they are expected to continue to undergo changes during the coming decades in the fairly developed and regulated Nordic electricity market. Network asset managers are expected to adapt to competitive techno-economic business models for the operation of increasingly intelligent distribution networks. Factors driving the changes towards new business models within network operation include increased investments in distributed automation (DA), regulatory frameworks for annual profit limits and quality through outage cost, increasing end-customer demands, climatic changes and increasing use of data system tools such as the Distribution Management System (DMS). The doctoral thesis addresses the questions of a) whether conditions and qualifications exist for competitive markets within electricity distribution network operation and b) if so, the identification of limitations and required business mechanisms. This doctoral thesis aims to provide an analytical business framework, primarily for electric utilities, for evaluating and developing dedicated network operation models to meet future market dynamics within network operation. In the thesis, the generic build-up of a business model is addressed through the strategic business hierarchy levels of mission, vision and strategy for defining the strategic direction of the business, followed by the planning, management and process execution levels of enterprise strategy execution. Research questions within electricity distribution network operation are addressed at the specified hierarchy levels. The results of the research represent interdisciplinary findings in the areas of electrical engineering and production economics. The main scientific contributions include further development of extended transaction cost economics (TCE) for governance decisions within electricity networks and validation of the usability of the methodology for the electricity distribution industry. Moreover, the DMS benefit evaluations in the thesis, based on outage cost calculations, propose theoretical maximum benefits of DMS applications equalling roughly 25% of the annual outage costs and 10% of the respective operative costs in the case electric utility. Hence, the annual measurable theoretical benefits from the use of DMS applications are considerable. The theoretical results in the thesis are generally validated by surveys and questionnaires.
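The quoted DMS benefit figures amount to simple proportional arithmetic; the sketch below restates them with invented euro placeholders (the 25% and 10% figures are two reference points for the same theoretical benefit, not additive terms):

```python
# Hedged sketch of the benefit arithmetic quoted above. The euro amounts are
# invented placeholders, not the case utility's actual costs.
annual_outage_cost_eur = 2_000_000       # assumed annual outage cost
annual_operative_cost_eur = 5_000_000    # assumed annual operative cost

# The thesis expresses the theoretical maximum DMS benefit relative to each
# cost base separately (~25% of outage costs, ~10% of operative costs).
benefit_vs_outage = 0.25 * annual_outage_cost_eur
benefit_vs_operative = 0.10 * annual_operative_cost_eur

print(f"~25% of outage costs:    {benefit_vs_outage:,.0f} EUR/year")
print(f"~10% of operative costs: {benefit_vs_operative:,.0f} EUR/year")
```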

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to compare the hydrographically conditioned digital elevation models (HCDEMs) generated from data of the VNIR (Visible Near Infrared) sensor of ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), from SRTM (Shuttle Radar Topography Mission) and from IBGE topographical maps at a scale of 1:50,000, processed in a Geographical Information System (GIS), aiming at the morphometric characterization of watersheds. The São Bartolomeu River sub-basin was taken as the basis, and morphometric characteristics were obtained from the HCDEMs. Root Mean Square Error (RMSE) and cross-validation were the statistical indices used to evaluate the quality of the HCDEMs. The percentage differences in the morphometric parameters obtained from these three different data sets were less than 10%, except for the mean slope (21%). In general, good agreement was observed between the HCDEMs generated from remote sensing data and from the IBGE maps. The result for the ASTER HCDEM was slightly better than that for the SRTM HCDEM. The ASTER HCDEM was more accurate than the SRTM HCDEM in basins with high altitudes and rugged terrain, presenting an altimetric frequency distribution closest to the IBGE HCDEM, considered the standard in this study.
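A minimal sketch of the comparison statistics involved (RMSE between two elevation grids, plus the relative difference in a derived morphometric parameter such as mean slope); the small arrays and the 30 m cell size are synthetic stand-ins, not the ASTER/SRTM/IBGE data:

```python
# Hedged sketch: RMSE between two elevation grids and a percentage difference
# in a morphometric parameter, as used to compare the HCDEMs above.
import numpy as np

dem_reference = np.array([[512.0, 515.0, 520.0],
                          [508.0, 511.0, 516.0],
                          [503.0, 507.0, 512.0]])          # e.g. HCDEM from IBGE maps
dem_test = dem_reference + np.array([[ 1.5, -2.0,  0.5],
                                     [-1.0,  2.5, -0.5],
                                     [ 0.0,  1.0, -1.5]])  # e.g. HCDEM ASTER

rmse = np.sqrt(np.mean((dem_test - dem_reference) ** 2))
print(f"RMSE: {rmse:.2f} m")

# Relative difference in a derived morphometric parameter (mean slope here).
def mean_slope_percent(dem, cell_size=30.0):
    gy, gx = np.gradient(dem, cell_size)
    return np.mean(np.sqrt(gx ** 2 + gy ** 2)) * 100.0

diff = abs(mean_slope_percent(dem_test) - mean_slope_percent(dem_reference))
print(f"Mean-slope difference: {diff / mean_slope_percent(dem_reference) * 100:.1f}%")
```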

Relevance:

30.00%

Publisher:

Abstract:

The aim of this work is to develop a service product for ABB that can be offered to power plant customers. The new service product must be aligned with ABB's new strategy. The service offers customers the execution of the mandatory measures defined by the Energy Efficiency Act that entered into force on 1 January 2015. In this work, information is collected, processed and analysed to support decision-making in the productization process of a service aimed at power plant customers. To develop the service product, ABB's current service products, competences and reference projects, the Energy Efficiency Act, the energy efficiency potential of power plants and different energy audit models are studied. To support decision-making, an energy analysis of a power plant is carried out as a reference project, in which the power plant is modelled with the ipsePRO simulation program. The modelling and test runs are used to study the optimization of the power plant's minimum load. The market study examines the impact of legislation, the current market situation, potential customers, competitors and ABB's opportunities to operate in the field by means of a SWOT analysis. Based on the results of the study, a decision is made to productize a service for power plants that covers all measures needed to fulfil the requirements of the Energy Efficiency Act with respect to acting as the responsible person for a company's energy audit and carrying out the company energy audit and site audits. In addition, during the work the Finnish Energy Authority (Energiavirasto) granted ABB the qualification to act as the responsible person for a company's energy audit, which is a prerequisite for offering the service.

Relevance:

30.00%

Publisher:

Abstract:

Hardware/software systems are becoming indispensable in every aspect of daily life. The growing presence of these systems in various products and services creates a need for methods to develop them efficiently. However, efficient design of these systems is limited by several factors, among them: the growing complexity of applications, increasing integration density, the heterogeneous nature of products and services, and shrinking time to market. Transaction-level modelling (TLM) is considered a promising paradigm for managing design complexity and providing means to explore and validate design alternatives at high levels of abstraction. This research proposes a methodology for expressing time in TLM based on an analysis of timing constraints. We propose to use a combination of two development paradigms to accelerate design: TLM on the one hand, and a methodology for expressing time between different transactions on the other. This synergy allows us to combine, in a single environment, efficient simulation methods and formal analytical methods. We propose a new timing-verification algorithm based on a procedure for linearizing min/max-type constraints, together with an optimization technique to improve the efficiency of the algorithm. We complete the mathematical description of all the constraint types presented in the literature. We develop exploration and refinement methods for the communication system, which allowed us to apply the timing-verification algorithms at different TLM levels. Since several definitions of TLM exist, within this research we define a specification and simulation methodology for hardware/software systems based on the TLM paradigm, in which several modelling concepts can be considered separately. Based on the use of modern software engineering technologies such as XML, XSLT, XSD, object-oriented programming and several others provided by the .Net environment, the proposed methodology presents an approach that makes it possible to reuse intermediate models in order to cope with the time-to-market constraint. It provides a general approach to system modelling that separates different design aspects, such as the models of computation used to describe the system at multiple levels of abstraction. As a result, in the system model we can clearly identify the system functionality without the details related to development platforms, which leads to improved "portability" of the application model.
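The timing-verification step operates on constraints between transaction events. As a hedged illustration of only the linearized (max-type) subproblem mentioned above, and not the thesis' full min/max algorithm, difference constraints of the form t_j - t_i <= c can be checked for consistency with Bellman-Ford negative-cycle detection:

```python
# Hedged sketch: feasibility check for *linear* (max-type) timing constraints
# between transaction events, i.e. t[j] - t[i] <= c, using Bellman-Ford
# negative-cycle detection on the constraint graph. This only illustrates the
# linearized subproblem, not the full min/max verification algorithm.
def timing_constraints_feasible(num_events, constraints):
    """constraints: list of (i, j, c) meaning t[j] - t[i] <= c."""
    dist = [0.0] * num_events          # virtual source gives every event distance 0
    for _ in range(num_events):
        updated = False
        for i, j, c in constraints:    # relax every constraint edge
            if dist[i] + c < dist[j]:
                dist[j] = dist[i] + c
                updated = True
        if not updated:
            return True                # converged: a consistent schedule exists
    return False                       # negative cycle: constraints contradict

# Example: grant at most 5 time units after the request, but also at least 8
# after it -- contradictory, so the check reports infeasibility.
print("feasible:", timing_constraints_feasible(2, [(0, 1, 5), (1, 0, -8)]))
```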

Relevance:

30.00%

Publisher:

Abstract:

My thesis consists of three chapters related to the estimation of state-space and stochastic volatility models. In the first chapter, we develop a computationally efficient state-smoothing procedure for linear Gaussian state-space models. We show how to exploit the particular structure of state-space models in order to draw the latent states efficiently. We analyse the computational efficiency of methods based on the Kalman filter, the Cholesky factor algorithm, and our new method, using operation counts and computational experiments. We show that for many important cases our method is more efficient. The gains are particularly large when the dimension of the observed variables is large or when repeated draws of the states are needed for the same parameter values. As an application, we consider a multivariate Poisson model with time-varying intensities, used to analyse count data on financial market transactions. In the second chapter, we propose a new technique for analysing multivariate stochastic volatility models. The proposed method is based on efficiently drawing the volatility from its conditional density given the parameters and the data. Our methodology applies to models with several types of cross-sectional dependence. We can model time-varying conditional correlation matrices by incorporating factors into the returns equation, where the factors are independent stochastic volatility processes. We can incorporate copulas to allow conditional dependence of the returns given the volatility, permitting different Student-t marginals with asset-specific degrees of freedom to capture the heterogeneity of returns. We draw the volatility as a block in the time dimension and one series at a time in the cross-sectional dimension. We apply the method introduced by McCausland (2012) to obtain a good approximation of the conditional posterior distribution of the volatility of one return given the volatilities of the other returns, the parameters and the dynamic correlations. The model is evaluated using real data for ten exchange rates. We report results for univariate stochastic volatility models and two multivariate models. In the third chapter, we assess the information contributed by realized volatility measures to the estimation and forecasting of volatility when prices are measured with and without error. We use stochastic volatility models and take the point of view of an investor for whom volatility is an unknown latent variable and realized volatility is a sample quantity that contains information about it. We employ Bayesian Markov chain Monte Carlo methods to estimate the models, which allow the computation not only of posterior densities of volatility but also of predictive densities of future volatility. We compare volatility forecasts, and the hit rates of forecasts, that do and do not use the information contained in realized volatility.
This approach differs from those in the existing empirical literature, which are most often limited to documenting the ability of realized volatility to forecast itself. We present empirical applications using daily returns of indices and exchange rates. The competing models are applied to the second half of 2008, a notable period in the recent financial crisis.
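For reference, the standard baseline the first chapter compares against is Kalman-filter-based state smoothing. A minimal sketch for a univariate local-level (linear Gaussian) model is below; this is the conventional filter plus RTS smoother, not the thesis' Cholesky-factor-based sampler, and the data and variances are illustrative:

```python
# Hedged sketch: Kalman filtering and RTS smoothing for a univariate
# local-level model, i.e. the conventional baseline, not the new method.
import numpy as np

def kalman_smooth_local_level(y, q=0.1, r=1.0, x0=0.0, p0=1e3):
    n = len(y)
    x_pred, p_pred = np.zeros(n), np.zeros(n)
    x_filt, p_filt = np.zeros(n), np.zeros(n)

    # Forward pass: Kalman filter.
    x_prev, p_prev = x0, p0
    for t in range(n):
        x_pred[t], p_pred[t] = x_prev, p_prev + q        # predict
        k = p_pred[t] / (p_pred[t] + r)                  # Kalman gain
        x_filt[t] = x_pred[t] + k * (y[t] - x_pred[t])   # update
        p_filt[t] = (1.0 - k) * p_pred[t]
        x_prev, p_prev = x_filt[t], p_filt[t]

    # Backward pass: Rauch-Tung-Striebel smoother.
    x_smooth, p_smooth = x_filt.copy(), p_filt.copy()
    for t in range(n - 2, -1, -1):
        c = p_filt[t] / p_pred[t + 1]
        x_smooth[t] = x_filt[t] + c * (x_smooth[t + 1] - x_pred[t + 1])
        p_smooth[t] = p_filt[t] + c * c * (p_smooth[t + 1] - p_pred[t + 1])
    return x_smooth, p_smooth

rng = np.random.default_rng(1)
true_state = np.cumsum(rng.normal(scale=0.3, size=200))
observations = true_state + rng.normal(scale=1.0, size=200)
state, variance = kalman_smooth_local_level(observations, q=0.09, r=1.0)
print("smoothed mean of last state:", round(state[-1], 3))
```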

Relevance:

30.00%

Publisher:

Abstract:

Pollution of water with pesticides has become a threat to man, materials and the environment. Pesticides released to the environment reach water bodies through run-off. Industrial wastewater from pesticide manufacturing industries contains pesticides at higher concentrations and is hence a major source of water pollution. Pesticides create many health and environmental hazards, including diseases such as cancer, liver and kidney disorders, reproductive disorders, foetal death and birth defects. Conventional wastewater treatment plants based on biological treatment are not efficient enough to remove these compounds to the desired level. Most pesticides are phyto-toxic, i.e. they kill the microorganisms responsible for degradation, and are recalcitrant in nature. Advanced oxidation processes (AOPs) are a class of oxidation techniques in which hydroxyl radicals are employed for the oxidation of pollutants. AOPs have the ability to totally mineralise organic pollutants to CO2 and water. Different methods are employed for the generation of hydroxyl radicals in AOP systems. Acetamiprid is a neonicotinoid insecticide widely used to control sucking-type insects on crops such as leafy vegetables, citrus fruits, pome fruits, grapes, cotton and ornamental flowers. It is now recommended as a substitute for organophosphorous pesticides. Since its use is increasing, its presence is increasingly found in the environment. It has high water solubility and is not easily biodegradable. It has the potential to pollute surface and ground waters. Here, the use of AOPs for the removal of acetamiprid from wastewater has been investigated. Five methods were selected for the study based on a literature survey and preliminary experiments: the Fenton process, UV treatment, the UV/H2O2 process, photo-Fenton and photocatalysis using TiO2. Undoped TiO2 and TiO2 doped with Cu and Fe were prepared by the sol-gel method. Characterisation of the prepared catalysts was done by X-ray diffraction, scanning electron microscopy, differential thermal analysis and thermogravimetric analysis. The influence of the major operating parameters on the removal of acetamiprid has been investigated. All experiments were designed using the central composite design (CCD) of response surface methodology (RSM). Model equations were developed for Fenton, UV/H2O2, photo-Fenton and photocatalysis for predicting acetamiprid removal and total organic carbon (TOC) removal under different operating conditions. The quality of the models was analysed by statistical methods, and experimental validations were carried out to confirm it. Optimum conditions obtained by experiment were verified against those obtained using the response optimiser. The Fenton process is the simplest and oldest AOP, in which hydrogen peroxide and iron are employed for the generation of hydroxyl radicals. The influence of H2O2 and Fe2+ on acetamiprid removal and TOC removal by the Fenton process was investigated, and it was found that removal increases with increasing H2O2 and Fe2+ concentration. At an initial acetamiprid concentration of 50 mg/L, 200 mg/L H2O2 and 20 mg/L Fe2+ at pH 3 were found to be optimum for acetamiprid removal. For UV treatment, the effect of pH was studied and it was found that pH has little effect on the removal rate. Addition of H2O2 to the UV process increased the removal rate because of hydroxyl radical formation due to photolysis of H2O2. An H2O2 concentration of 110 mg/L at pH 6 was found to be optimum for acetamiprid removal.

With photo-Fenton, a drastic reduction in treatment time was observed, with a ten-fold reduction in the amount of reagents required. An H2O2 concentration of 20 mg/L and an Fe2+ concentration of 2 mg/L were found to be optimum at pH 3. With TiO2 photocatalysis, an improvement in the removal rate was noticed compared to UV treatment. The effect of Cu and Fe doping on the photocatalytic activity under UV light was studied, and it was observed that Cu doping enhanced the removal rate slightly while Fe doping decreased it. Maximum acetamiprid removal was observed for an optimum catalyst loading of 1000 mg/L and a Cu concentration of 1 wt%. It was noticed that the mineralisation efficiency of the processes is low compared to the acetamiprid removal efficiency. This may be due to the presence of stable intermediate compounds formed during degradation. Kinetic studies were conducted for all the treatment processes, and it was found that all processes follow pseudo-first-order kinetics. Kinetic constants were determined from the experimental data for all the processes, and half-lives were calculated. The rate of reaction was in the order photo-Fenton > UV/H2O2 > Fenton > TiO2 photocatalysis > UV. Operating costs were calculated for the processes, and it was found that photo-Fenton removes acetamiprid at the lowest operating cost and in the shortest time. A kinetic model was developed for the photo-Fenton process using elementary reaction data and mass balance equations for the species involved in the process. The variation of acetamiprid concentration with time for different H2O2 and Fe2+ concentrations at pH 3 can be predicted using this model. The model was validated by comparing the simulated concentration profiles with those obtained from experiments. This study established the viability of the selected AOPs for the removal of acetamiprid from wastewater. Of the studied AOPs, photo-Fenton gives the highest removal efficiency with the lowest operating cost within the shortest time.
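Since all processes were found to follow pseudo-first-order kinetics, the rate constant and half-life can be recovered from a linear fit of ln(C0/C) against time. A minimal sketch follows, using invented concentration-time points rather than the measured acetamiprid data:

```python
# Hedged sketch: pseudo-first-order fit, ln(C0/C) = k*t, giving the rate
# constant and half-life described above. Concentration-time points are invented.
import numpy as np

t = np.array([0.0, 5.0, 10.0, 20.0, 30.0, 45.0, 60.0])   # time, minutes
c = np.array([50.0, 38.0, 29.5, 17.8, 10.6, 5.1, 2.4])   # acetamiprid, mg/L

y = np.log(c[0] / c)
k, intercept = np.polyfit(t, y, 1)        # slope = pseudo-first-order k (1/min)
half_life = np.log(2.0) / k

print(f"k = {k:.3f} 1/min, t1/2 = {half_life:.1f} min")
```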

Relevance:

30.00%

Publisher:

Abstract:

The basic premise of transaction-cost theory is that the decision to outsource, rather than to undertake work in-house, is determined by the relative costs incurred in each of these forms of economic organization. In construction the "make or buy" decision invariably leads to a contract. Reducing the costs of entering into a contractual relationship (transaction costs) raises the value of production and is therefore desirable. Commonly applied methods of contractor selection may not minimise the costs of contracting. Research evidence suggests that although competitive tendering typically results in the lowest bidder winning the contract, this may not represent the lowest project cost after completion. Multi-parameter and quantitative models for contractor selection have been developed to identify the best (or least risky) among bidders. A major area in which research is still needed is investigating the impact of different methods of contractor selection on the costs of entering into a contract and on the decision to outsource.
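As a hedged illustration of the multi-parameter selection models referred to above, a simple weighted score across several criteria (rather than lowest bid alone) might look as follows; the criteria, weights and bidder figures are invented examples, not a model from the literature reviewed:

```python
# Hedged illustration: a simple weighted multi-parameter contractor-selection
# score. Criteria, weights and bidder figures are invented examples.
bidders = {
    "Bidder A": {"price_score": 0.95, "track_record": 0.60, "financial_standing": 0.70},
    "Bidder B": {"price_score": 0.80, "track_record": 0.90, "financial_standing": 0.85},
}
weights = {"price_score": 0.5, "track_record": 0.3, "financial_standing": 0.2}

# All criteria are assumed pre-normalised to [0, 1], higher is better
# (price_score is a normalised value-for-money rating, not the raw tender sum).
scores = {
    name: sum(weights[c] * values[c] for c in weights)
    for name, values in bidders.items()
}
best = max(scores, key=scores.get)
print(scores, "-> preferred:", best)
```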

Relevance:

30.00%

Publisher:

Abstract:

This article is the second part of a review of the historical evolution of mathematical models applied in the development of building technology. The first part described the current state of the art and contrasted various models with regard to their applications to conventional buildings and intelligent buildings. It concluded that the mathematical techniques adopted in neural networks, expert systems, fuzzy logic and genetic models, which can be used to address model uncertainty, are well suited for modelling intelligent buildings. Despite this progress, the likely future development of intelligent buildings based on current trends exposes some potential limitations of these models. This paper attempts to uncover the fundamental limitations inherent in these models and provides some insights into future modelling directions, with special focus on the techniques of semiotics and chaos. Finally, by demonstrating an example of an intelligent building system together with the mathematical models developed for such a system, this review addresses the influence of mathematical models as a potential aid in developing intelligent buildings and perhaps even more advanced buildings for the future.

Relevance:

30.00%

Publisher:

Abstract:

In this work a method for building multiple-model structures is presented. A clustering algorithm that uses data from the system is employed to define the architecture of the multiple-model structure, including the size of the region covered by each model and the number of models. A heating, ventilation and air conditioning system is used as a testbed for the proposed method.
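A hedged sketch of the general idea: cluster operating data to define local regions, then fit one simple local model per region. K-means with a fixed number of clusters is used here only as a placeholder, since the paper's algorithm also determines the number of models and the size of each region from the data:

```python
# Hedged sketch of the multiple-model idea above: cluster operating data to
# define regions, then fit one local linear model per region. K-means and the
# synthetic data are placeholders, not the paper's specific algorithm or testbed.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(500, 2))      # e.g. HVAC inputs: valve opening, airflow
y = np.where(X[:, 0] < 0.5, 2.0 * X[:, 0], 1.0 + 0.5 * X[:, 1])  # piecewise behaviour

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
local_models = []
for label in range(kmeans.n_clusters):
    mask = kmeans.labels_ == label
    local_models.append(LinearRegression().fit(X[mask], y[mask]))

# Prediction: route each new operating point to its region's local model.
x_new = np.array([[0.2, 0.8]])
region = kmeans.predict(x_new)[0]
print("region:", region, "prediction:", local_models[region].predict(x_new)[0])
```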

Relevance:

30.00%

Publisher:

Abstract:

The ability of four operational weather forecast models [ECMWF, Action de Recherche Petite Echelle Grande Echelle model (ARPEGE), Regional Atmospheric Climate Model (RACMO), and Met Office] to generate a cloud at the right location and time (the cloud frequency of occurrence) is assessed in the present paper using a two-year time series of observations collected by profiling ground-based active remote sensors (cloud radar and lidar) located at three different sites in western Europe (Cabauw, Netherlands; Chilbolton, United Kingdom; and Palaiseau, France). Particular attention is given to potential biases that may arise from instrumentation differences (especially sensitivity) from one site to another and from intermittent sampling. In a second step the statistical properties of the cloud variables involved in the most advanced cloud schemes of numerical weather forecast models (ice water content and cloud fraction) are characterized and compared with their counterparts in the models. The two years of observations are first considered as a whole in order to evaluate the accuracy of the statistical representation of the cloud variables in each model. It is shown that all models tend to produce too many high-level clouds, with too-high cloud fraction and ice water content. The midlevel and low-level cloud occurrence is also generally overestimated, with too-low cloud fraction but a correct ice water content. The dataset is then divided into seasons to evaluate the potential of the models to generate different cloud situations in response to different large-scale forcings. Strong variations in cloud occurrence are found in the observations from one season to the same season the following year, as well as in the seasonal cycle. Overall, the model biases observed using the whole dataset are still found at the seasonal scale, but the models generally manage to reproduce the observed seasonal variations in cloud occurrence well. Overall, models do not generate the same cloud fraction distributions, and these distributions do not agree with the observations. Another general conclusion is that the use of continuous ground-based radar and lidar observations is definitely a powerful tool for evaluating model cloud schemes and for a responsive assessment of the benefit achieved by changing or tuning a model cloud scheme.
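The basic frequency-of-occurrence diagnostic reduces to counting, per height level, the fraction of time steps with cloud detected, computed identically for the observations and for each model. A minimal sketch with synthetic time-height masks (not the Cloudnet-style radar/lidar data of the study):

```python
# Hedged sketch: cloud frequency of occurrence as the per-level fraction of
# time steps with cloud detected, compared between observations and a model.
# The boolean masks below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(2)
n_time, n_height = 1000, 30
obs_cloud_mask = rng.random((n_time, n_height)) < 0.2      # e.g. radar/lidar detections
model_cloud_mask = rng.random((n_time, n_height)) < 0.3    # e.g. model cloud fraction > threshold

obs_occurrence = obs_cloud_mask.mean(axis=0)       # per-level frequency, observations
model_occurrence = model_cloud_mask.mean(axis=0)   # per-level frequency, model
bias = model_occurrence - obs_occurrence            # positive = too many clouds

print("largest overestimate at level:", int(bias.argmax()), f"({bias.max():+.2f})")
```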

Relevance:

30.00%

Publisher:

Abstract:

Using a water balance modelling framework, this paper analyses the effects of urban design on the water balance, with a focus on evapotranspiration and storm water. First, two quite different urban water balance models are compared: Aquacycle which has been calibrated for a suburban catchment in Canberra, Australia, and the single-source urban evapotranspiration-interception scheme (SUES), an energy-based approach with a biophysically advanced representation of interception and evapotranspiration. A fair agreement between the two modelled estimates of evapotranspiration was significantly improved by allowing the vegetation cover (leaf area index, LAI) to vary seasonally, demonstrating the potential of SUES to quantify the links between water sensitive urban design and microclimates and the advantage of comparing the two modelling approaches. The comparison also revealed where improvements to SUES are needed, chiefly through improved estimates of vegetation cover dynamics as input to SUES, and more rigorous parameterization of the surface resistance equations using local-scale suburban flux measurements. Second, Aquacycle is used to identify the impact of an array of water sensitive urban design features on the water balance terms. This analysis confirms the potential to passively control urban microclimate by suburban design features that maximize evapotranspiration, such as vegetated roofs. The subsequent effects on daily maximum air temperatures are estimated using an atmospheric boundary layer budget. Potential energy savings of about 2% in summer cooling are estimated from this analysis. This is a clear ‘return on investment’ of using water to maintain urban greenspace, whether as parks distributed throughout an urban area or individual gardens or vegetated roofs.
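As a hedged sketch of the kind of accounting an urban water balance model such as Aquacycle performs, the annual balance can be written as P + I = ET + Rs + Rw + dS and checked by residual; all figures below are invented placeholders, not the Canberra catchment data:

```python
# Hedged sketch of a simple annual urban water balance, P + I = ET + Rs + Rw + dS.
# All figures (mm/year over the catchment) are invented placeholders.
precipitation = 620.0       # P
imported_water = 240.0      # I, mains supply
evapotranspiration = 430.0  # ET
stormwater_runoff = 210.0   # Rs
wastewater = 205.0          # Rw
storage_change = precipitation + imported_water - (
    evapotranspiration + stormwater_runoff + wastewater)   # dS as the residual

print(f"Storage change (residual): {storage_change:+.0f} mm/year")
print(f"ET share of total input: {evapotranspiration / (precipitation + imported_water):.0%}")
```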

Relevance:

30.00%

Publisher:

Abstract:

The UPSCALE (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) project, using PRACE (Partnership for Advanced Computing in Europe) resources, constructed and ran an ensemble of atmosphere-only global climate model simulations, using the Met Office Unified Model GA3 configuration. Each simulation is 27 years in length for both the present climate and an end-of-century future climate, at resolutions of N96 (130 km), N216 (60 km) and N512 (25 km), in order to study the impact of model resolution on high-impact climate features such as tropical cyclones. Increased model resolution is found to improve the simulated frequency of explicitly tracked tropical cyclones, and correlations of interannual variability in the North Atlantic and North West Pacific lie between 0.6 and 0.75. Improvements in the deficit of genesis in the eastern North Atlantic as resolution increases appear to be related to the representation of African Easterly Waves and the African Easterly Jet. However, the intensity of the modelled tropical cyclones as measured by 10 m wind speed remains weak, and there is no indication of convergence over this range of resolutions. In the future climate ensemble, there is a reduction of 50% in the frequency of Southern Hemisphere tropical cyclones, while in the Northern Hemisphere there is a reduction in the North Atlantic and a shift in the Pacific, with peak intensities becoming more common in the Central Pacific. There is also a change in tropical cyclone intensities, with the future climate having fewer weak storms and proportionally more strong storms.