906 results for Timed and Probabilistic Automata
Abstract:
Simulation models are widely employed to make probability forecasts of future conditions on seasonal to annual lead times. Added value in such forecasts is reflected in the information they add, either to purely empirical statistical models or to simpler simulation models. An evaluation of seasonal probability forecasts from the Development of a European Multimodel Ensemble system for seasonal to inTERannual prediction (DEMETER) and ENSEMBLES multi-model ensemble experiments is presented. Two particular regions are considered: Nino3.4 in the Pacific and the Main Development Region in the Atlantic; these regions were chosen before any spatial distribution of skill was examined. The ENSEMBLES models are found to have skill against the climatological distribution on seasonal time-scales. For models in ENSEMBLES that have a clearly defined predecessor model in DEMETER, the improvement from DEMETER to ENSEMBLES is discussed. Due to the long lead times of the forecasts and the evolution of observation technology, the forecast-outcome archive for seasonal forecast evaluation is small; arguably, evaluation data for seasonal forecasting will always be precious. Issues of information contamination from in-sample evaluation are discussed and impacts (both positive and negative) of variations in cross-validation protocol are demonstrated. Other difficulties due to the small forecast-outcome archive are identified. The claim that the multi-model ensemble provides a ‘better’ probability forecast than the best single model is examined and challenged. Significant forecast information beyond the climatological distribution is also demonstrated in a persistence probability forecast. The ENSEMBLES probability forecasts add significantly more information to empirical probability forecasts on seasonal time-scales than on decadal scales. Current operational forecasts might be enhanced by melding information from both simulation models and empirical models. Simulation models based on physical principles are sometimes expected, in principle, to outperform empirical models; direct comparison of their forecast skill provides information on progress toward that goal.
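One of the pitfalls the abstract highlights, information contamination from in-sample evaluation, can be illustrated with a leave-one-out protocol for the climatological benchmark. The sketch below is a minimal illustration only, assuming a Gaussian climatology fitted to a short, hypothetical archive of outcomes; it is not the DEMETER/ENSEMBLES evaluation itself.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def climatology_ignorance(outcomes, leave_one_out=True):
    """Mean Ignorance (-log2 density) of a Gaussian climatological forecast.

    With leave_one_out=True, the target year is excluded when fitting the
    climatology, avoiding the in-sample contamination discussed above.
    """
    outcomes = np.asarray(outcomes, dtype=float)
    scores = []
    for i, y in enumerate(outcomes):
        train = np.delete(outcomes, i) if leave_one_out else outcomes
        mu, sigma = train.mean(), train.std(ddof=1)
        scores.append(-np.log2(gaussian_pdf(y, mu, sigma)))
    return float(np.mean(scores))

rng = np.random.default_rng(0)
archive = rng.normal(26.5, 0.8, size=20)                    # hypothetical Nino3.4 outcomes
print(climatology_ignorance(archive, leave_one_out=True))   # honest out-of-sample benchmark
print(climatology_ignorance(archive, leave_one_out=False))  # optimistically contaminated
```

With only a handful of outcomes, the gap between the two numbers gives a rough sense of how much a small forecast-outcome archive can flatter in-sample benchmarks.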
Abstract:
Upscaling ecological information to larger scales in space and downscaling remote sensing observations or model simulations to finer scales remain grand challenges in Earth system science. Downscaling often involves inferring subgrid information from coarse-scale data, and such ill-posed problems are classically addressed using regularization. Here, we apply two-dimensional Tikhonov Regularization (2DTR) to simulate subgrid surface patterns for ecological applications. Specifically, we test the ability of 2DTR to simulate the spatial statistics of high-resolution (4 m) remote sensing observations of the normalized difference vegetation index (NDVI) in a tundra landscape. We find that the 2DTR approach as applied here can capture the major mode of spatial variability of the high-resolution information, but not multiple modes of spatial variability, and that the Lagrange multiplier (γ) used to impose the condition of smoothness across space is related to the range of the experimental semivariogram. We used observed and 2DTR-simulated maps of NDVI to estimate landscape-level leaf area index (LAI) and gross primary productivity (GPP). NDVI maps simulated using a γ value that approximates the range of observed NDVI result in a landscape-level GPP estimate that differs by ca 2% from that created using observed NDVI. Following findings that GPP per unit LAI is lower near vegetation patch edges, we simulated vegetation patch edges using multiple approaches and found that simulated GPP declined by up to 12% as a result. 2DTR can generate random landscapes rapidly and can be applied to disaggregate ecological information and to compare spatial observations against simulated landscapes.
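As a rough sketch of the kind of smoothness-penalised inversion described here (not the authors' 2DTR implementation, and with made-up NDVI values), the code below fills a fine grid so that its block averages reproduce a coarse field while a Laplacian penalty weighted by gamma enforces spatial smoothness.

```python
import numpy as np

def downscale_tikhonov(coarse, factor, gamma):
    """Solve min ||A x - b||^2 + gamma ||L x||^2 for a fine-scale field x,
    where A block-averages x onto the coarse grid b and L is a 2-D graph Laplacian."""
    nyc, nxc = coarse.shape
    ny, nx = nyc * factor, nxc * factor
    n = ny * nx

    # A: block-averaging operator (one row per coarse cell)
    A = np.zeros((nyc * nxc, n))
    for i in range(nyc):
        for j in range(nxc):
            for di in range(factor):
                for dj in range(factor):
                    A[i * nxc + j, (i * factor + di) * nx + (j * factor + dj)] = 1.0 / factor**2

    # L: graph Laplacian on the fine grid (smoothness penalty)
    L = np.zeros((n, n))
    for i in range(ny):
        for j in range(nx):
            k = i * nx + j
            for ii, jj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ii < ny and 0 <= jj < nx:
                    L[k, ii * nx + jj] = -1.0
                    L[k, k] += 1.0

    b = coarse.ravel()
    x = np.linalg.solve(A.T @ A + gamma * L.T @ L, A.T @ b)
    return x.reshape(ny, nx)

coarse_ndvi = np.array([[0.30, 0.55], [0.45, 0.70]])   # hypothetical coarse NDVI
fine = downscale_tikhonov(coarse_ndvi, factor=4, gamma=0.1)
print(fine.shape, round(float(fine.mean()), 3))
```

Larger gamma yields smoother fine-scale fields; the abstract's finding relates the appropriate gamma to the range of the experimental semivariogram.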
Abstract:
The evaluation of forecast performance plays a central role both in the interpretation and use of forecast systems and in their development. Different evaluation measures (scores) are available, often quantifying different characteristics of forecast performance. The properties of several proper scores for probabilistic forecast evaluation are contrasted and then used to interpret decadal probability hindcasts of global mean temperature. The Continuous Ranked Probability Score (CRPS), Proper Linear (PL) score, and I. J. Good’s logarithmic score (also referred to as Ignorance) are compared; although information from all three may be useful, the logarithmic score has an immediate interpretation and is sensitive to forecast busts. Neither CRPS nor PL is local; this is shown to produce counterintuitive evaluations by CRPS. Benchmark forecasts from empirical models like Dynamic Climatology place the scores in context. Comparing scores for forecast systems based on physical models (in this case HadCM3, from the CMIP5 decadal archive) against such benchmarks is more informative than comparing systems based on similar physical simulation models only with each other. It is shown that a forecast system based on HadCM3 outperforms Dynamic Climatology in decadal global mean temperature hindcasts; Dynamic Climatology previously outperformed a forecast system based upon HadGEM2, and reasons for these results are suggested. Forecasts of aggregate data (5-year means of global mean temperature) are, of course, narrower than forecasts of annual averages due to the suppression of variance; while the average “distance” between the forecasts and a target may be expected to decrease, little if any discernible improvement in probabilistic skill is achieved.
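For readers less familiar with the scores named here, the sketch below computes two of them for a single hindcast-outcome pair: the Ignorance (logarithmic) score of a Gaussian kernel-dressed ensemble and the standard empirical CRPS estimator. The ensemble values, kernel width and observation are hypothetical; lower is better for both scores.

```python
import numpy as np

def ignorance(ens, y, sigma):
    """Logarithmic (Ignorance) score, in bits, of a Gaussian kernel-dressed ensemble."""
    ens = np.asarray(ens, dtype=float)
    dens = np.mean(np.exp(-0.5 * ((y - ens) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi)))
    return -np.log2(dens)

def crps(ens, y):
    """Empirical CRPS estimator: E|X - y| - 0.5 E|X - X'| over ensemble members X."""
    ens = np.asarray(ens, dtype=float)
    term1 = np.mean(np.abs(ens - y))
    term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
    return term1 - term2

ens = np.array([14.2, 14.5, 14.3, 14.8, 14.4])   # hypothetical global mean temperature hindcast
obs = 14.6
print(ignorance(ens, obs, sigma=0.2), crps(ens, obs))
```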
Abstract:
Using lessons from idealised predictability experiments, we discuss some issues and perspectives on the design of operational seasonal to inter-annual Arctic sea-ice prediction systems. We first review the opportunities to use a hierarchy of different types of experiment to learn about the predictability of Arctic climate. We also examine key issues for ensemble system design, such as measuring skill, the role of ensemble size, and the generation of ensemble members. When assessing the potential skill of a set of prediction experiments, using more than one metric is essential, as different choices can significantly alter conclusions about the presence or lack of skill. We find that increasing both the number of hindcasts and the ensemble size is important for reliably assessing the correlation and expected error in forecasts. For other metrics, such as dispersion, increasing ensemble size is most important. Probabilistic measures of skill can also provide useful information about the reliability of forecasts. In addition, various methods for generating the different ensemble members are tested. The range of techniques can produce surprisingly different ensemble spread characteristics. The lessons learnt should help inform the design of future operational prediction systems.
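A minimal illustration of why conclusions depend on the metric: the sketch below computes the ensemble-mean correlation, the ensemble-mean RMSE, and a spread/error ratio (a simple dispersion diagnostic) from a hypothetical set of hindcasts; none of the numbers refer to a real prediction system.

```python
import numpy as np

def skill_metrics(hindcasts, obs):
    """hindcasts: (n_years, n_members) array; obs: (n_years,) array of verifying values."""
    mean = hindcasts.mean(axis=1)
    corr = np.corrcoef(mean, obs)[0, 1]                        # ensemble-mean correlation
    rmse = np.sqrt(np.mean((mean - obs) ** 2))                 # ensemble-mean error
    spread = np.sqrt(np.mean(hindcasts.var(axis=1, ddof=1)))   # average ensemble spread
    return corr, rmse, spread / rmse                           # ratio near 1: reasonable dispersion

rng = np.random.default_rng(1)
truth = rng.normal(0.0, 1.0, size=30)                          # e.g. detrended September ice-extent anomalies
members = truth[:, None] + rng.normal(0.0, 0.7, size=(30, 9))  # 9-member hindcasts with noise
print(skill_metrics(members, truth))
```

A spread/error ratio near one is often read as a sign of reasonable dispersion; the correlation alone would say nothing about it.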
Abstract:
The present study investigates the parsing of pre-nominal relative clauses (RCs) in children for the first time with a real-time methodology that reveals moment-to-moment processing patterns as the sentence unfolds. A self-paced listening experiment with Turkish-speaking children (aged 5–8) and adults showed that both groups display a sign of processing cost in both subject and object RCs at different points through the flow of the utterance when integrating the cues that are uninformative (i.e., ambiguous in function) and that are structurally and probabilistically unexpected. Both groups show a processing facilitation as soon as the morphosyntactic dependencies are completed, and both parse the unbounded dependencies rapidly using the morphosyntactic cues rather than waiting for the clause-final filler. These findings indicate that five-year-old children show similar patterns to adults in processing the morphosyntactic cues incrementally and in forming expectations about the rest of the utterance on the basis of the probabilistic model of their language.
Abstract:
An ability to quantify the reliability of probabilistic flood inundation predictions is a requirement not only for guiding model development but also for their successful application. Probabilistic flood inundation predictions are usually produced by choosing a method of weighting the model parameter space, but previous studies suggest that this choice leads to clear differences in inundation probabilities. This study aims to address the evaluation of the reliability of these probabilistic predictions. However, the lack of an adequate number of observations of flood inundation for a catchment limits the application of conventional methods of evaluating predictive reliability. Consequently, attempts have been made to assess the reliability of probabilistic predictions using multiple observations from a single flood event. Here, a LISFLOOD-FP hydraulic model of an extreme (>1 in 1000 years) flood event in Cockermouth, UK, is constructed and calibrated using multiple performance measures from both peak flood wrack mark data and aerial photography captured post-peak. These measures are used in weighting the parameter space to produce multiple probabilistic predictions for the event. Two methods of assessing the reliability of these probabilistic predictions using limited observations are utilized: an existing method assessing the binary pattern of flooding, and a method developed in this paper to assess predictions of water surface elevation. This study finds that the water surface elevation method has better diagnostic and discriminatory ability, but this result is likely to be sensitive to the unknown uncertainties in the upstream boundary condition.
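The water-surface-elevation method developed in the paper is not reproduced here. As a generic, hedged illustration of checking probabilistic reliability against a limited set of point observations, the sketch below asks how often observed elevations fall inside the weighted ensemble's central 90% predictive interval (all arrays are hypothetical).

```python
import numpy as np

def coverage(pred, weights, obs, level=0.90):
    """pred: (n_sims, n_points) simulated water surface elevations;
    weights: (n_sims,) likelihood weights; obs: (n_points,) observed elevations.
    Returns the fraction of observations inside the central `level` interval."""
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    lo_q, hi_q = (1 - level) / 2, 1 - (1 - level) / 2
    hits = 0
    for j, y in enumerate(obs):
        order = np.argsort(pred[:, j])
        cdf = np.cumsum(weights[order])
        idx_lo = min(np.searchsorted(cdf, lo_q), len(cdf) - 1)
        idx_hi = min(np.searchsorted(cdf, hi_q), len(cdf) - 1)
        hits += pred[order, j][idx_lo] <= y <= pred[order, j][idx_hi]
    return hits / len(obs)

rng = np.random.default_rng(2)
sims = rng.normal(10.0, 0.3, size=(500, 12))   # 500 behavioural runs, 12 wrack-mark points
w = rng.random(500)                            # placeholder likelihood weights
obs = rng.normal(10.0, 0.3, size=12)
print(coverage(sims, w, obs))                  # close to 0.9 suggests reliable intervals
```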
Abstract:
In 2013 the Warsaw International Mechanism (WIM) for loss and damage (L&D) associated with climate change impacts was established under the United Nations Framework Convention on Climate Change (UNFCCC). For scientists, L&D raises questions around the extent to which such impacts can be attributed to anthropogenic climate change, which may generate complex results and be controversial in the policy arena. This is particularly true in the case of probabilistic event attribution (PEA) science, a new and rapidly evolving field that assesses whether changes in the probabilities of extreme events are attributable to GHG emissions. If the potential applications of PEA are to be considered responsibly, dialogue between scientists and policy makers is fundamental. Two key questions are considered here through a literature review and key stakeholder interviews with representatives from the science and policy sectors underpinning L&D. These provided the opportunity for in-depth insights into stakeholders’ views on, firstly, how much is known and understood about PEA by those associated with the L&D debate and, secondly, how PEA might inform L&D and wider climate policy. Results show debate within the climate science community, and limited understanding among other stakeholders, around the sense in which extreme events can be attributed to climate change. However, stakeholders do identify and discuss potential uses for PEA in the WIM and wider policy, but it remains difficult to explore precise applications given the ambiguity surrounding L&D. This implies a need for stakeholders to develop greater understanding of alternative conceptions of L&D and the role of science, and also to identify how PEA can best be used to support policy, and to address associated challenges.
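For context only: a quantity commonly reported in probabilistic event attribution studies (not specific to this paper) is the fraction of attributable risk, which compares the probability of an event with and without anthropogenic forcing.

```latex
\mathrm{FAR} \;=\; 1 - \frac{P_{\mathrm{nat}}}{P_{\mathrm{ant}}},
\qquad\text{e.g.}\quad P_{\mathrm{nat}} = 0.01,\; P_{\mathrm{ant}} = 0.04
\;\Rightarrow\; \mathrm{FAR} = 0.75 .
```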
Abstract:
Floods are the most frequent of natural disasters, affecting millions of people across the globe every year. The anticipation and forecasting of floods at the global scale is crucial to preparing for severe events and providing early awareness where local flood models and warning services may not exist. As numerical weather prediction models continue to improve, operational centres are increasingly using the meteorological output from these to drive hydrological models, creating hydrometeorological systems capable of forecasting river flow and flood events at much longer lead times than has previously been possible. Furthermore, developments in, for example, modelling capabilities, data and resources in recent years have made it possible to produce global scale flood forecasting systems. In this paper, the current state of operational large scale flood forecasting is discussed, including probabilistic forecasting of floods using ensemble prediction systems. Six state-of-the-art operational large scale flood forecasting systems are reviewed, describing similarities and differences in their approaches to forecasting floods at the global and continental scale. Currently, operational systems have the capability to produce coarse-scale discharge forecasts in the medium-range and disseminate forecasts and, in some cases, early warning products, in real time across the globe, in support of national forecasting capabilities. With improvements in seasonal weather forecasting, future advances may include more seamless hydrological forecasting at the global scale, alongside a move towards multi-model forecasts and grand ensemble techniques, responding to the requirement of developing multi-hazard early warning systems for disaster risk reduction.
Abstract:
Probabilistic hydro-meteorological forecasts have over the last decades been used more frequently to communicate forecast uncertainty. This uncertainty is twofold, as it constitutes both an added value and a challenge for the forecaster and the user of the forecasts. Many authors have demonstrated the added (economic) value of probabilistic over deterministic forecasts across the water sector (e.g. flood protection, hydroelectric power management and navigation). However, the richness of the information is also a source of challenges for operational uses, due partially to the difficulty of transforming the probability of occurrence of an event into a binary decision. This paper presents the results of a risk-based decision-making game on the topic of flood protection mitigation, called “How much are you prepared to pay for a forecast?”. The game was played at several workshops in 2015, which were attended by operational forecasters and academics working in the field of hydrometeorology. The aim of this game was to better understand the role of probabilistic forecasts in decision-making processes and their perceived value by decision-makers. Based on the participants’ willingness-to-pay for a forecast, the results of the game show that the value (or the usefulness) of a forecast depends on several factors, including the way users perceive the quality of their forecasts and link it to the perception of their own performance as decision-makers.
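The binary-decision difficulty mentioned here is often framed with the classic cost-loss model; the sketch below shows that idealised rule (an assumption for illustration, not necessarily the game's actual payoff structure): protect whenever the forecast probability exceeds the cost/loss ratio.

```python
def should_protect(p_flood, cost, loss):
    """Classic cost-loss rule: taking protective action costs `cost`; an unprotected
    flood costs `loss`. Expected expense is minimised by acting when p_flood > cost / loss."""
    return p_flood > cost / loss

print(should_protect(p_flood=0.30, cost=1_000, loss=10_000))  # True: 0.30 > 0.10
print(should_protect(p_flood=0.05, cost=1_000, loss=10_000))  # False: 0.05 < 0.10
```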
Abstract:
Objective: To assess time trends in the contribution of processed foods to food purchases made by Brazilian households and to explore the potential impact on the overall quality of the diet. Design: Application of a new classification of foodstuffs based on extent and purpose of food processing to data collected by comparable probabilistic household budget surveys. The classification assigns foodstuffs to the following groups: unprocessed/minimally processed foods (Group 1); processed culinary ingredients (Group 2); or ultra-processed ready-to-eat or ready-to-heat food products (Group 3). Setting: Eleven metropolitan areas of Brazil. Subjects: Households; n 13 611 in 1987-8, n 16 014 in 1995-6 and n 13 848 in 2002-3. Results: Over the last three decades, the household consumption of Group 1 and Group 2 foods has been steadily replaced by consumption of Group 3 ultra-processed food products, both overall and in lower- and upper-income groups. In the 2002-3 survey, Group 3 items represented more than one-quarter of total energy (more than one-third for higher-income households). The overall nutrient profile of Group 3 items, compared with that of Group 1 and Group 2 items, revealed more added sugar, more saturated fat, more sodium, less fibre and much higher energy density. Conclusions: The high energy density and the unfavourable nutrition profiling of Group 3 food products, and also their potential harmful effects on eating and drinking behaviours, indicate that governments and health authorities should use all possible methods, including legislation and statutory regulation, to halt and reverse the replacement of minimally processed foods and processed culinary ingredients by ultra-processed food products.
Abstract:
The Prospective and Retrospective Memory Questionnaire (PRMQ) has been shown to have acceptable reliability and factorial, predictive, and concurrent validity. However, the PRMQ has never been administered to a probability sample survey representative of all ages in adulthood, nor have previous studies controlled for factors that are known to influence metamemory, such as affective status. Here, the PRMQ was applied in a survey adopting a probabilistic three-stage cluster sample representative of the population of Sao Paulo, Brazil, according to gender, age (20-80 years), and economic status (n=1042). After excluding participants who had conditions that impair memory (depression, anxiety, used psychotropics, and/or had neurological/psychiatric disorders), in the remaining 664 individuals we (a) used confirmatory factor analyses to test competing models of the latent structure of the PRMQ, and (b) studied effects of gender, age, schooling, and economic status on prospective and retrospective memory complaints. The model with the best fit confirmed the same tripartite structure (general memory factor and two orthogonal prospective and retrospective memory factors) previously reported. Women complained more of general memory slips, especially those in the first 5 years after menopause, and there were more complaints of prospective than retrospective memory, except in participants with lower family income.
Abstract:
Aims: In our previous work, we reported that the insulin potentiating effect on melatonin synthesis is regulated by a post-transcriptional mechanism. However, the major proteins of the insulin signaling pathway (ISP) and the possible pathway component recruited in the potentiating effect of insulin had not been characterized. A second question raised was whether windows of sensitivity to insulin exist in the pineal gland, given the rhythmic pattern of insulin secretion. Main methods: Melatonin content from norepinephrine (NE)-synchronized pineal gland cultures was quantified by high performance liquid chromatography with electrochemical detection, and arylalkylamine-N-acetyltransferase (AANAT) activity was assayed by radiometry. Immunoblotting and immunoprecipitation techniques were performed to establish the expression of ISP proteins and the formation of the 14-3-3:AANAT complex, respectively. Key findings: The temporal insulin susceptibility protocol revealed two periods of insulin potentiating effect, one at the beginning and another at the end of the in vitro induced "night". In some timed insulin stimulations (TSs), insulin also promoted a reduction in melatonin synthesis, showing its dual action in cultured pineal glands. The major ISP components, such as IR beta, IGF-1R, IRS-1, IRS-2 and PI3K (p85), as well as tyrosine phosphorylation of pp85, were characterized within pineal glands. Insulin is not involved in the formation of the 14-3-3:AANAT complex. The blockage of PI3K by LY 294002 reduced melatonin synthesis and AANAT activity. Significance: The present study demonstrated windows of differential insulin sensitivity, a functional ISP and the PI3K-dependent insulin potentiating effect on NE-mediated melatonin synthesis, supporting the hypothesis of a crosstalk between noradrenergic and insulin pathways in the rat pineal gland.
Abstract:
A novel cryptography method based on the Lorenz attractor chaotic system is presented. The proposed algorithm is secure and fast, making it practical for general use. We introduce the chaotic operation mode, which provides an interaction among the password, the message and a chaotic system. It ensures that the algorithm yields a secure codification, even if the nature of the chaotic system is known. The algorithm has been implemented in two versions: one sequential and slow, and the other parallel and fast. Our algorithm assures the integrity of the ciphertext (we know if it has been altered, which is not assured by traditional algorithms) and consequently its authenticity. Numerical experiments are presented and discussed, and show the behavior of the method in terms of security and performance. The fast version of the algorithm has a performance comparable to that of AES, a popular encryption standard in widespread commercial use, but it is more secure, which makes it immediately suitable for general purpose cryptography applications. An internet page has been set up, which enables readers to test the algorithm and also to try to break the cipher.
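The paper's algorithm and its chaotic operation mode are not reproduced here. Purely to illustrate the general idea of deriving a keystream from the Lorenz system, the toy sketch below integrates the Lorenz equations from a password-derived state and XORs the message with quantised trajectory bytes; it is insecure, omits the integrity/authenticity mechanism described above, and should not be used for real encryption.

```python
import hashlib

def lorenz_keystream(password, n, dt=0.01, burn_in=1000):
    """Toy keystream: Euler-integrate the Lorenz system (sigma=10, rho=28, beta=8/3)
    from a password-derived state and quantise the x-coordinate into bytes."""
    seed = hashlib.sha256(password.encode()).digest()
    x, y, z = (1.0 + b / 256.0 for b in seed[:3])       # crude state derivation from the hash
    out = bytearray()
    for i in range(burn_in + n):
        dx = 10.0 * (y - x)
        dy = x * (28.0 - z) - y
        dz = x * y - (8.0 / 3.0) * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if i >= burn_in:                                # discard the transient
            out.append(int(abs(x) * 1e6) % 256)
    return bytes(out)

def toy_xor_cipher(message, password):
    ks = lorenz_keystream(password, len(message))
    return bytes(m ^ k for m, k in zip(message, ks))

ciphertext = toy_xor_cipher(b"attack at dawn", "hunter2")
print(toy_xor_cipher(ciphertext, "hunter2"))            # XOR is symmetric: plaintext recovered
```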
Abstract:
The main idea of this research is to solve the problem of inventory management for the paper industry SPM PVT Limited. The aim was to find a methodology by which the inventory of raw material could be kept at a minimum level by means of a buffer stock level. The main objective then lies in finding the minimum level of buffer stock according to the daily consumption of raw material, finding the Economic Order Quantity (EOQ) and the reorder point, and determining how many orders will be placed in a year to control shortages of raw material. In this project, we discuss a continuous review model (deterministic EOQ model) that incorporates probabilistic demand directly in the formulation. From the formulation, we obtain the reorder point and the order-up-to level. The problem was tackled mathematically, and simulation modeling was used where a mathematically tractable solution was not possible. The simulation modeling was done with AweSim software for developing the simulation network. This simulation network has the ability to predict the buffer stock level based on variable consumption of raw material and lead time. The data for this simulation network were collected from industrial engineering personnel and departmental studies of the concerned factory. In the end, we find the optimum order quantity, reorder point and order days.
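The quantities the abstract refers to follow standard inventory formulas; the sketch below computes the EOQ, the reorder point and a safety (buffer) stock for approximately normal daily demand. All input numbers are placeholders, not the mill's data.

```python
import math

def eoq(annual_demand, order_cost, holding_cost):
    """Economic Order Quantity: sqrt(2 * D * S / H)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

def reorder_point(daily_demand, lead_time_days, sd_daily_demand, service_z=1.645):
    """Reorder point = expected lead-time demand + safety (buffer) stock,
    for roughly a 95% service level when daily demand is approximately normal."""
    safety = service_z * sd_daily_demand * math.sqrt(lead_time_days)
    return daily_demand * lead_time_days + safety, safety

D, S, H = 12_000, 500.0, 2.5          # tonnes/year, cost per order, holding cost per tonne-year
q = eoq(D, S, H)
rop, buffer = reorder_point(daily_demand=D / 300, lead_time_days=7, sd_daily_demand=8.0)
print(round(q), round(rop), round(buffer), round(D / q))   # order size, ROP, buffer stock, orders/year
```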
Abstract:
This paper is concerned with cost efficiency in achieving the Swedish national air quality objectives under uncertainty. To realize an ecologically sustainable society, the parliament has approved a set of interim and long-term pollution reduction targets. However, there are considerable quantification uncertainties about the effectiveness of the proposed pollution reduction measures. In this paper, we develop a multivariate stochastic control framework to deal with the cost efficiency problem with multiple pollutants. Based on cost and technological data collected by several national authorities, we explore the implications of alternative probabilistic constraints. It is found that a composite probabilistic constraint induces considerably lower abatement cost than separable probabilistic restrictions. This trend is reinforced by the presence of positive correlations between reductions in the multiple pollutants.
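The distinction between separable and composite probabilistic constraints can be written as a generic chance-constrained program (the notation below is illustrative, not the paper's): minimise abatement cost subject to meeting each reduction target T_i with reliability alpha, either target by target or jointly.

```latex
\min_{x \ge 0} \; c^{\top} x
\quad\text{s.t.}\quad
\underbrace{\Pr\big[r_i(x) \ge T_i\big] \ge \alpha \;\; (i = 1,\dots,K)}_{\text{separable constraints}}
\quad\text{or}\quad
\underbrace{\Pr\big[r_1(x) \ge T_1,\,\dots,\,r_K(x) \ge T_K\big] \ge \alpha}_{\text{composite constraint}},
```

where x is the vector of abatement measures, c their unit costs, and r_i(x) the uncertain achieved reduction of pollutant i.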