946 results for Worst-case expected risk


Relevance:

100.00%

Publisher:

Abstract:

Fieldbus networks aim at the interconnection of field devices such as sensors, actuators and small controllers. They are therefore an effective technology upon which Distributed Computer Controlled Systems (DCCS) can be built. DCCS impose strict timeliness requirements on the communication network. In essence, by timeliness requirements we mean that traffic must be sent and received within a bounded interval, otherwise a timing fault is said to occur. P-NET is a multi-master fieldbus standard based on a virtual token-passing scheme. In P-NET each master is allowed to transmit only one message per token visit, which means that the worst-case communication response time could be derived by assuming that the token is fully utilised by all stations. However, such an analysis can be shown to be quite pessimistic. In this paper we propose a more sophisticated P-NET timing analysis model, which considers the actual token utilisation by the different masters. The major contribution of this model is to provide a less pessimistic, and thus more accurate, analysis for the evaluation of the worst-case communication response time in P-NET fieldbus networks.
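For orientation, a minimal sketch of the naive (pessimistic) bound that the paper refines, assuming every master fully utilises its single message slot on each token visit; the function and parameter names below are illustrative, not P-NET terminology.

    # Illustrative sketch of the naive (pessimistic) P-NET worst-case bound:
    # every master is assumed to use its single message slot on every token visit.
    # Names (token_pass_time, msg_times) are hypothetical, not from the paper.

    def naive_worst_case_response(msg_times, token_pass_time):
        """One full token rotation, upper-bounding the time a queued message
        may wait for its master's turn when the token is fully utilised."""
        return sum(t + token_pass_time for t in msg_times)

    # Example: 4 masters, longest message duration per master (ms), token pass of 0.1 ms.
    print(naive_worst_case_response([1.2, 0.8, 1.0, 0.5], 0.1))  # 3.9 ms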

Relevance:

100.00%

Publisher:

Abstract:

"Many-core” systems based on the Network-on- Chip (NoC) architecture have brought into the fore-front various opportunities and challenges for the deployment of real-time systems. Such real-time systems need timing guarantees to be fulfilled. Therefore, calculating upper-bounds on the end-to-end communication delay between system components is of primary interest. In this work, we identify the limitations of an existing approach proposed by [1] and propose different techniques to overcome these limitations.

Relevance:

100.00%

Publisher:

Abstract:

Modeling the fundamental performance limits of Wireless Sensor Networks (WSNs) is of paramount importance to understand their behavior under worst-case conditions and to make the appropriate design choices. This is particularly relevant for time-sensitive WSN applications, where the timing behavior of the network protocols (message transmissions must respect deadlines) impacts the correct operation of these applications. In that direction, this paper contributes a methodology based on Network Calculus, which enables quick and efficient worst-case dimensioning of static or even dynamically changing cluster-tree WSNs where the data sink can either be static or mobile. We propose closed-form recurrent expressions for computing the worst-case end-to-end delays, buffering and bandwidth requirements across any source-destination path in a cluster-tree WSN. We show how to apply our methodology to the case of IEEE 802.15.4/ZigBee cluster-tree WSNs. Finally, we demonstrate the validity and analyze the accuracy of our methodology through a comprehensive experimental study using commercially available technology, namely TelosB motes running TinyOS.
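By way of background (not the paper's recurrent cluster-tree expressions), the basic single-node Network Calculus bounds from which such analyses start: for an affine arrival curve alpha(t) = b + r*t served by a rate-latency curve beta(t) = R*(t - T)+, with r <= R, the worst-case delay is bounded by T + b/R and the backlog by b + r*T. A small sketch:

    # Minimal Network Calculus sketch (generic single-node bounds, not the paper's
    # cluster-tree recurrences). Assumes an affine arrival curve alpha(t) = b + r*t
    # and a rate-latency service curve beta(t) = R*(t - T)+, with r <= R for stability.

    def delay_bound(b, r, R, T):
        """Classical worst-case delay bound (horizontal deviation between curves)."""
        assert r <= R, "flow rate must not exceed the guaranteed service rate"
        return T + b / R

    def backlog_bound(b, r, R, T):
        """Classical worst-case backlog bound (vertical deviation between curves)."""
        assert r <= R
        return b + r * T

    # Example: burst b = 2 kbit, rate r = 10 kbit/s, service rate R = 50 kbit/s, latency T = 0.05 s.
    print(delay_bound(2, 10, 50, 0.05))    # 0.09 s
    print(backlog_bound(2, 10, 50, 0.05))  # 2.5 kbit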

Relevance:

100.00%

Publisher:

Abstract:

Time-sensitive Wireless Sensor Network (WSN) applications require finite delay bounds in critical situations. This paper provides a methodology for the modeling and the worst-case dimensioning of cluster-tree WSNs. We provide a fine-grained model of the worst-case cluster-tree topology characterized by its depth, the maximum number of child routers and the maximum number of child nodes for each parent router. Using Network Calculus, we derive “plug-and-play” expressions for the end-to-end delay bounds, buffering and bandwidth requirements as a function of the WSN cluster-tree characteristics and traffic specifications. The cluster-tree topology has been adopted by many cluster-based solutions for WSNs. We demonstrate how to apply our general results for dimensioning IEEE 802.15.4/ZigBee cluster-tree WSNs. We believe that this paper shows the fundamental performance limits of cluster-tree wireless sensor networks by providing a simple and effective methodology for the design of such WSNs.
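To illustrate how the three topology parameters pin down the worst-case tree, here is a simplified sketch using ZigBee-style names (Lm = maximum depth, Rm = maximum child routers, Nm = maximum child end nodes per parent) as assumptions rather than the paper's notation: every parent router attaches the maximum number of children down to the maximum depth.

    # Illustrative worst-case cluster-tree sizing (simplified; parameter names
    # follow ZigBee-style conventions and are assumptions, not the paper's notation).

    def worst_case_cluster_tree(Lm, Rm, Nm):
        """Routers and end nodes in the fully populated tree: every parent router
        at depth d < Lm attaches Rm child routers and Nm child end nodes;
        routers at the maximum depth Lm attach no children."""
        routers = sum(Rm ** d for d in range(Lm + 1))      # coordinator at depth 0 counted as a router
        end_nodes = Nm * sum(Rm ** d for d in range(Lm))   # parents exist at depths 0 .. Lm-1
        return routers, end_nodes

    # Example: depth 2, up to 2 child routers and 3 child end nodes per parent.
    print(worst_case_cluster_tree(2, 2, 3))  # (7, 9): 7 routers, 9 end nodes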

Relevance:

100.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance:

100.00%

Publisher:

Abstract:

We investigate on-line prediction of individual sequences. Given a class of predictors, the goal is to predict as well as the best predictor in the class, where the loss is measured by the self-information (logarithmic) loss function. The excess loss (regret) is closely related to the redundancy of the associated lossless universal code. Using Shtarkov's theorem and tools from empirical process theory, we prove a general upper bound on the best possible (minimax) regret. The bound depends on certain metric properties of the class of predictors. We apply the bound to both parametric and nonparametric classes of predictors. Finally, we point out a suboptimal behavior of the popular Bayesian weighted average algorithm.
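For reference, a sketch of the standard setting (not the paper's specific bound): the cumulative log-loss regret of a strategy against a class of predictors, and Shtarkov's characterization of the minimax regret via the normalized maximum-likelihood (Shtarkov) sum.

    % Reference sketch of the standard setting (not the paper's specific bound):
    % regret of a strategy \hat{p} against a class \mathcal{F} over sequences x^n,
    % and Shtarkov's minimax characterization.
    \[
      R_n(\hat{p}, \mathcal{F})
        = \sup_{x^n}\Big[-\log \hat{p}(x^n) - \inf_{f \in \mathcal{F}}\big(-\log f(x^n)\big)\Big],
      \qquad
      \min_{\hat{p}} R_n(\hat{p}, \mathcal{F})
        = \log \sum_{x^n} \sup_{f \in \mathcal{F}} f(x^n).
    \]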

Relevance:

100.00%

Publisher:

Abstract:

The heat waves of 2003 in Western Europe and 2010 in Russia, commonly labelled as rare climatic anomalies outside of previous experience, are often taken as harbingers of more frequent extremes in a global-warming-influenced future. However, a recent reconstruction of spring–summer temperatures for Western Europe resulted in the likelihood of significantly higher temperatures in 1540. In order to check the plausibility of this result we investigated the severity of the 1540 drought, drawing on the known soil desiccation-temperature feedback. Based on more than 300 first-hand documentary weather report sources originating from an area of 2 to 3 million km2, we show that Europe was affected by an unprecedented 11-month-long Megadrought. The estimated number of precipitation days and the precipitation amount for Central and Western Europe in 1540 are significantly lower than the 100-year minima of the instrumental measurement period for spring, summer and autumn. This result is supported by independent documentary evidence about extremely low river flows and Europe-wide wildfires, forest fires and settlement fires. We found that an event of this severity cannot be simulated by state-of-the-art climate models.

Relevance:

100.00%

Publisher:

Abstract:

In the standard Vehicle Routing Problem (VRP), we route a fleet of vehicles to deliver the demands of all customers such that the total distance traveled by the fleet is minimized. In this dissertation, we study variants of the VRP that minimize the completion time, i.e., we minimize the distance of the longest route. We call it the min-max objective function. In applications such as disaster relief efforts and military operations, the objective is often to finish the delivery or the task as soon as possible, not to plan routes with the minimum total distance. Even in commercial package delivery nowadays, companies are investing in new technologies to speed up delivery instead of focusing merely on the min-sum objective. In this dissertation, we compare the min-max and the standard (min-sum) objective functions in a worst-case analysis to show that the optimal solution with respect to one objective function can be very poor with respect to the other. The results motivate the design of algorithms specifically for the min-max objective. We study variants of min-max VRPs including one problem from the literature (the min-max Multi-Depot VRP) and two new problems (the min-max Split Delivery Multi-Depot VRP with Minimum Service Requirement and the min-max Close-Enough VRP). We develop heuristics to solve these three problems. We compare the results produced by our heuristics to the best-known solutions in the literature and find that our algorithms are effective. In the case where benchmark instances are not available, we generate instances whose near-optimal solutions can be estimated based on geometry. We formulate the Vehicle Routing Problem with Drones and carry out a theoretical analysis to show the maximum benefit from using drones in addition to trucks to reduce delivery time. The speed-up ratio depends on the number of drones loaded onto one truck and the speed of the drone relative to the speed of the truck.
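To make the comparison concrete, here is a toy sketch (entirely hypothetical data, not from the dissertation) that brute-forces a tiny two-vehicle instance and reports the assignment minimizing each objective; the two optima differ, which illustrates why the min-max objective deserves dedicated algorithms.

    # Toy illustration (hypothetical data, not from the dissertation) of how the
    # min-sum and min-max objectives can favour different solutions.
    from itertools import permutations, product
    from math import dist

    depot = (0.0, 0.0)
    customers = [(10.0, -1.0), (10.0, 0.0), (10.0, 1.0), (10.0, 2.0)]

    def route_length(points):
        """Depot -> points in the given order -> depot; an empty route has length 0."""
        if not points:
            return 0.0
        tour = [depot, *points, depot]
        return sum(dist(a, b) for a, b in zip(tour, tour[1:]))

    def best_route(subset):
        """Shortest visiting order for a small subset (brute force over permutations)."""
        return min((route_length(p) for p in permutations(subset)), default=0.0)

    solutions = []
    for assign in product((0, 1), repeat=len(customers)):
        groups = [[c for c, a in zip(customers, assign) if a == v] for v in (0, 1)]
        lengths = [best_route(g) for g in groups]
        solutions.append((sum(lengths), max(lengths)))

    print("min-sum optimum (total, max):", min(solutions, key=lambda s: s[0]))
    print("min-max optimum (total, max):", min(solutions, key=lambda s: s[1]))
    # The min-sum optimum sends one vehicle to all four customers (shorter total
    # distance, longer completion time); the min-max optimum splits the work.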

Relevance:

100.00%

Publisher:

Abstract:

Chloropropanols, including 3-monochloropropane-1,2-diol (3-MCPD) and 1,3-dichloropropan-2-ol (1,3-DCP), comprise a group of chemical contaminants with carcinogenic and genotoxic properties. They have been found in a variety of processed foods and food ingredients, such as hydrolyzed vegetable protein, soy sauce, cereal-based products, malt-derived ingredients, and smoked foods. This study aimed to assess the dietary exposure to 3-MCPD and 1,3-DCP in Brazil and to verify whether the presence of these substances in foods could represent a health risk. The intake was calculated by combining data on food consumption, provided by the Consumer Expenditure Survey 2008-2009, with the levels of contaminant occurrence determined by gas chromatography-mass spectrometry. The exposure to 3-MCPD ranged from 0.06 to 0.51 µg kg bw⁻¹ day⁻¹ considering average and high consumers, while the intake of 1,3-DCP was estimated at 0.0036 µg kg bw⁻¹ day⁻¹ in the worst-case scenario evaluated. Based on these results, the exposure of the Brazilian population to chloropropanols does not present a significant health risk. However, the consumption of specific foods containing high levels of 3-MCPD could exceed the provisional maximum tolerable daily intake of 2 µg kg bw⁻¹ established for this compound and, therefore, represent a potential concern.
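A minimal sketch of the underlying exposure arithmetic (illustrative numbers, not the study's data): exposure is the sum of consumption times contaminant concentration over the foods considered, divided by body weight.

    # Minimal dietary-exposure sketch (illustrative numbers, not the study's data):
    # exposure = sum(consumption_i * concentration_i) / body weight.

    def dietary_exposure(intakes, body_weight_kg):
        """intakes: list of (food consumption in kg/day, contaminant level in ug/kg).
        Returns exposure in ug per kg body weight per day."""
        total_ug_per_day = sum(consumption * level for consumption, level in intakes)
        return total_ug_per_day / body_weight_kg

    # Example: 0.015 kg/day of soy sauce at 20 ug/kg plus 0.1 kg/day of bread at
    # 5 ug/kg, for a 60 kg adult.
    print(dietary_exposure([(0.015, 20.0), (0.1, 5.0)], 60.0))  # ~0.013 ug/kg bw/day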

Relevance:

100.00%

Publisher:

Abstract:

By employing Moody's corporate default and rating transition data spanning the last 90 years, we explore how much capital banks should hold against their corporate loan portfolios to withstand historical stress scenarios. Specifically, we focus on the worst-case scenario over the observation period, the Great Depression. We find that migration risk and the length of the investment horizon are critical factors when determining bank capital needs in a crisis. We show that capital may need to rise more than three times when the horizon is increased from 1 year, as required by current and future regulation, to 3 years. Increases are still important but of a lower magnitude when migration risk is introduced in the analysis. Further, we find that the new bank capital requirements under the so-called Basel 3 agreement would enable banks to absorb Great Depression-style losses. However, such losses would dent regulatory capital considerably and go far beyond the capital buffers that have been proposed to ensure that banks survive crisis periods without government support.
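A stylized sketch of why the horizon and migration risk interact (hypothetical ratings and probabilities, not Moody's data): the h-year transition matrix is the h-th power of the annual matrix, so cumulative default probabilities over longer horizons grow faster than h times the one-year default rate when downgrades are possible.

    # Stylized sketch (hypothetical ratings and probabilities, not Moody's data).
    import numpy as np

    # States: investment grade (IG), speculative grade (SG), default (D, absorbing).
    annual = np.array([
        [0.97, 0.025, 0.005],   # IG -> IG, SG, D
        [0.05, 0.90,  0.05 ],   # SG -> IG, SG, D
        [0.00, 0.00,  1.00 ],   # D is absorbing
    ])

    for horizon in (1, 3):
        cumulative = np.linalg.matrix_power(annual, horizon)
        print(f"{horizon}-year default probability from IG: {cumulative[0, 2]:.4f}")
    # The 3-year figure exceeds three times the 1-year figure because some
    # investment-grade names first migrate to speculative grade and then default.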

Relevance:

100.00%

Publisher:

Abstract:

In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMRs) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from the collected disease counts and the expected disease counts calculated by means of reference-population disease rates, an SMR is derived in each area as the maximum likelihood estimate under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the small population underlying the area or the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classical and the Bayesian paradigm.

Our proposal approaches this issue with a decision-oriented method focused on multiple-testing control, without abandoning the preliminary-study perspective that an analysis of SMR indicators is meant to serve. We implement control of the False Discovery Rate (FDR), a quantity widely used to address multiple-comparisons problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value have weak power in small areas, where the expected number of disease cases is small. Moreover, the tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous.

The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. A further peculiarity of the present work is to propose a fully Bayesian hierarchical model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping models, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood typical of a hierarchical Bayesian model has the advantage of evaluating a single test (i.e. a test in a single area) by means of all the observations in the map under study, rather than just the single observation. This improves test power in small areas and addresses more appropriately the spatial correlation issue, namely that relative risks tend to be closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) in each area. An estimate of the expected FDR conditional on the data can be calculated for any set of areas declared at high risk (where the null hypothesis is rejected) by averaging the corresponding b_i values. This estimated FDR provides an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the estimated FDR does not exceed a pre-specified value; we call these estimated-FDR-based decision (or selection) rules.

The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model retains the ability to provide an estimate of the relative risk values, as in the Besag, York and Mollié model (1991). We set up a simulation study to evaluate the model's accuracy in FDR estimation, the sensitivity and specificity of the decision rule, and the goodness of the relative risk estimates. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the area sizes, the number of areas where the null hypothesis is true, and the risk level in the latter areas. In summarizing the simulation results we always consider FDR estimation in the sets formed by all areas whose b_i falls below a threshold t. We show plots of the estimated FDR and the true FDR (known by simulation) against the threshold t to assess the FDR estimation; by varying the threshold we can learn which FDR values can be accurately estimated by a practitioner applying the model (from the closeness between the estimated and the true FDR). By plotting the sensitivity and specificity (both known by simulation) against the estimated FDR, we can check the sensitivity and specificity of the corresponding decision rules. To investigate the over-smoothing of the relative risk estimates, we compare box plots of such estimates in the high-risk areas (known by simulation) obtained by both our model and the classical Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total).

Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence conservative FDR control) in scenarios with small areas, low risk levels and spatially correlated risks, which are our primary target. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of the estimated-FDR-based decision rules is generally low but their specificity is high; in these scenarios, selection rules based on an estimated FDR of 0.05 or 0.10 can be recommended. In cases where the number of true alternative hypotheses (the number of true high-risk areas) is small, FDR values of 0.15 are also well estimated, and decision rules based on an estimated FDR of 0.15 gain power while maintaining high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of a decision rule based on an estimated FDR of 0.05. In such scenarios, decision rules based on an estimated FDR of 0.05 or, even worse, 0.10 cannot be recommended because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classical Besag, York and Mollié model. For this reason, our model is attractive for its ability to perform both the estimation of relative risk values and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
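A minimal sketch (illustrative code, not the thesis implementation) of the estimated-FDR selection rule described above: average the posterior null probabilities b_i over the set of flagged areas.

    # Minimal sketch of the Bayesian estimated FDR: given posterior probabilities
    # b_i that area i is NOT at elevated risk (null true), the estimated FDR of the
    # set of areas flagged at threshold t is the mean b_i over that set.
    import numpy as np

    def estimated_fdr(b, t):
        """b: array of posterior null probabilities; t: selection threshold.
        Areas with b_i <= t are declared high-risk; return their average b_i."""
        flagged = b[b <= t]
        return flagged.mean() if flagged.size else 0.0

    # Example with hypothetical posterior probabilities for 8 areas.
    b = np.array([0.01, 0.03, 0.04, 0.20, 0.45, 0.60, 0.85, 0.95])
    print(estimated_fdr(b, t=0.05))  # ~0.027 for the flagged set {0.01, 0.03, 0.04}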

Relevance:

100.00%

Publisher:

Abstract:

Proper hazard identification has become progressively more difficult to achieve, as witnessed by several major accidents that took place in Europe, such as the ammonium nitrate explosion at Toulouse (2001) and the vapour cloud explosion at Buncefield (2005), whose accident scenarios were not considered in the site safety cases. Furthermore, the rapid renewal of industrial technology has brought about the need to upgrade hazard identification methodologies. Accident scenarios of emerging technologies, which are not yet properly identified, may remain unidentified until they take place for the first time. The consideration of atypical scenarios deviating from normal expectations of unwanted events or from worst-case reference scenarios is thus extremely challenging. A specific method named Dynamic Procedure for Atypical Scenarios Identification (DyPASI) was developed as a complementary tool to bow-tie identification techniques. The main aim of the methodology is to provide an easier but comprehensive hazard identification of the industrial process analysed, by systematizing information from early signals of risk related to past events, near misses and inherent studies. DyPASI was validated on two examples of new and emerging technologies: Liquefied Natural Gas regasification and Carbon Capture and Storage. The study broadened the knowledge of the related emerging risks and, at the same time, demonstrated that DyPASI is a valuable tool to obtain a complete and updated overview of potential hazards. Moreover, in order to tackle the underlying causes of atypical accidents, three methods for the development of early warning indicators were assessed: the Resilience-based Early Warning Indicator (REWI) method, the Dual Assurance method and the Emerging Risk Key Performance Indicator method. REWI was found to be the most complementary and effective of the three, demonstrating that its synergy with DyPASI would be an adequate strategy to improve hazard identification methodologies towards the capture of atypical accident scenarios.

Relevance:

100.00%

Publisher:

Abstract:

Pseudo-total (i.e. aqua regia extractable) and gastric-bioaccessible (i.e. glycine + HCl extractable) concentrations of Ca, Co, Cr, Cu, Fe, Mn, Ni, Pb and Zn were determined in a total of 48 samples collected from six community urban gardens of different characteristics in the city of Madrid (Spain). Calcium carbonate appears to be the soil property that determines the bioaccessibility of the majority of those elements, and the lack of influence of organic matter, pH and texture can be explained by their low levels in the samples (organic matter) or their narrow range of variation (pH and texture). A conservative risk assessment with bioaccessible concentrations in two scenarios, i.e. adult urban farmers and children playing in urban gardens, revealed acceptable levels of risk, but with large differences between urban gardens depending on their history of land use and their proximity to busy areas in the city center. Only in a worst-case scenario in which children who use urban gardens as recreational areas also eat the produce grown in them would the risk exceed the limits of acceptability.
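A generic sketch of a soil-ingestion hazard-quotient calculation in the style of standard exposure assessment (parameter values and the reference dose are illustrative assumptions, not the study's figures): using the bioaccessible rather than the pseudo-total concentration makes the assessment less conservative.

    # Generic soil-ingestion hazard quotient sketch (standard exposure-assessment
    # style; all values are illustrative, not the study's).

    def hazard_quotient(conc_mg_kg, ingestion_mg_day, ef_days_yr, ed_years,
                        bw_kg, at_days, rfd_mg_kg_day):
        """Average daily dose from incidental soil ingestion divided by a reference dose."""
        add = (conc_mg_kg * 1e-6 * ingestion_mg_day * ef_days_yr * ed_years) / (bw_kg * at_days)
        return add / rfd_mg_kg_day

    # Example: 200 mg/kg bioaccessible trace metal, a child ingesting 100 mg soil/day,
    # 180 days/yr for 6 years, 15 kg body weight, averaged over 6*365 days, against a
    # hypothetical reference dose of 3.5e-3 mg/kg/day.
    print(hazard_quotient(200, 100, 180, 6, 15, 6 * 365, 3.5e-3))  # ~0.19, below 1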