950 results for Probabilistic metrics


Relevance:

20.00%

Publisher:

Abstract:

Previous work has shown that robot navigation systems that employ an architecture based upon the idiotypic network theory of the immune system have an advantage over control techniques that rely on reinforcement learning alone. This is thought to be a result of intelligent behaviour selection on the part of the idiotypic robot. In this paper an attempt is made to imitate idiotypic dynamics by creating controllers that combine reinforcement with a number of different probabilistic schemes to select robot behaviour. The aims are to show that the idiotypic system is not merely performing some kind of periodic random behaviour selection, and to gain further insight into the processes that govern the idiotypic mechanism. Trials are carried out using simulated Pioneer robots that undertake navigation exercises. Results show that a scheme that boosts the probability of selecting highly ranked alternative behaviours to 50% during stall conditions comes closest to achieving the properties of the idiotypic system, but remains unable to match it in terms of all-round performance.
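The stall-boost scheme described in the abstract can be sketched in a few lines. This is a minimal illustration only, not the paper's controller: the scoring, the size of the alternative pool, and the selection logic are assumptions of the sketch.

```python
import random

def select_behaviour(scores, stalled, rng, boost=0.5):
    """Pick a behaviour index from reinforcement scores.

    Normally the top-ranked behaviour wins; during a stall the combined
    probability of picking a highly ranked alternative (here the 2nd or
    3rd ranked behaviour) is boosted to `boost` (50% by default).
    """
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    alternatives = ranked[1:3]
    if stalled and alternatives and rng.random() < boost:
        return rng.choice(alternatives)
    return ranked[0]

rng = random.Random(42)
scores = [0.9, 0.7, 0.5, 0.2]          # reinforcement scores per behaviour
free_choice = select_behaviour(scores, stalled=False, rng=rng)
stall_choices = [select_behaviour(scores, stalled=True, rng=rng)
                 for _ in range(1000)]
alt_fraction = sum(c in (1, 2) for c in stall_choices) / len(stall_choices)
```

Over many stalls, roughly half of the selections fall on the highly ranked alternatives, matching the 50% boost the abstract describes.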

Relevance:

20.00%

Publisher:

Abstract:

Introduction: Studies on infant dietary intake generally do not focus on the types of liquids consumed. Objective: To document, by age and breastfeeding status, the types of liquids present in the diet of Mexican children under 1 year of age (< 1 y) who participated in the National Health and Nutrition Survey 2012 (ENSANUT-2012). Methods: Analysis of feeding practices for infants < 1 y from the ENSANUT-2012 survey, in non-breastfed (non-BF) and breastfed (BF) infants by status quo, for the consumption of liquids grouped into: water, formula, fortified LICONSA milk, nutritive liquids (NL; thin cereal-based gruel with water or milk, and coffee with milk) and non-nutritive liquids (non-NL) such as sugared water, water-based drinks, tea, bean or chicken broth, aguamiel and coffee. For these infants < 1 y we also analyzed the ungrouped consumption of liquids during the first three days of life (newborns), based on the mother's recall. Percentages and 95% confidence intervals (95% CI) were calculated, adjusting for the survey design. Statistical differences were assessed with the Z test. Results: We observed a high consumption of human milk, followed by formula (56.7%) and water (51.1%), in infants under 6 months of age (< 6 mo). The proportion of non-BF infants consuming non-NL was higher than that of BF infants (p < 0.05). More than 60% of older infants (6 mo to < 1 y) consumed formula and were non-BF. In newborns, formula consumption was predominant, followed by tea or infusion and water. Conclusions: Liquids other than breast milk are undesirably present in the diet of Mexican infants, and non-NL are consumed earlier than NL, revealing inadequate early dietary practices.
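The group comparison reported in the Results can be illustrated with a standard two-proportion Z test. The counts below are invented for the sketch, and the survey-design adjustment mentioned in the Methods is omitted here.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion Z test with a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal tail: 2*(1 - Phi(|z|))
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Illustrative counts: non-NL consumers among non-BF vs BF infants.
z, p_value = two_proportion_z(120, 300, 80, 300)
```

With these made-up counts the difference in proportions (40% vs 26.7%) is significant at the 5% level, mirroring the kind of comparison the survey reports.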

Relevance:

20.00%

Publisher:

Abstract:

Investigation of large, destructive earthquakes is challenged by their infrequent occurrence and the remote nature of geophysical observations. This thesis sheds light on the source processes of large earthquakes from two perspectives: robust and quantitative observational constraints through Bayesian inference for earthquake source models, and physical insights on the interconnections of seismic and aseismic fault behavior from elastodynamic modeling of earthquake ruptures and aseismic processes.

To constrain the shallow deformation during megathrust events, we develop semi-analytical and numerical Bayesian approaches to explore the maximum resolution of the tsunami data, with a focus on incorporating the uncertainty in the forward modeling. These methodologies are then applied to invert for the coseismic seafloor displacement field in the 2011 Mw 9.0 Tohoku-Oki earthquake using near-field tsunami waveforms and for the coseismic fault slip models in the 2010 Mw 8.8 Maule earthquake with complementary tsunami and geodetic observations. From posterior estimates of model parameters and their uncertainties, we are able to quantitatively constrain the near-trench profiles of seafloor displacement and fault slip. Similar characteristic patterns emerge during both events, featuring the peak of uplift near the edge of the accretionary wedge with a decay toward the trench axis, with implications for fault failure and tsunamigenic mechanisms of megathrust earthquakes.
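The Bayesian workflow can be illustrated on a toy one-parameter analogue: inferring a single slip value from noisy synthetic observations with a Metropolis sampler. The actual tsunami and geodetic inversions are far richer (fields of parameters, forward-model uncertainty); everything below is an assumption of the sketch.

```python
import math
import random

rng = random.Random(0)

# Synthetic data: one displacement parameter observed 20 times with noise.
true_slip, sigma = 5.0, 0.5
obs = [true_slip + rng.gauss(0.0, sigma) for _ in range(20)]

def log_posterior(slip):
    """Flat prior on [0, 20] times a Gaussian likelihood, in log space."""
    if not 0.0 <= slip <= 20.0:
        return -math.inf
    return -sum((d - slip) ** 2 for d in obs) / (2.0 * sigma ** 2)

# Metropolis random walk over the single parameter
samples, current = [], 10.0
lp = log_posterior(current)
for _ in range(5000):
    proposal = current + rng.gauss(0.0, 0.3)
    lp_prop = log_posterior(proposal)
    if math.log(rng.random()) < lp_prop - lp:
        current, lp = proposal, lp_prop
    samples.append(current)

posterior = samples[1000:]                      # discard burn-in
mean_slip = sum(posterior) / len(posterior)
```

The posterior samples recover the true parameter and, crucially, quantify its uncertainty, which is the property the thesis exploits to constrain near-trench displacement profiles.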

To understand the behavior of earthquakes at the base of the seismogenic zone on continental strike-slip faults, we simulate the interactions of dynamic earthquake rupture, aseismic slip, and heterogeneity in rate-and-state fault models coupled with shear heating. Our study explains the long-standing enigma of seismic quiescence on major fault segments known to have hosted large earthquakes by deeper penetration of large earthquakes below the seismogenic zone, where mature faults have well-localized creeping extensions. This conclusion is supported by the simulated relationship between seismicity and large earthquakes as well as by observations from recent large events. We also use the modeling to connect the geodetic observables of fault locking with the behavior of seismicity in numerical models, investigating how a combination of interseismic geodetic and seismological estimates could constrain the locked-creeping transition of faults and potentially their co- and post-seismic behavior.

Relevance:

20.00%

Publisher:

Abstract:

The U.S. Nuclear Regulatory Commission implemented a safety goal policy in response to the 1979 Three Mile Island accident. This policy addresses the question “How safe is safe enough?” by specifying quantitative health objectives (QHOs) for comparison with results from nuclear power plant (NPP) probabilistic risk analyses (PRAs) to determine whether proposed regulatory actions are justified based on potential safety benefit. Lessons learned from recent operating experience—including the 2011 Fukushima accident—indicate that accidents involving multiple units at a shared site can occur with non-negligible frequency. Yet risk contributions from such scenarios are excluded by policy from safety goal evaluations—even for the nearly 60% of U.S. NPP sites that include multiple units. This research develops and applies methods for estimating risk metrics for comparison with safety goal QHOs using models from state-of-the-art consequence analyses to evaluate the effect of including multi-unit accident risk contributions in safety goal evaluations.
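A back-of-the-envelope sketch of why excluding multi-unit contributions matters: combining assumed per-unit accident frequencies for a two-unit site under a rare-event approximation. All numbers below are invented for the illustration, not values from the research.

```python
# Per-unit accident frequencies (per reactor-year); illustrative only.
unit_frequencies = [2.0e-5, 2.0e-5]        # a two-unit site

# Rare-event approximation: frequency of an accident at any unit.
site_any_unit = sum(unit_frequencies)

# Assumed conditional probability that an accident also involves the
# second unit (e.g. via a shared common-cause initiator).
p_concurrent = 0.1
multi_unit_frequency = site_any_unit * p_concurrent

# Share of total site risk that a single-unit-only evaluation misses.
missed_fraction = multi_unit_frequency / (site_any_unit + multi_unit_frequency)
```

Even with a modest assumed coupling probability, a non-trivial fraction of site risk (about 9% here) falls outside a safety goal evaluation that excludes multi-unit scenarios.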

Relevance:

20.00%

Publisher:

Abstract:

For climate risk management, cumulative distribution functions (CDFs) are an important source of information. They are ideally suited to compare probabilistic forecasts of primary (e.g. rainfall) or secondary data (e.g. crop yields). Summarised as CDFs, such forecasts allow an easy quantitative assessment of possible, alternative actions. Although the degree of uncertainty associated with CDF estimation could influence decisions, such information is rarely provided. Hence, we propose Cox-type regression models (CRMs) as a statistical framework for making inferences on CDFs in climate science. CRMs were designed for modelling probability distributions rather than just mean or median values. This makes the approach appealing for risk assessments where probabilities of extremes are often more informative than central tendency measures. CRMs are semi-parametric approaches originally designed for modelling risks arising from time-to-event data. Here we extend this original concept beyond time-dependent measures to other variables of interest. We also provide tools for estimating CDFs and surrounding uncertainty envelopes from empirical data. These statistical techniques intrinsically account for non-stationarities in time series that might be the result of climate change. This feature makes CRMs attractive candidates to investigate the feasibility of developing rigorous global circulation model (GCM)-CRM interfaces for provision of user-relevant forecasts. To demonstrate the applicability of CRMs, we present two examples for El Niño/Southern Oscillation (ENSO)-based forecasts: the onset date of the wet season (Cairns, Australia) and total wet season rainfall (Quixeramobim, Brazil). This study emphasises the methodological aspects of CRMs rather than discussing merits or limitations of the ENSO-based predictors.
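The CDF-with-uncertainty-envelope idea can be illustrated nonparametrically with an empirical CDF and a Dvoretzky-Kiefer-Wolfowitz confidence band; the paper's Cox-type regression models are a richer, covariate-dependent version of this. The rainfall values below are invented.

```python
import math

def ecdf_with_envelope(data, alpha=0.05):
    """Empirical CDF with a Dvoretzky-Kiefer-Wolfowitz confidence band.

    Returns (x, F(x), lower, upper) at each ordered data point.
    """
    xs = sorted(data)
    n = len(xs)
    eps = math.sqrt(math.log(2.0 / alpha) / (2.0 * n))
    return [(x, (i + 1) / n,
             max((i + 1) / n - eps, 0.0),
             min((i + 1) / n + eps, 1.0))
            for i, x in enumerate(xs)]

# Invented wet-season rainfall totals (mm) for ten seasons.
rain = [310, 420, 515, 280, 390, 610, 470, 350, 440, 530]
band = ecdf_with_envelope(rain)
median_point = band[4]        # 5th of 10 ordered values, so F = 0.5
```

Each point carries its CDF estimate plus an envelope, so a decision-maker sees not only "the probability of rainfall below 420 mm is about 0.5" but also how uncertain that estimate is with only ten seasons of data.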

Relevance:

20.00%

Publisher:

Abstract:

Dependence of some species on landscape structure has been demonstrated in numerous studies. So far, however, little progress has been made in integrating landscape metrics into the prediction of species associated with coastal features. Specific landscape metrics were tested as predictors of coastal shape using three coastal features of the Iberian Peninsula (beaches, capes and gulfs) at different scales. We used the landscape metrics in combination with environmental variables to model the niche and find suitable habitats for a seagrass species (Cymodocea nodosa) throughout its entire range of distribution. Landscape metrics able to capture variation in the coastline significantly enhanced the accuracy of the models, despite the limitations imposed by the scale of the study. We provide the first global model of the factors that may be shaping the environmental niche and distribution of C. nodosa throughout its range. Sea surface temperature and salinity were the most relevant variables. We identified areas that seem unsuitable for C. nodosa as well as suitable habitats not occupied by the species. We also present preliminary results from testing historical biogeographical hypotheses derived from distribution predictions under Last Glacial Maximum conditions and genetic diversity data.

Relevance:

20.00%

Publisher:

Abstract:

Determination of combustion metrics for a diesel engine has the potential to provide feedback for closed-loop combustion phasing control to meet current and upcoming emission and fuel consumption regulations. This thesis focused on the estimation of combustion metrics including start of combustion (SOC), crank angle location of 50% cumulative heat release (CA50), peak pressure crank angle location (PPCL), peak pressure amplitude (PPA), peak apparent heat release rate crank angle location (PACL), mean absolute pressure error (MAPE), and peak apparent heat release rate amplitude (PAA). In-cylinder pressure has been used in the laboratory as the primary mechanism for characterization of combustion rates, and more recently it has been used in series production vehicles for feedback control. However, the intrusive measurement with an in-cylinder pressure sensor is expensive and requires a special mounting process and engine structure modification. As an alternative, this work investigated block-mounted accelerometers to estimate combustion metrics in a 9L I6 diesel engine, so the transfer path between the accelerometer signal and the in-cylinder pressure signal needs to be modeled. Given the transfer path, the in-cylinder pressure signal and the combustion metrics can be accurately estimated (recovered) from accelerometer signals. The method and applicability for determining the transfer path is critical in utilizing an accelerometer(s) for feedback. The single-input single-output (SISO) frequency response function (FRF) is the most common transfer path model; however, it is shown here to have low robustness for varying engine operating conditions. This thesis examines mechanisms to improve the robustness of the FRF for combustion metrics estimation. First, an adaptation process based on the particle swarm optimization algorithm was developed and added to the single-input single-output model. Second, a multiple-input single-output (MISO) FRF model coupled with principal component analysis and an offset compensation process was investigated and applied. Improvement of the FRF robustness was achieved with both approaches. Furthermore, a neural network was investigated as a nonlinear model of the transfer path between the accelerometer signal and the apparent heat release rate. The transfer path between acoustical emissions and the in-cylinder pressure signal was also investigated in this dissertation, on a high pressure common rail (HPCR) 1.9L TDI diesel engine. Acoustical emissions are an important factor in the powertrain development process. In this part of the research, a transfer path was developed between the two and then used to predict the engine noise level with the measured in-cylinder pressure as the input. Three methods for transfer path modeling were applied, and the method based on the cepstral smoothing technique led to the most accurate results, with an averaged estimation error of 2 dBA and a root mean square error of 1.5 dBA. Finally, a linear model for engine noise level estimation was proposed with the in-cylinder pressure signal and the engine speed as components.
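The SISO FRF transfer-path idea can be sketched with the standard H1 estimator (averaged cross-spectrum over averaged input auto-spectrum) on a synthetic path. A plain DFT stands in for the signal-processing toolchain, and the "transfer path" here is just a scaled one-sample delay rather than any engine model.

```python
import cmath
import math
import random

def dft(x):
    """Naive discrete Fourier transform (adequate for a short sketch)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def h1_frf(inputs, outputs):
    """H1 FRF estimate: averaged cross-spectrum / averaged auto-spectrum."""
    n = len(inputs[0])
    sxx, sxy = [0.0] * n, [0j] * n
    for x, y in zip(inputs, outputs):
        X, Y = dft(x), dft(y)
        for k in range(n):
            sxx[k] += abs(X[k]) ** 2
            sxy[k] += X[k].conjugate() * Y[k]
    return [sxy[k] / sxx[k] if sxx[k] else 0j for k in range(n)]

# Synthetic transfer path: the "output" is the input scaled by 0.5 and
# circularly delayed by one sample, so |H(k)| = 0.5 at every bin.
rng = random.Random(1)
n, segments = 32, 8
inputs, outputs = [], []
for _ in range(segments):
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]
    inputs.append(x)
    outputs.append([0.5 * x[t - 1] for t in range(n)])

H = h1_frf(inputs, outputs)
gain_k3 = abs(H[3])
```

Averaging over segments is what makes H1 robust to measurement noise on the output; the thesis's contribution is improving its robustness to changing operating conditions, which this sketch does not address.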

Relevance:

20.00%

Publisher:

Abstract:

As users continually request additional functionality, software systems will continue to grow in complexity, as well as in their susceptibility to failures. Particularly for sensitive systems requiring higher levels of reliability, faulty system modules may increase development and maintenance cost. Hence, identifying them early would support the development of reliable systems through improved scheduling and quality control. Research effort to predict software modules likely to contain faults has, as a consequence, been substantial. Although a wide range of fault prediction models have been proposed, we remain far from having reliable tools that can be widely applied to real industrial systems. For projects with known fault histories, numerous research studies show that statistical models can provide reasonable estimates at predicting faulty modules using software metrics. However, as context-specific metrics differ from project to project, prediction across projects is difficult to achieve. Prediction models obtained from one project's experience are ineffective at predicting fault-prone modules when applied to other projects. Hence, taking full advantage of the existing work in the software development community has been substantially limited. As a step towards solving this problem, in this dissertation we propose a fault prediction approach that exploits existing prediction models, adapting them to improve their ability to predict faulty system modules across different software projects.
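One very crude form of cross-project adaptation is recalibrating a model's decision threshold on the target project, since metric scales differ between projects. The sketch below uses a single made-up complexity metric and invented module data; the dissertation's adaptation approach is more sophisticated than this.

```python
def predict_faulty(metrics, threshold):
    """Flag modules whose complexity metric exceeds the threshold."""
    return [m > threshold for m in metrics]

def accuracy(preds, labels):
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def recalibrate_threshold(metrics, labels):
    """Pick the threshold maximising accuracy on the target project's
    labelled sample -- a crude stand-in for model adaptation."""
    best_t, best_acc = None, -1.0
    for t in sorted(set(metrics)):
        acc = accuracy(predict_faulty(metrics, t), labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Project A's model used threshold 10, but project B's metric scale is
# smaller, so the stale threshold flags nothing. Data is invented.
project_b_metrics = [1, 2, 3, 4, 6, 7, 8, 9]
project_b_labels = [False, False, False, False, True, True, True, True]

stale_acc = accuracy(predict_faulty(project_b_metrics, 10), project_b_labels)
new_threshold, adapted_acc = recalibrate_threshold(project_b_metrics,
                                                   project_b_labels)
```

The unadapted model is no better than chance on the new project, while even this naive recalibration restores its predictive value, which is the phenomenon motivating cross-project adaptation.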

Relevance:

20.00%

Publisher:

Abstract:

Event extraction from texts aims to detect structured information such as what has happened, to whom, where and when. Event extraction and visualization are typically considered two different tasks. In this paper, we propose a novel approach based on probabilistic modelling to jointly extract and visualize events from tweets, where both tasks benefit from each other. We model each event as a joint distribution over named entities, a date, a location and event-related keywords. Moreover, both tweets and event instances are associated with coordinates in the visualization space. The manifold assumption that the intrinsic geometry of tweets is a low-rank, non-linear manifold within the high-dimensional space is incorporated into the learning framework through a regularization term. Experimental results show that the proposed approach can effectively deal with both event extraction and visualization, and performs remarkably better than both the state-of-the-art event extraction method and a pipeline approach for event extraction and visualization.
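The core idea of modelling an event as a joint distribution over entities, dates, locations and keywords can be sketched with toy categorical distributions. Independence across the four element types, the smoothing floor, and every name below are assumptions of this sketch, not the paper's Bayesian model.

```python
import math

# Two toy "events", each a joint distribution over its elements.
events = {
    "event_a": {
        "entity":   {"Alice": 0.8, "Bob": 0.2},
        "date":     {"2015-06-01": 1.0},
        "location": {"London": 0.9, "Paris": 0.1},
        "keyword":  {"concert": 0.7, "music": 0.3},
    },
    "event_b": {
        "entity":   {"Bob": 0.9, "Alice": 0.1},
        "date":     {"2015-06-02": 1.0},
        "location": {"Paris": 1.0},
        "keyword":  {"summit": 0.6, "talks": 0.4},
    },
}

def log_likelihood(event, tweet):
    """Sum of log-probabilities of the tweet's extracted elements,
    with a small floor for values unseen under the event."""
    return sum(math.log(event[field].get(value, 1e-6))
               for field, value in tweet.items())

tweet = {"entity": "Alice", "date": "2015-06-01",
         "location": "London", "keyword": "concert"}
best_event = max(events, key=lambda e: log_likelihood(events[e], tweet))
```

Scoring a tweet against each event's joint distribution is the extraction half of the idea; the paper additionally ties events and tweets to visualization coordinates so both tasks are learned together.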

Relevance:

20.00%

Publisher:

Abstract:

In restructured power systems, generation and commercialization became market activities, while transmission and distribution continue as regulated monopolies. As a result, the adequacy of the transmission network should be evaluated independently of the generation system. Having introduced the constrained fuzzy power flow (CFPF) as a suitable tool to quantify the adequacy of the transmission network to satisfy 'reasonable demands for the transmission of electricity' (as stated, for instance, in European Directive 2009/72/EC), the aim is now to show how this approach can be used in conjunction with probabilistic criteria in security analysis. Classical security analysis considers models of the composite power system (generation plus transmission). The state of system components is usually modeled with probabilities, while loads (and generation) are modeled by crisp numbers, probability distributions or fuzzy numbers. In the case of the CFPF, failures of transmission network components have been investigated. In this framework, probabilistic methods are used to model failures of the transmission system components, and possibility models are used to deal with 'reasonable demands'. The enhanced version of the CFPF model is applied to an illustrative case.
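The possibility-model side of the CFPF rests on fuzzy representations of 'reasonable demands'. A minimal sketch of triangular fuzzy numbers and their addition, with invented load figures (this is generic fuzzy arithmetic, not the CFPF formulation itself):

```python
class TriangularFuzzy:
    """Triangular fuzzy number (a, m, b): support [a, b], peak at m.
    Assumes a < m < b."""

    def __init__(self, a, m, b):
        self.a, self.m, self.b = a, m, b

    def membership(self, x):
        if x <= self.a or x >= self.b:
            return 0.0
        if x <= self.m:
            return (x - self.a) / (self.m - self.a)
        return (self.b - x) / (self.b - self.m)

    def __add__(self, other):
        # Fuzzy addition acts parameter-wise for triangular numbers.
        return TriangularFuzzy(self.a + other.a, self.m + other.m,
                               self.b + other.b)

# "Reasonable demands" at two buses, in MW; figures are invented.
load1 = TriangularFuzzy(90, 100, 115)
load2 = TriangularFuzzy(45, 50, 60)
total = load1 + load2
peak_membership = total.membership(150)     # most plausible total demand
```

The aggregated demand is again a fuzzy number, so downstream adequacy checks can reason about how possible each total load level is rather than committing to a single crisp value.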

Relevance:

20.00%

Publisher:

Abstract:

Mobile and wireless networks have long exploited mobility predictions, focused on predicting the future location of given users, to perform more efficient network resource management. In this paper, we present a new approach in which we provide predictions as a probability distribution of the likelihood of moving to a set of future locations. This approach provides wireless services with a greater amount of knowledge and enables them to perform more effectively. We present a framework for the evaluation of this new type of predictor, and develop two new predictors, HEM and G-Stat. We evaluate our predictors' accuracy in predicting future cells for mobile users using two large geolocation data sets, from MDC [11], [12] and Crawdad [13]. We show that our predictors can successfully predict with an average inaccuracy as low as 2.2% in certain scenarios.
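Returning a full probability distribution over future cells, rather than a single predicted location, can be sketched with a first-order transition model built from movement traces. The traces and cell names below are invented, and HEM and G-Stat are more elaborate than this.

```python
from collections import Counter, defaultdict

def train_transition_model(traces):
    """First-order model: the empirical probability distribution of
    the next cell, conditioned on the current cell."""
    counts = defaultdict(Counter)
    for trace in traces:
        for current, nxt in zip(trace, trace[1:]):
            counts[current][nxt] += 1
    model = {}
    for cell, ctr in counts.items():
        total = sum(ctr.values())
        model[cell] = {nxt: c / total for nxt, c in ctr.items()}
    return model

# Invented movement traces over named cells; a real evaluation would
# use geolocation data sets such as MDC or Crawdad.
traces = [
    ["home", "cafe", "office", "cafe", "home"],
    ["home", "cafe", "office", "gym", "home"],
    ["home", "office", "gym", "home"],
]
model = train_transition_model(traces)
next_from_cafe = model["cafe"]      # a full distribution, not one guess
```

A service consuming `next_from_cafe` sees, for example, that a move to "office" is twice as likely as a return "home", and can pre-allocate resources proportionally instead of betting everything on a single predicted cell.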

Relevance:

20.00%

Publisher:

Abstract:

In this paper we discuss the temporal aspects of indexing and classification in information systems. We base this discussion on three sources of research on indexing scheme change: (1) analytical research on the types of scheme change, (2) empirical data on scheme change in systems, and (3) evidence of cataloguer decision-making in the context of scheme change. From this general discussion we propose two constructs along which we might craft metrics to measure scheme change: collocative integrity and semantic gravity. The paper closes with a discussion of these constructs.

Relevance:

20.00%

Publisher:

Abstract:

The topic of seismic loss assessment not only incorporates many aspects of earthquake engineering, but also entails social factors, public policies and business interests. Because of its multidisciplinary character, this process can be challenging and may seem discouraging to neophytes. In this context, there is an increasing need to derive simplified methodologies that streamline the process and provide tools for decision-makers and practitioners. This dissertation investigates different possible applications, both in the modelling of seismic losses and in the analysis of observational seismic data. Regarding the first topic, the PRESSAFE-disp method is proposed for the fast evaluation of fragility curves of precast reinforced-concrete (RC) structures. A direct application of the method to the productive area of San Felice is then studied to assess the number of collapses under a specific seismic scenario. In particular, with reference to the 2012 events, two large-scale stochastic models are outlined. The outcomes of the framework are promising, in good agreement with the observed damage scenario. Furthermore, a simplified displacement-based methodology is outlined to estimate different loss performance metrics for the decision-making phase of the seismic retrofit of a single RC building. The aim is to evaluate the seismic performance of different retrofit options, for a comparative analysis of their effectiveness and convenience. Finally, a contribution to the analysis of observational data is presented in the last part of the dissertation. A dedicated database of losses for precast RC buildings damaged by the 2012 earthquake is created, and a statistical analysis is performed, allowing several consequence functions to be derived. The outcomes presented may be implemented in probabilistic seismic risk assessments to forecast losses at the large scale. Furthermore, they may be adopted to establish retrofit policies to prevent and reduce the consequences of future earthquakes in industrial areas.
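Fragility curves such as those evaluated by PRESSAFE-disp are commonly parameterized as lognormal CDFs of an intensity measure. A minimal sketch of that standard form, with invented parameters rather than the dissertation's fitted values:

```python
import math

def fragility(im, theta, beta):
    """Lognormal fragility curve: P(damage state is reached | intensity
    measure im), with median capacity theta and dispersion beta."""
    return 0.5 * (1.0 + math.erf(math.log(im / theta)
                                 / (beta * math.sqrt(2.0))))

# Invented parameters for a collapse damage state (PGA in g).
theta, beta = 0.45, 0.5
p_at_median = fragility(0.45, theta, beta)   # 0.5 by construction
p_low = fragility(0.15, theta, beta)
p_high = fragility(0.90, theta, beta)
```

Evaluating such a curve at scenario intensity levels, and multiplying by building counts and consequence functions, is the basic chain that turns a shaking scenario into an expected-loss estimate.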

Relevance:

20.00%

Publisher:

Abstract:

Nowadays, the scientific community has devoted considerable effort to the sustainable development of the waste management sector and to resource efficiency in building infrastructures. Waste is the fourth largest source sector of emissions, and the municipal solid waste management system is considered the most complex system to manage, due to its diverse composition and the fragmentation of producers and responsibilities. Given this deep complexity, sustainability in the waste sector remains a challenging task with many open issues. In this thesis, some recent advances in the waste management sector are presented. Specifically, through the analysis of four of the author's publications, this thesis attempts to fill the gap in the following open issues: (i) waste collection and the generation of waste considering the pillars of sustainability; (ii) environmental and social analysis in the design of building infrastructures; (iii) the role of waste collection in boosting sustainable waste management systems; (iv) the ergonomic impacts of waste collection. For this purpose, four publications by the author in international peer-reviewed journals were selected from among all of the author's contributions (i.e., at final publication stage).