862 results for stochastic regression, consistency
Abstract:
A two-stage mixing process for concrete involves mixing a slurry of cementitious materials and water, then adding the slurry to coarse and fine aggregate to form concrete. Some research has indicated that this process might facilitate dispersion of cementitious materials and improve cement hydration, the characteristics of the interfacial transition zone (ITZ) between aggregate and paste, and concrete homogeneity. The goal of the study was to find optimal mixing procedures for producing a homogeneous, workable mixture and quality concrete using a two-stage mixing operation. The specific objectives of the study were as follows: (1) to achieve optimal mixing energy and time for a homogeneous cementitious material; (2) to characterize the homogeneity and flow properties of the pastes; (3) to investigate effective methods for coating aggregate particles with cement slurry; (4) to study the effect of the two-stage mixing procedure on concrete properties; (5) to obtain improved production rates. In Phase I, heat of hydration, maturity, and rheology tests were performed on the fresh paste samples, and compressive strength, degree of hydration, and scanning electron microscope (SEM) imaging tests were conducted on the cured specimens. In Phases II and III, tests included slump and air content on fresh concrete, and compressive and tensile strength, rapid air void analysis, and rapid chloride permeability on hardened concrete.
Abstract:
Quantifying the spatial configuration of hydraulic conductivity (K) in heterogeneous geological environments is essential for accurate predictions of contaminant transport, but is difficult because of the inherent limitations in resolution and coverage associated with traditional hydrological measurements. To address this issue, we consider crosshole and surface-based electrical resistivity geophysical measurements, collected in time during a saline tracer experiment. We use a Bayesian Markov-chain-Monte-Carlo (McMC) methodology to jointly invert the dynamic resistivity data, together with borehole tracer concentration data, to generate multiple posterior realizations of K that are consistent with all available information. We do this within a coupled inversion framework, whereby the geophysical and hydrological forward models are linked through an uncertain relationship between electrical resistivity and concentration. To minimize computational expense, a facies-based subsurface parameterization is developed. The Bayesian-McMC methodology allows us to explore the potential benefits of including the geophysical data into the inverse problem by examining their effect on our ability to identify fast flowpaths in the subsurface, and their impact on hydrological prediction uncertainty. Using a complex, geostatistically generated, two-dimensional numerical example representative of a fluvial environment, we demonstrate that flow model calibration is improved and prediction error is decreased when the electrical resistivity data are included. The worth of the geophysical data is found to be greatest for long spatial correlation lengths of subsurface heterogeneity with respect to wellbore separation, where flow and transport are largely controlled by highly connected flowpaths.
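To make the coupled-inversion idea concrete, the following is a minimal Python sketch of a random-walk Metropolis sampler that jointly fits hydrological and geophysical data. The forward models, noise levels, and the 10-cell log-K parameterization are toy stand-ins, not the paper's resistivity and tracer-transport simulators.

import numpy as np

rng = np.random.default_rng(0)
n_cells = 10  # coarse, facies-like 1-D parameterization of log-K

def hydro_forward(logK):
    # stand-in for the tracer-transport model: one datum per borehole
    return np.cumsum(np.exp(-logK))

def geophys_forward(logK):
    # stand-in for the time-lapse resistivity model, linked to concentration
    # through a fixed (here, known) petrophysical relation
    return 2.0 * hydro_forward(logK) + 1.0

truth = rng.normal(0.0, 1.0, n_cells)
d_hydro = hydro_forward(truth) + rng.normal(0, 0.05, n_cells)
d_geo = geophys_forward(truth) + rng.normal(0, 0.05, n_cells)

def log_post(m, use_geo=True):
    lp = -0.5 * np.sum(m**2)  # standard-normal prior on log-K
    lp -= 0.5 * np.sum((hydro_forward(m) - d_hydro)**2) / 0.05**2
    if use_geo:  # the joint inversion simply adds the geophysical misfit
        lp -= 0.5 * np.sum((geophys_forward(m) - d_geo)**2) / 0.05**2
    return lp

m, samples = np.zeros(n_cells), []
for it in range(20000):  # random-walk Metropolis
    prop = m + rng.normal(0, 0.1, n_cells)
    if np.log(rng.uniform()) < log_post(prop) - log_post(m):
        m = prop
    if it % 20 == 0:
        samples.append(m.copy())

post = np.array(samples[200:])  # discard burn-in
print("posterior mean log-K:", post.mean(axis=0).round(2))

Rerunning with use_geo=False and comparing posterior spreads mimics the paper's data-worth question: how much the resistivity data tighten the K estimates.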
Abstract:
We present a novel scheme for the appearance of stochastic resonance when the dynamics of a Brownian particle takes place in a confined medium. The presence of uneven boundaries, giving rise to an entropic contribution to the potential, may, upon application of a periodic driving force, result in an increase of the spectral amplification at an optimum value of the ambient noise level. This entropic stochastic resonance, characteristic of small-scale systems, may constitute a useful mechanism for the manipulation and control of single molecules and nanodevices.
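As a hedged illustration of the resonance signature, the sketch below simulates an overdamped Langevin particle under a weak periodic drive and estimates the spectral amplification at the driving frequency. A generic double-well potential stands in for the paper's entropic potential, and all parameter values are illustrative.

import numpy as np

def spectral_amplification(D, omega=0.1, A=0.1, dt=0.01, steps=100_000, seed=1):
    rng = np.random.default_rng(seed)
    t = np.arange(steps) * dt
    x, traj = 0.0, np.empty(steps)
    for i in range(steps):
        # Euler-Maruyama step: bistable force, periodic drive, thermal noise
        x += (x - x**3 + A * np.cos(omega * t[i])) * dt \
             + np.sqrt(2 * D * dt) * rng.standard_normal()
        traj[i] = x
    # response power at the driving frequency, normalized by the drive power
    c = np.mean(traj * np.exp(-1j * omega * t))
    return 4 * np.abs(c)**2 / A**2

for D in (0.05, 0.15, 0.4, 1.0):
    print(f"noise D = {D:4.2f}  amplification = {spectral_amplification(D):6.2f}")
# the amplification rises and then falls with D: the stochastic-resonance signature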
Abstract:
In this paper, a hybrid simulation-based algorithm is proposed for the Stochastic Flow Shop Problem. The main idea of the methodology is to transform the stochastic problem into a deterministic problem and then apply simulation to the latter. To achieve this goal, we rely on Monte Carlo simulation and an adapted version of a deterministic heuristic. This approach aims to provide flexibility and simplicity, since it is not constrained by any prior assumption and relies on well-tested heuristics.
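A minimal sketch of this hybrid scheme, assuming lognormal processing times and using the classic NEH insertion heuristic as the deterministic component; the paper's exact heuristic and distributions may differ.

import numpy as np

rng = np.random.default_rng(2)
n_jobs, n_machines = 6, 3
mean_pt = rng.uniform(1, 10, (n_jobs, n_machines))  # hypothetical mean processing times

def makespan(p, order):
    # standard permutation flow-shop completion-time recursion
    c = np.zeros(n_machines)
    for j in order:
        c[0] += p[j, 0]
        for m in range(1, n_machines):
            c[m] = max(c[m], c[m - 1]) + p[j, m]
    return c[-1]

def neh(p):
    # NEH: order jobs by decreasing total time, insert each at its best position
    seq = []
    for j in np.argsort(-p.sum(axis=1)):
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(p, s))
    return seq

order = neh(mean_pt)  # 1) solve the deterministic instance of expected times
# 2) evaluate the resulting order under stochastic times by Monte Carlo
sims = [makespan(rng.lognormal(np.log(mean_pt), 0.2), order) for _ in range(1000)]
print("order:", order, " estimated E[makespan]:", round(float(np.mean(sims)), 2))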
Abstract:
We describe the case of a man with a history of complex partial seizures and severe language, cognitive and behavioural regression during early childhood (3.5 years), who underwent epilepsy surgery at the age of 25 years. His early epilepsy had clinical and electroencephalogram features of the syndromes of epilepsy with continuous spike waves during sleep and acquired epileptic aphasia (Landau-Kleffner syndrome), which we considered initially to be of idiopathic origin. Seizures recurred at 19 years and presurgical investigations at 25 years showed a lateral frontal epileptic focus with spread to Broca's area and the frontal orbital regions. Histopathology revealed a focal cortical dysplasia, not visible on magnetic resonance imaging. The prolonged but reversible early regression and the residual neuropsychological disorders during adulthood were probably the result of an active left frontal epilepsy, which interfered with language and behaviour during development. Our findings raise the question of the role of focal cortical dysplasia as an aetiology in the syndromes of epilepsy with continuous spike waves during sleep and acquired epileptic aphasia.
Abstract:
Low-copy-number molecules are involved in many functions in cells. The intrinsic fluctuations of these numbers can enable stochastic switching between multiple steady states, inducing phenotypic variability. Herein we present a theoretical and computational study, based on master equations and Fokker-Planck and Langevin descriptions, of stochastic switching in a genetic circuit of autoactivation. We show that in this circuit the intrinsic fluctuations arising from low copy numbers, which are inherently state-dependent, drive asymmetric switching. These theoretical results are consistent with experimental data that have been reported for the bistable system of the galactose signaling network in yeast. Our study reveals that intrinsic fluctuations, while not required to describe bistability, are fundamental to understanding stochastic switching and the dynamical relative stability of multiple states.
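The following sketch illustrates the kind of dynamics described, using a Gillespie (exact stochastic) simulation of an autoactivating gene with hypothetical Hill-type rates chosen so that the deterministic system is bistable; it is not the paper's model of the galactose network.

import numpy as np

rng = np.random.default_rng(3)

def rates(n, basal=0.5, vmax=4.0, K=30.0, h=4, gamma=0.05):
    prod = basal + vmax * n**h / (K**h + n**h)  # Hill-type autoactivation
    return prod, gamma * n                      # production, degradation

t, n, T = 0.0, 5, 4000.0
times, copies = [0.0], [n]
while t < T:
    prod, deg = rates(n)
    total = prod + deg
    t += rng.exponential(1.0 / total)           # waiting time to next event
    n += 1 if rng.uniform() < prod / total else -1
    times.append(t)
    copies.append(n)

# time-weighted fraction spent in the high state (threshold chosen between the
# deterministic fixed points at roughly n ~ 11 and n ~ 83 for these rates)
dwell = np.diff(times)
high = np.array(copies[:-1]) > 40
print("time fraction in high state:", round(float(dwell[high].sum() / dwell.sum()), 2))

With these rates the barrier below the low state is small, so up-switching is far more likely than the reverse within the run, a simple instance of the asymmetric switching discussed above.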
Abstract:
Spatial data analysis, mapping, and visualization are of great importance in various fields: environment, pollution, natural hazards and risks, epidemiology, spatial econometrics, etc. A basic task of spatial mapping is to make predictions based on some empirical data (measurements). A number of state-of-the-art methods can be used for the task: deterministic interpolations; methods of geostatistics, i.e. the family of kriging estimators (Deutsch and Journel, 1997); machine learning algorithms such as artificial neural networks (ANN) of different architectures; hybrid ANN-geostatistics models (Kanevski and Maignan, 2004; Kanevski et al., 1996); etc. All the methods mentioned above can be used for solving the problem of spatial data mapping. Environmental empirical data are always contaminated/corrupted by noise, often of unknown nature. That is one of the reasons why deterministic models can be inconsistent, since they treat the measurements as values of some unknown function that should be interpolated. Kriging estimators treat the measurements as a realization of some spatial random process. To obtain an estimate with kriging, one has to model the spatial structure of the data: the spatial correlation function or (semi-)variogram. This task can be complicated if the number of measurements is insufficient, and the variogram is sensitive to outliers and extremes. ANN is a powerful tool, but it also suffers from a number of drawbacks. ANNs of a special type, multilayer perceptrons, are often used as a detrending tool in hybrid (ANN + geostatistics) models (Kanevski and Maignan, 2004). Therefore, the development and adaptation of a method that is nonlinear, robust to noise in measurements, able to deal with small empirical datasets, and grounded in a solid mathematical background is of great importance. The present paper deals with such a model, based on Statistical Learning Theory (SLT): Support Vector Regression. SLT is a general mathematical framework devoted to the problem of estimating dependencies from empirical data (Hastie et al., 2004; Vapnik, 1998). SLT models for classification, Support Vector Machines, have shown good results on different machine learning tasks. The results of SVM classification of spatial data are also promising (Kanevski et al., 2002). The properties of SVM for regression, Support Vector Regression (SVR), are less studied. First results of the application of SVR to spatial mapping of physical quantities were obtained by the authors for mapping of medium porosity (Kanevski et al., 1999) and for mapping of radioactively contaminated territories (Kanevski and Canu, 2000). The present paper is devoted to further understanding of the properties of the SVR model for spatial data analysis and mapping. A detailed description of SVR theory can be found in (Cristianini and Shawe-Taylor, 2000; Smola, 1996), and the basic equations for nonlinear modeling are given in Section 2. Section 3 discusses the application of SVR to spatial data mapping on a real case study: soil pollution by the Cs137 radionuclide. Section 4 discusses the properties of the model applied to noisy data or data with outliers.
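As a hedged illustration of SVR for spatial mapping (Section 3 uses the actual Cs137 data, which are not reproduced here), the following sketch fits an epsilon-insensitive SVR to synthetic noisy 2-D measurements using scikit-learn.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(4)
X = rng.uniform(0, 10, (200, 2))                                  # measurement locations
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + rng.normal(0, 0.2, 200)   # noisy spatial field

# the epsilon-insensitive loss ignores residuals below epsilon, which is what
# gives SVR its robustness to measurement noise; C bounds model complexity
model = make_pipeline(StandardScaler(),
                      SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma="scale"))
model.fit(X, y)

grid = np.array([[a, b] for a in np.linspace(0, 10, 5)
                        for b in np.linspace(0, 10, 5)])
print(model.predict(grid).round(2))                               # mapped values on a grid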
Abstract:
The objective of this work was to estimate the stability and adaptability of pod and seed yield in runner peanut genotypes based on nonlinear regression and AMMI analysis. Yield data from 11 trials, distributed across six environments and three harvests, carried out in the Northeast region of Brazil during the rainy season, were used. Significant effects of genotypes (G), environments (E), and GE interactions were detected in the analysis, indicating different behaviors among genotypes under favorable and unfavorable environmental conditions. The genotypes BRS Pérola Branca and LViPE-06 are more stable and adapted to the semiarid environment, whereas LGoPE-06 is a promising material for pod production, despite being highly dependent on favorable environments.
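For readers unfamiliar with such stability analyses, the sketch below shows the simpler joint-regression (Eberhart-Russell style) idea on made-up data: each genotype's yield is regressed on an environmental index, with the slope read as adaptability and the deviation mean square as stability. The paper's nonlinear-regression and AMMI models are richer than this illustration.

import numpy as np

rng = np.random.default_rng(5)
# made-up yields: 3 genotypes x 6 environments, with an environment gradient
yields = rng.normal(2.0, 0.3, (3, 6)) + np.linspace(-0.5, 0.5, 6)

env_index = yields.mean(axis=0) - yields.mean()    # environmental index
for g, label in enumerate(["G1", "G2", "G3"]):
    b, a = np.polyfit(env_index, yields[g], 1)     # slope b = adaptability
    resid = yields[g] - (a + b * env_index)        # deviations = instability
    print(f"{label}: slope = {b:.2f}, deviation MSE = {np.mean(resid**2):.3f}")
# slope near 1 with small deviation MSE suggests a stable, widely adapted genotype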
Abstract:
This paper presents general regression neural networks (GRNN) as a nonlinear regression method for the interpolation of monthly wind speeds in complex Alpine orography. GRNN is trained using data from Swiss meteorological networks to learn the statistical relationship between topographic features and wind speed. Terrain convexity, slope, and exposure are taken into account by extracting features from the digital elevation model at different spatial scales using specialised convolution filters. A database of gridded monthly wind speeds is then constructed by applying GRNN in prediction mode over the period 1968-2008. This study demonstrates that using topographic features as inputs to GRNN significantly reduces cross-validation errors with respect to low-dimensional models integrating only geographical coordinates and terrain height for the interpolation of wind speed. The spatial predictability of wind speed is found to be lower in summer than in winter due to more complex and weaker wind-topography relationships. The relevance of these relationships is studied using an adaptive version of the GRNN algorithm, which allows selection of the useful terrain features by eliminating the noisy ones. This research provides a framework for extending low-dimensional interpolation models to high-dimensional spaces by integrating additional features that account for topographic conditions at multiple spatial scales.
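GRNN reduces to Nadaraya-Watson kernel regression: the prediction is a Gaussian-weighted average of training targets, with a single bandwidth parameter usually tuned by cross-validation. A minimal sketch follows, with toy placeholder features standing in for the multi-scale terrain descriptors.

import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    # Gaussian kernel weights from query-to-training squared distances
    d2 = ((X_query[:, None, :] - X_train[None, :, :])**2).sum(axis=2)
    w = np.exp(-d2 / (2 * sigma**2))
    return (w @ y_train) / w.sum(axis=1)  # weighted average of training targets

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 4))             # toy features: x, y, height, convexity
y = 0.5 * X[:, 2] + np.sin(X[:, 3]) + rng.normal(0, 0.1, 300)  # toy wind speed
print(grnn_predict(X, y, rng.normal(size=(5, 4))).round(2))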
Abstract:
The paper presents some contemporary approaches to spatial environmental data analysis. The main topics concentrate on decision-oriented problems of environmental spatial data mining and modeling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residuals sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms used for modeling long-range spatial trends, together with sequential simulations of the residuals. ML algorithms deliver nonlinear solutions for spatially non-stationary problems, which are difficult for the geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of ML algorithms by analyzing the quality and quantity of the spatially structured information extracted from data with ML algorithms. Sequential simulations provide an efficient assessment of uncertainty and spatial variability. A case study from the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be efficiently used in the decision-making process.
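A schematic sketch of the two-step decomposition: a neural network models the long-range trend, and the stochastic part is handled on the residuals. Sequential Gaussian simulation of the residuals is replaced here by a crude spatially local bootstrap, purely to show the structure; a real MLRSS run would use variography and sequential simulation at that step.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
X = rng.uniform(0, 10, (400, 2))                         # sample locations
y = 0.3 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0, 0.3, 400)

# step 1: a multilayer perceptron models the long-range non-linear trend
trend = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                     random_state=0).fit(X, y)
resid = y - trend.predict(X)

# step 2 (stand-in): simulate values at a query point by resampling residuals
# of its nearest neighbours, yielding multiple equally probable realizations
def simulate(query, n_real=100, k=10):
    idx = np.argsort(((X - query)**2).sum(axis=1))[:k]
    return trend.predict(query[None])[0] + rng.choice(resid[idx], size=n_real)

reals = simulate(np.array([5.0, 5.0]))
print(f"mean = {reals.mean():.2f}, "
      f"90% interval = ({np.quantile(reals, 0.05):.2f}, {np.quantile(reals, 0.95):.2f})")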
Abstract:
Decisions taken in modern organizations are often multi-dimensional, involving multiple decision makers and several criteria measured on different scales. Multiple Criteria Decision Making (MCDM) methods are designed to analyze and to give recommendations in such situations. Among the numerous MCDM methods, two large families are the multi-attribute utility theory based methods and the outranking methods. Traditionally, both method families require exact values for technical parameters and criteria measurements, as well as for preferences expressed as weights. Often it is hard, if not impossible, to obtain exact values. Stochastic Multicriteria Acceptability Analysis (SMAA) is a family of methods designed to help in situations where exact values are not available. Different variants of SMAA allow handling all types of MCDM problems. They support defining the model through uncertain, imprecise, or completely missing values. The methods are based on simulation, which is applied to obtain descriptive indices characterizing the problem. In this thesis we present new advances in the SMAA methodology. We present and analyze algorithms for the SMAA-2 method and its extension to handle ordinal preferences. We then present an application of SMAA-2 to an area where MCDM models have not been applied before: planning elevator groups for high-rise buildings. Following this, we introduce two new methods to the family: SMAA-TRI, which extends ELECTRE TRI for sorting problems with uncertain parameter values, and SMAA-III, which extends ELECTRE III in a similar way. Efficient software implementing these two methods has been developed in conjunction with this work and is briefly presented in this thesis. The thesis closes with a comprehensive survey of the SMAA methodology, including the definition of a unified framework.
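A toy sketch of the core SMAA computation: Monte Carlo sampling of uniformly distributed weights and interval-valued criteria measurements, counting how often each alternative attains each rank (the rank acceptability indices). The additive value model and the data are illustrative; the SMAA variants discussed above differ in the preference model used.

import numpy as np

rng = np.random.default_rng(8)
# 3 alternatives x 2 criteria, measurements known only as intervals [lo, hi]
lo = np.array([[0.6, 0.3], [0.4, 0.7], [0.5, 0.5]])
hi = lo + 0.2
n_alt, n_crit = lo.shape

counts = np.zeros((n_alt, n_alt))            # rank acceptability counts
for _ in range(10_000):
    w = rng.dirichlet(np.ones(n_crit))       # uniform weights on the simplex
    x = rng.uniform(lo, hi)                  # sample imprecise measurements
    u = x @ w                                # additive utility of each alternative
    for rank, alt in enumerate(np.argsort(-u)):
        counts[alt, rank] += 1

print("rank acceptability indices (rows = alternatives, cols = ranks):")
print((counts / 10_000).round(2))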