15 results for Statistical parameters
Abstract:
A unique property of body area networks (BANs) is the mobility of the network as the user moves freely around. This mobility represents a significant challenge for BANs since, in order to operate efficiently, they need to be able to adapt to the changing propagation environment. A method is presented that allows BAN nodes to classify the current operating environment in terms of multipath conditions, based on received signal strength indicator (RSSI) values gathered during normal packet transmissions. A controlled set of measurements was carried out to study the effect that different environments have on on-body link signal strength in a 2.45 GHz BAN. The analysis shows that, by using two statistical parameters gathered over a period of one second, BAN nodes can successfully classify the operating environment over 90% of the time.
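A minimal sketch of the kind of RSSI-window classifier the abstract describes. The two statistical parameters are not named in the abstract; the window mean and standard deviation are assumed here, and the per-environment centroid values are hypothetical:

```python
import statistics

def window_features(rssi_dbm):
    """Two summary statistics of a 1-second window of RSSI samples (dBm)."""
    return (statistics.mean(rssi_dbm), statistics.pstdev(rssi_dbm))

def classify(features, centroids):
    """Nearest-centroid classification in the 2-D (mean, std) feature space."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda env: dist2(features, centroids[env]))

# Hypothetical centroids, learned from a controlled measurement campaign.
centroids = {
    "open_field":  (-70.0, 1.5),   # little multipath: stable RSSI
    "indoor_rich": (-75.0, 6.0),   # rich multipath: large fluctuations
}

window = [-74, -71, -77, -80, -69, -73, -78, -70]  # 1 s of RSSI readings
print(classify(window_features(window), centroids))
```

In practice the centroids would be estimated from training measurements in each known environment, as in the controlled experiments the abstract mentions.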
Abstract:
This paper investigated the problem of confined flow under dams and water-retaining structures using stochastic modelling. The approach advocated in the study combined a finite element method, based on the equation governing the dynamics of incompressible fluid flow through a porous medium, with a random field generator that generates random hydraulic conductivity based on a lognormal probability distribution. The resulting model was then used to analyse confined flow under a hydraulic structure. Cases in which the structure was provided with a cutoff wall and in which the wall did not exist were both tested. Various statistical parameters that reflected different degrees of heterogeneity were examined, and the changes in the mean seepage flow, the mean uplift force and the mean exit gradient observed under the structure were analysed. Results reveal that under heterogeneous conditions, the reduction made by the sheetpile in the uplift force and exit hydraulic gradient may be underestimated when deterministic solutions are used.
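The random-field side of such an approach can be sketched as follows. The mean conductivity and coefficient of variation below are illustrative, and the paper's generator may additionally impose spatial correlation between cells, which this sketch omits:

```python
import numpy as np

def lognormal_k_field(shape, mean_k, cov_k, seed=0):
    """Generate a lognormal hydraulic-conductivity field with a prescribed
    arithmetic mean and coefficient of variation (CoV). Cells are kept
    statistically independent here for brevity."""
    rng = np.random.default_rng(seed)
    # Convert arithmetic mean/CoV to the mean and sigma of ln(K).
    sigma2 = np.log(1.0 + cov_k ** 2)
    mu = np.log(mean_k) - 0.5 * sigma2
    return rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=shape)

# One realisation for a 50x80 finite-element mesh: mean K = 1e-5 m/s, CoV = 0.5.
K = lognormal_k_field((50, 80), mean_k=1e-5, cov_k=0.5, seed=42)
print(K.mean())  # close to 1e-5 when averaged over many cells
```

Each realisation would be fed to the finite element seepage solver, and the mean seepage flow, uplift force and exit gradient averaged over many realisations.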
Abstract:
This paper reports a study carried out to develop a self-compacting fibre-reinforced concrete with a high fibre content, namely slurry infiltrated fibre concrete (SIFCON). The SIFCON was developed with 10% of steel fibres, which are infiltrated by a self-compacting cement slurry without any vibration. Traditionally, the infiltration of the slurry into the layer of fibres is carried out under intensive vibration. A two-level fractional factorial design was used to optimise the properties of cement-based slurries with four independent variables: dosage of silica fume, dosage of superplasticiser (SP), sand content, and water/cement ratio (W/C). A rheometer, the mini-slump test, the Lombardi plate cohesion meter, the J-fibre penetration test, and induced bleeding were used to assess the behaviour of fresh cement slurries. The compressive strengths at 7 and 28 days were also measured. The statistical models are valid for slurries made with a W/C of 0.40 to 0.50, 50 to 100% of sand by mass of cement, 5 to 10% of silica fume by mass of cement, and an SP dosage of 0.6 to 1.2% by mass of cement. These models make it possible to evaluate the effect of individual variables on the measured parameters of fresh cement slurries. The proposed models offered useful information to understand trade-offs between mix variables and to compare the responses obtained from various test methods in order to optimise self-compacting SIFCON.
Abstract:
Self-compacting concrete (SCC) is generally designed with a relatively higher content of fines, including cement, and a higher dosage of superplasticizer than conventional concrete. The design of current SCC leads to high compressive strength, which is already exploited in special applications where the high cost of materials can be tolerated. Using SCC, which eliminates the need for vibration, leads to increased speed of casting and thus reduces labour requirements, energy consumption, construction time, and cost of equipment. In order to gain maximum benefit from SCC, it has to be used for wider applications. The cost of materials can be decreased by reducing the cement content and using a minimum amount of admixtures. This paper reviews statistical models obtained from a factorial design carried out to determine the influence of four key parameters on filling ability, passing ability, segregation and compressive strength. These parameters are important for the successful development of medium-strength self-compacting concrete (MS-SCC). The parameters considered in the study were the contents of cement and pulverised fuel ash (PFA), the water-to-powder ratio (W/P), and the dosage of superplasticizer (SP). The responses of the derived statistical models are slump flow, fluidity loss, rheological parameters, Orimet time, V-funnel time, L-box, J-Ring combined with Orimet, J-Ring combined with cone, fresh segregation, and compressive strength at 7, 28 and 90 days. The models are valid for mixes made with 0.38 to 0.72 W/P ratio, 60 to 216 kg/m3 of cement, 183 to 317 kg/m3 of PFA and 0 to 1% of SP, by mass of powder. The utility of such models to optimize concrete mixes to achieve a good balance between filling ability, passing ability, segregation, compressive strength, and cost is discussed.
Examples highlighting the usefulness of the models are presented using isoresponse surfaces to demonstrate single and coupled effects of mix parameters on slump flow, loss of fluidity, flow resistance, segregation, J-Ring combined with Orimet, and compressive strength at 7 and 28 days. A cost analysis is carried out to show trade-offs between the cost of materials and the specified consistency levels and compressive strengths at 7 and 28 days, which can be used to identify economic mixes. The paper establishes the usefulness of the mathematical models as a tool to facilitate the test protocol required to optimise medium-strength SCC.
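The factorial-design response models described in this and the previous abstract can be illustrated with a small sketch: a two-level design in coded (-1/+1) variables fitted by least squares, with main effects and two-factor interactions. The factor names and the synthetic slump-flow response below are hypothetical, chosen only to show that a full factorial recovers the coefficients exactly:

```python
import itertools
import numpy as np

def factorial_model(X, y):
    """Fit y = b0 + sum(bi*xi) + sum(bij*xi*xj) to coded (-1/+1) factors."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in itertools.combinations(range(k), 2)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Hypothetical 2^3 design: coded W/P ratio, cement content, SP dosage.
X = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
# Synthetic slump-flow response (mm) with one known interaction, for illustration.
y = 650 + 40 * X[:, 0] - 25 * X[:, 1] + 60 * X[:, 2] + 10 * X[:, 0] * X[:, 2]
coef = factorial_model(X, y)
print(np.round(coef, 6))  # intercept, 3 main effects, 3 interaction terms
```

Because the two-level factorial columns are orthogonal, the least-squares fit separates each main effect and interaction cleanly, which is what makes the isoresponse-surface analysis above possible.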
Abstract:
There is an increasing need to identify the rheological properties of cement grout using a simple test to determine the fluidity, as well as other properties relevant to underwater applications such as washout resistance and compressive strength. This paper reviews statistical models developed using a factorial design carried out to model the influence of key parameters on properties affecting the performance of underwater cement grout. The responses of fluidity included mini-slump and flow time measured by Marsh cone, along with washout resistance, unit weight, and compressive strength. The models are valid for mixes with 0.35–0.55 water-to-binder ratio (W/B), 0.053–0.141% of antiwashout admixture (AWA), by mass of water, and 0.4–1.8% (dry extract) of superplasticizer (SP), by mass of binder. Two types of underwater grout were tested: the first made with cement only and the second made with 20% pulverised fuel ash (PFA) replacement, by mass of binder. Also presented are the derived models that enable the identification of the underlying primary factors, and their interactions, that influence the modelled responses of underwater cement grout. Such parameters can be useful to reduce the test protocol needed for the proportioning of underwater cement grout. This paper also attempts to demonstrate the usefulness of the models to better understand trade-offs between parameters and to compare the responses obtained from the various test methods highlighted.
Abstract:
Aiming to establish a rigorous link between macroscopic random motion (described e.g. by Langevin-type theories) and microscopic dynamics, we have undertaken a kinetic-theoretical study of the dynamics of a classical test particle weakly coupled to a large heat bath in thermal equilibrium. Both subsystems are subject to an external force field. From the (time-non-local) generalized master equation, a Fokker-Planck-type equation follows as a "quasi-Markovian" approximation. The kinetic operator thus defined is shown to be ill-defined; in particular, it does not preserve the positivity of the test-particle distribution function f(x, v; t). An alternative approach, previously introduced for quantum open systems, is adopted and shown to lead to a correct kinetic operator, which yields all the expected properties. A set of explicit expressions for the diffusion and drift coefficients is obtained, allowing for the modelling of macroscopic diffusion and dynamical friction phenomena in terms of the external field and intrinsic physical parameters.
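The macroscopic limit that such a kinetic derivation targets can be checked numerically: an Euler-Maruyama integration of a Langevin equation whose drift and diffusion coefficients stand in for those derived in the paper. The numerical values below are illustrative only:

```python
import numpy as np

def langevin_trajectories(n, steps, dt, drift, D, seed=0):
    """Euler-Maruyama integration of dx = drift*dt + sqrt(2*D*dt)*xi,
    the macroscopic (Fokker-Planck) limit of the kinetic description:
    the ensemble mean grows as drift*t and the variance as 2*D*t."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for _ in range(steps):
        x += drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n)
    return x

# Illustrative values: drift induced by an external field, diffusion D.
x = langevin_trajectories(n=50_000, steps=200, dt=0.01, drift=0.5, D=1.0, seed=3)
t = 200 * 0.01
print(x.mean(), x.var())  # expectations: drift*t = 1.0 and 2*D*t = 4.0
```

A kinetic operator that failed to preserve positivity of f(x, v; t) would have no such well-behaved stochastic representation, which is why the positivity property matters.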
Abstract:
Summary: We present a new R package, diveRsity, for the calculation of various diversity statistics, including common diversity partitioning statistics (?, G) and population differentiation statistics (D, GST′, ? test for population heterogeneity), among others. The package calculates these estimators along with their respective bootstrapped confidence intervals for loci, sample population pairwise and global levels. Various plotting tools are also provided for a visual evaluation of estimated values, allowing users to critically assess the validity and significance of statistical tests from a biological perspective. diveRsity has a set of unique features, which facilitate the use of an informed framework for assessing the validity of the use of traditional F-statistics for the inference of demography, with reference to specific marker types, particularly focusing on highly polymorphic microsatellite loci. However, the package can be readily used for other co-dominant marker types (e.g. allozymes, SNPs). Detailed examples of usage and descriptions of package capabilities are provided. The examples demonstrate useful strategies for the exploration of data and interpretation of results generated by diveRsity. Additional online resources for the package are also described, including a GUI web app version intended for those with more limited experience using R for statistical analysis. © 2013 British Ecological Society.
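diveRsity itself is an R package; as a language-neutral illustration of the bootstrapped confidence intervals it reports, the sketch below computes a percentile bootstrap CI for one basic diversity statistic, expected heterozygosity, at a single hypothetical locus:

```python
import random

def expected_heterozygosity(alleles):
    """He = 1 - sum(p_i^2): a basic per-locus diversity statistic."""
    n = len(alleles)
    freqs = [alleles.count(a) / n for a in set(alleles)]
    return 1.0 - sum(p * p for p in freqs)

def bootstrap_ci(alleles, stat, reps=2000, alpha=0.05, seed=7):
    """Percentile bootstrap CI, resampling alleles with replacement."""
    rng = random.Random(seed)
    vals = sorted(
        stat([rng.choice(alleles) for _ in alleles]) for _ in range(reps)
    )
    lo = vals[int(alpha / 2 * reps)]
    hi = vals[int((1 - alpha / 2) * reps) - 1]
    return lo, hi

locus = ["A"] * 40 + ["B"] * 35 + ["C"] * 25  # hypothetical allele sample
he = expected_heterozygosity(locus)
lo, hi = bootstrap_ci(locus, expected_heterozygosity)
print(round(he, 3), (round(lo, 3), round(hi, 3)))
```

The package applies the same bootstrap idea across loci, population pairs and global levels, and to more elaborate estimators than the one sketched here.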
Abstract:
This paper investigated the influence of three micro-electrodischarge milling process parameters: feed rate, capacitance, and voltage. The response variables were average surface roughness (Ra), maximum peak-to-valley roughness height (Ry), tool wear ratio (TWR), and material removal rate (MRR). Statistical models of these output responses were developed using a three-level full factorial design of experiments. The developed models were used for multiple-response optimization by the desirability function approach to obtain minimum Ra, Ry and TWR, and maximum MRR. The maximum desirability was found to be 88%. The optimized values of Ra, Ry, TWR, and MRR were 0.04 μm, 0.34 μm, 0.044, and 0.08 mg min⁻¹, respectively, for a feed rate of 4.79 μm s⁻¹, a capacitance of 0.1 nF, and a voltage of 80 V. The optimized machining parameters were used in verification experiments, where the responses were found to be very close to the predicted values.
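The desirability-function step can be sketched as follows, using Derringer-Suich linear desirabilities and their geometric mean as the composite score. The acceptability ranges below are hypothetical; the response values are those reported above:

```python
def desirability_smaller(y, lo, hi):
    """'Smaller-is-better' desirability: 1 at/below lo, 0 at/above hi."""
    if y <= lo: return 1.0
    if y >= hi: return 0.0
    return (hi - y) / (hi - lo)

def desirability_larger(y, lo, hi):
    """'Larger-is-better' desirability: 0 at/below lo, 1 at/above hi."""
    if y >= hi: return 1.0
    if y <= lo: return 0.0
    return (y - lo) / (hi - lo)

def composite(ds):
    """Overall desirability: geometric mean of individual desirabilities."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Reported optimal responses; the [lo, hi] acceptability ranges are hypothetical.
ds = [
    desirability_smaller(0.04, 0.02, 0.10),   # Ra (μm), minimise
    desirability_smaller(0.34, 0.20, 1.00),   # Ry (μm), minimise
    desirability_smaller(0.044, 0.02, 0.20),  # TWR, minimise
    desirability_larger(0.08, 0.01, 0.10),    # MRR (mg/min), maximise
]
print(round(composite(ds), 3))
```

An optimiser would search the (feed rate, capacitance, voltage) space for the setting maximising this composite score, which is how a figure like the 88% maximum desirability arises.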
Abstract:
Hidden Markov models (HMMs) are widely used models for sequential data. As with other probabilistic graphical models, they require the specification of precise probability values, which can be too restrictive for some domains, especially when data are scarce or costly to acquire. We present a generalized version of HMMs, whose quantification can be done by sets of, instead of single, probability distributions. Our models have the ability to suspend judgment when there is not enough statistical evidence, and can serve as a sensitivity analysis tool for standard non-stationary HMMs. Efficient inference algorithms are developed to address standard HMM usage such as the computation of likelihoods and most probable explanations. Experiments with real data show that the use of imprecise probabilities leads to more reliable inferences without compromising efficiency.
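The precise-HMM computation that the imprecise generalization extends, the likelihood of an observation sequence, reduces to the standard forward algorithm. A minimal sketch with a hypothetical two-state, two-symbol model:

```python
def hmm_likelihood(pi, A, B, obs):
    """Forward algorithm: P(obs) for an HMM with initial distribution pi,
    transition matrix A and emission matrix B (rows = hidden states)."""
    alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]
    for o in obs[1:]:
        alpha = [
            sum(alpha[s] * A[s][t] for s in range(len(pi))) * B[t][o]
            for t in range(len(pi))
        ]
    return sum(alpha)

# Hypothetical 2-state, 2-symbol HMM for illustration.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
print(hmm_likelihood(pi, A, B, [0, 1, 0]))
```

The imprecise variant replaces each of pi, A and B with a set of distributions and propagates lower and upper bounds on this quantity instead of a single value.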
Abstract:
We present a study of the absolute magnitude (H) and slope parameter (G) of 170,000 asteroids observed by the Pan-STARRS1 telescope during a period of 15 months within its 3-year all-sky survey mission. The exquisite photometry, with photometric errors below 0.04 mag, and the well-defined filter and photometric system allowed us to derive H and G with statistical and systematic errors. Our new approach lies in a Monte Carlo technique that simulates rotation periods, amplitudes, and colors, and derives the most-likely H and G and their systematic errors. Comparison of H_M, derived by Muinonen's phase function (Muinonen et al., 2010), with the Minor Planet Center database revealed a negative offset of 0.22±0.29 mag, meaning that Pan-STARRS1 asteroids are fainter. We showed that the absolute magnitude derived by Muinonen's function is systematically larger than Bowell's absolute magnitude (Bowell et al., 1989), on average by 0.14±0.29, and by 0.30±0.16 when assuming a fixed slope parameter (G=0.15, G_{12}=0.53). We also derived slope parameters of asteroids of known spectral types and showed good agreement with previous studies within the derived uncertainties. However, our systematic errors on G and G_{12} are significantly larger than in previous work, which is caused by the poor temporal and phase coverage of the vast majority of the detected asteroids. This disadvantage will vanish when the full survey data become available and the ongoing extended and enhanced mission provides new data.
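The Bowell et al. (1989) H-G phase function mentioned above can be written down directly, and a toy fit of H and G to noise-free synthetic observations illustrates the basic inverse problem (a simple grid search, not the paper's Monte Carlo treatment of rotation, amplitude and colour):

```python
import math

def hg_reduced_mag(alpha_deg, H, G):
    """Bowell et al. (1989) H-G phase function (two-parameter approximation):
    V(alpha) = H - 2.5*log10((1-G)*Phi1(alpha) + G*Phi2(alpha))."""
    a = math.radians(alpha_deg)
    phi1 = math.exp(-3.33 * math.tan(a / 2) ** 0.63)
    phi2 = math.exp(-1.87 * math.tan(a / 2) ** 1.22)
    return H - 2.5 * math.log10((1 - G) * phi1 + G * phi2)

def fit_hg(alphas, mags, g_grid=None):
    """Grid-search G; for each G, the best-fit H is the mean offset
    between data and the H=0 model curve."""
    g_grid = g_grid or [i / 100 for i in range(-10, 81)]
    best = None
    for G in g_grid:
        model0 = [hg_reduced_mag(a, 0.0, G) for a in alphas]
        H = sum(m - m0 for m, m0 in zip(mags, model0)) / len(mags)
        sse = sum((m - (m0 + H)) ** 2 for m, m0 in zip(mags, model0))
        if best is None or sse < best[0]:
            best = (sse, H, G)
    return best[1], best[2]

alphas = [2, 5, 10, 15, 20, 25]                         # phase angles (deg)
mags = [hg_reduced_mag(a, 15.3, 0.15) for a in alphas]  # noise-free synthetic data
print(fit_hg(alphas, mags))  # recovers H = 15.3, G = 0.15
```

With real sparse photometry the residuals are dominated by unknown rotation phase and colour terms, which is exactly what the paper's Monte Carlo simulation layers on top of this fit.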
Abstract:
We present the results of a Monte Carlo technique used to calculate the absolute magnitudes (H) and slope parameters (G) of about 240,000 asteroids observed by the Pan-STARRS1 telescope during the first 15 months of its 3-year all-sky survey mission. The technique combines the system's exquisite photometry, with photometric errors below 0.04 mag, with a simulation of each asteroid's rotation period, amplitude and color to derive the most-likely H and G, but its major advantage is in estimating realistic statistical and systematic uncertainties and errors on each parameter. The method was confirmed by comparison with the well-established and accurate results for about 500 asteroids provided by Pravec et al. (2012) and then applied to determining H and G for the Pan-STARRS1 asteroids using both the Muinonen et al. (2010) and Bowell et al. (1989) phase functions. Our results confirm the bias in MPC photometry discovered by Jurić et al. (2002).
Abstract:
In this paper, we study the achievable ergodic sum-rate of multiuser multiple-input multiple-output downlink systems in Rician fading channels. We first derive a lower bound on the average signal-to-leakage-and-noise ratio by using Mullen's inequality, and then use it to analyze the effect of channel mean information on the achievable ergodic sum-rate. A novel statistical-eigenmode space-division multiple-access (SE-SDMA) downlink transmission scheme is then proposed. For this scheme, we derive an exact analytical closed-form expression for the achievable ergodic rate and present tractable tight upper and lower bounds. Based on our analysis, we gain valuable insights into the effects of system parameters, such as the number of transmit antennas, the signal-to-noise ratio (SNR) and the Rician K-factor, on the system sum-rate. Results show that the sum-rate converges to a saturation value in the high-SNR regime and tends to a lower limit in the low Rician K-factor case. In addition, we compare the achievable ergodic sum-rate between SE-SDMA and zero-forcing beamforming with perfect channel state information at the base station. Our results reveal that the rate gap tends to zero in the high Rician K-factor regime. Finally, numerical results are presented to validate our analysis.
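This is not the paper's SE-SDMA derivation, but a Monte Carlo sketch of the Rician channel model it builds on: a deterministic mean (LOS) component and Rayleigh scattering weighted by the K-factor, with an ergodic rate estimated for an illustrative single-user matched-filter link:

```python
import numpy as np

def rician_channel(nt, K, n_samples, seed=0):
    """Draw i.i.d. Rician-fading channel vectors: a deterministic LOS part
    plus Rayleigh scattering, weighted by the Rician K-factor so that the
    total channel gain has unit mean."""
    rng = np.random.default_rng(seed)
    los = np.ones(nt) / np.sqrt(nt)  # unit-norm mean (LOS) component
    nlos = (rng.standard_normal((n_samples, nt)) +
            1j * rng.standard_normal((n_samples, nt))) / np.sqrt(2 * nt)
    return np.sqrt(K / (K + 1)) * los + np.sqrt(1 / (K + 1)) * nlos

def ergodic_rate(h, snr):
    """Monte Carlo estimate of E[log2(1 + SNR*||h||^2)]."""
    gain = np.sum(np.abs(h) ** 2, axis=1)
    return np.mean(np.log2(1 + snr * gain))

h = rician_channel(nt=4, K=10.0, n_samples=20_000, seed=1)
print(ergodic_rate(h, snr=10.0))
```

As K grows, the channel hardens around its mean and the rate concentrates, which is consistent with the abstract's observation that the gap to perfect-CSI zero-forcing closes in the high-K regime.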
Abstract:
Hidden Markov models (HMMs) are widely used probabilistic models of sequential data. As with other probabilistic models, they require the specification of local conditional probability distributions, whose assessment can be too difficult and error-prone, especially when data are scarce or costly to acquire. The imprecise HMM (iHMM) generalizes HMMs by allowing the quantification to be done by sets of, instead of single, probability distributions. iHMMs have the ability to suspend judgment when there is not enough statistical evidence, and can serve as a sensitivity analysis tool for standard non-stationary HMMs. In this paper, we consider iHMMs under the strong independence interpretation, for which we develop efficient inference algorithms to address standard HMM usage such as the computation of likelihoods and most probable explanations, as well as performing filtering and predictive inference. Experiments with real data show that iHMMs produce more reliable inferences without compromising the computational efficiency.
Abstract:
This paper investigates the characteristics of the shadowed fading observed in off-body communications channels at 5.8 GHz. This is realized with the aid of the κ-μ / gamma composite fading model, which assumes that the transmitted signal undergoes κ-μ fading subject to multiplicative shadowing. Based on this, the total power of the multipath components, including both the dominant and scattered components, is subject to non-negligible variations that follow the gamma distribution. For this model, we present an integral form of the probability density function (PDF) as well as important analytic expressions for the PDF, cumulative distribution function, moments and moment generating function. In the case of indoor off-body communications, the corresponding measurements were carried out in the context of four individual scenarios, namely: line of sight (LOS) and non-LOS (NLOS) walking, rotational and random movements. The measurements were repeated within three different indoor environments and considered three different hypothetical body-worn node locations. With the aid of these results, the parameters for the κ-μ / gamma composite fading model were estimated and analyzed extensively. Interestingly, for the majority of the indoor environments and movement scenarios, the parameter estimates suggested that dominant signal components existed even when the direct signal path was obscured by the test subject's body. Additionally, it is shown that the κ-μ / gamma composite fading model provides an adequate fit to the fading effects involved in off-body communications channels. Using the Kullback-Leibler divergence, we have also compared our results with another recently proposed shadowed fading model, namely the κ-μ / lognormal LOS shadowed fading model. It was found that the κ-μ / gamma composite fading model provided a better fit for the majority of the scenarios considered in this study.
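A simulation sketch of the composite model's multiplicative structure, using the μ = 1 (Rician) special case of κ-μ fading and unit-mean gamma shadowing; all parameter values are illustrative:

```python
import numpy as np

def kappa_mu_gamma_power(kappa, m_shadow, n, seed=0):
    """Sample received powers for multiplicative gamma shadowing of a
    kappa-mu faded signal, here with mu = 1 (the Rician special case).
    The faded power is normalised to unit mean and the gamma shadowing
    has shape m_shadow and unit mean, so the composite power also has
    unit mean by construction."""
    rng = np.random.default_rng(seed)
    # Rician envelope with K-factor 'kappa' and mean power 1.
    s = np.sqrt(kappa / (kappa + 1))            # dominant-component amplitude
    sigma = np.sqrt(1 / (2 * (kappa + 1)))      # per-quadrature scatter std
    i = s + sigma * rng.standard_normal(n)
    q = sigma * rng.standard_normal(n)
    fading_power = i ** 2 + q ** 2
    # Unit-mean gamma shadowing of the total multipath power.
    shadow = rng.gamma(shape=m_shadow, scale=1.0 / m_shadow, size=n)
    return shadow * fading_power

p = kappa_mu_gamma_power(kappa=5.0, m_shadow=2.0, n=100_000, seed=9)
print(p.mean())  # close to 1 by construction
```

Fitting the model to measured data would amount to estimating κ, μ and the gamma shape from samples like these, as done per scenario and environment in the study.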