262 results for randomness
Abstract:
Most cities in India have undergone rapid development in recent decades, and many rural localities are being transformed into urban hotspots. These developments bring land use/land cover (LULC) changes that affect the runoff response of catchments, often evident as increases in runoff peak, volume, and velocity in the drain network. Most existing storm water drains are in a dilapidated state owing to improper maintenance or inadequate design. The drains are conventionally designed using procedures based on anticipated future conditions. Further, the values of parameters/variables associated with the design of the network are traditionally considered deterministic. In reality, however, these parameters/variables carry uncertainty due to natural and/or inherent randomness, and this uncertainty needs to be considered in designing a storm water drain network that can effectively convey the discharge. The present study evaluates the performance of an existing storm water drain network in Bangalore, India, through reliability analysis by the Advanced First Order Second Moment (AFOSM) method. In the reliability analysis, the roughness coefficient, slope, and conduit dimensions are treated as random variables. Performance of the existing network is evaluated under three failure modes: the first occurs when runoff exceeds the capacity of the storm water drain network; the second when the actual flow velocity in the network exceeds the maximum allowable velocity for erosion control; and the third when the flow velocity falls below the minimum allowable velocity for deposition control. In the analysis, runoff generated from subcatchments of the study area and flow velocities in the storm water drains are estimated using the Storm Water Management Model (SWMM). Results from the study are presented and discussed.
The reliability values are low under the three failure modes, indicating a need to redesign several of the conduits to improve their reliability. This study finds use in devising plans for expansion of the Bangalore storm water drain system. (C) 2015 The Authors. Published by Elsevier B.V.
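As a concrete illustration of the first failure mode, the sketch below estimates conduit reliability by simple Monte Carlo sampling rather than by AFOSM; the rectangular conduit geometry, the Manning roughness and slope statistics, and the design runoff are all hypothetical values, not figures from the study.

```python
import math
import random

def manning_capacity(n, S, b=2.0, y=1.5):
    """Full-flow capacity (m^3/s) of a rectangular conduit from Manning's
    equation, Q = (1/n) * A * R^(2/3) * sqrt(S)."""
    A = b * y               # flow area
    P = b + 2.0 * y         # wetted perimeter
    R = A / P               # hydraulic radius
    return (1.0 / n) * A * R ** (2.0 / 3.0) * math.sqrt(S)

def reliability_capacity_mode(Q_runoff, trials=20_000, seed=1):
    """P(capacity >= runoff): survival probability under the first
    failure mode, with random roughness and slope (illustrative stats)."""
    rng = random.Random(seed)
    safe = 0
    for _ in range(trials):
        n = rng.gauss(0.015, 0.0015)   # Manning roughness coefficient
        S = rng.gauss(0.002, 0.0004)   # conduit slope
        if n > 0 and S > 0 and manning_capacity(n, S) >= Q_runoff:
            safe += 1
    return safe / trials

print(reliability_capacity_mode(Q_runoff=8.0))
```

With these illustrative numbers the mean capacity (about 6.4 m^3/s) falls below the 8 m^3/s demand, so the computed reliability is low, mirroring the qualitative finding of the study.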
Abstract:
Starting with a micropolar formulation, known to account for nonlocal microstructural effects at the continuum level, a generalized Langevin equation (GLE) for a particle, describing the predominant motion of a localized region through a single displacement degree of freedom, is derived. The GLE features a memory-dependent multiplicative or internal noise, which appears upon recognizing that the microrotation variables possess randomness owing to an uncertainty principle. Unlike its classical version, the present GLE qualitatively reproduces the experimentally measured fluctuations in the steady-state mean square displacement of scattering centers in a polyvinyl alcohol slab. The origin of the fluctuations is traced to nonlocal spatial interactions within the continuum, a phenomenon that is ubiquitous across a broad class of response regimes in solids and fluids. This renders the proposed GLE a potentially useful model in such cases.
Abstract:
The information-theoretic approach to security entails harnessing the correlated randomness available in nature to establish security. It uses tools from information theory and coding and yields provable security, even against an adversary with unbounded computational power. However, the feasibility of this approach in practice depends on the development of efficiently implementable schemes. In this paper, we review a special class of practical schemes for information-theoretic security that are based on 2-universal hash families. Specific cases of secret key agreement and wiretap coding are considered, and general themes are identified. The scheme presented for wiretap coding is modular and can be implemented easily by including an extra preprocessing layer over the existing transmission codes.
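As a minimal sketch of the 2-universal hashing primitive these schemes are built on, the snippet below draws a hash from the Carter-Wegman family over a prime field and applies it for privacy amplification; the prime, output size, and "shared secret" value are illustrative, and a real scheme would operate on long bit strings with output length set by the leakage bounds.

```python
import random

# Carter-Wegman 2-universal family over a prime field:
# h_{a,b}(x) = ((a*x + b) mod p) mod m, with a in [1, p-1], b in [0, p-1].
P = (1 << 61) - 1  # a Mersenne prime larger than any input used here

def draw_hash(m, rng):
    """Draw one hash function from the family using public randomness."""
    a = rng.randrange(1, P)
    b = rng.randrange(0, P)
    return lambda x: ((a * x + b) % P) % m

# Privacy amplification: both parties apply the same publicly chosen hash
# to their shared, partially secret string to distill a shorter key about
# which the adversary has essentially no information.
rng = random.Random(7)
h = draw_hash(m=2 ** 16, rng=rng)
shared_secret = 0xDEADBEEFCAFE   # illustrative shared value
key_alice = h(shared_secret)
key_bob = h(shared_secret)       # identical hash choice on both sides
print(key_alice == key_bob, 0 <= key_alice < 2 ** 16)  # → True True
```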
Abstract:
Composite materials are very useful in structural engineering, particularly in weight-sensitive applications. Two test models of the same structure made from composite materials can display very different dynamic behavior due to the large uncertainties associated with composite material properties. Composite structures can also suffer from pre-existing imperfections such as delaminations, voids, or cracks introduced during fabrication. In this paper, we show that modeling and material uncertainties in composite structures can cause considerable problems in damage assessment. A recently developed C0 shear-deformable, locking-free refined composite plate element is employed in the numerical simulations to alleviate modeling uncertainty. A qualitative estimate of the impact of modeling uncertainty on the damage detection problem is made. A robust Fuzzy Logic System (FLS) with a sliding-window defuzzifier is used for delamination damage detection in composite plate-type structures. The FLS is designed using variations in modal frequencies due to randomness in material properties. Probabilistic analysis is performed using Monte Carlo Simulation (MCS) on a composite plate finite element model. It is demonstrated that the FLS shows excellent robustness in delamination detection at very high levels of randomness in the input data. (C) 2016 Elsevier Ltd. All rights reserved.
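The kind of training data such an FLS uses can be sketched with a Monte Carlo loop over a randomized material property; the single beam-like frequency formula below is a stand-in for the composite plate FE model, and every numerical value is illustrative.

```python
import math
import random

def natural_freq(E, t=0.002, rho=1600.0, L=0.3):
    """Fundamental frequency (Hz) of a simple cantilever surrogate;
    f scales as sqrt(E), which is the behavior that matters here."""
    I_per_b = t ** 3 / 12.0            # second moment per unit width
    A_per_b = t                        # area per unit width
    return (1.875 ** 2 / (2.0 * math.pi * L ** 2)) * math.sqrt(
        E * I_per_b / (rho * A_per_b))

def mcs_frequency_stats(E_mean=70e9, cov=0.10, trials=20_000, seed=3):
    """Monte Carlo samples of the fundamental frequency under randomness
    in the elastic modulus; returns (mean, standard deviation)."""
    rng = random.Random(seed)
    f = [natural_freq(rng.gauss(E_mean, cov * E_mean)) for _ in range(trials)]
    mean = sum(f) / len(f)
    var = sum((x - mean) ** 2 for x in f) / (len(f) - 1)
    return mean, math.sqrt(var)

mean_f, std_f = mcs_frequency_stats()
print(round(std_f / mean_f, 2))  # → 0.05: frequency CoV is about half the stiffness CoV
```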
Abstract:
A three-phase confocal elliptical cylinder model is proposed for fiber-reinforced composites, in terms of which a generalized self-consistent method is developed that accounts for variations in fiber section shape and randomness in fiber section orientation. The reasonableness of the fiber distribution function in the present model is shown. The dilute, self-consistent, differential, and Mori-Tanaka methods are also extended to consider randomness in fiber section orientation in a statistical sense. A full comparison is made between the various micromechanics methods and with the Hashin-Shtrikman bounds. The present method provides convergent and reasonable results for a full range of variations in fiber section shape (from circular fibers to ribbons), for the complete spectrum of fiber volume fraction (from 0 to 1, where the latter limit shows the correct asymptotic behavior in the fully packed case), and for extreme types of inclusion phases (from voids to rigid inclusions). A very different dependence of the five effective moduli on fiber section shape is theoretically predicted, and it provides a reasonable explanation for the poor correlation between previous theory and experiment in the case of the longitudinal shear modulus.
Abstract:
The initial-value problem of a forced Burgers equation is solved numerically by the Fourier expansion method. It is found that its solutions finally reach a steady "laminar flow" state which has no randomness and is stable to disturbances. Hence, strictly speaking, the so-called Burgers turbulence is not a true turbulence. A new one-dimensional model is proposed to simulate Navier-Stokes turbulence. A series of numerical experiments on this one-dimensional turbulence is made and succeeds in obtaining Kolmogorov's (1941) k^(-5/3) inertial-range spectrum. The (one-dimensional) Kolmogorov constant ranges from 0.5 to 0.65.
Abstract:
Dynamic programming is a method for optimizing systems, or their mathematical representation, that operates in phases; that is, decisions are made sequentially.
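A minimal sketch of this phase-by-phase, sequential decision making, assuming a small three-stage network with hypothetical traversal costs: a backward value-function recursion picks the cheapest decision at each stage.

```python
# Stage graphs: each dict maps (from_node, to_node) -> cost (illustrative).
stages = [
    {("s", "a"): 2, ("s", "b"): 4},
    {("a", "c"): 7, ("a", "d"): 3, ("b", "c"): 2, ("b", "d"): 5},
    {("c", "t"): 1, ("d", "t"): 6},
]

def solve(stages, start="s", goal="t"):
    """Backward dynamic programming: the value of a node is the cheapest
    cost-to-go from that node to the goal."""
    value = {goal: 0}
    for stage in reversed(stages):
        nxt = {}
        for (u, v), cost in stage.items():
            if v in value:
                cand = cost + value[v]
                if u not in nxt or cand < nxt[u]:
                    nxt[u] = cand
        value = nxt
    return value[start]

print(solve(stages))  # → 7  (path s -> b -> c -> t)
```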
Abstract:
A dynamic model for the ice-induced vibration (IIV) of structures is developed in the present study. Ice properties are taken into account, such as discrete failure, the dependence of crushing strength on ice velocity, and the randomness of ice failure. The most important prediction of the model is that it captures resonant frequency lock-in, analogous to that in vortex-induced vibration. Based on the model, the mechanism of resonant IIV is discussed. It is found that the dependence of the ice crushing strength on the ice velocity plays an important role in the resonant frequency lock-in of IIV. In addition, an intermittent stochastic resonant vibration is simulated from the model. These predictions are supported by the laboratory and field observations reported. The present model is more productive than previous models of IIV.
Abstract:
While some of the deepest results in nature are those that give explicit bounds between important physical quantities, some of the most intriguing and celebrated of such bounds come from fields where there is still a great deal of disagreement and confusion regarding even the most fundamental aspects of the theories. For example, in quantum mechanics, there is still no complete consensus as to whether the limitations associated with Heisenberg's Uncertainty Principle derive from an inherent randomness in physics, or rather from limitations in the measurement process itself, resulting from phenomena like back action. Likewise, the second law of thermodynamics makes a statement regarding the increase in entropy of closed systems, yet the theory itself has neither a universally accepted definition of equilibrium, nor an adequate explanation of how a system with underlying microscopically Hamiltonian dynamics (reversible) settles into a fixed distribution.
Motivated by these physical theories, and perhaps their inconsistencies, in this thesis we use dynamical systems theory to investigate how the very simplest of systems, even with no physical constraints, are characterized by bounds that give limits to the ability to make measurements on them. Using an existing interpretation, we start by examining how dissipative systems can be viewed as high-dimensional lossless systems, and how taking this view necessarily implies the existence of a noise process that results from the uncertainty in the initial system state. This fluctuation-dissipation result plays a central role in a measurement model that we examine, in particular describing how noise is inevitably injected into a system during a measurement, noise that can be viewed as originating either from the randomness of the many degrees of freedom of the measurement device, or of the environment. This noise constitutes one component of measurement back action, and ultimately imposes limits on measurement uncertainty. Depending on the assumptions we make about active devices, and their limitations, this back action can be offset to varying degrees via control. It turns out that using active devices to reduce measurement back action leads to estimation problems that have non-zero uncertainty lower bounds, the most interesting of which arise when the observed system is lossless. One such lower bound, a main contribution of this work, can be viewed as a classical version of a Heisenberg uncertainty relation between the system's position and momentum. We finally also revisit the murky question of how macroscopic dissipation appears from lossless dynamics, and propose alternative approaches for framing the question using existing systematic methods of model reduction.
Abstract:
Flash memory is a leading storage medium with excellent features such as random access and high storage density. However, it also faces significant reliability and endurance challenges. In flash memory, the charge level in the cells can be easily increased, but removing charge requires an expensive erasure operation. In this thesis we study rewriting schemes that enable the data stored in a set of cells to be rewritten by only increasing the charge level in the cells. We consider two types of modulation schemes: a conventional modulation based on the absolute levels of the cells, and a recently proposed scheme based on the relative cell levels, called rank modulation. The contributions of this thesis to the study of rewriting schemes for rank modulation include the following: we
•propose a new method of rewriting in rank modulation, beyond the previously proposed method of “push-to-the-top”;
•study the limits of rewriting with the newly proposed method, and derive a tight upper bound of 1 bit per cell;
•extend the rank-modulation scheme to support rankings with repetitions, in order to improve the storage density;
•derive a tight upper bound of 2 bits per cell for rewriting in rank modulation with repetitions;
•construct an efficient rewriting scheme that asymptotically approaches the upper bound of 2 bits per cell.
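For context, the previously proposed "push-to-the-top" primitive mentioned in the first bullet can be sketched as an operation on a ranking, i.e. a permutation of cell indices ordered from highest to lowest charge; the state values below are illustrative.

```python
def push_to_top(ranking, cell):
    """Push-to-the-top: inject enough charge into one cell that it
    ranks above all others, moving it to the front of the ranking."""
    return [cell] + [c for c in ranking if c != cell]

state = [2, 0, 1]              # cell 2 currently holds the highest charge
state = push_to_top(state, 1)  # rewrite by raising cell 1 above the rest
print(state)                   # → [1, 2, 0]
```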
The next part of this thesis studies rewriting schemes for a conventional absolute-levels modulation. The considered model is called "write-once memory" (WOM). We focus on WOM schemes that achieve the capacity of the model. In recent years several capacity-achieving WOM schemes were proposed, based on polar codes and randomness extractors. The contributions of this thesis to the study of WOM schemes include the following: we
•propose a new capacity-achieving WOM scheme based on sparse-graph codes, and show its attractive properties for practical implementation;
•improve the design of polar WOM schemes to remove the reliance on shared randomness and include an error-correction capability.
The last part of the thesis studies the local rank-modulation (LRM) scheme, in which a sliding window going over a sequence of real-valued variables induces a sequence of permutations. The LRM scheme is used to simulate a single conventional multi-level flash cell. The simulated cell is realized by a Gray code traversing all the relative-value states where, physically, the transition between two adjacent states in the Gray code is achieved by using a single "push-to-the-top" operation. The main results of the last part of the thesis are two constructions of Gray codes with asymptotically optimal rate.
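A sketch of the window-to-permutation map underlying the LRM scheme, assuming hypothetical real-valued cell levels and window length: each window position induces a permutation listing in-window indices from highest to lowest level.

```python
def local_ranks(levels, window):
    """Slide a window over real-valued cell levels; for each position,
    emit the permutation of in-window indices ordered by decreasing level."""
    perms = []
    for i in range(len(levels) - window + 1):
        w = levels[i:i + window]
        perms.append(sorted(range(window), key=lambda j: -w[j]))
    return perms

print(local_ranks([0.3, 0.9, 0.1, 0.5], window=3))  # → [[1, 0, 2], [0, 2, 1]]
```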
Abstract:
The induced magnetic uniaxial anisotropy of Ni-Fe alloy films has been shown to be related to the crystal structure of the film. By use of electron diffraction, the crystal structure of vacuum-deposited films was determined over the composition range 5% to 85% Ni, with substrate temperatures during deposition ranging from 25° to 500°C. The phase diagram determined in this way has boundaries in fair agreement with the equilibrium boundaries for bulk material above 400°C. The (α+γ) mixed phase disappears below 100°C.
The measurement of uniaxial anisotropy field for 25% Ni-Fe alloy films deposited at temperatures in the range -80°C to 375°C has been carried out. Comparison of the crystal structure phase diagram with the present data and those published by Wilts indicates that the anisotropy is strongly sensitive to crystal structure. Others have proposed pair ordering as an important source of anisotropy because of an apparent peak in the anisotropy energy at about 50% Ni composition. The present work shows no such peak, and leads to the conclusion that pair ordering cannot be a dominant contributor.
Width of the 180° domain wall in 76% Ni-Fe alloy films as a function of film thickness up to 1800 Å was measured using the defocused mode of Lorentz microscopy. For the thinner films, the measured wall widths are in good agreement with earlier data obtained by Fuchs. For films thicker than 800 Å, the wall width increases with film thickness to about 9000 Å at 1800 Å film thickness. Similar measurements for polycrystalline Co films with thickness from 200 to 1500 Å have been made. The wall width increases from 3000 Å at 400 Å film thickness to about 6000 Å at 1500 Å film thickness. The wall widths for Ni-Fe and Co films are much greater than predicted by present theories. The validity of the classical determination of wall width is discussed, and the comparison of the present data with theoretical results is given.
Finally, an experimental study of ripple by Lorentz microscopy in Ni-Fe alloy films has been carried out. The following should be noted: (1) the only practical way to determine a meaningful wavelength experimentally is to find a well-defined ripple periodicity by visual inspection of a photomicrograph. (2) The average wavelength is of the order of 1 µm. This value is in reasonable agreement with the main wavelength predicted by the theories developed by others. The dependence of the wavelength on substrate deposition temperature, alloy composition, and the external magnetic field has also been studied, and the results are compared with theoretical predictions. (3) The experimental fact that the ripple structure could not be observed in completely epitaxial films confirms that the ripple results from the randomness of crystallite orientation. Furthermore, the experimental observation that the ripple disappeared between 71 and 75% Ni supports the theory that the ripple amplitude depends directly on the crystalline anisotropy. An attempt to determine the order of magnitude of the ripple angle experimentally was carried out. The measured angle was about 0.02 rad. The discrepancy between the experimental data and the theoretical prediction is serious, and the accurate experimental determination of the ripple angle remains an unsolved problem.
Abstract:
The study of pair-wise interactions between swimming microorganisms is fundamental to the understanding of the rheological and transport properties of semi-dilute suspensions. In this paper, the hydrodynamic interaction of two ciliated microorganisms is investigated numerically using a boundary-element method, and the microorganisms are modeled as spherical squirmers that swim by time-dependent surface deformations. The results show that the inclusion of the unsteady terms in the ciliary propulsion model has a large impact on the trajectories of the interacting cells, and causes a significant change in scattering angles with potential important consequences on the diffusion properties of semi-dilute suspensions. Furthermore, the analysis of the shear stress acting on the surface of the microorganisms revealed that the duration and the intensity of the near-field interaction are significantly modified by the presence of unsteadiness. This observation may account for the hydrodynamic nature of randomness in some biological reactions, and supersedes the distinction between intrinsic randomness and hydrodynamic interactions, adding a further element to the understanding and modeling of interacting microorganisms.
Abstract:
An existing hybrid finite element (FE)/statistical energy analysis (SEA) approach to the analysis of the mid- and high frequency vibrations of a complex built-up system is extended here to a wider class of uncertainty modeling. In the original approach, the constituent parts of the system are considered to be either deterministic, and modeled using FE, or highly random, and modeled using SEA. A non-parametric model of randomness is employed in the SEA components, based on diffuse wave theory and the Gaussian Orthogonal Ensemble (GOE), and this enables the mean and variance of second order quantities such as vibrational energy and response cross-spectra to be predicted. In the present work the assumption that the FE components are deterministic is relaxed by the introduction of a parametric model of uncertainty in these components. The parametric uncertainty may be modeled either probabilistically, or by using a non-probabilistic approach such as interval analysis, and it is shown how these descriptions can be combined with the non-parametric uncertainty in the SEA subsystems to yield an overall assessment of the performance of the system. The method is illustrated by application to an example built-up plate system which has random properties, and benchmark comparisons are made with full Monte Carlo simulations. © 2012 Elsevier Ltd. All rights reserved.
Abstract:
This paper is concerned with the development of efficient algorithms for propagating parametric uncertainty within the context of the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) approach to the analysis of complex vibro-acoustic systems. This approach models the system as a combination of SEA subsystems and FE components; it is assumed that the FE components have fully deterministic properties, while the SEA subsystems have a high degree of randomness. The method has been recently generalised by allowing the FE components to possess parametric uncertainty, leading to two ensembles of uncertainty: a non-parametric one (SEA subsystems) and a parametric one (FE components). The SEA subsystems ensemble is dealt with analytically, while the effect of the additional FE components ensemble can be dealt with by Monte Carlo Simulations. However, this approach can be computationally intensive when applied to complex engineering systems having many uncertain parameters. Two different strategies are proposed: (i) the combination of the hybrid FE/SEA method with the First Order Reliability Method which allows the probability of the non-parametric ensemble average of a response variable exceeding a barrier to be calculated and (ii) the combination of the hybrid FE/SEA method with Laplace's method which allows the evaluation of the probability of a response variable exceeding a limit value. The proposed approaches are illustrated using two built-up plate systems with uncertain properties and the results are validated against direct integration, Monte Carlo simulations of the FE and of the hybrid FE/SEA models. © 2013 Elsevier Ltd.
Abstract:
Copyright © (2014) by the International Machine Learning Society (IMLS). All rights reserved. Classical methods such as Principal Component Analysis (PCA) and Canonical Correlation Analysis (CCA) are ubiquitous in statistics. However, these techniques are only able to reveal linear relationships in data. Although nonlinear variants of PCA and CCA have been proposed, these are computationally prohibitive at large scale. In a separate strand of recent research, randomized methods have been proposed to construct features that help reveal nonlinear patterns in data. For basic tasks such as regression or classification, random features exhibit little or no loss in performance, while achieving drastic savings in computational requirements. In this paper we leverage randomness to design scalable new variants of nonlinear PCA and CCA; our ideas extend to key multivariate analysis tools such as spectral clustering or LDA. We demonstrate our algorithms through experiments on real-world data, on which we compare against the state of the art. A simple R implementation of the presented algorithms is provided.
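One standard randomized feature construction of the kind described above is the random Fourier feature map z(x) = sqrt(2/D) cos(Wx + b), which approximates an RBF kernel; applying ordinary linear PCA or CCA to z(x) then captures nonlinear structure in x. The sketch below is in Python rather than the paper's R, and all parameters are illustrative.

```python
import math
import random

def random_fourier_features(X, D, gamma=1.0, seed=0):
    """Map each vector x to z(x) = sqrt(2/D) * cos(W x + b), where the
    rows of W are drawn N(0, 2*gamma*I) and b is uniform on [0, 2*pi);
    inner products of z approximate the kernel exp(-gamma*||x - y||^2)."""
    rng = random.Random(seed)
    d = len(X[0])
    W = [[rng.gauss(0.0, math.sqrt(2.0 * gamma)) for _ in range(d)]
         for _ in range(D)]
    b = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(D)]
    s = math.sqrt(2.0 / D)
    return [[s * math.cos(sum(wi * xi for wi, xi in zip(w, x)) + bj)
             for w, bj in zip(W, b)] for x in X]

Z = random_fourier_features([[0.0, 1.0], [1.0, 0.0]], D=4)
print(len(Z), len(Z[0]))  # → 2 4
```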