973 results for verifiable random function
Abstract:
The effect of structure height on the lightning striking distance is estimated using a lightning strike model that takes into account the effect of connecting leaders. According to the results, the lightning striking distance may differ significantly from the values assumed in the IEC standard for structure heights beyond 30 m. For structure heights smaller than about 30 m, however, the results show that the values assumed by IEC do not differ significantly from the predictions of a lightning attachment model that accounts for connecting leaders. Moreover, since IEC assumes a smaller striking distance than the ones predicted by the adopted model, one can conclude that safety is not compromised in adhering to the IEC standard. Results obtained from the model are also compared with the Collection Volume Method (CVM) and other commonly used lightning attachment models available in the literature. In the case of CVM, the calculated attractive distances are much larger than those obtained using the physically based lightning attachment models, indicating that lightning protection procedures may be compromised when CVM is used. (C) 2014 Elsevier B.V. All rights reserved.
Abstract:
The healing times for the growth of thin films on patterned substrates are studied using simulations of two discrete models of surface growth: the Family model and the Das Sarma-Tamborenea (DT) model. The healing time, defined as the time at which the characteristics of the growing interface are "healed" to those obtained in growth on a flat substrate, is determined via the study of the nearest-neighbor height difference correlation function. Two different initial patterns are considered in this work: a relatively smooth tent-shaped triangular substrate and an atomically rough substrate with single-site pillars or grooves. We find that the healing time of the Family and DT models on an L x L triangular substrate is proportional to L^z, where z is the dynamical exponent of the models. For the Family model, we also analyze theoretically, using a continuum description based on the linear Edwards-Wilkinson equation, the time evolution of the nearest-neighbor height difference correlation function in this system. The correlation functions obtained from continuum theory and simulation are found to be consistent with each other for the relatively smooth triangular substrate. For substrates with periodic and random distributions of pillars or grooves of varying size, the healing time is found to increase linearly with the height (depth) of pillars (grooves). We show explicitly that the simulation data for the Family model grown on a substrate with pillars or grooves do not agree with results of a calculation based on the continuum Edwards-Wilkinson equation. This result implies that a continuum description does not work when the initial pattern is atomically rough. The observed dependence of the healing time on the substrate size and the initial height (depth) of pillars (grooves) can be understood from the details of the diffusion rule of the atomistic model.
The healing time of both models for pillars is larger than that for grooves whose depth equals the height of the pillars. The calculated healing time for both the Family and DT models is found to depend on how the pillars and grooves are distributed over the substrate. (C) 2014 Elsevier B.V. All rights reserved.
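The Family model referred to above has a very simple deposition rule, which makes the healing-time diagnostic easy to reproduce in miniature. The following sketch is our own minimal one-dimensional version with periodic boundaries, not the paper's code; the system size and growth time are arbitrary choices.

```python
import random

def family_model_step(h):
    """One deposition event of the Family model: a particle arrives at a
    uniformly random site and relaxes to the lowest of that site and its
    two nearest neighbours (periodic boundaries)."""
    L = len(h)
    i = random.randrange(L)
    j = min([(i - 1) % L, i, (i + 1) % L], key=lambda k: h[k])
    h[j] += 1

def nn_height_diff_corr(h):
    """Nearest-neighbour height-difference correlation <(h_i - h_{i+1})^2>,
    the diagnostic used in the abstract to define the healing time."""
    L = len(h)
    return sum((h[i] - h[(i + 1) % L]) ** 2 for i in range(L)) / L

random.seed(0)
L = 64
h = [0] * L                  # flat initial substrate
for _ in range(10 * L):      # deposit ten monolayers
    family_model_step(h)
w2 = nn_height_diff_corr(h)
```

Replacing the flat initial condition `h` with a triangular or pillared profile and tracking when `w2` matches its flat-substrate value would mimic the healing-time measurement described in the abstract.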
Abstract:
The average time tau_r for one end of a long, self-avoiding polymer to interact for the first time with a flat penetrable surface to which it is attached at the other end is shown here to scale essentially as the square of the chain's contour length N. This result is obtained within the framework of the Wilemski-Fixman approximation to diffusion-limited reactions, in which the reaction time is expressed as a time correlation function of a "sink" term. In the present work, this sink-sink correlation function is calculated using perturbation expansions in the excluded volume and the polymer-surface interactions, with renormalization group methods being used to resum the expansion into a power law form. The quadratic dependence of tau_r on N mirrors the behavior of the average time tau_c of a free random walk to cyclize, but contrasts with the cyclization time of a free self-avoiding walk (SAW), for which tau_r ~ N^2.2. A simulation study by Cheng and Makarov [J. Phys. Chem. B 114, 3321 (2010)] of the chain-end reaction time of an SAW on a flat impenetrable surface leads to the same N^2.2 behavior, which is surprising given the reduced conformational space a tethered polymer has to explore in order to react. (C) 2014 AIP Publishing LLC.
Abstract:
We have synthesized Ag-Cu alloy nanoparticles of four different compositions by laser ablation of a target under an aqueous medium. We report a morphological transition in the nanoparticles, as a function of Cu concentration, from a normal two-phase microstructure to a structure with random segregation and finally to a core-shell structure at small sizes. To illustrate the composition dependence of the morphology, we report observations carried out on nanoparticles of two different sizes: ~5 and ~20 nm. The results could be rationalized through thermodynamic modeling of the free energy of phase mixing and the wettability of the alloying phases.
Abstract:
We study the equilibrium properties of an Ising model on a disordered random network where the disorder can be quenched or annealed. The network consists of fourfold coordinated sites connected via variable length one-dimensional chains. Our emphasis is on nonuniversal properties and we consider the transition temperature and other equilibrium thermodynamic properties, including those associated with one-dimensional fluctuations arising from the chains. We use analytic methods in the annealed case, and a Monte Carlo simulation for the quenched disorder. Our objective is to study the difference between quenched and annealed results with a broad random distribution of interaction parameters. The former represents a situation where the time scale associated with the randomness is very long and the corresponding degrees of freedom can be viewed as frozen, while the annealed case models the situation where this is not so. We find that the transition temperature and the entropy associated with one-dimensional fluctuations are always higher for quenched disorder than in the annealed case. These differences increase with the strength of the disorder up to a saturating value. We discuss our results in connection to physical systems where a broad distribution of interaction strengths is present.
Abstract:
Our work is motivated by impromptu (or "as-you-go") deployment of wireless relay nodes along a path, a need that arises in many situations. In this paper, the path is modeled as starting at the origin (where there is the data sink, e.g., the control center), and evolving randomly over a lattice in the positive quadrant. A person walks along the path deploying relay nodes as he goes. At each step, the path can, randomly, either continue in the same direction or take a turn, or come to an end, at which point a data source (e.g., a sensor) has to be placed, that will send packets to the data sink. A decision has to be made at each step whether or not to place a wireless relay node. Assuming that the packet generation rate by the source is very low, and simple link-by-link scheduling, we consider the problem of sequential relay placement so as to minimize the expectation of an end-to-end cost metric (a linear combination of the sum of convex hop costs and the number of relays placed). This impromptu relay placement problem is formulated as a total cost Markov decision process. First, we derive the optimal policy in terms of an optimal placement set and show that this set is characterized by a boundary (with respect to the position of the last placed relay) beyond which it is optimal to place the next relay. Next, based on a simpler one-step-look-ahead characterization of the optimal policy, we propose an algorithm which is proved to converge to the optimal placement set in a finite number of steps and which is faster than value iteration. We show by simulations that the distance threshold based heuristic, usually assumed in the literature, is close to the optimal, provided that the threshold distance is carefully chosen. (C) 2014 Elsevier B.V. All rights reserved.
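The distance-threshold heuristic mentioned at the end of the abstract is simple to state: place the next relay as soon as the walker is a fixed number of steps past the last placed node. The sketch below is our own illustration; the quadratic hop cost and unit relay price are assumptions, not the paper's parameters.

```python
def threshold_placement(path_steps, d_th, hop_cost=lambda d: d ** 2, relay_price=1.0):
    """As-you-go relay placement with a distance threshold: a relay is
    placed whenever the walker is d_th steps beyond the last placed node;
    the source closes the final hop.  Returns (relays placed, total cost),
    the cost being a sum of convex hop costs plus a price per relay."""
    since_last, relays, cost = 0, 0, 0.0
    for _ in range(path_steps):
        since_last += 1
        if since_last == d_th:            # threshold reached: place a relay
            cost += hop_cost(since_last) + relay_price
            relays += 1
            since_last = 0
    cost += hop_cost(since_last)          # final hop from the source
    return relays, cost

relays, cost = threshold_placement(path_steps=10, d_th=3)
# relays placed at steps 3, 6 and 9; the source closes a final hop of length 1
```

The abstract's finding is that, with the threshold d_th carefully tuned, the cost of this one-parameter policy is close to that of the optimal placement set computed from the Markov decision process.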
Abstract:
In this paper, we study a problem of designing a multi-hop wireless network for interconnecting sensors (hereafter called source nodes) to a Base Station (BS), by deploying a minimum number of relay nodes at a subset of given potential locations, while meeting a quality of service (QoS) objective specified as a hop count bound for paths from the sources to the BS. The hop count bound suffices to ensure a certain probability of the data being delivered to the BS within a given maximum delay under a light traffic model. We observe that the problem is NP-hard. For this problem, we propose a polynomial time approximation algorithm based on iteratively constructing shortest path trees and heuristically pruning away the relay nodes used until the hop count bound is violated. Results show that the algorithm performs efficiently in various randomly generated network scenarios; in over 90% of the tested scenarios, it gave solutions that were either optimal or were worse than optimal by just one relay. We then use random graph techniques to obtain, under a certain stochastic setting, an upper bound on the average case approximation ratio of a class of algorithms (including the proposed algorithm) for this problem as a function of the number of source nodes and the hop count bound. To the best of our knowledge, the average case analysis is the first of its kind in the relay placement literature. Since the design is based on a light traffic model, we also provide simulation results (using models for the IEEE 802.15.4 physical layer and medium access control) to assess the traffic levels up to which the QoS objectives continue to be met. (C) 2014 Elsevier B.V. All rights reserved.
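The pruning step of the proposed algorithm can be illustrated on a toy graph. The sketch below is our own simplification (plain BFS hop counts and a greedy feasibility check, not the paper's iterated shortest-path-tree construction): it removes relays one at a time as long as every source still reaches the BS within the hop bound.

```python
from collections import deque

def bfs_hops(adj, src, removed):
    """Hop counts from src over the nodes not in `removed`, via BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in removed and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def prune_relays(adj, bs, sources, relays, hop_bound):
    """Greedily drop a relay if every source still reaches the BS within
    hop_bound afterwards; return the relays that must be kept."""
    removed = set()
    def feasible():
        d = bfs_hops(adj, bs, removed)
        return all(s in d and d[s] <= hop_bound for s in sources)
    for r in relays:
        removed.add(r)
        if not feasible():          # removal breaks the QoS bound: restore
            removed.discard(r)
    return [r for r in relays if r not in removed]

# Toy instance: BS = 0, sources {3, 4}, candidate relays {1, 2}.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3, 4], 3: [1, 2], 4: [2]}
kept = prune_relays(adj, bs=0, sources=[3, 4], relays=[1, 2], hop_bound=2)
```

Here relay 1 is redundant (both sources can route through relay 2 within two hops), so only relay 2 survives the pruning.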
Abstract:
We study models of interacting fermions in one dimension to investigate the crossover from integrability to nonintegrability, i.e., quantum chaos, as a function of system size. Using exact diagonalization of finite-sized systems, we study this crossover by obtaining the energy level statistics and the Drude weight associated with transport. Our results reinforce the idea that for system size L -> infinity, nonintegrability sets in for an arbitrarily small integrability-breaking perturbation. The crossover value of the perturbation scales as a power law, ~L^(-eta), when the integrable system is gapless. The exponent eta ≈ 3 appears to be robust to microscopic details and the precise form of the perturbation. We conjecture that the exponent in the power law is characteristic of the random matrix ensemble describing the nonintegrable system. For systems with a gap, the crossover scaling appears to be faster than a power law.
Abstract:
A discrete-time dynamics of a non-Markovian random walker is analyzed using a minimal model where memory of the past drives the present dynamics. In recent work [N. Kumar et al., Phys. Rev. E 82, 021101 (2010)] we proposed a model that exhibits asymptotic superdiffusion, normal diffusion, and subdiffusion with the sweep of a single parameter. Here we propose an even simpler model, with minimal options for the walker: either move forward or stay at rest. We show that this model can also give rise to diffusive, subdiffusive, and superdiffusive dynamics at long times as a single parameter is varied. We show that in order to have subdiffusive dynamics, the memory of the rest states must be perfectly correlated with the present dynamics. We show explicitly that if this condition is not satisfied in a unidirectional walk, the dynamics is only either diffusive or superdiffusive (but not subdiffusive) at long times.
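A memory-driven walker of this kind is easy to simulate. The sketch below is our own generic illustrative rule, not the paper's exact model: with probability p the walker repeats an action drawn uniformly from its own history, and otherwise draws a fresh action (move forward with probability q, rest otherwise).

```python
import random

def memory_walk(n_steps, p, q, rng):
    """Unidirectional walker with long-range memory: each step either
    recalls a uniformly random past action (probability p) or draws a
    fresh one, moving forward with probability q and resting otherwise.
    Returns the final position."""
    history, x = [], 0
    for _ in range(n_steps):
        if history and rng.random() < p:
            a = rng.choice(history)        # memory drives the present step
        else:
            a = 1 if rng.random() < q else 0
        history.append(a)
        x += a
    return x

# p = 0 gives a Markovian walk; as p -> 1 the early history dominates.
x_markov = memory_walk(500, p=0.0, q=1.0, rng=random.Random(0))
x_memory = memory_walk(500, p=0.8, q=0.5, rng=random.Random(1))
```

Measuring the mean-squared displacement of an ensemble of such walks versus time is how one would distinguish diffusive, subdiffusive and superdiffusive regimes.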
Abstract:
Despite decades of research, it remains to be established whether the transformation of a liquid into a glass is fundamentally thermodynamic or dynamic in origin. Although observations of growing length scales are consistent with thermodynamic perspectives, the purely dynamic approach of the Dynamical Facilitation (DF) theory lacks experimental support. Further, for vitrification induced by randomly freezing a subset of particles in the liquid phase, simulations support the existence of an underlying thermodynamic phase transition, whereas the DF theory remains unexplored. Here, using video microscopy and holographic optical tweezers, we show that DF in a colloidal glass-forming liquid grows with density as well as the fraction of pinned particles. In addition, we observe that heterogeneous dynamics in the form of string-like cooperative motion emerges naturally within the framework of facilitation. Our findings suggest that a deeper understanding of the glass transition necessitates an amalgamation of existing theoretical approaches.
Abstract:
This paper proposes a novel experimental test procedure to estimate the reliability of structural dynamical systems under excitations specified via random process models. The samples of random excitations to be used in the test are modified by the addition of an artificial control force. An unbiased estimator for the reliability is derived from the measured ensemble of responses under these modified inputs, based on the tenets of the Girsanov transformation. The control force is selected so as to reduce the sampling variance of the estimator. The study observes that an acceptable choice for the control force can be made solely on the basis of experimental techniques, and that the estimator for the reliability can be deduced without recourse to a mathematical model for the structure under study. This permits the proposed procedure to be applied in the experimental study of the time-variant reliability of complex structural systems that are difficult to model mathematically. The illustrative example consists of a multi-axis shake table study on a bending-torsion coupled, geometrically non-linear, five-storey frame under uni-/bi-axial, non-stationary, random base excitation. Copyright (c) 2014 John Wiley & Sons, Ltd.
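The Girsanov-based estimator has a simple discrete-time analogue that shows why it is unbiased: simulate under an added control (a drift), and reweight each realization by the likelihood ratio of the original measure to the controlled one. Here is a sketch for a toy reliability problem, the probability that a Gaussian random walk crosses a high level; this is our own example, not the paper's structural model.

```python
import math
import random

def crossing_prob_is(n_steps, level, drift, n_samples, rng):
    """Importance-sampling (discrete Girsanov) estimator of
    P(max_k S_k >= level) for a random walk S_k = z_1 + ... + z_k with
    z_i ~ N(0, 1): simulate with an added drift (the 'control force')
    and reweight each crossing path by the likelihood ratio
    exp(-drift * S_n + n_steps * drift**2 / 2)."""
    total = 0.0
    for _ in range(n_samples):
        s, smax = 0.0, 0.0
        for _ in range(n_steps):
            s += rng.gauss(drift, 1.0)     # increment under the new measure
            smax = max(smax, s)
        if smax >= level:                  # failure event observed
            total += math.exp(-drift * s + n_steps * drift ** 2 / 2)
    return total / n_samples

rng = random.Random(0)
p_hat = crossing_prob_is(n_steps=20, level=10.0, drift=0.5, n_samples=20000, rng=rng)
```

The drift is chosen so that the rare crossing event becomes common under the modified dynamics; the likelihood-ratio weights keep the estimator unbiased, mirroring the role of the artificial control force in the abstract.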
Abstract:
Using numerical diagonalization we study the crossover among different random matrix ensembles (Poissonian, Gaussian orthogonal ensemble (GOE), Gaussian unitary ensemble (GUE) and Gaussian symplectic ensemble (GSE)) realized in two different microscopic models. The specific diagnostic tool used to study the crossovers is the level spacing distribution. The first model is a one-dimensional lattice model of interacting hard-core bosons (or equivalently spin 1/2 objects) and the other a higher dimensional model of non-interacting particles with disorder and spin-orbit coupling. We find that the perturbation causing the crossover among the different ensembles scales to zero with system size as a power law with an exponent that depends on the ensembles between which the crossover takes place. This exponent is independent of microscopic details of the perturbation. We also find that the crossover from the Poissonian ensemble to the other three is dominated by the Poissonian to GOE crossover which introduces level repulsion while the crossover from GOE to GUE or GOE to GSE associated with symmetry breaking introduces a subdominant contribution. We also conjecture that the exponent is dependent on whether the system contains interactions among the elementary degrees of freedom or not and is independent of the dimensionality of the system.
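The level-statistics diagnostic is straightforward to reproduce numerically. The sketch below uses the closely related consecutive-gap-ratio statistic (which avoids spectral unfolding, unlike the level spacing distribution itself) to separate a GOE spectrum from uncorrelated, Poissonian levels; the matrix size and seeds are arbitrary choices of ours.

```python
import numpy as np

def mean_gap_ratio(eigs):
    """Mean consecutive-gap ratio <r>, with r_n = min(s_n, s_{n+1}) /
    max(s_n, s_{n+1}) for sorted-spectrum spacings s_n; roughly 0.39 for
    Poissonian levels and 0.53 for the GOE, so it distinguishes the
    ensembles without unfolding."""
    s = np.diff(np.sort(eigs))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return float(r.mean())

rng = np.random.default_rng(0)
N = 400
A = rng.standard_normal((N, N))
r_goe = mean_gap_ratio(np.linalg.eigvalsh((A + A.T) / 2))  # GOE: real symmetric
r_poisson = mean_gap_ratio(rng.random(N))                  # uncorrelated levels
```

Tracking how such a statistic drifts between its limiting values as a perturbation is dialed up is one way to locate the crossover scale discussed in the abstract.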
Abstract:
We apply the objective method of Aldous to the problem of finding the minimum-cost edge cover of the complete graph with random, independent and identically distributed edge costs. The limit, as the number of vertices goes to infinity, of the expected minimum cost for this problem is known via a combinatorial approach of Hessler and Wastlund. We provide a proof of this result using the machinery of the objective method and local weak convergence, which was used to prove the zeta(2) limit of the random assignment problem. A proof via the objective method is useful because it provides more information on the nature of the edges incident on a typical root in the minimum-cost edge cover. We further show that a belief propagation algorithm converges asymptotically to the optimal solution. This can be applied to a computational linguistics problem of semantic projection. The belief propagation algorithm yields a near-optimal solution with lower complexity than the best known algorithms designed for optimality in worst-case settings.
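For intuition about the objective being optimized, here is a brute-force computation of the minimum-cost edge cover on a tiny complete graph. This is our own illustration of the problem, not the belief-propagation algorithm or the objective-method machinery from the abstract.

```python
from itertools import combinations
import random

def min_cost_edge_cover(n, cost):
    """Exact minimum-cost edge cover of K_n (every vertex touched by at
    least one chosen edge) by brute force over edge subsets; only viable
    for very small n."""
    edges = list(cost)
    best = None
    for k in range(1, len(edges) + 1):
        for sub in combinations(edges, k):
            covered = set()
            for u, v in sub:
                covered.update((u, v))
            if len(covered) == n:                 # every vertex is covered
                c = sum(cost[e] for e in sub)
                if best is None or c < best:
                    best = c
    return best

random.seed(3)
n = 5
cost = {(u, v): random.random() for u in range(n) for v in range(u + 1, n)}
opt = min_cost_edge_cover(n, cost)
```

On K_n the number of edge subsets grows as 2^(n(n-1)/2), which is exactly why scalable message-passing schemes such as belief propagation are of interest for this problem.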
Beadex Function in the Motor Neurons Is Essential for Female Reproduction in Drosophila melanogaster
Abstract:
Drosophila melanogaster has served as an excellent model system for understanding the neuronal circuits and molecular mechanisms regulating complex behaviors. The Drosophila female reproductive circuits, in particular, are well studied and can be used as a tool to understand the role of novel genes in neuronal function in general and female reproduction in particular. In the present study, the role of Beadex, a transcription co-activator, in Drosophila female reproduction was assessed through the generation of a mutant and through knockdown studies. A null allele of Beadex was generated by transposase-induced excision of a P-element present within an intron of the Beadex gene. The mutant showed highly compromised reproductive abilities, as evaluated by reduced fecundity and fertility, abnormal oviposition and, more importantly, the failure of sperm release from the storage organs. However, no defect was found in overall ovariole development. Tissue-specific, targeted knockdown of Beadex indicated that its function in neurons is important for efficient female reproduction, since its neuronal knockdown led to compromised female reproductive abilities similar to those of Beadex null females. Further, knockdown studies specific to different neuronal classes revealed that Beadex function is required in motor neurons for normal fecundity and fertility of females. Thus, the present study attributes a novel and essential role to Beadex in female reproduction through neurons.
Abstract:
Designing a robust algorithm for visual object tracking has been a challenging task for many years. There are trackers in the literature that are reasonably accurate for many tracking scenarios, but most of them are computationally expensive. This narrows down their applicability, as many tracking applications demand real-time response. In this paper, we present a tracker based on random ferns. Tracking is posed as a classification problem, and classification is done using ferns. We use ferns because they rely on binary features and are extremely fast at both training and classification compared to other classification algorithms. Our experiments show that the proposed tracker performs well on some of the most challenging tracking datasets and executes much faster than one of the state-of-the-art trackers, without much difference in tracking accuracy.
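A random fern, as used in classifiers of the kind this abstract describes, is a short sequence of binary feature tests whose outcomes are concatenated into a histogram index; classification sums per-fern log-posteriors. The following is our own minimal sketch with pixel-pair comparisons and Laplace smoothing, not the paper's implementation.

```python
import math
import random

class Fern:
    """One random fern: `depth` binary pixel-pair tests hashed into a
    2^depth-bin, Laplace-smoothed class histogram."""
    def __init__(self, n_features, depth, rng):
        self.tests = [(rng.randrange(n_features), rng.randrange(n_features))
                      for _ in range(depth)]
        self.counts = {}                          # class label -> bin counts

    def index(self, x):
        idx = 0
        for a, b in self.tests:                   # each comparison is one bit
            idx = (idx << 1) | int(x[a] > x[b])
        return idx

    def train(self, x, label):
        bins = self.counts.setdefault(label, [0] * (1 << len(self.tests)))
        bins[self.index(x)] += 1

    def log_posterior(self, x, label):
        bins = self.counts.get(label, [0] * (1 << len(self.tests)))
        return math.log((bins[self.index(x)] + 1) / (sum(bins) + len(bins)))

def classify(ferns, x, labels):
    """Semi-naive Bayes over independent ferns: sum log-posteriors, argmax."""
    return max(labels, key=lambda c: sum(f.log_posterior(x, c) for f in ferns))

# Toy data: class 0 = ascending feature vectors, class 1 = descending ones.
rng = random.Random(0)
ferns = [Fern(8, 4, rng) for _ in range(10)]
for _ in range(50):
    up = sorted(rng.random() for _ in range(8))
    for f in ferns:
        f.train(up, 0)
        f.train(up[::-1], 1)
probe = sorted(rng.random() for _ in range(8))
pred_up = classify(ferns, probe, [0, 1])
pred_down = classify(ferns, probe[::-1], [0, 1])
```

Both training and classification reduce to bit tests and table lookups, which is the speed advantage the abstract exploits for real-time tracking.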