970 results for random number generator


Relevance: 20.00%

Abstract:

Interaction between the hepatitis C virus (HCV) envelope protein E2 and the host receptor CD81 is essential for HCV entry into target cells. The number of E2-CD81 complexes necessary for HCV entry has remained difficult to estimate experimentally. Using recently developed cell culture systems that allow persistent HCV infection in vitro, the dependence of HCV entry and kinetics on CD81 expression has been measured. We reasoned that analysis of the latter experiments using a mathematical model of viral kinetics may yield estimates of the number of E2-CD81 complexes necessary for HCV entry. Here, we constructed a mathematical model of HCV viral kinetics in vitro, in which we accounted explicitly for the dependence of HCV entry on CD81 expression. Model predictions of viral kinetics are in quantitative agreement with experimental observations. Specifically, our model predicts triphasic viral kinetics in vitro, where the first phase is characterized by cell proliferation, the second by the infection of susceptible cells, and the third by the growth of cells refractory to infection. By fitting model predictions to the above data, we were able to estimate the threshold number of E2-CD81 complexes necessary for HCV entry into human hepatoma-derived cells. We found that, depending on the E2-CD81 binding affinity, between 1 and 13 E2-CD81 complexes are necessary for HCV entry. With this estimate, our model captured data from independent experiments that employed different HCV clones and cells with distinct CD81 expression levels, indicating that the estimate is robust. Our study thus quantifies the molecular requirements of HCV entry and suggests guidelines for intervention strategies that target the E2-CD81 interaction. Further, our model presents a framework for quantitative analyses of the cell culture studies now extensively employed to investigate HCV infection.
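To make the modeling approach concrete, the following is a minimal sketch, not the authors' exact model: a target-cell-limited ODE system in which the infection rate is scaled by the probability that at least a threshold number of E2-CD81 complexes form. All parameter values, the binomial complex-formation model, and the threshold are illustrative assumptions.

```python
# Sketch: HCV kinetics in vitro with CD81-dependent entry (assumed model).
import numpy as np
from scipy.integrate import odeint
from scipy.stats import binom

def entry_probability(cd81, m=5, k_on=1e-4):
    """P(at least m E2-CD81 complexes), complexes ~ Binomial(cd81, k_on)."""
    return binom.sf(m - 1, cd81, k_on)

def hcv_kinetics(y, t, beta, p, c, delta, r, K):
    T, I, V = y                                      # target cells, infected cells, virus
    dT = r * T * (1 - (T + I) / K) - beta * T * V    # logistic proliferation - infection
    dI = beta * T * V - delta * I                    # infection - infected-cell death
    dV = p * I - c * V                               # virion production - clearance
    return [dT, dI, dV]

cd81_per_cell = 40_000                               # assumed CD81 surface density
beta = 1e-8 * entry_probability(cd81_per_cell)       # entry-limited infection rate
t = np.linspace(0, 20, 400)                          # days
sol = odeint(hcv_kinetics, [1e5, 0.0, 1e2], t,
             args=(beta, 10.0, 1.0, 0.5, 0.8, 1e7))
```

Lowering `cd81_per_cell` in this sketch reduces the entry probability and delays the infection phase, mirroring the measured dependence of the kinetics on CD81 expression.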

Relevance: 20.00%

Abstract:

A reliable method for service life estimation of a structural element is a prerequisite for service life design. A new methodology for durability-based service life estimation of reinforced concrete flexural elements with respect to chloride-induced corrosion of reinforcement is proposed. The methodology takes into consideration the fuzzy and random uncertainties associated with the variables involved in service life estimation by using a hybrid method combining the vertex method of fuzzy set theory with the Monte Carlo simulation technique. It is also shown how to determine the bounds for the characteristic value of failure probability from the resulting fuzzy set for failure probability with minimal computational effort. Using the methodology, the bounds for the characteristic value of failure probability for a reinforced concrete T-beam bridge girder have been determined. The service life of the structural element is determined by comparing the upper bound of the characteristic value of failure probability with the target failure probability. The methodology will be useful for durability-based service life design and also for making decisions regarding in-service inspections.
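The hybrid idea can be illustrated with a short sketch, assuming a toy limit state and made-up distributions: the fuzzy variables are cut at an alpha level, a Monte Carlo failure-probability estimate is computed at every vertex of the resulting interval box (the vertex method), and the minimum and maximum over the vertices bound the failure probability.

```python
# Sketch: vertex method + Monte Carlo bounds on failure probability (toy model).
import itertools
import numpy as np

rng = np.random.default_rng(0)

def pf_monte_carlo(c_s, c_cr, n=100_000):
    """Failure probability for fixed values of the fuzzy variables.
    Random variables (assumed): concrete cover and chloride diffusivity."""
    cover = rng.normal(50.0, 5.0, n)               # mm
    D = rng.lognormal(np.log(1e-12), 0.2, n)       # m^2/s
    c_rebar = c_s * np.exp(-cover / (D * 1e14))    # toy chloride-at-rebar proxy
    return np.mean(c_rebar > c_cr)                 # corrosion initiation event

# alpha-cut intervals of the fuzzy variables (surface and critical chloride)
c_s_interval = (0.7, 0.9)      # % by weight of binder, assumed
c_cr_interval = (0.45, 0.55)

pfs = [pf_monte_carlo(c_s, c_cr)
       for c_s, c_cr in itertools.product(c_s_interval, c_cr_interval)]
print(f"Pf bounds at this alpha level: [{min(pfs):.4f}, {max(pfs):.4f}]")
```

Repeating this over a range of alpha levels yields the full fuzzy set for the failure probability, from which the characteristic-value bounds are read off.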

Relevance: 20.00%

Abstract:

The rainbow connection number, rc(G), of a connected graph G is the minimum number of colors needed to color its edges so that every pair of vertices is connected by at least one path in which no two edges are colored the same. Our main result is that rc(G) ≤ ⌈n/2⌉ for any 2-connected graph with at least three vertices. We conjecture that rc(G) ≤ n/κ + C for a κ-connected graph G of order n, where C is a constant, and prove the conjecture for certain classes of graphs. We also prove that rc(G) < (2 + ε)n/κ + 23/ε² for any ε > 0.
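For intuition, here is a brute-force sketch (illustrative only, exponential in the number of edges) that computes rc(G) for a tiny graph by trying k-colorings and checking every vertex pair for a rainbow path; networkx is used for the graph structure.

```python
# Sketch: brute-force rainbow connection number for tiny graphs.
import itertools
import networkx as nx

def has_rainbow_path(G, coloring, s, t):
    stack = [(s, frozenset())]        # DFS over (vertex, colors used) states
    seen = set()
    while stack:
        v, used = stack.pop()
        if v == t:
            return True
        if (v, used) in seen:
            continue
        seen.add((v, used))
        for w in G[v]:
            c = coloring[frozenset((v, w))]
            if c not in used:         # extend only along unused colors
                stack.append((w, used | {c}))
    return False

def rainbow_connection_number(G):
    edges = [frozenset(e) for e in G.edges()]
    for k in range(1, len(edges) + 1):
        for colors in itertools.product(range(k), repeat=len(edges)):
            coloring = dict(zip(edges, colors))
            if all(has_rainbow_path(G, coloring, s, t)
                   for s, t in itertools.combinations(G.nodes(), 2)):
                return k

print(rainbow_connection_number(nx.cycle_graph(5)))  # prints 3 = ceil(5/2)
```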

Relevance: 20.00%

Abstract:

This paper addresses the problem of curtailing the number of control actions in voltage/reactive power dispatch using a fuzzy expert approach. It presents an approach using fuzzy set theory for reactive power control with the purpose of improving the voltage profile of a power system. Voltage deviations of the load buses from desired values, together with their sensitivities with respect to the reactive power control variables, form the basis of the proposed Fuzzy Logic Control (FLC). The control variables considered are switchable VAR compensators, On Load Tap Changing (OLTC) transformers, and generator excitations. Voltage deviations and control variables are translated into fuzzy set notation to formulate the relation between voltage deviations and the controlling ability of the control devices. The developed fuzzy system is tested on a few simulated practical Indian power systems and a modified IEEE-30 bus system. The performance of the fuzzy system is compared with a conventional optimization technique, and the results obtained are encouraging. Results for the modified IEEE-30 bus test system and a 205-node equivalent EHV system, part of the Indian southern grid, are presented for illustration. The proposed fuzzy expert technique is found suitable for on-line application in energy control centres, as the solution is obtained quickly, with significant speedups and a small number of control actions.
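As a toy illustration of the fuzzification step, the sketch below (an assumed rule base, not the paper's) maps a bus voltage deviation to a corrective VAR adjustment via triangular membership functions and centroid defuzzification.

```python
# Sketch: one fuzzy rule evaluation for voltage deviation -> VAR correction.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_var_correction(dv):
    """Map voltage deviation dv (p.u.) to a VAR adjustment (assumed scale)."""
    mu = {"neg": tri(dv, -0.10, -0.05, 0.0),     # fuzzify the deviation
          "zero": tri(dv, -0.05, 0.0, 0.05),
          "pos": tri(dv, 0.0, 0.05, 0.10)}
    q = {"neg": +0.5, "zero": 0.0, "pos": -0.5}  # rule consequents (singletons)
    den = sum(mu.values())
    return sum(mu[k] * q[k] for k in mu) / den if den > 0 else 0.0

print(fuzzy_var_correction(-0.03))   # low bus voltage -> inject reactive power
```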

Relevance: 20.00%

Abstract:

This paper presents a method for placement of Phasor Measurement Units (PMUs) that ensures the monitoring of vulnerable buses, which are identified through transient stability analysis of the overall system. Real-time monitoring of phase angles across different nodes indicates the proximity to instability, and this purpose is best served if the PMUs are placed at the more vulnerable buses. The issue is to identify the key buses where the PMUs should be placed when transient stability prediction under various disturbances is taken into account. An Integer Linear Programming technique with equality and inequality constraints is used to find the optimal placement set, with key buses identified from transient stability analysis. Results on the IEEE-14 bus system are presented to illustrate the proposed approach.
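A minimal version of the placement ILP, under assumptions, can be written with the PuLP modeling library: minimize the PMU count subject to every bus being observed by itself or a neighbor, with equality constraints pinning PMUs to buses flagged as vulnerable. The 5-bus adjacency and the vulnerable set below are illustrative, not the IEEE-14 data.

```python
# Sketch: PMU placement as an ILP with observability + vulnerability constraints.
import pulp

adj = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2, 5], 4: [2, 5], 5: [3, 4]}
vulnerable = [3]     # assumed output of the transient stability analysis

prob = pulp.LpProblem("pmu_placement", pulp.LpMinimize)
x = {i: pulp.LpVariable(f"x_{i}", cat="Binary") for i in adj}

prob += pulp.lpSum(x.values())              # minimize the number of PMUs
for i, nbrs in adj.items():                 # inequality: bus i observable
    prob += x[i] + pulp.lpSum(x[j] for j in nbrs) >= 1
for i in vulnerable:                        # equality: force a PMU at key buses
    prob += x[i] == 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([i for i in adj if x[i].value() == 1])
```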

Relevance: 20.00%

Abstract:

Given two independent Poisson point processes Φ^(1), Φ^(2) in R^d, the AB Poisson Boolean model is the graph with the points of Φ^(1) as vertices and with edges between any pair of points for which the intersection of balls of radius 2r centered at these points contains at least one point of Φ^(2). This is a generalization of the AB percolation model on discrete lattices. We show the existence of percolation for all d ≥ 2 and derive bounds for a critical intensity. We also provide a characterization of this critical intensity when d = 2. To study the connectivity problem, we consider independent Poisson point processes of intensities n and τn in the unit cube. The AB random geometric graph is defined as above, but with balls of radius r. We derive a weak law result for the largest nearest-neighbor distance and almost-sure asymptotic bounds for the connectivity threshold.
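The construction is easy to simulate; the sketch below (illustrative parameters) builds the AB random geometric graph on the unit square: points of Φ^(1) are vertices, and two are joined when some point of Φ^(2) lies within distance r of both.

```python
# Sketch: AB random geometric graph on the unit square.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

def ab_rgg(lam1, lam2, r, d=2):
    p1 = rng.random((rng.poisson(lam1), d))   # Phi1 points (vertices)
    p2 = rng.random((rng.poisson(lam2), d))   # Phi2 points (connectors)
    G = nx.Graph()
    G.add_nodes_from(range(len(p1)))
    dists = np.linalg.norm(p2[:, None, :] - p1[None, :, :], axis=2)  # |Phi2| x |Phi1|
    near = dists <= r
    for i in range(len(p1)):
        for j in range(i + 1, len(p1)):
            if np.any(near[:, i] & near[:, j]):   # a Phi2 point close to both
                G.add_edge(i, j)
    return G

G = ab_rgg(lam1=100, lam2=200, r=0.1)
print(nx.number_connected_components(G))
```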

Relevance: 20.00%

Abstract:

Particle simulations based on the discrete element method are used to examine the effect of base roughness on granular flow down an inclined plane. The base is composed of a random configuration of fixed particles, and the base roughness is decreased by decreasing the ratio of the diameters of the base and moving particles. A discontinuous transition from a disordered to an ordered flow state is observed when this ratio is decreased below a critical value. The ordered flowing state consists of hexagonally close-packed layers of particles sliding over each other. The ordered state is denser (higher volume fraction) and has a lower coordination number than the disordered state, and there are discontinuous changes in both the volume fraction and the coordination number at the transition. The Bagnold law, which states that the stress is proportional to the square of the strain rate, is valid in both states. However, the Bagnold coefficients in the ordered flowing state are lower, by more than two orders of magnitude, than those of the disordered state. The critical ratio of base and moving particle diameters is independent of the angle of inclination, and varies very little when the height of the flowing layer is doubled from about 35 to about 70 particle diameters. While flow in the disordered state ceases when the angle of inclination decreases below 20 degrees, there is flow in the ordered state at angles of inclination as low as 14 degrees. (C) 2012 American Institute of Physics. http://dx.doi.org/10.1063/1.4710543
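The Bagnold scaling mentioned above can be checked with a short fit, shown here on synthetic data with assumed values: fitting log(stress) against log(strain rate) should recover an exponent of 2, and the intercept gives the Bagnold coefficient.

```python
# Sketch: recovering the Bagnold coefficient from stress vs strain-rate data.
import numpy as np

rng = np.random.default_rng(2)
gamma_dot = np.logspace(-1, 1, 20)                          # strain rates (1/s), assumed
stress = 0.05 * gamma_dot**2 * rng.lognormal(0, 0.05, 20)   # synthetic measurements

# log(stress) = log(B) + n*log(gamma_dot); the Bagnold law predicts n = 2
n, logB = np.polyfit(np.log(gamma_dot), np.log(stress), 1)
print(f"exponent n = {n:.2f} (Bagnold: 2), coefficient B = {np.exp(logB):.3f}")
```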

Relevance: 20.00%

Abstract:

The repeated or closely spaced eigenvalues and corresponding eigenvectors of a matrix are usually very sensitive to a perturbation of the matrix, which makes capturing the behavior of these eigenpairs very difficult. A similar difficulty is encountered in solving the random eigenvalue problem when a matrix with random elements has a set of clustered eigenvalues in its mean. In addition, the methods to solve the random eigenvalue problem often differ in how they characterize the problem, which leads to different interpretations of the solution; the solutions obtained from different methods thus become mathematically incomparable. These two issues, the difficulty of solving and the non-unique characterization, are addressed here. A different approach is used: instead of tracking a few individual eigenpairs, the corresponding invariant subspace is tracked. The spectral stochastic finite element method is used for the analysis, with polynomial chaos expansions representing the random eigenvalues and eigenvectors; however, the main concept of tracking the invariant subspace remains largely independent of any such representation. The approach is successfully implemented in response prediction for a system with repeated natural frequencies. It is found that tracking only an invariant subspace can be sufficient to build a modal-based reduced-order model of the system. Copyright (C) 2012 John Wiley & Sons, Ltd.
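The conditioning argument can be seen in a few lines, as a sketch with assumed values: perturbing a matrix with a repeated eigenvalue rotates the individual eigenvectors of the cluster by O(1), while the spanned invariant subspace moves only by the size of the perturbation.

```python
# Sketch: individual eigenvectors vs the invariant subspace under perturbation.
import numpy as np
from scipy.linalg import eigh, subspace_angles

rng = np.random.default_rng(3)
A = np.diag([1.0, 1.0, 4.0])           # repeated eigenvalue 1 (2D eigenspace)
E = rng.normal(0, 1e-6, (3, 3))
E = (E + E.T) / 2                      # small symmetric perturbation
_, V = eigh(A)
_, Vp = eigh(A + E)

# the clustered eigenvectors can rotate arbitrarily within their eigenspace
print(abs(V[:, 0] @ Vp[:, 0]))                    # may be far from 1
# the 2D invariant subspace barely moves: max principal angle is O(1e-6)
print(subspace_angles(V[:, :2], Vp[:, :2]).max())
```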

Relevance: 20.00%

Abstract:

The rainbow connection number of a connected graph is the minimum number of colors needed to color its edges so that every pair of its vertices is connected by at least one path in which no two edges are colored the same. In this article we show that for every connected graph on n vertices with minimum degree δ, the rainbow connection number is upper bounded by 3n/(δ + 1) + 3. This solves an open problem from Schiermeyer (Combinatorial Algorithms, Springer, Berlin/Heidelberg, 2009, pp. 432-437), improving the previously best known bound of 20n/δ (J Graph Theory 63 (2010), 185-191). This bound is tight up to additive factors by a construction mentioned in Caro et al. (Electr J Combin 15(R57) (2008), 1). As an intermediate step we obtain an upper bound of 3n/(δ + 1) - 2 on the size of a connected two-step dominating set in a connected graph of order n and minimum degree δ. This bound is tight up to an additive constant of 2, and the result may be of independent interest. We also show that for every connected graph G with minimum degree at least 2, the rainbow connection number, rc(G), is upper bounded by γc(G) + 2, where γc(G) is the connected domination number of G. Bounds of the form diameter(G) ≤ rc(G) ≤ diameter(G) + c, 1 ≤ c ≤ 4, for many special graph classes follow as easy corollaries from this result. This includes interval graphs, asteroidal triple-free graphs, circular arc graphs, threshold graphs, and chain graphs, all connected and with minimum degree at least 2. We also show that every bridgeless chordal graph G has rc(G) ≤ 3·radius(G). In most of these cases, we also demonstrate the tightness of the bounds.

Relevance: 20.00%

Abstract:

Purpose: To optimize the data-collection strategy for diffuse optical tomography and to obtain a set of independent measurements among the total measurements using model-based data-resolution matrix characteristics. Methods: The data-resolution matrix is computed based on the sensitivity matrix and the regularization scheme used in the reconstruction procedure, by matching the predicted data with the actual data. The diagonal values of the data-resolution matrix show the importance of a particular measurement, and the magnitude of the off-diagonal entries shows the dependence among measurements. The choice of independent measurements is made based on the closeness of the diagonal value magnitude to the off-diagonal entries. The reconstruction results obtained using all measurements were compared with those obtained using only independent measurements in both numerical and experimental phantom cases. A traditional singular value analysis was also performed for comparison with the proposed method. Results: The results indicate that choosing only independent measurements based on data-resolution matrix characteristics for the image reconstruction does not compromise the reconstructed image quality significantly, and in turn reduces the data-collection time associated with the procedure. When the same number of measurements (equivalent to the independent ones) was chosen at random, the reconstruction results had poor quality with major boundary artifacts. The number of independent measurements obtained using data-resolution matrix analysis is much higher than that obtained using singular value analysis. Conclusions: The data-resolution matrix analysis is able to provide the high level of optimization needed for effective data collection in diffuse optical imaging. The analysis itself is independent of the noise characteristics in the data, resulting in a universal framework to characterize and optimize a given data-collection strategy. (C) 2012 American Association of Physicists in Medicine. http://dx.doi.org/10.1118/1.4736820
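As a sketch of the Methods step, assuming a standard Tikhonov-regularized reconstruction, the data-resolution matrix is N = J (JᵀJ + λI)⁻¹ Jᵀ, since the predicted data are then d_pred = N d; its diagonal ranks measurement importance and its off-diagonal entries expose redundancy. The sensitivity matrix and the selection heuristic below are illustrative.

```python
# Sketch: data-resolution matrix and an independent-measurement selection rule.
import numpy as np

rng = np.random.default_rng(4)
J = rng.normal(size=(60, 200))    # assumed sensitivity matrix (measurements x voxels)
lam = 1e-2                        # Tikhonov regularization parameter

N = J @ np.linalg.solve(J.T @ J + lam * np.eye(J.shape[1]), J.T)

importance = np.diag(N)                                  # per-measurement weight
coupling = np.abs(N - np.diag(importance)).max(axis=1)   # strongest off-diagonal link
independent = np.where(importance > coupling)[0]         # diagonal-dominant rows
print(len(independent), "of", J.shape[0], "measurements treated as independent")
```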

Relevance: 20.00%

Abstract:

Novel random copolymers containing dithienylcyclopentadienone, thiophene, and benzothiadiazole were synthesized, and the photovoltaic properties of these materials were evaluated. Thermal, structural, optical, and electrochemical characterization of the synthesized copolymers was carried out. These thermally stable copolymers are solution processable, unlike the homopolymer. The absorption spectra indicated that with the incorporation of alkyl chains in the thiophene moiety, the absorption onset red-shifts and hence the band gap decreases (from 1.47 eV to 1.41 eV). Bulk heterojunction solar cells were fabricated with a blend of copolymer and phenyl-C61-butyric acid methyl ester (PCBM) as the active material, and device parameters were extracted. The copolymer containing alkylthiophene exhibits a higher open-circuit voltage than the copolymer containing the unsubstituted thiophene moiety. (C) 2012 Elsevier B.V. All rights reserved.

Relevance: 20.00%

Abstract:

A moving-magnet linear motor compressor, or pressure wave generator (PWG), of 2 cc swept volume with a dual opposed piston configuration has been developed to operate miniature pulse tube coolers. Preliminary experiments yielded a no-load cold end temperature of only 180 K. Auxiliary tests and detailed modeling of the PWG suggest that much of the PV power was lost as blow-by at the piston seals, due to a large, non-optimal clearance-seal gap between piston and cylinder. Experimental parameters simulated using Sage provide the optimum seal gap value for maximizing the delivered PV power.

Relevance: 20.00%

Abstract:

Mobile P2P technology provides a scalable approach for content delivery to a large number of users on their mobile devices. In this work, we study the dissemination of a single item of content (e.g., an item of news, a song, or a video clip) among a population of mobile nodes. Each node in the population is either a destination (interested in the content) or a potential relay (not yet interested in the content). There is an interest evolution process by which nodes not yet interested in the content (i.e., relays) can become interested (i.e., become destinations) on learning about the popularity of the content (i.e., the number of already interested nodes). In our work, interest in the content evolves under the linear threshold model. The content is copied between nodes when they make random contact, for which we employ a controlled epidemic spread model. We model the joint evolution of the copying process and the interest evolution process, and derive joint fluid-limit ordinary differential equations. We then study the selection of the parameters under the content provider's control to optimize various objective functions that aim at maximizing content popularity and efficient content delivery.
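A two-dimensional caricature of such a fluid limit, with assumed dynamics rather than the paper's exact equations, couples the fraction x of nodes holding the content (epidemic copying at a controlled contact rate) with the fraction y of interested nodes (interest growing with content popularity):

```python
# Sketch: joint fluid limit of content copying and interest evolution (assumed).
import numpy as np
from scipy.integrate import odeint

def fluid_limit(z, t, beta, alpha):
    x, y = z
    dx = beta * x * (1 - x)    # epidemic copying on random contacts
    dy = alpha * x * (1 - y)   # relays become destinations as popularity grows
    return [dx, dy]

t = np.linspace(0, 20, 200)
sol = odeint(fluid_limit, [0.01, 0.1], t, args=(1.0, 0.3))
print(f"final holders: {sol[-1, 0]:.2f}, final interested: {sol[-1, 1]:.2f}")
```

Sweeping the copying rate beta in this sketch is the analogue of the content provider tuning its control parameters against a delivery objective.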

Relevance: 20.00%

Abstract:

This article considers a class of deploy-and-search strategies for multi-robot systems and evaluates their performance. The application framework is the deployment of a system of autonomous mobile robots, equipped with the required sensors, in a search space to gather information. The lack of information about the search space is modelled as an uncertainty density distribution. The agents are deployed to maximize single-step search effectiveness. The centroidal Voronoi configuration, which achieves a locally optimal deployment, forms the basis for the sequential deploy and search (SDS) and combined deploy and search (CDS) strategies. Completeness results are provided for both search strategies. The deployment strategy is analysed in the presence of constraints on robot speed and limits on sensor range, for the convergence of trajectories under the corresponding control laws governing the motion of the robots. The SDS and CDS strategies are compared with standard greedy and random search strategies on the basis of the time taken to reduce the uncertainty density below a desired level. The simulation experiments reveal several important issues related to the dependence of the relative performance of the search strategies on parameters such as the number of robots, the speed of the robots, and their sensor range limits.
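The centroidal Voronoi computation at the heart of both strategies can be sketched with Lloyd's algorithm on a discretized search space (the Gaussian uncertainty density and all constants below are assumed): each grid cell is assigned to its nearest robot, and each robot moves to the density-weighted centroid of its cells.

```python
# Sketch: Lloyd iterations toward a centroidal Voronoi configuration.
import numpy as np

rng = np.random.default_rng(5)
g = np.linspace(0, 1, 50)
X, Y = np.meshgrid(g, g)
pts = np.column_stack([X.ravel(), Y.ravel()])
phi = np.exp(-20 * ((pts[:, 0] - 0.7)**2 + (pts[:, 1] - 0.3)**2))  # uncertainty density

robots = rng.random((5, 2))                # initial deployment of 5 robots
for _ in range(30):
    d = np.linalg.norm(pts[:, None, :] - robots[None, :, :], axis=2)
    owner = d.argmin(axis=1)               # Voronoi partition of the grid
    for i in range(len(robots)):
        w = phi[owner == i]
        if w.sum() > 0:                    # move to weighted centroid of own cells
            robots[i] = (pts[owner == i] * w[:, None]).sum(0) / w.sum()
print(robots)                              # robots cluster near the density peak
```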

Relevance: 20.00%

Abstract:

Most existing WCET estimation methods directly estimate the execution time, ET, in cycles. We propose to study ET as a product of two factors, ET = IC * CPI, where IC is the instruction count and CPI is the cycles per instruction. Estimating ET directly may lead to a highly pessimistic estimate, since these methods may implicitly be combining the worst-case IC with the worst-case CPI. We hypothesize that there exists a functional relationship between CPI and IC, such that CPI = f(IC). This is ascertained by computing the covariance matrix and studying the scatter plots of CPI versus IC. IC and CPI values are obtained by running benchmarks with a large number of inputs using the cycle-accurate architectural simulator Simplescalar on two different architectures. It is shown that the benchmarks can be grouped into different classes based on the CPI versus IC relationship. For some benchmarks, such as FFT and FIR, both IC and CPI are almost constant irrespective of the input. Other benchmarks exhibit a direct or an inverse relationship between CPI and IC; in such cases, one can predict CPI for a given IC as CPI = f(IC). We derive the theoretical worst-case IC for a program, denoted SWIC, using integer linear programming (ILP), and estimate WCET as SWIC * f(SWIC). However, if CPI decreases sharply with IC, then the measured maximum cycles is observed to be a better estimate. For certain other benchmarks, the CPI versus IC relationship is either random or CPI remains constant with varying IC. In such cases, WCET is estimated as the product of SWIC and the measured maximum CPI. It is observed that the proposed method results in tighter WCET estimates than Chronos, a static WCET analyzer, for most benchmarks on the two architectures considered in this paper.
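The estimation recipe for the direct/inverse-relationship class can be condensed into a short sketch on synthetic data (the linear f, the measurements, and SWIC are all assumed here): fit CPI = f(IC) from profiled runs, then evaluate WCET = SWIC * f(SWIC).

```python
# Sketch: WCET = SWIC * f(SWIC) with a fitted linear CPI-vs-IC model.
import numpy as np

rng = np.random.default_rng(6)
ic = rng.integers(10_000, 50_000, 100).astype(float)   # profiled instruction counts
cpi = 1.8 - 1e-5 * ic + rng.normal(0, 0.01, 100)       # synthetic inverse relation

slope, intercept = np.polyfit(ic, cpi, 1)              # CPI = f(IC), linear fit
swic = 60_000                        # assumed ILP-derived worst-case IC
wcet = swic * (slope * swic + intercept)
print(f"estimated WCET: {wcet:.0f} cycles")
```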