978 results for Safe Minimum Standard
Abstract:
We consider the problem of computing a minimum cycle basis in a directed graph G. The input to this problem is a directed graph whose arcs have positive weights. In this problem a {-1, 0, 1} incidence vector is associated with each cycle and the vector space over Q generated by these vectors is the cycle space of G. A set of cycles is called a cycle basis of G if it forms a basis for its cycle space. A cycle basis where the sum of the weights of the cycles is minimum is called a minimum cycle basis of G. The current fastest algorithm for computing a minimum cycle basis in a directed graph with m arcs and n vertices runs in O(m^(ω+1) n) time (where ω < 2.376 is the exponent of matrix multiplication). If one allows randomization, then an Õ(m³n) algorithm is known for this problem. In this paper we present a simple Õ(m²n) randomized algorithm for this problem. The problem of computing a minimum cycle basis in an undirected graph has been well studied. In this problem a {0, 1} incidence vector is associated with each cycle and the vector space over F₂ generated by these vectors is the cycle space of the graph. The fastest known algorithm for computing a minimum cycle basis in an undirected graph runs in O(m²n + mn² log n) time, and our randomized algorithm for directed graphs almost matches this running time.
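As a hedged illustration of the objects involved, the sketch below builds the {-1, 0, 1} incidence vectors of the fundamental cycles of a small directed graph from a spanning tree of its underlying undirected graph; these vectors form a cycle basis over Q, though not the minimum cycle basis that the paper's Õ(m²n) randomized algorithm computes. The example graph is an assumption for illustration.

```python
# Minimal sketch: build {-1, 0, 1} incidence vectors (over Q) for the
# fundamental cycles of a directed graph -- a cycle basis, though not
# necessarily a *minimum* one as computed by the paper's algorithm.

def fundamental_cycle_basis(n, arcs):
    """arcs: list of (u, v) directed arcs on vertices 0..n-1."""
    # Build a spanning forest of the underlying undirected graph (union-find).
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree, chords = [], []
    for idx, (u, v) in enumerate(arcs):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append(idx)
        else:
            chords.append(idx)
    # Adjacency over tree arcs (keep arc index and traversal sign).
    adj = {i: [] for i in range(n)}
    for idx in tree:
        u, v = arcs[idx]
        adj[u].append((v, idx, +1))
        adj[v].append((u, idx, -1))
    def tree_path(s, t):
        # DFS path s -> t in the spanning tree; returns [(arc index, sign)].
        stack, seen = [(s, [])], {s}
        while stack:
            x, path = stack.pop()
            if x == t:
                return path
            for y, idx, sgn in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append((y, path + [(idx, sgn)]))
        return []
    basis, m = [], len(arcs)
    for idx in chords:
        u, v = arcs[idx]
        vec = [0] * m
        vec[idx] = 1                       # traverse the chord u -> v
        for tidx, sgn in tree_path(v, u):  # close the cycle back to u
            vec[tidx] = sgn
        basis.append(vec)
    return basis

print(fundamental_cycle_basis(3, [(0, 1), (1, 2), (2, 0), (0, 2)]))
```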
Abstract:
The current paper suggests a new procedure for designing helmets for head impact protection for users such as motorcycle riders. According to the approach followed here, a helmet is mounted on a featureless Hybrid III headform of the kind used in assessing vehicles for compliance with the FMVSS 201 regulation in the USA for upper interior head impact safety. The requirement adopted in the latter standard, i.e. not exceeding a threshold HIC(d) limit of 1000, is applied in the present study as a likely criterion for adjudging the efficacy of helmets. An impact velocity of 6 m/s (13.5 mph) for the helmet-headform system striking a rigid target is likely to be acceptable for ascertaining a helmet's effectiveness as a countermeasure for minimizing the risk of severe head injury. The proposed procedure is demonstrated with the help of a validated LS-DYNA model of a featureless Hybrid III headform in conjunction with a helmet model comprising an outer polypropylene shell to the inner surface of which is bonded a protective polyurethane foam padding of a given thickness. Based on simulation results of impact on a rigid surface, it appears that a minimum foam padding thickness of 40 mm is necessary for obtaining an acceptable value of HIC(d).
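For orientation, the sketch below computes HIC from a synthetic acceleration-time history and converts it to HIC(d) using the widely quoted FMVSS 201 relation HIC(d) = 0.75446·HIC + 166.4; the pulse shape, duration, and peak value are illustrative assumptions, not outputs of the paper's LS-DYNA model.

```python
import numpy as np

# Minimal sketch of the HIC(d) criterion used as the pass/fail measure
# (HIC(d) <= 1000).  The acceleration trace is synthetic, and the
# 0.75446*HIC + 166.4 conversion is the widely quoted FMVSS 201 relation.

def hic(t, a_g, max_window=0.036):
    """Head Injury Criterion from time t [s] and resultant acceleration a_g [g]."""
    best = 0.0
    # Cumulative trapezoidal integral of acceleration.
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (a_g[1:] + a_g[:-1]) * np.diff(t))))
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            avg = (cum[j] - cum[i]) / dt
            best = max(best, dt * avg ** 2.5)
    return best

def hic_d(hic_value):
    return 0.75446 * hic_value + 166.4

# Synthetic half-sine pulse representing a helmeted head impact (illustrative only).
t = np.linspace(0.0, 0.015, 151)
a = 120.0 * np.sin(np.pi * t / 0.015)          # peak ~120 g over 15 ms
print(f"HIC = {hic(t, a):.0f}, HIC(d) = {hic_d(hic(t, a)):.0f}")
```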
Abstract:
We investigate the e⁺e⁻ → γγ process within the Seiberg-Witten expanded noncommutative standard model (NCSM) scenario in the presence of anomalous triple gauge boson couplings. This study is done with and without initial beam polarization, and we restrict ourselves to leading-order effects of noncommutativity, i.e. O(Θ). The noncommutative (NC) corrections are sensitive to the electric component (Θ_E) of the NC parameter. We include the effects of the Earth's rotation in our analysis. This study is done by investigating the effects of noncommutativity on different time-averaged cross section observables. We have also defined forward-backward asymmetries which will be exclusively sensitive to the anomalous couplings. We have looked into the sensitivity of these couplings at future experiments at the International Linear Collider (ILC). This analysis is done under realistic ILC conditions with center-of-mass energy √s = 800 GeV and integrated luminosity L = 500 fb⁻¹. The scale of noncommutativity is assumed to be Λ = 1 TeV. If no signal beyond the SM is seen, limits on the anomalous couplings of order 10⁻¹ are obtained from the forward-backward asymmetries, while much more stringent limits of order 10⁻² are obtained from the total cross section.
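A minimal sketch of how a forward-backward asymmetry is extracted from a differential cross section is given below; the toy dσ/dcosθ and the size of its asymmetric term are assumptions standing in for the O(Θ) NCSM expressions.

```python
import numpy as np

# Sketch of a forward-backward asymmetry extracted from a differential
# cross section, the kind of observable used to constrain the anomalous
# couplings.  The toy dsigma/dcos(theta) below is illustrative, not the
# O(Theta) NCSM expression.

def afb(dsigma_dcos, n=200_000):
    cth = np.linspace(-1.0, 1.0, n)
    w = dsigma_dcos(cth)
    dc = cth[1] - cth[0]
    forward = np.sum(w[cth >= 0]) * dc     # integral over cos(theta) in [0, 1]
    backward = np.sum(w[cth < 0]) * dc     # integral over cos(theta) in [-1, 0)
    return (forward - backward) / (forward + backward)

# Toy distribution: symmetric QED-like part plus a small asymmetric term
# whose coefficient stands in for an anomalous-coupling contribution.
print(afb(lambda c: (1.0 + c**2) + 0.05 * c))
```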
Abstract:
This article aims to obtain damage-tolerant designs with minimum weight for a laminated composite structure using a genetic algorithm. Damage tolerance due to impacts in a laminated composite structure is enhanced by dispersing the plies such that not too many adjacent plies have the same angle. The weight of the structure is minimized and the Tsai-Wu failure criterion is considered for the safe design. The design variables considered are the number of plies and the ply orientations. The influence of dispersed ply angles on the weight of the structure for given loading conditions is studied by varying the angles in the ranges 0°-45°, 0°-60° and 0°-90° at intervals of 5° and by using specific ply angles tailored to the loading conditions. A comparison study is carried out between the conventional stacking sequence and the stacking sequence with dispersed ply angles for damage-tolerant weight minimization, and some useful designs are obtained. That the unconventional stacking sequence is more damage tolerant than the conventional stacking sequence is demonstrated by performing finite element analysis under both tensile and compressive loading conditions. Moreover, a new mathematical function called the dispersion function is proposed to measure the dispersion of ply angles in a laminate. The approach of dispersing ply angles to achieve damage tolerance is especially suited to composite material design spaces which have multiple local minima.
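The paper's dispersion function is not spelled out in the abstract; the sketch below uses a hypothetical stand-in (mean angular separation of adjacent plies, normalized to 90°) to show how such a measure could enter a genetic-algorithm fitness that trades laminate weight against ply dispersion. All numbers and the penalty weighting are assumptions.

```python
import numpy as np

# Hedged sketch of how a ply-angle dispersion measure could enter a GA
# fitness for damage-tolerant weight minimization.  The paper defines its
# own dispersion function; the metric below is only a hypothetical stand-in.

def dispersion(stack_deg):
    stack = np.asarray(stack_deg, dtype=float)
    diffs = np.abs(np.diff(stack))
    return float(np.mean(diffs) / 90.0)    # 0 = adjacent plies alike, 1 = fully dispersed

def fitness(stack_deg, ply_weight=0.1, penalty=5.0):
    # Lower is better: lighter laminates score better, clustered stacks are penalized.
    return ply_weight * len(stack_deg) + penalty * (1.0 - dispersion(stack_deg))

conventional = [0, 0, 0, 45, 45, 45, 90, 90]
dispersed    = [0, 30, 55, 90, 15, 70, 40, 85]
print(fitness(conventional), fitness(dispersed))
```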
Abstract:
In this paper we study constrained maximum entropy and minimum divergence optimization problems, in the cases where integer-valued sufficient statistics exist, using tools from computational commutative algebra. We show that the estimation of parametric statistical models in this case can be transformed into solving a system of polynomial equations. We give an implicit description of maximum entropy models by embedding them in algebraic varieties, for which we give a Gröbner basis method of computation. In the case of minimum KL-divergence models we show that implicitization preserves specialization of the prior distribution. This result leads us to a Gröbner basis method to embed minimum KL-divergence models in algebraic varieties.
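A small sketch of implicitization via a Gröbner basis, in the spirit of embedding such models in algebraic varieties, is given below using SymPy. The toy model is the 2×2 independence model p_ij = a_i*b_j (an assumption for illustration); eliminating the parameters recovers its implicit equation p00*p11 - p01*p10 = 0.

```python
from sympy import symbols, groebner

# Sketch of implicitization via a Groebner basis: parametrize the 2x2
# independence model p_ij = a_i * b_j and eliminate the parameters with a
# lex ordering that ranks the parameters highest.

a0, a1, b0, b1, p00, p01, p10, p11 = symbols('a0 a1 b0 b1 p00 p01 p10 p11')

ideal = [p00 - a0*b0, p01 - a0*b1, p10 - a1*b0, p11 - a1*b1]
gb = groebner(ideal, a0, a1, b0, b1, p00, p01, p10, p11, order='lex')

# Basis elements free of the parameters generate the elimination (implicit) ideal.
implicit = [g for g in gb.exprs if not g.free_symbols & {a0, a1, b0, b1}]
print(implicit)        # expect [p00*p11 - p01*p10] up to sign
```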
Abstract:
Proper analysis for the safe design of a tailings earthen dam is necessary under static loading, and more so under earthquake conditions, to reduce damage to this important geotechnical structure. This paper presents both static and seismic analyses of a typical section of a tailings earthen dam constructed by the downstream method and located at a site in the eastern part of India to store non-radioactive nuclear waste material. The entire analysis is performed using the geotechnical software FLAC3D and TALREN 4. Results are obtained for various possible conditions of the reservoir to investigate the stability under both static and seismic loading conditions using the 1989 Loma Prieta earthquake acceleration-time history. The FLAC3D analyses indicate that the critical maximum displacement at the crest of the proposed tailings dam section is 5.5 cm under static loading but increases to about 16.24 cm under seismic loading. The slope stability analyses give a minimum factor of safety of 1.5 for seismic loading, compared with 2.31 for static loading. Amplification of the base seismic acceleration is also observed. The liquefaction potential analysis in FLAC3D indicates considerable loss of shear strength in the tailings portion of the proposed earthen dam section, with significantly high values of the pore pressure ratio.
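For reference, the pore pressure ratio mentioned above is commonly taken as r_u = (excess pore pressure)/(initial vertical effective stress), with values approaching 1 indicating near-total loss of shear strength; the sketch below applies this definition to illustrative numbers, not to outputs of the FLAC3D model.

```python
# Minimal sketch of the pore pressure ratio used to flag liquefaction:
# r_u = (excess pore water pressure) / (initial vertical effective stress).
# The numbers below are illustrative, not taken from the FLAC3D analysis.

def pore_pressure_ratio(excess_pwp_kpa, effective_stress_kpa):
    return excess_pwp_kpa / effective_stress_kpa

for u, sv in [(20.0, 180.0), (95.0, 110.0), (140.0, 150.0)]:
    r_u = pore_pressure_ratio(u, sv)
    print(f"r_u = {r_u:.2f} -> {'liquefaction likely' if r_u > 0.9 else 'stable'}")
```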
Abstract:
Image-guided diffuse optical tomography has the advantage of reducing the total number of optical parameters being reconstructed to the number of distinct tissue types identified by the traditional imaging modality, converting the optical image-reconstruction problem from underdetermined in nature to overdetermined. In such cases, the minimum number of required measurements may be far smaller than in traditional diffuse optical imaging. An approach for choosing these measurements optimally, based on a data-resolution matrix, is proposed, and it is shown that such a choice does not compromise the reconstruction performance.
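A hedged sketch of measurement selection via a data-resolution matrix is shown below for a generic overdetermined linear(ized) problem d = J m; the random Jacobian and the keep-the-largest-diagonal heuristic are assumptions, not the paper's exact selection procedure.

```python
import numpy as np

# Sketch of measurement selection via the data-resolution matrix for an
# overdetermined linear(ized) problem d = J m.  J below is a random stand-in
# for the image-guided DOT Jacobian, where the unknowns are one optical
# parameter per segmented tissue type.

rng = np.random.default_rng(0)
n_meas, n_tissue = 64, 4
J = rng.standard_normal((n_meas, n_tissue))

D = J @ np.linalg.pinv(J)          # data-resolution matrix, n_meas x n_meas
importance = np.diag(D)            # how independently each measurement is resolved

# Heuristic: keep the measurements with the largest diagonal entries.
n_keep = 16
keep = np.argsort(importance)[::-1][:n_keep]
print("selected measurement indices:", np.sort(keep))
```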
Abstract:
We investigate the effect of a prescribed tangential velocity on the drag force on a circular cylinder in a spanwise-uniform cross flow. Using a combination of theoretical and numerical techniques, we attempt to determine the optimal tangential velocity profiles that reduce the drag force acting on the cylindrical body while minimizing the net power consumption, characterized through a non-dimensional power loss coefficient (C_PL). A striking conclusion of our analysis is that the tangential velocity associated with the potential flow, which completely suppresses the drag force, is not optimal at either small or large (but finite) Reynolds numbers. When inertial effects are negligible (Re ≪ 1), theoretical analysis based on the two-dimensional Oseen equations gives the optimal tangential velocity profile which leads to energetically efficient drag reduction. Furthermore, in the limit of zero Reynolds number (Re → 0), minimum power loss is achieved for a tangential velocity profile corresponding to a shear-free perfect-slip boundary. At finite Re, results from numerical simulations indicate that perfect slip is not optimal and a further reduction in drag can be achieved for reduced power consumption. A gradual increase in the strength of a tangential velocity which involves only the first reflectionally symmetric mode leads to a monotonic reduction in drag and eventual thrust production. Simulations reveal the existence of an optimal strength for which the power consumption attains a minimum. At a Reynolds number of 100, the minimum value of the power loss coefficient (C_PL = 0.37) is obtained when the maximum tangential surface velocity is about one and a half times the free-stream uniform velocity, corresponding to a drag reduction of approximately 77%; C_PL = 0.42 and 0.50 for the perfect-slip and potential flow cases, respectively. Our results suggest that the potential flow tangential velocity enables energetically efficient propulsion at all Reynolds numbers but optimal drag reduction only for Re → ∞. The two-dimensional strategy of reducing drag while minimizing net power consumption is shown to be effective in three dimensions via numerical simulation of flow past an infinite circular cylinder at a Reynolds number of 300. Finally, a strategy for reducing drag, suitable for practical implementation and amenable to experimental testing, through piecewise-constant tangential velocities distributed along the cylinder periphery is proposed and analysed.
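A rough sketch of the power bookkeeping behind such a coefficient is given below: towing power (drag force times free-stream speed) plus the power spent driving the tangential surface velocity against the wall shear, normalized by ½ρU³D. The normalization, the synthetic surface distributions, and all numbers are assumptions; the paper's C_PL is defined in the paper itself.

```python
import numpy as np

# Hedged sketch of a net power-loss bookkeeping for a cylinder with a
# prescribed tangential surface velocity: towing power plus actuation power,
# normalized by 0.5*rho*U**3*D (an assumed normalization).

def power_loss_coefficient(drag_force, u_surf, tau_wall, ds, rho, U, D):
    towing_power = drag_force * U
    control_power = np.sum(u_surf * tau_wall * ds)   # actuation work rate on the surface
    return (towing_power + control_power) / (0.5 * rho * U**3 * D)

# Synthetic example per unit span; purely illustrative numbers.
rho, U, D = 1.0, 1.0, 1.0
theta = np.linspace(0.0, 2*np.pi, 360, endpoint=False)
ds = np.full_like(theta, np.pi*D / len(theta))        # arc-length elements
u_surf = 1.5*U*np.sin(theta)                          # first reflectionally symmetric mode
tau_wall = 0.05*np.sin(theta)                         # stand-in wall shear distribution
print(power_loss_coefficient(0.2, u_surf, tau_wall, ds, rho, U, D))
```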
Abstract:
Savitzky-Golay (S-G) filters are finite impulse response lowpass filters obtained while smoothing data using a local least-squares (LS) polynomial approximation. Savitzky and Golay proved in their hallmark paper that local LS fitting of polynomials and their evaluation at the mid-point of the approximation interval is equivalent to filtering with a fixed impulse response. The problem that we address here is how to choose a pointwise minimum mean squared error (MMSE) S-G filter length or order for smoothing, while preserving the temporal structure of a time-varying signal. We solve the bias-variance tradeoff involved in the MMSE optimization using Stein's unbiased risk estimator (SURE). We observe that the 3-dB cutoff frequency of the SURE-optimal S-G filter is higher where the signal varies fast locally, and vice versa, essentially enabling us to suitably trade off bias and variance, thereby resulting in near-MMSE performance. At low signal-to-noise ratios (SNRs), the performance of the adaptive filter length algorithm improves by incorporating a regularization term in the SURE objective function. We consider the algorithm performance on real-world electrocardiogram (ECG) signals. The results exhibit considerable SNR improvement. Noise performance analysis shows that the proposed algorithms are comparable, and in some cases better than, some standard denoising techniques available in the literature.
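The sketch below illustrates a global (rather than pointwise) version of the idea: for a fixed polynomial order, a SURE estimate selects the S-G window length, using the fact that for an FIR smoother the divergence term reduces to N times the filter's center tap. The test signal, noise level, and candidate window range are assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter, savgol_coeffs

# Sketch of SURE-based selection of a Savitzky-Golay window length (a global,
# not pointwise, version of the paper's idea).  SURE = MSE-to-data + 2*sigma^2
# * h_center - sigma^2, where h_center is the filter's center tap.

rng = np.random.default_rng(1)
N, sigma = 1024, 0.2
t = np.linspace(0, 1, N)
clean = np.sin(2*np.pi*3*t) + 0.5*np.sin(2*np.pi*17*t)
y = clean + sigma*rng.standard_normal(N)

polyorder = 3

def sure(window):
    y_hat = savgol_filter(y, window, polyorder)
    h0 = savgol_coeffs(window, polyorder)[window // 2]   # center tap of the FIR smoother
    return np.mean((y - y_hat)**2) + 2*sigma**2*h0 - sigma**2

windows = range(polyorder + 2, 101, 2)       # odd candidate window lengths
best = min(windows, key=sure)
print("SURE-optimal window length:", best)
```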
Abstract:
Type Ia supernovae, sparked off by exploding white dwarfs of mass close to the Chandrasekhar limit, play a key role in understanding the expansion rate of the Universe. However, recent observations of several peculiar type Ia supernovae argue for progenitor masses significantly above the Chandrasekhar limit. We show that strongly magnetized white dwarfs not only can violate the Chandrasekhar mass limit significantly, but exhibit a different mass limit. We establish from a foundational level that the generic mass limit of such white dwarfs is 2.58 solar masses. This explains the origin of overluminous peculiar type Ia supernovae. Our finding further argues for a possible second standard candle, which has many far-reaching implications, including a possible reconsideration of the expansion history of the Universe. DOI: 10.1103/PhysRevLett.110.071102
Abstract:
We consider the design of a linear equalizer with a finite number of coefficients in the context of a classical linear intersymbol-interference channel with additive Gaussian noise, with a focus on channel estimation. Previous literature has shown that Minimum Bit Error Rate (MBER) based detection outperforms Minimum Mean Squared Error (MMSE) based detection. We pose the channel estimation problem as a detection problem and propose a novel algorithm to estimate the channel based on the MBER framework for BPSK signals. It is shown that the proposed algorithm reduces the BER compared to MMSE-based channel estimation when used with either MMSE or MBER detection.
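For context, the sketch below implements the least-squares (MMSE-style) training-based channel estimate that such MBER schemes are typically compared against; the channel taps, training length, and SNR are illustrative assumptions, and the paper's MBER estimator itself is not reproduced.

```python
import numpy as np

# Sketch of a least-squares (MMSE-style) channel estimate for a linear ISI
# channel with additive Gaussian noise and BPSK training symbols.  This is
# the baseline estimator, not the paper's MBER algorithm.

rng = np.random.default_rng(2)
h_true = np.array([0.8, 0.5, 0.3])          # ISI channel taps (assumed)
L, n_train, snr_db = len(h_true), 200, 10

x = rng.choice([-1.0, 1.0], size=n_train)   # known BPSK training symbols
noise_std = 10 ** (-snr_db / 20.0)
y = np.convolve(x, h_true)[:n_train] + noise_std * rng.standard_normal(n_train)

# Build the regression matrix X of delayed training symbols so that y ~= X @ h.
X = np.column_stack([np.concatenate([np.zeros(k), x[:n_train - k]]) for k in range(L)])
h_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
print("true taps:", h_true, " estimated:", np.round(h_ls, 3))
```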
Abstract:
Subsurface lithology and seismic site classification of the Lucknow urban center, located in the central part of the Indo-Gangetic Basin (IGB), are presented based on detailed shallow subsurface investigations and borehole analysis. These are done by carrying out 47 seismic surface wave tests using multichannel analysis of surface waves (MASW) and by drilling 23 boreholes up to 30 m with standard penetration test (SPT) N values. Subsurface lithology profiles drawn from the drilled boreholes show low- to medium-compressibility clay and silty to poorly graded sand down to a depth of 30 m. In addition, deeper borehole records (depth >150 m) were collected from the Lucknow Jal Nigam (Water Corporation), Government of Uttar Pradesh, to understand the deeper subsoil stratification. Deeper boreholes in this paper refer to those with depth over 150 m. These records show the presence of clay mixed with sand and Kankar at some locations down to a depth of 150 m, followed by layers of sand, clay, and Kankar up to 400 m. Based on the available details, shallow and deep cross-sections through Lucknow are presented. Shear wave velocity (SWV) and N-SPT values were measured for the study area using MASW and SPT testing. Measured SWV and N-SPT values for the same locations were found to be comparable. These values were used to estimate 30 m average values of N-SPT (N30) and SWV (Vs30) for seismic site classification of the study area as per the National Earthquake Hazards Reduction Program (NEHRP) soil classification system. Based on the NEHRP classification, the entire study area falls into site classes C and D based on Vs30 and site classes D and E based on N30. The issue of larger amplification during future seismic events is highlighted for the major part of the study area which comes under site classes D and E. Also, the mismatch of site classes based on N30 and Vs30 raises the question of the suitability of the NEHRP classification system for the study region. Further, 17 sets of SPT and SWV data are used to develop a correlation between N-SPT and SWV. This represents a first attempt at seismic site classification and at correlating N-SPT and SWV in the Indo-Gangetic Basin.
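A minimal sketch of the Vs30 computation, Vs30 = 30 / Σ(d_i/Vs_i), and the corresponding NEHRP site class boundaries (E < 180 m/s, D 180-360 m/s, C 360-760 m/s, B 760-1500 m/s, A > 1500 m/s) is given below; the layer model is an assumption, not a Lucknow profile.

```python
import numpy as np

# Sketch of the 30 m time-averaged shear-wave velocity and its NEHRP site
# class, as used for MASW-derived profiles.  The layer model is illustrative.

def vs30(thickness_m, vs_mps):
    d, v = np.asarray(thickness_m, float), np.asarray(vs_mps, float)
    assert abs(d.sum() - 30.0) < 1e-6, "layers must span the top 30 m"
    return 30.0 / np.sum(d / v)

def nehrp_class(vs30_value):
    # NEHRP boundaries (m/s): E < 180, D 180-360, C 360-760, B 760-1500, A > 1500
    for cls, lo in [("A", 1500.0), ("B", 760.0), ("C", 360.0), ("D", 180.0)]:
        if vs30_value > lo:
            return cls
    return "E"

v = vs30([3, 7, 10, 10], [160, 220, 300, 400])
print(f"Vs30 = {v:.0f} m/s, NEHRP site class {nehrp_class(v)}")
```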
Abstract:
The low thermal diffusivity of adsorption beds induces a large thermal gradient across the cylindrical adsorbers used in adsorption cooling cycles. This reduces the concentration difference across which a thermal compressor operates. Slow adsorption kinetics, in conjunction with the void volume effect, further diminishes the throughput of such adsorption thermal compressors. The problem can be partially alleviated by increasing the desorption temperature. The theme of this paper is the determination of the minimum desorption temperature required for a given set of evaporating/condensing temperatures for an activated carbon + HFC 134a adsorption cooler. The calculation scheme is validated against experimental data. Results from a parametric analysis covering a range of evaporating/condensing/desorption temperatures are presented. It is found that the overall uptake efficiency and the Carnot COP characterize these bounds. A design methodology for adsorber sizing is evolved.
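For orientation, the sketch below evaluates the ideal three-temperature (Carnot) COP bound for a heat-driven chiller and a commonly quoted rule-of-thumb threshold desorption temperature, T_des,min ≈ T_cond²/T_evap (absolute temperatures); neither is the paper's validated calculation scheme, and the temperatures used are assumptions.

```python
# Sketch of the ideal three-temperature bound for a heat-driven adsorption
# chiller: a Carnot engine between T_des and T_cond driving a Carnot
# refrigerator between T_evap and T_cond.  The threshold T_des_min is a
# commonly quoted rule of thumb, not the paper's calculation scheme.

def carnot_cop(t_evap, t_cond, t_des):
    """Temperatures in kelvin."""
    return (1.0 - t_cond / t_des) * t_evap / (t_cond - t_evap)

def min_desorption_temperature(t_evap, t_cond):
    return t_cond ** 2 / t_evap            # rule-of-thumb approximation

T_evap, T_cond = 278.0, 308.0              # 5 C evaporator, 35 C condenser (assumed)
T_des_min = min_desorption_temperature(T_evap, T_cond)
print(f"approximate minimum desorption temperature: {T_des_min - 273.15:.1f} C")
print(f"Carnot COP at T_des = 90 C: {carnot_cop(T_evap, T_cond, 363.0):.2f}")
```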
Abstract:
Future space-based gravity wave (GW) experiments such as the Big Bang Observatory (BBO), with their excellent projected one-sigma angular resolution, will measure the luminosity distance to a large number of GW sources to high precision, and the redshifts of the single galaxies in the narrow solid angles towards the sources will provide the redshifts of the gravity wave sources. One-sigma BBO beams contain the actual source in only 68% of the cases; the beams that do not contain the source may contain a spurious single galaxy, leading to misidentification. To increase the probability of the source falling within the beam, larger beams have to be considered, decreasing the chances of finding single galaxies in the beams. Saini et al. [T.D. Saini, S.K. Sethi, and V. Sahni, Phys. Rev. D 81, 103009 (2010)] argued, largely analytically, that identifying even a small number of GW source galaxies furnishes a rough distance-redshift relation, which could be used to further resolve sources that have multiple objects in the angular beam. In this work we further develop this idea by introducing a self-calibrating iterative scheme which works in conjunction with Monte Carlo simulations to determine the luminosity distance to GW sources with progressively greater accuracy. This iterative scheme allows one to determine the equation of state of dark energy to within an accuracy of a few percent for a gravity wave experiment possessing a beam width an order of magnitude larger than BBO (and therefore having a far poorer angular resolution). This is achieved with no prior information about the nature of dark energy from other data sets such as type Ia supernovae, baryon acoustic oscillations, the cosmic microwave background, etc. DOI: 10.1103/PhysRevD.87.083001
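The distance-redshift relation that such a scheme calibrates can be sketched as below for a flat wCDM cosmology, d_L(z) = (1+z)·c·∫₀ᶻ dz′/H(z′); the cosmological parameter values are illustrative assumptions.

```python
import numpy as np

# Sketch of the luminosity distance - redshift relation for flat wCDM:
# H(z) = H0 * sqrt(Om*(1+z)^3 + (1-Om)*(1+z)^(3*(1+w))),
# d_L(z) = (1+z) * c * integral_0^z dz'/H(z').  Parameter values are assumed.

C_KM_S = 299792.458            # speed of light, km/s

def hubble(z, h0=70.0, omega_m=0.3, w=-1.0):
    return h0 * np.sqrt(omega_m * (1 + z) ** 3 + (1 - omega_m) * (1 + z) ** (3 * (1 + w)))

def luminosity_distance(z, n=2000, **cosmo):
    zs = np.linspace(0.0, z, n)
    integrand = 1.0 / hubble(zs, **cosmo)
    # Trapezoidal comoving distance in Mpc.
    comoving = C_KM_S * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zs))
    return (1 + z) * comoving

for z in (0.5, 1.0, 2.0):
    print(f"z = {z}: d_L = {luminosity_distance(z):.0f} Mpc")
```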
Abstract:
We address the question: does a system A being entangled with another system B put any constraints on the Heisenberg uncertainty relation (or the Schrödinger-Robertson inequality)? We find that equality in the uncertainty relation cannot be reached for any two noncommuting observables in finite-dimensional Hilbert spaces if the Schmidt rank of the entangled state is maximal. One consequence is that the lower bound of the uncertainty relation can never be attained for any two observables for qubits, if the state is entangled. For infinite-dimensional Hilbert spaces too, we show that there is a class of physically interesting entangled states for which no two noncommuting observables can attain the minimum uncertainty equality.
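A quick numerical check of the qubit statement is sketched below: for the maximally entangled Bell state, the reduced state of either subsystem is maximally mixed, so the Robertson bound |⟨[A,B]⟩|²/4 vanishes while the product of variances stays strictly positive for generic observables. The random observables are assumptions used only for the demonstration.

```python
import numpy as np

# Numerical check for qubits: with the Bell state (|00> + |11>)/sqrt(2),
# the reduced state is I/2, so Var(A)*Var(B) > |<[A,B]>|^2 / 4 strictly
# for generic (non-identity) observables on that subsystem.

rng = np.random.default_rng(3)

def random_hermitian(d=2):
    m = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return (m + m.conj().T) / 2

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(bell, bell.conj())
rho_a = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # partial trace over B -> I/2

def variance(op, state):
    mean = np.trace(state @ op).real
    return np.trace(state @ op @ op).real - mean ** 2

A, B = random_hermitian(), random_hermitian()
lhs = variance(A, rho_a) * variance(B, rho_a)
rhs = abs(np.trace(rho_a @ (A @ B - B @ A))) ** 2 / 4     # vanishes for rho_a = I/2
print(f"Var(A)Var(B) = {lhs:.4f} > bound = {rhs:.4f}")
```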