875 results for Probability of choice
Abstract:
Uncertainty in material properties and traffic characterization has led to significant efforts in recent years to incorporate reliability methods and probabilistic procedures into the design, rehabilitation, and maintenance of flexible pavements. In the mechanistic-empirical (ME) design of pavements, although multiple failure modes exist, the design criteria applied in the majority of analytical pavement design methods guard only against fatigue cracking and subgrade rutting, which are usually treated as independent failure events. This study carries out a reliability analysis of a flexible pavement section for these failure criteria using the first-order reliability method (FORM), the second-order reliability method (SORM), and crude Monte Carlo simulation. Through a sensitivity analysis, the surface layer thickness was identified as the most critical parameter affecting design reliability for both the fatigue and rutting failure criteria. However, reliability analysis in pavement design is most useful if it can be applied efficiently and accurately to the components of pavement design and to the combination of these components in an overall system analysis. The study shows that, for the pavement section considered, there is a high degree of dependence between the two failure modes, and demonstrates that the probability of simultaneous occurrence of failures can be almost as high as the probability of the component failures. This highlights the need to consider system reliability in pavement analysis: improving pavement performance should be tackled by reducing the undesirable event of simultaneous failure, not merely by considering the more critical failure mode.
Furthermore, this probability of simultaneous failure is seen to increase considerably with small increments in the mean traffic loads, which also results in wider system reliability bounds. The study also advocates the use of narrow bounds on the probability of failure, which provide a better estimate of the probability of failure, as validated against results from Monte Carlo simulation (MCS).
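The dependence between failure modes described above can be illustrated with a crude Monte Carlo sketch. The two limit states, their distributions, and all parameter values below are hypothetical stand-ins for the fatigue and rutting criteria, chosen only so that both failures share a common random variable (playing the role of the surface layer thickness):

```python
import random

def mc_failure_probabilities(n_samples=20000, seed=0):
    """Crude Monte Carlo estimate of component and joint failure
    probabilities for two dependent limit states (hypothetical
    stand-ins for fatigue cracking and subgrade rutting)."""
    rng = random.Random(seed)
    n_fat = n_rut = n_both = 0
    for _ in range(n_samples):
        # A shared random variable (here acting as surface layer
        # thickness) couples the two limit states.
        h = rng.gauss(150.0, 15.0)   # thickness in mm (assumed)
        e = rng.gauss(0.0, 5.0)      # independent model scatter (assumed)
        fatigue_fails = (h + e) < 130.0    # limit state g_fatigue <= 0
        rutting_fails = (h - e) < 128.0    # limit state g_rutting <= 0
        n_fat += fatigue_fails
        n_rut += rutting_fails
        n_both += fatigue_fails and rutting_fails
    return n_fat / n_samples, n_rut / n_samples, n_both / n_samples
```

Because both limit states depend on the same thickness sample, the estimated joint failure probability far exceeds the product of the component probabilities, which is the kind of dependence the abstract reports.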
Abstract:
Background: Taxol (generic name paclitaxel), a plant-derived antineoplastic agent used widely against breast, ovarian, and lung cancer, was originally isolated from the bark of the Pacific yew, Taxus brevifolia. The limited supply of the drug has prompted efforts to find alternative sources, such as chemical synthesis and tissue and cell cultures of Taxus species, both of which are expensive and give low yields. Fermentation processes with microorganisms would be the method of choice to lower costs and increase yields. We previously reported that F. solani isolated from T. celebica produces taxol and its precursor baccatin III in liquid-grown cultures (J Biosci 33: 259-67, 2008). This study was performed to evaluate the inhibition of proliferation and induction of apoptosis in cancer cell lines by the fungal taxol and fungal baccatin III of F. solani isolated from T. celebica. Methods: Cell lines including HeLa, HepG2, Jurkat, Ovcar3, and T47D were cultured individually and treated with fungal taxol or baccatin III, with or without caspase inhibitors, according to experimental requirements. Their efficacy in inducing apoptosis was examined. Results: Both fungal taxol and baccatin III inhibited cell proliferation of a number of cancer cell lines, with IC50 values ranging from 0.005 to 0.2 μM for fungal taxol and 2 to 5 μM for fungal baccatin III. They also induced apoptosis in JR4-Jurkat cells, with a possible involvement of anti-apoptotic Bcl2 and loss of mitochondrial membrane potential; this induction was unaffected by inhibitors of caspase-9, -2, or -3 but was prevented in the presence of a caspase-10 inhibitor. DNA fragmentation was also observed in cells treated with fungal taxol and baccatin III. Conclusions: The cytotoxic activity exhibited by fungal taxol and baccatin III involves the same mechanism, dependent on caspase-10 and loss of mitochondrial membrane potential, with taxol having far greater cytotoxic potential.
Abstract:
The industrial production and commercial application of titanium dioxide nanoparticles have increased considerably in recent times, raising the probability of environmental contamination with these agents and of their adverse effects on living systems. This study was designed to assess the genotoxic potential of TiO2 NPs at high exposure concentrations, their bio-uptake, and the oxidative stress they generate, a recognised cause of genotoxicity. Allium cepa root tips were treated with TiO2 NP dispersions at four concentrations (12.5, 25, 50, and 100 μg/mL). A dose-dependent decrease in the mitotic index (from 69 to 21) and an increase in the number of distinctive chromosomal aberrations were observed. Optical, fluorescence, and confocal laser scanning microscopy revealed chromosomal aberrations, including chromosomal breaks; sticky, multipolar, and laggard chromosomes; and micronucleus formation. The chromosomal aberrations and DNA damage were also validated by the comet assay. The bio-uptake of TiO2 in particulate form was the key cause of reactive oxygen species generation, which in turn was probably the cause of the DNA aberrations and genotoxicity observed in this study.
Abstract:
A formal synthesis of the actin-binding macrolide rhizopodin was achieved in 19 steps (longest linear sequence). The key features of the synthesis include a stereoselective Mukaiyama aldol reaction, the dual role of a Nagao auxiliary (first as the chiral auxiliary of choice for installing hydroxy stereocenters and later as an acylating agent to form an amide bond with an amino alcohol), late-stage oxazole formation, and Stille coupling reactions.
Abstract:
Northeast India is one of the most seismically active regions in the world, experiencing on average more than seven earthquakes of magnitude 5.0 and above per year. Reliable seismic hazard assessment can provide the necessary design inputs for earthquake-resistant design of structures in this region. In this study, both deterministic and probabilistic methods were applied to the seismic hazard assessment of Tripura and Mizoram states at bedrock level. An updated earthquake catalogue was compiled from various national and international seismological agencies for the period from 1731 to 2011. Homogenization, declustering, and data completeness analysis of the events were carried out before hazard evaluation. Seismicity parameters were estimated using the Gutenberg-Richter (G-R) relationship for each source zone. Based on seismicity, tectonic features, and fault rupture mechanism, the region was divided into six major subzones. Region-specific correlations were used for magnitude conversion to homogenize earthquake size. Ground motion equations (Atkinson and Boore 2003; Gupta 2010) were validated against observed PGA (peak ground acceleration) values before use in the hazard evaluation. The hazard is estimated using linear sources identified in and around the study area. Results are presented in the form of PGA using both DSHA (deterministic seismic hazard analysis) and PSHA (probabilistic seismic hazard analysis) with 2% and 10% probability of exceedance in 50 years, and as spectral acceleration (T = 0.2 s and 1.0 s) for both states (2% probability of exceedance in 50 years). The results provide important inputs for planning risk reduction strategies, developing risk acceptance criteria, and financial analysis of possible damages in the study area, with a comprehensive analysis and higher-resolution hazard mapping.
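The exceedance probabilities quoted above map to return periods under the usual Poisson occurrence assumption; a minimal sketch (the function name is mine):

```python
import math

def return_period(p_exceed, t_years):
    """Return period implied by a probability of exceedance p_exceed
    over an exposure time t_years, assuming Poisson occurrences:
    p_exceed = 1 - exp(-t_years / T)  =>  T = -t_years / ln(1 - p_exceed)."""
    return -t_years / math.log(1.0 - p_exceed)
```

The two standard PSHA levels follow directly: a 10% probability of exceedance in 50 years corresponds to a return period of about 475 years, and 2% in 50 years to about 2475 years.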
Abstract:
Strong atmospheric turbulence is a major hindrance in wireless optical communication systems. In this paper, the performance of a wireless optical communication system is analyzed for different modulation formats: binary phase shift keying-subcarrier intensity modulation (BPSK-SIM), differential phase shift keying (DPSK), differential phase shift keying-subcarrier intensity modulation (DPSK-SIM), M-ary pulse position modulation (M-PPM), and polarization shift keying (PolSK). The atmospheric channel is modeled for strong turbulence with the combined effect of turbulence and pointing errors. Novel closed-form analytical expressions for the average bit error rate (BER), channel capacity, and outage probability are derived for the various modulation techniques, viz. BPSK-SIM, DPSK, DPSK-SIM, PolSK, and M-PPM. The simulated results for BER, channel capacity, and outage probability of the various modulation techniques are plotted and analyzed. (C) 2014 Elsevier GmbH. All rights reserved.
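As a building block for such BER expressions, the conditional (turbulence-free) bit error rate of BPSK in additive white Gaussian noise is Q(sqrt(2·SNR)); averaging it over an irradiance fading distribution would give the turbulent-channel average BER. A minimal sketch of the building block only, not the paper's closed forms:

```python
import math

def q_function(x):
    """Gaussian Q-function, expressed via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_bpsk_awgn(snr_linear):
    """Conditional BER of coherent BPSK in AWGN: Q(sqrt(2 * SNR)).
    In a turbulent channel, the average BER is this expression averaged
    over the fading (irradiance) distribution, which is not modeled here."""
    return q_function(math.sqrt(2.0 * snr_linear))
```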
Abstract:
Developments in statistical extreme value theory, which allow non-stationary modeling of changes in the frequency and severity of extremes, are explored to analyze changes in drought return levels for the Colorado River. Transient future return levels (conditional quantiles), derived from regional drought projections using appropriate extreme value models, are compared with those from observed naturalized streamflows. The time of detection is computed as the time at which significant differences exist between the observed and future extreme drought levels, accounting for the uncertainties in their estimates. Projections from multiple climate model-scenario combinations are considered; no uniform pattern of changes in drought quantiles is observed across the projections. While some projections indicate a shift to another stationary regime, for many projections that are found to be non-stationary, detection of change in the tail quantiles of droughts occurs within the 21st century, with no unanimity in the time of detection. Earlier detection is observed for drought levels with a higher probability of exceedance. (C) 2014 Elsevier Ltd. All rights reserved.
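The transient (conditional) return levels described above can be sketched with a generalized extreme value (GEV) quantile whose location parameter drifts in time; the linear trend form below is an illustrative assumption, not the paper's fitted model:

```python
import math

def gev_return_level(mu, sigma, xi, return_period):
    """Return level z_T of a GEV(mu, sigma, xi) distribution: the
    quantile exceeded on average once per `return_period` blocks."""
    p = 1.0 / return_period           # exceedance probability per block
    y = -math.log(1.0 - p)            # Gumbel reduced variate
    if abs(xi) < 1e-9:                # Gumbel (xi -> 0) limit
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

def transient_return_level(mu0, mu_trend, sigma, xi, t, return_period):
    """Non-stationary variant: the location parameter drifts linearly
    with time t, giving a conditional return level for each year."""
    return gev_return_level(mu0 + mu_trend * t, sigma, xi, return_period)
```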
Abstract:
During the early stages of operation, high-tech startups need to overcome the liability of newness and manage a high degree of uncertainty. Many high-tech startups fail due to an inability to deal with skeptical customers, underdeveloped markets, and limited resources when selling an offering that has no precedent. This paper leverages the principles of effectuation (a logic of entrepreneurial decision making under uncertainty) to explain the journey from creation to survival of high-tech startups in an emerging economy. Based on the 99tests.com case study, this paper suggests that early-stage high-tech startups in emerging economies can increase their probability of survival by adopting the principles of effectuation.
Abstract:
We show that in studies of light quark- and gluon-initiated jet discrimination, it is important to include information on softer reconstructed jets (associated jets) around a primary hard jet. This is particularly relevant when adopting a small radius parameter for reconstructing hadronic jets. The probability of having an associated jet, as a function of the primary jet transverse momentum (pT) and radius, the minimum associated jet pT, and the association radius, is computed up to next-to-double-logarithmic accuracy (NDLA), and the predictions are compared with results from the Herwig++, Pythia6, and Pythia8 Monte Carlo generators (MCs). We demonstrate the improvement in quark-gluon discrimination obtained by using the associated jet rate variable with the help of a multivariate analysis. The associated jet rates are found to be only mildly sensitive to the choice of parton shower and hadronization algorithms, as well as to the effects of initial state radiation and the underlying event. In addition, the number of kt subjets of an anti-kt jet is found to be an observable that leads to a rather uniform prediction across different MCs, broadly in agreement with the NDLA predictions, compared with the often-used number-of-charged-tracks observable.
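A toy version of the associated-jet probability can be written at pure double-logarithmic accuracy, treating soft emissions as an independent (Poisson) process; the prefactor, coupling value, and default arguments below are illustrative assumptions and do not reproduce the paper's NDLA result:

```python
import math

def associated_jet_probability(pt_jet, pt_min, r_assoc, r_jet,
                               alpha_s=0.12, color_factor=4.0 / 3.0):
    """Toy double-logarithmic estimate: the mean number of soft emissions
    between the angular scales r_jet and r_assoc, and between the
    transverse momenta pt_min and pt_jet, scales as a product of two
    logarithms; for a Poisson emission count,
    P(at least one associated jet) = 1 - exp(-n_mean)."""
    n_mean = (2.0 * color_factor * alpha_s / math.pi) \
        * math.log(pt_jet / pt_min) * math.log(r_assoc / r_jet)
    return 1.0 - math.exp(-max(n_mean, 0.0))
```

Even this crude form reproduces the qualitative behaviour above: the associated-jet probability grows with the primary jet pT and with the association radius.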
Abstract:
The problem of determining the system reliability of randomly vibrating structures arises in many areas of engineering. In this paper we discuss approaches based on Monte Carlo simulation and laboratory testing to tackle problems of time-variant system reliability estimation. The strategy we adopt applies Girsanov's transformation to the governing stochastic differential equations, which enables estimation of the probability of failure with significantly fewer samples than a direct simulation study requires. Notably, we show that the ideas behind Girsanov-transformation-based Monte Carlo simulation can be extended to laboratory testing, allowing the system reliability of engineering structures to be assessed with fewer samples and hence reduced testing times. Illustrative examples include computational studies on a 10-degree-of-freedom nonlinear system model and laboratory/computational investigations of the road load response of an automotive system tested on a four-post test rig. (C) 2015 Elsevier Ltd. All rights reserved.
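The sample-efficiency gain from Girsanov's transformation can be illustrated in a static setting with its discrete analogue, importance sampling: sample from a shifted density under which failure is likely, and reweight each sample by the likelihood ratio. The toy limit state P(X > b) for a standard normal X is my choice, not the paper's example:

```python
import math
import random

def pf_direct(threshold, n, seed=0):
    """Direct Monte Carlo estimate of P(X > threshold), X ~ N(0, 1)."""
    rng = random.Random(seed)
    return sum(rng.gauss(0.0, 1.0) > threshold for _ in range(n)) / n

def pf_importance(threshold, n, seed=0):
    """Importance-sampling analogue of the Girsanov idea: draw from a
    proposal N(threshold, 1) centred on the failure boundary, and weight
    each sample by the likelihood ratio
    N(0,1)/N(threshold,1) = exp(-threshold*x + threshold**2 / 2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)                        # shifted proposal
        lr = math.exp(-threshold * x + 0.5 * threshold ** 2) # likelihood ratio
        total += lr * (x > threshold)
    return total / n
```

With the same number of samples, the reweighted estimator's variance is orders of magnitude smaller than the direct one's, mirroring the reduced sample counts reported in the study.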
Abstract:
We consider a quantum particle, moving on a lattice with a tight-binding Hamiltonian, which is subjected to measurements to detect its arrival at a particular chosen set of sites. The projective measurements are made at regular time intervals tau, and we consider the evolution of the wave function until the time a detection occurs. We study the probability of its first detection at some time and, conversely, the probability of it not being detected (i.e., surviving) up to that time. We propose a general perturbative approach for understanding the dynamics which maps the evolution operator, consisting of unitary transformations followed by projections, to one described by a non-Hermitian Hamiltonian. For several examples of a particle moving on one- and two-dimensional lattices with one or more detection sites, we use this approach to find exact expressions for the survival probability and find excellent agreement with direct numerical results. A mean-field model with hopping between all pairs of sites and detection at one site is solved exactly. For the one- and two-dimensional systems, the survival probability is shown to have a power-law decay with time, where the power depends on the initial position of the particle. Finally, we show an interesting and nontrivial connection between the dynamics of the particle in our model and the evolution of a particle under a non-Hermitian Hamiltonian with a large absorbing potential at some sites.
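The measurement protocol (a unitary step followed by a projection at the detector) is easy to simulate exactly for a two-site toy model, where the survival probability decays exponentially as cos(tau)^(2n) rather than with the power laws found for the lattices above; the two-site model is my choice, not the paper's:

```python
import math

def survival_probability(n_steps, tau=0.6):
    """Stroboscopic detection on two sites: evolve with the hopping
    unitary U = cos(tau) I + i sin(tau) X, then project out the detector
    site (site 1). Keeping the state unnormalized, the squared norm
    after n rounds equals the probability of no detection (survival)."""
    c, s = math.cos(tau), math.sin(tau)
    psi = [1.0 + 0j, 0.0 + 0j]        # particle starts on site 0
    for _ in range(n_steps):
        a, b = psi
        psi = [c * a + 1j * s * b,    # unitary hop
               1j * s * a + c * b]
        psi[1] = 0j                   # projection: no click at detector
    return abs(psi[0]) ** 2 + abs(psi[1]) ** 2
```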
Abstract:
Self-assembly of nano-sized particles during natural drying causes agglomeration and shell formation at the surface of micron-sized droplets. The shell undergoes a sol-gel transition, leading to buckling at the weakest point on the surface and producing different types of structures. Here we report manipulation of the buckling rate in an acoustically levitated single droplet by adding surfactant (sodium dodecyl sulphate, SDS) and salt (anilinium hydrochloride, AHC) to a nano-sized particle dispersion (nanosilica). Buckling in levitated droplets is a cumulative, complicated function of acoustic streaming, chemistry, agglomeration rate, porosity, radius of curvature, and the elastic energy of the shell. We put forward a hypothesis on how buckling occurs and how it can be suppressed during natural drying of the droplets. Global precipitation of aggregates due to slow drying of surfactant-added droplets (no added salt) enhances the rigidity of the shell formed and hence reduces the buckling probability of the shell. On the contrary, adsorption of SDS aggregates on salt ions facilitates the buckling phenomenon upon addition of a minute concentration of the aniline salt to the dispersion. Variation in the concentration of the added species (SDS/AHC) also leads to starkly different morphologies and transient buckling behaviour (buckling modes such as paraboloid and ellipsoid, and buckling rates). Tuning the buckling rate causes a transition in the final morphology from ring and bowl shapes to a cocoon-type structure. (C) 2015 AIP Publishing LLC.
Abstract:
UHV power transmission lines have a high probability of shielding failure due to their greater height, larger exposure area, and high operating voltage. Lightning upward-leader inception and propagation is an integral part of lightning shielding failure analysis and needs to be studied in detail. In this paper a model for lightning attachment is proposed based on present knowledge of lightning physics. Leader inception is modeled based on the corona charge present near the conductor region, and the propagation model is based on the correlation between the lightning-induced voltage on the conductor and the voltage drop along the upward leader channel. The inception model developed is compared with previous inception models, and the results obtained using the present and previous models are comparable. Lightning striking distances (final jump) were computed for various return-stroke currents and conductor heights. The computed striking distance values showed good correlation with values calculated using the equation proposed by the IEEE working group for the applicable conductor heights of up to 8 m. The model is applied to a 1200 kV AC power transmission line, and inception of the upward leader is analyzed for this configuration.
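The IEEE working-group striking-distance relation mentioned above is commonly written as S = 10 * I^0.65, with S in metres and the return-stroke current I in kiloamperes; a one-line helper:

```python
def striking_distance_m(current_ka):
    """IEEE working-group relation for lightning striking distance:
    S = 10 * I**0.65, with S in metres and I in kA."""
    return 10.0 * current_ka ** 0.65
```

For example, a 10 kA return stroke gives a striking distance of roughly 45 m, and the distance grows sublinearly with the stroke current.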
Abstract:
This paper proposes a probabilistic-prediction-based approach for providing Quality of Service (QoS) to delay-sensitive traffic in the Internet of Things (IoT). A joint packet scheduling and dynamic bandwidth allocation scheme is proposed to provide service differentiation and preferential treatment to delay-sensitive traffic. The scheduler focuses on reducing the waiting time of high-priority delay-sensitive services in the queue while keeping the waiting time of other services within tolerable limits. The scheme uses the difference between the probability of the average queue length of high-priority packets in the previous cycle and in the current cycle to determine the probability of the average weight required in the current cycle. This optimizes bandwidth allocation across all services by avoiding the assignment of excess resources to high-priority services while still guaranteeing their service. The performance of the algorithm is investigated using MPEG-4 traffic traces under different system loadings. The results show improved waiting times for scheduling high-priority packets while keeping waiting time and packet loss within tolerable limits for other services. Crown Copyright (C) 2015 Published by Elsevier B.V.
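The service-differentiation idea can be sketched as a weighted drain of per-class queues within one scheduling cycle; this is a generic weighted-allocation sketch with names of my choosing, not the paper's probabilistic-prediction algorithm:

```python
from collections import deque

def schedule_cycle(queues, weights, capacity):
    """Serve each traffic class a share of the cycle capacity in
    proportion to its weight, so high-priority (delay-sensitive) traffic
    drains faster while other classes still receive a guaranteed share."""
    active = [k for k in queues if queues[k]]
    total_w = sum(weights[k] for k in active)
    served = {}
    for name, q in queues.items():
        share = int(capacity * weights[name] / total_w) if total_w else 0
        n = min(len(q), share)      # cannot serve more than is queued
        for _ in range(n):
            q.popleft()             # transmit one packet
        served[name] = n
    return served
```

A dynamic version would adjust the weights each cycle from the observed queue lengths, which is the role the paper's probabilistic prediction plays.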
Abstract:
This study presents a comprehensive evaluation of five widely used multisatellite precipitation estimates (MPEs) against a 1° x 1° gridded rain gauge data set as ground truth over India. One decade of observations is used to assess the performance of the MPEs (the Climate Prediction Center (CPC)-South Asia data set, the CPC Morphing Technique (CMORPH), Precipitation Estimation From Remotely Sensed Information Using Artificial Neural Networks, the Tropical Rainfall Measuring Mission's Multisatellite Precipitation Analysis (TMPA-3B42), and the Global Precipitation Climatology Project). All MPEs have high rain-detection skill, with large probability of detection (POD) and small "missing" values. However, detection sensitivity differs from one product (and one region) to another. While CMORPH has the lowest sensitivity for detecting rain, CPC shows the highest sensitivity and often overdetects rain, as evidenced by a large POD and false alarm ratio and small missing values. All MPEs show higher rain sensitivity over eastern India than western India. These differential sensitivities are found to alter the biases in rain amount differently. All MPEs show similar spatial patterns of seasonal rain bias and root-mean-square error, but their spatial variability across India is complex and pronounced. The MPEs overestimate rainfall over the dry regions (northwest and southeast India) and severely underestimate it over mountainous regions (the west coast and northeast India), whereas the bias is relatively small over the core monsoon zone. A higher occurrence of virga rain due to subcloud evaporation and the possible missing of small-scale convective events by gauges over the dry regions are the main reasons for the observed overestimation of rain by the MPEs. The decomposed components of the total bias show that the major part of the overestimation is due to false precipitation.
The severe underestimation of rain along the west coast is attributed to the predominant occurrence of shallow rain and the underestimation of moderate to heavy rain by the MPEs. The decomposed components suggest that missed precipitation and hit bias are the leading error sources for the total bias along the west coast. All evaluation metrics are found to be nearly equal in the two contrasting monsoon seasons (southwest and northeast), indicating that the performance of the MPEs does not change with season, at least over southeast India. Among the MPEs, TMPA is found to perform better than the others, as it reproduces most of the spatial variability exhibited by the reference.
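The detection metrics used in evaluations like this come from the standard 2x2 rain/no-rain contingency table; a minimal sketch:

```python
def detection_skill(hits, misses, false_alarms):
    """Categorical skill scores for rain detection from a contingency
    table: probability of detection POD = hits / (hits + misses),
    false alarm ratio FAR = false_alarms / (hits + false_alarms),
    and frequency bias = (hits + false_alarms) / (hits + misses)."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    bias = (hits + false_alarms) / (hits + misses)
    return pod, far, bias
```

A product that "overdetects" rain, as CPC is described above, shows up here as a large POD paired with a large FAR and a frequency bias above 1.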