253 results for prediction problems
Abstract:
A new formulation is suggested for the fixed end-point regulator problem, which, in conjunction with the recently developed integration-free algorithms, provides an efficient means of obtaining numerical solutions to such problems.
Abstract:
Background: Dengue virus, along with other members of the family Flaviviridae, has re-emerged as a deadly human pathogen. Understanding the mechanistic details of these infections can be highly rewarding in developing effective antivirals. During maturation of the virus inside the host cell, the coat proteins E and M undergo conformational changes that alter the morphology of the viral coat. However, owing to the low resolution of the available 3-D structures of viral assemblies, the atomic details of these changes remain elusive. Results: In the present analysis, starting from the C-alpha positions of low-resolution cryo-electron microscopic structures, the residue-level details of the protein-protein interaction interfaces of the dengue virus coat proteins have been predicted. By comparing pre-existing structures of the virus in different phases of its life cycle, the changes taking place at these predicted protein-protein interaction interfaces were followed as a function of the maturation process of the virus. Besides revising the current notion that only homodimers are present in the mature viral coat, the present analysis indicated the presence of a proline-rich motif at the protein-protein interaction interface of the coat protein. Investigating the conservation status of these seemingly functionally crucial residues across other members of the family Flaviviridae enabled dissection of common mechanisms used by these viruses during infection. Conclusions: Thus, using a computational approach, the present analysis provides better insight into the pre-existing low-resolution structures of virus assemblies, and its findings can be used in designing effective antivirals against these deadly human pathogens.
Abstract:
The domination and Hamilton circuit problems are of interest both in algorithm design and in complexity theory. The domination problem has applications in facility location, and the Hamilton circuit problem has applications in routing problems in communications and operations research. The problem of deciding whether G has a dominating set of cardinality at most k, and the problem of determining whether G has a Hamilton circuit, are NP-complete. Polynomial-time algorithms are, however, available for a large number of restricted classes. A motivation for the study of these algorithms is that they not only give insight into the characterization of these classes but also require a variety of algorithmic techniques and data structures; the search for efficient algorithms for these problems on many classes therefore continues. A practically important and mathematically interesting class of perfect graphs is the class of permutation graphs. The domination problem is polynomial-time solvable on permutation graphs, but the algorithms already available have time complexity O(n^2) or more and space complexity O(n^2) on these graphs. The Hamilton circuit problem is open for this class. We present a simple O(n) time and O(n) space algorithm for the domination problem on permutation graphs. Unlike the existing algorithms, we use the concept of the geometric representation of permutation graphs. Further, exploiting this geometric notion, we develop an O(n^2) time and O(n) space algorithm for the Hamilton circuit problem.
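For background, the adjacency rule underlying the geometric (matching-diagram) representation of permutation graphs fits in a few lines: two vertices are adjacent exactly when the permutation inverts their order, i.e. when their segments cross in the diagram. The sketch below only illustrates this definition; it is not the paper's O(n) domination algorithm.

```python
def permutation_graph_edges(pi):
    """Edge set of the permutation graph of pi (a list of 1-indexed values).

    Vertices i < j are adjacent exactly when pi inverts their order,
    i.e. pi[i] > pi[j]; in the matching-diagram (geometric) representation
    this is precisely when the two line segments cross.
    """
    n = len(pi)
    return {(i + 1, j + 1)
            for i in range(n)
            for j in range(i + 1, n)
            if pi[i] > pi[j]}
```

For example, the permutation (2, 3, 1) yields the edges {(1, 3), (2, 3)}, while the identity permutation yields the empty (edgeless) graph.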
Analytical prediction of break-out noise from a reactive rectangular plenum with four flexible walls
Abstract:
This paper describes an analytical calculation of break-out noise from a rectangular plenum with four flexible walls by incorporating three-dimensional effects along with the acoustical and structural wave coupling phenomena. The break-out noise from rectangular plenums is important, and the coupling between acoustic waves within the plenum and structural waves in the flexible plenum walls plays a critical role in prediction of the transverse transmission loss. The first step in break-out noise prediction is to calculate the inside plenum pressure field and the normal flexible plenum wall vibration by using an impedance-mobility approach, which results in a compact matrix formulation. In the impedance-mobility compact matrix (IMCM) approach, it is presumed that the coupled response can be described in terms of finite sets of the uncoupled acoustic subsystem and the structural subsystem. The flexible walls of the plenum are modeled as an unfolded plate to calculate natural frequencies and mode shapes of the uncoupled structural subsystem. The second step is to calculate the radiated sound power from the flexible walls using the Kirchhoff-Helmholtz (KH) integral formulation. Analytical results are validated with finite element and boundary element (FEM-BEM) numerical models. (C) 2010 Acoustical Society of America. DOI: 10.1121/1.3463801
Abstract:
In this paper, a novel genetic algorithm is developed that generates artificial chromosomes with probability control to solve machine scheduling problems. Generating Artificial Chromosomes for Genetic Algorithm (ACGA) is closely related to Evolutionary Algorithms Based on Probabilistic Models (EAPM). The artificial chromosomes are generated by a probability model that extracts gene information from the current population. ACGA is a hybrid algorithm because it integrates both conventional genetic operators and a probability model. The ACGA proposed in this paper further employs the "evaporation concept" of Ant Colony Optimization (ACO) to solve the permutation flowshop problem. The evaporation concept is used to reduce the effect of past experience and to explore new alternative solutions. We propose three different methods for the probability of evaporation, which is applied as soon as a job is assigned to a position in the permutation flowshop problem. Experimental results show that our ACGA with the evaporation concept performs better than some algorithms in the literature.
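The probability-model step described above can be sketched as follows: a position-frequency matrix is estimated from the current population, and an artificial chromosome is sampled from it position by position, restricted to jobs not yet assigned. This is a simplified illustration of the general idea, not the paper's exact model; the function name, the uniform prior, and the omission of the evaporation discount are all assumptions of the sketch.

```python
import random

def artificial_chromosome(population, n_jobs, rng=random):
    """Sample one artificial chromosome from a position-frequency model.

    population : list of permutations (each a list of job indices 0..n_jobs-1)
    Evaporation (not shown) would discount these counts between generations
    to reduce the influence of past experience.
    """
    # Count how often each job occupies each position; a tiny prior
    # keeps every weight strictly positive.
    freq = [[1e-9] * n_jobs for _ in range(n_jobs)]
    for chrom in population:
        for pos, job in enumerate(chrom):
            freq[pos][job] += 1.0
    # Sample a permutation position by position, proportionally to the
    # counts, among the jobs that are still unassigned.
    remaining = set(range(n_jobs))
    child = []
    for pos in range(n_jobs):
        jobs = list(remaining)
        weights = [freq[pos][j] for j in jobs]
        job = rng.choices(jobs, weights=weights, k=1)[0]
        child.append(job)
        remaining.remove(job)
    return child
```

When the population is strongly concentrated, the sampled chromosome reproduces the dominant gene pattern; genetic operators then act on a mix of artificial and conventional offspring.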
Abstract:
In the present study, silver nanoparticles were rapidly synthesized at room temperature by treating silver ions with Citrus limon (lemon) extract. The effects of various process parameters, such as the reductant concentration, the mixing ratio of the reactants and the concentration of silver nitrate, were studied in detail. In the standardized process, 10^-2 M silver nitrate solution was interacted for 4 h with lemon juice (2% citric acid concentration and 0.5% ascorbic acid concentration) in the ratio of 1:4 (vol/vol). The formation of silver nanoparticles was confirmed by surface plasmon resonance, as determined by UV-Visible spectra in the range of 400-500 nm. X-ray diffraction analysis revealed the distinctive facets ((111), (200), (220), (222) and (311) planes) of silver nanoparticles. We found that citric acid was the principal reducing agent for the nanosynthesis process, and FT-IR spectral studies indicated citric acid as the probable stabilizing agent. Silver nanoparticles below 50 nm, of spherical and spheroidal shape, were observed by transmission electron microscopy. The correlation between absorption maxima and particle sizes was derived for different UV-Visible absorption maxima (corresponding to different citric acid concentrations) employing MiePlot v3.4. The theoretical particle size corresponding to 2% citric acid concentration was compared with those obtained by various experimental techniques, namely X-ray diffraction analysis, atomic force microscopy and transmission electron microscopy. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
Representation and quantification of uncertainty in climate change impact studies are difficult tasks. Several sources of uncertainty arise in studies of hydrologic impacts of climate change, such as those due to the choice of general circulation models (GCMs), scenarios and downscaling methods. Recently, much work has focused on uncertainty quantification and modeling in regional climate change impacts. In this paper, an uncertainty modeling framework is evaluated, which uses a generalized uncertainty measure to combine GCM, scenario and downscaling uncertainties. The Dempster-Shafer (D-S) evidence theory is used for representing and combining uncertainty from various sources. A significant advantage of the D-S framework over the traditional probabilistic approach is that it allows for the allocation of a probability mass to sets or intervals, and can hence handle both aleatory or stochastic uncertainty and epistemic or subjective uncertainty. This paper shows how the D-S theory can be used to represent beliefs in some hypotheses such as hydrologic drought or wet conditions, describe uncertainty and ignorance in the system, and give a quantitative measurement of belief and plausibility in results. The D-S approach has been used in this work for information synthesis using various evidence combination rules having different conflict modeling approaches. A case study is presented for hydrologic drought prediction using downscaled streamflow in the Mahanadi River at Hirakud in Orissa, India. Projections of n most likely monsoon streamflow sequences are obtained from a conditional random field (CRF) downscaling model, using an ensemble of three GCMs for three scenarios, which are converted to monsoon standardized streamflow index (SSFI-4) series. This range is used to specify the basic probability assignment (bpa) for a Dempster-Shafer structure, which represents uncertainty associated with each of the SSFI-4 classifications.
These uncertainties are then combined across GCMs and scenarios using various evidence combination rules given by the D-S theory. A Bayesian approach is also presented for this case study, which models the uncertainty in projected frequencies of SSFI-4 classifications by deriving a posterior distribution for the frequency of each classification, using an ensemble of GCMs and scenarios. Results from the D-S and Bayesian approaches are compared, and relative merits of each approach are discussed. Both approaches show an increasing probability of extreme, severe and moderate droughts and decreasing probability of normal and wet conditions in Orissa as a result of climate change. (C) 2010 Elsevier Ltd. All rights reserved.
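Dempster's rule of combination, the basic evidence-combination rule of the D-S theory referred to above, multiplies two basic probability assignments and renormalizes by the mass assigned to conflicting (disjoint) hypotheses. A minimal sketch in Python; the hypothesis labels and masses in the example are illustrative, not values from the case study.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments.

    m1, m2 : dicts mapping frozenset hypotheses (focal elements) to masses.
    The combined mass of A is the total product mass of pairs (B, C) with
    B & C == A, renormalized by 1 - K, where K is the conflict mass.
    """
    combined = {}
    conflict = 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc  # disjoint evidence contributes to K
    if conflict >= 1.0:
        raise ValueError("total conflict: the two sources are incompatible")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}
```

Because mass may sit on sets such as {drought, wet}, the rule can express ignorance that a single probability distribution cannot; belief and plausibility of a hypothesis are then lower and upper sums over the combined masses.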
Abstract:
The present article deals with the development of a finite element modelling approach for predicting the residual velocities of hard-core ogival-nose projectiles after normal impact on, and perforation of, mild steel target plates. The impact velocities for the cases analysed are in the range 818–866.3 m/s. The assessment of the finite element modelling and analysis includes a comprehensive mesh convergence study using shell elements for the target plates and solid elements for the jacketed projectiles, with a copper sheath and a rigid core. Dynamic analyses were carried out with the explicit contact-impact LS-DYNA 970 solver. It has been shown that a proper choice of element size and strain rate-based material modelling of the target plate are crucial for reproducing the test-based residual velocity. The present modelling procedure also leads to a realistic representation of target plate failure and projectile sheath erosion during perforation, and confirms earlier observations that thermal effects are not significant for impact problems within the ordnance range. To the best of our knowledge, no aspect of projectile failure or degradation obtained in simulation has been reported earlier in the literature. The validated simulation approach was applied to compute the ballistic limits and to study the effects of plate thickness and projectile diameter on residual velocity, and trends consistent with experimental data for similar situations were obtained.
Abstract:
In voiced speech analysis epochal information is useful in accurate estimation of pitch periods and the frequency response of the vocal tract system. Ideally, linear prediction (LP) residual should give impulses at epochs. However, there are often ambiguities in the direct use of LP residual since samples of either polarity occur around epochs. Further, since the digital inverse filter does not compensate the phase response of the vocal tract system exactly, there is an uncertainty in the estimated epoch position. In this paper we present an interpretation of LP residual by considering the effect of the following factors: 1) the shape of glottal pulses, 2) inaccurate estimation of formants and bandwidths, 3) phase angles of formants at the instants of excitation, and 4) zeros in the vocal tract system. A method for the unambiguous identification of epochs from LP residual is then presented. The accuracy of the method is tested by comparing the results with the epochs obtained from the estimated glottal pulse shapes for several vowel segments. The method is used to identify the closed glottis interval for the estimation of the true frequency response of the vocal tract system.
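The LP residual referred to above is the output of the inverse filter A(z) whose coefficients are estimated by linear prediction. The sketch below illustrates the standard autocorrelation method with the Levinson-Durbin recursion; it is a generic illustration of how the residual is computed, not the paper's epoch-identification method, and the function names are assumptions of the sketch.

```python
import numpy as np

def levinson(r, order):
    """Levinson-Durbin recursion: solve for the inverse-filter
    coefficients a (with a[0] = 1) from autocorrelations r[0..order]."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i]
        for j in range(1, i):
            acc += a[j] * r[i - j]
        k = -acc / err                # reflection coefficient
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)          # prediction-error energy update
    return a

def lp_residual(signal, order):
    """LP residual e[n] = sum_j a[j] s[n-j], i.e. s filtered by A(z)."""
    s = np.asarray(signal, dtype=float)
    r = np.array([np.dot(s[:len(s) - k], s[k:]) for k in range(order + 1)])
    a = levinson(r, order)
    return np.convolve(s, a, mode="full")[:len(s)]
```

For a signal that is well modelled by an all-pole filter, the residual is small except at the instants of excitation, which is exactly why it is examined for epoch (glottal closure) detection.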
Abstract:
An a priori error analysis of discontinuous Galerkin methods for a general elliptic problem is derived under a mild elliptic regularity assumption on the solution. This is accomplished by using some techniques from a posteriori error analysis. The model problem is assumed to satisfy a Gårding-type inequality. Optimal-order L^2-norm a priori error estimates are derived for an adjoint-consistent interior penalty method.
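For reference, a Gårding-type inequality for a bilinear form a(·,·) has the following general shape (the notation below is generic, not taken from the paper):

```latex
% Gårding-type inequality: coercivity up to a lower-order L^2 term.
% a(.,.) is the bilinear form of the elliptic problem; \alpha > 0 and
% \beta \ge 0 are constants independent of v.
a(v, v) \;\ge\; \alpha \,\| v \|_{H^1(\Omega)}^2 \;-\; \beta \,\| v \|_{L^2(\Omega)}^2
\qquad \text{for all } v \in H^1(\Omega).
```

This is weaker than full coercivity (the case β = 0), which is why additional arguments are needed to obtain optimal-order L^2 estimates.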
Abstract:
A performance prediction procedure is presented for low specific speed submersible pumps, together with a review of the loss models given in the literature. Most of the loss theories discussed in this paper are one-dimensional, and improvements are made through suitable empiricism so that the prediction covers the entire range of operation of low specific speed pumps. Loss correlations, particularly in the low-flow range, are discussed. The predicted shapes of the efficiency-capacity and total head-capacity curves agree well with experimental results over almost the full range of operating conditions. The approach adopted in the present analysis, of estimating the losses in the individual components of a pump, provides a means of improving performance and identifying problem areas in existing pump designs. The investigation also provides a basis for selecting parameters for the optimal design of pumps in which maximum efficiency is an important design parameter. The scope for improving the prediction procedure in light of the nature of the flow phenomena in the low-flow region is discussed in detail.
Abstract:
The performance of the 240 m² solar pond in Bangalore is discussed. The problems of erosion of the gradient zone and formation of internal convective zones are highlighted. The technique of passive salt addition is shown to be a viable alternative for salt recycling. Different techniques of heat extraction are discussed, and the use of an immersed copper heat exchanger is shown to be the most convenient. A two-zone model for prediction of the seasonal structure of the solar pond performance is proposed. The model is shown to simulate the observed seasonal variation of the temperature in the storage zone.
Abstract:
We consider functions that map the open unit disc conformally onto the complement of a bounded convex set; we call these functions concave univalent functions. In 1994, Livingston presented a characterization of these functions. In this paper, we observe that there is a minor flaw in this characterization. We obtain certain sharp estimates and the exact set of variability involving Laurent and Taylor coefficients of concave functions. We also present the exact set of variability of the linear combination of certain successive Taylor coefficients of concave functions.