12 results for alcohol-related problems

at Indian Institute of Science - Bangalore - India


Relevance:

100.00%

Abstract:

We consider the problem of determining if two finite groups are isomorphic. The groups are assumed to be represented by their multiplication tables. We present an O(n) algorithm that determines if two Abelian groups with n elements each are isomorphic. This improves upon the previous upper bound of O(n log n) [Narayan Vikas, An O(n) algorithm for Abelian p-group isomorphism and an O(n log n) algorithm for Abelian group isomorphism, J. Comput. System Sci. 53 (1996) 1-9] known for this problem. We solve a more general problem of computing the orders of all the elements of any group (not necessarily Abelian) of size n in O(n) time. Our algorithm for isomorphism testing of Abelian groups follows from this result. We use the property that our order-finding algorithm works for any group to design a simple O(n) algorithm for testing whether a group of size n, described by its multiplication table, is nilpotent. We also give an O(n) algorithm for determining if a group of size n, described by its multiplication table, is Abelian.
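The core subroutine is order finding over the multiplication table. The sketch below is a naive illustration, not the paper's O(n) algorithm: it computes each element's order by repeated multiplication (O(n^2) overall) and then uses the fact that two finite Abelian groups are isomorphic exactly when their multisets of element orders coincide. The table encoding (elements labelled 0..n-1, identity at 0) is our own assumption.

```python
from collections import Counter

def element_orders(table, identity=0):
    """Orders of all elements of a group given as a multiplication
    table: table[a][b] = a*b, elements labelled 0..n-1."""
    n = len(table)
    orders = []
    for g in range(n):
        x, order = g, 1
        while x != identity:      # keep multiplying by g until we reach e
            x = table[x][g]
            order += 1
        orders.append(order)
    return orders

def abelian_isomorphic(table1, table2):
    """Isomorphism test for finite Abelian groups: compare the
    multisets of element orders (a complete invariant in this case)."""
    return Counter(element_orders(table1)) == Counter(element_orders(table2))

# Z4 vs Z2 x Z2: same size, different order multisets -> not isomorphic.
z4 = [[(i + j) % 4 for j in range(4)] for i in range(4)]
z2xz2 = [[i ^ j for j in range(4)] for i in range(4)]
print(abelian_isomorphic(z4, z2xz2))  # False
```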

Relevance:

100.00%

Abstract:

Given an undirected unweighted graph G = (V, E) and an integer k ≥ 1, we consider the problem of computing the edge connectivities of all those (s, t) vertex pairs whose edge connectivity is at most k. We present an algorithm with expected running time Õ(m + nk^3) for this problem, where |V| = n and |E| = m. Our output is a weighted tree T whose nodes are the sets V_1, V_2, ..., V_l of a partition of V, with the property that the edge connectivity in G between any two vertices s ∈ V_i and t ∈ V_j, for i ≠ j, is equal to the weight of the lightest edge on the path between V_i and V_j in T. Also, two vertices s and t belong to the same V_i for any i if and only if they have an edge connectivity greater than k. Currently, the best algorithm for this problem needs to compute all-pairs min-cuts in an O(nk) edge graph; this takes Õ(m + n^{5/2} k min{k^{1/2}, n^{1/6}}) time. Our algorithm is much faster for small values of k; in fact, it is faster whenever k is o(n^{5/6}). Our algorithm yields the useful corollary that in Õ(m + nc^3) time, where c is the size of the global min-cut, we can compute the edge connectivities of all those pairs of vertices whose edge connectivity is at most αc for some constant α. We also present an Õ(m + n) Monte Carlo algorithm for the approximate version of this problem. This algorithm is applicable to weighted graphs as well. Our algorithm, with some modifications, also solves another problem called the minimum T-cut problem. Given T ⊆ V of even cardinality, we present an Õ(m + nk^3) algorithm to compute a minimum cut that splits T into two odd cardinality components, where k is the size of this cut.
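To see how the output tree T is consumed, the sketch below answers a connectivity query by finding the lightest edge on the tree path between the parts containing s and t, exactly as described above; the data layout (adjacency dict, part_of map) is illustrative, not from the paper.

```python
def connectivity_query(tree, part_of, s, t):
    """tree: dict mapping each node V_i of T to a list of
    (neighbour, weight) pairs; part_of: maps each vertex of G to the
    node V_i containing it. Returns the (s, t) edge connectivity when
    it is at most k, else None (s and t share a node of T)."""
    u, v = part_of[s], part_of[t]
    if u == v:
        return None  # connectivity exceeds k; T does not resolve it
    # DFS from u to v, tracking the minimum edge weight along the path.
    stack = [(u, None, float("inf"))]
    while stack:
        node, parent, min_w = stack.pop()
        if node == v:
            return min_w
        for nbr, w in tree[node]:
            if nbr != parent:
                stack.append((nbr, node, min(min_w, w)))
```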

Relevance:

80.00%

Abstract:

Lasers are very efficient in heating localized regions and hence find wide application in surface treatment processes. The surface of a material can be selectively modified to give superior wear and corrosion resistance. In laser surface-melting and welding problems, the high temperature gradient prevailing at the free surface induces a surface-tension gradient, which is the dominant driving force for convection (known as thermo-capillary or Marangoni convection). It has been reported that surface-tension driven convection plays a dominant role in determining the melt pool shape. In most of the earlier work on laser-melting and related problems, the finite difference method (FDM) has been used to solve the Navier-Stokes equations [1]. Since the Reynolds number is quite high in these cases, upwinding has been used. Though upwinding gives physically realistic solutions even on a coarse grid, the results are inaccurate. McLay and Carey have solved the thermo-capillary flow in welding problems by an implicit finite element method [2]. They used the conventional Galerkin finite element method (FEM), which requires that the pressure be interpolated one order lower than velocity (mixed interpolation). This restricts the choice of elements to certain higher order elements, which need numerical integration for the evaluation of element matrices. The implicit algorithm yields a system of nonlinear, unsymmetric equations which are not positive definite. Computations would be possible only with large mainframe computers. Sluzalec [3] has modeled the pulsed laser-melting problem by an explicit FEM. He used the six-node triangular element with mixed interpolation. Since he considered only buoyancy-induced flow, the velocity values are small. In the present work, an equal-order explicit FEM is used to compute the thermo-capillary flow in the laser surface-melting problem. As this method permits equal-order interpolation, there is no restriction on the choice of elements; even linear elements such as the three-node triangular element can be used. As the governing equations are solved in a sequential manner, the computer memory requirement is lower. The finite element formulation is discussed in this paper along with typical numerical results.
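For reference, the surface-tension driving force enters through the tangential stress balance at the free surface; a standard form of this boundary condition (written in our own notation, not reproduced from the paper) is:

```latex
% Thermocapillary (Marangoni) condition at the free surface: the shear
% stress is set by the surface-tension gradient, which the temperature
% gradient along the surface induces.
\mu \frac{\partial u}{\partial z}
  = \frac{\partial \sigma}{\partial x}
  = \frac{d\sigma}{dT}\,\frac{\partial T}{\partial x}
```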

Relevance:

80.00%

Abstract:

A variety of data structures such as the inverted file, multi-lists, quad tree, k-d tree, range tree, polygon tree, quintary tree, multidimensional tries, segment tree, doubly chained tree, the grid file, d-fold tree, super B-tree, Multiple Attribute Tree (MAT), etc. have been studied for multidimensional searching and related problems. Physical database organization, which is an important application of multidimensional searching, is traditionally and mostly handled by employing the inverted file. This study proposes the MAT data structure for bibliographic file systems, illustrating its superiority over the inverted file. Both methods are compared in terms of preprocessing, storage and query costs. A worst-case complexity analysis of both methods, for a partial match query, is carried out in two cases: (a) when the directory resides in main memory, and (b) when the directory resides in secondary memory. In both cases, the MAT data structure is shown to be more efficient than the inverted file method. Arguments are given to illustrate the superiority of the MAT data structure in the average case as well. An efficient adaptation of the MAT data structure, which exploits the special features of the MAT structure and of bibliographic files, is proposed for bibliographic file systems. In this adaptation, suitable techniques for fixing and ranking the attributes of the MAT data structure are proposed. Conclusions and proposals for future research are presented.
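As a point of reference for the comparison, the baseline inverted-file method answers a partial match query by intersecting posting lists, as in this minimal sketch (the record ids and attribute values are made up for illustration):

```python
from functools import reduce

# Inverted file: one posting list per (attribute, value) pair.
inverted = {
    "author:Knuth": {1, 4},
    "year:1996":   {2, 4},
    "lang:en":     {1, 2, 3, 4},
}

def partial_match(query_terms):
    """Intersect the posting lists of the attributes specified in the
    query; unspecified attributes are left unconstrained."""
    postings = [inverted[t] for t in query_terms]
    return reduce(set.intersection, postings) if postings else set()

print(partial_match(["author:Knuth", "year:1996"]))  # {4}
```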

Relevance:

80.00%

Abstract:

Balance and stability are very important for everyone, and especially for sportspersons who undergo extreme physical activity. Balance and stability exercises not only have a great impact on the performance of sportspersons but also play a pivotal role in their rehabilitation. It is therefore essential to know a sportsperson's balance and to quantify it. In this work, we propose a system consisting of a wobble board with a gyro-enhanced orientation sensor and a motion display that provides visual feedback to help sportspersons improve their stability. The display unit shows the orientation of the wobble board in real time, which can help the sportsperson apply the corrective forces needed to maintain a neutral position. The system is compact and portable. We also quantify balance and stability using power spectral density. The sportsperson is made to stand on the wobble board and the angular orientation of the board is recorded at 0.1 second intervals. The signal is analyzed using the discrete Fourier transform. The power of this signal is related to the stability of the subject. This procedure was used to measure the balance and stability of an elite cricket team. Representative results are shown below: Table 1 compares the power of two subjects, and Table 2 compares the power of the left and right legs of one subject. This procedure can also be used in clinical practice to monitor improvement in stability dysfunction of sportspersons with injuries or other related problems undergoing rehabilitation.
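A minimal sketch of the quantification step, under our reading of the abstract: the board angle sampled at 10 Hz (0.1 s intervals) is transformed with a DFT and the total signal power taken as the stability measure, lower power indicating a steadier subject. The synthetic signal below is illustrative only.

```python
import numpy as np

fs = 10.0                       # 0.1 s sampling interval -> 10 Hz
t = np.arange(0, 30, 1 / fs)    # a 30 s recording
# Placeholder sway signal: slow oscillation plus sensor noise.
angle = 2.0 * np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.random.randn(t.size)

spectrum = np.fft.rfft(angle - angle.mean())        # DFT of the sway
psd = (np.abs(spectrum) ** 2) / (fs * angle.size)   # periodogram estimate
total_power = psd.sum() * (fs / angle.size)         # integrate over frequency
print(f"total sway power: {total_power:.3f} deg^2")
```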

Relevance:

80.00%

Abstract:

This study presents an overview of seismic microzonation and existing methodologies, along with a newly proposed methodology covering all aspects. Earlier seismic microzonation methods focused on parameters that affect structure- or foundation-related problems, but seismic microzonation has generally been recognized as an important component of urban planning and disaster management. Seismic microzonation should therefore evaluate all possible earthquake hazards and represent them by their spatial distribution. This paper presents a new methodology for seismic microzonation generated from the location of the study area and its possible associated hazards. The new method consists of seven important steps with a defined output for each step, and these steps are linked with each other. Addressing one step and its result alone is not seismic microzonation, though this is widely practiced. This paper also presents the importance of geotechnical aspects in seismic microzonation and how they affect the final map. For the case study, seismic hazard values at rock level are estimated considering the seismotectonic parameters of the region, using deterministic and probabilistic seismic hazard analysis. Surface level hazard values are estimated considering a site-specific study and local site effects based on site classification/characterization. The liquefaction hazard is estimated using standard penetration test data. These hazard parameters are integrated in a Geographical Information System (GIS) using the Analytic Hierarchy Process (AHP) and used to estimate a hazard index. The hazard index is arrived at by following a multi-criteria evaluation technique, AHP, in which each theme and its features are assigned weights and then ranked according to a consensus opinion about their relative significance to the seismic hazard. The hazard values are integrated through spatial union to obtain the deterministic microzonation map and the probabilistic microzonation map for a specific return period. Seismological parameters are widely used for microzonation rather than geotechnical parameters, but these studies show that the hazard index values depend on site-specific geotechnical parameters.
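The AHP integration step can be pictured as a weighted overlay: each hazard theme carries an AHP weight and each feature within it a normalized rank, and the hazard index at a grid cell is their weighted sum. The theme names, weights and ranks below are placeholders, not the paper's values.

```python
# theme -> (AHP weight, normalized rank at this grid cell)
themes = {
    "peak_ground_acceleration": (0.35, 0.8),
    "site_amplification":       (0.30, 0.6),
    "liquefaction_potential":   (0.25, 0.4),
    "geology":                  (0.10, 0.5),
}

# Weighted-sum overlay for one cell; a GIS applies this per cell
# and unions the layers spatially to produce the microzonation map.
hazard_index = sum(w * r for w, r in themes.values())
print(f"hazard index: {hazard_index:.2f}")  # 0.61
```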

Relevance:

30.00%

Abstract:

PbS quantum dots capped with mercaptoethanol (C2H5OSH) have been synthesized in polyvinyl alcohol and used to investigate their photoluminescence (PL) response to various ions, namely zinc (Zn), cadmium (Cd), mercury (Hg), silver (Ag), copper (Cu), iron (Fe), manganese (Mn), cobalt (Co), chromium (Cr) and nickel (Ni). Enhancement of the PL intensity was observed with specific ions, namely Zn, Cd, Hg and Ag. Among these four ions, the PL response to Hg and Ag, even at sub-micromolar concentrations, was quite high compared to that of Zn and Cd. It was observed that a change in the Pb to S molar ratio has a profound effect on the sensitivity to these ions. These results indicate that the sensitivity of these QDs can be fine-tuned by controlling the S concentration at the surface. Contrary to the above, Cu quenched the photoluminescence. In ion probing with Cd-based QDs, both Hg and Cu were found to quench the PL; our PbS QDs, however, are quenched only by Cu ions. This was attributed to the formation of HgS at the surface, which has a higher bandgap than PbS. Another interesting property observed for PbS in PVA is a photo-brightening mechanism due to curing of the polymer by the laser. However, the presence of excess ions at the surface changes this to photo-darkening or photo-brightening, depending on the direction of the carrier transfer mechanism (from the QDs to the surface-adsorbed metal ions or vice versa), which is an interesting feature for metal ion detection.

Relevance:

30.00%

Abstract:

We propose a self-regularized pseudo-time marching strategy for ill-posed, nonlinear inverse problems involving the recovery of system parameters given partial and noisy measurements of the system response. While various regularized Newton methods are popularly employed to solve these problems, the resulting solutions are known to depend sensitively on the noise intensity in the data and on the regularization parameters, an optimal choice for which remains a tricky issue. Through limited numerical experiments on a couple of parameter reconstruction problems, one involving the identification of a truss bridge and the other related to imaging soft-tissue organs for early detection of cancer, we demonstrate the superior features of the pseudo-time marching schemes.
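One common way to set up such a scheme, sketched here in our own notation rather than as the authors' exact system, is to recast the inverse problem y = f(x) + noise as a flow in pseudo-time and march it forward, with regularization supplied by early stopping rather than an explicit penalty term:

```latex
% Pseudo-time flow for the measurement model y = f(x) + noise,
% with J the Jacobian of f; explicit marching with step \Delta\tau.
\frac{dx}{d\tau} = J(x)^{\top}\bigl(y - f(x)\bigr),
\qquad
x_{k+1} = x_k + \Delta\tau\, J(x_k)^{\top}\bigl(y - f(x_k)\bigr)
```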

Relevance:

30.00%

Abstract:

In this paper, a novel genetic algorithm is developed by generating artificial chromosomes with probability control to solve machine scheduling problems. Generating artificial chromosomes for a Genetic Algorithm (ACGA) is closely related to Evolutionary Algorithms Based on Probabilistic Models (EAPM). The artificial chromosomes are generated by a probability model that extracts gene information from the current population. ACGA is considered a hybrid algorithm because both the conventional genetic operators and a probability model are integrated. The ACGA proposed in this paper further employs the "evaporation concept" used in Ant Colony Optimization (ACO) to solve the permutation flowshop problem. The "evaporation concept" is used to reduce the effect of past experience and to explore new alternative solutions. In this paper, we propose three different methods for the probability of evaporation. This probability of evaporation is applied as soon as a job is assigned to a position in the permutation flowshop problem. Experimental results show that our ACGA with the evaporation concept gives better performance than some algorithms in the literature.
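A hedged sketch of the artificial-chromosome step with evaporation: a job-position frequency model is learned from the current population, decayed by an evaporation rate (as ACO decays pheromone), and sampled to build a new permutation. The function and parameter names are illustrative, not the paper's.

```python
import random

def build_probability_model(population, n_jobs, model, rho=0.1):
    """Update job-position frequencies; rho is the evaporation rate
    that discounts past experience."""
    for pos in range(n_jobs):
        for job in range(n_jobs):
            model[job][pos] *= (1.0 - rho)       # evaporation
    for chrom in population:
        for pos, job in enumerate(chrom):
            model[job][pos] += 1.0               # reinforcement
    return model

def artificial_chromosome(model, n_jobs):
    """Sample a permutation position by position, proportionally to
    the learned frequencies, without repeating jobs."""
    remaining = set(range(n_jobs))
    chrom = []
    for pos in range(n_jobs):
        jobs = list(remaining)
        weights = [model[j][pos] + 1e-9 for j in jobs]
        job = random.choices(jobs, weights=weights)[0]
        chrom.append(job)
        remaining.remove(job)
    return chrom

n = 5
model = [[1.0] * n for _ in range(n)]
population = [random.sample(range(n), n) for _ in range(10)]
model = build_probability_model(population, n, model)
print(artificial_chromosome(model, n))  # e.g. [3, 0, 4, 1, 2]
```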

Relevance:

30.00%

Abstract:

We report the effect of dual beam excitation on the photoluminescence (PL) from PbS quantum dots in polyvinyl alcohol, using two excitation lasers, namely an Ar+ laser (514.5 nm) and a He-Ne laser (670 nm). Both sources of excitation gave similar PL spectra, around 1.67 eV (related to shallow traps) and 1.1 eV (related to deep traps). When both lasers were used at the same time, we found that the PL induced by each laser was partly quenched by the illumination of the other. The proposed mechanism of this quenching effect involves traps that are populated by one specific laser excitation being photo-ionized by the presence of the other laser. Temperature-, laser intensity- and modulation frequency-dependent quenching efficiencies are presented in this paper. This reversible modulation has potential for optical switching and memory device applications.

Relevance:

30.00%

Abstract:

A general survey is presented of the generation and characteristics of Red Muds. The interrelationship between mud properties, their disposal and their utilisation is emphasised. After an outline of possible applications for Red Muds, the problems related to important (potential) uses are pointed out. Suggestions are incorporated on what needs to be done to promote the utilisation of Red Muds, particularly in the Indian context.

Relevance:

30.00%

Abstract:

Minimization problems with respect to a one-parameter family of generalized relative entropies are studied. These relative entropies, which we term relative α-entropies (denoted I_α), arise as redundancies under mismatched compression when cumulants of compressed lengths are considered instead of expected compressed lengths. These parametric relative entropies are a generalization of the usual relative entropy (Kullback-Leibler divergence). Just like relative entropy, the relative α-entropies behave like squared Euclidean distance and satisfy the Pythagorean property. Minimizers of these relative α-entropies on closed and convex sets are shown to exist. Such minimizations generalize the maximum Rényi or Tsallis entropy principle. The minimizing probability distribution (termed the forward I_α-projection) for a linear family is shown to obey a power-law. Other results in connection with statistical inference, namely subspace transitivity and iterated projections, are also established. In a companion paper, a related minimization problem of interest in robust statistics that leads to a reverse I_α-projection is studied.
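The Pythagorean property invoked above can be written as follows (our transcription of the standard statement, with E a closed convex set of distributions and P* the forward I_α-projection of Q on E):

```latex
P^{*} = \arg\min_{P \in E} I_{\alpha}(P, Q),
\qquad
I_{\alpha}(P, Q) \;\ge\; I_{\alpha}(P, P^{*}) + I_{\alpha}(P^{*}, Q)
\quad \text{for all } P \in E
```

As α → 1, I_α reduces to the Kullback-Leibler divergence and this inequality becomes the classical Pythagorean inequality for relative entropy.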