989 results for Absolute


Relevance:

10.00%

Publisher:

Abstract:

Adsorption, electrokinetic, microflotation, and flocculation studies have been carried out on sphalerite and galena minerals using extracellular polysaccharides (ECP) isolated from Bacillus polymyxa. The adsorption density of ECP onto galena is found to be higher than that onto sphalerite. The adsorption of ECP onto sphalerite is found to increase from pH 3 to about pH 7, where a maximum is attained, and thereafter continuously decreases. With respect to galena, the adsorption density of ECP steadily increases with increased pH. The addition of ECP correspondingly reduces the negative electrophoretic mobilities of sphalerite and galena in absolute magnitude without shifting their isoelectric points. However, the magnitude of the reduction in the electrophoretic mobility values is found to be greater for galena compared to that for sphalerite. Microflotation tests show that galena is depressed while sphalerite is floated using ECP in the entire pH range investigated. Selective flotation tests on a synthetic mixture of galena and sphalerite corroborate that sphalerite could be floated from galena at pH 9-9.5 using ECP as a depressant for galena. Flocculation tests reveal that in the pH range 9-11, sphalerite is dispersed and galena is flocculated in the presence of ECP. Dissolution tests indicate release of the lattice metal ions from galena and sphalerite, while co-precipitation tests confirm chemical interaction between lead or zinc ions and ECP. Fourier transform infrared spectroscopic studies provide evidence in support of hydrogen bonding and chemical interaction for the adsorption of ECP onto galena/sphalerite surfaces. (C) 2002 Elsevier Science (USA).


In computational molecular biology, the aim of restriction mapping is to locate the restriction sites of a given enzyme on a DNA molecule. Double digest and partial digest are two well-studied techniques for restriction mapping. While double digest is NP-complete, there is no known polynomial-time algorithm for partial digest. Another disadvantage of both techniques is that the reconstruction can have multiple solutions. In this paper, we study a simple technique called labeled partial digest for restriction mapping. We give a fast polynomial-time (O(n^2 log n) worst-case) algorithm for finding all n sites of a DNA molecule using this technique. An important advantage of the algorithm is the unique reconstruction of the DNA molecule from the digest. The technique is also robust in handling the errors in fragment lengths that arise in the laboratory. We give a robust O(n^4) worst-case algorithm that can provably tolerate an absolute error of O(Δ/n) (where Δ is the minimum inter-site distance), while still giving a unique reconstruction. We test our theoretical results by simulating the performance of the algorithm on a real DNA molecule. Motivated by the similarity to the labeled partial digest problem, we address a related problem of interest, the de novo peptide sequencing problem (ACM-SIAM Symposium on Discrete Algorithms (SODA), 2000, pp. 389-398), which arises in the reconstruction of the peptide sequence of a protein molecule. We give a simple and efficient algorithm for the problem without using dynamic programming. The algorithm runs in time O(k log k), where k is the number of ions, and is an improvement over the algorithm of Chen et al. (C) 2002 Elsevier Science (USA). All rights reserved.
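The uniqueness advantage of labeled partial digest can be illustrated with a minimal sketch (mine, not the paper's algorithm): in an idealized, error-free setting, each site contributes one labeled fragment whose length equals its distance from the labeled end, so sorting the labeled lengths recovers the map with no mirror-image ambiguity.

```python
def reconstruct_sites(labeled_lengths, total_length):
    """Reconstruct restriction-site positions from the lengths of labeled
    fragments, i.e. the distances from the labeled left end to each site.

    In the idealized error-free setting every site contributes exactly one
    labeled fragment, so the map is recovered uniquely by sorting -- unlike
    unlabeled partial digest, where mirror-image solutions coexist.
    """
    sites = sorted(labeled_lengths)
    assert all(0 < s < total_length for s in sites), "sites must lie inside the molecule"
    return sites

# Example: a molecule of length 100 with sites at 20, 45 and 70.
print(reconstruct_sites([70, 20, 45], 100))  # -> [20, 45, 70]
```

The paper's actual algorithms additionally use the unlabeled fragments and tolerate measurement error; this sketch only shows why labeling removes the reconstruction ambiguity.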


The equilibrium solubilities of the dihydroxybenzene isomers resorcinol and pyrocatechol, and of their mixture, were experimentally determined at different temperatures (308, 318, 328, and 338 K) in the pressure range 9.8-16.2 MPa. In the ternary system, the solubility of pyrocatechol increased while that of resorcinol decreased relative to their binary solubilities. A new association model, based on the concept of formation of solvate complex molecules, was developed to correlate the solubilities of mixed solids in supercritical carbon dioxide (SCCO2). The model equation relates the solubility of a solute to the cosolute composition, temperature, pressure, and density of SCCO2. The proposed model correlated the solubilities of sixteen solid systems taken from the literature, together with the current experimental data, with an average absolute relative deviation (AARD) of around 4%. (C) 2011 Elsevier B.V. All rights reserved.
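The AARD fit statistic quoted above is straightforward to compute; a minimal sketch (the function name is mine):

```python
def aard(y_exp, y_calc):
    """Average absolute relative deviation, in percent:
    AARD = (100/N) * sum_i |y_exp_i - y_calc_i| / y_exp_i."""
    if len(y_exp) != len(y_calc) or not y_exp:
        raise ValueError("need two equal-length, non-empty sequences")
    return 100.0 * sum(abs(e - c) / e for e, c in zip(y_exp, y_calc)) / len(y_exp)

# Two solubility points, each off by 4% relative to experiment:
print(aard([1.0, 2.0], [1.04, 1.92]))  # -> 4.0
```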


In this paper, analytical expressions for the optimal Vdd and Vth that minimize energy under a given speed constraint are derived. These expressions are based on the EKV transistor model and are valid in both the strong-inversion and subthreshold regions. The effect of gate leakage on the optimal Vdd and Vth is analyzed. A new gradient-based algorithm for controlling Vdd and Vth from delay- and power-monitoring results is proposed. A Vdd-Vth controller that uses the algorithm to dynamically control the supply and threshold voltages of a representative logic block (the sum-of-absolute-differences computation of an MPEG decoder) is designed. Simulation results using 65 nm predictive technology models are given.


A "plan diagram" is a pictorial enumeration of the execution-plan choices of a database query optimizer over the relational selectivity space. We have shown recently that, for industrial-strength database engines, these diagrams are often remarkably complex and dense, with a large number of plans covering the space. However, they can often be reduced to much simpler pictures, featuring significantly fewer plans, without materially affecting the query processing quality. Plan reduction has useful implications for the design and usage of query optimizers, including quantifying redundancy in the plan search space, enhancing the usability of parametric query optimization, identifying error-resistant and least-expected-cost plans, and minimizing the overheads of multi-plan approaches. We investigate here the plan reduction issue from theoretical, statistical, and empirical perspectives. Our analysis shows that optimal plan reduction, w.r.t. minimizing the number of plans, is an NP-hard problem in general, and remains so even for a storage-constrained variant. We then present a greedy reduction algorithm with tight and optimal performance guarantees, whose complexity scales linearly with the number of plans in the diagram for a given resolution. Next, we devise fast estimators for locating the best tradeoff between the reduction in plan cardinality and the impact on query processing quality. Finally, extensive experimentation with a suite of multi-dimensional TPC-H-based query templates on industrial-strength optimizers demonstrates that complex plan diagrams easily reduce to "anorexic" (small absolute number of plans) levels while incurring only marginal increases in the estimated query processing costs.
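The greedy-reduction idea can be made concrete with a small sketch. This is an illustrative simplification under assumed inputs (a finite set of selectivity points and a cost table), not the paper's actual algorithm or its performance guarantee: a plan is eliminated only if every one of its points can be reassigned to a surviving plan whose estimated cost there stays within the allowed relative increase.

```python
def reduce_plan_diagram(points, costs, threshold):
    """Greedy plan-diagram reduction sketch.

    points:    dict plan -> set of selectivity points assigned to it
    costs:     dict (plan, point) -> estimated cost of that plan at that point
    threshold: max allowed relative cost increase (e.g. 0.1 for 10%)
    """
    plans = set(points)
    changed = True
    while changed:
        changed = False
        # Try to swallow small plans first (deterministic tie-break on name).
        for p in sorted(plans, key=lambda q: (len(points[q]), q)):
            others = plans - {p}
            targets = {}
            for pt in points[p]:
                cands = [q for q in others
                         if (q, pt) in costs
                         and costs[(q, pt)] <= (1 + threshold) * costs[(p, pt)]]
                if not cands:
                    break  # this point cannot be reassigned; keep plan p
                targets[pt] = min(cands, key=lambda q: costs[(q, pt)])
            else:
                # Every point of p is coverable: reassign them and drop p.
                for pt, q in targets.items():
                    points[q].add(pt)
                plans.discard(p)
                del points[p]
                changed = True
                break
    return plans

# Toy diagram: plan A is optimal at point p1, B at p2; with a 10% cost
# budget, A's lone point is absorbed by B, so only B survives.
pts = {"A": {"p1"}, "B": {"p2"}}
cost = {("A", "p1"): 10.0, ("B", "p1"): 10.5, ("B", "p2"): 5.0, ("A", "p2"): 100.0}
print(reduce_plan_diagram(pts, cost, 0.1))  # -> {'B'}
```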


In this article, we consider the single-machine scheduling problem with past-sequence-dependent (p-s-d) setup times and a learning effect. The setup time of a job is proportional to the total length of the jobs already scheduled, i.e., the setup times are p-s-d. The learning effect reduces the actual processing time of a job because the workers perform the same job or activity repeatedly; hence, the processing time of a job depends on its position in the sequence. In this study, we take the total absolute difference in completion times (TADC) as the objective function. This problem is denoted 1/LE, s(psd)/TADC in Kuo and Yang (2007) ('Single Machine Scheduling with Past-sequence-dependent Setup Times and Learning Effects', Information Processing Letters, 102, 22-26). Two parameters, a and b, denote the constant learning index and the normalising index, respectively. A parametric analysis of b on the 1/LE, s(psd)/TADC problem for a given value of a is carried out in this study. In addition, a computational algorithm is developed to obtain the number of optimal sequences and, for a given value of a, the range of b in which each of the sequences is optimal. We derive two bounds, b* for the normalising constant b and a* for the learning index a. We also show that, when a < a* or b > b*, the optimal sequence is obtained by placing the longest job in the first position and arranging the remaining jobs in shortest-processing-time (SPT) order.
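The TADC objective itself is easy to pin down; a minimal sketch (ignoring the setup times and learning effect, which only change how the completion times are computed, not the objective):

```python
from itertools import combinations

def completion_times(processing_times):
    """Completion times on a single machine processing jobs back-to-back."""
    out, t = [], 0
    for p in processing_times:
        t += p
        out.append(t)
    return out

def tadc(completions):
    """Total absolute difference in completion times:
    TADC = sum over all pairs i < j of |C_i - C_j|."""
    return sum(abs(ci - cj) for ci, cj in combinations(completions, 2))

# Jobs of length 3, 1, 2 -> completions [3, 4, 6] -> |3-4|+|3-6|+|4-6| = 6
print(tadc(completion_times([3, 1, 2])))  # -> 6
```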


Thin foils of copper, silver, and gold were equilibrated with tetragonal GeO2 under controlled CO-CO2 gas streams at 1000 K. The equilibrium concentration of germanium in the foils was determined by the X-ray fluorescence technique. The standard free energy of formation of tetragonal GeO2 was measured by a solid-oxide galvanic cell. The chemical potential of germanium, calculated from the experimental data and the free energies of formation of carbon monoxide and carbon dioxide, was found to decrease in the sequence Ag + Ge > Au + Ge > Cu + Ge. The more negative value of the chemical potential of germanium in solid copper, compared to that in solid gold, cannot be explained in terms of the strain-energy factor, electronegativity differences, or the vaporization energies of the solvent, and suggests that the d band and its hybridization with s electrons are an important factor in determining the absolute values of the chemical potential in dilute solutions. However, the variation of the chemical potential with solute concentration can be correlated with the concentration of s and p electrons in the outer shell.


The equilibrium solubilities of solids in supercritical carbon dioxide (SCCO2) are considerably enhanced in the presence of cosolvents. The solubilities of m-dinitrobenzene at 308 and 318 K, over a pressure range of 9.5-14.5 MPa and in the presence of 1.13-2.17 mol% methanol as cosolvent, were determined. The average increase in solubility in the presence of methanol, compared to that obtained in its absence, was around 35%. A new semi-empirical equation in terms of temperature, pressure, density of SCCO2, and cosolvent composition, comprising 7 adjustable parameters, was developed. The proposed model was used to correlate the solubility of solids in SCCO2 for the 44 systems available in the literature along with the current data. The average absolute relative deviation of the experimental data from the model equation was 3.58%, which is better than that of the existing models. (C) 2011 Elsevier B.V. All rights reserved.


Long-distance dispersal (LDD) events, although rare for most plant species, can strongly influence population and community dynamics. Animals function as a key biotic vector of seeds; thus, a mechanistic and quantitative understanding of how individual animal behaviors scale to dispersal patterns at different spatial scales is a question of critical importance from both basic and applied perspectives. Using a diffusion-theory-based analytical approach for a wide range of animal movement and seed transportation patterns, we show that the scale (a measure of local dispersal) of the seed dispersal kernel increases with the organism's rate of movement and mean seed retention time. We reveal that variation in seed retention time is a key determinant of various measures of LDD, such as the kurtosis (or shape) of the kernel, the thickness of its tails, and the absolute number of seeds falling beyond a threshold distance. Using empirical data sets of frugivores, we illustrate the importance of variability in retention times for predicting the key disperser species that influence LDD. Our study makes testable predictions linking animal movement behaviors and gut retention times to dispersal patterns and, more generally, highlights the potential importance of animal behavioral variability for the LDD of seeds.
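The qualitative claim that kernel scale grows with both movement rate and retention time can be illustrated for the simplest diffusion case; this one-liner is my illustration of that scaling, not the paper's full kernel machinery:

```python
import math

def kernel_scale(D, mean_retention_time):
    """Scale (standard deviation) of a 1-D Gaussian seed-dispersal kernel
    under pure diffusion: displacement variance after time t is 2*D*t,
    so sigma = sqrt(2*D*T) grows with both the diffusion coefficient D
    (movement rate) and the mean gut retention time T."""
    return math.sqrt(2.0 * D * mean_retention_time)

# Doubling gut retention time stretches the dispersal scale by sqrt(2):
print(kernel_scale(1.0, 2.0) / kernel_scale(1.0, 1.0))  # ~1.414
```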


The questions that one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) how much the cost is, in terms of the amount of computation and the amount of storage used in obtaining the outputs. The absolutely error-free quantities, as well as the completely errorless computations occurring in a natural process, can never be captured by any means at our disposal. While the computations in nature/natural processes, including their real input quantities, are exact, the computations that we perform on a digital computer, or that are carried out in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis rather than assumption, is not less than 0.005 per cent. By error we here mean relative error bounds. The fact that the exact error is never known, under any circumstances and in any context, implies that the term error denotes nothing but error bounds. Further, in engineering computations it is the relative error or, equivalently, the relative error bounds (and not the absolute error) that is supremely important in conveying the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable, or more easily solvable, in practice. Thus, if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of these three factors.
We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and do get results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error bounds along with the associated confidence level, and the cost, viz., the amount of computation and of storage, through complexity. It points out the limitations of error-free computation (wherever it is possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the use of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
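As an illustration of working with relative error bounds rather than exact errors, a first-order bound for the product of two measured inputs can be sketched as follows (the 0.005% floor is the talk's hypothesised instrument error; the function name is mine):

```python
def rel_bound_product(rel_a, rel_b):
    """First-order relative error bound for a product: if a and b carry
    relative error bounds ra and rb, then a*b carries a bound of roughly
    ra + rb (plus the cross term ra*rb, negligible for small bounds)."""
    return rel_a + rel_b + rel_a * rel_b

# Two inputs, each measured to within 0.005% (the hypothesised floor):
r = 0.00005
print(rel_bound_product(r, r))  # ~0.0001, i.e. ~0.01% for the product
```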


The enantiospecific first total synthesis of the enantiomer of allothapsenol, an irregular sesquiterpene from Ligusticum grayi, starting from the readily available monoterpene (R)-carvone, is described; this work confirmed the assumed absolute configuration of the natural product.


The selectivity of a particular solvent for separating a mixture is essential information for the optimal design of a separation process. Supercritical carbon dioxide (SCCO2) is widely used as a solvent in the extraction, purification, and separation of specialty chemicals. The effect of temperature and pressure on selectivity is complicated and varies from system to system. The effect of temperature and pressure on the selectivity of SCCO2 for different solid mixtures available in the literature was analyzed. In this work, we developed two model equations to correlate the selectivity in terms of temperature and pressure. The model equations correlated the selectivity of SCCO2 satisfactorily for 18 solid mixtures, with an average absolute relative deviation (AARD) of around 5%. (C) 2012 Elsevier B.V. All rights reserved.


Background and aim of the study: The quantification of incidentally found aortic valve calcification on computed tomography (CT) is not performed routinely, as the data relating the amount of aortic valve calcium to the severity of aortic stenosis (AS) are neither consistent nor validated. As aortic valve calcium quantification by CT is confounded by wall and coronary ostial calcification, as well as by motion artifact, ex-vivo micro-computed tomography (micro-CT) of stenotic aortic valves allows a precise measurement of the amount of calcium present. The study aim, using excised aortic valves from patients with confirmed AS, was to determine whether the amount of calcium on micro-CT correlated with the severity of AS. Methods: Each of the 35 aortic valves excised from patients during surgical valve replacement was examined using micro-CT imaging. The amount of calcium present was determined by the absolute and proportional values of calcium volume in the specimen. Subsequently, the correlations between calcium volume and the preoperative mean aortic valve gradient (MAVG), peak transaortic velocity (V-max), and aortic valve area (AVA) on echocardiography were evaluated. Results: The mean calcium volume across all valves was 603.2 ± 398.5 mm^3, and the mean ratio of calcium volume to total valve volume was 0.36 ± 0.16. The mean aortic valve gradient correlated positively with both the calcium volume and the calcium ratio (r = 0.72, p < 0.001). V-max also correlated positively with the calcium volume and ratio (r = 0.69 and 0.76, respectively; p < 0.001). A logarithmic curvilinear model proved to be the best fit to the correlation. A calcium volume of 480 mm^3 showed a sensitivity and specificity of 0.76 and 0.83, respectively, for a diagnosis of severe AS, while a calcium ratio of 0.37 yielded a sensitivity and specificity of 0.82 and 0.94, respectively.
Conclusion: Radiological estimation of the calcium amount by volume, and of its proportion to the total valve volume, was shown to provide good predictive parameters for severe AS. An estimation of the calcium volume may serve as a complementary measure for determining the severity of AS when aortic valve calcification is identified on CT imaging. The Journal of Heart Valve Disease 2012;21:320-327
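The threshold rule evaluated above (calcium volume at or above 480 mm^3 predicts severe AS) reduces to a standard sensitivity/specificity computation; a sketch with made-up toy data, not the study's measurements:

```python
def sens_spec(values, severe, threshold):
    """Sensitivity and specificity of the rule 'value >= threshold predicts
    severe AS', given measured values and true severity labels."""
    tp = sum(v >= threshold and s for v, s in zip(values, severe))
    fn = sum(v < threshold and s for v, s in zip(values, severe))
    tn = sum(v < threshold and not s for v, s in zip(values, severe))
    fp = sum(v >= threshold and not s for v, s in zip(values, severe))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical calcium volumes (mm^3) and severity labels, threshold 480 mm^3:
vols = [700, 500, 450, 300, 520, 100]
labels = [True, True, True, False, False, False]
print(sens_spec(vols, labels, 480))
```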


A minimum-weight design of laminated composite structures is carried out for different loading conditions and failure criteria using a genetic algorithm. The phenomenological maximum stress (MS) and Tsai-Wu (TW) criteria and the micromechanics-based failure mechanism based (FMB) criterion are considered. A new failure envelope, called the Most Conservative Failure Envelope (MCFE), is proposed by combining the three failure envelopes on the basis of the lowest absolute values of the predicted strengths. The effect of shear loading on the MCFE is investigated. The interaction between the loading conditions, the failure criteria, and the strength-based optimal design is brought out.
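At each load direction, the MCFE construction reduces to taking the smallest absolute strength predicted by the candidate criteria, i.e. the inner envelope; a minimal sketch with hypothetical strength values:

```python
def mcfe_strength(predictions):
    """Most Conservative Failure Envelope at one load direction: the
    smallest absolute strength predicted by any of the candidate criteria
    (here max-stress, Tsai-Wu, and FMB), i.e. the inner envelope."""
    return min(abs(s) for s in predictions.values())

# Hypothetical strengths (MPa) predicted by each criterion at one load angle:
preds = {"max_stress": 620.0, "tsai_wu": 540.0, "fmb": 575.0}
print(mcfe_strength(preds))  # -> 540.0
```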


The Fourier transform Raman and infrared (IR) spectra of Ceramide 3 (CER3) have been recorded in the regions 200-3500 cm^-1 and 680-4000 cm^-1, respectively. We have calculated the equilibrium geometry, harmonic vibrational wavenumbers, electrostatic potential surfaces, absolute Raman scattering activities, and IR absorption intensities by density functional theory with the B3LYP functional and the extended 6-311G basis set. This work was undertaken to study the vibrational spectra of CER3 completely and to identify the various normal modes with better wavenumber accuracy. Good consistency is found between the calculated results and the experimental data for the IR and Raman spectra.