Abstract:
A method has been presented for constructing non-separable solutions of homogeneous linear partial differential equations of the type F(D, D′)W = 0, where D = ∂/∂x, D′ = ∂/∂y, and F(D, D′) = Σ_{r+s≤n} c_rs D^r D′^s, where the c_rs are constants and n is the order of the equation. The method has also been extended to equations of the form Φ(D, D′, D″)W = 0, where D = ∂/∂x, D′ = ∂/∂y, D″ = ∂/∂z, and Φ(D, D′, D″) = Σ_{r+s+t≤n} c_rst D^r D′^s D″^t. As an illustration, the method has been applied to obtain non-separable solutions of the two- and three-dimensional Helmholtz equations.
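As an illustrative aside (not the paper's construction): a plane wave sin(ax + by) with k² = a² + b² is a simple non-separable solution of the two-dimensional Helmholtz equation, since it is not a product X(x)Y(y). A quick finite-difference check:

```python
import math

# Illustrative choice, not taken from the paper: W(x, y) = sin(a*x + b*y)
# satisfies W_xx + W_yy + k^2 * W = 0 whenever k^2 = a^2 + b^2, and it is
# non-separable because it is not a product X(x) * Y(y).
a, b = 1.2, 0.7
k2 = a * a + b * b
W = lambda x, y: math.sin(a * x + b * y)

def laplacian(f, x, y, h=1e-4):
    """Five-point central-difference approximation of f_xx + f_yy."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h) - 4 * f(x, y)) / h**2

def residual(x, y):
    """Helmholtz residual; should vanish up to discretisation error."""
    return laplacian(W, x, y) + k2 * W(x, y)
```

The residual is numerically zero at any sample point, confirming the plane wave solves the equation without being separable.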
Abstract:
An iterative method of constructing sections of the game surfaces from the players' extremal trajectory maps is discussed. Barrier sections are presented for aircraft pursuit-evasion at constant altitude, with one aircraft flying at sustained speed and the other varying its speed.
Abstract:
We investigate the use of transverse beam polarization in probing anomalous couplings of a Higgs boson to a pair of vector bosons at the International Linear Collider (ILC). We consider the most general form of the VVH (V = W/Z) vertex consistent with Lorentz invariance and investigate its effects on the process e+e- → f f̄ H, f being a light fermion. Constructing observables with definite CP and naive time reversal (T̃) transformation properties, we find that transverse beam polarization helps improve the sensitivity to one part of the anomalous ZZH coupling that is odd under CP. Even more importantly, it provides the possibility of discriminating between two terms in the general ZZH vertex, both of which are even under CP and T̃. The use of transverse beam polarization, when combined with information from unpolarized and longitudinally polarized beams, therefore allows completely independent probes of all the different parts of a general ZZH vertex.
Abstract:
The application of computer-aided inspection, integrated with coordinate measuring machines and laser scanners, to inspect manufactured aircraft parts using robust registration of two point datasets is a subject of active research in computational metrology. This paper presents a novel approach to automated inspection by matching shapes based on a modified iterative closest point (ICP) method to define a criterion for the acceptance or rejection of a part. The procedure improves upon existing methods by eliminating the need for a tessellated or smooth representation of the inspected part, as well as the requirement for a priori knowledge of approximate registration and correspondence between the points representing the computer-aided design dataset and the part to be inspected. In addition, the procedure establishes a better measure of error between the two matched datasets; localized region-based triangulation is proposed for tracking this error. The approach improves the convergence of the ICP technique with a dramatic decrease in computational effort. Experimental results obtained by implementing the approach on both synthetic and practical data show that the method is efficient and robust, validating the algorithm and demonstrating its potential for engineering applications.
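The ICP iteration underlying such registration schemes can be sketched in a few lines. The following is a minimal textbook version (nearest-neighbour matching alternated with a Kabsch/SVD rigid fit), not the modified criterion of the paper:

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rotation R and translation t minimising ||R A_i + t - B_i||
    over rigid motions (the classic Kabsch/SVD solution)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(src, dst, iterations=30):
    """Basic ICP: alternate nearest-neighbour correspondence and rigid fitting.
    Returns the aligned source points and the final RMS matching error."""
    cur = src.copy()
    for _ in range(iterations):
        # each current point is matched to its nearest destination point
        nn = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        R, t = best_rigid_transform(cur, dst[nn])
        cur = cur @ R.T + t
    nn = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1).argmin(axis=1)
    return cur, np.sqrt(((cur - dst[nn]) ** 2).sum(axis=1).mean())
```

For a small rigid perturbation of a scanned point set, the loop recovers the transform and the RMS error drops to machine precision; real inspection data would additionally need outlier handling.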
Abstract:
The problem of constructing space-time (ST) block codes over a fixed, desired signal constellation is considered. In this situation there is a tradeoff between the transmission rate, as measured in constellation symbols per channel use, and the transmit diversity gain achieved by the code. The transmit diversity is a measure of the rate of polynomial decay of the pairwise error probability of the code with increasing signal-to-noise ratio (SNR). In the setting of a quasi-static channel model, let n_t denote the number of transmit antennas and T the block interval. For any n_t <= T, a unified construction of (n_t x T) ST codes is provided here for a class of signal constellations that includes the familiar pulse-amplitude (PAM), quadrature-amplitude (QAM), and 2^K-ary phase-shift-keying (PSK) modulations as special cases. The construction is optimal as measured by the rate-diversity tradeoff and can achieve any given integer point on the rate-diversity tradeoff curve. An estimate of the realized coding gain is given. Other results presented here include i) an extension of the optimal unified construction to the multiple-fading-block case, ii) a version of the optimal unified construction in which the underlying binary block codes are replaced by trellis codes, iii) a linear dispersion form for the underlying binary block codes, iv) a Gray-mapped version of the unified construction, and v) a generalization of the construction to the S-ary case, corresponding to constellations of size S^K. Items ii) and iii) are aimed at simplifying the decoding of this class of ST codes.
Abstract:
We study the responses of a cultured neural network exposed to epileptogenic glutamate injury, which induces epileptic activity, and to subsequent treatment with phenobarbital, by constructing a connectivity map of the neurons using the correlation matrix. This study is particularly useful in understanding pharmaceutical-drug-induced changes in neuronal network properties, with insights into changes at the systems biology level. (C) 2010 American Institute of Physics. [doi:10.1063/1.3398025]
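A correlation-based connectivity map of the kind described can be sketched as follows; the thresholding step and the 0.5 cutoff are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def connectivity_map(activity, threshold=0.5):
    """activity: (n_neurons, n_samples) array of recorded traces.
    Returns the Pearson correlation matrix and a thresholded adjacency
    matrix in which an edge means |correlation| >= threshold."""
    C = np.corrcoef(activity)                 # rows are treated as variables
    A = (np.abs(C) >= threshold).astype(int)
    np.fill_diagonal(A, 0)                    # drop trivial self-correlations
    return C, A
```

Two neurons with strongly correlated traces are then linked in the map, while an uncorrelated neuron stays isolated.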
Abstract:
As the study of electrical breakdown phenomena in vacuum systems gains importance, a thorough understanding of the breakdown mechanism at high voltages necessitates a chamber for experimental studies. An epoxy-resin chamber has been constructed by casting ring sections which were then joined together. The advantages of such a chamber over a conventional metal or glass chamber are given, especially as regards the electric field configuration, the high-voltage lead-in, and the ease of construction. Special facilities can be incorporated while constructing the chamber, making it more versatile, for example for pre-breakdown current measurements, electron-beam focusing studies, etc.
Abstract:
People in many countries are affected by fluorosis owing to high levels of fluoride in drinking water. An inexpensive method for estimating the concentration of the fluoride ion in drinking water would be helpful in identifying safe sources of water and also in monitoring the performance of defluoridation techniques. For this purpose, a simple, inexpensive, and portable colorimeter has been developed in the present work. It is used in conjunction with the SPADNS method, which shows a color change in the visible region on addition of water containing fluoride to a reagent solution. Groundwater samples were collected from different parts of the state of Karnataka, India, and analysed for fluoride. The results obtained using the colorimeter and a double-beam spectrophotometer agreed fairly well. The costs of the colorimeter and of the chemicals required per test were about Rs. 250 (US$ 5) and Rs. 2.5 (US$ 0.05), respectively. In addition, the cost of the chemicals required for constructing the calibration curve was about Rs. 15 (US$ 0.3). (C) 2010 Elsevier B.V. All rights reserved.
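Constructing and inverting a calibration curve amounts to a least-squares line fit. The sketch below uses hypothetical absorbance readings; with a bleaching method such as SPADNS the absorbance typically decreases with fluoride concentration, hence the negative slope in the example:

```python
import numpy as np

def fit_calibration(conc, absorbance):
    """Least-squares calibration line: absorbance = m * conc + c."""
    m, c = np.polyfit(conc, absorbance, 1)
    return m, c

def read_concentration(absorbance, m, c):
    """Invert the calibration line to estimate an unknown concentration."""
    return (absorbance - c) / m
```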
Abstract:
IMAGINE a scientist who is a follower of Mahatma Gandhi. What kind of science can he practise? Would it be different from the kind of science that is being practised? I believe it would be, and I will illustrate this by constructing Mahatma Gandhi's view of science and scientific research from his writings on related subjects. To me this implies that science is affected by the scientist's subjective values. I will then trace some of the values behind science as practised today and examine their implications for the relationship between the scientist and society. I will also present a case for abandoning the belief that science must be universal and show the relevance of Gandhian concepts to scientists.
Abstract:
Background. Several types of networks, such as the transcriptional, metabolic, and protein-protein interaction networks of various organisms, have been constructed and have provided a variety of insights into metabolism and regulation. Here, we seek to exploit the reaction-based networks of three organisms for comparative genomics. We use concepts from spectral graph theory to systematically determine how differences in the basic metabolism of organisms are reflected at the systems level and in the overall topological structures of their metabolic networks. Methodology/Principal Findings. Metabolome-based reaction networks of Mycobacterium tuberculosis, Mycobacterium leprae and Escherichia coli have been constructed from the KEGG LIGAND database, followed by graph spectral analysis of each network to identify hubs as well as sub-clusters of reactions. The shortest and alternate paths in the reaction networks have also been examined. Sub-cluster profiling demonstrates that reactions of the mycolic acid pathway in mycobacteria form a tightly connected sub-cluster. Identification of hubs reveals reactions involving glutamate to be central to mycobacterial metabolism, and pyruvate to be at the centre of the E. coli metabolome. The analysis of shortest paths between reactions has revealed several paths that are shorter than well-established pathways. Conclusions. We conclude that the severe downsizing of the M. leprae genome has not significantly altered the global structure of its reaction network but has reduced the total number of alternate paths between its reactions while keeping the shortest paths between them intact. The hubs in the mycobacterial networks that are absent in the human metabolome can be explored as potential drug targets. This work demonstrates the usefulness of constructing metabolome-based networks of organisms and the feasibility of analysing them through graph spectral methods. The insights obtained from such studies provide a broad overview of the similarities and differences between organisms, taking comparative genomics studies to a higher dimension.
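One common graph-spectral route to hub identification is eigenvector centrality, read off the adjacency spectrum; this is a generic sketch rather than the exact analysis used in the study:

```python
import numpy as np

def hub_scores(A):
    """Eigenvector-centrality scores from a symmetric adjacency matrix A.
    The principal eigenvector weights each node by the importance of its
    neighbours, so the largest entries mark the hubs of the network."""
    eigvals, eigvecs = np.linalg.eigh(A)
    v = eigvecs[:, np.argmax(eigvals)]   # eigenvector of the largest eigenvalue
    return np.abs(v)                     # the Perron vector's sign is arbitrary
```

On a star graph, for instance, the centre node receives the highest score and all leaves score equally, matching the intuitive notion of a hub.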
Abstract:
The three-dimensional structure of a protein is formed and maintained by the noncovalent interactions among the amino acid residues of the polypeptide chain. These interactions can be represented collectively in the form of a network. So far, such networks have been investigated by considering connections based on distances between the amino acid residues. Here we present a method of constructing the structure network based on interaction energies among the amino acid residues in the protein. We have investigated the properties of such protein energy-based networks (PENs) and have shown correlations to protein structural features such as the clusters of residues involved in stability, and the formation of secondary and super-secondary structural units. Further, we demonstrate that the analysis of PENs in terms of parameters such as hubs and shortest paths can provide a variety of biologically important information, such as the residues crucial for stabilizing the folded units and the paths of communication between distal residues in the protein. Finally, the energy regimes for different levels of stabilization in the protein structure have clearly emerged from the PEN analysis.
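Shortest paths in such residue networks are typically found by breadth-first search when edges are unweighted. A minimal sketch, using a hypothetical adjacency list in place of a real residue network:

```python
from collections import deque

def shortest_path(adj, start, goal):
    """BFS shortest path in an unweighted interaction network.
    adj maps each node to a list of its neighbours; returns the node
    sequence from start to goal, or None if they are disconnected."""
    prev = {start: None}
    q = deque([start])
    while q:
        u = q.popleft()
        if u == goal:                 # reconstruct the path by backtracking
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for w in adj[u]:
            if w not in prev:
                prev[w] = u
                q.append(w)
    return None
```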
Abstract:
The boxicity of a graph G, denoted boxi(G), is defined as the minimum integer t such that G is an intersection graph of axis-parallel t-dimensional boxes. A graph G is a k-leaf power if there exists a tree T such that the leaves of the tree correspond to the vertices of G and two vertices in G are adjacent if and only if their corresponding leaves in T are at a distance of at most k. Leaf powers are used in the construction of phylogenetic trees in evolutionary biology and have been studied in many recent papers. We show that for a k-leaf power G, boxi(G) ≤ k − 1. We also show the tightness of this bound by constructing a k-leaf power with boxicity equal to k − 1. This result implies that there exist strongly chordal graphs with arbitrarily high boxicity, which is somewhat counterintuitive.
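The k-leaf-power construction itself is easy to state in code: build a graph on the leaves of a tree, joining two leaves exactly when their tree distance is at most k. A small sketch (the caterpillar tree used in the example is an arbitrary illustration):

```python
from collections import deque

def leaf_power(tree, k):
    """tree: dict mapping each node to the set of its neighbours.
    Returns (leaves, edges) of the k-leaf-power graph on the leaves:
    two leaves are adjacent iff their tree distance is at most k."""
    leaves = [v for v, nb in tree.items() if len(nb) == 1]

    def dists(s):
        """BFS distances from s to every node of the tree."""
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in tree[u]:
                if w not in d:
                    d[w] = d[u] + 1
                    q.append(w)
        return d

    edges = set()
    for i, a in enumerate(leaves):
        da = dists(a)
        for b in leaves[i + 1:]:
            if da[b] <= k:
                edges.add((a, b))
    return leaves, edges
```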
Abstract:
The statistical properties of fractional Brownian walks are used to construct a path integral representation of the conformations of polymers with different degrees of bond correlation. We specifically derive an expression for the distribution function of the chains’ end‐to‐end distance, and evaluate it by several independent methods, including direct evaluation of the discrete limit of the path integral, decomposition into normal modes, and solution of a partial differential equation. The distribution function is found to be Gaussian in the spatial coordinates of the monomer positions, as in the random walk description of the chain, but the contour variables, which specify the location of the monomer along the chain backbone, now depend on an index h, the degree of correlation of the fractional Brownian walk. The special case of h=1/2 corresponds to the random walk. In constructing the normal mode picture of the chain, we conjecture the existence of a theorem regarding the zeros of the Bessel function.
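The correlation structure of a fractional Brownian walk is captured by its covariance C(s, t) = (s^{2h} + t^{2h} − |s − t|^{2h})/2, which reduces to the ordinary Brownian covariance min(s, t) at h = 1/2. Sampling a path by Cholesky factorisation of this matrix is one standard method, not the path-integral machinery of the paper:

```python
import numpy as np

def fbm_covariance(times, h):
    """Covariance matrix of fractional Brownian motion with Hurst index h:
    C(s, t) = 0.5 * (s^(2h) + t^(2h) - |s - t|^(2h))."""
    s = np.asarray(times, dtype=float)[:, None]
    t = np.asarray(times, dtype=float)[None, :]
    return 0.5 * (s ** (2 * h) + t ** (2 * h) - np.abs(s - t) ** (2 * h))

def sample_fbm(times, h, rng):
    """Draw one fBm path at the given (positive) times via Cholesky."""
    C = fbm_covariance(times, h)
    L = np.linalg.cholesky(C + 1e-12 * np.eye(len(times)))  # tiny jitter
    return L @ rng.standard_normal(len(times))
```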
Abstract:
A strategy for the modular construction of synthetic protein mimics, based on the ability of non-protein amino acids to act as stereochemical directors of polypeptide chain folding, is described. The use of alpha-aminoisobutyric acid (Aib) to construct stereochemically rigid helices has been exemplified by crystallographic and spectroscopic studies of several apolar peptides, ranging in length from seven to sixteen residues. The problem of linker design in elaborating alpha,alpha motifs has been considered. Analysis of protein crystal structure data provides a guide to choosing linking sequences. Attempts at constructing linked helical motifs using Gly-Pro linking segments have been described. The use of flexible linkers, such as epsilon-aminocaproic acid, has been examined, and the crystallographic and solution-state analysis of a linked helix motif has been presented. The use of bulky side-chain modifications on a helical scaffold, as a means of generating putative binding sites, has been exemplified by a crystal structure of a peptide packed in a parallel zipper arrangement.
Abstract:
This paper studies the problem of constructing robust classifiers when the training data are plagued with uncertainty. The problem is posed as a Chance-Constrained Program (CCP) which ensures that the uncertain data points are classified correctly with high probability. Unfortunately, such a CCP turns out to be intractable. The key novelty is in employing Bernstein bounding schemes to relax the CCP to a convex second-order cone program whose solution is guaranteed to satisfy the probabilistic constraint. Prior to this work, only Chebyshev-based relaxations were exploited in learning algorithms. Bernstein bounds employ richer partial information and hence can be far less conservative than Chebyshev bounds. Owing to this more efficient modeling of uncertainty, the resulting classifiers achieve larger classification margins and hence better generalization. Methodologies for classifying uncertain test data points and error measures for evaluating classifiers robust to uncertain data are discussed. Experimental results on synthetic and real-world datasets show that the proposed classifiers are better equipped to handle data uncertainty and outperform the state of the art in many cases.
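The gap between the two bounding schemes is easy to see numerically. The generic Chebyshev and Bernstein tail bounds below are textbook forms, not the paper's exact relaxation:

```python
import math

def chebyshev_bound(t, var):
    """Two-sided Chebyshev tail bound: P(|S - E[S]| >= t) <= var / t^2."""
    return min(1.0, var / t ** 2)

def bernstein_bound(t, var, b):
    """One-sided Bernstein bound for a sum S of independent zero-mean terms,
    each bounded by b in absolute value, with total variance var:
    P(S >= t) <= exp(-t^2 / (2*var + (2/3)*b*t))."""
    return math.exp(-t ** 2 / (2.0 * var + (2.0 / 3.0) * b * t))
```

For a sum of 100 independent terms in [-1, 1] with total variance 100 and deviation t = 50, Chebyshev gives 0.04 while Bernstein gives roughly 2e-5, illustrating how the extra boundedness information sharpens the bound.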