Abstract:
An analysis of 503 available triosephosphate isomerase sequences revealed nine fully conserved residues. Of these, four residues (K12, H95, E97 and E165) are capable of proton transfer and are all arrayed around the dihydroxyacetone phosphate substrate in the three-dimensional structure. Specific roles have been assigned to K12, H95 and E165, but the nature of the involvement of E97 has not been established. Kinetic and structural characterization is reported for the E97Q and E97D mutants of Plasmodium falciparum triosephosphate isomerase (PfTIM). A 4000-fold reduction in kcat is observed for E97Q, whereas the E97D mutant shows a 100-fold reduction. The control mutant, E165A, which lacks the key catalytic base, shows an approximately 9000-fold drop in activity. The integrity of the overall fold and the stability of the dimeric structure have been demonstrated by biophysical studies. Crystal structures of the E97Q and E97D mutants have been determined at 2.0 Å resolution. In the isosteric replacement of glutamic acid by glutamine in the E97Q mutant, a large conformational change of the critical K12 side chain is observed, corresponding to a trans-to-gauche transition about the Cγ-Cδ (χ3) bond. In the E97D mutant, the K12 side chain maintains the wild-type orientation, but the hydrogen bond between K12 and D97 is lost. The results are interpreted in terms of a direct role for E97 in the catalytic proton-transfer cycle. The proposed mechanism eliminates the need to invoke the formation of the energetically unfavourable imidazolate anion at H95, a key feature of the classical mechanism.
Abstract:
Wireless networks transmit information from a source to a destination via multiple hops in order to save energy and thus increase the lifetime of battery-operated nodes. The energy savings can be especially significant in cooperative transmission schemes, where several nodes cooperate during one hop to forward the information to the next node along a route to the destination. Finding the best multi-hop transmission policy in such a network, which determines the nodes involved in each hop, is a very important problem, but also a very difficult one, especially when the physical wireless channel behavior is to be accounted for and exploited. We model this optimization problem for randomly fading channels as a decentralized control problem: the channel observations available at each node define the information structure, while the control policy is defined by the power and phase of the signal transmitted by each node. In particular, we consider the problem of computing an energy-optimal cooperative transmission scheme in a wireless network for two different channel fading models: (i) slow fading channels, where the channel gains of the links remain the same for a large number of transmissions, and (ii) fast fading channels, where the channel gains of the links change quickly from one transmission to another. For slow fading, we consider a factored class of policies (corresponding to local cooperation between nodes) and show that the computation of an optimal policy in this class is equivalent to a shortest path computation on an induced graph, whose edge costs can be computed in a decentralized manner using only locally available channel state information (CSI). For fast fading, both CSI acquisition and data transmission consume energy, so we must optimize jointly over both; we cast this as a large stochastic optimization problem. We then jointly optimize over a set of CSI functions of the local channel states and a factored class of control policies, corresponding to local cooperation between nodes, under a local outage constraint. The resulting optimal scheme in this class can again be computed efficiently in a decentralized manner. We demonstrate significant energy savings for both slow and fast fading channels through numerical simulations of randomly distributed networks.
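As a rough illustration of the slow-fading result, the Python sketch below reduces minimum-energy routing to a shortest-path computation on an induced graph. It is a simplification under stated assumptions: hops are point-to-point rather than cooperative, the cost model (energy = target SNR × noise / channel power gain) is illustrative, and the names link_cost and min_energy_route are hypothetical, not from the paper.

```python
import heapq
import random

# Minimal sketch: point-to-point hops only (no cooperative clusters),
# with each edge cost taken as the transmit energy needed to close the
# link at a target SNR given its (slowly fading, hence fixed) gain.

def link_cost(gain, snr_min=1.0, noise=1.0):
    """Energy to meet the SNR target over a link with power gain `gain`."""
    return snr_min * noise / gain

def min_energy_route(n_nodes, gains, source, dest):
    """Dijkstra over the induced graph; gains[(i, j)] is the channel gain."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dest:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v in range(n_nodes):
            if v == u or (u, v) not in gains:
                continue
            nd = d + link_cost(gains[(u, v)])
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the route by walking the predecessor map backwards.
    path, node = [dest], dest
    while node != source:
        node = prev[node]
        path.append(node)
    return dist[dest], path[::-1]

# Random Rayleigh-style power gains on a small fully connected network.
random.seed(0)
n = 6
gains = {(i, j): random.expovariate(1.0)
         for i in range(n) for j in range(n) if i != j}
energy, route = min_energy_route(n, gains, source=0, dest=n - 1)
print(f"route {route} with total energy {energy:.3f}")
```

In the paper's setting the edge costs additionally encode local cooperative transmission, but the decentralized structure is the same: each cost depends only on locally available CSI.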
Abstract:
A geometric and non-parametric procedure for testing whether two finite sets of points are linearly separable is proposed. The Linear Separability Test is equivalent to a test that determines whether a strictly positive point h > 0 exists in the range of a matrix A (related to the points in the two finite sets). The algorithm proposed in the paper iteratively checks whether a strictly positive point exists in a subspace by projecting a strictly positive vector with equal coordinates, p, onto the subspace. At the end of each iteration, the subspace is reduced to a lower-dimensional subspace. The test is completed within r ≤ min(n, d + 1) steps, for both linearly separable and non-separable problems (r is the rank of A, n is the number of points and d is the dimension of the space containing the points). The worst-case time complexity of the algorithm is O(nr³) and its space complexity is O(nd). A brief review of some of the prominent algorithms and their time complexities is included. The worst-case computational complexity of our algorithm is lower than the worst-case computational complexity of the Simplex, Perceptron, Support Vector Machine and Convex Hull algorithms, if d
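The paper's projection-based iteration is not reproduced here; the Python sketch below instead answers the same feasibility question (does some w exist with Aw > 0, where the rows of A are [x_i, 1] for one class and [-y_j, -1] for the other?) via a standard linear-programming formulation with scipy. The function name linearly_separable and the box bounds on w are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, Y, tol=1e-9):
    """LP feasibility check: X, Y separable iff some w gives A w > 0."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    # Rows [x_i, 1] and [-y_j, -1]: A w > 0 encodes a separating hyperplane
    # w[:-1] . x + w[-1] > 0 for X and < 0 for Y.
    A = np.vstack([
        np.hstack([X, np.ones((len(X), 1))]),
        np.hstack([-Y, -np.ones((len(Y), 1))]),
    ])
    n, m = A.shape                              # m = d + 1 variables in w
    c = np.zeros(m + 1)
    c[-1] = -1.0                                # maximise the margin t
    A_ub = np.hstack([-A, np.ones((n, 1))])     # -A w + t <= 0, i.e. A w >= t
    b_ub = np.zeros(n)
    bounds = [(-1.0, 1.0)] * m + [(0.0, 1.0)]   # box-bound w so LP is bounded
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[-1] > tol                      # positive margin => separable

# A separable pair of sets, then an XOR-style non-separable one.
print(linearly_separable([[0, 0], [0, 1]], [[2, 2], [3, 2]]))   # True
print(linearly_separable([[0, 0], [1, 1]], [[0, 1], [1, 0]]))   # False
```

The LP is always feasible (w = 0, t = 0), so the test reduces to whether the optimal margin t is strictly positive.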
Abstract:
Frequent episode discovery is a popular framework for mining data available as a long sequence of events. An episode is essentially a short ordered sequence of event types, and the frequency of an episode is some suitable measure of how often the episode occurs in the data sequence. Recently, we proposed a new frequency measure for episodes based on the notion of non-overlapped occurrences of episodes in the event sequence, and showed that such a definition, in addition to yielding computationally efficient algorithms, has important theoretical properties connecting frequent episode discovery with HMM learning. This paper presents new algorithms for frequent episode discovery under this non-overlapped occurrences-based frequency definition. The algorithms presented here are better, by a factor of N (where N denotes the size of the episodes being discovered), in terms of both time and space complexity than existing methods for frequent episode discovery. We show through simulation experiments that our algorithms are very efficient. The new algorithms presented here have arguably the least possible orders of space and time complexity for the task of frequent episode discovery.
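To make the frequency measure concrete, here is a minimal Python sketch of counting non-overlapped occurrences of a serial episode (e.g. A -> B -> C) with a single greedy automaton. It illustrates the measure only, not the paper's discovery algorithms, and the name count_nonoverlapped is hypothetical.

```python
def count_nonoverlapped(events, episode):
    """Maximum number of non-overlapped occurrences of a serial episode."""
    state = 0           # index of the next episode event we are waiting for
    count = 0
    for e in events:
        if e == episode[state]:
            state += 1
            if state == len(episode):   # episode completed
                count += 1
                state = 0               # reset: counted occurrences share no event
    return count

sequence = list("ABXCABCXABC")
print(count_nonoverlapped(sequence, list("ABC")))   # 3
```

Because the scan resets the automaton on each completion, no two counted occurrences share an event, and a single automaton per episode suffices; keeping one automaton instead of a set of them is the kind of saving behind the factor-of-N improvement the abstract describes.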