992 results for Function space
Abstract:
The present study explores a "hydrophobic" energy function for folding simulations of the protein lattice model. The contribution of each monomer to the conformational energy is the product of its "hydrophobicity" and the number of contacts it makes; i.e., E(h, c) = −Σ_{i=1}^{N} c_i h_i = −(h · c) is the negative scalar product of two vectors in N-dimensional Cartesian space: h = (h_1, …, h_N), which represents the monomer hydrophobicities and is sequence-dependent; and c = (c_1, …, c_N), which represents the number of contacts made by each monomer and is conformation-dependent. A simple theoretical analysis shows that restrictions are imposed concomitantly on both sequences and native structures if the stability criterion for protein-like behavior is to be satisfied. Given a conformation with contact vector c, the best sequence is a vector h in the direction along which the projection of c − c̄ is maximal, where c̄ is the diagonal vector whose components all equal c̄, the average number of contacts per monomer in the unfolded state. The best native conformations are suggested to be not maximally compact, as assumed in many studies, but those with the largest variance of contacts among their monomers, i.e., with monomers tending to occupy either completely buried or completely exposed positions. This inside/outside segregation is reflected in an apolar/polar distribution in the corresponding sequence. Monte Carlo simulations in two dimensions corroborate this general scheme. Sequences targeted to conformations with large contact variance folded cooperatively, with the thermodynamics of a two-state transition. Sequences targeted to maximally compact conformations, which have lower contact variance, were found either to have a degenerate ground state or to fold with much lower cooperativity.
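The energy function and the best-sequence construction described above can be sketched in a few lines. A minimal numerical illustration, assuming a unit-norm sequence vector and an arbitrary contact vector (all names and values hypothetical, not taken from the paper):

```python
import numpy as np

# E = -sum_i c_i h_i = -(h . c): h holds monomer hydrophobicities
# (sequence-dependent), c the contacts per monomer (conformation-dependent).

def energy(h, c):
    """Conformational energy as the negative scalar product of h and c."""
    return -np.dot(h, c)

def best_sequence(c, c_bar, norm=1.0):
    """Best sequence for contact vector c: a vector h of fixed norm pointing
    along c - c_bar, where c_bar is the diagonal vector whose components all
    equal the average number of contacts per monomer in the unfolded state."""
    d = c - c_bar
    return norm * d / np.linalg.norm(d)

c = np.array([4.0, 0.0, 4.0, 1.0, 0.0])   # high contact variance: buried/exposed
c_bar = np.full_like(c, c.mean())          # simple stand-in for the unfolded average
h = best_sequence(c, c_bar)
print(energy(h, c))                        # negative: favourable native energy
```

The high-variance contact vector is deliberate: a uniform c would make c − c̄ vanish and leave no preferred sequence direction, which mirrors the paper's argument against maximally compact (low-variance) native states.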
Abstract:
Tissue engineering of cartilage, i.e., the in vitro cultivation of cartilage cells on synthetic polymer scaffolds, was studied on the Mir Space Station and on Earth. Specifically, three-dimensional cell-polymer constructs consisting of bovine articular chondrocytes and polyglycolic acid scaffolds were grown in rotating bioreactors, first for 3 months on Earth and then for an additional 4 months on either Mir (10^-4 to 10^-6 g) or Earth (1 g). This mission provided a unique opportunity to study the feasibility of long-term cell culture flight experiments and to assess the effects of spaceflight on the growth and function of a model musculoskeletal tissue. Both environments yielded cartilaginous constructs, each weighing between 0.3 and 0.4 g and consisting of viable, differentiated cells that synthesized proteoglycan and type II collagen. Compared with the Earth group, Mir-grown constructs were more spherical, smaller, and mechanically inferior. The same bioreactor system can be used for a variety of controlled microgravity studies of cartilage and other tissues. These results may have implications for human spaceflight, e.g., a Mars mission, and clinical medicine, e.g., improved understanding of the effects of pseudo-weightlessness in prolonged immobilization, hydrotherapy, and intrauterine development.
Abstract:
Recent improvements of a hierarchical ab initio or de novo approach for predicting both α and β structures of proteins are described. The united-residue energy function used in this procedure includes multibody interactions from a cumulant expansion of the free energy of polypeptide chains, with their relative weights determined by Z-score optimization. The critical initial stage of the hierarchical procedure involves a search of conformational space by the conformational space annealing (CSA) method, followed by optimization of an all-atom model. The procedure was assessed in a recent blind test of protein structure prediction (CASP4). The resulting lowest-energy structures of the target proteins (ranging in size from 70 to 244 residues) agreed with the experimental structures in many respects. The entire experimental structure of a cyclic α-helical protein of 70 residues was predicted to within 4.3 Å α-carbon (Cα) rms deviation (rmsd), whereas for other α-helical proteins fragments of roughly 60 residues were predicted to within 6.0 Å Cα rmsd. Although β structures can now be predicted with the new procedure, the success rate for α/β- and β-proteins is at present lower than that for α-proteins. For the β portions of α/β structures, the Cα rmsds are less than 6.0 Å for contiguous fragments of 30–40 residues; for one target, three fragments (of length 10, 23, and 28 residues, respectively) formed a compact part of the tertiary structure with a Cα rmsd less than 6.0 Å. Overall, these results constitute an important step toward the ab initio prediction of protein structure solely from the amino acid sequence.
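The Cα rmsd used above to score predictions against experiment is computed after optimal rigid superposition of the two Cα traces. A self-contained sketch using the standard Kabsch algorithm on synthetic coordinates (not CASP4 data):

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """rmsd between two (N, 3) coordinate sets after optimal rigid alignment."""
    P = P - P.mean(axis=0)                    # remove translations
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)         # SVD of the covariance matrix
    d = np.sign(np.linalg.det(U @ Vt))        # guard against improper rotations
    R = U @ np.diag([1.0, 1.0, d]) @ Vt       # optimal rotation
    diff = P @ R - Q
    return np.sqrt((diff ** 2).sum() / len(P))

rng = np.random.default_rng(0)
Q = rng.normal(size=(70, 3))                   # a 70-residue "experimental" Ca trace
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
P = Q @ Rz + 5.0                               # rigidly rotated and translated copy
print(kabsch_rmsd(P, Q))                       # ~0 for an exact rigid transform
```

A rigid transform of the same trace gives rmsd near machine precision; real predictions differ structurally, giving the 4.3–6.0 Å figures quoted above.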
Abstract:
We examined the effects of eye position on saccades evoked by electrical stimulation of the intraparietal sulcus (IPS) of rhesus monkeys. Microstimulation evoked saccades from sites on the posterior bank, floor, and the medial bank of the IPS. The size and direction of the eye movements varied as a function of initial eye position before microstimulation. At many stimulation sites, eye position affected primarily the amplitude and not the direction of the evoked saccades. These "modified vector saccades" were characteristic of most stimulation-sensitive zones in the IPS, with the exception of a narrow strip located mainly on the floor of the sulcus. Stimulation in this "intercalated zone" evoked saccades that moved the eyes into a particular region in head-centered space, independent of the starting position of the eyes. This latter response is compatible with the stimulation site representing a goal zone in head-centered coordinates. On the other hand, the modified vector saccades observed outside the intercalated zone are indicative of a more distributed representation of head-centered space. A convergent projection from many modified vector sites onto each intercalated site may be a basis for a transition from a distributed to a more explicit representation of space in head-centered coordinates.
Abstract:
In this paper, we prove that infinite-dimensional vector spaces of α-dense curves are generated by means of the functional equations f(x)+f(2x)+⋯+f(nx)=0, with n≥2, which are related to the partial sums of the Riemann zeta function. These curves α-densify a large class of compact sets of the plane for arbitrarily small α, extending the known result that this holds for the cases n=2,3. Finally, we prove the existence of a family of solutions of this functional equation which has the property of quadrature in the compact set that it densifies; that is, the product of the length of the curve and the nth power of the density approaches the Jordan content of the compact set that the curve densifies.
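As an illustrative check (not taken from the paper) of how solutions of these functional equations arise and connect to the zeta partial sums:

```latex
% For n = 2 and x > 0, the function f(x) = \sin(\pi \log_2 x) satisfies the
% equation, since
\[
  f(2x) = \sin\!\bigl(\pi \log_2 x + \pi\bigr)
        = -\sin\!\bigl(\pi \log_2 x\bigr) = -f(x),
  \qquad\text{so } f(x) + f(2x) = 0.
\]
% More generally, the power function f(x) = x^{s} solves
% f(x) + f(2x) + \cdots + f(nx) = 0 precisely when
\[
  \sum_{k=1}^{n} k^{s} = 0,
\]
% i.e. when -s is a zero of the partial sum
% \zeta_n(s) = \sum_{k=1}^{n} k^{-s} of the Riemann zeta function,
% which is the link mentioned in the abstract.
```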
Abstract:
We introduce a novel way of measuring the entropy of a set of values undergoing changes. Such a measure becomes useful when analyzing the temporal development of an algorithm designed to numerically update a collection of values such as artificial neural network weights undergoing adjustments during learning. We measure the entropy as a function of the phase-space of the values, i.e. their magnitude and velocity of change, using a method based on the abstract measure of entropy introduced by the philosopher Rudolf Carnap. By constructing a time-dynamic two-dimensional Voronoi diagram using Voronoi cell generators with coordinates of value- and value-velocity (change of magnitude), the entropy becomes a function of the cell areas. We term this measure teleonomic entropy since it can be used to describe changes in any end-directed (teleonomic) system. The usefulness of the method is illustrated when comparing the different approaches of two search algorithms, a learning artificial neural network and a population of discovering agents. (C) 2004 Elsevier Inc. All rights reserved.
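A simplified numerical sketch of the idea above: track each value and its velocity of change as a point in a two-dimensional phase space and compute a Shannon-style entropy from cell occupancies. The paper derives the entropy from the areas of a time-dynamic Voronoi diagram; a fixed grid is used here as a cheap stand-in, and all names and parameters are hypothetical:

```python
import numpy as np

def phase_space_entropy(values, velocities, bins=8):
    """Entropy (bits) of points in (value, value-velocity) phase space,
    using grid-cell occupancy in place of Voronoi cell areas."""
    hist, _, _ = np.histogram2d(values, velocities, bins=bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]                         # empty cells contribute nothing
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(1)
w = rng.normal(size=200)                 # e.g. neural-network weights
dw = rng.normal(scale=0.1, size=200)     # their recent updates (velocities)
print(phase_space_entropy(w, dw))
```

Tracking this quantity over training epochs gives the kind of temporal-development signal the abstract describes, with the grid approximation trading the adaptivity of Voronoi cells for simplicity.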
Abstract:
Wigner functions play a central role in the phase space formulation of quantum mechanics. Although closely related to classical Liouville densities, Wigner functions are not positive definite and may take negative values on subregions of phase space. We investigate the accumulation of these negative values by studying bounds on the integral of an arbitrary Wigner function over noncompact subregions of the phase plane with hyperbolic boundaries. We show using symmetry techniques that this problem reduces to computing the bounds on the spectrum associated with an exactly solvable eigenvalue problem and that the bounds differ from those on classical Liouville distributions. In particular, we show that the total "quasiprobability" on such a region can be greater than 1 or less than zero. (C) 2005 American Institute of Physics.
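The negativity mentioned above is easy to exhibit with the standard closed form (taking ħ = 1) for the Wigner function of the n-th harmonic-oscillator number state; this illustrates pointwise negativity only, not the paper's bounds over hyperbolic regions:

```python
import numpy as np

# W_n(x, p) = ((-1)^n / pi) * exp(-(x^2 + p^2)) * L_n(2*(x^2 + p^2)),
# where L_n is the n-th Laguerre polynomial (computed here by recurrence).

def laguerre(n, z):
    """L_n(z) via the three-term recurrence."""
    l_prev, l_cur = 1.0, 1.0 - z
    if n == 0:
        return l_prev
    for k in range(2, n + 1):
        l_prev, l_cur = l_cur, ((2 * k - 1 - z) * l_cur - (k - 1) * l_prev) / k
    return l_cur

def wigner_fock(n, x, p):
    r2 = x ** 2 + p ** 2
    return ((-1) ** n / np.pi) * np.exp(-r2) * laguerre(n, 2 * r2)

print(wigner_fock(0, 0.0, 0.0))   # 1/pi: positive (Gaussian ground state)
print(wigner_fock(1, 0.0, 0.0))   # -1/pi: negative, so not a Liouville density
```

The first excited state takes the value −1/π at the origin, which is exactly why the "quasiprobability" over a subregion can fall below zero.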
Abstract:
The estimation of a concentration-dependent diffusion coefficient in a drying process is known as an inverse coefficient problem. The solution is sought in a setting where the space-averaged concentration is known as a function of time (mass loss monitoring). The problem is stated as the minimization of a functional, and gradient-based algorithms are used to solve it. Numerical and experimental examples demonstrating the effectiveness of the proposed approach are presented. Thin-slab drying was carried out in an isothermal drying chamber built in our laboratory. The diffusion coefficients of fructose obtained with the present method are compared with existing literature results.
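A toy version of this inverse problem can make the setup concrete. For brevity the sketch below uses a constant D rather than a concentration-dependent one, an explicit finite-difference forward model for a drying slab (zero concentration at the surfaces), and a parameter scan in place of a gradient algorithm; all numbers are hypothetical:

```python
import numpy as np

def forward_mean_concentration(D, nx=41, nt=400, L=1.0, T=0.1):
    """Space-averaged concentration vs. time for a 1-D slab drying at both faces."""
    dx, dt = L / (nx - 1), T / nt
    r = D * dt / dx ** 2                    # explicit scheme: needs r < 0.5
    c = np.ones(nx)
    c[0] = c[-1] = 0.0                      # dried surfaces
    means = np.empty(nt)
    for k in range(nt):
        c[1:-1] += r * (c[2:] - 2 * c[1:-1] + c[:-2])
        means[k] = c.mean()                 # what mass-loss monitoring observes
    return means

D_true = 0.8
observed = forward_mean_concentration(D_true)   # stands in for measured data

# Minimise the least-squares functional over candidate coefficients.
candidates = np.linspace(0.1, 1.2, 111)
errors = [((forward_mean_concentration(D) - observed) ** 2).sum()
          for D in candidates]
D_est = candidates[int(np.argmin(errors))]
print(D_est)                                    # recovers ~0.8
```

The paper's actual method replaces the scan with gradient-based minimisation and lets D depend on concentration, but the structure of the functional is the same.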
Analytical maps of aerodynamic damping as a function of operating condition for a compressor profile
Abstract:
This paper derives the performance union bound of space-time trellis codes in orthogonal frequency division multiplexing systems (STTC-OFDM) over quasi-static frequency-selective fading channels based on the distance spectrum technique. The distance spectrum is the enumeration of the codeword difference measures and their multiplicities, obtained by exhaustive search through all possible error event paths. The exhaustive-search approach can be used for low-memory-order STTCs with small frame sizes. However, with moderate memory order and moderate frame size the computational cost of exhaustive search grows exponentially, and it may become impractical for high-memory-order STTCs. This calls for advanced computational techniques such as genetic algorithms (GAs). In this paper, a GA with a sharing-function method is used to locate the multiple solutions of the distance spectrum for high-memory-order STTCs. Simulations evaluate the performance union bound and compare the complexity of the non-GA-aided and GA-aided distance spectrum techniques. They show that the union bound gives a close performance measure at high signal-to-noise ratio (SNR), and that the GA sharing-function-based distance spectrum technique requires much less computational time than the exhaustive-search approach while retaining satisfactory accuracy.
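The sharing-function mechanism that lets the GA find multiple solutions rather than converging on one can be sketched as follows. Shared fitness divides raw fitness by a niche count built from a triangular sharing kernel; the niche radius sigma and exponent alpha are hypothetical choices, and real STTC search would use a codeword-difference distance rather than the scalar distance used here:

```python
import numpy as np

def shared_fitness(population, raw_fitness, sigma=1.0, alpha=1.0):
    """Fitness sharing: f'_i = f_i / sum_j sh(d_ij),
    with sh(d) = 1 - (d/sigma)^alpha for d < sigma, else 0."""
    pop = np.asarray(population, dtype=float)
    d = np.abs(pop[:, None] - pop[None, :])          # pairwise distances
    sh = np.where(d < sigma, 1.0 - (d / sigma) ** alpha, 0.0)
    niche_count = sh.sum(axis=1)                     # includes self (sh(0) = 1)
    return raw_fitness / niche_count

pop = np.array([0.0, 0.1, 0.2, 5.0])                 # three crowded, one isolated
raw = np.ones(4)                                     # equal raw fitness
print(shared_fitness(pop, raw))                      # crowded niche is penalised
```

The isolated individual keeps its full fitness while the crowded ones split theirs, which is exactly what keeps several distance-spectrum terms alive in the population.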
Abstract:
Training Mixture Density Network (MDN) configurations within the NETLAB framework takes considerable time due to the nature of the computation of the error function and its gradient. By optimising the computation of these functions, so that gradient information is computed in parameter space, training time is decreased by at least a factor of sixty for the example given. Decreased training time widens the spectrum of problems to which MDNs can be practically applied, making the MDN framework an attractive method for the applied problem solver.
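The kind of reorganisation that produces such speed-ups can be illustrated generically (this is not NETLAB's actual code): replacing a per-sample Python loop with one vectorised computation of a Gaussian negative log-likelihood gradient with respect to the mean.

```python
import numpy as np

def nll_grad_loop(x, mu, sigma):
    """Gradient of the Gaussian NLL w.r.t. mu, one sample at a time."""
    grad = 0.0
    for xi in x:                              # one Python-level pass per sample
        grad += (mu - xi) / sigma ** 2
    return grad

def nll_grad_vectorised(x, mu, sigma):
    """Same gradient, computed over the whole batch at once."""
    return ((mu - x) / sigma ** 2).sum()

x = np.linspace(-1.0, 3.0, 10_000)
print(nll_grad_loop(x, 0.5, 1.0))             # identical values,
print(nll_grad_vectorised(x, 0.5, 1.0))       # very different running time
```

Both functions return the same number; timing them (e.g. with `timeit`) shows the vectorised form running orders of magnitude faster, which is the flavour of gain the abstract reports.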
Abstract:
The generation of very-short-range forecasts of precipitation in the 0-6 h time window is traditionally referred to as nowcasting. Most existing nowcasting systems essentially extrapolate radar observations in some manner; however, very few systems account for the uncertainties involved. Thus deterministic forecasts are produced, which are of limited use when decisions must be made, since they carry no measure of confidence or spread. This paper develops a Bayesian state space modelling framework for quantitative precipitation nowcasting which is probabilistic from conception. The model treats the observations (radar) as noisy realisations of the underlying true precipitation process, recognising that this process can never be completely known and thus must be represented probabilistically. In the model presented here the dynamics of the precipitation are dominated by advection, so this is a probabilistic extrapolation forecast. The model is designed to minimise the computational burden while maintaining a full, joint representation of the probability density function of the precipitation process. The update and evolution equations avoid the need to sample, so only one model needs to be run, as opposed to the more traditional ensemble route. It is shown that the model works well on both simulated and real data, but that further work is required before it can be used operationally. © 2004 Elsevier B.V. All rights reserved.
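A minimal sketch of this kind of linear-Gaussian state-space model: the precipitation field evolves by advection (here a one-cell periodic shift on a 1-D grid) and the radar supplies noisy observations, so the Kalman filter propagates the full Gaussian density analytically and no ensemble sampling is needed. Grid size and noise levels are hypothetical, not taken from the paper:

```python
import numpy as np

n = 20
F = np.roll(np.eye(n), 1, axis=0)        # advection: shift the field by one cell
Q = 0.01 * np.eye(n)                     # process noise covariance
R = 0.25 * np.eye(n)                     # radar observation noise covariance
H = np.eye(n)                            # the radar observes every cell

rng = np.random.default_rng(2)
truth = np.exp(-0.5 * ((np.arange(n) - 5.0) / 2.0) ** 2)   # a rain "blob"
m, P = np.zeros(n), np.eye(n)            # prior mean and covariance

for _ in range(10):
    truth = F @ truth                                    # advect the true field
    y = truth + rng.normal(scale=0.5, size=n)            # noisy radar image
    m, P = F @ m, F @ P @ F.T + Q                        # predict step
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                       # Kalman gain
    m = m + K @ (y - H @ m)                              # update step
    P = (np.eye(n) - K @ H) @ P

print(np.abs(m - truth).mean())          # filtered error is well below raw noise
```

Because the update and evolution equations are closed-form, a single filter run carries the whole joint density, which is the computational advantage over ensemble approaches that the abstract highlights.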
Abstract:
The article explores the possibilities of formalizing and explaining the mechanisms that support spatial and social perspective alignment sustained over the duration of a social interaction. The basic proposed principle is that in social contexts the mechanisms for sensorimotor transformations and multisensory integration (learn to) incorporate information relative to the other actor(s), similar to the "re-calibration" of visual receptive fields in response to repeated tool use. This process aligns or merges the co-actors' spatial representations and creates a "Shared Action Space" (SAS) supporting key computations of social interactions and joint actions; for example, the remapping between the coordinate systems and frames of reference of the co-actors, including perspective taking, the sensorimotor transformations required for jointly lifting an object, and the predictions of the sensory effects of such joint action. The social re-calibration is proposed to be based on common basis function maps (BFMs) and could constitute an optimal solution to sensorimotor transformation and multisensory integration in joint action or, more generally, in social interaction contexts. However, certain situations such as discrepant postural and viewpoint alignment and associated differences in perspectives between the co-actors could constrain the process quite differently. We discuss how alignment is achieved in the first place, and how it is maintained over time, providing a taxonomy of various forms and mechanisms of space alignment and overlap based, for instance, on automaticity vs. control of the transformations between the two agents. Finally, we discuss the link between low-level mechanisms for the sharing of space and high-level mechanisms for the sharing of cognitive representations. © 2013 Pezzulo, Iodice, Ferraina and Kessler.
Abstract:
A hard combinatorial problem with useful applications in the design of discrete devices is investigated: the two-block decomposition of a partial Boolean function. The key task considered is finding a weak partition of the set of arguments at which the function can be decomposed. Solving this task is substantially sped up by first discovering traces of the sought-for partition. Efficient combinatorial operations are used for this purpose, based on parallel execution of operations over adjacent units in the Boolean space.
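The decomposition test behind such problems can be sketched with a classical Ashenhurst-style check, here simplified to completely specified functions rather than the partial functions and weak partitions the paper treats: f(A, B) decomposes as g(h(A), B) with a single intermediate signal iff the decomposition chart, whose rows are indexed by assignments to the block A and columns by assignments to B, has at most two distinct rows.

```python
from itertools import product

def decomposable(f, n_a, n_b):
    """True iff f(A, B) = g(h(A), B) for some single-output h over the A block.
    Builds the decomposition chart and counts its distinct rows."""
    rows = set()
    for a in product((0, 1), repeat=n_a):
        row = tuple(f(a, b) for b in product((0, 1), repeat=n_b))
        rows.add(row)
    return len(rows) <= 2                 # <= 2 rows: one wire h(A) suffices

# Example: f(a, b) = (a1 XOR a2) AND b1 decomposes with h(a) = a1 XOR a2.
f = lambda a, b: (a[0] ^ a[1]) & b[0]
print(decomposable(f, 2, 1))              # True
```

For partial functions the chart contains don't-care entries, and the question becomes whether the specified entries can be covered by two row patterns, which is the harder problem the abstract addresses.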