Abstract:
This paper reports the structural behavior and thermodynamics of the complexation of siRNA with poly(amidoamine) (PAMAM) dendrimers of generation 3 (G3) and 4 (G4) through fully atomistic molecular dynamics (MD) simulations accompanied by free energy calculations and inherent structure determination. We have also performed simulations with one siRNA and two dendrimers (2 x G3 or 2 x G4) to obtain a microscopic picture of the various binding modes. Our simulation results reveal the formation of a stable siRNA-dendrimer complex over the nanosecond time scale. With increasing dendrimer generation, the charge ratio increases and hence the binding energy between siRNA and dendrimer also increases, in accordance with available experimental measurements. Calculated radial distribution functions between the amine groups of the various subgenerations of a given dendrimer generation and the phosphate groups of the siRNA backbone reveal that a single generation-4 dendrimer binds siRNA better, with the siRNA almost wrapping the dendrimer, than a lower-generation dendrimer such as G3. In contrast, two generation-4 dendrimers bind without the siRNA wrapping the dendrimer because of the repulsion between the two dendrimers. The counterion distribution around the complex and the water molecules in the hydration shell of siRNA give a microscopic picture of the binding dynamics. We see a clear correlation between the motions of water and counterions and the complexation, i.e., the water molecules and counterions condensed around the siRNA move away from the siRNA backbone when the dendrimer starts binding to it. As the siRNA wraps around and binds to the dendrimer, counterions originally condensed onto the siRNA (Na+) and the dendrimer (Cl-) are released. We give a quantitative estimate of the entropy of the counterions and show that there is a gain in entropy due to counterion release during the complexation. Furthermore, the free energy of complexation for the single-dendrimer (1 x G3 and 1 x G4) systems at two different salt concentrations shows that an increase in salt concentration weakens the binding affinity between siRNA and dendrimer.
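As a minimal sketch of the site-site radial distribution function referred to above (assuming amine and phosphate coordinates are already available as NumPy arrays; this is not the authors' analysis code):

```python
import numpy as np

def site_site_rdf(pos_a, pos_b, box, r_max=20.0, nbins=200):
    """Single-frame g(r) between two sets of sites in a cubic periodic box,
    e.g. dendrimer amine nitrogens (pos_a) vs siRNA backbone phosphorus
    atoms (pos_b); coordinates and box length share the same units."""
    d = pos_a[:, None, :] - pos_b[None, :, :]
    d -= box * np.round(d / box)                      # minimum-image convention
    r = np.linalg.norm(d, axis=-1).ravel()
    hist, edges = np.histogram(r, bins=nbins, range=(0.0, r_max))
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    rho_b = len(pos_b) / box ** 3                     # number density of B sites
    g = hist / (len(pos_a) * shell_vol * rho_b)
    return 0.5 * (edges[1:] + edges[:-1]), g
```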
Abstract:
The paper examines the needs, premises and criteria for effective public participation in tactical forest planning. A method for participatory forest planning utilizing the techniques of preference analysis, professional expertise and heuristic optimization is introduced. The techniques do not cover the whole process of participatory planning, but are applied as a tool constituting the numerical core for decision support. The complexity of multi-resource management is addressed by hierarchical decision analysis, which assesses the public's values, preferences and decision criteria toward the planning situation. An optimal management plan is sought using heuristic optimization. The plan can be further improved through mutual negotiations, if necessary. The use of the approach is demonstrated with an illustrative example; its merits and challenges for participatory forest planning and decision making are discussed, and a model for applying it in a general forest planning context is outlined. By using the approach, valuable information can be obtained about public preferences and about the effects of taking them into consideration on the choice of the combination of stand-wise treatment proposals for a forest area. Participatory forest planning calculations carried out with the approach presented in the paper can be utilized in conflict management and in developing compromises between competing interests.
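A toy sketch of the heuristic-optimization core is given below; the stands, criteria, weights and the simple hill-climbing moves are all hypothetical stand-ins for the planning system's actual data and heuristic.

```python
import random

# Each stand receives one treatment; the objective is a weighted sum of
# criterion scores reflecting elicited participant preferences.
random.seed(1)
n_stands = 30
treatments = ["no_cut", "thinning", "clearcut"]
weights = {"timber": 0.4, "scenery": 0.35, "berries": 0.25}
score = [{t: {c: random.random() for c in weights} for t in treatments}
         for _ in range(n_stands)]

def utility(plan):
    return sum(weights[c] * score[i][plan[i]][c]
               for i in range(n_stands) for c in weights) / n_stands

plan = [random.choice(treatments) for _ in range(n_stands)]
for _ in range(3000):                       # simple hill-climbing moves
    i = random.randrange(n_stands)
    t = random.choice(treatments)
    candidate = plan[:i] + [t] + plan[i + 1:]
    if utility(candidate) >= utility(plan):
        plan = candidate
print("plan utility:", round(utility(plan), 3))
```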
Abstract:
The concept of carbocycle-heterocycle equivalency has been utilised to assemble the framework of the fawcettimine-serratinine group of alkaloids from 1,5-cyclooctadiene through a common tricarbocyclic intermediate 3.
Abstract:
A new algorithm based on the signal subspace approach is proposed for localizing a sound source in shallow water. In the first instance we assumed an ideal channel with plane parallel boundaries and known reflection properties. The sound source is assumed to emit a broadband stationary stochastic signal. The algorithm takes into account the spatial distribution of all images and the reflection characteristics of the sea bottom. It is shown that both the range and the depth of a source can be measured accurately with the help of a vertical array of sensors. For good results the number of sensors should be greater than the number of significant images; however, localization is possible even with a smaller array, but at the cost of higher side lobes. Next, we allowed the channel to be stochastically perturbed; this resulted in random phase errors in the reflection coefficients. The most notable effect of the phase errors is to introduce into the spectral matrix an extra term which may be looked upon as signal-generated coloured noise. It is shown through computer simulations that the signal peak height is reduced considerably as a consequence of random phase errors.
Abstract:
The present study analyses memories of watching Finnish television in Estonia during the last decades of the Soviet occupation, from the late 1960s until the beginning of the 1990s. The study stems from a culturalist approach, perceiving television as a relevant aspect of the audiences' everyday lives. It explores the significance of Finnish television for the society of occupied Estonia from the point of view of its historical audiences. The literature review concentrates on concepts such as the power of television, transnational media, historical audience reception and memory as an object of research. It also explains the concept of spillover, which refers to the unintentional bilateral flow of television signals from one country to another. Despite the numerous efforts of the Soviet authorities to prevent the viewing of "bourgeois television", there still remained a small gap in the Iron Curtain. The study describes the phenomenon of watching Finnish television in Estonia. It provides an understanding of the significance of watching Finnish television in Soviet Estonia through the experiences of its former audience. In addition, it explores what people remember about watching Finnish television, and why. The empirical data were acquired from people's personal memories through the analysis of private interviews and written responses collected between February 2010 and February 2011. A total of 85 responses (5 interviews and 83 written responses) were analysed. The research employed the methods of oral history and memory studies. The main theoretical sources of the study include the works of Mati Graf and Heikki Roiko-Jokela, Hagi Šein, Sonia Livingstone, Janet Staiger and Emily Keightley. The study concludes that besides fulfilling the roles of entertainer and informer, Finnish television enabled its Estonian audiences to gain entry into an imaginary world. Access to this imaginary world was so important that viewers engaged in illegal activities and acquired special skills, whereby a phenomenon of "television tourism" developed. Most of the memories about Finnish television are vivid and similar. The latter indicates both the reliability and the collectiveness of such memories, which in turn give shape to collective identities. Thus, for the Estonian viewers, the experience of watching Finnish television during the Soviet occupation has become part of their identity.
Abstract:
Potential transients are obtained by using Padé approximants (an accurate approximation procedure valid globally, not just perturbatively) for all amplitudes of concentration polarization and current densities. This is done for several mechanistic schemes under constant-current conditions. We invert the non-linear current-potential relationship in the form (using the Lagrange or the Ramanujan method) of power series appropriate to the two extremes, namely near-reversible and near-irreversible. Transforming both into Padé expressions, we construct the potential-time profile by retaining whichever of the two is the more accurate. The effectiveness of this method is demonstrated through illustrations which include couplings of homogeneous chemical reactions to the electron-transfer step.
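As a minimal illustration of the Padé step only (not of the paper's mechanistic schemes), the sketch below converts a truncated power series into a rational approximant with SciPy; exp(x) is used purely as a stand-in series with known coefficients.

```python
from math import factorial

import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of exp(x) stand in for the series expansion of a
# current-potential relation near one of the two limits.
coeffs = [1.0 / factorial(k) for k in range(7)]
p, q = pade(coeffs, 3)              # [3/3] Padé approximant

x = 1.5
print(p(x) / q(x), np.exp(x))       # Padé estimate vs. the exact value
```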
Abstract:
The rate of breakage of feed in ball milling is usually represented in the form of a first-order rate equation. The equation was developed by treating a simple batch test mill as a well-mixed reactor. Several cases of deviation from this rule have been reported in the literature. This is attributed to the fact that accumulated fines interfere with the feed material and breakage events are masked by these fines. In the present paper, a new rate equation is proposed which takes into account the retarding effect of fines during milling. For this purpose the analogy of diffusion of ions through permeable membranes is adopted, with suitable modifications. The validity of the model is cross-checked with data obtained in batch grinding of -850/+600 μm quartz. The proposed equation enables calculation of the rate of breakage of the feed at any instant of time.
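For reference, the standard first-order description that the paper starts from can be sketched as below; the data points are hypothetical, and the paper's fines-retardation correction is not reproduced.

```python
import numpy as np

# First-order batch-grinding kinetics: the mass fraction of feed-size
# material decays as w(t) = w0 * exp(-S * t), with S the specific breakage rate.
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # grinding time, min (hypothetical)
w = np.array([0.82, 0.68, 0.46, 0.22, 0.05])   # fraction of feed size remaining

slope, intercept = np.polyfit(t, np.log(w), 1) # log-linear fit of the decay
print(f"specific breakage rate S = {-slope:.3f} 1/min")
```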
Abstract:
Standard-cell design methodology is an important technique in semicustom VLSI design. It lends itself to easy automation of the crucial layout step, and many algorithms have been proposed in the recent literature for the efficient placement of standard cells. While many studies have identified the Kernighan-Lin bipartitioning method as being superior to most others, it must be admitted that the behaviour of the method is erratic and strongly dependent on the initial partition. This paper proposes a novel algorithm for overcoming some of the deficiencies of the Kernighan-Lin method. The approach is based on an analogy between the placement problem and neural networks, and, by using some of the organizing principles of these nets, an attempt is made to improve the behaviour of the bipartitioning scheme. The results have been encouraging, and the approach seems promising for other NP-complete problems in circuit layout.
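The dependence on the initial partition can be seen with a standard Kernighan-Lin implementation; the sketch below uses NetworkX on a toy graph and is not the paper's neural-network-inspired scheme.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

# A toy netlist modelled as a graph: nodes are cells, edges are two-pin nets.
G = nx.random_regular_graph(4, 20, seed=7)

# Run KL refinement from two different random initial partitions and compare
# the cut sizes it settles on.
for seed in (1, 2):
    a, b = kernighan_lin_bisection(G, seed=seed)
    print(f"initial partition seed {seed}: cut size {nx.cut_size(G, a, b)}")
```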
Abstract:
A kinetic model has been developed for dislocation bending at the growth surface in compressively stressed low-mobility films such as III-V nitrides. It is based on a reduction in the number of atoms at the growth surface. Stress and nonstress sources of driving force for such a reduction are discussed. A comparison between the derived equations and experimentally measured stress evolution data yields good agreement between the predicted and observed angles through which dislocations bend.
Abstract:
A short access to homocalystegine analogues silylated at C7 is described. The synthesis involves the desymmetrization of a (phenyldimethylsilyl)methylcycloheptatriene using osmium-mediated dihydroxylation, followed by protection of the diol and a cycloaddition involving the remaining diene moiety and an acylnitroso reagent. Additions of the osmium and acylnitroso reagents were shown, through X-ray diffraction studies of the resulting major isomers, to occur anti and syn, respectively, relative to the SiCH2 substituent. N-O bond cleavage on the resulting cycloadduct then produces the aminopolyol bearing a silylmethyl substituent. Oxidation of the C-Si bond also afforded access to unusual amino-heptitols having five contiguous stereogenic centers. In the course of this work, we finally observed an unusual rearrangement taking place on cycloheptanone 18, substituted by two acetyl groups and a neighboring Boc-protected amine. A profound reorganization of the substituents on the seven-membered ring took place under acidic conditions (TFA), leading to the thermodynamically more stable homocalystegine-type compound. DFT calculations of the conformational energy of isomeric silyl homocalystegines indicated that the product observed upon the acid-mediated rearrangement was the most stable of a series of analogues with various distributions of substituents along the seven-membered ring backbone. A tentative mechanism is proposed to rationalize the acetate migrations and the inversions of stereochemistry at various stereocenters.
Abstract:
Mycobacterium tuberculosis is an example of an intracellular pathogen that mediates the disease state through complex interactions with the host's immune system. Not only does this organism replicate in the hostile environment prevailing within the infected macrophage, but it has also developed intricate mechanisms to inhibit several defence mechanisms of the host's immune system. It is postulated here that the mediators of these interactions with the host are products of differentially expressed genes in the pathogen; B and T cell responses of the host are hence to be used as tools to identify such gene products from an expression library of the Mycobacterium tuberculosis genome. The various pathways of generating a productive immune response that may be targeted by the pathogen are discussed.
Abstract:
In the past few years there have been attempts to develop subspace methods for DoA (direction of arrival) estimation using a fourth-order cumulant, which is known to de-emphasize Gaussian background noise. To gauge the relative performance of the cumulant MUSIC (MUltiple SIgnal Classification) (c-MUSIC) and the standard MUSIC, based on the covariance function, an extensive numerical study has been carried out, in which a narrow-band signal source has been considered and Gaussian noise sources, which produce a spatially correlated background noise, have been distributed. These simulations indicate that, even though the cumulant approach is capable of de-emphasizing the Gaussian noise, both the bias and the variance of the DoA estimates are higher than those for MUSIC. To achieve comparable results the cumulant approach requires much more data, three to ten times that for MUSIC, depending upon the number of sources and how close they are. This is attributed to the fact that estimating the cumulant requires averaging a product of four random variables. Therefore, compared to the evaluation of the covariance function, there are more cross terms which do not go to zero unless the data length is very large. It is felt that these cross terms contribute to the large bias and variance observed in c-MUSIC. However, the ability to de-emphasize Gaussian noise, white or colored, is of great significance, since the standard MUSIC fails when there is colored background noise. Through simulation it is shown that c-MUSIC does yield good results, but only at the cost of more data.
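For orientation, the sketch below computes a standard covariance-based MUSIC pseudospectrum for a uniform linear array; the cumulant-based variant discussed above replaces the sample covariance matrix with a fourth-order cumulant matrix and is not reproduced here.

```python
import numpy as np

def music_spectrum(X, n_sources, scan_deg, spacing=0.5):
    """Covariance-based MUSIC pseudospectrum for a uniform linear array.
    X: (sensors x snapshots) complex data; spacing in wavelengths."""
    m, n = X.shape
    R = X @ X.conj().T / n                        # sample covariance matrix
    _, V = np.linalg.eigh(R)                      # eigenvectors, ascending eigenvalues
    En = V[:, : m - n_sources]                    # noise-subspace eigenvectors
    theta = np.deg2rad(scan_deg)
    A = np.exp(-2j * np.pi * spacing *
               np.outer(np.arange(m), np.sin(theta)))   # steering vectors
    return 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
```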
Abstract:
Perfect or even mediocre weather predictions over a long period are almost impossible because of the ultimate growth of a small initial error into a significant one. Even though sensitivity to initial conditions limits the predictability of chaotic systems, an ensemble of predictions from different possible initial conditions, together with a prediction algorithm capable of resolving the fine structure of the chaotic attractor, can reduce the prediction uncertainty to some extent. All of the traditional chaotic prediction methods in hydrology are based on single optimum initial condition local models, which can model the sudden divergence of the trajectories with different local functions. Conceptually, global models are ineffective in modeling the highly unstable structure of the chaotic attractor. This paper focuses on an ensemble prediction approach by reconstructing the phase space using different combinations of chaotic parameters, i.e., embedding dimension and delay time, to quantify the uncertainty in initial conditions. The ensemble approach is implemented through a local learning wavelet network model with a global feed-forward neural network structure for the phase space prediction of chaotic streamflow series. Quantification of uncertainties in future predictions is done by creating an ensemble of predictions with the wavelet network using a range of plausible embedding dimensions and delay times. The ensemble approach is shown to be 50% more efficient than the single prediction for both the local approximation and wavelet network approaches. The wavelet network approach proves to be 30%-50% superior to the local approximation approach. Compared to the traditional local approximation approach with a single initial condition, the total predictive uncertainty in the streamflow is reduced when modeled with ensemble wavelet networks for different lead times. The localization property of wavelets, utilizing different dilation and translation parameters, helps in capturing most of the statistical properties of the observed data. The need for taking into account all plausible initial conditions, and for bringing together the characteristics of both local and global approaches to model the unstable yet ordered chaotic attractor of a hydrologic series, is clearly demonstrated.
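A minimal sketch of the phase-space reconstruction step underlying the ensemble is given below; the (dimension, delay) pairs and the test series are hypothetical, and the wavelet-network predictor itself is not reproduced.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct a dim-dimensional phase space from a scalar series x
    using delay time tau (time-delay embedding)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

# Ensemble of reconstructions over plausible (dim, tau) pairs; each member
# would feed its own predictor.
x = np.sin(0.1 * np.arange(1000)) + 0.05 * np.random.default_rng(0).normal(size=1000)
ensemble = {(d, tau): delay_embed(x, d, tau)
            for d in (3, 4, 5) for tau in (5, 10, 15)}
print({k: v.shape for k, v in ensemble.items()})
```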
Abstract:
Electronic exchanges are double-sided marketplaces that allow multiple buyers to trade with multiple sellers, with aggregation of demand and supply across the bids to maximize the revenue in the market. In this paper, we propose a new design approach for a one-shot exchange that collects bids from buyers and sellers and clears the market at the end of the bidding period. The main principle of the approach is to decouple the allocation from the pricing. It is well known that it is impossible for an exchange with voluntary participation to be both efficient and budget-balanced. Budget balance is a mandatory requirement for an exchange to operate at a profit. Our approach is to allocate the trade so as to maximize the reported values of the agents. Pricing is posed as a payoff determination problem that distributes the total payoff fairly to all agents, with budget balance imposed as a constraint. We devise an arbitration scheme through an axiomatic approach to solve the payoff determination problem using the added-value concept of game theory.
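A minimal sketch of the allocate-then-price idea for a single-commodity, one-shot exchange is given below; the bids are hypothetical, and the axiomatic, budget-balanced payoff division itself is not reproduced.

```python
# Surplus-maximizing allocation: match the highest buy bids with the lowest
# sell asks while the buy price still covers the ask. Pricing is a separate,
# subsequent step, reflecting the decoupling of allocation from pricing.
def allocate(buy_bids, sell_asks):
    buys = sorted(buy_bids, reverse=True)
    asks = sorted(sell_asks)
    trades = []
    for b, s in zip(buys, asks):
        if b < s:
            break
        trades.append((b, s))
    return trades

trades = allocate([10, 9, 7, 4], [3, 5, 8, 11])
print(trades, "reported surplus:", sum(b - s for b, s in trades))
```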
Abstract:
We explore a pseudodynamic form of the quadratic parameter update equation for diffuse optical tomographic reconstruction from noisy data. A few explicit and implicit strategies for obtaining the parameter updates via a semianalytical integration of the pseudodynamic equations are proposed. Despite the ill-posedness of the inverse problem associated with diffuse optical tomography, adoption of the quadratic update scheme combined with pseudotime integration appears to yield not only improved convergence but also a muted sensitivity to the regularization parameters, which include the pseudotime step size used for the integration. These observations are validated through reconstructions with both numerically generated and experimentally acquired data. (C) 2011 Optical Society of America
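A toy sketch of a pseudotime-integrated, regularized quadratic update is given below, using a hypothetical exponential forward model rather than the diffuse optical tomography operator; it only illustrates how a damped step can be advanced explicitly in pseudotime.

```python
import numpy as np

def forward(p, x):
    # Hypothetical two-parameter forward model standing in for the real operator.
    return p[0] * np.exp(-p[1] * x)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 40)
data = forward(np.array([2.0, 0.7]), x) + 0.01 * rng.normal(size=x.size)

p = np.array([1.0, 1.0])          # initial parameter guess
lam, dtau = 1e-2, 0.5             # regularization weight and pseudotime step
for _ in range(100):
    r = forward(p, x) - data
    J = np.column_stack([np.exp(-p[1] * x),
                         -p[0] * x * np.exp(-p[1] * x)])   # Jacobian of forward
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    p = p - dtau * step           # explicit pseudotime integration of the update
print("recovered parameters:", p)
```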