56 results for Additivity
Abstract:
In this study, we used data from both experiments and mathematical simulations to analyze the consequences of the interacting effects of intraguild predation (IGP), cannibalism and parasitism occurring in isolation and simultaneously in trophic interactions involving two blowfly species under shared parasitism. We conducted experiments to determine the short-term response of two blowfly species to these interactions with respect to their persistence. A mathematical model was employed to extend the results obtained from these experiments to the long-term consequences of these interactions for the persistence of the blowfly species. Our experimental results revealed that IGP attenuated the strength of the effects of cannibalism and parasitism between blowfly host species, increasing the probability of persistence of both populations. The simulations obtained from the mathematical model indicated that IGP is a key interaction for the long-term dynamics of this system. The presence of different species interacting in a tri-trophic system relaxed the severity of the effects of a particular interaction between two species, changing species abundances and promoting persistence through time. This pattern was related to indirect interactions with a third species, the parasitoid species included in this study. © 2012 The Society of Population Ecology and Springer Japan.
Abstract:
We focus on kernels incorporating different kinds of prior knowledge on functions to be approximated by Kriging. A recent result on random fields with paths invariant under a group action is generalised to combinations of composition operators, and a characterisation of kernels leading to random fields with additive paths is obtained as a corollary. A discussion follows on some implications for the design of experiments, and it is shown in the case of additive kernels that the so-called class of “axis designs” outperforms Latin hypercubes in terms of the integrated mean squared error (IMSE) criterion.
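As a minimal illustration of the kind of kernel discussed here, the sketch below builds an additive kernel from one-dimensional squared-exponential components in Python; the component kernels, lengthscales, and design points are illustrative assumptions, not taken from the paper. A Gaussian process with this covariance has additive sample paths, i.e., f(x1, x2) = f1(x1) + f2(x2) up to a constant.

```python
import numpy as np

def rbf_1d(x, y, lengthscale=1.0):
    # Squared-exponential kernel acting on a single scalar coordinate.
    return np.exp(-0.5 * (x - y) ** 2 / lengthscale ** 2)

def additive_kernel(x, y, lengthscales=(1.0, 1.0)):
    # Sum of one-dimensional kernels, one per input coordinate; a GP
    # with this covariance has additive paths f(x) = f1(x[0]) + f2(x[1]).
    return sum(rbf_1d(x[d], y[d], ls) for d, ls in enumerate(lengthscales))

# Gram matrix on a handful of 2-D design points (values illustrative).
X = np.array([[0.0, 0.0], [0.5, 1.0], [1.0, 0.2]])
K = np.array([[additive_kernel(a, b) for b in X] for a in X])
print(K)
```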
Abstract:
The problem of rationally engineering protein molecules can be simplified where effects of mutations on protein function are additive. Crystal structures of single and double mutants in the hydrophobic core of gene V protein indicate that structural and functional effects of core mutations are additive when the regions structurally influenced by the mutations do not substantially overlap. These regions of influence can provide a simple basis for identifying sets of mutations that will show additive effects.
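Additivity of mutational effects is commonly expressed in terms of free energy changes; for a double mutant this reads as follows (the standard double-mutant formalism, stated here for context rather than taken from this paper):

```latex
% Additivity of mutational effects, in the usual free-energy form:
% for single mutants A, B and the double mutant AB,
\Delta\Delta G_{AB} = \Delta\Delta G_{A} + \Delta\Delta G_{B},
\qquad \Delta\Delta G_{X} = \Delta G_{X} - \Delta G_{\mathrm{wt}}
```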
Abstract:
We present a new approach accounting for the non-additivity of the attractive parts of the solid-fluid and fluid-fluid potentials to improve the quality of the description of nitrogen and argon adsorption isotherms on graphitized carbon black in the framework of non-local density functional theory (NLDFT). We show that the strong solid-fluid interaction in the first monolayer decreases the fluid-fluid interaction, which prevents the two-dimensional phase transition from occurring. This results in a smoother isotherm, which agrees much better with experimental data. In the region of multi-layer coverage, conventional NLDFT and grand canonical Monte Carlo simulations are known to over-predict the amount adsorbed relative to experimental isotherms. Accounting for the non-additivity factor decreases the solid-fluid interaction as intermolecular interactions in the dense adsorbed fluid increase, preventing the over-prediction of loading in the region of multi-layer adsorption. This improvement of NLDFT allows us to describe experimental nitrogen and argon isotherms on carbon black quite accurately, with a mean error of 2.5 to 5.8% instead of 17 to 26% for the conventional technique. With this approach, the local isotherms of model pores can be derived, and consequently a more reliable pore size distribution (PSD) can be obtained. We illustrate this by applying our theory to nitrogen and argon isotherms on a number of activated carbons. The fit between our model and the data is much better than with conventional NLDFT, suggesting that the PSD obtained with our approach is more reliable.
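The abstract does not give the functional form of the correction. One common way to encode non-additivity of cross interactions, shown here purely as a hedged sketch, is a deviation factor in the Berthelot combining rule for the solid-fluid well depth; letting that factor shrink as the adsorbed fluid densifies (our assumption) reproduces the qualitative behavior described above.

```latex
% Berthelot combining rule with a non-additivity factor \xi (sketch only):
\varepsilon_{sf} = \xi\,\sqrt{\varepsilon_{ss}\,\varepsilon_{ff}}, \qquad \xi \le 1,
% with \xi taken to decrease as the local density (and hence the
% fluid-fluid interaction energy) of the adsorbed layer increases.
```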
Abstract:
The paper reviews some axioms of additivity concerning ranking methods used for generalized tournaments with possible missing values and multiple comparisons. It is shown that one of the most natural properties, called consistency, has strong links to independence of irrelevant comparisons, an axiom judged unfavourable when players have different opponents. Therefore some directions for weakening consistency are suggested, and several ranking methods (the score, generalized row sum and least squares methods, as well as fair bets and two of its variants, one of them entirely new) are analysed as to whether they satisfy the properties discussed. It turns out that least squares and generalized row sum with an appropriate parameter choice preserve the relative ranking of two objects if the ranking problems being added have the same comparison structure.
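For concreteness, the sketch below applies the least squares ranking method to a small invented tournament with missing comparisons; the results matrix is hypothetical, and the zero-sum normalization of the ratings is the usual convention.

```python
import numpy as np

# Hypothetical 4-player generalized tournament: R[i, j] is the aggregate
# result of i against j (wins minus losses); M[i, j] counts comparisons.
# Zeros off the diagonal mark pairs that never met (missing values).
R = np.array([[ 0.,  2.,  1.,  0.],
              [-2.,  0.,  0.,  1.],
              [-1.,  0.,  0., -1.],
              [ 0., -1.,  1.,  0.]])
M = np.array([[0., 2., 1., 0.],
              [2., 0., 0., 1.],
              [1., 0., 0., 1.],
              [0., 1., 1., 0.]])

s = R.sum(axis=1)               # score vector (net wins per player)
L = np.diag(M.sum(axis=1)) - M  # Laplacian of the comparison multigraph

# Least squares ratings solve L q = s with the ratings summing to zero;
# the pseudoinverse returns exactly that zero-sum solution.
q = np.linalg.pinv(L) @ s
print(q)                        # higher rating = better rank
```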
Abstract:
Surface pressure (π)-molecular area (A) curves were used to characterize the packing of pseudo-ternary mixed Langmuir monolayers of egg phosphatidylcholine (EPC), 1,2-dioleoyl-3-trimethylammonium propane (DOTAP) and L-alpha-dioleoyl phosphatidylethanolamine (DOPE). This pseudo-ternary mixture EPC/DOPE/DOTAP has been successfully employed in liposome formulations designed as DNA non-viral vectors. Pseudo-binary mixtures were also studied as a control. Miscibility behavior was inferred from π-A curves by applying the additivity rule and calculating the excess free energy of mixing (ΔG_Exc). The interaction between the lipids was also deduced from the surface compressional modulus (C_s^-1). The deviation from ideality depends on the type of lipid polar head and on the monolayer composition. For lower DOPE concentrations, the forces are predominantly attractive. However, if the monolayer is DOPE-rich, the presence of DOTAP disturbs the PE-PE intermolecular interaction and the net interaction is then repulsive. The ternary monolayer EPC/DOPE/DOTAP adopted two configurations, modulated by the DOPE content, in behavior similar to that of the DOPE/DOTAP monolayers. These results contribute to the understanding of the lipid interactions and packing in self-assembled systems associated with the in vitro and in vivo stability of liposomes. (C) 2010 Elsevier B.V. All rights reserved.
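The quantities named above can be stated compactly; these are the standard definitions (A_12 is the measured mean molecular area of the mixed film, A_i and x_i the area and mole fraction of pure component i):

```latex
% Additivity (ideal mixing) rule for the mean molecular area:
A_{\mathrm{id}}(\pi) = \sum_i x_i\,A_i(\pi)
% Excess free energy of mixing, integrating the excess area over pressure:
\Delta G^{\mathrm{Exc}} = \int_0^{\pi} \Bigl[ A_{12}(\pi') - \sum_i x_i\,A_i(\pi') \Bigr]\,\mathrm{d}\pi'
% Surface compressional modulus from the measured isotherm:
C_s^{-1} = -A \left( \frac{\partial \pi}{\partial A} \right)_T
```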
Abstract:
In this paper, we consider testing for additivity in a class of nonparametric stochastic regression models. Two test statistics are constructed and their asymptotic distributions are established. We also conduct a small-sample study of one of the test statistics through a simulated example. (C) 2002 Elsevier Science (USA).
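The null hypothesis under test can be written as follows; the notation is ours (the paper constructs the actual test statistics):

```latex
% Additivity null hypothesis for a nonparametric regression model
% Y_t = m(X_{t,1}, \dots, X_{t,d}) + \varepsilon_t :
H_0:\; m(x_1,\dots,x_d) = c + \sum_{j=1}^{d} m_j(x_j)
\qquad \text{vs.} \qquad
H_1:\; m \text{ admits no such decomposition}
```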
Abstract:
A fast and direct surface plasmon resonance (SPR) method for the kinetic analysis of the interactions between peptide antigens and immobilised monoclonal antibodies (mAbs) has been established. Protocols have been developed to overcome the problems posed by the small size of the analytes (< 1600 Da). The interactions were well described by a simple 1:1 bimolecular interaction model, and the rate constants were self-consistent and reproducible. The key features for the accuracy of the measured kinetic constants were high buffer flow rates, medium antibody surface densities and high peptide concentrations. The method was applied to an extensive analysis of over 40 peptide analogues towards two distinct anti-FMDV antibodies, providing data in total agreement with previous competition ELISA experiments. Eleven linear 15-residue synthetic peptides, reproducing all possible combinations of the four replacements found in foot-and-mouth disease virus (FMDV) field isolate C-S30, were evaluated. The direct kinetic SPR analysis of the interactions between these peptides and three anti-site A mAbs suggested additivity in all combinations of the four relevant mutations, which was confirmed by parallel ELISA analysis. The four-point mutant peptide (A15S30) reproducing site A from the C-S30 strain was the least antigenic of the set, in disagreement with previously reported studies with the virus isolate. Increasing peptide size from 15 to 21 residues did not significantly improve antigenicity. Overnight incubation of A15S30 with mAb 4C4 in solution showed a marked increase in peptide antigenicity not observed for other peptide analogues, suggesting that conformational rearrangement could lead to a stable peptide-antibody complex. In fact, peptide cyclization clearly improved antigenicity, confirming an antigenic reversion in a multiply substituted peptide. Solution NMR studies of both linear and cyclic versions of the antigenic loop of FMDV C-S30 showed that structural features previously correlated with antigenicity were more pronounced in the cyclic peptide. Twenty-six synthetic peptides, corresponding to all possible combinations of five single-point antigenicity-enhancing replacements in the GH loop of FMDV C-S8c1, were also studied. SPR kinetic screening of these peptides was not possible due to problems mainly related to the high mAb affinities displayed by these synthetic antigens. Solution affinity SPR analysis was employed instead, and the affinities displayed were generally comparable to or even higher than those of the C-S8c1 reference peptide A15. The NMR characterisation of one of these multiple mutants in solution showed a conformational behaviour quite similar to that of the native sequence A15, and X-ray diffraction crystallographic analysis of the peptide–mAb 4C4 complex showed paratope–epitope interactions identical to those in all FMDV peptide–mAb complexes studied so far. Key residues for these interactions are those directly involved in epitope–paratope contacts (141Arg, 143Asp, 146His) as well as residues able to stabilise a particular global folding of the peptide. A quasi-cyclic conformation is held up by a hydrophobic cavity defined by residues 138, 144 and 147 and by other key intrapeptide hydrogen bonds, delineating an open turn at positions 141, 142 and 143 (corresponding to the Arg-Gly-Asp motif).
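The 1:1 bimolecular interaction model mentioned above follows the standard rate equation dR/dt = k_a·C·(R_max - R) - k_d·R. The sketch below evaluates the association phase in closed form; all rate constants and concentrations are invented for illustration, none come from the study.

```python
import numpy as np

# Standard 1:1 bimolecular SPR model: dR/dt = ka*C*(Rmax - R) - kd*R.
# All numerical values are illustrative assumptions.
ka, kd = 1.0e5, 1.0e-2    # association (M^-1 s^-1) and dissociation (s^-1)
Rmax, C = 100.0, 1.0e-6   # surface capacity (RU), analyte concentration (M)

t = np.linspace(0.0, 600.0, 601)      # association phase, seconds
kobs = ka * C + kd                    # observed rate constant
Req = ka * C * Rmax / kobs            # steady-state response
R = Req * (1.0 - np.exp(-kobs * t))   # closed-form association curve
print(Req, R[-1])                     # response approaches Req
```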
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of selecting the pixels that play the role of mixed sources is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on the performance of ICA and IFA. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications: signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by a MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
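As a concrete illustration of the linear mixing model with the two abundance constraints discussed above (nonnegativity and full additivity), here is a minimal fully constrained least-squares sketch; the endmember matrix, noise level, and abundances are synthetic, and this is not the chapter's EM-type algorithm.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic endmember signatures (bands x endmembers) and one pixel
# generated by the linear mixing model y = M a + n.
bands, p = 50, 3
M = rng.uniform(0.0, 1.0, size=(bands, p))
a_true = np.array([0.6, 0.3, 0.1])
y = M @ a_true + 0.01 * rng.standard_normal(bands)

# Fully constrained least squares: minimize ||y - M a||^2 subject to
# a >= 0 (nonnegativity) and sum(a) = 1 (full additivity).
res = minimize(
    lambda a: np.sum((y - M @ a) ** 2),
    x0=np.full(p, 1.0 / p),
    bounds=[(0.0, None)] * p,
    constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}],
)
print(res.x)  # estimated abundances, close to a_true
```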
Abstract:
The shortest tube of constant diameter that can form a given knot represents the 'ideal' form of the knot. Ideal knots provide an irreducible representation of the knot, and they have some intriguing mathematical and physical features, including a direct correspondence with the time-averaged shapes of knotted DNA molecules in solution. Here we describe the properties of ideal forms of composite knots: knots obtained by the sequential tying of two or more independent knots (called factor knots) on the same string. We find that the writhe (related to the handedness of crossing points) of composite knots is the sum of that of the ideal forms of the factor knots. By comparing ideal composite knots with simulated configurations of knotted, thermally fluctuating DNA, we conclude that the additivity of writhe applies also to randomly distorted configurations of composite knots and their corresponding factor knots. We show that composite knots with several factor knots may possess distinct structural isomers that can be interconverted only by loosening the knot.
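The central quantitative finding can be written compactly for a composite knot with factor knots K_1, ..., K_n:

```latex
% Writhe of an ideal composite knot equals the sum over its factors:
\mathrm{Wr}\bigl(K_1 \# K_2 \# \cdots \# K_n\bigr) = \sum_{i=1}^{n} \mathrm{Wr}(K_i)
```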
Abstract:
Price bubbles in an Arrow-Debreu valuation equilibrium in an infinite-time economy are a manifestation of the lack of countable additivity of the valuation of assets. In contrast, known examples of price bubbles in sequential equilibrium in infinite time cannot be attributed to the lack of countable additivity of valuation. In this paper we develop a theory of valuation of assets in sequential markets (with no uncertainty) and study the nature of price bubbles in light of this theory. We consider an operator, called the payoff pricing functional, that maps a sequence of payoffs to the minimum cost of an asset holding strategy that generates it. We show that the payoff pricing functional is linear and countably additive on the set of positive payoffs if and only if there is no Ponzi scheme, provided that there is no restriction on long positions in the assets. In the known examples of equilibrium price bubbles in sequential markets, valuation is linear and countably additive. The presence of a price bubble indicates that the asset's dividends can be purchased in sequential markets at a cost lower than the asset's price. We also present examples of equilibrium price bubbles in which valuation is nonlinear but not countably additive.
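In symbols (notation ours, only sketching the definition quoted above): for a payoff stream x, the payoff pricing functional is

```latex
% Payoff pricing functional (notation ours): Q maps a payoff stream x
% to the minimum cost of an asset holding strategy generating it.
Q(x) = \inf \bigl\{ c(h) : h \text{ a feasible holding strategy with payoff stream } x \bigr\}
```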
Abstract:
Glioma has been considered resistant to chemotherapy and radiation. Recently, concomitant and adjuvant chemoradiotherapy with temozolomide has become the standard treatment for newly diagnosed glioblastoma. Conversely, (neo-)adjuvant PCV (procarbazine, lomustine, vincristine) failed to improve survival in the more chemoresponsive tumor entities of anaplastic oligoastrocytoma and oligodendroglioma. Preclinical investigations suggest synergism or additivity of radiotherapy and temozolomide in glioma cell lines. Although the relative contributions of the concomitant and the adjuvant chemotherapy cannot be assessed separately, the early introduction of chemotherapy and its simultaneous administration with radiotherapy appear to be key to the improvement of outcome. Epigenetic inactivation of the DNA repair enzyme methylguanine methyltransferase (MGMT) seems to be the strongest predictive marker for outcome in patients treated with alkylating agent chemotherapy. Patients whose tumors do not have MGMT promoter methylation are less likely to benefit from the addition of temozolomide chemotherapy and require alternative treatment strategies. The predictive value of MGMT gene promoter methylation is being validated in ongoing trials aiming at overcoming this resistance by dose-dense continuous temozolomide administration or by combination with MGMT inhibitors. Understanding of molecular mechanisms allows for rational targeting of specific pathways of repair, signaling, and angiogenesis. The tyrosine kinase inhibitors vatalanib (PTK787) and vandetanib (ZD6474), the integrin inhibitor cilengitide, the monoclonal antibodies bevacizumab and cetuximab, the mammalian target of rapamycin inhibitors temsirolimus and everolimus, and the protein kinase C inhibitor enzastaurin, among other agents, are under clinical investigation, building on the established chemoradiotherapy regimen for newly diagnosed glioblastoma.
Abstract:
In this paper we study network structures in which the possibilities for cooperation are restricted and cannot be described by a cooperative game. The benefits of a group of players depend on how these players are internally connected. One way to represent this type of situation is the so-called reward function, which represents the profits obtainable by the total coalition if links can be used to coordinate agents' actions. The starting point of this paper is the work of Vilaseca et al., who characterized the reward function. We concentrate on situations where there are costs for establishing communication links. Given a reward function and a cost function, our aim is to analyze under what conditions it is possible to associate a cooperative game with them. We characterize the reward function in network structures with costs for establishing links by means of two conditions, component permanence and component additivity. Finally, an economic application is developed to illustrate the main theoretical result.
Abstract:
We reconsider the discrete version of the axiomatic cost-sharing model. We propose a condition of (informational) coherence requiring that not all informational refinements of a given problem be solved differently from the original problem. We prove that strictly coherent linear cost-sharing rules must be simple random-order rules.