94 results for Microdefect densities
Abstract:
The presence of subcentres cannot be captured by an exponential function. Cubic spline functions seem more appropriate for depicting the polycentricity pattern of modern urban systems. Using data from the Barcelona Metropolitan Region, two possible procedures for delimiting population subcentres are discussed: one takes an estimated derivative equal to zero, the other a density gradient equal to zero. It is argued that, when using a cubic spline function, a delimitation strategy based on derivatives is more appropriate than one based on gradients, because the estimated density can be negative in sections with very low densities and few observations, leading to sudden changes in estimated gradients. It is also argued that using a second derivative equal to zero as the criterion for subcentre delimitation captures a more restricted subcentre area than using a first derivative equal to zero. This methodology can also be used for intermediate ring delimitation.
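A minimal sketch of the derivative-based criterion, assuming binned density data and scipy's CubicSpline; the data values are invented for illustration and are not from the Barcelona study:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative data: population density (persons/km^2) binned by
# distance (km) from the central business district. Values are invented.
distance = np.array([0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20])
density  = np.array([12000, 9000, 6500, 5000, 4200, 4800,
                     5600, 4900, 3500, 2400, 1800])

# Fit a cubic spline to the density profile.
spline = CubicSpline(distance, density)

# Derivative-based criterion: candidate subcentres are interior points
# where the first derivative vanishes and the second derivative is
# negative (local maxima of the estimated density).
d1 = spline.derivative(1)
d2 = spline.derivative(2)
critical = d1.roots(extrapolate=False)
subcentres = [r for r in critical if r > 0 and d2(r) < 0]
print("candidate subcentre distances (km):", subcentres)

# The stricter criterion discussed above uses zeros of the second
# derivative (inflection points) to bound a narrower subcentre area.
inflections = d2.roots(extrapolate=False)
print("inflection points (km):", inflections)
```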
Abstract:
It has recently been found that a number of systems displaying crackling noise also show a remarkable behavior regarding the temporal occurrence of successive events versus their size: a scaling law for the probability distributions of waiting times as a function of a minimum size is fulfilled, signaling the existence in those systems of self-similarity in time and size. This property is also present in some non-crackling systems. Here, the uncommon character of the scaling law is illustrated with simple marked renewal processes, built by definition with no correlations. Whereas processes with a finite mean waiting time do not fulfill a scaling law in general and tend towards a Poisson process in the limit of very large sizes, processes without a finite mean tend to another class of distributions, characterized by double power-law waiting-time densities. This is somewhat reminiscent of the generalized central limit theorem. A model with short-range correlations is not able to escape from the attraction of those limit distributions. A discussion of open problems in the modeling of these properties is provided.
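A minimal simulation sketch of such an uncorrelated marked renewal process, with invented waiting-time and size distributions, checking how thresholded waiting times drift toward the Poisson (exponential) limit as the minimum size grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Uncorrelated marked renewal process: i.i.d. waiting times with a
# finite mean, and i.i.d. power-law "sizes" (marks), by construction
# independent of the waiting times.
n = 200_000
waits = rng.gamma(shape=0.5, scale=2.0, size=n)   # finite-mean waiting times
sizes = (1.0 - rng.random(n)) ** (-1.0 / 1.5)     # Pareto(alpha=1.5) sizes

times = np.cumsum(waits)

# Waiting times between events exceeding a minimum size s_min.
def thresholded_waits(times, sizes, s_min):
    t = times[sizes >= s_min]
    return np.diff(t)

# Rescale by the mean; for finite-mean waiting times the rescaled
# distributions should drift toward an exponential (Poisson limit) as
# s_min grows, rather than collapse onto a single scaling function.
for s_min in (1, 5, 20, 80):
    w = thresholded_waits(times, sizes, s_min)
    w_scaled = w / w.mean()
    print(f"s_min={s_min:>3}: n={w.size:>7}, "
          f"CV={w_scaled.std():.2f} (CV=1 for exponential)")
```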
Implementation of IPM programs on European greenhouse tomato production areas: Tools and constraints
Abstract:
Whiteflies and whitefly-transmitted viruses are some of the major constraints on European tomato production. The main objectives of this study were to: identify where and why whiteflies are a major limitation on tomato crops; collect information about whiteflies and associated viruses; determine the available management tools; and identify key knowledge gaps and research priorities. This study was conducted within the framework of ENDURE (European Network for Durable Exploitation of Crop Protection Strategies). Two whitefly species are the main pests of tomato in Europe: Bemisia tabaci and Trialeurodes vaporariorum. Trialeurodes vaporariorum is widespread in all areas where the greenhouse industry is present, and B. tabaci has, since the early 1990s, invaded all subtropical and tropical areas. Biotypes B and Q of B. tabaci are widespread and especially problematic. Other key tomato pests are Aculops lycopersici, Helicoverpa armigera, Frankliniella occidentalis, and leaf miners. Tomato crops are particularly susceptible to viruses causing Tomato yellow leaf curl disease (TYLCD). High incidences of this disease are associated with high pressure of its vector, B. tabaci. The ranked importance of B. tabaci established in this study correlates with the levels of insecticide use, showing B. tabaci to be one of the principal drivers behind chemical control. Confirmed cases of resistance to almost all insecticides have been reported. Integrated Pest Management based on biological control (IPM-BC) is applied in all the surveyed regions and was identified as the strategy using the fewest insecticides. Other IPM components include greenhouse netting and TYLCD-tolerant tomato cultivars. Sampling techniques differ between regions; decisions are generally based upon whitefly densities and do not relate to control strategies or growing cycles. For population monitoring and control, whitefly species are always identified. In Europe, IPM-BC is the recommended strategy for sustainable tomato production. The IPM-BC approach is mainly based on inoculative releases of the parasitoids Eretmocerus mundus and Encarsia formosa and/or the polyphagous predators Macrolophus caliginosus and Nesidiocoris tenuis. However, some limitations to wider implementation have been identified: lack of biological solutions for some pests, cost of beneficials, low farmer confidence, cost of technical advice, and low pest injury thresholds. Research priorities to promote and improve IPM-BC are proposed in the following domains: (i) emergence and invasion of new whitefly-transmitted viruses; (ii) relevance of B. tabaci biotypes regarding insecticide resistance; (iii) biochemistry and genetics of plant resistance; (iv) economic thresholds and sampling techniques of whiteflies for decision making; and (v) conservation and management of native whitefly natural enemies and improvement of biological control of other tomato pests.
Abstract:
In this work we studied toxicity in clams from the Gulf of Gabes, Tunisia (Southern Mediterranean). Samples from two stations (M2 and S6) were collected monthly from January 2009 to September 2010 and analyzed by the official control method, the mouse bioassay (MBA), for lipophilic toxins. All samples were also analyzed with the LC-MS/MS method for the determination of lipophilic toxins, namely: the okadaic acid group, pectenotoxins, yessotoxins, azaspiracids, spirolides, and gymnodimines (GYMs). The results showed the prevalence of GYMs, since it was the only toxin group identified in these samples, with a maximum of 2,136 μg GYM-A kg-1 (February 2009 at M2). Furthermore, GYMs showed persistence in the area, with only one blank sample below the limit of detection. Interestingly, this blank sample was found in June 2009 after an important toxic episode, which supports the recent findings regarding the high detoxification capability of clams, much faster than that reported for oysters. In comparison, good agreement was found among the MBA results, the LD50 value of 80-100 μg kg-1 reported for GYM-A, and the quantitative results provided by LC-MS/MS. Contrary to what was previously reported for Tunisian clams, we unambiguously identified and quantified by LC-MS/MS the isomers GYM-B/C in most samples. Phytoplankton identification and enumeration of Karenia selliformis usually showed higher densities at site M2 than at S6, as expected given the toxin results, although additional results would be required to improve the correlation between K. selliformis densities and quantitative toxin results. The prevalence and persistence of GYMs in this area at high levels strongly encourage the evaluation of the chronic toxic effects of GYMs. This is especially important taking into account that relatively large quantities of GYMs could be released onto the market due to the replacement of the official control method from the mouse bioassay to LC-MS/MS for lipophilic toxins (Regulation (EU) No 15/2011), and the lack of regulation for this group of toxins.
Abstract:
Planners in public and private institutions would like coherent forecasts of the components of age-specific mortality, such as causes of death. This has been difficult to achieve because the relative values of the forecast components often fail to behave in a way that is coherent with historical experience. In addition, when the group forecasts are combined the result is often incompatible with an all-groups forecast. It has been shown that cause-specific mortality forecasts are pessimistic when compared with all-cause forecasts (Wilmoth, 1995). This paper abandons the conventional approach of using log mortality rates and forecasts the density of deaths in the life table. Since these values obey a unit sum constraint for both conventional single-decrement life tables (only one absorbing state) and multiple-decrement tables (more than one absorbing state), they are intrinsically relative rather than absolute values across decrements as well as ages. Using the methods of Compositional Data Analysis pioneered by Aitchison (1986), death densities are transformed into the real space so that the full range of multivariate statistics can be applied, then back-transformed to positive values so that the unit sum constraint is honoured. The structure of the best-known single-decrement mortality-rate forecasting model, devised by Lee and Carter (1992), is expressed in compositional form and the results from the two models are compared. The compositional model is extended to a multiple-decrement form and used to forecast mortality by cause of death for Japan.
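A compact sketch of the compositional pipeline described above, on synthetic data: clr-transform the death densities, fit a Lee-Carter-style rank-1 structure, extrapolate, and back-transform so the unit-sum constraint holds. All names and parameters are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic life-table death densities d(x, t): ages x in rows, years t
# in columns, each column summing to 1 (the unit-sum constraint).
ages, years = 20, 30
raw = rng.gamma(2.0, 1.0, size=(ages, years)) + 1e-6
D = raw / raw.sum(axis=0)

# Centred log-ratio (clr) transform maps each composition to real space.
clr = np.log(D) - np.log(D).mean(axis=0)

# Lee-Carter-style structure on clr coordinates: a_x + b_x * k_t,
# with a_x the age mean and (b_x, k_t) from a rank-1 SVD.
a = clr.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(clr - a, full_matrices=False)
b, k = U[:, 0], s[0] * Vt[0]

# Forecast k_t by a random walk with drift, then back-transform
# (inverse clr = softmax) so forecasts honour the unit-sum constraint.
drift = np.diff(k).mean()
k_fut = k[-1] + drift * np.arange(1, 11)
clr_fut = a + np.outer(b, k_fut)
D_fut = np.exp(clr_fut) / np.exp(clr_fut).sum(axis=0)
assert np.allclose(D_fut.sum(axis=0), 1.0)  # still valid densities
```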
Abstract:
Compositional data analysis motivated the introduction of a complete Euclidean structure in the simplex of D parts. This was based on the early work of J. Aitchison (1986) and completed recently, when the Aitchison distance in the simplex was associated with an inner product and orthonormal bases were identified (Aitchison and others, 2002; Egozcue and others, 2003). A partition of the support of a random variable generates a composition by assigning the probability of each interval to a part of the composition. One can imagine that the partition can be refined, so that the probability density would represent a kind of continuous composition of probabilities in a simplex of infinitely many parts. This intuitive idea leads to a Hilbert space of probability densities, obtained by generalizing the Aitchison geometry for compositions in the simplex to the set of probability densities.
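A small numeric sketch of this construction: partition the support of two probability densities, treat the interval probabilities as compositions, and compute the Aitchison inner product and distance via clr coordinates. The partition and distributions are illustrative choices:

```python
import numpy as np
from scipy.stats import norm

def clr(x):
    """Centred log-ratio coordinates of a composition x (positive parts)."""
    lx = np.log(x)
    return lx - lx.mean()

def aitchison_inner(x, y):
    return np.dot(clr(x), clr(y))

def aitchison_dist(x, y):
    return np.linalg.norm(clr(x) - clr(y))

# Partition the support into D intervals; assign each interval's
# probability to a part of the composition.
edges = np.linspace(-6, 6, 65)
p = np.diff(norm.cdf(edges, loc=0.0, scale=1.0))   # N(0, 1)
q = np.diff(norm.cdf(edges, loc=0.5, scale=1.2))   # N(0.5, 1.2)

print("Aitchison distance:", aitchison_dist(p, q))
print("inner product:     ", aitchison_inner(p, q))
# Refining the partition (more, smaller intervals) makes these discrete
# quantities approximate the geometry on the densities themselves.
```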
Abstract:
Quantum Molecular Similarity Measures (QMSM) require the maximization of the overlap of the electron densities of the molecules being compared. In this work, a maximization algorithm for QMSM is presented which is global in the limit of electron densities deformed into Dirac delta functions. From this algorithm, the equivalent algorithm for undeformed densities is derived.
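A toy illustration of this kind of overlap maximization (not the paper's global algorithm): the sketch below aligns two one-dimensional Gaussian model densities by maximizing their overlap integral over a rigid shift. All functions and parameters are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy one-dimensional "electron densities": sums of normalized Gaussians.
def gaussian(x, mu, sigma):
    return np.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-10, 10, 4001)
rho_a = 0.7 * gaussian(x, -1.0, 0.6) + 0.3 * gaussian(x, 1.5, 0.9)
rho_b = 0.7 * gaussian(x, 0.4, 0.6) + 0.3 * gaussian(x, 2.9, 0.9)

# Overlap similarity measure Z_AB(t) = integral of rho_A(x) * rho_B(x - t),
# evaluated on the grid for a rigid translation t of density B.
def overlap(t):
    shifted = np.interp(x, x + t, rho_b, left=0.0, right=0.0)
    return np.trapz(rho_a * shifted, x)

# Maximize the overlap over the translation (minimize its negative).
res = minimize_scalar(lambda t: -overlap(t), bounds=(-5, 5), method="bounded")
print(f"optimal shift t = {res.x:.3f}, max overlap = {-res.fun:.4f}")
```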
Abstract:
In this work, the use of Fermi hole electron density functions is presented as a way to enhance the role played by a specific molecular region, considered responsible for the molecular reactivity, while keeping the size of the original density function. These densities are used to compute quantum molecular self-similarity measures and are presented as an alternative to the use of isolated molecular fragments in structure-property relationship studies. The work is complemented with a practical example in which the molecular self-similarity computed from modified densities is correlated with the energy of an isodesmic reaction.
Abstract:
The contributions of the correlated and uncorrelated components of the electron-pair density to atomic and molecular intracule I(r) and extracule E(R) densities and their Laplacian functions ∇²I(r) and ∇²E(R) are analyzed at the Hartree-Fock (HF) and configuration interaction (CI) levels of theory. The topologies of the uncorrelated components of these functions can be rationalized in terms of the corresponding one-electron densities. In contrast, by analyzing the correlated components of I(r) and E(R), namely IC(r) and EC(R), the effect of electron Fermi and Coulomb correlation can be assessed at the HF and CI levels of theory. Moreover, the contribution of Coulomb correlation can be isolated by means of difference maps between IC(r) and EC(R) distributions calculated at the two levels of theory. As application examples, the He, Ne, and Ar atomic series, the C2(2-), N2, O2(2+) molecular series, and the C2H4 molecule have been investigated. For these atoms and molecules, it is found that Fermi correlation accounts for the main characteristics of IC(r) and EC(R), with Coulomb correlation slightly increasing the locality of these functions at the CI level of theory. Furthermore, IC(r), EC(R), and the associated Laplacian functions reveal the short-ranged nature and high isotropy of Fermi and Coulomb correlation in atoms and molecules.
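A minimal sketch of the uncorrelated case described above, assuming isotropic Gaussian one-electron densities (stand-ins, not the HF or CI densities of the paper): sampling two independent electrons shows how the intracule and extracule variables follow directly from the one-electron density:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two independent "electrons" sampled from isotropic 3D Gaussian
# one-electron densities (an uncorrelated pair density by construction).
n = 100_000
r1 = rng.normal(0.0, 1.0, size=(n, 3))
r2 = rng.normal(0.0, 1.0, size=(n, 3))

# Intracule variable r = r1 - r2; extracule variable R = (r1 + r2) / 2.
u = np.linalg.norm(r1 - r2, axis=1)
v = np.linalg.norm((r1 + r2) / 2, axis=1)

# For independent N(0, s^2) electrons, each component of r is N(0, 2 s^2)
# and each component of R is N(0, s^2 / 2), so the radial distributions
# are Maxwell-like: the uncorrelated I(r) and E(R) are fixed entirely by
# the one-electron density, as the abstract states.
print("mean interelectronic distance:", u.mean())  # ~2.26 for s = 1
print("mean centre-of-mass radius:  ", v.mean())   # half of the above
```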
Abstract:
A topological analysis of intracule and extracule densities and their Laplacians, computed within the Hartree-Fock approximation, is presented. The analysis of the density distributions reveals that, among all possible electron-electron interactions in atoms and between atoms in molecules, only very few are rigorously located as local maxima. In contrast, they are clearly identified as local minima in the topology of the Laplacian maps. The conceptually different interpretations of intracule and extracule maps are also discussed in detail. An application example to the C2H2, C2H4, and C2H6 series of molecules is presented.
Abstract:
The effect of basis set superposition error (BSSE) on molecular complexes is analyzed. BSSE causes artificial delocalizations which modify the first-order electron density. The mechanism of this effect is assessed for the hydrogen fluoride dimer with several basis sets. The BSSE-corrected first-order electron density is obtained using the chemical Hamiltonian approach versions of the Roothaan and Kohn-Sham equations. The corrected densities are compared to uncorrected densities on the basis of the charge density critical points. Contour difference maps between BSSE-corrected and uncorrected densities on the molecular plane are also plotted to gain insight into the effects of the BSSE correction on the electron density.
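A hedged sketch of the contour-difference-map idea only: the densities below are analytic stand-ins (not chemical-Hamiltonian-approach results), with a small artificial delocalization added by hand to mimic the BSSE effect described above:

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in densities on the molecular plane: the "uncorrected" density
# carries a small artificial pile-up of density between the monomers,
# mimicking the BSSE-induced delocalization.
x, y = np.meshgrid(np.linspace(-4, 4, 200), np.linspace(-3, 3, 150))

def blob(x0, y0, w, h=1.0):
    return h * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / w)

rho_corrected   = blob(-1.5, 0, 0.5) + blob(1.5, 0, 0.5)
rho_uncorrected = rho_corrected + 0.05 * blob(0.0, 0, 1.5)

# Contour difference map: positive regions mark where the uncorrected
# calculation accumulates spurious density.
diff = rho_uncorrected - rho_corrected
cs = plt.contour(x, y, diff, levels=10)
plt.clabel(cs, inline=True, fontsize=7)
plt.title("Density difference (uncorrected - corrected), toy model")
plt.show()
```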
Abstract:
A procedure based on quantum molecular similarity measures (QMSM) has been used to compare electron densities obtained from conventional ab initio and density functional methodologies at their respective optimized geometries. This method has been applied to a series of small molecules which have experimentally known properties and molecular bonds of diverse degrees of ionicity and covalency. Results show that in most cases the electron densities obtained from density functional methodologies are of a quality similar to that of post-Hartree-Fock generalized densities. For molecules where the Hartree-Fock methodology yields erroneous results, the density functional methodology is shown usually to yield more accurate densities than those provided by second-order Møller-Plesset perturbation theory.
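A minimal sketch of a QMSM-style comparison, assuming grid-sampled densities and using the Carbó similarity index; the densities are toy stand-ins, not ab initio or DFT results:

```python
import numpy as np

# Toy densities on a 1D grid standing in for ab initio and DFT electron
# densities of the same molecule.
x = np.linspace(-8, 8, 2001)
rho_ai  = np.exp(-np.abs(x)) * (1 + 0.10 * np.cos(x))   # "ab initio"
rho_dft = np.exp(-np.abs(x)) * (1 + 0.08 * np.cos(x))   # "DFT"

def Z(a, b):
    """Overlap-like similarity measure Z_AB = integral of rho_A * rho_B."""
    return np.trapz(a * b, x)

# Carbó similarity index: C_AB = Z_AB / sqrt(Z_AA * Z_BB); it equals 1
# for proportional densities and decreases as the densities differ.
C = Z(rho_ai, rho_dft) / np.sqrt(Z(rho_ai, rho_ai) * Z(rho_dft, rho_dft))
print(f"Carbó index C_AB = {C:.6f}")
```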
Abstract:
In the present paper we discuss and compare two different energy decomposition schemes: Mayer's Hartree-Fock energy decomposition into diatomic and monoatomic contributions [Chem. Phys. Lett. 382, 265 (2003)], and the Ziegler-Rauk dissociation energy decomposition [Inorg. Chem. 18, 1558 (1979)]. The Ziegler-Rauk scheme is based on a separation of a molecule into fragments, while Mayer's scheme can be used in cases where a fragmentation of the system into clearly separable parts is not possible. In the Mayer scheme, the density of a free atom is deformed to give the one-atom Mulliken density, which subsequently interacts to give rise to the diatomic interaction energy. We give a detailed analysis of the diatomic energy contributions in the Mayer scheme and a close look at the one-atom Mulliken densities. The Mulliken density ρA has a single large maximum around the nuclear position of atom A, but exhibits slightly negative values in the vicinity of neighboring atoms. The main connecting point between the two analysis schemes is the electrostatic energy. Both decomposition schemes utilize the same electrostatic energy expression, but differ in how fragment densities are defined. In the Mayer scheme, the electrostatic component originates from the interaction of the Mulliken densities, while in the Ziegler-Rauk scheme, the undisturbed fragment densities interact. The values of the electrostatic energy resulting from the two schemes differ significantly but are typically of the same order of magnitude. Both methods are useful and complementary, since Mayer's decomposition focuses on the energy of the finally formed molecule, whereas the Ziegler-Rauk scheme describes the bond formation starting from undeformed fragment densities.
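As a schematic illustration of the shared expression, the Coulomb interaction of two fragments can be written in terms of their total (electronic plus nuclear) charge densities; this is a generic textbook form, not copied from either paper, and the two schemes differ only in which fragment densities are inserted into it:

```latex
% Generic electrostatic interaction between fragments A and B, where
% \rho^{\mathrm{tot}} denotes the total (electronic + nuclear) charge
% density of a fragment; both decomposition schemes evaluate this same
% expression with their own definitions of the fragment densities.
E_{\mathrm{elst}} \;=\;
  \iint \frac{\rho_A^{\mathrm{tot}}(\mathbf{r}_1)\,
              \rho_B^{\mathrm{tot}}(\mathbf{r}_2)}
             {\lvert \mathbf{r}_1 - \mathbf{r}_2 \rvert}
  \,\mathrm{d}\mathbf{r}_1 \,\mathrm{d}\mathbf{r}_2
```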
Abstract:
The problem of jointly estimating the number, the identities, and the data of active users in a time-varying multiuser environment was examined in a companion paper (IEEE Trans. Information Theory, vol. 53, no. 9, September 2007), at whose core was the use of the theory of finite random sets on countable spaces. Here we extend that theory to encompass the more general problem of estimating unknown continuous parameters of the active-user signals. This problem is solved here by applying the theory of random finite sets constructed on hybrid spaces. We do so by deriving Bayesian recursions that describe the evolution with time of the a posteriori densities of the unknown parameters and data. Unlike in the above-cited paper, wherein one could evaluate the exact multiuser set posterior density, here the continuous-parameter Bayesian recursions do not admit closed-form expressions. To circumvent this difficulty, we develop numerical approximations for the receivers that are based on Sequential Monte Carlo (SMC) methods ("particle filtering"). Simulation results, referring to a code-division multiple-access (CDMA) system, are presented to illustrate the theory.
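For readers unfamiliar with SMC, a minimal bootstrap particle filter on a scalar state-space model is sketched below. The model, noise levels, and particle count are invented; the receivers in the abstract replace this scalar state with a random finite set of user parameters, but the SMC mechanics (propagate, weight, resample) are the same:

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal bootstrap particle filter for a scalar state-space model:
#   x_t = 0.9 * x_{t-1} + process noise,  y_t = x_t + measurement noise.
T, N = 50, 1000
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal(0, 0.5)
    y[t] = x_true[t] + rng.normal(0, 0.8)

particles = rng.normal(0, 1, N)
estimates = np.zeros(T)
for t in range(1, T):
    # Propagate particles through the transition density (prediction).
    particles = 0.9 * particles + rng.normal(0, 0.5, N)
    # Weight by the likelihood of the new observation (update).
    w = np.exp(-0.5 * ((y[t] - particles) / 0.8) ** 2)
    w /= w.sum()
    estimates[t] = np.dot(w, particles)
    # Resample to avoid weight degeneracy.
    particles = particles[rng.choice(N, size=N, p=w)]

print("RMSE of filtered estimates:", np.sqrt(np.mean((estimates - x_true) ** 2)))
```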
Abstract:
We introduce a variation of the proof for weak approximations that is suitable for studying the densities of stochastic processes which are evaluations of the flow generated by a stochastic differential equation on a random variable that may be anticipating. Our main assumption is that the process and the initial random variable have to be smooth in the Malliavin sense. Furthermore, if the inverse of the Malliavin covariance matrix associated with the process under consideration is sufficiently integrable, then approximations for densities and distributions can also be achieved. We apply these ideas to the case of stochastic differential equations with boundary conditions and the composition of two diffusions.
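A toy illustration of density approximation for an SDE flow, assuming an Euler-Maruyama scheme and a Gaussian kernel density estimate; the initial condition here is non-anticipating, unlike the general setting of the paper, and the Malliavin-calculus analysis that justifies such approximations is not reproduced:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)

# Euler-Maruyama weak approximation of dX = -X dt + 0.7 dW, X_0 ~ N(1, 0.1),
# a simple stand-in for an evaluation of the flow at a random variable.
n_paths, n_steps, T = 50_000, 200, 1.0
dt = T / n_steps
X = rng.normal(1.0, np.sqrt(0.1), n_paths)   # random initial condition
for _ in range(n_steps):
    X = X - X * dt + 0.7 * np.sqrt(dt) * rng.normal(size=n_paths)

# Kernel estimate of the terminal density; for this linear SDE the exact
# law is Gaussian, so the estimate can be checked against a closed form.
kde = gaussian_kde(X)
grid = np.linspace(-2, 3, 7)
print(np.round(kde(grid), 4))
print("sample mean/var:", X.mean().round(4), X.var().round(4))
# Exact: mean = exp(-T), var = 0.1*exp(-2T) + 0.7**2 * (1 - exp(-2T)) / 2
```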