881 results for exponential-convexity
Abstract:
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were about half as long (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous–Paleogene (K–Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes. Keywords: haldanes, biological time, scaling, pedomorphosis
Abstract:
In this paper, various types of fault detection methods for fuel cells are compared, for example those that use a model-based approach, a data-driven approach, or a combination of the two. The potential advantages and drawbacks of each method are discussed and comparisons between methods are made. In particular, classification algorithms are investigated, which separate a data set into classes or clusters based on some prior knowledge or measure of similarity. Specifically, the application of classification methods to vectors of currents reconstructed by magnetic tomography, or directly to vectors of magnetic field measurements, is explored. Bases are simulated using the finite integration technique (FIT) and regularization techniques are employed to overcome ill-posedness. Fisher's linear discriminant is used to illustrate these concepts. Numerical experiments show that the ill-posedness of the magnetic tomography problem is a part of the classification problem on magnetic field measurements as well. This is independent of the particular working mode of the cell but influenced by the type of faulty behavior that is studied. The numerical results demonstrate the ill-posedness through the exponential decay behavior of the singular values for three examples of fault classes.
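The Fisher linear discriminant step can be illustrated with a minimal, self-contained sketch. The data here are synthetic Gaussian stand-ins for "healthy" and "faulty" measurement vectors; the class means, dimensions, and threshold rule are illustrative assumptions, not the paper's FIT-simulated bases.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic classes standing in for "healthy" and "faulty" field vectors.
healthy = rng.normal(loc=0.0, scale=1.0, size=(200, 5))
faulty = rng.normal(loc=1.5, scale=1.0, size=(200, 5))

# Fisher's linear discriminant direction: w = Sw^{-1} (mu1 - mu0),
# with Sw the pooled within-class covariance (proportional to the scatter matrix).
mu0, mu1 = healthy.mean(axis=0), faulty.mean(axis=0)
Sw = np.cov(healthy, rowvar=False) + np.cov(faulty, rowvar=False)
w = np.linalg.solve(Sw, mu1 - mu0)

# Project onto w and threshold at the midpoint of the projected class means.
threshold = 0.5 * ((healthy @ w).mean() + (faulty @ w).mean())
pred_faulty = np.concatenate([healthy @ w, faulty @ w]) > threshold
labels = np.concatenate([np.zeros(200, bool), np.ones(200, bool)])
accuracy = (pred_faulty == labels).mean()
```

With well-separated classes the projected scores split cleanly around the midpoint threshold; real fault data would of course be far less tidy.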
Abstract:
The bifidobacterial β-galactosidase (BbgIV) was produced in E. coli DH5α at 37 and 30 °C in a 5 L bioreactor under varied conditions of dissolved oxygen (dO2) and pH. The yield of soluble BbgIV was significantly (P < 0.05) increased once the dO2 dropped to 0–2% and remained at such low values during the exponential phase. Limited dO2 significantly (P < 0.05) increased the plasmid copy number and decreased the cells' growth rate. Consequently, the BbgIV yield increased to its maximum (71–75 mg per g dry cell weight), which represented 20–25% of the total soluble proteins in the cells. In addition, the specific activity and catalytic efficiency of BbgIV were significantly (P < 0.05) enhanced under limited dO2 conditions. This was concomitant with a change in the enzyme secondary structure, suggesting a link between the enzyme structure and function. The knowledge generated from this work is very important for producing BbgIV as a biocatalyst for the development of a cost-effective process for the synthesis of prebiotic galactooligosaccharides from lactose.
Abstract:
Background: Psychotic phenomena appear to form a continuum with normal experience and beliefs, and may build on common emotional interpersonal concerns. Aims: We tested predictions that paranoid ideation is exponentially distributed and hierarchically arranged in the general population, and that persecutory ideas build on more common cognitions of mistrust, interpersonal sensitivity and ideas of reference. Method: Items were chosen from the Structured Clinical Interview for DSM-IV Axis II Disorders (SCID-II) questionnaire and the Psychosis Screening Questionnaire in the second British National Survey of Psychiatric Morbidity (n = 8580), to test a putative hierarchy of paranoid development using confirmatory factor analysis, latent class analysis and factor mixture modelling analysis. Results: Different types of paranoid ideation ranged in frequency from less than 2% to nearly 30%. Total scores on these items followed an almost perfect exponential distribution (r = 0.99). Our four a priori first-order factors were corroborated (interpersonal sensitivity; mistrust; ideas of reference; ideas of persecution). These mapped onto four classes of individual respondents: a rare, severe, persecutory class with high endorsement of all item factors, including persecutory ideation; a quasi-normal class with infrequent endorsement of interpersonal sensitivity, mistrust and ideas of reference, and no ideas of persecution; and two intermediate classes, characterised respectively by relatively high endorsement of items relating to mistrust and to ideas of reference. Conclusions: The paranoia continuum has implications for the aetiology, mechanisms and treatment of psychotic disorders, while confirming the lack of a clear distinction from normal experiences and processes.
Abstract:
In this paper we propose and analyze a hybrid $hp$ boundary element method for the solution of problems of high frequency acoustic scattering by sound-soft convex polygons, in which the approximation space is enriched with oscillatory basis functions which efficiently capture the high frequency asymptotics of the solution. We demonstrate, both theoretically and via numerical examples, exponential convergence with respect to the order of the polynomials, moreover providing rigorous error estimates for our approximations to the solution and to the far field pattern, in which the dependence on the frequency of all constants is explicit. Importantly, these estimates prove that, to achieve any desired accuracy in the computation of these quantities, it is sufficient to increase the number of degrees of freedom in proportion to the logarithm of the frequency as the frequency increases, in contrast to the at least linear growth required by conventional methods.
Abstract:
We study the influence of the intrinsic curvature on the large-time behaviour of the heat equation in a tubular neighbourhood of an unbounded geodesic in a two-dimensional Riemannian manifold. Since we consider killing boundary conditions, there is always an exponential-type decay for the heat semigroup. We show that this exponential-type decay is slower for positively curved manifolds compared with the flat case. As the main result, we establish a sharp extra polynomial-type decay for the heat semigroup on negatively curved manifolds compared with the flat case. The proof employs the existence of Hardy-type inequalities for the Dirichlet Laplacian in the tubular neighbourhoods on negatively curved manifolds and the method of self-similar variables and weighted Sobolev spaces for the heat equation.
Abstract:
In this article, we investigate how the choice of the attenuation factor in an extended version of Katz centrality influences the centrality of the nodes in evolving communication networks. For given snapshots of a network, observed over a period of time, recently developed communicability indices aim to identify the best broadcasters and listeners (receivers) in the network. Here we explore the attenuation factor constraint, in relation to the spectral radius (the largest eigenvalue) of the network at any point in time and its computation in the case of large networks. We compare three different communicability measures: standard, exponential, and relaxed (where the spectral radius bound on the attenuation factor is relaxed and the adjacency matrix is normalised, in order to maintain the convergence of the measure). Furthermore, using a vitality-based measure of both standard and relaxed communicability indices, we look at ways of establishing the most important individuals for the broadcasting and receiving of messages related to community bridging roles. We compare those measures with the scores produced by an iterative version of the PageRank algorithm and illustrate our findings with three examples of real-life evolving networks: the MIT reality mining data set, consisting of daily communications between 106 individuals over the period of one year; a UK Twitter mentions network, constructed from the direct tweets between 12.4k individuals during one week; and a subset of the Enron email data set.
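The attenuation-factor constraint described above can be sketched in a few lines: a Katz-type (resolvent) centrality converges only when the attenuation factor lies below the reciprocal of the spectral radius. The toy adjacency matrix and the normalisation used for the "relaxed" variant are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

# Toy snapshot of an undirected 4-node communication network.
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 1.],
              [1., 1., 0., 0.],
              [0., 1., 0., 0.]])

# The resolvent series converges only if the attenuation factor a
# satisfies a < 1/rho(A), where rho(A) is the spectral radius.
rho = float(max(abs(np.linalg.eigvals(A))))
a = 0.9 / rho                      # safely inside the convergence region

# Standard Katz centrality: x = (I - a*A)^{-1} * 1
katz = np.linalg.solve(np.eye(4) - a * A, np.ones(4))

# "Relaxed" variant sketched here by normalising A by its spectral radius
# before applying a fixed attenuation (an assumption for illustration).
katz_relaxed = np.linalg.solve(np.eye(4) - 0.5 * (A / rho), np.ones(4))
```

Node 1, the best-connected node, receives the highest score under both variants; in an evolving network the same computation would be repeated per snapshot.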
Abstract:
We study the solutions of the Smoluchowski coagulation equation with a regularization term which removes clusters from the system when their mass exceeds a specified cutoff size, M. We focus primarily on collision kernels which would exhibit an instantaneous gelation transition in the absence of any regularization. Numerical simulations demonstrate that for such kernels with monodisperse initial data, the regularized gelation time decreases as M increases, consistent with the expectation that the gelation time is zero in the unregularized system. This decrease appears to be a logarithmically slow function of M, indicating that instantaneously gelling kernels may still be justifiable as physical models despite the fact that they are highly singular in the absence of a cutoff. We also study the case when a source of monomers is introduced in the regularized system. In this case a stationary state is reached. We present a complete analytic description of this regularized stationary state for the model kernel, K(m1, m2) = max{m1, m2}^ν, which gels instantaneously when M → ∞ if ν > 1. The stationary cluster size distribution decays as a stretched exponential for small cluster sizes and crosses over to a power-law decay with exponent ν for large cluster sizes. The total particle density in the stationary state slowly vanishes as [(ν − 1) log M]^(−1/2) when M → ∞. The approach to the stationary state is nontrivial: oscillations about the stationary state emerge from the interplay between the monomer injection and the cutoff, M, and these decay very slowly when M is large. A quantitative analysis of these oscillations is provided for the addition model, which describes the situation in which clusters can only grow by absorbing monomers.
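A minimal forward-Euler discretisation of the regularized Smoluchowski equation conveys the cutoff mechanism: clusters whose merged mass would exceed M are simply removed from the system. The kernel exponent, cutoff, and step sizes below are arbitrary illustrative choices, not values from the study.

```python
import numpy as np

nu, M = 2.0, 30            # kernel exponent (nu > 1) and cutoff mass; illustrative
dt, steps = 2e-4, 1000     # forward-Euler step size and number of steps

c = np.zeros(M + 1)        # c[m] = number density of mass-m clusters
c[1] = 1.0                 # monodisperse initial data

for _ in range(steps):
    dc = np.zeros_like(c)
    for i in range(1, M + 1):
        for j in range(i, M + 1):
            # Collision rate of an (i, j) pair under K = max{i, j}^nu;
            # factor 1/2 avoids double-counting identical masses.
            rate = max(i, j) ** nu * c[i] * c[j] * (0.5 if i == j else 1.0)
            dc[i] -= rate
            dc[j] -= rate
            if i + j <= M:
                dc[i + j] += rate   # merged cluster stays in the system
            # else: the merged cluster exceeds M and is removed (regularization)
    c += dt * dc

density = c.sum()                    # total cluster density, decreasing in time
mass = (np.arange(M + 1) * c).sum()  # conserved except for removal above M
```

Every coagulation event lowers the cluster count, so the density decreases monotonically, while mass can only leak out through the cutoff.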
Abstract:
Protons and electrons are exploited in different natural charge transfer processes. Both types of charge carriers could therefore be responsible for charge transport in biomimetic self-assembled peptide nanostructures. The relative contribution of each type of charge carrier is studied in the present work for fibrils self-assembled from amyloid-β derived peptide molecules, in which two non-natural thiophene-based amino acids are included. It is shown that under low humidity conditions both electrons and protons contribute to the conduction, with a current ratio of 1:2, respectively, while at higher relative humidity proton transport dominates the conductance. This hybrid conduction behavior leads to a bimodal exponential dependence of the conductance on the relative humidity. Furthermore, in both cases the conductance is shown to be affected by the peptide folding state over the entire relative humidity range. This unique hybrid conductivity behavior makes self-assembled peptide nanostructures powerful building blocks for the construction of electric devices that could use either or both types of charge carriers for their function.
Abstract:
Various complex oscillatory processes are involved in the generation of the motor command. The temporal dynamics of these processes were studied for movement detection from single-trial electroencephalogram (EEG). Autocorrelation analysis was performed on the EEG signals to find robust markers of movement detection. The evolution of the autocorrelation function was characterised via the relaxation time of the autocorrelation by exponential curve fitting. It was observed that the decay constant of the exponential curve increased during movement, indicating that the autocorrelation function decays slowly during motor execution. Significant differences were observed between movement and no-movement tasks. Additionally, a linear discriminant analysis (LDA) classifier was used to identify movement trials with a peak accuracy of 74%.
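The relaxation-time marker can be sketched on synthetic data: an AR(1) process has an exactly exponential autocorrelation, so fitting exp(-k/τ) to the empirical ACF should recover a known relaxation time. The AR coefficient, series length, and lag range are illustrative assumptions, not parameters from the EEG study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a single-trial EEG segment: an AR(1) process,
# whose true autocorrelation decays as phi^k, i.e. exponentially.
phi, n = 0.95, 20000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# Empirical autocorrelation function at lags 0..20.
x = x - x.mean()
acf = np.array([np.dot(x[:n - k], x[k:]) / np.dot(x, x) for k in range(21)])

# Fit acf(k) ~ exp(-k / tau) by linear regression on log(acf).
lags = np.arange(1, 21)
tau = -1.0 / np.polyfit(lags, np.log(acf[1:]), 1)[0]

# For an AR(1) process the true relaxation time is -1/log(phi) (~19.5 here).
tau_true = -1.0 / np.log(phi)
```

A larger fitted decay constant τ means the ACF decays more slowly, which is exactly the movement signature the abstract describes.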
Abstract:
Binding to bovine serum albumin of monomeric (vescalagin and pedunculagin) and dimeric ellagitannins (roburin A, oenothein B, and gemin A) was investigated by isothermal titration calorimetry and fluorescence spectroscopy, which indicated two types of binding sites. Stronger and more specific sites exhibited affinity constants, K1, of 10^4–10^6 M^−1 and stoichiometries, n1, of 2–13 and dominated at low tannin concentrations. Weaker and less-specific binding sites had K2 constants of 10^3–10^5 M^−1 and stoichiometries, n2, of 16–30 and dominated at higher tannin concentrations. Binding to stronger sites appeared to be dependent on tannin flexibility and the presence of free galloyl groups. Positive entropies for all but gemin A indicated that hydrophobic interactions dominated during complexation. This was supported by an exponential relationship between the affinity, K1, and the modeled hydrophobic accessible surface area and by a linear relationship between K1 and the Stern–Volmer quenching constant, KSV.
Abstract:
Transient responses of electrorheological fluids to square-wave electric fields in steady shear are investigated by a computational simulation method. The structure responses of the fluids to a field with high frequency are found to be very similar to those to a field with very low frequency or a suddenly applied direct-current field. The stress rise processes are also similar in both cases and can be described by an exponential expression. The characteristic time τ of the stress response is found to decrease with increasing shear rate γ̇ and particle area fraction φ2. The relation between them can be roughly expressed as τ ∝ γ̇^(−3/4) φ2^(−3/2). The simulation results are compared with experimental measurements. The aggregation kinetics of the particles in steady shear is also discussed in light of these results.
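The reported scaling τ ∝ γ̇^(−3/4) φ2^(−3/2) can be checked mechanically: generate synthetic (γ̇, φ2, τ) triples from the relation (with a hypothetical prefactor) and recover the two exponents by least squares in log space. All numbers here are illustrative, not simulation data from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic samples obeying tau = C * gammadot^(-3/4) * phi2^(-3/2);
# the prefactor C = 3.0 is a hypothetical choice.
gammadot = rng.uniform(0.1, 10.0, size=100)
phi2 = rng.uniform(0.05, 0.4, size=100)
tau = 3.0 * gammadot ** -0.75 * phi2 ** -1.5

# Multiple linear regression in log space: log(tau) vs log(gammadot), log(phi2).
X = np.column_stack([np.ones(100), np.log(gammadot), np.log(phi2)])
coef, *_ = np.linalg.lstsq(X, np.log(tau), rcond=None)
# coef[1] and coef[2] recover the exponents -3/4 and -3/2.
```

With noisy simulation output the same regression would give the "roughly expressed" exponents the abstract reports, with uncertainty estimates from the residuals.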
Abstract:
New in-situ aircraft measurements of Saharan dust originating from Mali, Mauritania and Algeria taken during the Fennec 2011 aircraft campaign over a remote part of the Sahara Desert are presented. Size distributions extending to 300 μm are shown, representing measurements extending further into the coarse mode than previously published for airborne Saharan dust. A significant coarse mode was present in the size distribution measurements with effective diameter (deff) from 2.3 to 19.4 μm and coarse mode volume median diameter (dvc) from 5.8 to 45.3 μm. The mean size distribution had a larger relative proportion of coarse mode particles than previous aircraft measurements. The largest particles (with deff >12 μm, or dvc >25 μm) were only encountered within 1 km of the ground. Number concentration, mass loading and extinction coefficient showed inverse relationships to dust age since uplift. Dust particle size showed a weak exponential relationship to dust age. Two cases of freshly uplifted dust showed quite different characteristics of size distribution and number concentration. Single Scattering Albedo (SSA) values at 550 nm calculated from the measured size distributions revealed high absorption, ranging from 0.70 to 0.97 depending on the refractive index. SSA was found to be strongly related to deff. New instrumentation revealed that direct measurements, behind Rosemount inlets, overestimate SSA by up to 0.11 when deff is greater than 2 μm. This is caused by aircraft inlet inefficiencies and sampling losses. Previous measurements of SSA from aircraft may also have been overestimates for this reason. Radiative transfer calculations indicate that the range of SSAs during Fennec 2011 can lead to shortwave atmospheric heating rates being underestimated by a factor of 2.0 to 3.0 if the coarse mode is neglected. This will have an impact on Saharan atmospheric dynamics and circulation, which should be taken into account by numerical weather prediction and climate models.
Abstract:
Large waves pose risks to ships, offshore structures, coastal infrastructure and ecosystems. This paper analyses 10 years of in-situ measurements of significant wave height (Hs) and maximum wave height (Hmax) from the ocean weather ship Polarfront in the Norwegian Sea. During the period 2000 to 2009, surface elevation was recorded every 0.59 s during sampling periods of 30 min. The Hmax observations scale linearly with Hs on average. A widely-used empirical Weibull distribution is found to estimate average values of Hmax/Hs and Hmax better than a Rayleigh distribution, but tends to underestimate both for all but the smallest waves. In this paper we propose a modified Rayleigh distribution which compensates for the heterogeneity of the observed dataset: the distribution is fitted to the whole dataset and improves the estimate of the largest waves. Over the 10-year period, the Weibull distribution approximates the observed Hs and Hmax well, and an exponential function can be used to predict the probability distribution function of the ratio Hmax/Hs. However, the Weibull distribution tends to underestimate the occurrence of extremely large values of Hs and Hmax. The persistence of Hs and Hmax in winter is also examined. Wave fields with Hs>12 m and Hmax>16 m do not last longer than 3 h. Low-to-moderate wave heights that persist for more than 12 h dominate the relationship of the wave field with the winter NAO index over 2000–2009. In contrast, the inter-annual variability of wave fields with Hs>5.5 m or Hmax>8.5 m and wave fields persisting over ~2.5 days is not associated with the winter NAO index.
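For context, the Rayleigh benchmark that the paper compares against has a closed-form expectation: for N independent narrow-band waves, the expected ratio Hmax/Hs grows like sqrt(ln N / 2), with a small Euler–Mascheroni correction. This is a sketch of that standard textbook result, not the paper's modified Rayleigh distribution; the wave count is an assumed example.

```python
import math

def rayleigh_expected_ratio(n_waves: int) -> float:
    """Leading-order expected Hmax/Hs for n independent Rayleigh-distributed
    wave heights: sqrt(ln N / 2) plus the Euler-Mascheroni correction term."""
    ln_n = math.log(n_waves)
    return math.sqrt(ln_n / 2.0) + 0.5772 / (2.0 * math.sqrt(2.0 * ln_n))

# A 30-min sampling period with a ~10 s mean wave period contains ~180 waves.
ratio = rayleigh_expected_ratio(180)
```

For records of this length the Rayleigh prediction sits around 1.6 to 1.8, which is the kind of average value the observed Hmax/Hs is tested against.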
Abstract:
To understand the evolution of well-organized social behaviour, we must first understand the mechanism by which collective behaviour is established. In this study, the mechanisms of collective behaviour in a colony of social insects were studied in terms of the transition probability between active and inactive states, which is linked to mutual interactions. The active and inactive states of the social insects were statistically extracted from the velocity profiles. From the duration distributions of the two states, we found that 1) the durations of active and inactive states follow an exponential law, and 2) pair interactions increase the transition probability from inactive to active states. The regulation of the transition probability by paired interactions suggests that such interactions control the populations of active and inactive workers in the colony.
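The exponential duration law can be illustrated with synthetic bout lengths: the maximum-likelihood rate of an exponential distribution is the reciprocal of the sample mean, and a log-linear survival function is the signature of the law. The mean bout length below is an arbitrary assumption, not a value from the colony data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic durations of "active" bouts, drawn from an exponential law
# with an assumed mean bout length of 4 time units.
durations = rng.exponential(scale=4.0, size=5000)

# Maximum-likelihood rate for an exponential distribution is 1/mean.
rate = 1.0 / durations.mean()

# Check that the empirical survival function P(T > t) is log-linear,
# the signature of an exponential law: its log-slope should be ~ -rate.
t_grid = np.linspace(0.0, 15.0, 30)
survival = np.array([(durations > t).mean() for t in t_grid])
slope = np.polyfit(t_grid, np.log(survival), 1)[0]
```

Applied to real velocity-derived state durations, the same log-linear check distinguishes an exponential law from heavier-tailed alternatives.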