951 results for SCALING
Abstract:
The diversification of life involved enormous increases in size and complexity. The evolutionary transitions from prokaryotes to unicellular eukaryotes to metazoans were accompanied by major innovations in metabolic design. Here we show that the scalings of metabolic rate, population growth rate, and production efficiency with body size have changed across the evolutionary transitions. Metabolic rate scales with body mass superlinearly in prokaryotes, linearly in protists, and sublinearly in metazoans, so Kleiber's 3/4-power scaling law does not apply universally across organisms. The scaling of maximum population growth rate shifts from positive in prokaryotes to negative in protists and metazoans, and the efficiency of production declines across these groups. Major changes in metabolic processes during the early evolution of life overcame existing constraints, exploited new opportunities, and imposed new constraints. The 3.5-billion-year history of life on earth was characterized by …
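The superlinear, linear, and sublinear exponents this abstract describes are slopes of a log-log fit of metabolic rate against body mass. A minimal sketch of such a fit on synthetic data (function names and the example values are illustrative, not taken from the paper):

```python
import numpy as np

def fit_scaling_exponent(mass, rate):
    """Fit B = B0 * M**b by least squares in log-log space and
    return (b, B0). The exponent b distinguishes superlinear
    (b > 1), linear (b = 1), and sublinear (b < 1) scaling."""
    b, log_b0 = np.polyfit(np.log(mass), np.log(rate), 1)
    return b, np.exp(log_b0)

# Synthetic example: Kleiber-like sublinear scaling with b = 3/4
mass = np.logspace(0, 6, 50)     # body mass, arbitrary units
rate = 2.0 * mass ** 0.75        # metabolic rate, noise-free
b, b0 = fit_scaling_exponent(mass, rate)
print(round(float(b), 2))        # 0.75
```

With noise-free data the regression recovers the generating exponent exactly; real cross-species data would scatter around the fitted line.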
Abstract:
Rensch's rule, which states that the magnitude of sexual size dimorphism tends to increase with increasing body size, has evolved independently in three lineages of large herbivorous mammals: bovids (antelopes), cervids (deer), and macropodids (kangaroos). This pattern can be explained by a model that combines allometry, life-history theory, and energetics. The key features are that female group size increases with increasing body size and that males have evolved under sexual selection to grow large enough to control these groups of females. The model predicts relationships among body size and female group size, male and female age at first breeding, death and growth rates, and energy allocation of males to produce body mass and weapons. Model predictions are well supported by data for these megaherbivores. The model suggests hypotheses for why some other sexually dimorphic taxa, such as primates and pinnipeds (seals and sea lions), do or do not conform to Rensch's rule.
Abstract:
A model for estimating the turbulent kinetic energy dissipation rate in the oceanic boundary layer, based on insights from rapid-distortion theory, is presented and tested. This model provides a possible explanation for the very high dissipation levels found by numerous authors near the surface. It is conceived that turbulence, injected into the water by breaking waves, is subsequently amplified due to its distortion by the mean shear of the wind-induced current and straining by the Stokes drift of surface waves. The partition of the turbulent shear stress into a shear-induced part and a wave-induced part is taken into account. In this picture, dissipation enhancement results from the same mechanism responsible for Langmuir circulations. Apart from a dimensionless depth and an eddy turn-over time, the dimensionless dissipation rate depends on the wave slope and wave age, which may be encapsulated in the turbulent Langmuir number La_t. For large La_t, or any La_t but large depth, the dissipation rate tends to the usual surface layer scaling, whereas when La_t is small, it is strongly enhanced near the surface, growing asymptotically as ɛ ∝ La_t^{-2} when La_t → 0. Results from this model are compared with observations from the WAVES and SWADE data sets, assuming that this is the dominant dissipation mechanism acting in the ocean surface layer, and statistical measures of the corresponding fit indicate a substantial improvement over previous theoretical models. Comparisons are also carried out against more recent measurements, showing good order-of-magnitude agreement, even when shallow-water effects are important.
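The turbulent Langmuir number is conventionally defined as La_t = sqrt(u*/U_s), the square root of the ratio of the water-side friction velocity to the surface Stokes drift. A minimal sketch of that definition and the quoted La_t^{-2} asymptotic enhancement (an illustrative proportionality only, not the paper's full depth- and wave-age-dependent model; the input values are made up):

```python
import math

def langmuir_number(u_star, stokes_drift):
    """Turbulent Langmuir number La_t = sqrt(u*/U_s), with u* the
    water-side friction velocity and U_s the surface Stokes drift
    (both in m/s). Small La_t means wave-dominated turbulence."""
    return math.sqrt(u_star / stokes_drift)

def dissipation_enhancement(la_t):
    """Asymptotic near-surface enhancement, eps ∝ La_t**-2 as
    La_t -> 0; proportionality factor omitted for illustration."""
    return la_t ** -2

la = langmuir_number(u_star=0.01, stokes_drift=0.1)
print(round(la, 2))                            # 0.32
print(round(dissipation_enhancement(la), 1))   # 10.0
```

A smaller La_t (stronger Stokes drift relative to wind stress) thus yields a larger near-surface dissipation enhancement, consistent with the Langmuir-circulation mechanism the abstract invokes.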
Abstract:
Top Down Induction of Decision Trees (TDIDT) is the most commonly used method of constructing a model from a dataset in the form of classification rules to classify previously unseen data. Alternative algorithms have been developed, such as the Prism algorithm. Prism constructs modular rules that are qualitatively better than the rules induced by TDIDT. However, with the increasing size of databases, many existing rule learning algorithms have proved to be computationally expensive on large datasets. To tackle the problem of scalability, parallel classification rule induction algorithms have been introduced. As TDIDT is the most popular classifier, even though there are strongly competitive alternative algorithms, most parallel approaches to inducing classification rules are based on TDIDT. In this paper we describe work on a distributed classifier that induces classification rules in a parallel manner based on Prism.
Abstract:
The fast increase in the size and number of databases demands data mining approaches that are scalable to large amounts of data. This has led to the exploration of parallel computing technologies in order to perform data mining tasks concurrently using several processors. Parallelization seems to be a natural and cost-effective way to scale up data mining technologies. One of the most important of these data mining technologies is the classification of newly recorded data. This paper surveys advances in parallelization in the field of classification rule induction.
Abstract:
Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data in order to determine if a new patient is likely to respond positively to a particular treatment or not; marketing analysts can use extracted patterns from customer data for future advertisement campaigns; finance experts have an interest in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
Abstract:
We investigate the scaling between precipitation and temperature changes in warm and cold climates using six models that have simulated the response to both increased CO2 and Last Glacial Maximum (LGM) boundary conditions. Globally, precipitation increases in warm climates and decreases in cold climates by between 1.5%/°C and 3%/°C. Precipitation sensitivity to temperature changes is lower over the land than over the ocean and lower over the tropical land than over the extratropical land, reflecting the constraint of water availability. The wet tropics get wetter in warm climates and drier in cold climates, but the changes in dry areas differ among models. Seasonal changes of tropical precipitation in a warmer world also reflect this “rich get richer” syndrome. Precipitation seasonality is decreased in the cold-climate state. The simulated changes in precipitation per degree temperature change are comparable to the observed changes in both the historical period and the LGM.
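The quoted sensitivities of 1.5 %/°C to 3 %/°C imply a simple linear estimate of fractional precipitation change per degree of warming or cooling. A toy sketch (the default sensitivity is an assumed mid-range value, not attributed to any particular model):

```python
def precipitation_change(delta_t, sensitivity=2.0):
    """Fractional precipitation change (in percent) for a global
    temperature change delta_t (°C), given a sensitivity in %/°C.
    The abstract's models span roughly 1.5 to 3 %/°C; 2.0 is an
    assumed mid-range default. Illustrative linear scaling only."""
    return sensitivity * delta_t

print(precipitation_change(3.0))    # 6.0 (% increase for +3 °C)
print(precipitation_change(-5.0))   # -10.0 (% decrease, LGM-like cooling)
```

The sign convention matches the abstract: precipitation increases in warm climates (positive delta_t) and decreases in cold ones.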
Abstract:
The leaf carbon isotope ratio (δ13C) of C3 plants is inversely related to the drawdown of CO2 concentration during photosynthesis, which increases towards drier environments. We aimed to discriminate between the hypothesis of universal scaling, which predicts between-species responses of δ13C to aridity similar to within-species responses, and biotic homoeostasis, which predicts offsets in the δ13C of species occupying adjacent ranges. The Northeast China Transect spans 130–900 mm annual precipitation within a narrow latitude and temperature range. Leaves of 171 species were sampled at 33 sites along the transect (18 at ≥ 5 sites) for dry matter, carbon (C) and nitrogen (N) content, specific leaf area (SLA) and δ13C. The δ13C of species generally followed a common relationship with the climatic moisture index (MI). Offsets between adjacent species were not observed. Trees and forbs diverged slightly at high MI. In C3 plants, δ13C predicted N per unit leaf area (Narea) better than MI. The δ13C of C4 plants was invariant with MI. SLA declined and Narea increased towards low MI in both C3 and C4 plants. The data are consistent with optimal stomatal regulation with respect to atmospheric dryness. They provide evidence for universal scaling of CO2 drawdown with aridity in C3 plants.
Abstract:
Advances in hardware technologies allow us to capture and process data in real-time, and the resulting high-throughput data streams require novel data mining approaches. The research area of Data Stream Mining (DSM) is developing data mining algorithms that allow us to analyse these continuous streams of data in real-time. The creation and real-time adaptation of classification models from data streams is one of the most challenging DSM tasks. Current classifiers for streaming data address this problem by using incremental learning algorithms. However, even though these algorithms are fast, they are challenged by high-velocity data streams, where data instances arrive at a fast rate. This is problematic if the application requires little or no delay between changes in the patterns of the stream and absorption of these patterns by the classifier. Problems of scalability to Big Data of traditional data mining algorithms for static (non-streaming) datasets have been addressed through the development of parallel classifiers. However, there is very little work on the parallelisation of data stream classification techniques. In this paper we investigate K-Nearest Neighbours (KNN) as the basis for a real-time adaptive and parallel methodology for scalable data stream classification tasks.
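As a rough illustration of the kind of adaptive KNN classifier such a methodology builds on, the sketch below keeps a fixed-size sliding window of labelled instances so the model tracks the stream as it evolves. This is an assumed minimal design for exposition, not the paper's parallel methodology; in a parallel setting the window (or the distance computations) could be partitioned across workers:

```python
from collections import Counter, deque
import math

class SlidingWindowKNN:
    """Minimal adaptive KNN for a data stream: classify each point
    by majority vote among the k nearest instances in a bounded
    sliding window, so old patterns age out automatically."""

    def __init__(self, k=3, window_size=100):
        self.k = k
        self.window = deque(maxlen=window_size)

    def learn(self, x, label):
        # Absorb a new labelled instance; the deque evicts the
        # oldest instance once the window is full.
        self.window.append((x, label))

    def predict(self, x):
        nearest = sorted(self.window, key=lambda p: math.dist(x, p[0]))
        votes = Counter(label for _, label in nearest[:self.k])
        return votes.most_common(1)[0][0]

knn = SlidingWindowKNN(k=3, window_size=50)
for x, y in [((0, 0), 'a'), ((0, 1), 'a'), ((5, 5), 'b'), ((5, 6), 'b')]:
    knn.learn(x, y)
print(knn.predict((0.2, 0.3)))   # 'a'
```

The bounded window is what makes the classifier adaptive; the cost per prediction is linear in the window size, which is exactly the bottleneck that motivates parallelisation for high-velocity streams.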
Abstract:
The turbulent structure of a stratocumulus-topped marine boundary layer over a 2-day period is observed with a Doppler lidar at Mace Head in Ireland. Using profiles of vertical velocity statistics, the bulk of the mixing is identified as cloud driven. This is supported by the pertinent feature of negative vertical velocity skewness in the sub-cloud layer which extends, on occasion, almost to the surface. Both coupled and decoupled turbulence characteristics are observed. The length and timescales related to the cloud-driven mixing are investigated and shown to provide additional information about the structure and the source of the mixing inside the boundary layer. They are also shown to place constraints on the length of the sampling periods used to derive products, such as the turbulent dissipation rate, from lidar measurements. For this, the maximum wavelengths that belong to the inertial subrange are studied through spectral analysis of the vertical velocity. The maximum wavelength of the inertial subrange in the cloud-driven layer scales relatively well with the corresponding layer depth during pronounced decoupled structure identified from the vertical velocity skewness. However, on many occasions, combining the analysis of the inertial subrange and vertical velocity statistics suggests higher decoupling height than expected from the skewness profiles. Our results show that investigation of the length scales related to the inertial subrange significantly complements the analysis of the vertical velocity statistics and enables a more confident interpretation of complex boundary layer structures using measurements from a Doppler lidar.
Abstract:
In the present investigation, a scanning electron microscopy analysis was performed to evaluate the effects of the topical application of ethylenediaminetetraacetic acid (EDTA) gel associated with Cetavlon (EDTAC) in removing the smear layer and exposing collagen fibers following root surface instrumentation. Twenty-eight teeth from adult humans, single rooted and scheduled for extraction due to periodontal reasons, were selected. Each tooth was submitted to manual (scaling and root planing) instrumentation alone or combined with ultrasonic instruments, with or without etching using a 24% EDTAC gel. Following extraction, specimens were processed and examined under a scanning electron microscope. A comparative morphological semi-quantitative analysis was performed; the intensity of the smear layer and the decalcification of cementum and dentinal surfaces were graded in 12 sets using an arbitrary scale ranging from 1 (area covered by a smear layer) to 4 (no smear layer). Root debridement with hand instruments alone or combined with ultrasonic instruments resulted in a similar smear layer covering the root surfaces. The smear layer was successfully removed from the surfaces treated with EDTAC, which exhibited numerous exposed dentinal tubules and collagen fibers. This study supports the hypothesis that manual instrumentation alone or instrumentation combined with ultrasonic instrumentation is unable to remove the smear layer, whereas the subsequent topical application of EDTAC gel effectively removes the smear layer, uncovers dentinal openings and exposes collagen fibers.
Abstract:
The critical behavior of the stochastic susceptible-infected-recovered model on a square lattice is obtained by numerical simulations and finite-size scaling. The order parameter as well as the distribution in the number of recovered individuals is determined as a function of the infection rate for several values of the system size. The analysis around criticality is obtained by exploring the close relationship between the present model and standard percolation theory. The quantity UP, equal to the ratio U between the second moment and the squared first moment of the size distribution, multiplied by the order parameter P, is shown to have, for a square system, a universal value 1.0167(1) that is the same for site and bond percolation, confirming further that the SIR model is also in the percolation class.
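The ratio U between the second moment and the squared first moment of the size distribution can be estimated directly from samples. A minimal sketch on synthetic Gaussian sizes (illustrative data only, not critical-point SIR clusters, so the value computed here is not the universal 1.0167):

```python
import random

def moment_ratio(sizes):
    """U = <s**2> / <s>**2 for a sample of sizes s; equals 1 for a
    delta distribution and grows with the relative spread. In the
    abstract, U times the order parameter P takes the universal
    value 1.0167(1) for a square system at criticality."""
    n = len(sizes)
    m1 = sum(sizes) / n
    m2 = sum(s * s for s in sizes) / n
    return m2 / m1 ** 2

# Narrow synthetic distribution: U = 1 + var/mean**2, close to 1
random.seed(0)
sizes = [random.gauss(100, 5) for _ in range(10_000)]
print(round(moment_ratio(sizes), 3))
```

For a Gaussian with mean 100 and standard deviation 5 the expected ratio is 1 + 25/10000 ≈ 1.0025; the broad cluster-size distributions at a percolation critical point give larger, scale-invariant values.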