39 results for Multidimensional scaling
Abstract:
The measures most frequently used to assess psychotic symptoms fail to reflect important dimensions. The Psychotic Symptom Rating Scale (PSYRATS) aims to capture the multidimensional nature of auditory hallucinations and delusions. Individuals (N = 276) who had recently relapsed with positive symptoms completed the auditory hallucinations and delusions PSYRATS scales. These scores were compared with the relevant items from the SAPS and PANSS, and with measures of current mood. Total scores and distribution of items of the PSYRATS scales are presented and correlated with other measures. Positive symptom items from the SAPS and PANSS reflected the more objective aspects of PSYRATS ratings of auditory hallucinations and delusions (frequency and conviction) but were relatively poor at measuring distress. A major strength of the PSYRATS scales is their specific measurement of the distress dimension of symptoms, which is a key target of psychological intervention. It is advised that the PSYRATS not be used as a total score alone; further research is needed to clarify the best use of its potential subscales. Copyright (c) 2007 John Wiley & Sons, Ltd.
Abstract:
Utilising the expressive power of S-Expressions in Learning Classifier Systems often prohibitively increases the search space, owing to the increased flexibility of the encoding. This work shows that selecting appropriate S-Expression functions through domain knowledge improves scaling across problems, as expected. It is also known that simple alphabets perform well on relatively small problems in a domain, e.g. the ternary alphabet in the 6-, 11- and 20-bit MUX domain. Once fit ternary rules had been formed, it was investigated whether higher-order learning was possible and whether this staged learning facilitated the selection of appropriate functions in complex alphabets, e.g. the selection of S-Expression functions. This novel methodology is shown to produce compact results (135-MUX) and exhibits potential for scaling well (1034-MUX), but is only a small step towards introducing abstraction to LCS.
Abstract:
The scaling of metabolic rates to body size is widely considered to be of great biological and ecological importance, and much attention has been devoted to determining its theoretical and empirical value. Most debate centers on whether the underlying power law describing metabolic rates is 2/3 (as predicted by scaling of surface area/volume relationships) or 3/4 ("Kleiber's law"). Although recent evidence suggests that empirically derived exponents vary among clades with radically different metabolic strategies, such as ectotherms and endotherms, models, such as the metabolic theory of ecology, depend on the assumption that there is at least a predominant, if not universal, metabolic scaling exponent. Most analyses claimed to support the predictions of general models, however, failed to control for phylogeny. We used phylogenetic generalized least-squares models to estimate allometric slopes for both basal metabolic rate (BMR) and field metabolic rate (FMR) in mammals. Metabolic rate scaling conformed to no single theoretical prediction, but varied significantly among phylogenetic lineages. In some lineages we found a 3/4 exponent, in others a 2/3 exponent, and in yet others exponents differed significantly from both theoretical values. Analysis of the phylogenetic signal in the data indicated that the assumptions of neither species-level analysis nor independent contrasts were met. Analyses that assumed no phylogenetic signal in the data (species-level analysis) or a strong phylogenetic signal (independent contrasts), therefore, returned estimates of allometric slopes that were erroneous in 30% and 50% of cases, respectively. Hence, quantitative estimation of the phylogenetic signal is essential for determining scaling exponents. The lack of evidence for a predominant scaling exponent in these analyses suggests that general models of metabolic scaling, and macro-ecological theories that depend on them, have little explanatory power.
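As a concrete illustration of what estimating an allometric slope means computationally, here is a minimal sketch that recovers a scaling exponent by ordinary least squares on log-transformed synthetic data. All constants are hypothetical; the paper's phylogenetic generalized least-squares analysis additionally weights the fit by a phylogenetic covariance matrix, which this toy example omits.

```python
import math
import random

random.seed(0)

# Synthetic data: B = c * M^b with a true exponent b = 0.75 and multiplicative
# log-normal noise. All constants here are hypothetical illustration values.
masses = [random.uniform(0.01, 1000.0) for _ in range(200)]   # body mass
rates = [3.4 * m ** 0.75 * math.exp(random.gauss(0.0, 0.1)) for m in masses]

# The allometric exponent is the OLS slope of log(rate) against log(mass).
xs = [math.log(m) for m in masses]
ys = [math.log(r) for r in rates]
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
print(f"estimated allometric exponent: {slope:.3f}")  # should land near 0.75
```

The abstract's point is that on real mammalian data, the recovered slope depends on which lineage is fitted and on how phylogenetic signal is modelled, not on a single universal value.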
Abstract:
It has been known for decades that the metabolic rate of animals scales with body mass with an exponent that is almost always <1, >2/3, and often very close to 3/4. The 3/4 exponent emerges naturally from two models of resource distribution networks, radial explosion and hierarchically branched, which incorporate a minimum of specific details. Both models show that the exponent is 2/3 if velocity of flow remains constant, but can attain a maximum value of 3/4 if velocity scales with its maximum exponent, 1/12. Quarter-power scaling can arise even when there is no underlying fractality. The canonical “fourth dimension” in biological scaling relations can result from matching the velocity of flow through the network to the linear dimension of the terminal “service volume” where resources are consumed. These models have broad applicability for the optimal design of biological and engineered systems where energy, materials, or information are distributed from a single source.
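The exponent arithmetic in this abstract can be made explicit. A schematic reading, assuming (as the abstract frames it) that supply rate scales as exchange area times flow velocity, with area scaling as M^{2/3}:

```latex
% Supply rate B ~ (exchange area) x (flow velocity);
% area \propto M^{2/3}, velocity \propto M^{\alpha} with 0 \le \alpha \le 1/12.
B \propto M^{2/3} \cdot M^{\alpha} = M^{2/3+\alpha},
\qquad
b =
\begin{cases}
  2/3, & \alpha = 0 \quad \text{(constant velocity)},\\
  2/3 + 1/12 = 3/4, & \alpha = 1/12 \quad \text{(maximal velocity scaling)}.
\end{cases}
```

This is why the models span the whole 2/3 to 3/4 range, with 3/4 attained only at the maximum velocity exponent.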
Abstract:
Over many millions of years of independent evolution, placental, marsupial and monotreme mammals have diverged conspicuously in physiology, life history and reproductive ecology. The differences in life histories are particularly striking. Compared with placentals, marsupials exhibit shorter pregnancy, smaller size of offspring at birth and longer period of lactation in the pouch. Monotremes also exhibit short pregnancy, but incubate embryos in eggs, followed by a long period of post-hatching lactation. Using a large sample of mammalian species, we show that, remarkably, despite their very different life histories, the scaling of production rates is statistically indistinguishable across mammalian lineages. Apparently all mammals are subject to the same fundamental metabolic constraints on productivity, because they share similar body designs, vascular systems and costs of producing new tissue.
Abstract:
The diversification of life involved enormous increases in size and complexity. The evolutionary transitions from prokaryotes to unicellular eukaryotes to metazoans were accompanied by major innovations in metabolic design. Here we show that the scalings of metabolic rate, population growth rate, and production efficiency with body size have changed across the evolutionary transitions. Metabolic rate scales with body mass superlinearly in prokaryotes, linearly in protists, and sublinearly in metazoans, so Kleiber's 3/4 power scaling law does not apply universally across organisms. The scaling of maximum population growth rate shifts from positive in prokaryotes to negative in protists and metazoans, and the efficiency of production declines across these groups. Major changes in metabolic processes during the early evolution of life overcame existing constraints, exploited new opportunities, and imposed new constraints. The 3.5 billion year history of life on earth was characterized by
Abstract:
Rensch’s rule, which states that the magnitude of sexual size dimorphism tends to increase with increasing body size, has evolved independently in three lineages of large herbivorous mammals: bovids (antelopes), cervids (deer), and macropodids (kangaroos). This pattern can be explained by a model that combines allometry, life-history theory, and energetics. The key features are that female group size increases with increasing body size and that males have evolved under sexual selection to grow large enough to control these groups of females. The model predicts relationships among body size and female group size, male and female age at first breeding, death and growth rates, and energy allocation of males to produce body mass and weapons. Model predictions are well supported by data for these megaherbivores. The model suggests hypotheses for why some other sexually dimorphic taxa, such as primates and pinnipeds (seals and sea lions), do or do not conform to Rensch’s rule.
Abstract:
A model for estimating the turbulent kinetic energy dissipation rate in the oceanic boundary layer, based on insights from rapid-distortion theory, is presented and tested. This model provides a possible explanation for the very high dissipation levels found by numerous authors near the surface. It is conceived that turbulence, injected into the water by breaking waves, is subsequently amplified due to its distortion by the mean shear of the wind-induced current and straining by the Stokes drift of surface waves. The partition of the turbulent shear stress into a shear-induced part and a wave-induced part is taken into account. In this picture, dissipation enhancement results from the same mechanism responsible for Langmuir circulations. Apart from a dimensionless depth and an eddy turn-over time, the dimensionless dissipation rate depends on the wave slope and wave age, which may be encapsulated in the turbulent Langmuir number La_t. For large La_t, or any La_t but large depth, the dissipation rate tends to the usual surface layer scaling, whereas when La_t is small, it is strongly enhanced near the surface, growing asymptotically as ε ∝ La_t^{-2} when La_t → 0. Results from this model are compared with observations from the WAVES and SWADE data sets, assuming that this is the dominant dissipation mechanism acting in the ocean surface layer, and statistical measures of the corresponding fit indicate a substantial improvement over previous theoretical models. Comparisons are also carried out against more recent measurements, showing good order-of-magnitude agreement, even when shallow-water effects are important.
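For readers unfamiliar with the notation: the turbulent Langmuir number is conventionally defined from the friction velocity u_* and the surface Stokes drift u_s (this is the standard definition in the Langmuir-turbulence literature, not spelled out in the abstract itself), and the two limits quoted above can be written as:

```latex
\mathrm{La}_t = \sqrt{u_*/u_s},
\qquad
\varepsilon \sim \frac{u_*^3}{\kappa |z|}
\quad (\mathrm{La}_t \ \text{large: usual surface-layer, law-of-the-wall scaling}),
\qquad
\varepsilon \propto \mathrm{La}_t^{-2}
\quad (\mathrm{La}_t \to 0: \ \text{near-surface enhancement}).
```

Small La_t corresponds to Stokes drift dominating the friction velocity, i.e. strong wave forcing, which is when the model predicts enhanced near-surface dissipation.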
Abstract:
Top Down Induction of Decision Trees (TDIDT) is the most commonly used method of constructing a model from a dataset in the form of classification rules to classify previously unseen data. Alternative algorithms have been developed, such as the Prism algorithm. Prism constructs modular rules which are qualitatively better than those induced by TDIDT. However, with the increasing size of databases, many existing rule learning algorithms have proved to be computationally expensive on large datasets. To tackle the problem of scalability, parallel classification rule induction algorithms have been introduced. As TDIDT is the most popular classifier, even though there are strongly competitive alternative algorithms, most parallel approaches to inducing classification rules are based on TDIDT. In this paper we describe work on a distributed classifier that induces classification rules in a parallel manner based on Prism.
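The core of Prism, on which such a distributed classifier builds, is simple enough to sketch: for a target class, grow a conjunctive rule one attribute-value test at a time, always adding the test with the highest precision, until the rule covers only that class; then remove the covered examples and repeat. Below is a minimal single-class sketch in Python with a hypothetical toy dataset; the paper's actual contribution, the parallel/distributed execution, is not shown.

```python
# Minimal sketch of Prism-style modular rule induction (after Cendrowska's
# Prism). Dataset and attribute names are hypothetical.
def learn_rules_for_class(instances, target):
    """instances: list of (attribute_dict, class_label) pairs."""
    rules = []
    remaining = list(instances)
    while any(label == target for _, label in remaining):
        rule = {}                                  # conjunction of attr == value tests
        covered = list(remaining)
        attrs = set().union(*(inst.keys() for inst, _ in covered))
        while True:
            n_pos = sum(1 for _, label in covered if label == target)
            free_attrs = attrs - set(rule)
            if n_pos == len(covered) or not free_attrs:
                break                              # rule is pure, or no tests left to add
            best = None                            # (precision, attr, value)
            for attr in free_attrs:
                for value in {inst[attr] for inst, _ in covered}:
                    subset = [x for x in covered if x[0][attr] == value]
                    precision = sum(1 for _, lbl in subset if lbl == target) / len(subset)
                    if best is None or precision > best[0]:
                        best = (precision, attr, value)
            _, attr, value = best
            rule[attr] = value
            covered = [x for x in covered if x[0][attr] == value]
        rules.append(rule)
        # Remove the target-class examples this rule covers, then induce the next rule.
        remaining = [(inst, label) for inst, label in remaining
                     if not (label == target
                             and all(inst.get(a) == v for a, v in rule.items()))]
    return rules

# Hypothetical toy dataset.
data = [({"outlook": "sunny",    "windy": "no"},  "play"),
        ({"outlook": "sunny",    "windy": "yes"}, "stay"),
        ({"outlook": "rain",     "windy": "no"},  "stay"),
        ({"outlook": "overcast", "windy": "yes"}, "play")]
print(learn_rules_for_class(data, "play"))
```

Each rule is induced independently of the others, which is what makes Prism "modular" and, as the paper exploits, amenable to parallelisation: different workers can induce rules from different partitions of the data.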
Abstract:
The fast increase in the size and number of databases demands data mining approaches that are scalable to large amounts of data. This has led to the exploration of parallel computing technologies in order to perform data mining tasks concurrently using several processors. Parallelization seems to be a natural and cost-effective way to scale up data mining technologies. One of the most important of these data mining technologies is the classification of newly recorded data. This paper surveys advances in parallelization in the field of classification rule induction.
Abstract:
Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data in order to determine if a new patient is likely to respond positively to a particular treatment or not; marketing analysts can use extracted patterns from customer data for future advertisement campaigns; finance experts have an interest in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
Abstract:
We describe ncWMS, an implementation of the Open Geospatial Consortium’s Web Map Service (WMS) specification for multidimensional gridded environmental data. ncWMS can read data in a large number of common scientific data formats – notably the NetCDF format with the Climate and Forecast conventions – then efficiently generate map imagery in thousands of different coordinate reference systems. It is designed to require minimal configuration from the system administrator and, when used in conjunction with a suitable client tool, provides end users with an interactive means for visualizing data without the need to download large files or interpret complex metadata. It is also used as a “bridging” tool providing interoperability between the environmental science community and users of geographic information systems. ncWMS implements a number of extensions to the WMS standard in order to fulfil some common scientific requirements, including the ability to generate plots representing timeseries and vertical sections. We discuss these extensions and their impact upon present and future interoperability. We discuss the conceptual mapping between the WMS data model and the data models used by gridded data formats, highlighting areas in which the mapping is incomplete or ambiguous. We discuss the architecture of the system and particular technical innovations of note, including the algorithms used for fast data reading and image generation. ncWMS has been widely adopted within the environmental data community and we discuss some of the ways in which the software is integrated within data infrastructures and portals.
Abstract:
We consider second kind integral equations of the form x(s) - ∫_Ω k(s,t) x(t) dt = y(s), s ∈ Ω (abbreviated x - Kx = y), in which Ω is some unbounded subset of R^n. Let X_p denote the weighted space of functions x continuous on Ω and satisfying x(s) = O(|s|^{-p}), s → ∞. We show that if the kernel k(s,t) decays like |s - t|^{-q} as |s - t| → ∞ for some sufficiently large q (and some other mild conditions on k are satisfied), then K ∈ B(X_p) (the set of bounded linear operators on X_p) for 0 ≤ p ≤ q. If also (I - K)^{-1} ∈ B(X_0), then (I - K)^{-1} ∈ B(X_p) for 0 < p < q, and (I - K)^{-1} ∈ B(X_q) if further conditions on k hold. Thus, if k(s,t) = O(|s - t|^{-q}), |s - t| → ∞, and y(s) = O(|s|^{-p}), s → ∞, the asymptotic behaviour of the solution x may be estimated as x(s) = O(|s|^{-r}), |s| → ∞, r := min(p, q). The case when k(s,t) = κ(s - t), so that the equation is of Wiener-Hopf type, receives especial attention. Conditions, in terms of the symbol of I - K, for I - K to be invertible or Fredholm on X_p are established for certain cases (Ω a half-space or cone). A boundary integral equation, which models three-dimensional acoustic propagation above flat ground, absorbing apart from an infinite rigid strip, illustrates the practical application and sharpness of the above results. This integral equation models, in particular, road traffic noise propagation along an infinite road surface surrounded by absorbing ground. We prove that the sound propagating along the rigid road surface eventually decays with distance at the same rate as sound propagating above the absorbing ground.